text
stringlengths 100
500k
| subset
stringclasses 4
values |
---|---|
Staff Overview
Rooms • Phone • Mail
Winter semester 2020
Prof. Dr. Till Tantau
Uni Lübeck › › Staff › Till Tantau › Research › Publications
Contact, Travelling
EATCS Council Vote
My Teaching
Courses, Lecture Notes
Impressum, Legal
Technical reports, other
Max Bannach, Zacharias Heinrich, Rüdiger Reischuk, Till Tantau:
Dynamic Kernels for Hitting Sets and Set Packing.
Algorithmica, 84:3459–3488, 2022.
Go to website | Show abstract
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k d (which is a polynomial as d is a constant), and they do so in time m · 2d poly(d) for a small polynomial poly(d) (which is linear in the hypergraph size for d fixed). We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k d kernel. This paper presents a deterministic solution with worst-case time 3d poly(d) for updating the kernel upon inserts and time 5d poly(d) for updates upon deletions. These bounds nearly match the time 2d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deter- ministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c k ) where c = d ? 1 + O(1/d) equals the best base known for the static setting.
Max Bannach, Malte Skambath, Till Tantau:
On the Parallel Parameterized Complexity of MaxSAT Variants.
In Proceedings of the 25th International Conference on Theory and Applications of Satisfiability Testing (SAT 2022), LIPIcs, 2022.
Max Bannach, Zacharias Heinrich, Till Tantau, Rüdiger Reischuk:
In Proceedings of the 16th International Symposium on Parameterized and Exact Computation (IPEC 2021), Volume 214 of LIPIcs, Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
Computing small kernels for the hitting set problem is a well-studied computational problem where we are given a hypergraph with n vertices and m hyperedges, each of size d for some small constant d, and a parameter k. The task is to compute a new hypergraph, called a kernel, whose size is polynomial with respect to the parameter k and which has a size-k hitting set if, and only if, the original hypergraph has one. State-of-the-art algorithms compute kernels of size k^d (which is a polynomial kernel size as d is a constant), and they do so in time m? 2^d poly(d) for a small polynomial poly(d) (which is a linear runtime as d is again a constant). We generalize this task to the dynamic setting where hyperedges may continuously be added or deleted and one constantly has to keep track of a size-k^d hitting set kernel in memory (including moments when no size-k hitting set exists). This paper presents a deterministic solution with worst-case time 3^d poly(d) for updating the kernel upon hyperedge inserts and time 5^d poly(d) for updates upon deletions. These bounds nearly match the time 2^d poly(d) needed by the best static algorithm per hyperedge. Let us stress that for constant d our algorithm maintains a dynamic hitting set kernel with constant, deterministic, worst-case update time that is independent of n, m, and the parameter k. As a consequence, we also get a deterministic dynamic algorithm for keeping track of size-k hitting sets in d-hypergraphs with update times O(1) and query times O(c^k) where c = d - 1 + O(1/d) equals the best base known for the static setting.
Max Bannach, Till Tantau:
On the Descriptive Complexity of Color Coding.
MDPI Algorithms, 2021. Special Issue: Parameterized Complexity and Algorithms for Nonclassical Logics
Color coding is an algorithmic technique used in parameterized complexity theory to detect "small" structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing—purely in terms of the syntactic structure of describing formulas—when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.
Computing Hitting Set Kernels By AC^0-Circuits.
Theory of Computing Systems, 64(3):374--399, 2020.
Given a hypergraph $H = (V,E)$, what is the smallest subset $X \subseteq V$ such that $e \cap X \neq \emptyset$ holds for all $e \in E$? This problem, known as the \emph{hitting set problem,} is a basic problem in parameterized complexity theory. There are well-known kernelization algorithms for it, which get a hypergraph~$H$ and a number~$k$ as input and output a hypergraph~$H'$ such that (1) $H$ has a hitting set of size~$k$ if, and only if, $H'$ has such a hitting set and (2) the size of $H'$ depends only on $k$ and on the maximum cardinality $d$ of edges in~$H$. The algorithms run in polynomial time and can be parallelized to a certain degree: one can compute hitting set kernels in parallel time $O(d)$ -- but it was conjectured that this is the best parallel algorithm possible. We refute this conjecture and show how hitting set kernels can be computed in \emph{constant} parallel time. For our proof, we introduce a new, generalized notion of hypergraph sunflowers and show how iterated applications of the color coding technique can sometimes be collapsed into a single application.
Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time.
In Proceedings of the 17th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT 2020), LIPIcs, 2020.
We analyze a reduction rule for computing kernels for the hitting set problem: In a hypergraph, the link of a set c of vertices consists of all edges that are supersets of c. We call such a set critical if its link has certain easy-to-check size properties. The rule states that the link of a critical c can be replaced by c. It is known that a simple linear-time algorithm for computing hitting set kernels (number of edges) at most k^d (k is the hitting set size, d is the maximum edge size) can be derived from this rule. We parallelize this algorithm and obtain the first AC^0 kernel algorithm that outputs polynomial-size kernels. Previously, such algorithms were not even known for artificial problems. An interesting application of our methods lies in traditional, non-parameterized approximation theory: Our results imply that uniform AC^0-circuits can compute a hitting set whose size is polynomial in the size of an optimal hitting set.
Technical report , Electronic Colloquium on Computational Complexity, 2019.
Computing kernels for the hitting set problem (the problem of finding a size-$k$ set that intersects each hyperedge of a hypergraph) is a well-studied computational problem. For hypergraphs with $m$ hyperedges, each of size at most~$d$, the best algorithms can compute kernels of size $O(k^d)$ in time $O(2^d m)$. In this paper we generalize the task to the \emph{dynamic} setting where hyperedges may be continuously added and deleted and we always have to keep track of a hitting set kernel (including moments when no size-$k$ hitting set exists). We present a deterministic solution, based on a novel data structure, that needs worst-case time $O^*(3^d)$ for updating the kernel upon hyperedge inserts and time~$O^*(5^d)$ for updates upon deletions -- thus nearly matching the time $O^*(2^d)$ needed by the best static algorithm per hyperedge. As a novel technical feature, our approach does not use the standard replace-sunflowers-by-their-cores methodology, but introduces a generalized concept that is actually easier to compute and that allows us to achieve a kernel size of $\sum_{i=0}^d k^i$ rather than the typical size $d!\cdot k^d$ resulting from the Sunflower Lemma. We also show that our approach extends to the dual problem of finding packings in hypergraphs (the problem of finding $k$ pairwise disjoint hyperedges), albeit with a slightly larger kernel size of $\sum_{i=0}^d d^i(k-1)^i$.
In Proceedings of STACS 2019, LIPIcs, LIPIcs, 2019.
Go to website | Show PDF | Show abstract
Color coding is an algorithmic technique used in parameterized complexity theory to detect ``small'' structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing -- purely in terms of the syntactic structure of describing formulas -- when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.
Towards Work-Efficient Parallel Parameterized Algorithms.
In Proceedings of the 13th International Conference and Workshops on Algorithms and Computation (WALCOM 2019), Springer, 2019.
Parallel parameterized complexity theory studies how fixed-parameter tractable (fpt) problems can be solved in parallel. Previous theoretical work focused on parallel algorithms that are very fast in principle, but did not take into account that when we only have a small number of processors (between 2 and, say, 1024), it is more important that the parallel algorithms are work-efficient. In the present paper we investigate how work-efficient fpt algorithms can be designed. We review standard methods from fpt theory, like kernelization, search trees, and interleaving, and prove trade-offs for them between work efficiency and runtime improvements. This results in a toolbox for developing work-efficient parallel fpt algorithms.
In Proceedings of the 35th International Symposium on Theoretical Aspects of Computer Science, LIPIcs, 2018.
Given a hypergraph $H = (V,E)$, what is the smallest subset $X \subseteq V$ such that $e \cap X \neq \emptyset$ holds for all $e \in E$? This problem, known as the \emph{hitting set problem,} is a basic problem in parameterized complexity theory. There are well-known kernelization algorithms for it, which get a hypergraph~$H$ and a number~$k$ as input and output a hypergraph~$H'$ such that (1) $H$ has a hitting set of size~$k$ if, and only if, $H'$ has such a hitting set and (2) the size of $H'$ depends only on $k$ and on the maximum cardinality $d$ of edges in~$H$. The algorithms run in polynomial time, but are highly sequential. Recently, it has been shown that one of them can be parallelized to a certain degree: one can compute hitting set kernels in parallel time $O(d)$ -- but it was conjectured that this is the best parallel algorithm possible. We refute this conjecture and show how hitting set kernels can be computed in \emph{constant} parallel time. For our proof, we introduce a new, generalized notion of hypergraph sunflowers and show how iterated applications of the color coding technique can sometimes be collapsed into a single application.
Computing Kernels in Parallel: Lower and Upper Bounds.
In Proceedings of the 13th International Symposium on Parameterized and Exact Computation (IPEC 2018), LIPIcs, 2018.
Parallel fixed-parameter tractability studies how parameterized problems can be solved in parallel. A surprisingly large number of parameterized problems admit a high level of parallelization, but this does not mean that we can also efficiently compute small problem kernels in parallel: known kernelization algorithms are typically highly sequential. In the present paper, we establish a number of upper and lower bounds concerning the sizes of kernels that can be computed in parallel. An intriguing finding is that there are complex trade-offs between kernel size and the depth of the circuits needed to compute them: For the vertex cover problem, an exponential kernel can be computed by AC$^0$-circuits, a quadratic kernel by TC$^0$-circuits, and a linear kernel by randomized NC-circuits with derandomization being possible only if it is also possible for the matching problem. Other natural problems for which similar (but quantitatively different) effects can be observed include tree decomposition problems parameterized by the vertex cover number, the undirected feedback vertex set problem, the matching problem, or the point line cover problem. We also present natural problems for which computing kernels is inherently sequential.
Till Tantau:
Applications of Algorithmic Metatheorems to Space Complexity and Parallelism (Invited Talk)..
In 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017), pp. 4:1--4:4. DROPS, 2017.
Algorithmic metatheorems state that if a problem can be described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources. As an example, by Courcelle's Theorem all monadic second-order ("in a certain logic") properties of graphs of bounded tree width ("structured in a certain way") can be solved in linear time ("with a certain amount of resources"). Such theorems have become a valuable tool in algorithmics: If a problem happens to have the right structure and can be described in the right logic, they immediately yield a (typically tight) upper bound on the time complexity of the problem. Perhaps even more importantly, several complex algorithms rely on algorithmic metatheorems internally to solve subproblems, which considerably broadens the range of applications of these theorems. The talk is intended as a gentle introduction to the ideas behind algorithmic metatheorems, especially behind some recent results concerning space classes and parallel computation, and tries to give a flavor of the range of
Parallel Multivariate Meta-Theorems.
Fixed-parameter tractability is based on the observation that many hard problems become tractable even on large inputs as long as certain input parameters are small. Originally, ``tractable'' just meant ``solvable in polynomial time,'' but especially modern hardware raises the question of whether we can also achieve ``solvable in polylogarithmic parallel time.'' A framework for this study of \emph{parallel fixed-parameter tractability} is available and a number of isolated algorithmic results have been obtained in recent years, but one of the unifying core tools of classical FPT theory has been missing: algorithmic meta-theorems. We establish two such theorems by giving new upper bounds on the circuit depth necessary to solve the model checking problem for monadic second-order logic, once parameterized by the tree width and the formula (this is a parallel version of Courcelle's Theorem) and once by the tree depth and the formula. For our proofs we refine the analysis of earlier algorithms, especially of Bodlaender's, but also need to add new ideas, especially in the context where the parallel runtime is bounded by a function of the parameter and does not depend on the length of the input.
Michael Elberfeld, Martin Grohe, Till Tantau:
Where First-Order and Monadic Second-Order Logic Coincide.
Transactions on Computational Logic, (Volume 17 Issue 4, November 2016 Article No. 25)2016.
Malte Skambath, Till Tantau:
Offline Drawing of Dynamic Trees: Algorithmics and Document Integration.
In GD 2016: Graph Drawing and Network Visualization, Volume 9801 of LNCS, pp. 572-586. Springer, 2016.
While the algorithmic drawing of static trees is well-understood and well-supported by software tools, creating animations depicting how a tree changes over time is currently difficult: software support, if available at all, is not integrated into a document production workflow and algorithmic approaches only rarely take temporal information into consideration. During the production of a presentation or a paper, most users will visualize how, say, a search tree evolves over time by manually drawing a sequence of trees. We present an extension of the popular Open image in new window typesetting system that allows users to specify dynamic trees inside their documents, together with a new algorithm for drawing them. Running Open image in new window on the documents then results in documents in the svg format with visually pleasing embedded animations. Our algorithm produces animations that satisfy a set of natural aesthetic criteria when possible. On the negative side, we show that one cannot always satisfy all criteria simultaneously and that minimizing their violations is NP-complete.
Offline Drawing of Dynamic Trees: Algorithmics and Document Integration Analysis of Binary Search Trees.
Technical report arXiv:1608.08385, CoRR, 2016.
While the algorithmic drawing of static trees is well-understood and well-supported by software tools, creating animations depicting how a tree changes over time is currently difficult: software support, if available at all, is not integrated into a document production workflow and algorithmic approaches only rarely take temporal information into consideration. During the production of a presentation or a paper, most users will visualize how, say, a search tree evolves over time by manually drawing a sequence of trees. We present an extension of the popular TEX typesetting system that allows users to specify dynamic trees inside their documents, together with a new algorithm for drawing them. Running TEX on the documents then results in documents in the SVG format with visually pleasing embedded animations. Our algorithm produces animations that satisfy a set of natural aesthetic criteria when possible. On the negative side, we show that one cannot always satisfy all criteria simultaneously and that minimizing their violations is NP-complete.
A Gentle Introduction to Applications of Algorithmic Metatheorems for Space and Circuit Classes.
Algorithms, 9(3):1-44, 2016.
Algorithmic metatheorems state that if a problem can be described in a certain logic and the inputs are structured in a certain way, then the problem can be solved with a certain amount of resources. As an example, by Courcelle's Theorem, all monadic second-order ("in a certain logic") properties of graphs of bounded tree width ("structured in a certain way") can be solved in linear time ("with a certain amount of resources"). Such theorems have become valuable tools in algorithmics: if a problem happens to have the right structure and can be described in the right logic, they immediately yield a (typically tight) upper bound on the time complexity of the problem. Perhaps even more importantly, several complex algorithms rely on algorithmic metatheorems internally to solve subproblems, which considerably broadens the range of applications of these theorems. This paper is intended as a gentle introduction to the ideas behind algorithmic metatheorems, especially behind some recent results concerning space and circuit classes, and tries to give a flavor of the range of their applications.
Max Bannach, Christoph Stockhusen, Till Tantau:
Fast Parallel Fixed-Parameter Algorithms via Color Coding.
Fixed-parameter algorithms have been successfully applied to solve numerous difficult problems within acceptable time bounds on large inputs. However, most fixed-parameter algorithms are inherently \emph{sequential} and, thus, make no use of the parallel hardware present in modern computers. We show that parallel fixed-parameter algorithms do not only exist for numerous parameterized problems from the literature -- including vertex cover, packing problems, cluster editing, cutting vertices, finding embeddings, or finding matchings -- but that there are parallel algorithms working in \emph{constant} time or at least in time \emph{depending only on the parameter} (and not on the size of the input) for these problems. Phrased in terms of complexity classes, we place numerous natural parameterized problems in parameterized versions of AC$^0$. On a more technical level, we show how the \emph{color coding} method can be implemented in constant time and apply it to embedding problems for graphs of bounded tree-width or tree-depth and to model checking first-order formulas in graphs of bounded degree.
Michael Elberfeld, Christoph Stockhusen, Till Tantau:
On the Space and Circuit Complexity of Parameterized Problems: Classes and Completeness.
Algorithmica, 71(3):661-701, 2015.
The parameterized complexity of a problem is generally considered ``settled'' once it has been shown to be fixed-parameter tractable or to be complete for a class in a parameterized hierarchy such as the weft hierarchy. Several natural parameterized problems have, however, resisted such a classification. In the present paper we argue that, %at least in some cases, this is due to the fact that the parameterized complexity of these problems can be better understood in terms of their \emph{parameterized space} or \emph{parameterized circuit} complexity. This includes well-studied, natural problems like the feedback vertex set problem, the associative generability problem, or the longest common subsequence problem. We show that these problems lie in and may even be complete for different parameterized space classes, leading to new insights into the problems' complexity. The classes we study are defined in terms of different forms of bounded nondeterminism and simultaneous time--space bounds.
Existential Second-order Logic over Graphs: A Complete Complexity-theoretic Classification.
In 32nd International Symposium on Theoretical Aspects of Computer Science (STACS 2015), Volume 30 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 703-715. Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2015.
Descriptive complexity theory aims at inferring a problem's computational complexity from the syntactic complexity of its description. A cornerstone of this theory is Fagin's Theorem, by which a property is expressible in existential second-order logic (ESO logic) if, and only if, it is in NP. A natural question, from the theory's point of view, is which syntactic fragments of ESO logic also still characterize NP. Research on this question has culminated in a dichotomy result by Gottlob, Kolaitis, and Schwentick: for each possible quantifier prefix of an ESO formula, the resulting prefix class over graphs either contains an NP-complete problem or is contained in P. However, the exact complexity of the prefix classes inside P remained elusive. In the present paper, we clear up the picture by showing that for each prefix class of ESO logic, its reduction closure under first-order reductions is either FO, L, NL, or NP. For undirected self-loop-free graphs two containment results are especially challenging to prove: containment in L for the prefix \exists R_1\cdots \exists R_n \forall x \exists y and containment in FO for the prefix \exists M \forall x \exists y for monadic M. The complex argument by Gottlob et al. concerning polynomial time needs to be carefully reexamined and either combined with the logspace version of Courcelle's Theorem or directly improved to first-order computations. A different challenge is posed by formulas with the prefix \exists M \forall x\forall y, which we show to express special constraint satisfaction problems that lie in L.
Christoph Stockhusen, Till Tantau:
Completeness Results for Parameterized Space Classes.
Technical report arxiv:1308.2892, ArXiv, 2013.
The parameterized complexity of a problem is considered "settled" once it has been shown to lie in FPT or to be complete for a class in the W-hierarchy or a similar parameterized hierarchy. Several natural parameterized problems have, however, resisted such a classification. At least in some cases, the reason is that upper and lower bounds for their parameterized space complexity have recently been obtained that rule out completeness results for parameterized time classes. In this paper, we make progress in this direction by proving that the associative generability problem and the longest common subsequence problem are complete for parameterized space classes. These classes are defined in terms of different forms of bounded nondeterminism and in terms of simultaneous time--space bounds. As a technical tool we introduce a "union operation" that translates between problems complete for classical complexity classes and for W-classes.
In 8th International Symposium on Parameterized and Exact Computation (IPEC 2013), Lecture Notes in Computer Science, Springer, 2013 (to appear).
The parameterized complexity of a problem is generally considered ``settled'' once it has been shown to lie in FPT or to be complete for a class in the W-hierarchy or a similar parameterized hierarchy. Several natural parameterized problems have, however, resisted such a classification. At least in some cases, the reason is that upper and lower bounds for their parameterized \emph{space} complexity have recently been obtained that rule out completeness results for parameterized \emph{time} classes. In this paper, we make progress in this direction by proving that the associative generability problem and the longest common subsequence problem are complete for parameterized space classes. These classes are defined in terms of different forms of bounded nondeterminism and in terms of simultaneous time--space bounds. As a technical tool we introduce a ``union operation'' that translates between problems complete for classical complexity classes and for W-classes.
Graph Drawing in TikZ.
In Proceedings of Graph Drawing 2012, Volume 7704 of Lecture Notes in Computer Science, pp. 517-528. Springer, 2013.
At the heart of every good graph drawing algorithm lies an efficient procedure for assigning canvas positions to a graph's nodes and the bend points of its edges. However, every real-world implementation of such an algorithm must address numerous problems that have little to do with the actual algorithm, like handling input and output for-mats, formatting node texts, and styling nodes and edges. We present a new framework, implemented in the Lua programming language and integrated into the TikZ graphics description language, that aims at sim-plifying the implementation of graph drawing algorithms. Implementers using the framework can focus on the core algorithmic ideas and will au-tomatically profit from the framework's pre- and post-processing steps as well as from the extensive capabilities of the TikZ graphics language and the TeX typesetting engine. Algorithms already implemented using the framework include the Reingold-Tilford tree drawing algorithm, a mod-ular version of Sugiyama's layered algorithm, and several force-based multilevel algorithms.
Journal of Graph Algorithms and Applications, 17(4):495-513, 2013.
Show PDF | Show abstract
Valentina Damerow, Bodo Manthey, Friedhelm Meyer auf der Heide, Harald Räcke, Christian Scheideler, Christian Sohler, Till Tantau:
Smoothed Analysis of Left-To-Right Maxima with Applications.
ACM Transactions on Algorithms, 8(3):Article No. 30, 2012.
A left-to-right maximum in a sequence of n numbers s1, …, sn is a number that is strictly larger than all preceding numbers. In this article we present a smoothed analysis of the number of left-to-right maxima in the presence of additive random noise. We show that for every sequence of n numbers si ? [0,1] that are perturbed by uniform noise from the interval [-?,?], the expected number of left-to-right maxima is ?(&sqrt;n/? + log n) for ?>1/n. For Gaussian noise with standard deviation ? we obtain a bound of O((log3/2 n)/? + log n). We apply our results to the analysis of the smoothed height of binary search trees and the smoothed number of comparisons in the quicksort algorithm and prove bounds of ?(&sqrt;n/? + log n) and ?(n/?+1&sqrt;n/? + n log n), respectively, for uniform random noise from the interval [-?,?]. Our results can also be applied to bound the smoothed number of points on a convex hull of points in the two-dimensional plane and to smoothed motion complexity, a concept we describe in this article. We bound how often one needs to update a data structure storing the smallest axis-aligned box enclosing a set of points moving in d-dimensional space.
Michael Elberfeld, Andreas Jakoby, Till Tantau:
Algorithmic Meta Theorems for Circuit Classes of Constant and Logarithmic Depth.
In Proceedings of the 29th International Symposium on Theoretical Aspects of Computer Science (STACS 2012), Volume 14 of Leibniz International Proceedings in Informatics (LIPIcs), pp. 66-77. Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2012.
Show PDF | Go to website | Show abstract
An algorithmic meta theorem for a logic and a class C of structures states that all problems ex- pressible in this logic can be solved efficiently for inputs from C. The prime example is Courcelle's Theorem, which states that monadic second-order (mso) definable problems are linear-time solv- able on graphs of bounded tree width. We contribute new algorithmic meta theorems, which state that mso-definable problems are (a) solvable by uniform constant-depth circuit families (AC0 for decision problems and TC0 for counting problems) when restricted to input structures of bounded tree depth and (b) solvable by uniform logarithmic-depth circuit families (NC1 for decision prob- lems and #NC1 for counting problems) when a tree decomposition of bounded width in term representation is part of the input. Applications of our theorems include a TC0-completeness proof for the unary version of integer linear programming with a fixed number of equations and extensions of a recent result that counting the number of accepting paths of a visible pushdown automaton lies in #NC1. Our main technical contributions are a new tree automata model for unordered, unranked, labeled trees; a method for representing the tree automata's computations algebraically using convolution circuits; and a lemma on computing balanced width-3 tree de- compositions of trees in TC0, which encapsulates most of the technical difficulties surrounding earlier results connecting tree automata and NC1.
Michael Elberfeld, Ilka Schnoor, Till Tantau:
Influence of Tree Topology Restrictions on the Complexity of Haplotyping with Missing Data.
Theoretical Computer Science, 432:38–51, 2012.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. Unfortunately, when data entries are missing, which is often the case in laboratory data, the resulting formal problem IPPH, which stands for incomplete perfect phylogeny haplotyping, is NP-complete. Even radically simplified versions, such as the restriction to phylogenetic trees consisting of just two directed paths from a given root, are still NP-complete; but here, at least, a fixed-parameter algorithm is known. Such drastic and ad hoc simplifications turn out to be unnecessary to make IPPH tractable: we present the first theoretical analysis of a parametrized algorithm, which we develop in the course of the paper, that works for arbitrary instances of IPPH. This tractability result is optimal insofar as we prove IPPH to be NP-complete whenever any of the parameters we consider is not fixed, but part of the input.
On the Space Complexity of Parameterized Problems.
In Proceedings of the 7th International Symposium on Parameterized and Exact Computation (IPEC 2012), Volume 7535 of Lecture Notes in Computer Science, pp. 206-217. Springer, 2012.
Parameterized complexity theory measures the complexity of computational problems predominantly in terms of their parameterized time complexity. The purpose of the present paper is to demonstrate that the study of parameterized space complexity can give new insights into the complexity of well-studied parameterized problems like the feedback vertex set problem. We show that the undirected and the directed feedback vertex set problems have different parameterized space complexities, unless L = NL; which explains why the two problem variants seem to necessitate different algorithmic approaches even though their parameterized time complexity is the same. For a number of further natural parameterized problems, including the longest common subsequence problem and the acceptance problem for multi-head automata, we show that they lie in or are complete for different parameterized space classes; which explains why previous attempts at proving completeness of these problems for parameterized time classes have failed.
Technical report , ECCC, 2012.
Parameterized complexity theory measures the complexity of computational problems predominantly in terms of their parameterized time complexity. The purpose of the present paper is to demonstrate that the study of parameterized space complexity can give new insights into the complexity of well-studied parameterized problems like the feedback vertex set problem. We show that the undirected and the directed feedback vertex set problems have different parameterized space complexities, unless L=NL. For a number of further natural parameterized problems, including the longest common subsequence problem, the acceptance problem for multi-head automata, and the associative generability problem we show that they lie in or are complete for different parameterized space classes. Our results explain why previous attempts at proving completeness of different problems for parameterized time classes have failed.
Michael Elberfeld, Till Tantau:
Phylogeny- and Parsimony-Based Haplotype Inference with Constraints.
Information and Computation, 213:33–47, 2012.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. An extension of this approach tries to incorporate prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. The objective is to increase the accuracy of haplotyping methods, but it was conjectured that the resulting formal problem constrained perfect phylogeny haplotyping might be NP-complete. In the paper at hand we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
In Proceedings of the 27th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS 2012), pp. 265-274. IEEE Computer Society, 2012.
We study on which classes of graphs first-order logic (FO) and monadic second-order logic (MSO) have the same expressive power. We show that for each class of graphs that is closed under taking subgraphs, FO and MSO have the same expressive power on the class if, and only if, it has bounded tree depth. Tree depth is a graph invariant that measures the similarity of a graph to a star in a similar way that tree width measures the similarity of a graph to a tree. For classes just closed under taking induced subgraphs, we show an analogous result for guarded second-order logic (GSO), the variant of MSO that not only allows quantification over vertex sets but also over edge sets. A key tool in our proof is a Feferman-Vaught-type theorem that is constructive and still works for unbounded partitions.
Technical report ECCC-TR11-128, Electronic Colloquium on Computational Complexity, 2011.
An algorithmic meta theorem for a logic and a class C of structures states that all problems expressible in this logic can be solved efficiently for inputs from C. The prime example is Courcelle's Theorem, which states that monadic second-order (MSO) definable problems are linear-time solvable on graphs of bounded tree width. We contribute new algorithmic meta theorems, which state that MSO-definable problems are (a) solvable by uniform constant-depth circuit families (AC0 for decision problems and TC0 for counting problems) when restricted to input structures of bounded tree depth and (b) solvable by uniform logarithmic-depth circuit families (NC1 for decision problems and #NC1 for counting problems) when a tree decomposition of bounded width in term representation is part of the input. Applications of our theorems include a TC0-completeness proof for the unary version of integer linear programming with a fixed number of equations and extensions of a recent result that counting the number of accepting paths of a visible pushdown automaton lies in #NC1. Our main technical contributions are a new tree automata model for unordered, unranked, labeled trees; a method for representing the tree automata's computations algebraically using convolution circuits; and a lemma on computing balanced width-3 tree decompositions of trees in TC0, which encapsulates most of the technical difficulties surrounding earlier results connecting tree automata and NC1.
Logspace Versions of the Theorems of Bodlaender and Courcelle.
In Proceedings of the 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS 2010), pp. 143-152. IEEE Computer Society, 2010.
Bodlaender's Theorem states that for every k there is a linear-time algorithm that decides whether an input graph has tree width k and, if so, computes a width-k tree composition. Courcelle's Theorem builds on Bodlaender's Theorem and states that for every monadic second-order formula φ and for every k there is a linear-time algorithm that decides whether a given logical structure A of tree width at most k satisfies φ. We prove that both theorems still hold when ``linear time'' is replaced by ``logarithmic space.'' The transfer of the powerful theoretical framework of monadic second-order logic and bounded tree width to logarithmic space allows us to settle a number of both old and recent open problems in the logspace world.
Technical report SIIM-TR-A-10-01, Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck, 2010.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolution- ary model where a perfect phylogenetic tree is sought that explains the observed data. In their CPM'09 paper, Fellows et al. studied an exten- sion of this approach that incorporates prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. While this approach is attractive to increase the accuracy of haplotyping methods, it was conjectured that the resulting formal prob- lem constrained perfect phylogeny haplotyping might be NP-complete. In the paper at hand we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
In Proceedings of the 21st Annual Symposium on Combinatorial Pattern Matching (CPM 2010), Volume 6129 of Lecture Notes in Computer Science, pp. 177-189. Springer, 2010.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast computational haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. In their CPM 2009 paper, Fellows et al. studied an extension of this approach that incorporates prior knowledge in the form of a set of candidate haplotypes from which the right haplotypes must be chosen. While this approach may help to increase the accuracy of haplotyping methods, it was conjectured that the resulting formal problem constrained perfect phylogeny haplotyping might be NP-complete. In the present paper we present a polynomial-time algorithm for it. Our algorithmic ideas also yield new fixed-parameter algorithms for related haplotyping problems based on the maximum parsimony assumption.
Edith Hemaspaandra, Lane A. Hemaspaandra, Till Tantau, Osamu Watanabe:
On the Complexity of Kings.
Theoretical Computer Science, 411(2010):783-798, 2010.
A k-king in a directed graph is a node from which each node in the graph can be reached via paths of length at most~k. Recently, kings have proven useful in theoretical computer science, in particular in the study of the complexity of reachability problems and semifeasible sets. In this paper, we study the complexity of recognizing k-kings. For each succinctly specified family of tournaments (completely oriented digraphs), the k-king problem is easily seen to belong to $\Pi_2^{\mathrm p}$. We prove that the complexity of kingship problems is a rich enough vocabulary to pinpoint every nontrivial many-one degree in $\Pi_2^{\mathrm p}$. That is, we show that for every $k \ge 2$ every set in $\Pi_2^{\mathrm p}$ other than $\emptyset$ and $\Sigma^*$ is equivalent to a k-king problem under $\leq_{\mathrm m}^{\mathrm p}$-reductions. The equivalence can be instantiated via a simple padding function. Our results can be used to show that the radius problem for arbitrary succinctly represented graphs is $\Sigma_3^{\mathrm p}$-complete. In contrast, the diameter problem for arbitrary succinctly represented graphs (or even tournaments) is $\Pi_2^{\mathrm p}$-complete.
In Proceedings of the 6th Annual Conference on Theory and Applications of Models of Computation (TAMC 2009), Volume 5532 of Lecture Notes in Computer Science, pp. 201-210. Springer, 2009.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes from genotype data. One fast haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. Unfortunately, when data entries are missing as is often the case in laboratory data, the resulting incomplete perfect phylogeny haplotyping problem IPPH is NP-complete and no theoretical results are known concerning its approximability, fixed-parameter tractability, or exact algorithms for it. Even radically simplified versions, such as the restriction to phylogenetic trees consisting of just two directed paths from a given root, are still NP-complete; but here a fixed-parameter algorithm is known. We show that such drastic and ad hoc simplifications are not necessary to make IPPH fixed-parameter tractable: We present the first theoretical analysis of an algorithm, which we develop in the course of the paper, that works for arbitrary instances of IPPH. On the negative side we show that restricting the topology of perfect phylogenies does not always reduce the computational complexity: while the incomplete directed perfect phylogeny problem is well-known to be solvable in polynomial time, we show that the same problem restricted to path topologies is NP-complete.
Jens Gramm, Tzvika Hartman, Till Nierhoff, Roded Sharan, Till Tantau:
On the complexity of SNP block partitioning under the perfect phylogeny model.
Discrete Mathematics, 309(18):5610-5617, 2009.
ecent technologies for typing single nucleotide polymorphisms (SNPs) across a population are producing genome-wide genotype data for tens of thousands of SNP sites. The emergence of such large data sets underscores the importance of algorithms for large-scale haplotyping. Common haplotyping approaches first partition the SNPs into blocks of high linkage-disequilibrium, and then infer haplotypes for each block separately. We investigate an integrated haplotyping approach where a partition of the SNPs into a minimum number of non-contiguous subsets is sought, such that each subset can be haplotyped under the perfect phylogeny model. We show that finding an optimum partition is NP-hard even if we are guaranteed that two subsets suffice. On the positive side, we show that a variant of the problem, in which each subset is required to admit a perfect path phylogeny haplotyping, is solvable in polynomial time.
Jens Gramm, Arfst Nickelsen, Till Tantau:
Fixed-Parameter Algorithms in Phylogenetics.
In Bioinformatics: Volume I: Data, Sequence Analysis and Evolution, Volume 452 of Methods in Molecular Biology, pp. 507-535. Springer, 2008.
We survey the use of fixed-parameter algorithms in phylogenetics. A central computational problem in this field is the construction of a likely phylogeny (genealogical tree) for a set of species based on observed differences in the phenotype, on differences in the genotype, or on given partial phylogenies. Ideally, one would like to construct so-called perfect phylogenies, which arise from an elementary evolutionary model, but in practice one must often be content with phylogenies whose "distance from perfection" is as small as possible. The computation of phylogenies also has applications in seemingly unrelated areas such as genomic sequencing and finding and understanding genes. The numerous computational problems arising in phylogenetics are often NP-complete, but for many natural parametrizations they can be solved using fixed-parameter algorithms.
Bodo Manthey, Till Tantau:
Smoothed Analysis of Binary Search Trees and Quicksort Under Additive Noise.
In Proceedings of MFCS 2008, Volume 5162 of Lecture Notes in Computer Science, pp. 467-478. Springer, 2008.
Binary search trees are a fundamental data structure and their height plays a key role in the analysis of divide-and-conquer algorithms like quicksort. We analyze their smoothed height under additive uniform noise: An adversary chooses a sequence of~$n$ real numbers in the range $[0,1]$, each number is individually perturbed by adding a value drawn uniformly at random from an interval of size~$d$, and the resulting numbers are inserted into a search tree. An analysis of the smoothed tree height subject to $n$ and $d$ lies at the heart of our paper: We prove that the smoothed height of binary search trees is $\Theta (\sqrt{n/d} + \log n)$, where $d \ge 1/n$ may depend on~$n$. Our analysis starts with the simpler problem of determining the smoothed number of left-to-right maxima in a sequence. We establish matching bounds, namely once more $\Theta (\sqrt{n/d} + \log n)$. We also apply our findings to the performance of the quicksort algorithm and prove that the smoothed number of comparisons made by quicksort is $\Theta(\frac{n}{d+1} \sqrt{n/d} + n \log n)$.
Complexity of the Undirected Radius and Diameter Problems for Succinctly Represented Graphs.
Show PDF
Der One-Time-Pad-Algorithmus.
In Taschenbuch der Algorithmen, Springer, 2008.
Ein Beitrag über den One-Time-Pad-Algorithmus im Rahmen der Algorithmen der Woche während des Jahrs der Informatik 2006.
Generalizations of the Hartmanis-Immerman-Sewelson Theorem and Applications to Infinite Subsets of P-Selective Sets.
The Hartmanis--Immerman--Sewelson theorem is the classical link between the exponential and the polynomial time realm. It states that NE = E if, and only if, every sparse set in NP lies in P. We establish similar links for classes other than sparse sets: 1. E = UE if, and only if, all functions f: {1}^* to Sigma^* in NPSV_g lie in FP. 2. E = NE if, and only if, all functions f: {1}^* to Sigma^* in NPFewV lie in FP. 3. E = E^NP if, and only if, all functions f: {1}^* to Sigma^* in OptP lie in FP. 4. E = E^NP if, and only if, all standard left cuts in NP lie in P. 5. E = EH if, and only if, PH cap P/poly = P. We apply these results to the immunity of P-selective sets. It is known that they can be bi-immune, but not Pi_2^p/1-immune. Their immunity is closely related to top-Toda languages, whose complexity we link to the exponential realm, and also to king languages. We introduce the new notion of superkings, which are characterized in terms of existsforall-predicates rather than forallexists-predicates, and show that king languages cannot be Sigma_2^p-immune. As a consequence, P-selective sets cannot be Sigma_2^/1-immune and, if E^NP^NP = E, not even P/1-immune.
Computational Complexity of Perfect-Phylogeny-Related Haplotyping Problems.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. This problem, which has strong practical applications, can be approached using both statistical as well as combinatorial methods. While the most direct combinatorial approach, maximum parsimony, leads to NP-complete problems, the perfect phylogeny model proposed by Gusfield yields a problem, called PPH, that can be solved in polynomial (even linear) time. Even this may not be fast enough when the whole genome is studied, leading to the question of whether parallel algorithms can be used to solve the PPH problem. In the present paper we answer this question affirmatively, but we also give lower complexity bounds on its complexity. In detail, we show that the problem lies in Mod$_2$L, a subclass of the circuit complexity class NC$^2$, and is hard for logarithmic space and thus presumably not in NC$^1$. We also investigate variants of the PPH problem that have been studied in the literature, like the perfect path phylogeny haplotyping problem and the combined problem where a perfect phylogeny of maximal parsimony is sought, and show that some of these variants are TC$^0$-complete or lie in AC$^0$.
In Proceedings of the 33rd International Symposium on Mathematical Foundations of Computer Science (MFCS 2008), Volume 5162 of Lecture Notes in Computer Science, pp. 299-310. Springer, 2008.
Haplotyping, also known as haplotype phase prediction, is the problem of predicting likely haplotypes based on genotype data. One fast haplotyping method is based on an evolutionary model where a perfect phylogenetic tree is sought that explains the observed data. Unfortunately, when data entries are missing, as is often the case in real laboratory data, the resulting formal problem IPPH, which stands for incomplete perfect phylogeny haplotyping, is NP-complete and no theoretical results are known concerning its approximability, fixed-parameter tractability or exact algorithms for it. Even radically simplified versions, such as the restriction to phylogenetic trees consisting of just two directed paths from a given root, are still NP-complete, but here, at least, a fixed-parameter algorithm is known. We generalize this algorithm to arbitrary tree topologies and present the first theoretical analysis of an algorithm that works on arbitrary instances of the original IPPH problem. At the same time we also show that restricting the tree topology does not always make finding phylogenies easier: while the incomplete directed perfect phylogeny problem is well-known to be solvable in polynomial time, we show that the same problem restricted to path topologies is NP-complete.
The Computer Journal, 51(1):79--101, 2008.
We survey the use of fixed-parameter algorithms in the field of phylogenetics, which is the study of evolutionary relationships. The central problem in phylogenetics is the reconstruction of the evolutionary history of biological species, but its methods also apply to linguistics, philology, or architecture. A basic computational problem is the reconstruction of a likely phylogeny (genealogical tree) for a set of species based on observed differences in the phenotype like color or form of limbs, based on differences in the genotype like mutated nucleotide positions in the DNA sequence, or based on given partial phylogenies. Ideally, one would like to construct so-called perfect phylogenies, which arise from a very simple evolutionary model, but in practice one must often be content with phylogenies whose ``distance from perfection'' is as small as possible. The computation of phylogenies has applications in seemingly unrelated areas such as genomic sequencing and finding and understanding genes. The numerous computational problems arising in phylogenetics often are NP-complete, but for many natural parametrizations they can be solved using fixed-parameter algorithms.
Jens Gramm, Till Nierhoff, Roded Sharan, Till Tantau:
Haplotyping with Missing Data via Perfect Path Phylogenies.
Discrete and Applied Mathematics, 155:788-805, 2007.
Computational methods for inferring haplotype information from genotype data are used in studying the association between genomic variation and medical condition. Recently, Gusfield proposed a haplotype inference method that is based on perfect phylogeny principles. A fundamental problem arises when one tries to apply this approach in the presence of missing genotype data, which is common in practice. We show that the resulting theoretical problem is NP-hard even in very restricted cases. To cope with missing data, we introduce a variant of haplotyping via perfect phylogeny in which a path phylogeny is sought. Searching for perfect path phylogenies is strongly motivated by the characteristics of human genotype data: 70% of real instances that admit a perfect phylogeny also admit a perfect path phylogeny. Our main result is a fixed-parameter algorithm for haplotyping with missing data via perfect path phylogenies. We also present a simple linear-time algorithm for the problem on complete data.
In Proceedings of FCT 2007, Volume 4639 of Lecture Notes in Computer Science, pp. 328--340. Springer, 2007.
Andreas Jakoby, Till Tantau:
Logspace Algorithms for Computing Shortest and Longest Paths in Series-Parallel Graphs.
In Proceedings of the 27th International Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2007), Volume 4855 of Lecture Notes in Computer Science, pp. 216-227. Springer, 2007.
For many types of graphs, including directed acyclic graphs, undirected graphs, tournament graphs, and graphs with bounded independence number, the shortest path problem is NL-complete. The longest path problem is even NP-complete for many types of graphs, including undirected K5-minor-free graphs and even planar graphs. In the present paper we present logspace algorithms for computing shortest and longest paths in series-parallel graphs where the edges can be directed arbitrarily. The class of series-parallel graphs that we study can be characterized alternatively as the class of K4-minor-free graphs and also as the class of graphs of tree-width 2. It is well-known that for graphs of bounded tree-width many intractable problems can be solved efficiently, but previous work was focused on finding algorithms with low parallel or sequential time complexity. In contrast, our results concern the space complexity of shortest and longest path problems. In particular, our results imply that for graphs of tree-width 2 these problems are L-complete.
Binary search trees are a fundamental data structure and their height plays a key role in the analysis of divide-and-conquer algorithms like quicksort. Their worst-case height is linear; their average height, whose exact value is one of the best-studied problems in average-case complexity, is logarithmic. We analyze their smoothed height under additive noise: An adversary chooses a sequence of n real numbers in the range [0,1]; each number is individually perturbed by adding a random value from an interval of size d; and the resulting numbers are inserted into a search tree. The expected height of this tree is called smoothed tree height. If d is very small, namely for d < 1/n, the smoothed tree height is the same as the worst-case height; if d is very large, the smoothed tree height approaches the logarithmic average-case height. An analysis of what happens between these extremes lies at the heart of our paper: We prove that the smoothed height of binary search trees is $Theta (sqrt{n/d} + log n)$, where d >= 1/n may depend on n. This implies that the logarithmic average-case height becomes manifest only for $d in Omega (n/log^2 n)$. For the analysis, we first prove that the smoothed number of left-to-right maxima in a sequence is also $Theta (sqrt{n/d} + log n)$. We apply these findings to the performance of the quicksort algorithm, which needs $Theta(n^2)$ comparisons in the worst case and $Theta(n log n)$ on average, and prove that the smoothed number of comparisons made by quicksort is $Theta(n/d+1 sqrt{n/d} + n log n)$. This implies that the average-case becomes manifest already for $d in Omega(sqrt[3]{n/logsqr n})$.
Logspace Optimization Problems and Their Approximability Properties.
Theory of Computing Systems, 41(2):327-350, 2007.
Logspace optimization problems are the logspace analogues of the well-studied polynomial-time optimization problems. Similarly to them, logspace optimization problems can have vastly different approximation properties, even though the underlying decision problems have the same computational complexity. Natural problems -- including the shortest path problems for directed graphs, undirected graphs, tournaments, and forests -- exhibit such a varying complexity. In order to study the approximability of logspace optimization problems in a systematic way, polynomial-time approximation classes and polynomial-time reductions between optimization problems are transferred to logarithmic space. It is proved that natural problems are complete for different logspace approximation classes. This is used to show that under the assumption L \neq NL some logspace optimization problems cannot be approximated with a constant ratio; some can be approximated with a constant ratio, but do not permit a logspace approximation scheme; and some have a logspace approximation scheme, but cannot be solved in logarithmic space.
In Proceedings of Workshop on Algorithms in Bioinformatics (WABI), Volume 4175 of Lecture Notes in Computer Science, pp. 92-102. Springer, 2006.
Recent technologies for typing single nucleotide polymorphisms (SNPs) across a population are producing genome-wide genotype data for tens of thousands of SNP sites. The emergence of such large data sets underscores the importance of algorithms for large-scale haplotyping. Common haplotyping approaches first partition the SNPs into blocks of high linkage-disequilibrium, and then infer haplotypes for each block separately. We investigate an integrated haplotyping approach where a partition of the SNPs into a minimum number of non-contiguous subsets is sought, such that each subset can be haplotyped under the perfect phylogeny model. We show that finding an optimum partition is NP-hard even if we are guaranteed that two subsets suffice. On the positive side, we show that a variant of the problem, in which each subset is required to admit a perfect path phylogeny haplotyping, is solvable in polynomial time.
Technical report URCS-TR905, Technische Berichtsreihe der Universität Rochester, Computer Science Department, 2006.
A king in a directed graph is a node from which each node in the graph can be reached via paths of length at most two. There is a broad literature on tournaments (completely oriented digraphs), and it has been known for more than half a century that all tournaments have at least one king [Lan53]. Recently, kings have proven useful in theoretical computer science, in particular in the study of the complexity of reachability problems [Tan01,NT05]. and semifeasible sets [HNP98,HT06,HOZZ06]. In this paper, we study the complexity of recognizing kings. For each succinctly specified family of tournaments, the king problem is already known to belong to $\Pi_2^p$ [HOZZ06]. We prove that the complexity of kingship problems is a rich enough vocabulary to pinpoint every nontrivial many-one degree in $\Pi_2^p$. That is, we show that \emph{every} set in $\Pi_2^p$ other than $\emptyset$ and $\Sigma^*$ is equivalent to a king problem under $\leq_m^p$-reductions. Indeed, we show that the equivalence can even be instantiated via relatively simple padding, and holds even if the notion of kings is redefined to refer to k-kings (for any fixed $k \geq 2$)---nodes from which the all nodes can be reached via paths of length at most k. Using these and related techniques, we obtain a broad range of additional results about the complexity of king problems, diameter problems, radius problems, and initial component problems. It follows easily from our proof approach that the problem of testing kingship in succinctly specified graphs (which need not be tournaments) is $\Pi_2^p$-complete. We show that the radius problem for arbitrary succinctly represented graphs is $\Sigma_3^p$-complete, but that in contrast the diameter problem for arbitrary succinctly represented graphs (or even tournaments) is $\Pi_2^p$-complete. Yet again in contrast, we prove that initial component languages (which ask whether a given node can reach all other nodes in its tournament) all fall within $\Pi_2^p$, yet cannot be $\Pi_2^p$-complete---or even NP-hard---unless P=NP.
Richard Karp, Till Nierhoff, Till Tantau:
Optimal Flow Distribution Among Multiple Channels with Unknown Capacities.
In Essays in Theoretical Computer Science in Memory of Shimon Even, Volume 3895 of Lecture Notes in Computer Science - Festschriften, pp. 111-128. Springer, 2006.
Consider a simple network flow problem in which a flow of value D must be split among n channels directed from a source to a sink. The initially unknown channel capacities can be probed by attempting to send a flow of at most D units through the network. If the flow is not feasible, we are told on which channels the capacity was exceeded (binary feedback) and possibly also how many units of flow were successfully sent on these channels (throughput feedback). For throughput feedback we present optimal protocols for minimizing the number of rounds needed to find a feasible flow and for minimizing the total amount of wasted flow. For binary feedback we present an asymptotically optimal protocol.
The Descriptive Complexity of the Reachability Problem As a Function of Different Graph Parameters.
The reachability problem for graphs cannot be described, in the sense of descriptive complexity theory, using a single first-order formula. This is true both for directed and undirected graphs, both in the finite and infinite. However, if we restrict ourselves to graphs in which a certain graph parameter is fixed to a certain value, first-order formulas often suffice. A trivial example are graphs whose number of vertices is fixed to n. In such graphs reachability can be described using a first-order formula with a quantifier nesting depth of log2 n, which is both a lower and an upper bound. In this paper we investigate how the descriptive complexity of the reachability problem varies as a function of graph parameters such as the size of the graph, the clique number, the matching number, the independence number or the domination number. The independence number turns out to be the by far most challenging graph parameter.
Lane A. Hemaspaandra, Proshanto Mukherji, Till Tantau:
Contextfree Languages Can Be Accepted With Absolutely No Space Overhead.
Information and Computation, 203(2):163-180, 2005.
We study Turing machines that are allowed absolutely no space overhead. The only work space the machines have, beyond the fixed amount of memory implicit in their finite-state control, is that which they can create by cannibalizing the input bits' own space. This model more closely reflects the fixed-sized memory of real computers than does the standard complexity-theoretic model of linear space. Though some context-sensitive languages cannot be accepted by such machines, we show that all context-free languages can be accepted nondeterministically in polynomial time with absolutely no space overhead, and that all deterministic context-free languages can be accepted deterministically in polynomial time with absolutely no space overhead.
In Proceedings of the 2nd Brazilian Symposium on Graphs, Algorithms and Combinatorics (GRACO 2005), Volume 19 of Electronic Notes in Discrete Mathematics, pp. 225-231. Springer, 2005.
Consider a simple network flow problem in which there are n channels directed from a source to a sink. The channel capacities are unknown and we wish to determine a feasible network flow of value D. Flow problems with unknown capacities arise naturally in numerous applications, including inter-domain traffic routing in the Internet [Chandrayana et al. 2004], bandwidth allocation for sending files in peer-to-peer networks, and the distribution of physical goods like newspapers among different points of sale. We study protocols that probe the network by attempting to send a flow of at most D units through the network. If the flow is not feasible, the protocol is told on which channels the capacity was exceeded (binary feedback) and possibly also how many units of flow were successfully send on these channels (throughput feedback). For the latter, more informative, type of feedback we present optimal protocols for minimizing the number of rounds needed to find a feasible flow and for minimizing the total amount of wasted flow. For binary feedback, we show that one can exploit the fact that network capacities are often larger than the demand D: We present a protocol for this situation that is asymptotically optimal and finds a solution more quickly than the generalized binary search protocol previously proposed in the literature [Chandrayana et al. 2004]. For the special case of two channels we present a protocol that is optimal and outperforms binary search.
Arfst Nickelsen, Till Tantau:
The Complexity of Finding Paths in Graphs with Bounded Independence Number.
SIAM Journal on Computing, 34(5):1176-1195, 2005.
We study the problem of finding a path between two vertices in finite directed graphs whose independence number is bounded by some constant k. The independence number of a graph is the largest number of vertices that can be picked such that there is no edge between any two of them. The complexity of this problem depends on the exact question we ask: Do we only wish to tell whether a path exists? Do we also wish to construct such a path? Are we required to construct the shortest one? Concerning the first question, we show that the reachability problem is first-order definable for all k and that its succinct version is Pi2P-complete for all k. In contrast, the reachability problems for many other types of finite graphs, including dags and trees, are not first-order definable and their succinct versions are PSPACE-complete. Concerning the second question, we show that not only can we construct paths in logarithmic space, there even exists a logspace approximation scheme for this problem. The scheme gets a ratio r ≥ 1 as additional input and outputs a path that is at most r times as long as the shortest path. Concerning the third question, we show that even telling whether the shortest path has a certain length is NL-complete and thus as difficult as for arbitrary directed graphs.
In Proceedings of the 15th International Symposium on Fundamentals of Computation Theory (FCT 2005), Volume 3623 of Lecture Notes in Computer Science, pp. 92-103. Springer, 2005.
This paper introduces logspace optimization problems as analogues of the well-studied polynomial-time optimization problems. Similarly to them, logspace optimization problems can have vastly different approximation properties, even though the underlying decision problems have the same computational complexity. Natural problems, including the shortest path problems for directed graphs, undirected graphs, tournaments, and forests, exhibit such a varying complexity. In order to study the approximability of logspace optimization problems in a systematic way, polynomial-time approximation classes are transferred to logarithmic space. Appropriate reductions are defined and optimization problems are presented that are complete for these classes. It is shown that under the assumption L neq NL some logspace optimization problems cannot be approximated with a constant ratio; some can be approximated with a constant ratio, but do not permit a logspace approximation scheme; and some have a logspace approximation scheme, but cannot be solved in logarithmic space. A new natural NL-complete problem is presented that has a logspace approximation scheme.
Weak Cardinality Theorems.
Journal of Symbolic Logic, 70(3):861-878, 2005.
Kummer's Cardinality Theorem states that a language A must be recursive if a Turing machine can exclude for any n words w1, ..., wn one of the n + 1 possibilities for the cardinality of {w1, ..., wn} \cap A. There was good reason to believe that this theorem is a peculiarity of recursion theory: neither the Cardinality Theorem nor weak forms of it hold for classical resource-bounded computational models like polynomial time. This belief may be flawed. In this paper it is shown that Weak Cardinality Theorems hold for finite automata and also for other models. A mathematical explanation is given as to ``why'' recursion-theoretic and automata-theoretic Weak Cardinality Theorems hold, but not corresponding `middle-ground theorems': the recursion- and automata-theoretic Weak Cardinality Theorems are instantiations of purely logical Weak Cardinality Theorems. The logical theorems can be instantiated for logical structures characterizing recursive computations and finite automata computations. A corresponding structure characterizing polynomial time does not exist.
On the Complexity of Haplotyping via Perfect Phylogeny.
In Proceedings of Second RECOMB Satellite Workshop on Computational Methods for SNPs and Haplotypes, pp. 35-46. , 2004.
The problem of haplotyping via perfect phylogeny has received a lot of attention lately due to its applicability to real haplotyping problems and its theoretical elegance. However, two main research issues remained open: The complexity of haplotyping with missing data, and whether the problem is linear-time solvable. In this paper we settle the first question and make progress toward answering the second one. Specifically, we prove that Perfect Phylogeny Haplotyping with missing data is NP-complete even when the phylogeny is a path and only one allele of every polymorphic site is present in the population in its homozygous state. Our result implies the hardness of several variants of the missing data problem, including the general Perfect Phylogeny Haplotyping Problem with missing data, and Hypergraph Tree Realization with missing data. On the positive side, we give a linear-time algorithm for Perfect Phylogeny Haplotyping when the phylogeny is a path. This variant is important due to the abundance of yin-yang haplotypes in the human genome. Our algorithm relies on a reduction of the problem to that of deciding whether a partially ordered set has width~2.
Jens Gramm, Till Nierhoff, Till Tantau:
Perfect Path Phylogeny Haplotyping with Missing Data is Fixed-Parameter Tractable.
In Proceedings of the 2004 International Workshop on Parameterized and Exact Computation, Volume 3162 of Lecture Notes in Computer Science, pp. 174-186. Springer, 2004.
Haplotyping via perfect phylogeny is a method for retrieving haplotypes from genotypes. Fast algorithms are known for computing perfect phylogenies from complete and error-free input instances---these instances can be organized as a genotype matrix whose rows are the genotypes and whose columns are the single nucleotide polymorphisms under consideration. Unfortunately, in the more realistic setting of missing entries in the genotype matrix, even restricted forms of the perfect phylogeny haplotyping problem become NP-hard. We show that haplotyping via perfect phylogeny with missing data becomes computationally tractable when imposing additional biologically motivated constraints. Firstly, we focus on asking for perfect phylogenies that are paths, which is motivated by the discovery that yin-yang haplotypes span large parts of the human genome. A yin-yang haplotype implies that every corresponding perfect phylogeny <i>has</i> to be a path. Secondly, we assume that the number of missing entries in every column of the input genotype matrix is bounded. We show that the perfect path phylogeny haplotyping problem is fixed-parameter tractable when we consider the maximum number of missing entries per column of the genotype matrix as parameter. The restrictions we impose are met by a majority of the problem instances encountered in publicly available human genome data.
Overhead-Free Computation, DCFLs, and CFLs.
Technical report URCS-TR-2004-844, Technische Berichtsreihe der Universität Rochester, Computer Science Department, 2004.
Arfst Nickelsen, Till Tantau, Lorenz Weizäcker:
Aggregates with Component Size One Characterize Polynomial Space.
Aggregates are a computational model similar to circuits, but the underlying graph is not necessarily acyclic. Logspace-uniform polynomial-size aggregates decide exactly the languages in PSPACE; without uniformity condition they decide the languages in PSPACE/poly. As a measure of similarity to boolean circuits we introduce the parameter component size. We prove that already aggregates of component size 1 are powerful enough to capture polynomial space. The only type of cyclic components needed to make polynomial-size circuits as powerful as polynomial-size aggregates are binary xor-gates whose output is fed back to the gate as one of the inputs.
Mitsunori Ogihara, Till Tantau:
On the Reducibility of Sets Inside NP to Sets with Low Information Content.
Journal of Computer and System Sciences, 69(4):299-324, 2004.
This paper studies for various natural problems in NP whether they can be reduced to sets with low information content, such as branches, P-selective sets, and membership comparable sets. The problems that are studied include the satisfiability problem, the graph automorphism problem, the undirected graph accessibility problem, the determinant function, and all logspace self-reducible languages. Some of these are complete for complexity classes within NP, but for others an exact complexity-theoretic characterization is not known. Reducibility of these problems is studied in a general framework introduced in this paper: prover-verifier protocols with low-complexity provers. It is shown that all these natural problems indeed have such protocols. This fact is used to show, for certain reduction types, that these problems are not reducible to sets with low information content unless their complexity is much less than what it is currently believed to be. The general framework is also used to obtain a new characterization of the complexity class L: L is the class of all logspace self-reducible sets in LL-sel.
Comparing Verboseness for Finite Automata and Turing Machines.
Theory of Computing Systems, 37(1):95-109, 2004.
A language is called (m,n)-verbose if there exists a Turing machine that enumerates for any n words at most m possibilities for their characteristic string. We compare this notion to (m,n)-fa-verboseness, where instead of a Turing machine a finite automaton is used. Using a new structural diagonalisation method, where finite automata trick Turing machines, we prove that all (m,n)-verbose languages are (h,k)-verbose, iff all (m,n)-fa-verbose languages are (h,k)-fa-verbose. In other words, Turing machines and finite automata behave in exactly the same way with respect to inclusion of verboseness classes. This identical behaviour implies that the Nonspeedup Theorem also holds for finite automata. As an application of the theoretical framework, we prove a lower bound on the number of bits needed to be communicated to finite automata protocol checkers for nonregular protocols.
Strahlende Präsentationen in LaTeX.
Die TeXnische Komödie, 2/04:54-80, 2004. In German.
Die LaTeX-Klasse beamer dient dem Erstellen von Prsentationen mittels LaTeX, pdfLaTeX oder LyX. Wie der Name der Klasse schon verrät, will beamer es insbesondere leicht machen, Beamervorträge zu erzeugen; klassische Folienvorträge werden aber genauso unterstützt. Soweit möglich baut beamer auf dem Benutzer vertrauten LaTeX-Befehlen auf, wie \section zur Erzeugung von Abschnitten oder enumerate zur Erzeugung von Aufzählungen. Beamervorträge profitieren oft von der Verwendung von Overlays und deren Erstellung wird durch eine einfache, aber mächtige Syntaxerweiterung erleichtert. Die Klasse folgt der LaTeX-Philosophie der Trennung von Form und Inhalt: Aus einer einzigen Quelle lassen sich durch Variation von Layout-Parametern verschieden gestaltete Vorträge erzeugen. Vorgefertigte Layouts reichen von klassisch-sachlich bis modisch-effektvoll, folgen aber immer dem Bauhausgrundsatz "form follows function".
Über strukturelle Gemeinsamkeiten der Aufzählbarkeitsklassen von Turingmaschinen und endlichen Automaten.
In Ausgezeichnete Informatikdissertationen 2003, Lecture Notes in Informatics, pp. 189-198. Springer, 2004.
Der Beitrag enthält eine Zusammenfassung der Dissertation On Structural Similarities of Finite Automata and Turing Machine Enumerability Classes.
A Logspace Approximation Scheme for the Shortest Path Problem for Graphs with Bounded Independence Number.
In Proceedings of the 21st International Symposium on Theoretical Aspects of Computer Science (STACS 2004), Volume 2996 of Lecture Notes in Computer Science, pp. 326-337. Springer, 2004.
How difficult is it to find a path between two vertices in finite directed graphs whose independence number is bounded by some constant k? The independence number of a graph is the largest number of vertices that can be picked such that there is no edge between any two of them. The complexity of this problem depends on the exact question we ask: Do we only wish to tell whether a path exists? Do we also wish to construct such a path? Are we required to construct the shortest path? Concerning the first question, it is known that the reachability problem is first-order definable for all k. In contrast, the corresponding reachability problems for many other types of finite graphs, including dags and trees, are not first-order definable. Concerning the second question, in this paper it is shown that we cannot only construct paths in logarithmic space, but that there even exists a logspace approximation scheme for constructing them. In contrast, for directed graphs, undirected graphs, and dags we cannot construct paths in logarithmic space (let alone approximate the shortest one), unless complexity class collapses occur. Concerning the third question, it is shown that even telling whether the shortest path has a certain length is NL-complete and thus as difficult as for arbitrary directed graphs.
Computation with Absolutely No Space Overhead.
In Proceedings of the Seventh International Conference on Developments in Language Theory (DLT 2003), Volume 2710 of Lecture Notes in Computer Science, pp. 325-336. Springer, 2003.
We study Turing machines that are allowed absolutely no space overhead. The only work space the machines have, beyond the fixed amount of memory implicit in their finite-state control, is that which they can create by cannibalizing the input bits' own space. This model more closely reflects the fixed-sized memory of real computers than does the standard complexity-theoretic model of linear space. Though some context-sensitive languages cannot be accepted by such machines, we show that subclasses of the context-free languages can even be accepted in polynomial time with absolutely no space overhead.
Partial Information Classes.
SIGACT News, 34(1):32--46, 2003.
In this survey we present partial information classes, which have been studied under different names and in different contexts in the literature. They are defined in terms of partial information algorithms. Such algorithms take a word tuple as input and yield a small set of possibilities for its characteristic string as output. We define a unified framework for the study of partial information classes and show how previous notions fit into the framework. The framework allows us to study the relationship of a large variety of partial information classes in a uniform way. We survey how partial information classes are related to other complexity theoretic notions like advice classes, lowness, bi-immunity, NP-completeness, and decidability.
Logspace Optimisation Problems and Their Approximation Properties.
This paper introduces logspace optimisation problems as an analogue of the widely studied polynomial-time optimisation problems. Similarly to the polynomial-time setting, logspace optimisation problems can have vastly different approximation properties, even though the underlying decision problems have the same computational complexity. In order to study the approximability of logspace optimisation problems in a systematic way, polynomial-time approximation classes are transferred to logarithmic space. Appropriate reductions are defined and optimisation problems are presented that are complete for these classes. It is shown that under the assumption L != NL some natural logspace optimisation problems cannot be approximated with a constant ratio; some can be approximated with a constant ratio, but do not permit a logspace approximation scheme; and some have a logspace approximation scheme, but cannot be solved in logarithmic space. An example of a problem of the latter type is the problem of finding the shortest path between two vertices of a tournament.
On Structural Similarities of Finite Automata and Turing Machine Enumerability Classes.
Wissenschaft und Technik Verlag, 2003.
Supervised by: Dirk Siefkes, Johannes Köbler. Disseration zum Dr. rer. nat., Technische Universität Berlin
There are different ways of measuring the complexity of functions that map words to words. Well-known measures are time and space complexity. Enumerability is another possible measure. It is used in recursion theory, where it plays a key role in bounded query theory, but also in resource-bounded complexity theory, especially in connection with nonuniform computations. This dissertation transfers enumerability to automata theory. It is shown that enumerability behaves similarly in recursion theory and in automata theory, but differently in complexity theory.
The enumerability of a function f is the smallest m such that there exists an m-enumerator for f. An m-enumerator is a machine that produces, for every input word w, a set of up to m possibilities for f(w). By varying the parameter m and the class of allowed enumerators, different enumerability classes can be defined. In recursion theory, one allows arbitrary Turing machines as enumerators; in automata theory, only finite automata. A deep structural result that holds both for finite automata and for Turing machine enumerability is the following cross product theorem: if f × g is (n + m)-enumerable, then either f is n-enumerable or g is m-enumerable. In contrast, this theorem does not hold for polynomial-time enumerability.
Enumerability can be used to quantify the difficulty of a language A by asking how difficult it is to enumerate its n-fold characteristic function \chiAn and cardinality function #An. A language is (n, m)-verbose if \chiAn is m-enumerable. The inclusion structures of Turing machine and of finite automata verboseness classes are identical: all (n, m)-Turing-verbose languages are (h, k)-Turing-verbose iff all (n, m)-fa-verbose languages are (h, k)-fa-verbose. The structure of polynomial-time verboseness classes is different.
The enumerability of #An has been studied in detail in recursion theory. Kummer's cardinality theorem states that if #An is n-enumerable by a Turing machine, then A must be recursive. Evidence is gathered that this theorem also holds for finite automata: it is shown that the nonspeedup theorem, the cardinality theorem for two words, and the restricted cardinality theorem all hold for finite automata. The cardinality theorem does not hold for polynomial-time computations.
The central proofs rely on two proof techniques that promise to be applicable in other situations as well: generic proofs and branch diagonalisation. Generic proofs use elementary definitions, a concept from logic, to define enumerators in terms of other enumerators. They can be instantiated for all computational models that are closed under elementary definitions. Examples of such models are finite automata, but also Presburger arithmetic and ordinal number arithmetic. The second technique is a new diagonalisation method, where machines are tricked on codes of diagonalisation decision sequences, rather than on codes of machines. Branch diagonalisation is not applicable universally, but where it is applicable, it can be used to diagonalise against Turing machines, using only finite automata.
Results on enumerability classes have applications in unrelated areas, like finite automata protocol testing, classification problems where examples are provided, and separability. An intriguing example of such an application is the following theorem: if there exist regular supersets of A × A, A × \bar A, and \bar A × \bar A whose intersection is empty, then A is regular.
Query Complexity of Membership Comparable Sets.
Theoretical Computer Science, 302(1-3):467-474, 2003.
This paper investigates how many queries to k-membership comparable sets are needed in order to decide all (k+1)-membership comparable sets. For k >= 2 this query complexity is at least linear and at most cubic. As a corollary we obtain that more languages are O(log n)-membership comparable than truth-table reducible to P-selective sets.
Weak Cardinality Theorems for First-Order Logic.
Kummer's cardinality theorem states that a language is recursive if a Turing machine can exclude for any n words one of the n + 1 possibilities for the number of words in the language. It is known that this theorem does not hold for polynomial-time computations, but there is evidence that it holds for finite automata: at least weak cardinality theorems hold for finite automata. This paper shows that some of the recursion-theoretic and automata-theoretic weak cardinality theorems are instantiations of purely logical theorems. Apart from unifying previous results in a single framework, the logical approach allows us to prove new theorems for other computational models. For example, weak cardinality theorems hold for Presburger arithmetic.
In Proceedings of the 14th International Symposium on Fundamentals of Computation Theory (FCT 2003), Volume 2751 of Lecture Notes in Computer Science, pp. 400-411. Springer, 2003.
Computation with Absolutely No Overhead.
We study Turing machines that are allowed absolutely no space overhead. The only work space the machines have, beyond the fixed amount of memory implicit in their finite-state control, is that which they can create by cannibalizing the input bits' own space. This model more closely reflects the fixed-sized memory of real computers than does the standard complexity-theoretic model of linear space. Though some context-sensitive languages cannot be accepted by such machines, we show that a large subclasses of the context-free languages can even be accepted in polynomial time with absolutely no space overhead.
On Reachability in Graphs with Bounded Independence Number.
In Proceedings of the Eighth Annual International Computing and Combinatorics Conference (COCOON 2002), Volume 2387 of Lecture Notes in Computer Science, pp. 554--563. Springer, 2002.
We study the reachability problem for finite directed graphs whose independence number is bounded by some constant k. We show that this problem is first-order definable for all k. In contrast, the reachability problem for many other types of finite graphs, including dags and trees, is not first- order definable. We also study the reachability problem for succinctly represented graphs with independence number at most k. We also study the succinct version of the problem and show that it is \PiP2-complete for all k.
We study whether sets inside NP can be reduced to sets with low information content but possibly still high computational complexity. Examples of sets with low information content are tally sets, sparse sets, P-selective sets and membership comparable sets. For the graph automorphism and isomorphism problems GA and GI, for the directed graph reachability problem GAP, for the determinant function det, and for logspace self-reducible languages we establish the following results:
If GA is \leptt-reducible to a P-selective set, then GA \in P.
If GI is O(log)-membership comparable, then GI \in RP.
If GAP is logspace O(1)-membership comparable, then GAP \in L.
If det is \lelogT-reducible to an L-selective set, then det \in FL.
If A is logspace self-reducible and \lelogT-reducible to an L-selective set, then A \in L.
The last result is a strong logspace version of the characterisation of P as the class of self-reducible P-selective languages. As P and NL have logspace self-reducible complete sets, it also establishes a logspace analogue of the conjecture that if SAT is \lepT-reducible to a P-selective set, then SAT \in P.
A Note on the Power of Extra Queries to Membership Comparable Sets.
Technical report ECCC-TR02-044, Schriftenreihe der Institute für Informatik/Mathematik der Universität zu Lübeck, 2002.
A language is called k-membership comparable if there exists a polynomial-time algorithm that excludes for any k words one of the 2k possibilities for their characteristic string. It is known that all membership comparable languages can be reduced to some P-selective language with polynomially many adaptive queries. We show however that for all k there exists a (k+1)-membership comparable set that is neither truth-table reducible nor sublinear Turing reducible to any k-membership comparable set. In particular, for all k > 2 the number of adaptive queries to P-selective sets necessary to decide all k-membership comparable sets is Omega(n) and O(n3). As this shows that the truth-table closure of P-sel is a proper subset of P-mc(log), we get a proof of Sivakumar's conjecture that O(log)-membership comparability is a more general notion than truth-table reducibility to P-sel.
In Proceedings of the 19th International Symposium on Theoretical Aspects of Computer Science (STACS 2002), Volume 2285 of Lecture Notes in Computer Science, pp. 465-476. Springer, 2002.
Towards a Cardinality Theorem for Finite Automata.
In Proceedings of the 27th International Symposium on Mathematical Foundations of Computer Science (MFCS 2002), Volume 2420 of Lecture Notes in Computer Science, pp. 625-636. Springer, 2002.
Kummer's cardinality theorem states that a language is recursive if a Turing machine can exclude for any n words one of the n+1 possibilities for the number of words in the language. This paper gathers evidence that the cardinality theorem might also hold for finite automata. Three reasons are given. First, Beigel's nonspeedup theorem also holds for finite automata. Second, the cardinality theorem for finite automata holds for n=2. Third, the restricted cardinality theorem for finite automata holds for all n.
Closure of Polynomial Time Partial Information Classes under Polynomial Time Reductions.
Polynomial time partial information classes are extensions of the class P of languages decidable in polynomial time. A partial information algorithm for a language A computes, for fixed n, on input of words x1, ..., xn a set P of bitstrings, called a pool, such that \chiA(x1, ..., xn) \in P, where P is chosen from a family D of pools. A language A is in P[D], if there is a polynomial time partial information algorithm which for all inputs (x1, ..., xn) outputs a pool P in D with \chiA(x1, ..., xn) \in P. Many extensions of P studied in the literature, including approximable languages, cheatability, p-selectivity and frequency computations, form a class P[D] for an appropriate family D. We characterise those families D for which P[D] is closed under certain polynomial time reductions, namely bounded truth-table, truth-table, and Turing reductions. We also treat positive reductions. A class P[D] is presented which strictly contains the class P-sel of p-selective languages and is closed under positive truth-table reductions.
A Note on the Complexity of the Reachability Problem for Tournaments.
Deciding whether a vertex in a graph is reachable from another vertex has been studied intensively in complexity theory and is well understood. For common types of graphs like directed graphs, undirected graphs, dags or trees it takes a (possibly nondeterministic) logspace machine to decide the reachability problem, and the succinct versions of these problems (which often arise in hardware design) are all PSPACE-complete. In this paper we study tournaments, which are directed graphs with exactly one edge between any two vertices. We show that the tournament reachability problem is first order definable and that its succinct version is \PiP2-complete.
Schnelle Löser für periodische Integralgleichungen mittels Matrixkompression und Mehrgitteriteration.
Technische Universität Berlin, 2001. Diplomarbeit Mathematik
In this thesis we study fast solvers for a class of periodic integral equations. Such integral equations arise in a number applications, including the solution of Dirichlet problems and the biharmonic equation. The integrals employed in the integral equations under consideration are convolutions where a smooth distortion is allowed. Provided the fourier coefficients of the convolution kernels satisfy appropriate properties, the integral equations have unique solutions. Provided such a solution exists, we show how it can be found quickly. Here, "quickly" means with work proportional to the work needed to perform a fast fourier transformation. Using parallel hardware even logarithmic runtime can be achieved. To find solutions quickly, we proceed in three steps. First, all input functions are dicretised. Second, a spectral Galerkin method is applied to the resulting equations. The Galerkin method results in a large, but compressable system matrix. Third, the matrix compression allows us to use a multi-grid iteration. The iteration yields an asymptotically optimal approximation of the desired solution.
Klaus Didrich, Wolfgang Grieskamp, Florian Schintke, Till Tantau, Baltasar Trancón-y-Widemann:
Reflections in Opal - Meta Information in a Functional Programming Language.
In Proceedings of the 11th International Workshop on Implementation of Functional Languages, Lochem, The Netherlands, September 1999, (IFL'99), Selected Papers, Volume 1868 of Lecture Notes in Computer Science, pp. 146--164. Springer, 2000.
We report on an extension of the Opal system that allows the use of reflections. Using reflections, a programmer can query information like the type of an object at runtime. The type can in turn be queried for properties like the constructor and deconstructor functions, and the resulting reflected functions can be evaluated. These facilities can be used for generic meta-programming. We describe the reflection interface of Opal and its applications, and sketch the implementation. For an existing language implementation like Opal's the extension by a reflection facility is challenging: in a statically typed language the management of runtime type information seems to be an alien objective. However, it turns out that runtime type information can be incorporated in an elegant way by a source-level transformation and an appropriate set of library modules. We show how this transformation can be done without changing the Opal core system and causing runtime overhead only where reflections are actually used.
On the Power of Extra Queries to Selective Languages.
A language is selective if there exists a selection algorithm for it. Such an algorithm selects from any two words one, which is an element of the language whenever at least one of them is. Restricting the complexity of selection algorithms yields different selectivity classes like the P-selective or the semirecursive (i.e. recursively selective) languages. A language is supportive if k queries to the language are more powerful than k-1 queries for every k. Recently, Beigel et al. (J. of Symb. Logic, 65(1):1-18, 2000) proved a powerful recursion theoretic theorem: A semirecursive language is supportive iff it is nonrecursive. For restricted computational models like polynomial time this theorem does not hold in this form. Our main result states that for any reasonable computational model a selective language is supportive iff it is not cheatable. Beigel et al.'s result is a corollary of this general theorem since `recursively cheatable' languages are recursive by Beigel's Nonspeedup Theorem. Our proof is based on a partial information analysis (see Nickelsen, STACS 97, LNCS 1200, pp. 307-318) of the involved languages: We establish matching upper and lower bounds for the partial information complexity of the equivalence and reduction closures of selective languages. From this we derive the main results as these bounds differ for different k.
We give four applications of our main theorem and the proof technique. Firstly, the relation EPk-tt(P-sel) \notsubseteq RP(k-1)-tt(P-sel) proven by Hemaspaandra et al. (Theor. Comput. Sci., 155(2):447-457, 1996) still holds if we relativise only the right hand side. Secondly, we settle an open problem from the same paper: Equivalence to a P-selective language with k serial queries cannot generally be replaced by a reduction using less than 2k-1 parallel queries. Thirdly, the k-truth-table reduction closures of selectivity classes are (m,n)-verbose iff every walk on the n-dimensional hypercube with transition counts at most k visits at most m bitstrings. Lastly, these reduction closures are (m,n)-recursive iff every such walk is contained in a closed ball of radius n-m.
|
CommonCrawl
|
Search results for: T. W. Wang
Items from 1 to 20 out of 259 results
Search for a heavy pseudoscalar boson decaying to a Z and a Higgs boson at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te
A. M. Sirunyan, A. Tumasyan, W. Adam, F. Ambrogi, more
The European Physical Journal C > 2019 > 79 > 7 > 1-27
A search is presented for a heavy pseudoscalar boson $$\text {A}$$ A decaying to a Z boson and a Higgs boson with mass of 125$$\,\text {GeV}$$ GeV . In the final state considered, the Higgs boson decays to a bottom quark and antiquark, and the Z boson decays either into a pair of electrons, muons, or neutrinos. The analysis is performed using a data sample corresponding to an integrated luminosity...
Search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at 13 TeV
The CMS collaboration, A. M. Sirunyan, A. Tumasyan, W. Adam, more
Journal of High Energy Physics > 2019 > 2019 > 6 > 1-34
Abstract Results are reported of a search for supersymmetry in final states with photons and missing transverse momentum in proton-proton collisions at the LHC. The data sample corresponds to an integrated luminosity of 35.9 fb−1 collected at a center-of-mass energy of 13 TeV using the CMS detector. The results are interpreted in the context of models of gauge-mediated supersymmetry breaking. Production...
Search for the associated production of the Higgs boson and a vector boson in proton-proton collisions at s $$ \sqrt{s} $$ = 13 TeV via Higgs boson decays to τ leptons
Abstract A search for the standard model Higgs boson produced in association with a W or a Z boson and decaying to a pair of τ leptons is performed. A data sample of proton-proton collisions collected at s $$ \sqrt{s} $$ = 13 TeV by the CMS experiment at the CERN LHC is used, corresponding to an integrated luminosity of 35.9 fb−1. The signal strength is measured relative to the expectation...
Search for a low-mass τ−τ+ resonance in association with a bottom quark in proton-proton collisions at s $$ \sqrt{s} $$ = 13 TeV
Abstract A general search is presented for a low-mass τ−τ+ resonance produced in association with a bottom quark. The search is based on proton-proton collision data at a center-of-mass energy of 13 TeV collected by the CMS experiment at the LHC, corresponding to an integrated luminosity of 35.9 fb−1. The data are consistent with the standard model expectation. Upper limits at 95% confidence level...
Search for supersymmetry in events with a photon, jets, $$\mathrm {b}$$ b -jets, and missing transverse momentum in proton–proton collisions at 13$$\,\text {Te}\text {V}$$ Te
A search for supersymmetry is presented based on events with at least one photon, jets, and large missing transverse momentum produced in proton–proton collisions at a center-of-mass energy of 13$$\,\text {Te}\text {V}$$ Te . The data correspond to an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 and were recorded at the LHC with the CMS detector in 2016. The analysis characterizes signal-like...
Combined measurements of Higgs boson couplings in proton–proton collisions at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te
Combined measurements of the production and decay rates of the Higgs boson, as well as its couplings to vector bosons and fermions, are presented. The analysis uses the LHC proton–proton collision data set recorded with the CMS detector in 2016 at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te , corresponding to an integrated luminosity of 35.9$${\,\text {fb}^{-1}} $$ fb-1 . The combination is based...
Combinations of single-top-quark production cross-section measurements and |fLVVtb| determinations at s $$ \sqrt{s} $$ = 7 and 8 TeV with the ATLAS and CMS experiments
The ATLAS collaboration, M. Aaboud, G. Aad, B. Abbott, more
Abstract This paper presents the combinations of single-top-quark production cross-section measurements by the ATLAS and CMS Collaborations, using data from LHC proton-proton collisions at s $$ \sqrt{s} $$ = 7 and 8 TeV corresponding to integrated luminosities of 1.17 to 5.1 fb−1 at s $$ \sqrt{s} $$ = 7 TeV and 12.2 to 20.3 fb−1 at s $$ \sqrt{s} $$ = 8 TeV. These combinations...
Measurement of inclusive very forward jet cross sections in proton-lead collisions at s N N $$ \sqrt{s_{\mathrm{NN}}} $$ = 5.02 TeV
Abstract Measurements of differential cross sections for inclusive very forward jet production in proton-lead collisions as a function of jet energy are presented. The data were collected with the CMS experiment at the LHC in the laboratory pseudorapidity range −6.6 < η < −5.2. Asymmetric beam energies of 4 TeV for protons and 1.58 TeV per nucleon for Pb nuclei were used, corresponding to a...
Measurement of the energy density as a function of pseudorapidity in proton–proton collisions at $$\sqrt{s} =13\,\text {TeV} $$ s=13TeV
A measurement of the energy density in proton–proton collisions at a centre-of-mass energy of $$\sqrt{s} =13$$ s=13 $$\,\text {TeV}$$ TeV is presented. The data have been recorded with the CMS experiment at the LHC during low luminosity operations in 2015. The energy density is studied as a function of pseudorapidity in the ranges $$-\,6.6<\eta <-\,5.2$$ -6.6<η<-5.2 and $$3.15<|\eta...
Measurement of the $${\mathrm {t}\overline{\mathrm {t}}}$$ tt¯ production cross section, the top quark mass, and the strong coupling constant using dilepton events in pp collisions at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te
A measurement of the top quark–antiquark pair production cross section $$\sigma _{\mathrm {t}\overline{\mathrm {t}}} $$ σtt¯ in proton–proton collisions at a centre-of-mass energy of 13$$\,\text {Te}\text {V}$$ Te is presented. The data correspond to an integrated luminosity of $$35.9{\,\text {fb}^{-1}} $$ 35.9fb-1 , recorded by the CMS experiment at the CERN LHC in 2016. Dilepton events ($$\mathrm...
Search for vector-like quarks in events with two oppositely charged leptons and jets in proton–proton collisions at $$\sqrt{s} = 13\,\text {Te}\text {V} $$ s=13Te
A search for the pair production of heavy vector-like partners $$\mathrm {T}$$ T and $$\mathrm {B}$$ B of the top and bottom quarks has been performed by the CMS experiment at the CERN LHC using proton–proton collisions at $$\sqrt{s} = 13\,\text {Te}\text {V} $$ s=13Te . The data sample was collected in 2016 and corresponds to an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 . Final states...
Measurements of the pp → WZ inclusive and differential production cross sections and constraints on charged anomalous triple gauge couplings at s $$ \sqrt{s} $$ = 13 TeV
Abstract The WZ production cross section is measured in proton-proton collisions at a centre-of-mass energy s $$ \sqrt{s} $$ = 13 TeV using data collected with the CMS detector, corresponding to an integrated luminosity of 35.9 fb−1. The inclusive cross section is measured to be σtot(pp → WZ) = 48.09 − 0.96+ 1.00 (stat) − 0.37+ 0.44 (theo) − 2.17+ 2.39 (syst) ± 1.39(lum) pb, resulting in...
Search for nonresonant Higgs boson pair production in the b b ¯ b b ¯ $$ \mathrm{b}\overline{\mathrm{b}}\mathrm{b}\overline{\mathrm{b}} $$ final state at s $$ \sqrt{s} $$ = 13 TeV
Abstract Results of a search for nonresonant production of Higgs boson pairs, with each Higgs boson decaying to a b b ¯ $$ \mathrm{b}\overline{\mathrm{b}} $$ pair, are presented. This search uses data from proton-proton collisions at a centre-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 35.9 fb−1, collected by the CMS detector at the LHC. No signal is observed, and...
Search for contact interactions and large extra dimensions in the dilepton mass spectra from proton-proton collisions at s = 13 $$ \sqrt{s}=13 $$ TeV
Abstract A search for nonresonant excesses in the invariant mass spectra of electron and muon pairs is presented. The analysis is based on data from proton-proton collisions at a center-of-mass energy of 13 TeV recorded by the CMS experiment in 2016, corresponding to a total integrated luminosity of 36 fb−1. No significant deviation from the standard model is observed. Limits are set at 95% confidence...
Measurement of the top quark mass in the all-jets final state at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV and combination with the lepton+jets channel
A top quark mass measurement is performed using $$35.9{\,\text {fb}^{-1}} $$ 35.9fb-1 of LHC proton–proton collision data collected with the CMS detector at $$\sqrt{s}=13\,\text {TeV} $$ s=13TeV . The measurement uses the $${\mathrm {t}\overline{\mathrm {t}}}$$ tt¯ all-jets final state. A kinematic fit is performed to reconstruct the decay of the $${\mathrm {t}\overline{\mathrm {t}}}$$ tt¯ system...
Search for resonant production of second-generation sleptons with same-sign dimuon events in proton–proton collisions at $$\sqrt{s} = 13\,\text {TeV} $$ s=13TeV
A search is presented for resonant production of second-generation sleptons ($$\widetilde{\mu } _{\mathrm {L}}$$ μ~L , $$\widetilde{\nu }_{\mu }$$ ν~μ ) via the R-parity-violating coupling $${\lambda ^{\prime }_{211}}$$ λ211′ to quarks, in events with two same-sign muons and at least two jets in the final state. The smuon (muon sneutrino) is expected to decay into a muon and a neutralino (chargino),...
Search for resonant t t ¯ $$ \mathrm{t}\overline{\mathrm{t}} $$ production in proton-proton collisions at s = 13 $$ \sqrt{s}=13 $$ TeV
Abstract A search for a heavy resonance decaying into a top quark and antiquark t t ¯ $$ \left(\mathrm{t}\overline{\mathrm{t}}\right) $$ pair is performed using proton-proton collisions at s = 13 $$ \sqrt{s}=13 $$ TeV. The search uses the data set collected with the CMS detector in 2016, which corresponds to an integrated luminosity of 35.9 fb−1. The analysis considers three exclusive...
Search for excited leptons in ℓℓγ final states in proton-proton collisions at s = 13 $$ \sqrt{\mathrm{s}}=13 $$ TeV
Abstract A search is presented for excited electrons and muons in ℓℓγ final states at the LHC. The search is based on a data sample corresponding to an integrated luminosity of 35.9 fb−1 of proton-proton collisions at a center-of-mass energy of 13 TeV, collected with the CMS detector in 2016. This is the first search for excited leptons at s $$ \sqrt{s} $$ = 13 TeV. The observation is consistent...
Search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks in proton–proton collisions at $$\sqrt{s}=13\,\text {Te}\text {V} $$ s=13Te
A search for dark matter produced in association with a Higgs boson decaying to a pair of bottom quarks is performed in proton–proton collisions at a center-of-mass energy of 13$$\,\text {Te}\text {V}$$ Te collected with the CMS detector at the LHC. The analyzed data sample corresponds to an integrated luminosity of 35.9$$\,\text {fb}^{-1}$$ fb-1 . The signal is characterized by a large missing transverse...
Measurement of exclusive $$\mathrm {\Upsilon }$$ Υ photoproduction from protons in $$\mathrm {p}$$ p Pb collisions at $$\sqrt{\smash [b]{s_{_{\mathrm {NN}}}}} = 5.02\,\text {TeV} $$ sNN=5.02TeV
The exclusive photoproduction of $$\mathrm {\Upsilon }\mathrm {(nS)} $$ Υ(nS) meson states from protons, $$\gamma \mathrm {p} \rightarrow \mathrm {\Upsilon }\mathrm {(nS)} \,\mathrm {p}$$ γp→Υ(nS)p (with $$\mathrm {n}=1,2,3$$ n=1,2,3 ), is studied in ultraperipheral $$\mathrm {p}$$ p Pb collisions at a centre-of-mass energy per nucleon pair of $$\sqrt{\smash [b]{s_{_{\mathrm {NN}}}}} = 5.02\,\text...
Last 3 years (211)
CARBON NANOTUBES (5)
BISPHOSPHONATES (2)
CROSS SECTION (2)
DENDRITIC CELLS (2)
JETS (2)
LIPOSOMES (2)
QCD (2)
VACCINE DELIVERY (2)
ΓΔ T CELLS (2)
ALPHA-S (1)
B HADRONS (1)
B2G (1)
BIOCOMPATIBILITY (1)
BLOOD-BRAIN BARRIER (BBB) (1)
BLOOD–BRAIN BARRIER (1)
BRAIN DELIVERY (1)
BRAIN DRUG DELIVERY (1)
CANCER CELLS (1)
CANCER VACCINES (1)
CARBON NANOTUBES (CNTS) (1)
CELL PENETRATING PEPTIDE (1)
CELL SURVIVAL (1)
CELL TOXICITY (1)
CO2 EMISSION (1)
CROP RESIDUE (1)
DIBOSON (1)
DIFFRACTION (1)
DUAL‐IMAGING (1)
ELECTROWEAK INTERACTION (1)
EXPERIMENTAL RESULTS (1)
FUNCTIONALISATION (1)
FUNCTIONALIZATION (1)
FUNDC1 (1)
HADRON SPECTROSCOPY (1)
HADRONIC (1)
HEAVY FLAVOUR (1)
HEAVY FLAVOUR SPECTROSCOPY (1)
HEDGE (1)
HYDROGEN PEROXIDE (1)
INTEGRIN TARGETING (1)
LARYNGEAL CANCER (1)
MICROBIAL BIOMASS (1)
MULTI-PHOTON LUMINESCENCE MICROSCOPY (1)
MULTI-WALLED CARBON NANOTUBES (1)
N MINERALIZATION (1)
N2O EMISSION (1)
ORGAN BIODISTRIBUTION (1)
PDT (1)
PHOTOCHEMICAL INTERNALISATION (1)
PHOTODYNAMIC THERAPY (1)
PHOTOPRODUCTION (1)
PPB (1)
PR CHINA (1)
SENSITISER (1)
SHORTENING (1)
SIRNA (1)
SMP (1)
SOIL CONSERVATION (1)
SOIL PROPERTIES (1)
SPECT/CT (1)
SPECT/CT IMAGING (1)
STANDARD MODEL (1)
STANDARD MODEL PHYSICS (1)
STRONG COUPLING CONSTANT (1)
TAT PEPTIDE (1)
TERRACING (1)
THREE‐GORGES AREA (1)
TOP QUARK CROSS SECTION (1)
TOP QUARK MASS (1)
UPC (1)
VECTOR BOSON (1)
VECTOR BOSON FUSION (1)
VH (1)
WFPS (1)
Z BOSON (1)
Data set
Springer (247)
Elsevier (10)
Journal of High Energy Physics (174)
The European Physical Journal C (69)
Journal of Controlled Release (6)
Agroforestry Systems (1)
Clinical and Translational Oncology (1)
Land Degradation & Development (1)
Metallurgical and Materials Transactions A (1)
Photodiagnosis and Photodynamic Therapy (1)
|
CommonCrawl
|
Fair colorful k-center clustering
Full Length Paper
Xinrui Jia1,
Kshiteej Sheth1 &
Ola Svensson1
Mathematical Programming volume 192, pages 339–360 (2022)Cite this article
An instance of colorful k-center consists of points in a metric space that are colored red or blue, along with an integer k and a coverage requirement for each color. The goal is to find the smallest radius \(\rho \) such that there exist balls of radius \(\rho \) around k of the points that meet the coverage requirements. The motivation behind this problem is twofold. First, from fairness considerations: each color/group should receive a similar service guarantee, and second, from the algorithmic challenges it poses: this problem combines the difficulties of clustering along with the subset-sum problem. In particular, we show that this combination results in strong integrality gap lower bounds for several natural linear programming relaxations. Our main result is an efficient approximation algorithm that overcomes these difficulties to achieve an approximation guarantee of 3, nearly matching the tight approximation guarantee of 2 for the classical k-center problem which this problem generalizes. algorithms either opened more than k centers or only worked in the special case when the input points are in the plane.
In the colorful k-center problem introduced in [5], we are given a set of n points P in a metric space partitioned into a set R of red points and a set B of blue points, along with parameters k, r, and b.
The goal is to find a set of k centers \(C \subseteq P\) that minimizes \(\rho \) so that balls of radius \(\rho \) around each point in C cover at least r red points and at least b blue points.
More generally, the points can be partitioned into \(\omega \) color classes \(\mathcal {C}_1, \dots , \mathcal {C}_{\omega }\), with coverage requirements \(p_1, \dots , p_{\omega }\). To keep the exposition of our ideas as clean as possible, we concentrate the bulk of our discussion to the version with two colors. In Sect. 3 we show how our algorithm can be generalized for \(\omega \) color classes with an exponential dependence on \(\omega \) in the running time in a rather straightforward way, thus getting a polynomial time algorithm for constant \(\omega \).
This generalization of the classic k-center problem has applications in situations where fairness is a concern. For example, if a telecommunications company is required to provide service to at least 90% of the people in a country, it would be cost effective to only provide service in densely populated areas. This is at odds with the ideal that at least some people in every community should receive service. In the absence of color classes, an approximation algorithm could be "unfair" to some groups by completely considering them as outliers. The inception of fairness in clustering can be found in the recent paper [8] (see also [1, 4]), which uses a related but incomparable notion of fairness. Their notion of fairness requires each individual cluster to have a balanced number of points from each color class, which leads to very different algorithmic considerations and is motivated by other applications, such as "feature engineering".
The other motive for studying the colorful k-center problem derives from the algorithmic challenges it poses. One can observe that it generalizes the k-center problem with outliers, which is equivalent to only having red points and needing to cover at least r of them. This outlier version is already more challenging than the classic k-center problem: only recent results give tight 2-approximation algorithms [6, 12], improving upon the 3-approximation guarantee of [7]. In contrast, such algorithms for the classic k-center problem have been known since the '80s[10, 13]. That the approximation guarantee of 2 is tight, even for classic k-center, was proved in [14].
At the same time, a special case of subset-sum with polynomial-sized numbers is embedded within the colorful k-center problem. To see this, consider n numbers \(a_1, \ldots , a_n\) and let \(A = \sum _{i=1}^n a_i\). Construct an instance of the colorful k-center problem with \(r = k\cdot A + A/2\), \(b = k\cdot A - A/2\), and for every \(i\in \{1, \ldots , n\}\), a ball of radius one containing \(A+a_i\) red points and \(A- a_i\) blue points. These balls are assumed to be far apart so that any single ball that covers two of these balls must have a very large radius. It is easy to see that the constructed colorful k-center instance has a solution of radius one if and only if there is a size k subset of the n numbers whose sum exactly equals A/2.
We use this connection to subset-sum to show that the standard linear programming (LP) relaxation of the colorful k-center problem has an unbounded integrality gap even after a linear number of rounds of the powerful Lasserre/Sum-of-Squares hierarchy (see Sect. 4.1). We remark that the standard linear programming relaxation gives a 2-approximation algorithm for the outliers version even without applying lift-and-project methods. Another natural approach for strengthening the standard linear programming relaxation is to add flow-based inequalities specially designed to solve subset-sum problems. However, in Sect. 4.2, we prove that they do not improve the integrality gap due to the clustering feature of the problem. This shows that clustering and the subset-sum problem are intricately related in colorful k-center. This interplay makes the problem more complex and prior to our work only a randomized constant-factor approximation algorithm was known when the points are in \(\mathbb {R}^2\) with an approximation guarantee greater than 6 [5].
Our main result overcomes these difficulties and we give a nearly tight approximation guarantee:
There is a 3-approximation algorithm for the colorful k-center problem.
As aforementioned, our techniques can be easily extended to a constant number of color classes but we restrict the discussion here to two colors.
On a very high level, our algorithm manages to decouple the clustering and the subset-sum aspects. First, our algorithm guesses certain centers of the optimal solution that it then uses to partition the point set into a "dense" part \(P_d\) and a "sparse" part \(P_s\). The dense part is clustered using a subset-sum instance while the sparse set is clustered using the techniques of Bandyapadhyay, Inamdar, Pai, and Varadarajan [5] (see Sect. 2.1). Specifically, we use the pseudo-approximation of [5] that satisfies the coverage requirements using \(k+1\) balls of at most twice the optimal radius.
While our approximation guarantee is nearly tight, it remains an interesting open problem to give a 2-approximation algorithm or to show that the ratio 3 is tight. One possible direction is to understand the strength of the relaxation obtained by combining the Lasserre/Sum-of-Squares hierarchy with the flow constraints. While we show that individually they do not improve the integrality gap, we believe that their combination can lead to a strong relaxation. Independent work Independently and concurrently to our work, authors in [2] obtained a 4-approximation algorithm for the colorful k-center problem with \(\omega = O(1)\) and running time \(|P|^{O(\omega )}\) using different techniques than the ones described in this work. Furthermore they show that, assuming \(P\ne NP\), if \(\omega \) is allowed to be unbounded then the colorful k-center problem admits no algorithm guaranteeing a finite approximation. They also show that assuming the Exponential Time Hypothesis, colorful k-center is inapproximable if \(\omega \) grows faster than \(\log n\).
Organization We begin by giving some notation and definitions and describing the pseudo-approximation algorithm in [5]. In fact, we then describe a 2-approximation algorithm on a certain class of instances that are well-separated, and the 3-approximation follows almost immediately. This 2-approximation proceeds in two phases: the first is dedicated to the guessing of certain centers, while the second processes the dense and sparse sets. Section 3 explains the generalization to \(\omega \) color classes. In Sect. 4 we present our integrality gaps under the Sum-of-Squares hierarchy and additional constraints deriving from a flow network to solve subset-sums.
A 3-approximation algorithm
In this section we present our 3-approximation algorithm. We briefly describe the pseudo-approximation algorithm of Bandyapadhyay et al. [5] since we use it as a subroutine in our algorithm.
Notation We assume that our problem instance is normalized to have an optimal radius of one and we refer to the set of centers in an optimal solution as OPT. The set of all points at distance at most \(\rho \) from a point j is denoted by \(\mathcal {B}(j, \rho )\) and we refer to this set as a ball of radius \(\rho \) at j. We write \(\mathcal {B}(j)\) for \(\mathcal {B}(j,1)\). By a ball of OPT we mean \(\mathcal {B}(j)\) for some \(j \in OPT\).
The linear programs used in the pseudo-approximation algorithm
The pseudo-approximation algorithm
The algorithm of Bandyapadhyay et al. [5] first guesses the optimal radius for the instance (there are at most \(O(n^2)\) distinct values the optimal radius can take), which we assume by normalization to be one, and considers the natural LP relaxation LP1 depicted on the left in Fig. 1. The variable \(x_i\) indicates the extent to which point i is fractionally opened as a center and \(z_i\) indicates the extent to which i is covered by centers.
Given a fractional solution to LP1, the algorithm of [5] finds a clustering of the points. The clusters that are produced are of radius two, and with a simple modification (details can be found in Appendix 2), can be made to have a special structure that we call a flower:
For \(j \in P\), a flower centered at j is the set \(\mathcal {F}(j) = \cup _{i\in \mathcal {B}(j)} \mathcal {B}(i)\).
More specifically, given a fractional solution (x, z) to LP1, the clustering algorithm in [5] produces a set of points \(S \subseteq P\) and a cluster \(C_j \subseteq P\) for every \(j\in S\) such that:
The set S is a subset of the points \(\{j\in P : z_j > 0\}\) with positive z-values.
For each \(j\in S\), we have \(C_j \subseteq \mathcal {F}(j)\) and the clusters \(\{C_j\}_{j\in S}\) are pairwise disjoint.
If we let \(r_j = |C_j \cap R|\) and \(b_j = |C_j \cap B|\) for \(j\in S\), then the linear program LP2 (depicted on the right in Fig. 1) has a feasible solution y of value at least r.
As LP2 has only two non-trivial constraints, any extreme point will have at most two variables attaining strictly fractional values. So at most \(k+1\) variables of y are non-zero. The pseudo-approximation of [5] now simply takes those non-zero points as centers. Since each flower is of radius two, this gives a 2-approximation algorithm that opens at most \(k+1\) centers. (Note that, as the clusters \(\{C_j\}_{j\in S}\) are pairwise disjoint, at least b blue points are covered, and at least r red points are covered since the value of the solution is at least r.)
Obtaining a constant-factor approximation algorithm that only opens k centers turns out to be significantly more challenging.
Nevertheless, the above techniques form an important subroutine in our algorithm. Given a fractional solution (x, z) to LP1, we proceed as above to find S and an extreme point to LP2 of value at least r. However, instead of selecting all points with positive y-value, we, in the case of two fractional values, only select the one whose cluster covers more blue points. This gives us a solution of at most k centers whose clusters cover at least b blue points. Furthermore, the number of red points that are covered is at least \(r- \max _{j\in S} r_j\) since we disregarded at most one center. As \(S \subseteq \{j: z_j >0 \}\) (see first property above) and \(C_j \subseteq \mathcal {F}(j)\) (see second property above), we have \(\max _{j\in S} r_j \le \max _{j: z_j > 0} |\mathcal {F}(j) \cap R|\). We summarize the obtained properties in the following lemma.
Lemma 1
Given a fractional solution (x, z) to LP1, there is a polynomial-time algorithm that outputs at most k clusters of radius two that cover at least b blue points and at least \(r - \max _{j: z_j > 0} |\mathcal {F}(j) \cap R|\) red points.
We can thus find a 2-approximate solution that covers sufficiently many blue points but may cover fewer red points than necessary. The idea now is that, if the number of red points in any cluster is not too large, i.e., \(\max _{j: z_j > 0} |\mathcal {F}(j) \cap R|\) is "small", then we can hope to meet the coverage requirements for the red points by increasing the radius around some opened centers. Our algorithm builds on this intuition to get a 2-approximation algorithm using at most k centers for well-separated instances as defined below.
An instance of colorful k-center is well-separated if there does not exist a ball of radius three that covers at least two balls of OPT.
Our main result of this section can now be stated as follows:
There is a 2-approximation algorithm for well-separated instances.
The above theorem immediately implies Theorem 1, i.e., the 3-approximation algorithm for general instances. Indeed, if the instance is not well-separated, we can find a ball of radius three that covers at least two balls of OPT by trying all n points and running the pseudo-approximation of [5] on the remaining uncovered points with \(k-2\) centers. A more formal description of the algorithm is as follows:
In the correct iteration, this gives us at most \(k-1\) centers of radius two, which when combined with the ball of radius three that covers two balls of OPT, is a 3-approximation.
Our algorithm for well-separated instances now proceeds in two phases with the objective of finding a subset of P on which the pseudo-approximation algorithm produces subsets of flowers containing not too many red points. In addition, we maintain a partial solution set of centers (some guessed in the first phase), so that we can expand the radius around these centers to recover the deficit of red points from closing one of the fractional centers.
In this phase we will guess some balls of OPT that can be used to construct a bound on \(\max _{j: z_j > 0} |R\cap \mathcal {F}(j)|\). To achieve this, we define the notion of Gain(p, q) for any point \(p\in P\) and \(q\in \mathcal {B}(p)\).
For any \(p \in P\) and \(q \in \mathcal {B}(p)\), let
$$\begin{aligned} \mathbf{Gain} (p,q):= R \cap \left( \mathcal {F}(q) {\setminus } \mathcal {B}(p) \right) \end{aligned}$$
be the set of red points added to \(\mathcal {B}(p)\) by forming a flower centered at q.
Our algorithm in this phase proceeds by guessing three centers \(c_1, c_2, c_3\) of the optimal solution OPT:
The time it takes to guess \(c_1, c_2\), and \(c_3\) is \(O(n^3)\) and for each \(c_i\) we find the \(q_i\in \mathcal {B}(c_i)\) such that \(|\mathbf{Gain} (c_i,q_i) \cap P_i|\) is maximized by trying all points in \(\mathcal {B}(c_i)\) (at most n many).
For notation, define \(\mathbf{Guess} := \cup _{i=1}^3 \mathcal {B}(c_i)\) and let
$$\begin{aligned} \tau = |\mathbf{Gain} (c_3,q_3)\cap P_3|. \end{aligned}$$
The reason for guessing three points is that later we lose up to \(3\tau \) red points after closing one extra center opened by running the pseudo-approximation on a pre-processed instance (see Lemma 4).
The important properties guaranteed by the first phase are summarized in the following lemma.
Assuming that \(c_1, c_2,\) and \(c_3\) are guessed correctly, we have that
the \(k-3\) balls of radius one in \(OPT {\setminus } \{c_i\}_{i=1}^3\) are contained in \(P_4\) and cover \(b - |B \cap \mathbf{Guess} |\) blue points and \(r - |R \cap \mathbf{Guess} |\) red points; and
the three clusters \(\mathcal {F}(q_1),\mathcal {F}(q_2)\), and \(\mathcal {F}(q_3)\) are contained in \(P {\setminus } P_4\) and cover at least \(|B \cap \mathbf{Guess} |\) blue points and at least \(|R \cap \mathbf{Guess} | + 3 \cdot \tau \) red points.
(1) We claim that the intersection of any ball of \(OPT {\setminus } \{ c_i \}_{i=1}^3\) with \(\mathcal {F}(q_i)\) in P is empty, for all \(1 \le i \le 3\). Then the \(k-3\) balls in \(OPT {\setminus } \{ c_i \}_{i=1}^3\) satisfy the statement of (1). To prove the claim, suppose that there is \(p \in OPT {\setminus } \{ c_i \}_{i=1}^3\) such that \(\mathcal {B}(p) \cap \mathcal {F}(q_i) \ne \emptyset \) for some \(1 \le i \le 3\). Note that \(\mathcal {F}(q_i) = \cup _{j\in \mathcal {B}(q_i)} \mathcal {B}(j)\), so this implies that \(\mathcal {B}(p) \cap \mathcal {B}(q') \ne \emptyset \), for some \(q'\in \mathcal {B}(q_i)\). Hence, a ball of radius three around \(q'\) covers both \(\mathcal {B}(p)\) and \(\mathcal {B}(c_i)\) as \(c_i \in \mathcal {B}(q_i)\), which contradicts that the instance is well-separated.
(2) Note that for \(1 \le i \le 3\), \(\mathcal {B}(c_i) \cup \mathbf{Gain} (c_i, q_i) \subseteq \mathcal {F}(q_i)\), and that \(\mathcal {B}(c_i)\) and Gain(\(c_i, q_i\)) are disjoint. The balls \(\mathcal {B}(c_i)\) cover at least \(|B \cap \mathbf{Guess} |\) blue points and \(|R \cap \mathbf{Guess} |\) red points, while \(\sum _{i=1}^3 |\mathbf{Gain} (c_i, q_i) \cap P_i| \ge 3\tau \). \(\square \)
Throughout this section we assume \(c_1, c_2\), and \(c_3\) have been guessed correctly in Phase I so that the properties of Lemma 2 hold. Furthermore, by the selection and the definition of \(\tau \), we also have
$$\begin{aligned} |\mathbf {Gain}(p, q) \cap P_4| \le \tau \qquad \text{ for } \text{ any } p\in P_4 \cap OPT\text { and } q\in \mathcal {B}(p) \cap P_4. \end{aligned}$$
This implies that \(\mathcal {F}(p) {\setminus } \mathcal {B}(p)\) contains at most \(\tau \) red points of \(P_4\). However, to apply Lemma 1 we need that the number of red points of \(P_4\) in the whole flower \(\mathcal {F}(p)\) is bounded. To deal with balls with many more than \(\tau \) red points, we will iteratively remove dense sets from \(P_4\) to obtain a subset \(P_s\) of sparse points.
When considering a subset of the points \(P_s \subseteq P\), we say that a point \(j\in P_s\) is dense if the ball \(\mathcal {B}(j)\) contains strictly more than \(2\cdot \tau \) red points of \(P_s\). For a dense point j, we also let \(I_j \subseteq P_s\) contain those points \(i \in P_s\) whose intersection \(\mathcal {B}(i) \cap \mathcal {B}(j)\) contains strictly more than \(\tau \) red points of \(P_s\).
We remark that in the above definition, we have in particular that \(j \in I_j\) for a dense point \(j\in P_s\). Our iterative procedure now works as follows:
Let \(P_d = P_4 {\setminus } P_s\) denote those points that were removed from \(P_4\). We will cluster the two sets \(P_s\) and \(P_d\) of points separately. Indeed, the following lemma says that a center in \(OPT {\setminus } \{c_i\}_{i=1}^3\) either covers points in \(P_s\) or \(P_d\) but not points from both sets. Recall that \(D_j\) denotes the set of points that are removed from \(P_s\) in the iteration when j was selected and so \(P_d = \cup _j D_j\).
For any \(c\in \) \(OPT{\setminus } \{c_i\}_{i=1}^3\) and any \(I_j\in I\), either \(c \in I_j\) or \(\mathcal {B}(c)\cap D_j = \emptyset \).
Let \(c\in OPT{\setminus } \{c_i\}_{i=1}^3\), \(I_j\in I\), and suppose \(c \notin I_j\). If \(\mathcal {B}(c) \cap D_j \ne \emptyset \), there is a point p in the intersection \(\mathcal {B}(c) \cap \mathcal {B}(i)\) for some \(i \in I_j\). Suppose first that \(\mathcal {B}(c) \cap \mathcal {B}(j) \ne \emptyset \). Then, since \(c \notin I_j\), the intersection \(\mathcal {B}(c) \cap \mathcal {B}(j)\) contains at most \(\tau \) red points from \(D_j\) (recall that \(D_j\) contains the points of \(\mathcal {B}(j)\) in \(P_s\) at the time j was selected). But by the definition of dense clients, \(\mathcal {B}(j)\cap D_j\) has more than \(2 \cdot \tau \) red points, so \((\mathcal {B}(j) {\setminus } \mathcal {B}(c)) \cap D_j \) has more than \(\tau \) red points. This region is a subset of \(\mathbf {Gain}(c,p) \cap P_4\), which contradicts (1). This is shown in Fig. 2a. Now consider the second case when \(\mathcal {B}(c) \cap \mathcal {B}(j) = \emptyset \) and there is a point p in the intersection \(\mathcal {B}(c) \cap \mathcal {B}(i)\) for some \(i\in I_j\) and \(i \ne j\). Then, by the definition of \(I_j\), \(\mathcal {B}(i) \cap \mathcal {B}(j)\) has more than \(\tau \) red points of \(D_j\). However, this is also a subset of \(\mathbf {Gain}(c,p) \cap P_4\) so we reach the same contradiction. See Fig. 2b. \(\square \)
The shaded regions are subsets of Gain(c,p), which contain the darkly shaded regions that have \(> \tau \) red points
Our algorithm now proceeds by guessing the number \(k_d\) of balls of \(OPT {\setminus } \{c_i\}_{i=1}^3\) contained in \(P_d\). We also guess the numbers \(r_d\) and \(b_d\) of red and blue points, respectively, that these balls cover in \(P_d\). Note that after guessing \(k_d\), we know that the number of balls in \(OPT{\setminus } \{c_i\}_{i=1}^3\) contained in \(P_s\) equals \(k_s = k- 3 - k_d\). Furthermore, by the first property of Lemma 2, these balls cover at least \(b_s = b - |B \cap \mathbf {Guess}| - b_d\) blue points in \(P_s\) and at least \(r_s = r - |R \cap \mathbf {Guess}| - r_d\) red points in \(P_s\). As there are \(O(n^3)\) possible values of \(k_d, b_d\), and \(r_d\) (each can take a value between 0 and n) we can try all possibilities by increasing the running time by a multiplicative factor of \(O(n^3)\). Henceforth, we therefore assume that we have guessed those parameters correctly. In that case, we show that we can recover an equally good solution for \(P_d\) and a solution for \(P_s\) that covers \(b_s\) blue points and almost \(r_s\) red points:
There exist two polynomial-time algorithms \(\mathcal {A}_d\) and \(\mathcal {A}_s\) such that if \(k_d, r_d\), and \(b_d\) are guessed correctly then
\(\mathcal {A}_d\) returns \(k_d\) balls of radius one that cover \(b_d\) blue points of \(P_d\) and \(r_d\) red points of \(P_d\);
\(\mathcal {A}_s\) returns \(k_s\) balls of radius two that cover at least \(b_s\) blue points of \(P_s\) and at least \(r_s - 3\cdot \tau \) red points of \(P_s\).
We first describe and analyze the algorithm \(\mathcal {A}_d\) followed by \(\mathcal {A}_s\).
The algorithm \(\mathcal {A}_d\) for the dense point set \(P_d\). By Lemma 3, we have that all \(k_d\) balls in \(OPT {\setminus } \{c_i\}_{i=1}^3\) that cover points in \(P_d\) are centered at points in \(\cup _{j} I_j\). Furthermore, we have that each \(I_j\) contains at most one center of OPT. This is because every \(i \in I_j\) is such that \(\mathcal {B}(i) \cap \mathcal {B}(j) \ne \emptyset \) and so, by the triangle inequality, \(\mathcal {B}(j,3)\) contains all balls \(\{\mathcal {B}(i)\}_{i\in I_j}\). Hence, by the assumption that the instance is well-separated, the set \(I_j\) contains at most one center of OPT.
We now reduce our problem to a 3-dimensional subset-sum problem.
For each \(I_j \in I\), form a group consisting of an item for each \(p\in I_j\). The item corresponding to \(p\in I_j\) has the 3-dimensional value vector \((1, |\mathcal {B}(p) \cap D_j \cap B|, |\mathcal {B}(p) \cap D_j \cap R|)\). Our goal is to find \(k_d\) items such that at most one item per group is selected and their 3-dimensional vectors sum up to \((k_d, b_d, r_d)\). Such a solution, if it exists, can be found by standard dynamic programming that has a table of size \(O(n^4)\). For completeness, we provide the recurrence and precise details of this standard technique in Appendix 1. Furthermore, since the \(D_j\)'s are disjoint by definition, this gives \(k_d\) centers that cover \(b_d\) blue points and \(r_d\) red points in \(P_d\), as required in the statement of the lemma.
It remains to show that such a solution exists. Let \(o_1, o_2, \ldots , o_{k_d}\) denote the centers of the balls in \(OPT {\setminus } \{c_i\}_{i=1}^3\) that cover points in \(P_d\). Furthermore, let \(I_{j_1}, \ldots , I_{j_{k_d}}\) be the sets in I such that \(o_i \in I_{j_i}\) for \(i\in \{1,\ldots , k_d\}\). Notice that by Lemma 3 we have that \(\mathcal {B}(o_i)\) is disjoint from \(P_d {\setminus } D_{j_i}\), i.e., \(\mathcal {B}(o_i)\) is contained in \(D_{j_i}\). It follows that the 3-dimensional vector corresponding to an OPT center \(o_i\) equals \((1, |\mathcal {B}(o_i) \cap D_{j_i} \cap B|, |\mathcal {B}(o_i) \cap D_{j_i} \cap R|)\). This is equivalent to just \((1, |\mathcal {B}(o_i) \cap B|, |\mathcal {B}(o_i) \cap R|)\) and so the definition of the value vectors does indeed give the correct contribution of points. Therefore, the sum of these vectors corresponding to \(o_1, \ldots , o_{k_d}\) results in the vector \((k_d, b_d, r_d)\), where we used that our guesses of \(k_d, b_d\), and \(r_d\) were correct.
The algorithm \(\mathcal {A}_s\) for the sparse point set \(P_s\). Assuming that the guesses are correct we have that \(OPT {\setminus } \{c_i\}_{i=1}^3\) contains \(k_s\) balls that cover \(b_s\) blue points of \(P_s\) and \(r_s\) red points of \(P_s\). Hence, LP1 has a feasible solution (x, z) to the instance defined by the point set \(P_s\), the number of balls \(k_s\), and the constraints \(b_s\) and \(r_s\) on the number of blue and red points to be covered, respectively. Lemma 1 then says that we can in polynomial-time find \(k_s\) balls of radius two such that at least \(b_s\) blue balls of \(P_s\) are covered and at least
$$\begin{aligned} r_s - \max _{j: z_j >0} | \mathcal {F}(j) \cap R| \end{aligned}$$
red points of \(P_s\) are covered. Here, \(\mathcal {F}(j)\) refers to the flower restricted to the point set \(P_s\).
To prove the the second part of Lemma 4, it is thus sufficient to show that LP1 has a feasible solution where \(z_j = 0\) for all \(j\in P_s\) such that \(| \mathcal {F}(j) \cap R| > 3\cdot \tau \). In turn, this follows by showing that, for any such \(j\in P_s\) with \(|\mathcal {F}(j) \cap R| > 3 \cdot \tau \), no point in \(\mathcal {B}(j)\) is in OPT (since then \(z_j = 0\) in the integral solution corresponding to OPT). Such a feasible solution can be found by adding \(x_i=0 \forall i\in \mathcal {B}(j)\) for all such j to LP1.
To see why this holds, suppose towards a contradiction that there is a \(c\in OPT\) such that \(c\in \mathcal {B}(j)\). First, since there are no dense points in \(P_s\), we have that the number of red points in \(\mathcal {B}(c) \cap P_s\) is at most \(2 \cdot \tau \). Therefore the number of red points of \(P_s\) in \(\mathcal {F}(j) {\setminus } \mathcal {B}(c)\) is strictly more than \(\tau \). In other words, we have \(\tau < |\mathbf {Gain}(c, j) \cap P_s| \le |\mathbf {Gain}(c, j) \cap P_4|\) which contradicts (1). \(\square \)
Equipped with the above lemma we are now ready to finalize the proof of Theorem 2.
Proof of Theorem 2
Our algorithm guesses the optimal radius and the centers \(c_1, c_2, c_3\) in Phase I, and \(k_d, r_d, b_d\) in Phase II. There are at most \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) \) choices of the optimal radius, n choices for each \(c_i\), and \(n+1\) choices of \(k_d,r_d, b_d\) (ranging from 0 to n). We can thus try all these possibilities in polynomial time and, since all other steps in our algorithm run in polynomial time, the total running time will be polynomial. The algorithm tries all these guesses and outputs the best solution found over all choices. For the correct guesses, we output a solution with \(3+ k_d + k_s = k\) balls of radius at most two. Furthermore, by the second property of Lemma 2 and the two properties of Lemma 4, we have that
the number of blue points covered is at least \(|B \cap \mathbf {Guess}| + b_d + b_s = b\); and
the number of red points covered is at least \(|R \cap \mathbf {Guess}| + 3 \tau + r_d + r_s - 3 \tau = r\).
We have thus given a polynomial-time algorithm that returns a solution where the balls are of radius at most twice the optimal radius. \(\square \)
Constant number of colors
Our algorithm extends easily to a constant number \(\omega \) of color classes \(\mathcal {C}_1, \dots , \mathcal {C}_{\omega }\) with coverage requirements \(p_1, \dots , p_{\omega }\). We use the LPs in Fig. 3 for a general number of colors, where \(p_{j,i}\) in LP2\((\omega )\) indicates the number of points of color class i in cluster \(j \in S\). S is the set of cluster centers obtained from modified clustering algorithm in Appendix 2 to instances with \(\omega \) color classes. LP2\((\omega )\) has only \(\omega \) non-trivial constraints, so any extreme point has at most \(\omega \) variables attaining strictly fractional values, and a feasible solution attaining objective value at least \(p_1\) will have at most \(k+\omega -1\) positive values. By rounding up to 1 the fractional value of the center that contains the most number of points of \(\mathcal {C}_{\omega }\), we can cover \(p_{\omega }\) points of \(\mathcal {C}_{\omega }\). We would like to be able to close the remaining fractional centers, so we apply an analogous procedure to the case with just two colors.
Linear programs for \(\omega \) color classes
We can guess \(3(\omega -1)\) centers of OPT for each of the \(\omega -1\) colors whose coverage requirements are to be satisfied. Then we bound the number of points of each color that may be found in a cluster, by removing dense sets that contain too many points of any one color and running a dynamic program on the removed sets. The final step is to run the clustering algorithm of [5] on the remaining points, and rounding to one the fractional center with the most number of points of \(\mathcal {C}_1\), and closing all other fractional centers.
In particular, we get a running time with a factor of \(n^{O(\omega ^2)}\). The remainder of this section gives a formal description of the algorithm for \(\omega \) color classes.
Formal algorithm for \(\omega \) colors
The following is a natural generalization of Lemma 1 and summarizes the main properties of the clustering algorithm of Appendix 2 for instances with \(\omega \) color classes.
Lemma 1′
Given a fractional solution (x, z) to LP1\((\omega )\), there is a polynomial-time algorithm that outputs at most k clusters of radius two that cover at least \(p_{1}\) points of \(\mathcal {C}_{1}\), and at least \(p_i - (\omega - 1)\max _{j:z_j>0} |\mathcal {F}(j) \cap \mathcal {C}_i|\) points of \(\mathcal {C}_i\) for \(2 \le i \le \omega \).
Since we may not meet the coverage requirements for \(\omega -1\) color classes, it is necessary to guess some balls of OPT for each of those colors, and for each fractional center. In total we guess \(3(\omega -1)^2\) points of OPT as follows:
This guessing takes \(O(n^{3(\omega -1)^2})\) rounds. It is possible that some \(c_{j,i}\) coincide, but this does not affect the correctness of the algorithm. In fact, this can only improve the solution, in the sense that the coverage requirements will be met with fewer than k centers. Let \(k_c\) denote the number of distinct \(c_{j,i}\) obtained in the correct guess. For notation, define
$$\begin{aligned} \mathbf{Guess} :&= \cup _{j=2}^{\omega } \cup _{i=1}^{3(\omega -1)} \mathcal {B}(c_{j,i}) \\ \tau _{j}&= \big \vert \mathcal {C}_j\cap \mathbf{Gain} (c_{j,3(\omega -1)},q_{j,3(\omega -1)})\cap P_{j,3(\omega -1)} \big \vert . \end{aligned}$$
To be consistent with previous notation, let
$$\begin{aligned} P_4 := P {\setminus } \cup _{j=2}^{\omega }\cup _{i=1}^{3(\omega -1)}\mathcal {F}(q_{j,i}). \end{aligned}$$
The important properties guaranteed by the first phase can be summarized in the following lemma whose proof is the natural extension of Lemma 2.
Assuming that \(c_{j,i}\) are guessed correctly, we have that
the \(k-3(\omega -1)^2\) balls of radius one in \(OPT {\setminus } \cup _{j=2}^{\omega } \cup _{i=1}^{3(\omega -1)} \{c_{j,i}\}\) are contained in \(P_4\) and cover \(p_{\omega } - |\mathcal {C}_{\omega } \cap \mathbf{Guess} |\) of points in \(\mathcal {C}_{\omega }\) and \(p_j - |\mathcal {C}_j \cap \mathbf{Guess} |\) points of \(\mathcal {C}_j\) for \(j=2, \dots , \omega \); and
the clusters \(\mathcal {F}(q_{j,i})\) are contained in \(P {\setminus } P_{3(\omega -1) + 1}\) and cover at least \(|\mathcal {C}_{\omega } \cap \mathbf{Guess} |\) points of \(\mathcal {C}_{\omega }\) and at least \(|\mathcal {C}_{j} \cap \mathbf{Guess} | + 3(\omega -1) \cdot \tau _{j}\) points of \(\mathcal {C}_j\).
Now we need to remove points which contain many points from any one of the color classes to partition the instance into dense and sparse parts which leads to the following generalized definition of dense points.
Definition 4′
When considering a subset of the points \(P_s \subseteq P\), we say that a point \(p\in P_s\) is j-dense if \(|\mathcal {C}_j \cap \mathcal {B}(p) \cap P_s| > 2\tau _j\). For a j-dense point p, we also let \(I_p \subseteq P_s\) contain those points \(i \in P_s\) such that \(|\mathcal {C}_j \cap \mathcal {B}(i) \cap \mathcal {B}(p) \cap P_s| > \tau _j\) , for every \(2 \le j \le \omega \).
Now we perform a similar iterative procedure as for two colors:
As in the case of two colors, set \(P_d = P_{3(\omega -1)} {\setminus } P_s\). By naturally extending Lemma 3 and its proof, we can ensure that any ball of \(OPT {\setminus } \cup _{j=2}^{\omega }\cup _{i=1}^{3(\omega -1)}\{ c_{j,i} \}\) is completely contained in either \(P_d\) or \(P_s\). We guess the number \(k_d\) of such balls of OPT contained in \(P_d\), and guess the numbers \(d_1, \dots , d_{\omega }\) of points of \(\mathcal {C}_1, \dots , \mathcal {C}_{\omega }\) covered by these balls in \(P_d\). There are \(O(n^{\omega +1})\) possible values of \(k_d, d_1, \dots , d_{\omega }\) and all the possibilities can be tried by increasing the running time by a multiplicative factor. The number of balls of \(OPT {\setminus } \cup _{j=2}^{\omega } \cup _{i=1}^{3(\omega -1)} \{c_{j,i}\}\) contained in \(P_s\) is given by \(k_s = k - k_c - k_d\) and these balls cover at least \(s_j = p_j - |\mathcal {C}_j \cap \mathbf{Guess} _{all}| - d_j\) points of \(\mathcal {C}_j\) in \(P_s\), \(1 \le j \le \omega \).
Assuming that the parameters are guessed correctly we can show, similar to Lemma 4, that the following holds.
There exist two polynomial-time algorithms \(\mathcal {A'}_d\) and \(\mathcal {A'}_s\) such that if \(k_d, d_1, \dots d_{\omega }\) are guessed correctly then
\(\mathcal {A'}_d\) returns \(k_d\) balls of radius one that cover \(d_1, \dots , d_{\omega }\) points of \(\mathcal {C}_1, \dots , \mathcal {C}_{\omega }\) of \(P_d\);
\(\mathcal {A'}_s\) returns \(k_s\) balls of radius two that cover at least \(s_{1}\) points of \(\mathcal {C}_{1}\) of \(P_s\) and at least \(s_j - 3(\omega - 1)\cdot \tau _{j}\) points of \(\mathcal {C}_j\) of \(P_s\), \(2\le j \le \omega \).
The algorithm \(\mathcal {A'}_d\) proceeds as \(\mathcal {A}_d\) did, with the modification that the dynamic program is now \((\omega +1)\)-dimensional. Algorithm \(\mathcal {A'}_s\) is also similar to \(\mathcal {A}_s\), because LP1 has a feasible solution where \(z_p=0\) for all \(p \in P_s\) such that \(|\mathcal {F}(p) \cap \mathcal {C}_{j}| > 3\tau _{j}\) holds for any \(2 \le j \le \omega \). Hence, we output a solution with \(k_c + k_d + k_s = k\) balls of radius at most two, and
the number of points of \(\mathcal {C}_{1}\) covered is at least \(|\mathcal {C}_{1} \cap \mathbf{Guess} | + d_{1} + s_{1} = p_{1}\); and
the number of points of \(\mathcal {C}_{j}\) covered is at least \(|\mathcal {C}_j \cap \mathbf{Guess} | + 3(\omega - 1)\tau _{j} + d_{j} + s_{j} - 3(\omega - 1)\tau _{j} = p_j\), for all \(j=2, \dots , \omega \).
This is a polynomial-time algorithm for colorful k-center with a constant number of color classes.
LP integrality gaps
In this section, we present two natural ways to strengthen LP1 and show that they both fail to close the integrality gap, providing evidence that clustering and knapsack feasibility cannot be decoupled in the colorful k-center problem. On one hand, the Sum-of-Squares hierarchy is ineffective for knapsack problems, while on the other hand, adding knapsack constraints to LP1 is also insufficient due to the clustering aspect of this problem.
Sum-of-squares integrality gap
The Sum-of-Squares (equivalently Lasserre [16, 17]) hierarchy is a method of strengthening linear programs that has been used in constraint satisfaction problems, set-cover, and graph coloring, to just name a few examples [3, 9, 18]. We use the same notation for the Sum-of-Squares hierarchy, abbreviated as SoS, as in Karlin et al. [15]. For a set V of variables, \(\mathcal {P}(V)\) are the power sets of V and \(\mathcal {P}_t(V)\) are the subsets of V of size at most t. Their succinct definition of the hierarchy makes use of the shift operator: for two vectors \(x, y \in \mathbb {R}^{\mathcal {P}(V)}\) the shift operator is the vector \(x * y \in \mathbb {R}^{\mathcal {P}(V)}\) such that
$$\begin{aligned} (x * y)_I = \sum _{J \subseteq V} x_J y_{I \cup J}. \end{aligned}$$
Analogously, for a polynomial \(g(x) = \sum _{I \subseteq V} a_I \prod _{i \in I} x_i\) we have \((g*y)_I = \sum _{J \subseteq V} a_J y_{I \cup J}\). In particular, we work with the linear inequalities \(g_1, \dots , g_m\) so that the polytope to be lifted is
$$\begin{aligned} K = \{x \in [0,1]^n : g_{\ell }(x)&\ge 0 \text{ for } \ell = 1, \dots , m \}. \end{aligned}$$
Let \(\mathcal {T}\) be a collection of subsets of V and y a vector in \(\mathbb {R}^{\mathcal {T}}\). The matrix \(M_{\mathcal {T}}(y)\) is indexed by elements of \(\mathcal {T}\) such that
$$\begin{aligned} (M_{\mathcal {T}}(y))_{I, J} = y_{I \cup J}. \end{aligned}$$
We can now define the t-th SoS lifted polytope.
For any \(1 \le t \le n\), the t-th SoS lifted polytope \(SoS^t(K)\) is the set of vectors \(y \in [0,1]^{\mathcal {P}_{2t}(V)}\) such that \(y_{\emptyset } = 1\), \(M_{\mathcal {P}_t(V)}(y) \succeq 0\), and \(M_{\mathcal {P}_{t-1}(V)}(g_{\ell } * y) \succeq 0\) for all \(\ell \).
A point \(x \in [0,1]^n\) belongs to the t-th SoS polytope \(SoS^t(K)\) if there exists \(y \in SoS^t(K)\) such that \(y_{\{i\}} = x_i\) for all \(i \in V\).
We use a reduction from Grigoriev's SoS lower bound for knapsack [11] to show that the following instance has a fractional solution with small radius that is valid for a linear number of rounds of SoS.
At least \(\min \{ 2 \lfloor \min \{k/2, n-k/2 \} \rfloor + 3, n \}\) rounds of SoS are required to recognize that the following polytope contains no integral solution for \(k \in \mathbb {Z}\) odd.
$$\begin{aligned} \sum _{i=1}^n 2w_i&= k \\ w_i&\in [0,1] \,\,\,\, \forall i. \end{aligned}$$
Consider an instance of colorful k-center with two colors, 8n points, \(k = n\), and \(r = b = 2n\) where n is odd. Points \(\{4i-3,4i-2, 4i-1,4i \} \forall i\in [2n]\) belong to cluster \(C_i\) of radius one. For odd i, \(C_i\) has three red points and one blue point and for even i, \(C_i\) has one red point and three blue points. A picture is shown in Fig. 4. In an optimal integer solution, one center needs to cover at least 2 of these clusters while a fractional solution satisfying LP1 can open a center of 1/2 around each cluster of radius 1. Hence, LP1 has an unbounded integrality gap since the clusters can be arbitrarily far apart. This instance takes an odd number of copies of the integrality gap example given in [5].
Integrality gap example for linear rounds of SoS
We can do a simple mapping from a feasible solution for the tth round of SoS on the system of equations in Theorem 3 to our variables in the tth round of SoS on LP1 for this instance to demonstrate that the infeasibility of balls of radius one is not recognized. More precisely, we assign a variable \(w_i\) to each pair of clusters of radius one as shown in Fig. 4, corresponding to opening each cluster in the pair by \(w_i\) amount. Then a fractional opening of balls of radius one can be mapped to variables that satisfy the polytope in Theorem 3. The remainder of this subsection is dedicated to formally describing the reduction from Theorem 3. Let W denote the set of variables used in the polytope defined in Theorem 3. Let w be in the t-th round of SoS applied to the system in Theorem 3 so that w is indexed by subsets of W of size at most t. Let \(V = V_x\cup V_z\), where \(V_x = \{x_1, \dots , x_{8n}\}\) and \(V_z =\{ z_1, \dots , z_{8n}\}\), be the set of variables used in LP1 for the instance shown in Fig. 4. We define vector y with entries indexed by subsets of V, and show that y is in the t-th SoS lifting of LP1. In each ball we pick a representative \(x_i\), \(i \equiv 1 \mod 4\), to indicate how much the ball is opened, so we set \(y_I = 0\) if \(x_j \in I\), \(j \not \equiv 1 \mod 4\). Otherwise, we set \(y_I = w_{\pi (I)}\) where
$$\begin{aligned} \pi (I)&= \{w_i : x_{8i-3} \text{ or } x_{8i-7} \text{ or } z_{8i-j} \in I, \text{ for } \text{ some } i \in [n], j \in [7] \}. \end{aligned}$$
We have \(M_{\mathcal {P}_t(W)}(w) \succeq 0\), and for \(g_1 = -n + \sum _{i=1}^n 2x_i\) and \(g_2 = n - \sum _{i=1}^n 2x_i\), \(M_{\mathcal {P}_{t-1}(W)}(g_{\ell } * w) \succeq 0\) for \(\ell = 1, 2\) since w satisfies the t-th round of SoS. This implies that \(M_{\mathcal {P}_{t-1}(W)}(g_{\ell } * w)\) is the zero matrix.
To show that \(M_{\mathcal {P}_t(V)}(y)\succeq 0\), we start with \(M_{\mathcal {P}_t(W)}(w)\) and construct a sequence of matrices such that the semidefiniteness of one implies the semidefiniteness of the next, until we arrive at a matrix that is \(M_{\mathcal {P}_t(V)}(y)\) with rows and columns permuted, i.e. \(M_{\mathcal {P}_t(V)}(y)\) multiplied on the left and right by a permutation matrix and its transpose. Since the eigenvalues of a matrix are invariant under this operation, \(M_{\mathcal {P}_t(W)}(w) \succeq 0\) implies that \(M_{\mathcal {P}_t(V)}(y)\succeq 0\).
There exists a sequence of square matrices \(M_{\mathcal {P}_t(W)}(w) := M_0\), \(M_1\), \(M_2\), \(\dots \), \(M_p\), such that the rank of \(M_{i}\) is the same as the rank of \(M_{i+1}\), \(M_i\) is the leading principal submatrix of \(M_{i+1}\) of dimension one less, and \(M_p\) is \(M_{\mathcal {P}_t(V)}(y)\) with rows and columns permuted.
We claim that this sequence of matrices exists with the following description. Firstly, the matrix \(M_{i+1}\) has one extra row and column than \(M_i\), and is the same on the leading principal submatrix of size \(M_i\). Then there are two possibilities:
The last row and column of \(M_{i+1}\) are all zeroes, or
for some j, the last row of \(M_{i+1}\) is a copy of the jth row of \(M_i\), the last column is a copy of the jth column of \(M_i\), and the last entry is \((M_i)_{j,j}\).
Either way, the rank of \(M_{i+1}\) would be the same as the rank of \(M_i\).
To prove this claim, it suffices to consider a sequence of indices of the matrix \(M_{\mathcal {P}_t(V)}(y)\). The matrix \(M_0\) in our sequence will be the submatrix of \(M_{\mathcal {P}_t(V)}(y)\) indexed by the first k indices, where k is the dimension of \(M_{\mathcal {P}_t(W)}(w)\), i.e. the number of subsets of W of size at most t. Each subsequent matrix \(M_i\) will be the submatrix of \(M_{\mathcal {P}_t(V)}(y)\) indexed by the first \(k+i\) indices. Note that the rows/columns of \(M_{\mathcal {P}_t(V)}(y)\) can be considered to be indexed by all the subsets of V of size at most t. With this in mind, consider a sequence of subsets of V of size at most t with the following properties:
All subsets of \(\{x_{8i-7}: i \in [n]\}\) of size at most t form a prefix of our sequence.
Each set index after the first has exactly one more element than some set index that came earlier in the sequence.
It is clear that it is possible to arrange all the subsets of V of size at most t in a sequence to satisfy these properties. It only remains to show that this sequence produces the desired construction for \(M_0, M_1, \dots , M_p\).
$$\begin{aligned} \left( M_{\mathcal {P}_t(y)} \right) _{I,J} = y_{I \cup J} = w_{\pi (I \cup J)} = w_{\pi (I), \pi (J)} \end{aligned}$$
so property (1) guarantees that we begin with \(M_0\) being \(M_{\mathcal {P}_t(W)}(w)\), up to the correct permutation of subsets of \(\{x_{8i-7}: i \in [n]\}\). Now consider some \(k'\)th index in the sequence, \(k' > k\) where k is the dimension of \(M_{\mathcal {P}_t(W)}(w)\). By property (2), it is of the form \(J \cup \{x\}\), where J is one of the first \(k' - 1\) indices, and \(x \in V\). There are two cases:
If x is some \(x_i\) with \(i \not \equiv 1 \mod 4\), then \(y_{I_{\ell } \cup J} = 0\) for all \(\ell \le k'\).
Otherwise, \(\pi (J \cup \{x\}) = \pi (J)\).
In the first case, the matrix constructed from the first \(k'\) indices will have property (a), and in the second, property (b). Finally, it is clear that at each step the dimension of the matrices increases by one, and that it is the leading principal submatrix of the following matrix in the sequence, until we end up with \(M_{\mathcal {P}_t(V)}(y)\) (up to some permutation of its rows and columns). \(\square \)
By the rank-nullity theorem, \(M_{i+1}\) has one more 0 eigenvalue than \(M_i\), so we can apply the following theorem.
Let A be a symmetric \(n \times n\) matrix and B be a principal submatrix of A of dimension \((n-1) \times (n-1)\). If the eigenvalues of A are \(\alpha _1 \ge \cdots \ge \alpha _n\) and the eigenvalues of B are \(\beta _1 \ge \cdots \ge \beta _{n-1}\) then \(\alpha _1 \ge \beta _1 \ge \alpha _2 \ge \beta _2 \ge \cdots \ge \alpha _{n-1} \ge \beta _{n-1} \ge \alpha _n\).
With \(M_{i+1} = A\) and \(M_i = B\) as in Theorem 4 we have that \(\alpha _n = 0\) (since \(M_{i+1}\) and \(M_i\) have the same eigenvalues but the dimension of the zero eigenspace of \(M_{i+1}\) is one greater than that of \(M_i\)). Hence, \(M_{i+1}\) has no negative eigenvalues if \(M_i\) has no negative eigenvalues. This is sufficient to show that each matrix in the sequence constructed is positive semidefinite, and concludes the proof that \(M_{\mathcal {P}_t(V)}(y)\succeq 0\).
It remains to show that the matrices arising from the shift operator between y and the linear constraints of our polytope are positive semidefinite. Let \(h_i\) denote the linear inequalities in LP1. In essence, the corresponding moment matrices \(M_{\mathcal {P}_{t-1}(V)}(h_i * y)\) are zero matrices since all \(h_i\) are tight for the example in Fig. 4. Formally, we have
Matrices \(M_{\mathcal {P}_{t-1}(V)}(h_{\ell } * y)\) are the zero matrix, for each \(h_{\ell }\) a linear constraint from LP1.
Let \(h_{1,j}\) be the linear polynomial that corresponds to the first inequality of LP1 for \(j \in P\). First, if \(i \not \equiv 1 \mod 4\), then \(y_{I \cup \{x_i \}} = 0\) for any \(I \subseteq V\). Otherwise, we have
$$\begin{aligned} (M_{\mathcal {P}_{t-1}}(h_{1j} * y))_{I,J}&= \left( \sum _{i \in \mathcal {B}(j, 1)} y_{I\cup J\cup \{x_{i}\}} \right) - y_{I\cup J \cup \{z_j\}} \\&= w_{\pi (I\cup J)\cup \pi (x_{i})} - w_{\pi (I\cup J) \cup \pi (z_j)} = 0 \end{aligned}$$
since \(\pi (\{x_i\}) = \pi (z_j)\) for \(i \in \mathcal {B}(j,1)\), \(i \equiv 1 \mod 4\). For the remaining inequalities of LP1: \(h_2\), \(h_3\), and \(h_4\), we have that \(M_{\mathcal {P}_{t-1}(V)}(h_{\ell } * y)\) is the zero matrix because of how we defined the projection onto w:
$$\begin{aligned} (M_{\mathcal {P}_{t-1}}(h_2 * y))_{I,J}&= ny_{I\cup J} - \sum _{x_j\in V_x} y_{I\cup J \cup \{x_j\}}\\&= nw_{\pi (I\cup J)} - \sum _{j=1}^n 2w_{\pi (I\cup J \cup \{w_j\})} \\&= (M_{\mathcal {P}_{t-1}}(g_2 * w))_{\pi (I),\pi (J)} = 0 \\ M_{\mathcal {P}_{t-1}}(h_3 * y))_{I,J}&= M_{\mathcal {P}_{t-1}}(h_4 * y))_{I,J} \\&= \left( \sum _{j\in R} y_{I\cup J \cup \{z_j\}} \right) - 2ny_{I\cup J}\\&= \left( \sum _{i=1}^{n} 4w_{\pi (I\cup J) \cup \{w_i\}} \right) - 2nw_{\pi (I\cup J)} \\&= 2(M_{\mathcal {P}_{t-1}}(g_1 * w))_{\pi (I),\pi (J)} = 0. \end{aligned}$$
\(\square \)
This concludes the formal proof of the following theorem.
The integrality gap of LP1 with 8n points persists up to \(\Omega (n)\) rounds of Sum-of-Squares.
Flow constraints
In this section we add additional constraints based on standard techniques to LP1. These incorporate knapsack constraints for the fractional centers produced in the hope of obtaining a better clustering and we show that this fails to reduce the integrality gap.
We define an instance of a knapsack problem with multiple objectives. Each point \(p \in P\) corresponds to an item with three dimensions: a dimension of size one to restrict the number of centers, \(|B \cap \mathcal {B}(p)|\), and \(|R \cap \mathcal {B}(p)|\). We set up a flow network with an \((n+1) \times n \times n \times k\) grid of nodes and we name the nodes with the coordinate (w, x, y, z) of its position. The source s is located at (0, 0, 0, 0) and we add an extra node t for the sink. Assign an arbitrary order to the points in P. For the item corresponding to \(i \in P\), for each \(x \in [n]\), \(y \in [n]\), \(z \in [k]\):
Add an edge from (i, x, y, z) to \((i+1, x, y, z)\) with flow variable \(e_{i,x,y,z}\).
With \(b_i := |B \cap \mathcal {B}(i)|\) and \(r_i := |R \cap \mathcal {B}(i)|\), if \(z < k\) add an edge from (i, x, y, z) to \((i+1, \min \{x+b_i, n\}, \min \{y+b_i, n\}, z+1)\) with flow variable \(f_{i,x, y,z}\).
For each \(x \in [b, n]\), \(y \in [r, n]\):
Add an edge from \((n+1, x, y, k)\) to t with flow variable \(g_{x, y}\).
Set the capacities of all edges to one. In addition to the usual flow constraints, add to LP1 the constraints
$$\begin{aligned} x_i&= \sum _{x, y \in [n], z \in [k]} f_{i,x,y,z} \quad \text{ for } \text{ all } i \in P \end{aligned}$$
$$\begin{aligned} 1 - x_i&= \sum _{x, y \in [n], z \in [k]} e_{i,x,y,z} \quad \text{ for } \text{ all } i \in P. \end{aligned}$$
We refer to the resulting linear program as LP3. Notice that an integral solution to LP1 defines a path from s to t through which one unit of flow can be sent; hence LP3 is a valid relaxation. On the other hand, any path P from s to t defines a set \(C_P\) of at most k centers by taking those points c for which \(f_{c,x,y, z} \in P\) for some x, y, and z. Moreover, as t can only be reached from a coordinate with \(x\ge b\) and \(y\ge r\) we have that \(\sum _{c\in C_P} |\mathcal {B}(c) \cap B| \ge b\) and \(\sum _{c\in C_P} |\mathcal {B}(c) \cap R| \ge r\). It follows that \(C_P\) forms a solution to the problem of radius one if the balls are disjoint. In particular, our integrality gap instances for the Sum-of-Squares hierarchy do not fool LP3.
\(k=3\), \(r=b=8\)
The example in Fig. 5 shows that in an instance where balls overlap, the integrality gap remains large. Here, the fractional assignment of open centers is 1/2 for each of the six balls and this gives a fractional covering of 8 red and 8 blue points as required. This assignment also satisfies the flow constraints because the three balls at the top of the diagram define a path disjoint from the three at the bottom. By double counting the five points in the intersection of two balls we cover 8 red and 8 blue points with each set of three balls. Hence, we can send flow along each path. However, this does not give a feasible integral solution with three centers as any set of three clusters does not contain enough points. In fact, the four clusters can be placed arbitrarily far from each other and in this way we have an unbounded integrality gap since one ball needs to cover two clusters.
Conclusion and open questions
Our 3-approximation algorithm for colorful k-center with \(\omega \) color classes runs in time \(|P|^{O(\omega ^2)}\), where the quadratic term arises from guessing linearly many optimal centers to make up for linearly many extra centers in the pseudo-approximation. In [2], it was shown that a linear exponential dependence on \(\omega \) is necessary assuming ETH holds. It would be interesting to obtain the same approximation factor but without the quadratic dependence on \(\omega \), or better yet, obtain a tight result. The current best hardness of approximation of \(2-\epsilon \) comes from the standard k-center problem. Note that in the well-separated case where no ball of radius 3 covers two optimal balls, we obtain a 2-approximation. This well-separated condition is crucial in the design of our algorithm, however, so it seems that significantly new ideas would be required to decrease this factor, if it is at all possible.
Another direction is to explore fair coverage constraints for other clustering problems such as k-median and k-means. The natural linear programming relaxations for these problems have unbounded integrality gaps even in the case of just one color class, i.e. the case of outliers.
Having multiple color classes requires solving subset-sum in some form. Investigating these problems could shed light on general combinatorial optimization problems that involve subset-sum.
Anagnostopoulos, A., Becchetti, L., Böhm, M., Fazzone, A., Leonardi, S., Menghini, C., Schwiegelshohn, C.: Principal fairness: removing bias via projections. CoRR arXiv:1905.13651 (2019)
Anegg, G., Angelidakis, H., Kurpisz, A., Zenklusen, R.: A technique for obtaining true approximations for k-center with covering constraints. In: International Conference on Integer Programming and Combinatorial Optimization (IPCO). pp. 52–65 (2020)
Arora, S., Ge, R.: New tools for graph coloring. In: Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques. pp. 1–12. Springer (2011)
Backurs, A., Indyk, P., Onak, K., Schieber, B., Vakilian, A., Wagner, T.: Scalable fair clustering. In: Proceedings of the 36th International Conference on Machine Learning, ICML. pp. 405–413 (2019)
Bandyapadhyay, S., Inamdar, T., Pai, S., Varadarajan, K.R.: A constant approximation for colorful k-center. In: 27th Annual European Symposium on Algorithms, ESA. pp. 1–14 (2019)
Chakrabarty, D., Goyal, P., Krishnaswamy, R.: The non-uniform k-center problem. In: 43rd International Colloquium on Automata, Languages, and Programming, ICALP. pp. 1–15 (2016)
Charikar, M., Khuller, S., Mount, D.M., Narasimhan, G.: Algorithms for facility location problems with outliers. In: Proceedings of the 12th Annual ACM-SIAM symposium on Discrete algorithms (SODA). pp. 642–651 (2001)
Chierichetti, F., Kumar, R., Lattanzi, S., Vassilvitskii, S.: Fair clustering through fairlets. In: Advances in Neural Information Processing Systems (NIPS). pp. 5029–5037 (2017)
Chlamtac, E., Friggstad, Z., Georgiou, K.: Understanding set cover: Sub-exponential time approximations and lift-and-project methods. CoRR arXiv:1204.5489 (2012)
Gonzalez, T.F.: Clustering to minimize the maximum intercluster distance. Theor. Comput. Sci. 38, 293–306 (1985)
Article MathSciNet Google Scholar
Grigoriev, D.: Complexity of positivstellensatz proofs for the knapsack. Comput. Complex. 10(2), 139–154 (2001)
Harris, D.G., Pensyl, T., Srinivasan, A., Trinh, K.: A lottery model for center-type problems with outliers. ACM Trans. Algorithms 15(3), 1–25 (2019)
MathSciNet MATH Google Scholar
Hochbaum, D.S., Shmoys, D.B.: A best possible heuristic for the k-center problem. Math. Oper. Res. 10(2), 180–184 (1985)
Hsu, W.L., Nemhauser, G.L.: Easy and hard bottleneck location problems. Discret. Appl. Math. 1(3), 209–215 (1979)
Karlin, A.R., Mathieu, C., Nguyen, C.T.: Integrality gaps of linear and semi-definite programming relaxations for knapsack. In: Integer Programming and Combinatoral Optimization IPCO. pp. 301–314 (2011)
Lasserre, J.B.: An explicit exact SDP relaxation for nonlinear 0-1 programs. In: International Conference on Integer Programming and Combinatorial Optimization (IPCO). pp. 293–303 (2001)
Lasserre, J.B.: Global optimization with polynomials and the problem of moments. SIAM J. Optim. 11(3), 796–817 (2001)
Tulsiani, M.: CSP gaps and reductions in the lasserre hierarchy. In: Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC. pp. 303–312 (2009)
Open Access funding provided by EPFL Lausanne.
EPFL, Route Cantonale, 1015, Lausanne, Switzerland
Xinrui Jia, Kshiteej Sheth & Ola Svensson
Xinrui Jia
Kshiteej Sheth
Ola Svensson
Correspondence to Kshiteej Sheth.
Supported by the Swiss National Science Foundation project 200021-184656 "Randomness in Problem Instances and Randomized Algorithms."
Dynamic programming for dense points
In this section we describe the dynamic programming algorithm discussed in Lemma 4. As stated in the proof of Lemma 4, given \(I = \cup _j I_j\) and correct guesses for \(k_d,b_d,r_d\), we need to find \(k_d\) balls of radius one centered at points in I covering \(b_d\) blue and \(r_d\) red points with at most one point from each \(I_j\in I\) picked as a center. To do this, we first order the sets in I arbitrarily as \(I= \{I_{j_1},\ldots ,I_{j_m}\}, m = |I|\). We create a 4-dimensional table T of dimension \((m,b_d,r_d,k_d)\). \(T[m',b',r',k']\) stores whether there is a set of \(k'\) balls in the first \(m'\) sets of I covering \(b'\) blue and \(r'\) red points. The recurrence relation for T is
$$\begin{aligned} T[0,0,0,0]&= \text{ True } \\ T[0, b', r', k']&= \text{ False, } \quad \text{ for } \text{ any } b', r',k' \ne 0 \\ T[m', b', r', k']&= {\left\{ \begin{array}{ll} \text{ True } &{}\text{ if } T[m'-1, b', r', k'] = \text{ True } \\ \text{ True } &{}\text{ if } \exists c \in I_{j_{m'}} \text{ s.t. } T[m'-1,b'',r'',k'-1] = \text{ True, } \\ &{} \text{ for } b'' = b'- |\mathcal {B}(c) \cap D_{j_{m'}} \cap B|, \\ &{} r'' = r' - |\mathcal {B}(c) \cap D_{j_{m'}} \cap R| \\ \text{ False } &{}\text{ otherwise } \end{array}\right. }. \end{aligned}$$
The table T has size \(O((m+1) \cdot (n+1) \cdot (n+1) \cdot (n+1)) = O(n^4)\) since the first parameter has range from 0 to m, and the other parameters can have value 0 up to at most n. Moreover, since \(|\cup _{i=1}^m I_{j_i}| \le n\) and this is a disjoint union, for each of the \(O(n^3)\) choices of \(b', r', k'\) we can determine all of the \(m'\) entries in O(n) time. Hence, we can compute the the whole table in time \(O(n^4)\) using, for example, the bottom-up approach. We can also remember the choices in a separate table and so we can find a solution in time \(O(n^4)\) if it exists.
The clustering algorithm
In this section we present the clustering algorithm used in [5] with a simple modification. The algorithm is described in pseudo-code in Algorithm 1.
Now we state the theorem which states the properties of this clustering algorithm used in Sect. 2.1.
Given a feasible fractional solution (x, z) to LP1, the set of points \(S \subseteq P\) and clusters \(C_j \subseteq P\) for every \(j\in S\) produced by Algorithm 1 satisfy:
Moreover, if we let \(R_j = C_j \cap R\) and \(B_j = C_j \cap B\) with \(r_j = |R_j|\) and \(b_j = |B_j|\) for \(j\in S\), then y is a feasible solution to LP2 (depicted on the right in Fig. 1) with objective value at least r.
The proof of the first statement is clear from the condition in the while loop of the algorithm.
For the second statement, observe that, by the definition of \(C_j\) as stated in the algorithm, \(C_j \subseteq \bigcup _{i \in \mathcal {B}(j)} \mathcal {B}(i) = \mathcal {F}(j)\). Since in each iteration, the cluster is removed from \(P'\), the clusters are clearly disjoint.
In order to prove that y is feasible this we first state some useful observations.
Firstly, for any \(i\in P\) there is at most one \(j\in S\) such that \(d(i,j)\le 1\). This is true because if there were \(j,j'\in S\) such that both \(j,j' \in B(j)\) then, assuming w.l.o.g. j was considered before in the while loop, \(j'\in C_j\) and thus \(j'\) cannot be in S which is a contradiction.
Secondly, note that for any \(j_1\in P\) such that \(j_1\in C_j\) for some j, then \(\tilde{z}_{j} = \tilde{z}_{j_1}\ge z_{j_1}\). This is trivially true if \(\tilde{z}_{j}=1\), otherwise \(\tilde{z}_{j}=\sum _{i\in \mathcal {B}(j)}x_i \ge z_j\ge z_{j_1}\) where the first inequality follows from LP1 constraints and second inequality from the fact that when \(C_j\) was removed, \(z_j\) had the highest z value.
Now we show that y is feasible for LP2 with objective value at least r. Firstly we show that \(\sum _{j\in S}r_j y_j \ge r\). To see this,
$$\begin{aligned} \sum _{j\in S}r_j y_j&= \sum _{j\in S}|R_j| y_j \\&= \sum _{j\in S}\sum _{j'\in R_j} \tilde{z}_j (y_j = \tilde{z}_j\text { for any }j\in S) \\&\ge \sum _{j\in S}\sum _{j'\in R_j} z_{j'} (\text {from second observation, }\tilde{z}_j\ge z_{j'}\text { for any } j'\in C_j)\\&= \sum _{j' \in R : z_{j'}> 0} z_{j'} \text {(since } C_j\text {'s are disjoint and contain all }j\text { s.t. }z_j>0)\\&= \sum _{j' \in R} z_{j'} \ge r \text {(since }z\text { satisfies LP1))} \end{aligned}$$
Similarly \(\sum _{j\in S}b_j y_j \ge b\). Finally we will show that \(\sum _{j\in S} y_j \le k\),
$$\begin{aligned} \sum _{j\in S} y_j&\le \sum _{j\in S} \sum _{j'\in \mathcal {B}(j)} x_{j'} \left( \text {since } y_j\le \sum _{j'\in \mathcal {B}(j)}x_{j'}\right) \\&\le \sum _{j'\in P} x_{j'} \text {(from the first observation)}\\&\le k \text {(since}\,x\,\text {satisfies LP1)} \end{aligned}$$
This concludes the proof of the claim that y is a feasible solution to LP2 with objective value at least r. \(\square \)
Jia, X., Sheth, K. & Svensson, O. Fair colorful k-center clustering. Math. Program. 192, 339–360 (2022). https://doi.org/10.1007/s10107-021-01674-7
Issue Date: March 2022
k-center
Clustering and facility location
Mathematics Subject Classification
68W40 Analysis of Algorithms
|
CommonCrawl
|
Globalization and Health
Food trade among Pacific Island countries and territories: implications for food security and nutrition
Anne Marie Thow ORCID: orcid.org/0000-0002-6460-58641,
Amerita Ravuvu2,
Siope Vakataki Ofa3,
Neil Andrew4,
Erica Reeve1,5,
Jillian Tutuo6 &
Tom Brewer4
Globalization and Health volume 18, Article number: 104 (2022) Cite this article
There is growing attention to intra-regional trade in food. However, the relationship between such trade and food and nutrition is understudied. In this paper, we present an analysis of intra-regional food trade in the Pacific region, where there are major concerns regarding the nutritional implications of international food trade. Using a new regional database, we examine trends in food trade among Pacific Island Counties and Territories (PICTs) relative to extra-regional trade.
Intra-regional trade represents a small, but increasing proportion of total imports. The major food group traded within the Pacific is cereal grains and flour, which represented 51% of total intra-regional food trade in 2018. Processed and prepared foods, sweetened or flavoured beverages, processed fish, and sugar and confectionary are also traded in large quantities among PICTs. Trade in root crops is negligible, and overall intra-regional trade of healthy foods is limited, both in terms of tonnage and relative to imports from outside the region. Fiji remains the main source of intra-regional imports into PICTs, particularly for non-traditional staple foods.
This study highlights the growth in trade of staple foods intra-regionally, indicating a role for Fiji (in particular) in regional food security. Within this overall pattern, there is considerable opportunity to enhance intra-regional trade in traditional staple foods, namely root crops. Looking forward, the current food system disruption arising from the COVID-19 pandemic and associated policy measures has highlighted the long-term lack of investment in agriculture, and suggests an increased role for regional approaches in fostering trade in healthy foods.
Global attention has turned to regional trade as multilateral negotiations have continued to stall over the past 20 years. Intra-regional trade agreements can create new opportunities for specialization and comparative advantage, and open proximal markets through reduced barriers to trade (both tariff and non-tariff) and enhanced regional stability [1]. For developing countries, intra-regional trade agreements can also foster economic stability and enhanced capacity to engage with external trade partners [2, 3]. Broadly, drivers of regionalism include expected gains arising from reduced transaction costs, shared innovation, and greater economic and political weight in international markets and institutions [4].
The relationship between intra-regional trade and food and nutrition security is complex and dependent on regional and country-level characteristics. As a consequence, the degree to which intra-regional food trade contributes to the United Nations Sustainable Development Goals (notably 1 to 3) is contingent on the details of that trade [5]; despite potential advantages, barriers to intra-regional trade in food remain [6]. There have been recommendations for deeper regional integration to enable freer flows of food from surplus to deficit countries to address food insecurity [7]. Indeed, intra-regional trade cooperation in the Association of South East Asian Nations has improved food security through staple food trade [8], as has the South Asian Free Trade Agreement, in the form of regionalised meat trade [9]. Conversely, regional liberalization has also been associated with the nutrition transition in Central America [10, 11] and Southern Africa [12].
The food security impacts of COVID-19 have also highlighted the vulnerability of net food importing countries to food insecurity. Studies of regional trade within Africa have highlighted the potential for regional trade to mitigate this vulnerability [8], as well as the potential for COVID-19 associated disruptions to stimulate regional food markets and reduce reliance on global trade [13]. Similar opportunities have been observed in the Pacific region [14]. In particular, local agricultural production and incomes have reduced, as domestic markets have declined and access to international markets has been disrupted. However, this has also led to a resurgence in traditional food systems [15].
Concerns about trade and the nutrition transition – in which diets globally have shifted towards increased intakes of more processed foods high in fat, salt and sugar, with low intakes of fruit, vegetables and fibre [16] – have been prominent for Pacific Island Countries and Territories (PICTs) [17].Footnote 1 The region is one of the most affected by diet-related non-communicable diseases globally, and also faces persistent food insecurity (often as a result of natural disasters) [18, 19]. Historically, food trade in the Pacific has been strongly influenced by colonization and extra-regional trade [20]. Rising import dependence has generated concerns about the dumping of unhealthy products in the region and the impact of colonial trade patterns, and the role of imports in fostering dietary change [20, 21]. There has been a marked change in diets from consisting of mainly healthy traditional, local foods – including root crops, fish, and vegetables – to diets including a range of non-traditional, often imported foods, such as rice, sugar, wheat flour and processed snack foods [22].
While Pacific Island leaders recognize the importance of trade policy in improving nutrition in the region [23], there appears to be an implicit assumption that intra-regional trade is small and of little importance [24]. There have also been concerns raised regarding the limited potential benefits (and potential costs) of regional economic integration and intra-regional trade in the face of intractable barriers such as the high costs of transportation, limited production capacity and small market size [25,26,27].
In this study, we provide a critical and missing piece of evidence in analysing regional food trade policy in the Pacific Island region by quantifying trends in intra-regional trade, addressing the following research questions: 1) is intra-regional trade significant in terms of food security and nutrition?, and 2) have regional trade agreements contributed to the level of intra-regional trade?. We present findings from an analysis of the recently developed Pacific Food Trade Database (PFTD), informed by nutritional considerations consistent with global approaches to trade and nutrition analysis [28, 29]. We highlight the central role Fiji plays as a regional export and re-export hub in the region [30, 31]. Further, we provide a first critical analysis of the impact of the Pacific Island Countries Trade Agreement (PICTA) as a major regional food trade policy instrument.
Background on intra-regional trade in Pacific Island countries and territories
The first intra-regional agreement signed among Pacific countries was the Melanesian Spearhead Group (MSG) trade agreement in 1993, between Papua New Guinea, Solomon Islands and Vanuatu, with Fiji joining in 1996 [32]. In relation to food trade, the parties committed to eliminating duties and other restrictions, including providing exemptions on import duties for meat, fish, oils, noodles, baked goods originating in these countries. In 2001, PICTA was signed by the Cook Islands, Federated States of Micronesia, Fiji, Kiribati, Niue, Samoa, Solomon Islands, Tuvalu, Vanuatu, Nauru, Papua New Guinea and Tonga. PICTA was implemented in 2007 by Cook Islands, Federated States of Micronesia, Fiji, Kiribati, Niue, Samoa, Solomon Islands, Tuvalu and Vanuatu, and included reductions in tariff rates for most food items, albeit with fairly significant exemptions on the part of Kiribati, Niue, Papua New Guinea, Solomon Islands, Tuvalu and Vanuatu. PICTA was also envisaged by the Parties as the first step to deeper regional integration, including a common market. A treaty establishing the Micronesian Trade and Economic Community (MTEC) was concluded in 2014, including Federated States of Micronesia, Marshall Islands, and Palau, but there are no specific commitments to intra-regional trade liberalization.
Pacific Island governments envisage both political and economic benefits from regional integration, including strengthening domestic commitment across the region to liberalization, attracting development, providing a single voice in international fora, enlarging the market size, and providing a gradual adjustment towards more significant (extra-regional) liberalization [26]. Intraregional trade agreements have also played an important role in trade facilitation in the Pacific [33] and facilitated sectoral cooperation and regional service delivery [34]. The overall impact of intra-regional agreements on trade, however, has been limited. There has been some growth in intra-regional trade flows, but due to high costs, limited markets, a lack of strategic investment in agriculture and manufacturing, and bureaucratic regulations, extra-regional trade remains dominant [27, 35].
We conducted a descriptive quantitative analyses of existing regional trade data. A number of the analyses are comparative in nature in that we compare between, for example, temporal trends in quantity of food traded between countries and food types. Data include food and beverage commodities, countries, quantities and year in which trade occurred. Interpretation of the results was conducted by the authors, whose Pacific-focused trade expertise spans nutrition, policy, and trade value chains.
To characterize intra-regional trade of food and beverages (hereafter, unless specified otherwise 'food' is used as shorthand for 'food and beverages') in the Pacific we use the Pacific Food Trade Database (PFTD) [36]. The PFTD is derived from the BACI HS92 global trade database of international commodity trade [37] which uses United Nations Comtrade data as its primary and only data source. The PFTD is the result of extensive cleaning of the BACI data on food trade relevant to Pacific Island Countries and Territories (PICTs). The PFTD includes all food trade flows at subheading level across 18 PICTs for the years 1995–2018.
Some commodities were excluded because they fell outside the scope of the analysis: still and carbonated water, tobacco and alcohol. Tuna (except canned) was excluded due to ongoing concerns relating to data quality for these commodities (other types of fish and invertebrates are included). Coconuts (HS080110) were removed from analyses relating specifically to nuts, as a healthy source of food, due to the large and variable trade volumes which suggests copra has periodically been mis-classified as coconuts. It was retained elsewhere, as was copra (HS120300) because some derivatives of copra are used for human consumption and it represents a major export cash crop for many PICTs including significant intra-regional trade.
First, we present an overview of intra-regional trade relative to imports and exports with the rest of the world, and provide coarse analysis of the Pacific countries that dominate intra-regional trade. Second, we explore the temporal trends in quantity of the different types of food traded within the region. Foods were grouped with reference to commodity types and the Pacific Food Guide [38]. Third, we explore intra-regional trade of staples, and healthy and unhealthy food as key dimensions of food security and diets. The assessment of healthy and unhealthy foods used the INFORMAS classification [29]. This framework was chosen for its relevance to the context of this study following Ravuvu, Friel [39].
Fourth, we determine whether PICTA had any measurable impact on either intra-regional trade or on imports from outside the region. To make this determination we compare temporal trends in quantities of food being imported from outside the region with quantities being traded within the region, across both all PICTs and PICTs that were early adopters of PICTA. Early PICTA adopters were Cook Islands, Fiji, Niue, Samoa, Solomon Islands, Tuvalu and Vanuatu. Commodities included in analysis of imports from outside the region included only those commodities that are not produced within the region (Supplementary materials 1 [see end of text]). Commodities included in analysis of trade between PICTs include only those commodities that are produced within the region. A total of 145 commodities were included as being imported from outside the region, while a total of only 9 commodities were included as only being produced and traded within the region. This commodity distinction was necessary to control for effects of retrading (foods imported from outside the region and then exported to other PICTs with no or minimal further processing). In some instances the quantities were normalised across commodities to control for within-commodity quantity variability. Data were normalised over the range of 0 to 1 as:
$$normalised\ value=\frac{X_i-{X}_{min}}{X_{max}-{X}_{min}}$$
Where Xmin and Xmax are the smallest and largest trade quantities of the commodity reported from 1995 to 2018, and Xi. is the trade quantity to be normalised.
Overview of intra-regional food trade
Intra-regional food trade represents only a small fraction of total food imported by PICTs: rising from 0.3% in 1995 to 3.2% in 2018 (Fig. 1). In 2018, 58,712 t of food was traded intra-regionally. This was a substantial increase from the 2724 t traded intra-regionally in 1995, with the major increase occurring between 2000 (7819 t traded) and 2001 (12,325 t traded) (Fig. 2). Food trade between PICTs and non-PICTs has been dominated by Australia and New Zealand. The bulk of exports from the region are comprised of sugar and palm oil.
Intra-regional food trade compared to extra-regional trade (excluding alcoholic beverages, tuna, and water), 1995–2018
Intra-regional food trade in the Pacific Region, 1995–2018. Bars show cumulative total trade across different PICT combinations. Lines indicate per capita trade flows with PNG (solid) and without (dashed)
Fiji has consistently been the main source of intra-regional imports to PICTs, with the volume rising from 2693 t (99% of total) in 1995 to 49,900 t (85%) in 2018 (Fig. 2). Countries importing the most food from Fiji, on a per capita basis, include Cook Islands, Kiribati, Nauru, Samoa, Tonga, Tuvalu and Wallis and Futuna Islands. Exports from Papua New Guinea to other PICTs have increased from a negligible amount to nearly 10% of intra-regional food trade.
Papua New Guinea has the largest population in the region, and was the single largest destination for intra-regionally traded food from 2012 to 2014 (18–24% of trade). However, Papua New Guinea represents by far the lowest per capita consumption of intra-regionally traded foods, even during this period. In 2018, total per capita consumption of intra-regionally traded foods was 14 g/capita/ day, but excluding Papua New Guinea this rises to 45 g/capita/day (Fig. 2).
Types of food traded intra-regionally
The major food group traded intra-regionally in the Pacific is cereals, grains and flours, which represented 51% of total intra-regional food trade in 2018 (Fig. 3), 98% of which is exported from Fiji to smaller Pacific island countries. The major cereal traded in 2018 was wheat flour, at 84% of cereal, grains and flour trade and 45% of total intra-regional food trade. Rice comprised only 1% of cereal, grains and flours traded and 0.6% of total intra-regional food trade (nearly all rice is directly imported from outside the region). Cereal grains and rice are all imported from outside the region, except for limited rice production in Fiji, and are either re-traded or milled and exported as flour and milled rice.
Intra-regional trade by food groups including A tonnes imported to PICTs from other PICTs and B percentage of all imports that are imported from PICTs
Trade in processed and prepared foods, including processed meat, vegetable and 'miscellaneous' preparations, comprised 19% of intra-regional food trade in 2018. The next highest trade food groups were sweetened or flavoured beverages (8%), processed fish (5%) and sugar and confectionary (3%). Trade in root crops is negligible, totalling 78 t in 2018.
Intra-regional trade represents a small, but increasing proportion of total imports (Fig. 3). The most notable increases have been in intra-regional imports of 'cereals, grains and flours', which also rose as a proportion of total imports (from negligible in 1999 to over 4% in 2013 before declining to 2–3%). The large spike in prepared and processed foods as a proportion of total imports between 2001 and 2007 reflects the sustained increase in sugar export from Fiji to other PICTs that occurred in 2001.
Intra-regional trade in staple foods
Intra-regional trade of the major 'non-traditional' staple foods (namely rice and wheat flour) represented 3% of total trade in these foods during the period 2014–2018 (Table 1). Fifty-one percent and 26% of total trade in staple foods went to Papua New Guinea and Fiji respectively during this period. Almost all of this was imported from countries outside of the Pacific region. During this period, 2062 t of staple foods were imported into Fiji from Papua New Guinea between 2014 and 2018 (the only intra-regional export from Papua New Guinea), representing only 0.2% of staple food imports into Fiji.
Table 1 Intra-regional imports as a percentage of total imports (by weight; 2014–2018 avg)
The significance of intra-regional trade in rice and wheat varied widely across countries during the period 2014–2018. For Nauru, Tokelau, Tonga, Tuvalu and Wallis and Futuna Islands, intra-regional trade comprised more than 80% of total non-traditional staple food imports (Table 1). In contrast, intra-regional trade contributed less than 1% of imported non-traditional staple foods for the Federated States of Micronesia, Fiji, New Caledonia, Papua New Guinea and Solomon Islands.
Fiji acts as a hub for intra-regional trade in staple foods, with 98% of intra-regional trade in non-traditional staple foods coming from Fiji (Fig. 4). Fiji was the source of all intra-regional non-traditional staple food imports into Cook Islands, Federated State of Micronesia, French Polynesia, Kiribati, Marshall Islands, Nauru, New Caledonia, Niue, Papua New Guinea, Samoa, Solomon Islands, Tonga, Tuvalu, Vanuatu and Wallis and Futuna Islands. For Nauru, Samoa, Tonga, Tuvalu and Wallis and Futuna Islands, over 60% of total non-traditional staple food imports came from Fiji. An average of 895,143 t of staples entered the region per year between 2014 and 2018, 237,243 t of which was imported by Fiji; 177 g/capita/day was imported directly from outside the region to other PICTs. Of Fiji's imports roughly 11% was reexported, or processed and exported, to other PICTs.
Average annual grams per capita per day of staples (HS10, HS11) moving between PICTs and imports from outside the region for 1995–1999 and 2014–2018. Line width reflects grams per capita per day for the importing country (see scale bar). Per capita imports entering the region to countries other than Fiji are aggregated because there is negligible re-trade from PICTs other than Fiji
Intra-regional trade of healthy foods
Intra-regional trade of healthy foods, including fruit, vegetables, pulses, nuts and seeds, and root crops, is limited, both in terms of tonnage and relative to imports from outside the region (Fig. 5). The peak volume of intra-regional trade in healthy foods was 656 t in 2011.In 2018, healthy foods represented 0.3% of intra-regional trade, and 0.18% of total healthy food imports. The main extra-regional source country for healthy foods is New Zealand which exports a meaningful quantity of vegetables to the region.
Total intra-regional trade in healthy food, subdivided across four healthy food groups
Fiji was the source of 63% of healthy foods traded intra-regionally in 2018, a lower percentage than its contribution to total intra-regional food trade (85%) in the same year. The larger trade flows of staple root crops in 2011 and 2012 are cassava (HS071490 Manioc, arrowroot, sago pith etc.) from Solomon Islands to Kiribati. All intra-regional trade flows of staple root crops (2350 t) through the period are exported from high islands, and 75% of this volume is imported by atoll nations. The vast majority of healthy food imports come from outside the region. Only around 0.2% of healthy food imports were from PICTs in 2018. Fiji imports 139 g/capita/day from outside the region, mostly comprising potatoes from New Zealand. The rest of the region, excluding Fiji, directly imports only 11 g/capita/day from outside the region. Small quantities are exported from Fiji to other PICTs, predominantly atoll nations.
Intra-regional trade of unhealthy foods
Intra-regional trade in unhealthy foods – namely sugars, fatty meats, ready-to-eat snacks and meals, sweet snacks and energy dense beverages – peaked between 2002 and 2009, at an average of nearly 16,000 t traded per year (Fig. 6). Between 2014 and 2018 the yearly average was less than 10,000 t. Intra-regional trade in sweet snacks increased from 30 t in 1995 to over 2000 t in 2011 and has remained fairly steady since. Intra-regional trade in sweetened beverages increased from 17 t in 1995 to nearly 4000 t in 2002, and then remained between 4000 and 6000 t through to 2018. The overall decline is mainly due to a decline in sugar trade, which fell from an average of over 10,000 t per year to less than 1000 t per year over the same period. In particular, Fiji as the main intra-regional exporter of sugar to PICTs, ended its preferential export price on sugar (which was two to three times higher than the world market price) in December 2007 due to the European Union reform on its Common Agricultural Policy [40]. As a result, the local sugar industry in Fiji started to face stiff competition from more efficient sugar exporters worldwide from 2008 onwards.
Total intra-regional trade in unhealthy food, subdivided across five food groups. Lines (z axis) shows 3 year moving average of intra-regional trade in unhealthy food as a percentage of total unhealthy food imports, with the remainder coming from outside the region
Overall, intra-regional trade comprised 4% of total unhealthy food trade, and for eight of the 17 countries that imported unhealthy food from within the region (the only country in our study that only had extra-regional sources of unhealthy food imports was Palau), intraregional trade comprised 1% or less of total unhealthy food imports (Fig. 5). However, for Tokelau, intra-regional trade was the source of 98% of unhealthy food imports. For Nauru, Tuvalu, Vanuatu and Wallis and Futuna, intra-regional trade comprised over 20% of total unhealthy food imports.
In the late 1990s, Fiji was a hub for intra-regional trade in unhealthy foods, although the volume was minimal (Fig. 7). Between 2014 and 2018 the majority of intra-regional trade in unhealthy foods came from Fiji, but with more diversity in source than observed for staple foods. Only for Federated States of Micronesia, Marshall Islands, Niue, Samoa, Tonga, Tuvalu and Vanuatu was Fiji the source of 100% of intra-regional imports. Other source countries included Papua New Guinea, Samoa, New Caledonia, Solomon Islands, Vanuatu, French Polynesia and Marshall Islands. On a grams per capita basis Fiji is a major reexport and export hub for unhealthy food in the region.
Average annual grams per capita per day of unhealthy food moving between PICTs and imports from outside the region for 1995–1999 and 2014–2018. Line width reflects grams per capita per day for the importing country (see scale bar). Per capita imports entering the region to countries other than Fiji are aggregated because there is negligible re-trade from PICTs other than Fiji
Intra-regional trade and trade agreements
The PICTA was signed (2001) and implemented (2007) during the period under analysis. Intra-regional imports grew substantially during this period, as described above. Here, we explore whether there was any corresponding shift in the proportion of imports sourced intra-regionally compared to extra-regionally. PICTs have adopted PICTA at different times, and to differing degrees [41]. To control for ambiguity we include the following PICTs in the PICTA only analysis as they were unambiguously early adopters to PICTA; Cook Islands, Fiji, Niue, Samoa, Solomon Islands, Tuvalu and Vanuatu. Extra-regional imports to all PICTs and to PICTA early adopters have increased considerably and consistently since the mid 1990s (Fig. 8). Trade of regionally produced commodities (Supplementary Table 1) between all PICTs has been highly variable, with temporal spikes attributed to large intra-regional shipments of copra. Trade of regionally produced commodities between early PICTA adopters has been negligible through the time period. In particular, there is no aggregate evidence of an effect of PICTA on the quantity of food trade.
Temporal trend in quantity of food traded including net imports from outside the region to all PICTs (solid blue line) from outside the region to PICTs that were early adopters of PICTA (dashed blue line), all intra-regional imports (solid green line) and intra-regional imports for early adopters of PICTA (dashed green line)
When controlling for the difference in quantity of different commodities there is no significant change in quantity being imported from outside the region for either all PICTs or early PICTA adopters (Fig. 9A). If PICTA had a meaningful effect on imports from outside the region we would have expected to observe some difference between the two trends. Similarly, there is no discernible difference in the quantity of Pacific produce traded between all PICTs and between early PICTA adopters (Fig. 9B). There is some increase for both trends in the early 2000s, but the increase is ephemeral and not unambiguously attributable to PICTA.
A Mean food imports from outside the region to all PICTs and to PICTs that were early adopters of PICTA. Only commodities produced outside the region were included; B Mean intra-regional trade between all PICTs and between PICTs that were early adopters of PICTA. Only commodities that were produced within the region were included. For both graphs, each HS6 commodity type was given equal weighting to avoid bulk commodities dominating trends (see methods for calculation). Error bars show 95% confidence interval around the average trade quantity. See Supplementary Table 1 for inclusion and exclusions
Intra-regional food trade among PICTs has grown and is a significant source of traded food for many countries. Fiji acts as a regional hub, with the majority of intra-regional food imports originating in Fiji. We found a heavy reliance on intra-regional trade for staples among certain small remote island countries, including Tokelau, Tuvalu, Wallis and Futuna Islands and Tonga. This is likely to be influenced by trade routes, including having limited access to extra-regional trade, directly. However, we observed the opposite for other countries: for example, there were limited intra-regional rice and wheat imports into the Federated States of Micronesia.
Overall, intra-regional trade increased after 2001, the year PICTA was signed (but not implemented). Although PICTA wasn't implemented until 2007, it may have raised regional attention both to potential benefits of reducing trade barriers and the region as a potential market (vis a vis markets outside the region), despite our study showing limited direct impact. During this time, there have also been significant investments in trade facilitation especially for improving export standards of food crops in certain countries [42]. Overall, however, intra-regional trade in food remains a fraction of total trade. In part, this likely reflects the focus of the Pacific Island region-building exercise on extra-regional trade – similar to many other (non-EU) regional groupings [3]. It also reflects the role of long standing preferential trade agreements with extra-regional actors, particularly in relation to sugar [43].
Interest in promoting greater intra-regional trade among PICTs remained subdued throughout the 1990s, and even following PICTA there has been much greater emphasis given to promoting stronger trade links with major industrial markets where the potential for trade expansion is much greater. For example, the value of trade between the participating states of the Asia-Pacific Trade Agreement (APTA) and six PICTA countries (Fiji, Papua New Guinea, Samoa, Solomon Islands, Tonga and Vanuatu) increased steadily during the 1980s and 1990s and then grew from $121 million in 2000 to $1636 million in 2012 [44]. The majority of trade flows are exports from APTA to PICTA, but in 2012 20% was contributed by exports from PICTA to APTA members, which is notable given the size differential of member countries [44]. Many of the products exported from the region face intensive international competition and face continued pressure from substitutes, increased production from low-cost Asian producers and the inelastic demand for traditional exports [43].
The findings of this study have highlighted the growth in trade of staple foods intra-regionally, indicating an important role for Fiji (in particular) in regional food security. The increasing intra-regional contribution to imports of cereals, grains and flours over the past 20 years likely reflects a combination of re-export and local processing of staple foods, including as a result of the growing presence and scale of flour mills, particularly in Fiji. In addition, Fiji has continued to produce rice, with ongoing efforts to revitalise the rice industry [45]. This finding points to the potential for intra-regional trade to contribute to food security through increased availability and affordability of staple foods and suggests an opportunity for other commodities. For example, addressing the high cost of protein (and low consumption) in some settings, while also generating economic opportunities for development through supporting local food industries within the region [46]. This would have an additional benefit of increasing food access in times of disaster, due to overall increased regional availability (particularly given the countries in the region are mainly net-food-importers). Current high and volatile prices of staples on the international market, driven by COVID-19 and the war in Ukraine could make domestic rice production more economically viable in some PICTs. However, operationalising this potential for increased intra-regional has faced significant challenges. Several studies of intra-regional trade possibilities in the past have had no breakthrough pointing to the fact that the economic structure of PICTs and the kind of export activities they can sustain are broadly similar [26, 41].
Notable in our findings is the very limited trade in traditional staple foods, namely root crops. This likely reflects domestic production capacities and the fact that within the region there is little differential or comparative advantage in root crop production. Most PICTs still engage in root crop production and household production of traditional staple foods and vegetables is significant. For example, a 2015 agricultural survey in Tonga found that the majority (86%) of the surveyed households were active in agricultural production (cropping, livestock, fisheries, forestry or handicraft), with 37% producing for subsistence, 62% for semi-subsistence, and only 1% for commercial [47]. This indicates that majority of the households grow their own traditional food crops. Nearly all households surveyed were agricultural households, growing their own food. A similar trend of limited percentage share of households focussing on commercial (for sale) of agriculture crops is found for Samoa [48]. In Fiji, 99% of total household interviewed in rural and peri-urban areas were agricultural households, with the majority (59.4%) unpaid subsistence farmers [49].
Traditional foods are preferred in PICTs, especially in rural communities where imported foods are limited and of higher cost and there is capacity for own-production of food crops [50]. However, local consumption of traditional staple crops has been declining and across the region, food cultures have changed as people become more affluent [51]. Many of the traditional labour-intensive and time-consuming recipes are not used anymore, and younger generations have different diets from those of their parents and grandparents [52]. While the region has a very comprehensive set of food and nutrition policies, imported foods tend to be easier (cheaper and widely available) choices, and domestic staple food production is declining [53]. The limited trade in root crops also reflects the limited agricultural technologies related to storage and transport of root crops, compared to wheat and rice, which result in relatively high post-harvest losses and create disincentives for trade [54]. Trade in root crops is also challenging because sanitary and phytosanitary and technical requirements in most PICTS deters export opportunities between PICTs.
This study also identified intra-regional trade in unhealthy foods, building on previous research finding overall increases in processed food imports in Pacific Island Countries [21, 22]. This is also reflected by concerns among Pacific Island governments about unhealthy imports. For example, Tonga has implemented substantial tariff intervention to reduce unhealthy imports in recent years [55]. Overall, both intra- and extra-regional trade in unhealthy foods has been growing, and the dynamic of trade has been changing. There has been a notable shift to imports of unhealthy foods and beverages from Asia, including sugar sweetened beverages (SSBs) [22]. The Asian region has undergone major changes in agriculture and food systems, with rising food processing and export capacity [56]. Rising imports of snack foods and SSBs from Asia has also been seen in other developing regions, including Southern Africa [12].
A limitation of the study was that we were unable to ascertain the scale of retrade as a component of intra-regional trade – this is important as retrade is likely to be less beneficial to domestic economies. We were also unable to assess causation in our analysis of potential impacts of PICTA. In addition, we could not differentiate tourism as a 'destination' for food imports, due to a lack of information on the magnitude of consumption. In subsequent analyses that span the disruptions to trade and tourism due to COVID-19 and related measures, this may be a significant factor needed to help interpret trends in trade.
Policy implications
The COVID-19 pandemic and associated policy measures has disrupted the production, availability and international trade of food [15, 57]. The pandemic has highlighted the long-term lack of investment by Pacific Island Governments in local food production [58] – which is also reflected in the very limited trade in local traditional foods seen in this study. For the foreseeable future, many PICs will continue to rely on own domestic agricultural production. With renewed interest in domestic agriculture following COVID-19 this study points to an opportunity for increased investments in domestic agriculture (and storage, transport and processing) to support production and trade in traditional staple foods – which are preferred and under-supplied. Further, if PICs (and donors) may consider investing on the capacity and skills of the local agricultural sectors to not only export primary produce (cassava, fresh) but also to turn it into value added products (such as cassava flour), that could address the SPS issues currently hindering the inter-PIC trade in fresh produce. At the same time, it will also diversify exports to the point that PICs may be exporting different goods to each other.
In line with the challenges outlined above, specific domestic policy opportunities relevant to enhancing healthy food availability – including through intra-regional trade – relate to investment in supply chains, as well as in resilient and affordable access to transport and internet connectivity [58]. Such policy initiatives would enhance knowledge on upcoming market opportunities and risks, while enabling affordable inter-country transportation of healthy imported foods. In addition to investment in local food production, Ministries of Agriculture across the region are highlighting the importance of increased policy focus on encouraging youth participation and entrepreneurship in agriculture [46]. This strategy would not only to help increase agricultural productivity and also reduce dependence on imported foods and thus increase food security, but also to deal with high rates of youth unemployment.
This study also raises a broader question about the potential for regional approaches to foster 'healthier trade'. The emergence of regional trade hubs in other regions has created potential for a regional approach to improving diets and health. However, this has often occurred via health policy initiatives to improve the healthfulness of the food supply in parallel to ongoing efforts towards regional economic cooperation and liberalization, rather than via trade policy measures. In South Africa, for example, efforts to reduce salt and sugar in processed foods and to influence the nutrient composition of foods has been pursued through the Southern African Development Community (SADC) [12]. In the Pacific region, Fiji's role as hub for intra-regional food trade means that fortified flour – an effective intervention domestically [59] – is also benefiting other countries in the region. In relation to trade in healthy food, there is potential for more food preserving and manufacturing to foster intra-regional trade in 'Pacific' foods that are minimally processed, for instance canned fish products, making them easier to trade and contributing to policy objectives for increased value-adding. Lifting production of locally manufactured food products for export trade will likely increase scale and affordability needed to appeal to local markets [55]. Lifting production of niche (often healthy) food products like dried fruit and juices and staple crop flours (ie cassava) is already an aim for some PICTs, such as Vanuatu [46]. However, enhancing intra-regional trade in local foods will require strengthening quality control of exports between PICTs, increasing capacity for adherence to technical and phytosanitary measures imposed by each PICTS, and investment in facilities and harmonization of requirements [60, 61].
This study has documented the small but significant role of intra-regional food trade for food and nutrition security in the Pacific Island region. Fiji acts as a regional hub, and we found a heavy reliance on intra-regional trade for staples among small remote island countries. Notable in our findings is the very limited trade in root crops. Although there is a regional trade agreement, and efforts to enhance intra-regional trade have likely contributed to its growth in the region, we were unable to identify a clear impact of the main regional trade agreement on intra-regional trade. In the current context of significant food system disruption due to the COVID-19 pandemic and rising commodity prices, greater investment in traditional food export could enhance food security and nutrition in the Pacific region. More broadly, this study also echoes previous research that suggests that regional approaches offer an opportunity to foster trade in healthy foods.
All data generated or analysed during this study are included in this published article [and its supplementary information files].
Pacific Island Countries and Territories include the geographic members of the Pacific Community (SPC). Detailed list here: https://www.spc.int/our-members/
APTA:
Asia-Pacific Trade Agreement
MSG:
Melanesian Spearhead Group
MTEC:
Micronesian Trade and Economic Community
PFTD:
Pacific Food Trade Database
PICTs:
Pacific Island Countries and Territories
PICTA:
Pacific Island Countries Trade Agreement
SADC:
Southern African Development Community
SSB:
Sugar Sweetened Beverage
Krapohl S, Fink S. Different paths of regional integration: trade networks and regional institution-building in Europe, Southeast Asia and southern Africa. J Common Mark Stud. 2013;51(3):472–88.
Solingen E, Malnight J. Globalization, domestic politics, and regionalism. In: Börzel TA, Risse T, editors. The Oxford handbook of comparative regionalism. Oxford: Oxford University Press; 2016.
Krapohl S. Games regional actors play: dependency, regionalism, and integration theory for the global south. J Int Relat Dev. 2020;23(4):840–70.
Börzel TA. Theorizing regionalism: cooperation, integration, and governance. In: Börzel TA, Risse T, editors. The Oxford handbook of comparative regionalism. Oxford: Oxford University Press; 2016. p. 41–63.
United Nations. Sustainable Development Goals New York: Unived Nations Department of Economic and Social Affairs; 2015 [Available from: https://sustainabledevelopment.un.org/?menu=1300.
Suleymenova K, Syssoyeva-Masson I. Approaches to promoting intra-regional trade in staple foods in sub-Saharan Africa. In: K4D Helpdesk Report. Brighton: Institute of Development Studies; 2017.
Pasara MT, Diko N. The effects of AfCFTA on food security sustainability: an analysis of the cereals trade in the SADC region. Sustainability. 2020;12(4):1419.
Yudhatama P, Nurjanah F, Diaraningtyas C, Revindo MD. Food security, agricultural sector resilience, and economic integration: case study of ASEAN+ 3. Jurnal Ekonomi & Studi Pembangunan. 2021;22(1):89–109.
Ward M, Herr H, Pédussel WJ. South Asian free trade area and food trade: implications for regional food security. In: Working Paper. Berlin: IPE; 2020.
Werner M, Isa Contreras P, Mui Y, Stokes-Ramos H. International trade and the neoliberal diet in Central America and the Dominican Republic: bringing social inequality to the center of analysis. Soc Sci Med. 2019;239:112516.
Thow AM, Hawkes C. The implications of trade liberalization for diet and health: a case study from Central America. Glob Health. 2009;5(5).
Thow AM, Sanders D, Drury E, Puoane T, Chowdhury SN, Tsolokile L, et al. Regional trade and the nutrition transition: opportunities to strengthen NCD prevention policy in the southern African development community. Glob Health Action. 2015;8.
Morsy H, Salami A, Mukasa AN. Opportunities amid COVID-19: advancing intra-African food integration. World Dev. 2021;139:105308.
Farrell P, Thow AM, Wate JT, Nonga N, Vatucawaqa P, Brewer T, et al. COVID-19 and Pacific food system resilience: opportunities to build a robust response. Food Security. 2020;12(4):783–91.
Iese V, Wairiu M, Hickey GM, Ugalde D, Salili DH, Walenenea J Jr, et al. Impacts of COVID-19 on agriculture and food systems in Pacific Island countries (PICs): evidence from communities in Fiji and Solomon Islands. Agric Syst. 2021;190:103099.
Popkin BM, Adair LS, Ng SW. Global nutrition transition and the pandemic of obesity in developing countries. Nutr Rev. 2012;70(1):3–21.
Andrew NL, Allison EH, Brewer T, Connell J, Eriksson H, Eurich JG, et al. Continuity and change in the contemporary Pacific food system. Glob Food Secur. 2022;32:100608.
Savage A, McIver L, Schubert L. The nexus of climate change, food and nutrition security and diet-related non-communicable diseases in Pacific Island countries and territories. Clim Dev. 2020;12(2):120–33.
Campbell JR. Development, global change and food security in Pacific Island countries. In: Connell J, Lowitt K, editors. Food security in Small Island states. Singapore: Springer; 2020. p. 39–56.
Thow AM, Snowdon W. The effect of trade and trade policy on diet and health in the Pacific Islands. In: Hawkes C, Blouin C, Henson S, Drager N, Dubé L, editors. Trade, food, diet and health: perspectives and policy options. Oxford: Wiley-Blackwell; 2010. p. 147–68.
Vakatakiofa S, Gani A. Trade policy and health implication for Pacific island countries. Int J Soc Econ. 2017;44(6):816–30.
Sievert K, Lawrence M, Naika A, Baker P. Processed foods and nutrition transition in the Pacific: regional trends, patterns and food system drivers. Nutrients. 2019;11(6):1328.
Dodd R, Reeve E, Sparks E, George A, Vivili P, Tin STW, et al. The politics of food in the Pacific: coherence and tension in regional policies on nutrition, the food environment and non-communicable diseases. Public Health Nutr. 2020;23(1):168–80.
United Nations Economic and Social Commission for Asia and the Pacific. Asia-Pacific trade and investment briefs: the developing Pacific Islands United Nations economic and social commission for the Asia Pacific: UNESCAP; 2017.
Morgan W. Much lost, little gained? Contemporary trade agreements in the Pacific Islands. J Pac Hist. 2018;53(3):268–86.
Narsey WPICTA. PACER and EPAs: weaknesses in Pacific island countries trade policies. Pacific. Econ Bull. 2004;19(3).
Maiti D, Kumar S. Regional agreements, trade cost and flows in the Pacific. Econ Polit. 2016;33(2):181–99.
Ravuvu A, Friel S, Thow AM, Snowdon W, Wate J. Protocol to monitor trade agreement food-related aspects: the Fiji case study. Health Promot Int. 2017:dax020.
Friel S, Hattersley L, Snowdon W, Thow AM, Lobstein T, Sanders D, et al. Monitoring the impacts of trade agreements on food environments. Obes Rev. 2013;14:120–34.
Asia-Pacific Trade and Investment Agreement Database. Trade agreements and arrangements in the Pacific subregion: APTIAD; 2012.
Bell JD, Sharp MK, Havice E, Batty M, Charlton KE, Russell J, et al. Realising the food security benefits of canned fish for Pacific Island countries. Mar Policy. 2019;100:183–91.
MSG Secretariat. Melanesian Spearhead Group. Accessed: 9 June 2022. Available at: https://msgsec.info/. 2021.
Pomfret R. Multilateralism and regionalism in the South Pacific: World Trade Organization and regional fora as complementary institutions for trade facilitation. Asia & the Pacific Policy Studies. 2016;3(3):420–9.
Morgan W. Trade negotiations and regional economic integration in the Pacific Islands forum. Asia & the Pacific Policy Studies. 2014;1(2):325–36.
Mahadevan R, Asafu-Adjaye J. Unilateral liberalisation or trade agreements: which way forward for the Pacific? World Econ. 2013;36(10):1355–72.
Brewer TD, Andrew NL, Sharp MK, Thow AM, Kottage H, Jones S. A method for cleaning trade data for regional analysis: the Pacific food trade database (version 2, 1995–2018): Pacific community working paper. Noumea: The Pacific Community (SPC); 2020.
Gaulier G, Zignago S. Baci: international trade database at the product-level (the 1994–2007 version). https://ssrn.com/abstract=1994500 2010.
SPC. Pacific guide to healthy eating. Noumea: Pacific Community; 2002.
Ravuvu A, Friel S, Thow A-M, Snowdon W, Wate J. Monitoring the impact of trade agreements on national food environments: trade imports and population nutrition risks in Fiji. Glob Health. 2017;13(1):33.
Lal P, Rita R. Potential impacts of EU sugar reform on the Fiji sugar industry. Pac Econ Bull. 2005;20(2):18–42.
Anukoonwattaka W. APTIAD briefing note: trade agreements and arrangements in the Pacific subregion: United Nations Economic and Social Commission for Asia and the Pacific; 2012.
Moorhead A. Pardi: the pacific agribusiness researchfor development initiative. Partners in Research for Development. 2015;2:6–9.
Connell J, Soutar L. Free trade or free fall? Trade liberalization and development in the Pacific and Caribbean. Social and Economic Studies. 2007;56:41–66.
Cho J-W, Ratna RS. The Asia-Pacific trade agreement: promoting south-south regional integration and sustainable development. United Nations Economic and Social Commission for Asia and the Pacific. 2016.
FAO. Enhancing local rice production in Fiji - TCP/FIJ/3502. Rome: United Nations Food and Agriculture Organization; 2019.
Reeve E, Ravuvu A, Farmery A, Mauli S, Wilson D, Johnson E, et al. Strengthening food systems governance to achieve multiple objectives: A comparative instrumentation analysis of food systems policies in Vanuatu and the Solomon Islands. Sustainability. 2022;14(10):6139.
MAFFF (Ministry of Agriculture F, Forests and Fisheries),. Tonga National Agriculture Census 2015. Nuku'alofa: Ministry of Agriculture, Food, Forests and Fisheries (MAFFF); Tonga Statistics Department (TSD); Food and Agriculture Organisation of the United Nations (FAO). Available at: https://tonga-data.sprep.org/resource/tonga-national-agriculture-census-2015; Accessed 25 Feb 2022.; 2015.
Government of Samoa. Report on Samoa Agricultural Survey. Apia: Samoa Bureau of Statistics; 2015.
Government of Fiji. 2020 Fiji Agricultural Census; Volume 1: General Table & Descriptive Analysis Report. Suva: Ministry of Agriculture; United Nations Food and Agriculture Organization; 2021.
Savage A, Bambrick H, Gallegos D. From garden to store: local perspectives of changing food and nutrition security in a Pacific Island country. Food Security. 2020;12(6):1331–48.
Burlingame B, Vogliano C, Eme PE. Leveraging agricultural biodiversity for sustainable diets, highlighting Pacific Small Island developing states. Adv Food Secur Sustain. 2019;4:133–73.
Farrell P, Thow AM, Rimon M, Roosen A, Vizintin P, Negin J. An analysis of healthy food access amongst women in Peri-urban Honiara. Hawai'i J Health Soc Welf. 2021;80(2):33.
Halavatau S. Regional partnership to address food production crisis in the Pacific Islands. In: Karihaloo JL, Mal B, Ghodake R, editors. High level policy dialogue on Investment in Agricultural Research for sustainable development in Asia and the Pacific Bangkok, Thailand; 8-9 December 2015. Bangkok: Asia-Pacific Association of Agricultural Research Institutions (APAARI); 2015.
Stathers T, Holcroft D, Kitinoja L, Mvumi BM, English A, Omotilewa O, et al. A scoping review of interventions for crop postharvest loss reduction in sub-Saharan Africa and South Asia. Nat Sustain. 2020;3(10):821–35.
Win Tin ST, Kubuabola I, Ravuvu A, Snowdon W, Durand AM, Vivili P, et al. Baseline status of policy and legislation actions to address non communicable diseases crisis in the Pacific. BMC Public Health. 2020;20(1):1–7.
Reardon T, Echeverria R, Berdegué J, Minten B, Liverpool-Tasie S, Tschirley D, et al. Rapid transformation of food systems in developing regions: highlighting the role of agricultural research & innovations. Agric Syst. 2019;172:47–59.
Steenbergen DJ, Neihapi P, Koran D, Sami A, Malverus V, Ephraim R, et al. COVID-19 restrictions amidst cyclones and volcanoes: A rapid assessment of early impacts on livelihoods and food security in coastal communities in Vanuatu. Mar Policy. 2020;121:104199.
Guell C, Brown CR, Navunicagi OW, Iese V, Badrie N, Wairiu M, et al. Perspectives on strengthening local food systems in Small Island developing states. Food Secur. 2022:1–14.
Zimmerman S, Baldwin RJ. The link between organizational bodies and fortification strategies and practice: the role of the flour fortification initiative. Handbook of Food Fortification and Health: Springer; 2013. p. 3–13.
Trivedi J, Bajt D, Duval Y, Yoo JH. Non-tariff measures in regional trade agreements in Asia and the Pacific: SPS. TBT and government procurement. 2019.
Anderson G. The SPS agreement: Some issues for small island states. Droit et Env dans le Pacifique Sud; Problématiques et perspectives croisées–Law and Env in the South Pacific and Bey, Intersec Prob and Perspective. 2005;5.
We are grateful to Eleanor McNeill for production of graphics included in the manuscript.
This work was funded by the Australian Government through ACIAR projects FIS/2016/300 and FIS/2018/155.
Menzies Centre for Health Policy and Economics, School of Public Health Charles Perkins Centre (D17), University of Sydney, Camperdown, NSW, 2006, Australia
Anne Marie Thow & Erica Reeve
Non Communicable Disease Program, Public Health Division, Pacific Community, Suva, Fiji
Amerita Ravuvu
United Nations Economic and Social Commission for Asia and the Pacific, Bangkok, Thailand
Siope Vakataki Ofa
Australian National Centre for Ocean Resources and Security, University of Wollongong, Wollongong, NSW, Australia
Neil Andrew & Tom Brewer
Global Centre for Preventive Health and Nutrition (GLOBE), Institute for Health Transformation, School of Health and Social Development, Deakin University, 1 Gheringhap Street, Geelong, Victoria, 3220, Australia
Erica Reeve
WorldFish, Honiara, Solomon Islands
Jillian Tutuo
Anne Marie Thow
Neil Andrew
Tom Brewer
AMT, TB and NLA conceptualized the study, with guidance from AR, SVO, ER and JTW. TB extracted and conducted preliminary analysis of the data. TB and AMT led the analysis of findings and implications, with iterative guidance from all authors. AMT led the writing, with input from all authors, and all authors read and approved the final manuscript.
Correspondence to Anne Marie Thow.
Disclaimer: The findings and views expressed herein are those of the authors, and do not necessarily reflect the positions or views of the United Nations.
Additional file 1:
Supplementary 1. Attribution of HS92 commodities for analysis of Intra-regional trade and trade agreements.
Thow, A.M., Ravuvu, A., Ofa, S.V. et al. Food trade among Pacific Island countries and territories: implications for food security and nutrition. Global Health 18, 104 (2022). https://doi.org/10.1186/s12992-022-00891-9
Regionalism
Trade and health
|
CommonCrawl
|
communications earth & environment
Distinct influences of large-scale circulation and regional feedbacks in two exceptional 2019 European heatwaves
Future equivalent of 2010 Russian heatwave intensified by weakening soil moisture constraints
L. M. Rasmijn, G. van der Schrier, … W. Hazeleger
2021 North American heatwave amplified by climate change-driven nonlinear interactions
Samuel Bartusek, Kai Kornhuber & Mingfang Ting
Changes in land-atmosphere coupling increase compound drought and heatwaves over northern East Asia
Ye-Won Seo & Kyung-Ja Ha
Amplification of mega-heatwaves through heat torrents fuelled by upwind drought
Dominik L. Schumacher, Jessica Keune, … Diego G. Miralles
Drivers behind the summer 2010 wave train leading to Russian heatwave and Pakistan flooding
G. Di Capua, S. Sparrow, … D. Coumou
On the curious case of the recent decade, mid-spring precipitation deficit in central Europe
M. Ionita, V. Nagavciuc, … O. Rakovec
Anthropogenic warming exacerbates European soil moisture droughts
L. Samaniego, S. Thober, … A. Marx
Analysing the 2021 North American heatwave to understand extraordinary heat events
Hot extremes have become drier in the United States Southwest
Karen A. McKinnon, Andrew Poppick & Isla R. Simpson
Pedro M. Sousa ORCID: orcid.org/0000-0002-0296-42041,
David Barriopedro2,
Ricardo García-Herrera2,3,
Carlos Ordóñez ORCID: orcid.org/0000-0003-2990-01953,
Pedro M. M. Soares1 &
Ricardo M. Trigo ORCID: orcid.org/0000-0002-4183-98521,4
Communications Earth & Environment volume 1, Article number: 48 (2020) Cite this article
Atmospheric dynamics
Climate-change impacts
Two separate heatwaves affected western Europe in June and July 2019, in particular France, Belgium, the Netherlands, western Germany and northeastern Spain. Here we compare the European 2019 summer temperatures to multi-proxy reconstructions of temperatures since 1500, and analyze the relative influence of synoptic conditions and soil-atmosphere feedbacks on both heatwave events. We find that a subtropical ridge was a common synoptic set-up to both heatwaves. However, whereas the June heatwave was mostly associated with warm advection of a Saharan air mass intrusion, land surface processes were relevant for the magnitude of the July heatwave. Enhanced radiative fluxes and precipitation reduction during early July added to the soil moisture deficit that had been initiated by the June heatwave. We show this deficit was larger than it would have been in the past decades, pointing to climate change imprint. We conclude that land-atmosphere feedbacks as well as remote influences through northward propagation of dryness contributed to the exceptional intensity of the July heatwave.
Heatwaves (HWs) are among the most concerning extreme meteorological events, as they have a wide range of impacts, including human health (e.g. increased mortality and morbidity)1,2 and significant socio-economic and ecological effects, such as wildfires and poor air quality events3,4, droughts5 and peaks in energy consumption demand6,7. Within the context of global warming, an increased frequency in extremely warm events is foreseen, comprising HWs of unprecedented extension and duration8,9,10. 2019 was the second warmest year at the global scale, only surpassed by the strong El-Niño year of 201611. Unsurprisingly, summer 2019 presented exceptional HWs in Europe, exceeding notorious episodes which occurred just 1 year before in the also very hot summer of 201812,13. In terms of affected areas, the 2019 HW events resembled to a large extent the 2003 summer HW14,15 and in many places temperature extremes even shattered those of 2003. In late June an outstanding HW began in southwestern Europe12, and extended towards most of France and parts of central Europe. During this event the city of Vérargues in southeastern France reached an astonishing daily maximum temperature (TX hereon) of 46 °C on June 28th. This was the first time temperature measurements exceeded 45 °C in France. Just a few weeks later, another exceptional HW set new historical values in France and other European countries. For example, Paris registered a TX of 43 °C, surpassing the previous record standing since 1947 by ~2 °C. Furthermore, for the first time since the beginning of meteorological observations, Belgium and the Netherlands exceeded the 40 °C barrier. Fortunately, summer 2019 caused considerably less mortality excess than previous HWs, including the devastating 2003 event16. This might result from the combination of human factors, which include the lessons learned from the 2003 HW (i.e. early warning systems, better preparedness and societal awareness, deployment of sheltering and water-cooling facilities, use of air conditioning, etc.), and the shorter duration of both 2019 HWs.
Most extreme temperature events are partially driven by anomalous large-scale atmospheric circulation. However, the current rate of warming (i.e. thermodynamic changes) is sufficient to produce exceptional HWs, even without unprecedented anomalies in the large-scale circulation. Contrasting with the lack of robust projections in dynamical changes17, recent works indicate robust and significant increases in maximum HW magnitude over large regions, even for 1.5 °C global warming targets8. Moreover, anthropogenic forcing has already caused a 7‐fold increase in the likelihood of extreme heat events18. In addition to direct radiative effects of increasing greenhouse gases concentrations, the potential contribution of enhanced local land–atmosphere feedbacks has also been acknowledged19. Recent studies have further explored non-local feedbacks in recent mega-HW events20,21. Local drying and subsequent enhanced surface heat fluxes, together with horizontal warm advection and heat accumulation in the atmospheric boundary layer, have been shown to contribute to the magnitude of temperature anomalies during the August 2003 HW22. Very similar processes were observed in the 2010 Russian mega-HW22,23. Recently, the same events have been explored to introduce the concept of upwind soil dryness21 (also referred to as self-propagation20). These conceptual models illustrate how air masses warmed by sensible heat fluxes (due to pre-conditioning dry soil conditions) can be advected downwind, stimulating land–atmosphere feedbacks in nearby regions that contribute to a progressive set-up for HW occurrence.
Nevertheless, the combined effect of regional scale feedback processes and large-scale atmospheric circulation is crucial for understanding the development of extreme heat events24. Regarding the latter, several studies have described the weather systems associated with European HWs, highlighting the key role of blocking/ridge occurrence12,25,26,27. Given this major control of the dynamics, other studies have used flow-analogue approaches to quantify the contribution of thermodynamic changes to the observed magnitude of outstanding recent HW events28,29,30. Here we aim to investigate the combined roles of large-scale atmospheric circulation and soil moisture conditions in the occurrence and severity of the summer 2019 HWs in Europe. The synoptic conditions are analyzed for both events to characterize large-scale circulation features. Thermodynamic aspects are also considered, including regional feedback processes and their lagged effects. In particular, we investigate soil-atmosphere processes observed since the June 2019 HW and their potential role in amplifying the magnitude of the July 2019 HW.
How anomalous was summer 2019 in Europe?
To place the 2019 European summer into a long-term context, we first derived estimates of the European mean summer temperature anomalies since 1500 using a multi-proxy reconstruction (1500–1900)31 and an observational dataset (1901–2019)32. Further details are provided in the Data and Methods section. Summer 2019 falls within the top five warmest recorded in Europe since the early 16th century, only surpassed by other recent devastating summers (2018, 2010 and 2003; Fig. 1a). Although extremely warm conditions for the 2019 summer were mainly confined to western and some central areas of the continent, temperature anomalies were large enough to produce an overall continental anomaly close to +2 °C (with respect to 1981–2010). Recent results estimate a return period of nearly 300 years for a similar event, taking into account recent climatic conditions18. As seen in Fig. 1b, warm summers have become more frequent since the last decades of the 20th century, and the 21st century concentrates an unusual frequency of extreme summers when compared to the long-term variability (1500–2019). This is reflected by the dominance of post-2000 events in the high-end tail of the European summer temperature distribution (histogram, Fig. 1a), and the pronounced shift in the 30-year Gaussian fitted distributions between 1960–1989 and 1990–2019 (Fig. 1a, light and dark grey shading).
Fig. 1: Summer 2019 in Europe.
a European summer (JJA) land temperature anomalies (°C, with respect to 1981–2010) for 1500–2019 (vertical lines) and their probability density function (percentage, histogram). The five warmest (coldest) summers of 1500–2019 are highlighted in red (blue). Light (dark) grey shading shows a Gaussian fit of the distribution for 1960–1989 (1990–2019). These distributions are displayed for illustration purposes only, and they should not be considered a robust representation of the true distributions given the limited sample size. b Smoothed running decadal frequency of extreme summers (>95th percentile of 1500–2019), with dotted line showing the maximum decadal value that could be expected by random chance (p < 0.05). c Average TX (daily maximum temperature) anomalies for the 2019 and 2003 summers (calculated with respect to their corresponding previous 30-year climatological means), and their difference. d Same as (c) but for the highest TX anomalies registered in each summer and grid cell, regardless of the day of occurrence.
Interestingly, the European mean temperature anomaly of summer 2019 falls very close to that of summer of 2003 (Fig. 1a). The spatial distribution of TX anomalies averaged for summer 2019 was also somewhat similar to that observed in 2003, according to the E-OBS dataset (see Fig. 1c). Taking into account the comparable magnitude and spatial signatures of the 200314,33 and 2019 summers, we used the former as a "benchmark" to evaluate the exceptionality of the 2019 summer temperature anomalies. Given the fast pace of current atmospheric warming, the comparison of 2003 and 2019 summers was performed by defining temperature anomalies with respect to two distinct baselines: (i) the full available period (1950–2018) and (ii) the previous 30-year period at the time these summers occurred (i.e. 1973–2002 for 2003 and 1989–2018 for 2019), which leads to a warmer climatological baseline for 2019 than for 2003. As shown in Fig. 1c, summer mean TX anomalies were in general larger for 2003 than in 2019 with respect to their corresponding climatological conditions. However, the highest daily TX anomalies registered in 2019 exceeded those observed in 2003, even if anomalies are computed with respect to their corresponding climatologies (Fig. 1d). Specifically, daily TX in northeastern France and Benelux surpassed climatological values by nearly 20 °C during summer 2019, compared with the maximum TX anomalies observed in the same regions during summer 2003 (~16 °C). Note that these values are also the warmest TX anomalies of the continent in both summers. Therefore, the dichotomy between summer averages and daily values reflects that while summer 2003 was more anomalous for the climatic conditions expected at that time (essentially due to the longer nature of the August 2003 HW), the 2019 HWs were generally more intense than the 2003 HW at daily time scales in many places of western and central Europe. Similar conclusions are obtained when using the full period (1950–2018) as baseline (Supplementary Fig. 1). To illustrate the contribution of the non-stationary climate conditions to the magnitude of recent record-breaking events, Supplementary Fig. 2a shows the difference between the summer mean TX 30-year climatologies as of 2019 and 2003 in Europe. In just a ~15-year period, TX normals have increased by more than 1 °C in summer over most areas (and even by ~2 °C at some locations of southern Europe). Additionally, the rate of record-breaking events has also been increasing over the European continent (see Supplementary Fig. 2b), consistent with the rise in European summer mean temperatures during that period (see Supplementary Fig. 2c): Actually, approximately 2/3 of historical European TX extremes have been observed in the last two decades (i.e. post-2000).
To stress the exceptionality of the 2019 HWs, Fig. 2a presents the areas where all-time records in TX were hit during that summer. The spatial pattern shows a strong resemblance with the corresponding map for summer 2003, in particular over France and Benelux (see Supplementary Fig. 3). This means that a substantial part of the records set in summer 2003 was broken during 2019, with the exception of some areas in southwestern Europe (e.g. western Iberia, where 2003 temperatures were shattered during summer 201812).
Fig. 2: The 2019 European heatwaves.
a Maximum daily TX (°C) during summer 2019 according to the E-OBS dataset (shading), and areas where all-time records (since 1950) were broken (hatched areas, with grey darkness indicating the month of occurrence). b Total number of heatwave days (#, shading) and average Z500 anomaly (m, contours) for the June HW. Dots indicate regions where the heatwave duration was above the 90th percentile of the local distribution obtained from all summer heatwave events of 1948–2019 that affected western Europe (WEU, [43°–53° N, 0°–10° E]) according to our algorithm (see Methods). c Same as (b), for the July HW.
The unprecedented temperatures during summer 2019 were associated with two clearly distinct HWs in late June and late July. In Fig. 2b, c the spatial distribution and duration of these HW events are depicted, as diagnosed from a novel HW tracking algorithm (see "Data and Methods"). The panels show that the spatial distribution of areas under HW conditions during July extended much further north than those during the June HW. This is in agreement with the timing of new all-time records presented in Fig. 2a. During July, unprecedented TX was reported over larger areas and dominated higher latitudes, including more than half of the French territory, the Benelux, western Germany, southeastern England and parts of Scandinavia. In contrast, daily all-time records in June were essentially restricted to southeastern France and northeastern Iberia. In spite of this, the highest absolute values (TX > 45 °C) were observed during the June HW. The persistence of HW conditions was also more prominent over land during the June HW (cf. dots in Fig. 2b, c), with large areas of western Europe experiencing an extremely high number of HW days. These differences in the spatial signatures of the 2019 HWs are also reflected in the latitudinal location of the 500 hPa geopotential height (Z500) anomalies (contour lines in Fig. 2b, c), suggesting distinct atmospheric circulation patterns.
Atmospheric circulation during the 2019 HWs
In this section, we describe the large-scale atmospheric circulation configurations behind the summer 2019 HWs. The temporal evolution and spatial tracking of the two HWs (summarized in Supplementary Fig. 4) show that the initial location of the HW centre was detected much further south in June than in July. In the former, HW conditions originated over northern Africa and then migrated towards northern France, before affecting eastern Europe during its later stages. In contrast, the July HW onset was detected over France and then moved to higher latitudes, reaching areas close to the Arctic towards the end of its lifecycle.
Despite these differences, a relevant common factor can be identified. Both events displayed a classical pattern of Z500 positive anomalies over the affected areas, accompanied by the presence of a low-pressure system in the eastern Atlantic (see also Supplementary Fig. 4). 1000–500 hPa geopotential height thicknesses averaged for the HW periods are presented in Fig. 3a, b, revealing pronounced ridge-like patterns in both events, extending from northern Africa towards western Europe. However, the ridge affecting southwestern Europe during the June HW was stronger and better defined (i.e. sharper zonal gradients). As a result, a stronger southerly wind component characterized this first HW, when compared to relatively more stagnant conditions during the late July episode. This is in agreement with the strong intensity of the Saharan intrusion (see "Data and Methods" for details) observed during the first event (Fig. 3a), when an air mass with desertic features reached unprecedented latitudes over France. Saharan intrusions have been shown to be associated with extreme heat events in southwestern Europe12,34,35, as they present very high potential temperatures and low moisture content, favoring intense surface warming under anticyclonic conditions. During the July HW, air masses with such thermodynamic properties were not detected further north than the western Mediterranean (Fig. 3b). In consequence, these results (supported by the HWs evolution) suggest a more pronounced influence of warm advection during the June HW.
Fig. 3: Synoptic configuration and forcing mechanisms.
a Number of days (shading) when a Saharan intrusion was detected in each grid cell during the June HW. Black dots represent areas where the occurrence of Saharan intrusions was unprecedented (since at least 1948). Lines depict the composite of 1000–500 hPa geopotential height thickness (dam) for the days when our algorithm detects heatwave conditions. The dashed contour at 580 dam indicates the minimum threshold for Saharan dust intrusion. Panel adapted from Sousa et al.12. b Same as (a), but for the July HW. c Temporal evolution of the 1000–850 hPa temperature anomaly (°C, with respect to 1981–2010; grey line; right y-axis) and the contributors to the temperature tendency (°C day−1; coloured lines; left y-axis) over WEU, from lag −8 to lag +8 days of the June HW onset there (24 June). Grey shading represents days with HW conditions in WEU. d Same as (c) but for the July HW (onset on 23 July). e Temporal evolution of the fractional area (%) dominated by each forcing during the June HW. Grey shading as for previous panels. f Same as (e) but for the July HW. A 3-day smoothing is applied to the series presented in (c–f).
Following the Eulerian methodology developed in previous studies12,26, in Fig. 3c, d the main physical mechanisms contributing to the temperature anomalies in the lower troposphere are examined for both HWs (see "Data and Methods"). Air motions, both horizontal (warm advection, red line) and vertical (strong adiabatic heating due to subsidence, blue line) are often important for the establishment and maintenance of HWs over Europe, although their relative contributions may differ22,36. This is also the case for the 2019 HWs. For the June HW, horizontal advection (red) was relevant before and at the onset of the HW, while vertical descent (blue) was essential for its maintenance (Fig. 3c). Differently, diabatic processes (green line) played a more important role for the setup of the July HW (Fig. 3d). This diversity in the underlying processes of the HW events is even more noticeable considering the fraction of areas (within western Europe, WEU, [43°– 53° N, 0°–10° E]) where each contributing factor accounted for the largest temperature changes during the HWs lifecycles (Fig. 3e, f). Accordingly, as discussed in detail below, regional diabatic processes played a key role during the July HW. In Supplementary Fig. 5 (top panels) these differences are reinforced by the day-to-day evolution of the vertical profiles of temperature anomalies and horizontal wind averaged over WEU. During the June HW air masses presented reduced vertical gradients (presumably associated with the presence of the vertically homogeneous Saharan warm air intrusion) as compared to the July HW. Also, wind vectors support the major role of horizontal advection in the onset of the June HW, contrasting with more stagnant conditions at the peak of both HWs. This is further evidenced by the mean WEU vertical profiles of absolute temperature averaged over the HWs duration, as well as the instantaneous profiles at the peak of the HWs (bottom panels of Supplementary Fig. 5). Moreover, during the July HW, temperature anomalies seem to propagate upwards from the surface a few days prior to the HW onset, suggesting a progressive surface-atmosphere coupling, building up in the lower troposphere towards the HW peak. A similarly gradual warming process has been reported in the boundary layer, leading to self-intensification of near-surface temperatures during the well-known mega-HWs observed in western (2003) and eastern (2010) Europe22. In the next section, we further explore whether the diabatic processes that dominated the establishment of the July HW were influenced by land–atmosphere feedbacks.
Amplification of the late July 2019 HW due to soil desiccation
To explore the presence of land–atmosphere feedbacks during the July HW, we first analyzed the temporal evolution of a set of relevant variables averaged over a box covering the region with the highest TX anomalies (northeastern France and Belgium) as presented in Fig. 4. The preceding June HW contributed to strong losses in soil moisture content in that region, with this drying also reinforced by above-average radiative fluxes at the surface and low precipitation throughout July (see Supplementary Fig. 6). Consequently, persistent soil desiccation occurred between the two HWs. This resulted in anomalous surface heat fluxes, as shown in the lower panel of Fig. 4a. The three-week period in between the two HWs was characterized by an approximate doubling (halving) of the sensible (latent) heat fluxes when compared to the corresponding climatological values for that time of the year. This is reflected in the recurrence of days with Bowen ratio values above 1, indicating that energy partition was dominated by sensible heat fluxes from the surface, due to soil moisture limited latent fluxes (see also Supplementary Fig. 6). These results point to a contribution from regional soil moisture deficit to near-surface warming that persisted until the onset of the July HW (i.e. local land–atmosphere processes).
Fig. 4: Contributions to the July HW.
a Evolution of a set of variables related to HWs and surface processes, averaged for a regional box over the record-breaking area of the July HW (NE France/Belgium [48°–50.5° N, 2°–6° E]): Upper panel shows TX (°C) and 0–10 cm soil moisture (volumetric fraction), while lower panel shows latent/sensible heat fluxes and net radiative flux anomalies (W/m2). Dashed lines represent the climatological values and hatched areas correspond to positive (negative) anomalies for TX and sensible heat flux (soil moisture). Days with Bowen Ratio above 1 are also presented (red bars in lower panel), illustrating periods where upward fluxes of sensible heat exceed those of latent heat. b Sensible heat flux anomalies (shading, W/m2) averaged during the 7 days prior to the July HW, with vectors depicting the mean near-surface wind during the same period. Dots represent areas with soil moisture deficits (different sizes depict 10%, 20 and 30% deficit), averaged during the 15 days before the July HW onset (here using the C3S satellite derived product). Black box represents the area considered for the series presented in a).
Figure 4b illustrates relevant fields for the land–atmosphere coupling on a larger spatial domain than the regional box over NE France/Belgium considered in Fig. 4a. The areas with significant soil dryness (dots) in the weeks preceding the July HW are in good spatial agreement with subsequent large sensible heat flux positive anomalies (shading) during the build-up of the July HW over NE France/Belgium. Collocated large anomalies of both fields extended over large areas, suggesting land–atmosphere coupling beyond NE France/Belgium, particularly to the south of the region hit by the July HW. This, along with the mean near-surface wind direction observed in the days preceding the July HW, suggests similar processes to those describing the concept of self-propagation21,22. Under the presence of southerly winds, dry air masses in central France warmed by anomalous sensible heat fluxes prior to the July HW were advected further north, likely enhancing remote land–atmosphere feedbacks and local sensible heat fluxes over the box displayed in Fig. 4b. This process arguably contributed to the amplification of the July HW. The circulation analogue exercise conducted further ahead supports these conclusions.
A similar analysis was performed for the region with the highest temperature anomalies during the June HW (southeastern France, Supplementary Fig. 7). During the intense June HW, energy transfer from the surface by sensible heat fluxes was below climatological values. In addition, in this area close to the Mediterranean, soil desiccation was not as intense as observed further north. These results suggest land–atmosphere coupling did not substantially contribute to the June HW, thus reinforcing the contrast between the two events, i.e. the more advective nature of the June HW compared to the dominance of diabatic processes associated with land–atmosphere coupling during the July HW.
To further deepen the process analysis discussed above, Figs. 5 and 6 present two distinct analogue exercises (see "Data and Methods") with the aim of evaluating: (i) the potential contribution of the June HW to the subsequent soil desiccation observed in July; (ii) the level of amplification of surface temperature anomalies during the July HW as a result of the preceding soil moisture deficits.
Fig. 5: Intensified soil desiccation after the June HW.
Mean anomalies (with respect to 1981–2010) of Z500 (m, contours) for the June HW (24 June–1 July 2019) and subsequent (15-day forward mean) soil moisture at 0–10 cm (volumetric fraction, shading), as reconstructed from daily flow analogues of Z500 over Europe (solid box in c) for (a) 1950–1983 and (b) 1984–2018. c Difference between (b) and (a). d Flow-conditioned (dark grey boxes) and random (light grey boxes) distributions of the mean soil moisture content at 0–10 cm over WEU (dashed box in c) for the same 15-day period during 1950–1983 and 1984–2018 (x-axis). Boxes and whiskers show the ±0.5·SD around the mean and 5th–95th percentile ranges, respectively, with circles denoting the maximum and minimum values. Horizontal dashed line depicts regional mean averaged values observed during the June 2019 HW. Data source: NCEP/NCAR reanalysis.
Fig. 6: Soil-atmosphere feedbacks during the July HW.
Mean Z500 (m, contours) and TX (°C, shading) anomalies (with respect to 1981–2010) reconstructed for the July HW (23–26 July 2019) from daily flow analogues of Z500 over Europe (solid box in c) preceded by (a) wet (above 66th) and (b) dry (below 33rd) soil moisture conditions at 0–10 cm in WEU (dashed box in c) during the previous 15 days. c Difference between (b) and (a). d Flow-conditioned (dark grey boxes) and random (light grey boxes) distributions of the mean TX anomalies for the July HW over WEU preceded by wet and dry conditions (x-axis). Horizontal dashed line depicts regional mean averaged values observed during the July 2019 HW. e As (d) but for the root-mean-square error (RMSE) distributions of the flow-conditioned (dark grey boxes) and random (light grey boxes) analogues. Boxes and whiskers show the ±0.5·SD around the mean and 5th–95th percentile ranges, respectively, with circles denoting the maximum and minimum values. Data source: NCEP/NCAR reanalysis.
The results of the flow analogues for the June HW indicate that recent circulation conditions similar to those reported during the June HW have some drying imprints in the subsequent soil moisture conditions of western Europe (Fig. 5b). Indeed, the soil moisture content over WEU (dashed box in Fig. 5c) is significantly lower for flow analogues of the June HW (Fig. 5d, dark boxes) than for random circulation conditions (light boxes; p < 0.05; t-test and Kolmogorov–Smirnov test), portraying the role of the atmospheric circulation pattern in driving subsequent soil moisture deficits. These differences are even larger when using ERA5 (1979–2019) or ERA20C (1900–2010) data as a pool of analogues (see Supplementary Figs. 8 and 10), suggesting reduced variability of soil moisture in NCEP/NCAR (i.e. weaker responses to atmospheric forcing) and/or differences related to the thickness of the uppermost soil layer (0–10 cm in NCEP/NCAR vs. 0–7 cm in ERA reanalyses). These results lend support to the hypothesis that the atmospheric circulation associated with the June HW contributed to the subsequent desiccation that preceded the July HW. Note that drying was not so obvious during the actual June HW in observations (Fig. 4a), arguably because soils were replenished throughout a deeper layer (0–2 m) prior to the event (upper panel of Supplementary Fig. 6), which might have contributed to an initial dampening of the desiccation process. On the other hand, the results of the analogue exercise also indicate that flow analogues precede drier conditions in the present than in the recent past (Fig. 5a, b). Part of this difference is associated with a generalized regional drying over the analyzed period, since a comparable soil moisture decrease is also observed between the random circulation distributions of both subperiods (Fig. 5d), which are not constrained by the atmospheric circulation. Qualitatively similar results are obtained for ERA reanalyses (see Supplementary Figs. 8d and 10a), although trends and patterns are overall weaker in ERA20C, arguably due to the lack of soil moisture-related observations in the assimilation process. The temporal differences between soil moisture distributions are consistent with the reported occurrence of more severe European droughts due to enhanced atmospheric evaporative demand by recent warming trends37,38. In summary, our results support that the June HW, together with the precipitation deficits and high radiative fluxes that followed it, contributed to the soil moisture deficits preceding the July HW. Furthermore, we also find that this drying signal has been amplified in recent decades. While this result should not be interpreted as a formal attribution to anthropogenic factors, it is in agreement with recent studies attributing dry-season water imbalance changes to human-induced climate change39.
To support the above-mentioned amplifying role of the observed soil moisture deficits in the magnitude of the July HW, we have searched for flow analogues of each day of the July HW and reconstructed the associated TX anomalies (Fig. 6). We account for the role of soil desiccation by distinguishing between analogue days preceded by dry and wet conditions over WEU, as inferred from regional mean anomalies of soil moisture averaged for the previous 15 days (see "Data and Methods"). The results indicate that similar flow patterns to those recorded during the July HW tend to cause warmer conditions when they are preceded by dry conditions (Fig. 6a, b). In other words, for similar atmospheric circulation, soil moisture deficits promote warming (Fig. 6d, dark boxes; see also Fig. 6c). By construction, this warming should be interpreted as a response to drying, and not the other way around, since trends have been removed and the use of time lags minimizes misattributions of cause and effect. Random distributions (unconstrained by the atmospheric circulation) indicate similar warming levels following short-term soil moisture deficits (Fig. 6d, light boxes). Accordingly, regional drying seems to favour above-normal temperatures, regardless of the atmospheric circulation. Interestingly, additional analyses reveal atmospheric circulation differences between the flow analogues of dry and wet years (Fig. 6e, dark whiskers), involving larger positive Z500 anomalies for flow analogues preceded by soil moisture deficits (see Fig. 6c), which translate to lower RMSE (i.e. closer patterns to the actual circulation) than during wet conditions. These differences in RMSE are also observed for the unconditional distributions (Fig. 6e, light whiskers), indicating an overall tendency for dry periods to precede higher pressure anomalies. This could reflect methodological issues (e.g. limited sampling, residual trends, autocorrelation issues), although we obtain similar results for dry and wet periods of the ERA5 (1979–2019) and ERA20C (1900–2010) reanalyses (see Supplementary Figs. 9 and 10). Alternatively, the Z500 differences between dry and wet conditions may also indicate feedbacks of soil moisture deficits on the atmospheric circulation anomalies. If the latter is the case, such effect herein involves somehow weak high-pressure patterns, whose spatial details depend on the considered dataset (cf. Fig. 6c and Supplementary Fig. 9c) and methodological choices. Previous studies have suggested atmospheric circulation responses to soil moisture deficits, including local effects through thermal expansion by enhanced sensible heat fluxes40, and remote effects caused by a thermally-induced low41 or changes in cloud cover42. In short, our results indicate that soil moisture deficits in western Europe intensified the warming already expected from the circulation observed during the July HW and might even have contributed to amplifying the circulation anomalies. Additional studies are warranted to explore and quantify the contribution of atmospheric circulation responses induced by land–atmosphere coupling to the intensity of HWs.
Summary and discussion
Two distinct HWs affected widespread areas of western Europe in June and July 2019, contributing to placing that summer within the top five warmest since 1500 at the European scale. While the spatial distribution of the affected areas strongly resembled the historical HW of August 2003, the relatively shorter-lived 2019 HWs were more intense on daily time scales, shattering previous all-time records in many places (some of them standing since 2003). Here we have dissected these events with recently developed tools to provide an assessment of different relevant factors: (i) the role of the dynamics (synoptic setups associated with the 2019 HWs); (ii) underlying physical processes (warm advection vs. diabatic fluxes, including enhanced near-surface heating due to soil moisture deficits); (iii) recent thermodynamic changes (the steady regional warming trend).
We have applied recent novel methodologies to track the two 2019 HWs and the occurrence of subtropical warm air intrusions. The June HW displayed a clear fingerprint of a Saharan intrusion. The advection of exceptionally warm and dry air, together with enhanced subsidence under pronounced Z500 anomalies, are the main features of this event that triggered all-time temperature records in southern France and northeastern Spain. Differently, the July HW extended much further north, and unprecedented temperatures hit a comparatively larger domain of Europe. Diabatic processes, rather than temperature advection associated with a Saharan intrusion, played a dominant role for the setup of this event. Some studies have found different relative contributions of horizontal advection, subsidence and diabatic processes in shaping European HWs22,36. Our analysis attests to the distinct nature of two HWs that took place over the same region within a few weeks, supporting the coexistence of distinct dominant forcing mechanisms for their onset and maintenance.
We have shown evidence supporting a contribution of land–atmosphere coupling to the temperature anomalies during the July HW, potentially involving the self-propagation mechanism discussed for previous HWs20,21. The atmospheric conditions prevailing since the onset of the June HW, i.e. prolonged periods with no rain and persistently high solar radiative fluxes and temperatures, significantly contributed to anomalous soil moisture deficits over large areas, in particular over France during the transition period between both HWs. This resulted in strong energy transfer between the soil and atmosphere, via increased (decreased) sensible (latent) heat fluxes before the onset of the July HW. These diagnostics of land–atmosphere feedbacks were not restricted to the region most affected by the July HW (northeastern France and Belgium). They were also observed further south in areas under the influence of sustained southerlies, which suggests a northward propagation of dryness through the advection of warm air masses. By using atmospheric flow analogues, our analysis further supports the role of soil desiccation on the amplification of the July 2019 HW. In this context, lessened soil-atmosphere feedbacks have been reported in areas where shallow groundwater is available43. Accordingly, we argue that an event occurring under similar synoptic patterns to those observed in July 2019 would result in lower temperature anomalies if preceding soil conditions were wetter. Our results also suggest non-negligible effects of the June HW on the July HW through an imprint of the former on the soil moisture deficits that influenced the latter. However, further modelling studies are warranted to support this conclusion, as well as to address land–atmosphere feedbacks on the atmospheric circulation. In this regard, recent ensemble experiments provide evidence about complex local and remote effects of soil moisture deficits up to two months, including non-local responses in atmospheric circulation that can further amplify HW magnitudes42.
Our results also indicate that previous colder climatic conditions would have resulted in less soil desiccation than observed. This effect is found regardless of the atmospheric circulation, pointing to atmospheric warming effects as a consequence of anthropogenic forcing44, probably enhanced by land-use and land-cover changes43,45. We acknowledge limitations (e.g. limited sample size, transient climate conditions over the analyzed period or biases in the reanalysis datasets) and assumptions (e.g. event definition) in our analogue experiments. Moreover, their results should not be interpreted as a formal attribution to anthropogenic forcing, despite the overall consistency based on independent evidence. For example, following earlier studies on European HWs46, recent findings also indicate that hot summer temperatures in the Mediterranean area are often shortly preceded by the occurrence of dryness in spring or even early summer47, thus favoring temporally compounding events48. These facts highlight that future extreme heat episodes will probably be even further exacerbated by the increasing severity of drought events37,49, as stronger losses by evaporation are expected in a warming climate50 whenever soil moisture is still available51.
E-OBS dataset
Daily minimum and maximum 2 m temperature from the E-OBS gridded dataset (v21.0) was used to characterize anomalies and extremes during the 2003 and 2019 events, as well as trends and record-breaking values during the available period (since 1950). Anomalies are computed by removing the daily climatological mean (1981–2010). E-OBS is a European land-only high-resolution gridded observational dataset, using the European Climate Assessment and Dataset (ECA&D) blended daily station data52. It is presented on a horizontal resolution of 0.25°×0.25°. Files are replaced in monthly updates and in updated versions of the E-OBS dataset. Accordingly, small changes might occur between these releases after new data and/or stations are added.
NCEP/NCAR dataset
Meteorological fields were retrieved from the NCEP/NCAR reanalysis daily dataset53, starting from 1948. The following variables were considered for pressure levels between 1000 and 500 hPa on a 2.5° × 2.5° horizontal resolution grid: air temperature, geopotential height, zonal/meridional wind components, vertical velocity. We also analyzed other fields represented in a Gaussian grid: surface net radiation fluxes (long-wave and short-wave), latent and sensible heat fluxes, precipitation, 2 m temperature, 10 m wind, potential evapotranspiration and soil moisture fraction (0–10 cm and 10–200 cm). These fields were used to: (i) characterize and track the HW events, (ii) derive a catalogue of Saharan intrusions, (iii) generate vertical profiles, (iv) compute the contributing terms to the temperature tendency equation, and (v) perform the analogue exercises. Specific methods for products derived from these variables are explained below. In all cases, anomalies are computed with respect to the climatological seasonal cycle (1981–2010).
ERA5 and ERA20C datasets
Meteorological fields were extracted from two ECMWF (European Centre of Medium-range Weather Forecast) reanalyses to replicate the analogue exercises (see methodology further ahead) performed with the NCEP/NCAR dataset. The ERA554 and ERA20C55 datasets were considered, using the highest horizontal resolution available for the latter (1.25° × 1.25°), for the 1979–2019 and 1900–2019 periods, respectively. Daily time series of 2 m temperature, Z500 and soil moisture fraction (0–7 cm) were retrieved.
European temperature reconstruction since 1500
We use a near-surface temperature reconstruction on a 0.5° × 0.5° regular grid over [35°–70° N, 25° W – 40° E] based on long instrumental series and different proxies (including Greenland ice cores, tree rings and documentary sources)31,56. This reconstruction covers the period 1500–2002, although data for 1901–2002 comes from instrumental datasets. Near-surface temperature analyses of the Goddard Institute for Space Studies (GISS, data.giss.nasa.gov/gistemp/)32 were herein used at 2° × 2° spatial resolution to update the temperature reconstruction over the period 1901–2019. This monthly observational dataset was herein used at 2° × 2° spatial resolution to update the temperature reconstruction over the period 1901–2019. To do so, reconstructions and instrumental observations were linearly interpolated into a common 2.5° × 2.5° resolution grid over land, as that employed in Barriopedro et al.57. Afterwards, seasonal mean temperature anomalies were computed with respect to their respective 1981–2010 climatologies. Finally, the European mean temperature anomaly of each summer in the 1500–2019 period was computed as the area-weighted mean of all land 2.5° grid cells.
To assess whether the decadal frequency of extreme European summers is significantly higher than that expected by random chance we performed a 1000-trial bootstrap, each containing a randomly resampled series of the European summer temperature anomalies over 1500–2019. For each trial, the maximum running decadal frequency was retained, with the 95th percentile of the resulting distribution identifying the value whose one-tailed likelihood of occurring by chance is less than 5%.
C3S soil moisture dataset
The C3S dataset from Copernicus provides estimates of volumetric soil moisture (in m3 m−3) in a layer of 2 to 5 cm depth, retrieved from a large set of satellite sensors. Data is presented on a 0.25° × 0.25° regular grid with some gaps in space and time. Climate Data Records (CDR) and interim-CDR (ICDR) products are generated using the same software and algorithms. CDR is intended to have sufficient length, consistency, and continuity to characterize climate variability and change. ICDR provides a short-delay access to current data where consistency with the CDR baseline is expected but has not been extensively checked. The dataset contains the following products: "active", "passive" and "combined". The "active" and "passive" products are created by using scatterometer and radiometer soil moisture products, respectively. The "combined" product results from a blend based on the two previous products. Here we used the "combined" dataset, which is available for Europe since 1978. Climatological means for each calendar day and grid cell were computed in order to derive local and regional anomalies during the 2019 HW events.
Data is accessible online trough: https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-soil-moisture.
HW algorithm
To perform a spatio-temporal tracking of the 2019 summer HWs, we have adopted a semi-Lagrangian perspective. The 850 hPa temperature (T850) from the NCEP/NCAR dataset was used to analyze the spatio-temporal evolution of extreme temperature patterns, instead of considering HWs as isolated local surface extremes, thus enabling the temporal monitoring of the spatial extent of HWs affecting distinct areas during their lifecycle. The algorithm identifies HW events, defined as areas larger than 500,000 km2 with daily mean T850 above the local daily 95th percentile (with respect to 1981–2010) that persist for at least four consecutive days and fulfil some predefined conditions on spatial overlap during those days. Additional information on this methodology can be found in Sánchez-Benítez et al.58.
Saharan intrusions
A catalogue of air masses with subtropical desertic characteristics was obtained relying on simple thermodynamic air properties, considering the following conditions:
1000–500 hPa geopotential height thickness higher than 5800 m 59;
925–700 hPa potential temperature (θ) above 40 °C.
Grid cells satisfying both criteria correspond to low density, warm, stable and very dry air masses, with the potential to be additionally warmed by subsidence60. Using the NCEP/NCAR reanalysis dataset, we have classified the mean climatological (1948–2019) location and extension of Saharan air masses during summer, identifying temporary intrusions towards higher latitudes for each grid cell on a daily basis. Further details on the methodology can be found in Sousa et al.12.
Temperature tendency and related processes
The contributions of horizontal advection and vertical descent to temperature tendency were determined as
$$\left( {\frac{{{\Delta}T}}{{{\Delta}t}}} \right)_h\left( {\lambda ,\phi ,t} \right) = - \vec v \cdot \nabla _pT,$$
$$\left( {\frac{{{\Delta}T}}{{{\Delta}t}}} \right)_v\left( {\lambda ,\phi ,t} \right) = - \omega \frac{T}{\theta }\frac{{\partial \theta }}{{\partial p}},$$
where (1) is the temperature advection by the horizontal wind, and (2) the temperature tendency by vertical motion. Equations (1) and (2) are computed from daily mean fields in constant pressure coordinates, according to the pressure levels available in the NCEP/NCAR dataset, with (λ, ϕ, t) representing latitude, longitude and time, respectively, and v being the horizontal wind, T the temperature, ω the vertical velocity and θ the potential temperature. The daily mean temperature rate due to other diabatic processes (e.g. radiative and heat fluxes) is estimated as a residual from the previous two terms based on the temperature tendency equation:
$$\left( {\frac{{{\Delta}T}}{{{\Delta}t}}} \right)_d\left( {\lambda ,\phi ,t} \right) = \frac{{{\Delta}T}}{{{\Delta}t}} - \left( {\frac{{{\Delta}T}}{{{\Delta}t}}} \right)_h \, - \, \left( {\frac{{{\Delta}T}}{{{\Delta}t}}} \right)_v,$$
where the first term on the right-hand side of (3) is the daily mean temperature tendency (in °C day−1). It must be kept in mind that different factors such as sub-grid turbulent mixing, analysis increments and other numerical errors may contribute to the residual term. This bulk analysis is performed for the 1000–850 hPa layer. The relative contribution of each term to the temperature tendency is used to identify the dominant mechanism for each day and grid cell. For further details on the methodology the reader is referred to Sousa et al.26.
Analogue method
We use the analogue method, which infers the probability distribution of a target field from the atmospheric circulation during a considered time interval28. Herein, two analogue exercises were designed, one for each HW event, but with different target fields, as explained below. In both cases, flow analogue days are defined from their root-mean-square errors (RMSE) with respect to the actual Z500 anomaly field at the time of the HW event over a given domain ([35°–65°N, 10° W–25°E] for the June HW, and [40°–70°N, 10° W–25°E] for the July HW, following the regions with the largest Z500 anomalies; Fig. 2b, c). For each day of the considered HW events, the search of flow analogues was restricted to the [−31,31] day interval (i.e. 62-day window) around the corresponding calendar day, excluding the year of occurrence of the HW. Similar results are obtained for 15- and 31-day windows. Analogue days are used to reconstruct the target field by randomly picking one of the N best flow analogues for each day of the HW event. This number was determined based on the pool size of eligible days (Y × L × D, where Y is the number of years, L the length of the window and D the duration of the event) and their associated RMSE distribution, with a minimum value of 20. This N value ranges from 20 to 40, depending on the available period of the dataset. In all cases, the mean RMSE of these N best analogues averaged over all days of the event was below the 10th percentile of the RMSE distribution. For each day, the random selection of flow analogues was repeated 5000 times to derive flow-conditioned distributions. To test whether the dynamics played a significant role in the reconstructed anomalies of the target field, unconditional distributions were also retrieved by repeating the whole process with a random selection of days (instead of restricting the search to N days with similar flow configurations). Different choices in the spatial domain or the number of circulation analogues (e.g. N values ranging between 10 and 50) were tested, yielding similar results. For both analogue exercises, Z500 and volumetric soil moisture content at 0–10 cm were obtained from daily means of the NCEP/NCAR reanalysis (1950–2019). EOBS (1950–2019) was employed for daily maximum temperature at 2 m in the second analogue exercise, although the conclusions remain unchanged if NCEP/NCAR is used instead. We also repeated the analogue exercises with equivalent fields from different periods and reanalyses: ERA5 (1979–2019) and ERA20C (1900–2010; herein taking the 2019 fields from ERA5). Note that in both ERA reanalyses the uppermost soil layer spans 0–7 cm, and that ERA20C only assimilates surface and mean sea level pressure and surface marine winds. In all cases, anomalies are defined with respect to the 1981–2010 period.
In the first analogue exercise, we reconstructed the expected mean volumetric soil moisture fraction for the 15-day period ([1,15] day interval) after each day of the June HW (24 June–1 July 2019) by using daily flow analogues from the present and past subperiods separately, defined as 1984–2018 (1999–2018 and 1951–2010) and 1950–1983 (1979–1998 and 1900–1950) in NCEP/NCAR (ERA5 and ERA20C), respectively. This 15-day period is similar to the temporal interval between the end of the June HW and the beginning of the July HW (Fig. 4a), but we obtain similar results for other choices (e.g. [5,20] or [1,30] day intervals). In addition to spatial fields, flow-conditioned and random distributions of the mean soil moisture content over WEU ([43°–53° N, 0°–10° E]) were computed for each subperiod. As the atmospheric circulation is constrained, the difference between the reconstructions of the past and present should largely be ascribed to overall climatological differences between the two subperiods, enabling the estimation of the effect of recent changes in the soil moisture distributions, howsoever caused.
A second flow analogue exercise was performed to address whether the previously accumulated soil moisture deficits over WEU could have contributed to intensifying the temperature anomalies over that region at the time of the July HW. In this case, we reconstructed the maximum 2 m temperature anomalies expected from the circulation during the July HW, distinguishing between analogue days preceded by dry and wet conditions. Wet and dry conditions are defined as summer days of the full period with 15-day mean regional anomalies for the previous [−15,−1] day interval staying above the 66.6th percentile and below the 33.3rd percentile of the climatological distribution, respectively. That way, soil moisture departures of a given analogue day represent previously accumulated values and are not the direct response to the actual atmospheric circulation conditions. We tested the robustness of the results with respect to the percentile employed for the classification of dry and wet years (e.g. the first and last deciles or quartiles), reporting larger differences for the most extreme definitions. To avoid the effects of long-term trends that may further complicate the causality of the relationships between soil moisture and temperature, these fields were detrended by removing the local trends. For Z500, we removed the regional mean linear trend over the considered domain in order to keep the spatial gradients when searching for flow analogues. Flow-conditioned and random distributions of regional mean temperature anomalies over WEU were also derived for wet and dry conditions.
The choice of reanalysis, periods and other methodological aspects can affect the results quantitatively, as well as the spatial details of the reconstructed patterns. However, for the datasets employed and sensitivity tests described above, we did not report substantial differences that affect the main conclusions of the text, therefore adding confidence on the results.
All data used in this study is publically accessible online via the following links: E-OBS dataset: https://surfobs.climate.copernicus.eu/dataaccess/access_eobs_months.php. NCEP/NCAR dataset: https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.html. ERA5 dataset: https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era5. ERA20C dataset: https://www.ecmwf.int/en/forecasts/datasets/reanalysis-datasets/era-20c. European temperature reconstruction since 1500: https://www.ncdc.noaa.gov/paleo-search/study/6288. C3S soil moisture dataset: https://cds.climate.copernicus.eu/cdsapp#!/dataset/satellite-soil-moisture.
All codes used in this study are available upon request after publication.
Kalkstein, L. S. Lessons from a very hot summer. Lancet 346, 857–859 (1995).
Grynszpan, D. Lessons from the French heatwave. Lancet 362, 1169–1170 (2003).
Tressol, M. et al. Air pollution during the 2003 European heat wave as seen by MOZAIC airliners. Atmos. Chem. Phys. 8, 2133–2150 (2008).
Konovalov, I. B., Beekmann, M., Kuznetsova, I. N., Yurova, A. & Zvyagintsev, A. M. Atmospheric impacts of the 2010 Russian wildfires: integrating modelling and measurements of an extreme air pollution episode in the Moscow region. Atmos. Chem. Phys. 11, 10031–10056 (2011).
Sutanto, S. J., Vitolo, C., Di Napoli, C., D'Andrea, M. & Van Lanen, H. A. J. Heatwaves, droughts, and fires: exploring compound and cascading dry hazards at the pan-European scale. Environ. Int. 134, 105276 (2020).
Pechan, A. & Eisenack, K. The impact of heat waves on electricity spot markets. Energy Economics 43, 63–71 (2014).
Lubega, W. N. & Stillwell, A. S. Maintaining electric grid reliability under hydrologic drought and heat wave conditions. Appl. Energy 210, 538–549 (2018).
Dosio, A., Mentaschi, L., Fischer, E. M. & Wyser, K. Extreme heat waves under 1.5 °C and 2 °C global Warming. Environ. Res. Lett. 13, 054006 (2018).
Perkins-Kirkpatrick, S. E. & Gibson, P. B. Changes in regional heatwave characteristics as a function of increasing global temperature. Nat. Scientific Rep. 7, 12256 (2017).
Christidis, N., Jones, G. S. & Stott, P. A. Dramatically increasing chance of extremely hot summers since the 2003 European heatwave. Nat. Climate Change 5, 46 (2015).
Copernicus. 2019 was the second warmest year and the last five years were the warmest on record. [Press release]. https://climate.copernicus.eu/copernicus-2019-was-second-warmest-year-and-last-five-years-were-warmest-record (2020).
Sousa, P. M. et al. Saharan air intrusions as a relevant mechanism for Iberian heatwaves: the record breaking events of August 2018 and June 2019. Weather Climate Extremes 26, 100224 (2019).
Larcom, S., She, P. W. & van Gevelt, T. The UK summer heatwave of 2018 and public concern over energy security. Nat. Climate Change 9, 370–373 (2019).
Trigo, R. M., García‐Herrera, R., Díaz, J., Trigo, I. F. & Valente, M. A. How exceptional was the early August 2003 heatwave in France? Geophys. Res. Lett. 32, L10701 (2005).
García-Herrera, R., Díaz, J., Trigo, R. M., Luterbacher, J. & Fischer, E. M. A review of the European Summer Heat Wave of 2003. Crit. Rev. Environ. Sci. Technol.40, 267–306 (2010).
Robine, J.-M. et al. Death toll exceeded 70,000 in Europe during the summer of 2003. C. R. Biol 331, 171–178 (2008).
Shepherd, T. G. Atmospheric circulation as a source of uncertainty in climate change projections. Nat. Geosci. 7, 703–708 (2014).
Ma, F., Yuan, X., Jiao, Y. & Ju, P. Unprecedented Europe heat in June‐July 2019: Risk in the historical and future context. Geophys. Res. Lett. in press, https://doi.org/10.1029/2020GL087809 (2020).
Seneviratne S.I. et al. in Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (eds Field, C. B. et al.). A Special Report of Working Groups I and II of the IPCC. 109–230 (Cambridge University Press, Cambridge, UK, 2012).
Miralles, D. et al. Mega-heatwave temperatures driven by local and upwind soil desiccation. Geophys. Res. Abstr. 21, EGU2019–EGU7729 (2019).
Schumacher, D. L. et al. Amplification of mega-heatwaves through heat torrents fuelled by upwind drough. Nat. Geo 12, 712–717 (2019).
Miralles, D., Teuling, A. & van Heerwaarden, C. C. Mega-heatwave temperatures due to combined soil desiccation and atmospheric heat accumulation. Nat. Geosci. 7, 345–349 (2014).
Fischer, E. M. Climate science: Autopsy of two mega-heatwaves. Nat. Geoscience 7, 332–333 (2014).
Quesada, B., Vautard, R., Yiou, P., Hirschi, M. & Seneviratne, S. I. Asymmetric European summer heat predictability from wet and dry winters and springs. Nat. Climate Change 2, 736–741 (2012).
Pfahl, S. & Wernli, H. Quantifying the relevance of atmospheric blocking for co-located temperature extremes in the Northern Hemisphere on (sub-)daily time scales. Geophys. Res. Lett. 39, L12807 (2012).
Sousa, P. M., Trigo, R. M., Barriopedro, D., Soares, P. M. M. & Santos, J. A. European temperature responses to blocking and ridge regional patterns. Climate Dyn. 50, 457–477 (2018).
Yiou, P. et al. Analyses of the Northern European Summer Heatwave of 201. [in "Explaining Extreme Events of 2018 from a Climate Perspective"]. Bull. Amer. Meteor. Soc. S15–S19. https://doi.org/10.1175/BAMS-D-19-0159.1 (2020).
Jézéquel, A., Yiou, P. & Radanovics, S. Role of circulation in European heatwaves using flow analogues. Climate Dyn. 50, 1145–1159 (2018).
Sánchez-Benítez, A., García-Herrera, R., Barriopedro, D., Sousa, P. M. & Trigo, R. M. June 2017: The Earliest European Summer Mega-heatwave of Reanalysis Period. Geophys. Res. Lett. 45, 1955–1962 (2018).
Barriopedro, D., Sousa, P. M., Trigo, R. M., García-Herrera, R. & Ramos, A. M. The exceptional Iberian heatwave of summer 2018 [in "Explaining Extreme Events of 2018 from a Climate Perspective"]. Bull. Amer. Meteor. Soc. S15–S19, https://doi.org/10.1175/BAMS-D-19-0159.1 (2020).
Luterbacher, J., Dietrich, D., Xoplaki, E., Grosjean, M. & Wanner, H. European seasonal and annual temperature variability, trends, and extremes since 1500. Science 303, 1499–1503 (2004).
Lenssen, N. et al. Improvements in the GISTEMP uncertainty model. J. Geophys. Res. Atmos. 124, 6307–6326 (2019).
Schär, C. et al. The role of increasing temperature variability in European summer heatwaves. Nature 427, 332–336 (2004).
Hernández-Ceballos, M.A., Brattich, E. & Cinell, G. Heat-wave events in Spain: air mass analysis and impacts on 7 Be concentrations. Advances in Meteorology 026018, https://doi.org/10.1155/2016/8026018 (2016).
Salvador, P. et al. African dust outbreaks over the western Mediterranean Basin: 11-year characterization of atmospheric circulation patterns and dust source areas. Atmos. Chem. Phys. 14, 6759–6775 (2014).
Zschenderlein, P., Fink, A. H., Pfahl, S. & Wernli, H. Processes determining heat waves across different European climates. Q. J. R. Meteorol. Soc. 145, 2973–2989 (2019).
Vicente-Serrano, S. M. et al. Evidence of increasing drought severity caused by temperature rise in southern Europe. Environ. Res. Lett. 9, 044001 (2014).
Stagge, J. H., Kingston, D. G., Tallaksen, L. M. & Hannah, D. M. Observed drought indices show increasing divergence across Europe. Sci Rep 7, 14045 (2017).
Padrón, R. S. et al. Observed changes in dry season water availability attributed to human-induced climate change. Nat. Geosci. 13, 477–481 (2020).
Fischer, E. M., Seneviratne, S. I., Vidale, P. L., Lüthi, D. & Schär, C. Soil moisture-atmosphere interactions during the 2003 European summer heat wave. J. Climate 20, 5081–5099 (2007).
Haarsma, R. J., Selten, F., Hurk, B. V., Hazeleger, W. & Wang, X. L. Drier Mediterranean soils due to greenhouse warming bring easterly winds over summertime central Europe. Geophys. Res. Lett. 36, L04705 (2009).
Merrifield, A. L. et al. Local and nonlocal land surface influence in European heatwave initial condition ensembles. Geophys. Res. Lett. 46, 14082–14092 (2019).
Vautard, R. et al. Human contribution to the record-breaking June and July 2019 heatwaves in Western Europe. Environ. Res. Lett. 15, L094077 (2020).
Zipper, S. C., Keune, J. & Kollet, S. J. Land use change impacts on European heat and drought: remote land-atmosphere feedbacks mitigated locally by shallow groundwater. Environ. Res. Lett. 14, L044012 (2019).
Hu, X., Huang, B. & Cherubini, F. Impacts of idealized land cover changes on climate extremes in Europe. Ecol. Indicators 194, 626–635 (2019).
Fischer, E. M., Seneviratne, S. I., Lüthi, D. & Schär, C. Contribution of land‐atmosphere coupling to recent European summer heat waves. Geophys. Res. Lett. 34, L06707 (2007).
Russo, A., Gouveia, C. M., Dutra, E., Soares, P. M. M. & Trigo, R. M. The synergy between drought and extremely hot summers in the Mediterranean. Environ. Res. Lett. 14, 014011 (2019).
Zcheischler, J. et al. Future climate risk from compound events. Nat. Climate Change 8, 469–477 (2018).
Spinoni, J., Vogt, J. V., Naumann, G., Barbosa, P. & Dosio, A. Will drought events become more frequent and severe in Europe? Int. J. Climatol. 38, 1718–1736 (2018).
Samaniego, L. et al. Anthropogenic warming exacerbates European soil moisture droughts. Nat. Clim. Change 8, 421 (2018).
Soares, P. M., Careto, J. A., Cardoso, R. M., Goergen, K. & Trigo, R. M. Land‐atmosphere coupling regimes in a future climate in Africa: from model evaluation to projections based on CORDEX‐Africa. J. Geophys. Res.: Atmos. 124, 11118–11142 (2019).
Haylock, M. R. et al. A European daily high-resolution gridded dataset of surface temperature and precipitation. J. Geophys. Res. (Atmos.) 113, D20119 (2008).
Kalnay, E. et al. The NCEP/NCAR 40-year reanalysis project. Bull. Am. Meteorol. Soc. 77, 437–471 (1996).
Hersbach H., et al. The ERA5 global reanalysis. Q. J. R. Meteorol. Soc. 1–51, https://doi.org/10.1002/qj.3803 (2020).
Poli, P. et al. ERA-20C: an atmospheric reanalysis of the twentieth century. J. Climate 29, 4083–4097 (2016).
Xoplaki, E. et al. European Spring and Autumn temperature variability and change of extremes over the last half millennium. Geophys. Res. Lett. 32, L15713 (2005).
Barriopedro, D., Fischer, E. M., Luterbacher, J. & Trigo, R. M. The Hot Summer of 2010: redrawing the temperature record map of Europe. Science 332, 220–224 (2011).
Sánchez-Benítez, A., Barriopedro, D. & García-Herrera R. Tracking Iberian heatwaves from a new perspective. Weather Climate Extremes: 100238, https://doi.org/10.1016/j.wace.2019.100238. (2019).
Galvin, J.F.P. An Introduction to the Meteorology and Climate of the Tropics 328pp (Wiley-Blackwell, 2016).
Wallace, J. M. & Hobbs, P. V. Atmospheric Science—An Introductory Survey 2nd edn (Elsevier, Canada, 2016)
The authors acknowledge the E-OBS dataset from http://www.uerra.eu (EU-FP6), the Copernicus Climate Change Service (https://cds.climate.copernicus.eu), the data providers in ECA&D (https://www.ecad.eu), the NCEP Reanalysis and GISTEPM data provided by the NOAA/OAR/ESRL PSL (Boulder, Colorado, USA, from their web site at https://psl.noaa.gov/), and the climate reconstructions provided by the World Data Centre for Paleoclimatology (https://www.ncdc.noaa.gov). This work was partially supported by national funds through FCT (Fundação para a Ciência e a Tecnologia, Portugal): P.M.S. thanks project HOLMODRIVE—North Atlantic Atmospheric Patterns influence on Western Iberia Climate: From the Lateglacial to the Present (PTDC/CTA-GEO/29029/2017) and R.M.T. acknowledges project FireCast (PCIF/GRF/0204/2017). This work was also partially funded by project INDECIS, which is part of ERA4CS, an ERA-NET initiated by JPI Climate, with co-funding by the European Union (Grant 690462). C.O. acknowledges funding from the Ramón y Cajal Programme of the Spanish Ministerio de Economía y Competitividad under grant RYC-2014-15036. We also acknowledge support from STEADY (CGL2017-83198-R), project funded by the Spanish Ministerio de Economía, Industria y Competitividad, and JEDIS (RTI2018-096402-B-I00), project funded by the Spanish Ministerio de Ciencia, Innovación y Universidades. P.M.M.S. wishes to acknowledge the LEADING project (PTDC/CTA-MET/28914/2017). The IDL authors (P.M.S, R.M.T and P.M.M.S) would like to acknowledge the financial support FCT through project UIDB/50019/2020—Instituto Dom Luiz.
Instituto Dom Luiz (IDL), Faculdade de Ciências, Universidade de Lisboa, 1749-016, Lisboa, Portugal
Pedro M. Sousa, Pedro M. M. Soares & Ricardo M. Trigo
Instituto de Geociencias, IGEO (CSIC-UCM), C/Doctor Severo Ochoa, 7. Facultad de Medicina (Ed. Entrepabellones 7y 8), Planta 4, 28040, Madrid, Spain
David Barriopedro & Ricardo García-Herrera
Departamento de Física de la Tierra y Astrofísica, Facultad de Ciencias Físicas, Universidad Complutense de Madrid, Plaza Ciencias 1, 28040, Madrid, Spain
Ricardo García-Herrera & Carlos Ordóñez
Departamento de Meteorologia, Instituto de Geociências, Universidade Federal do Rio de Janeiro, Rio de Janeiro, 21941-916, Brazil
Ricardo M. Trigo
Pedro M. Sousa
David Barriopedro
Ricardo García-Herrera
Carlos Ordóñez
Pedro M. M. Soares
P.M.S., D.B., R.G.H, C.O., P.M.M.S. and R.M.T. designed the study. P.M.S. and D.B. conducted the experiments and produced the figures. P.M.S., D.B., R.G.H, C.O., P.M.M.S. and R.M.T. contributed to the interpretation of the results and development of the research plan. P.M.S. wrote the initial manuscript draft. P.M.S., D.B., R.G.H, C.O., P.M.M.S. and R.M.T. organized, revised and edited the manuscript until its final version.
Correspondence to Pedro M. Sousa.
Peer review information Primary handling editor: Heike Langenberg.
Sousa, P.M., Barriopedro, D., García-Herrera, R. et al. Distinct influences of large-scale circulation and regional feedbacks in two exceptional 2019 European heatwaves. Commun Earth Environ 1, 48 (2020). https://doi.org/10.1038/s43247-020-00048-9
Inter-seasonal connection of typical European heatwave patterns to soil moisture
Elizaveta Felsche
Andrea Böhnisch
Ralf Ludwig
npj Climate and Atmospheric Science (2023)
Enhanced trends in spectral greening and climate anomalies across Europe
Michael Kempf
Environmental Monitoring and Assessment (2023)
Classification of extreme heatwave events in the Northern Hemisphere through a new method
Yuqing Wang
Chunzai Wang
Climate Dynamics (2023)
Climate warming amplified the 2020 record-breaking heatwave in the Antarctic Peninsula
Sergi González-Herrero
Marc Oliva
Communications Earth & Environment (2022)
The influence of soil dry-out on the record-breaking hot 2013/2014 summer in Southeast Brazil
J. L. Geirinhas
A. C. Russo
R. M. Trigo
Communications Earth & Environment (Commun Earth Environ) ISSN 2662-4435 (online)
|
CommonCrawl
|
Security analysis for a extended ElGamal Signature against selective unforgeability
Let $H$ is a cryptography hash function and $\Pi=(\mathsf{G}, \mathsf{S}, \mathsf{V})$ is a digital signature, as follows:
$(h_1=g^x,h_2=g^y) \leftarrow \mathsf{G}(1^n)$, where $x,y$ uniformly random from $\mathbb{Z}^*_q \ .$
$(r=g^k,s=(H(m)-x \cdot r)\cdot k^{-1},z= y^{-1} \cdot k ) \leftarrow \mathsf{s}_{x,y}(m)$, where $k$ uniformly random from $\mathbb{Z}^*_q \ .$
$b:=\mathsf{V}_{h_1,h_2}(m,(r,s,z))$, where $b := \begin{cases} 1 & \text{if } (g^{H(m)} = h_1^r \cdot h_2^{z \cdot s}) \\ 0 & \text{otherwise} \end{cases}$ The above construction is similar to ElGamal signature (https://en.wikipedia.org/wiki/ElGamal_signature_scheme).
1- Where can I find proof of existential unforgeability ElGamal signature?
2- Does the above construction is a secure digital signature against existential unforgeability?
cryptanalysis signature elgamal-signature
rafaelrafael
The hashed ElGamal signature scheme was shown to be secure by Pointcheval and Stern (see Section 3.3.); i.e., the signature scheme resists adaptive chosen message attacks assuming the discrete logarithm problem is hard. The proof of security is quite remarkable, as this was the first application of the forking lemma. Sometimes they refer to the hashed ElGamal signature scheme as the Pointcheval–Stern signature algorithm.
Note that the original ElGamal scheme (where the message is not hashed) is existentially forgeable, see Theorem 16. in the linked paper above.
István András SeresIstván András Seres
Not the answer you're looking for? Browse other questions tagged cryptanalysis signature elgamal-signature or ask your own question.
Proof of security for RSA signatures
Compact digital signature for noisy data
Using Permutation polynomial to compute a MAC
Do $v_1=\alpha\cdot r_1$ and $v_2=\alpha\cdot r_2$ leak information about $\alpha$
Security exists or not in Modification in schnorr signature
Challenging Question for the Build a Secure MAC with Special Properties
How to simplify a the signing process in an elliptic curve signature scheme that involves a quadratic verification equation?
Rabin-Williams signature and it's reduction to factorization
|
CommonCrawl
|
Effects of Sodium Polyacrylate and Phytase-Supplemented Diet on Performance and Phosphorus Retention in Chicks
Yamazaki, M. (National Institute of Livestock and Grassland Science) ;
Murakami, H. (National Institute of Livestock and Grassland Science) ;
Ohtsu, H. (National Institute of Livestock and Grassland Science) ;
Abe, H. (National Institute of Livestock and Grassland Science) ;
Takemasa, M. (National Institute of Livestock and Grassland Science)
Two experiments were conducted to evaluate the effects of addition of sodium polyacrylate (SPA) to a phytasesupplemented diet on the performance and phosphorus (P) retention of chicks. In experiment 1, chicks were randomly allocated to four dietary treatments which were fed from 7 to 21 days of age: i) basal diet (low nonphytate phosphorus (0.23% NPP)); ii) basal with 250 U/kg diet of phytase; iii) as (ii) with 2.5 g/kg diet of SPA; and iv) as (ii) with 5.0 g/kg diet of SPA. In experiment 2, three replicates, each with three chicks, were fed from 7 to 28 days of age the basal diet (0.23% NPP) with supplementation of phytase (0, 300, 600, 900 U/kg diet) and SPA (0, 2.5 g/kg diet) in a $4{\times}2$ factorial arrangement. In Experiment 1, feed efficiency was improved and excreted P was 10% less with phytase supplementation. However, the addition of SPA did not affect performance or P excretion. Dietary SPA supplementation to the diets showed significantly higher amounts of P retention, and highest values were observed in chicks fed 2.5 g/kg of the SPA-supplemented diet. In Experiment 2, feed efficiency was improved with phytase supplementation, and the addition of SPA showed significant improvement in feed efficiency. Excreted P was significantly lower in chicks fed SPA-supplemented diets, and the retained P coefficient improved with SPA supplementation. In conclusion, the increased transit time of digesta with suitable supplementation levels of SPA may allow phytase activity to be more effective in the degradation of phytate, and improve P retention.
Sodium Polyacrylate;Phytase;Phosphorus;Chick
Allen, R. J. L. 1940. The estimation of phosphorus. Biochem. J. 34:858-865.
Choct, M. 2006. Enzymes for the feed industry: Past, present and future. World's Poult. Sci. J. 62:5-15. https://doi.org/10.1079/WPS200480
Ellis, W. C., J. H. Matis, K. R. Pond, C. Lascano and J. P. Telford.1984. Dietary influences on flow rate and digestive capacity. Science Press, Johannesvurg, South Africa.
Farrell, D. J., E. Martin, J. J. Dupreez, M. Bongarts, M. Betts, A.Sudaman and E. Thomson. 1993. The beneficial-effects of a microbial feed phytase in diets of broiler-chickens and ducklings. J. Anim. Physiol. Anim. Nutr. 69:278-283. https://doi.org/10.1111/j.1439-0396.1993.tb00815.x
Furuse, M., S. Nakajima, J. Nakagawa, M. Okamura and J.Okumura. 1997. Effects of dietary guar gum on the performance of growing broilers given diets containing phytase. Jpn. Poult. Sci. 34:103-109. https://doi.org/10.2141/jpsa.34.103
Furuya, S., K. Sakamoto, T. Asano, S. Takahashi and K. Kameoka.1978. Effects of added dietary sodium polyacrylate on passage rate of markers and apparent digestibility by growing swine. J. Anim. Sci. 47:159-165.
Hill, F. W. and D. L. Anderson. 1958. Comparison of metabolizable energy and productive energy determinations with growing chicks. J. Nutr. 64:587-603.
Leske, K. L. and C. N. Coon. 1999. A bioassay to determine the effect of phytase on phytate phosphorus hydrolysis and total phosphorus retention of feed ingredients as determined with broilers and laying hens. Poult. Sci. 78:1151-1157. https://doi.org/10.1093/ps/78.8.1151
Nelson, T. S. 1967. The utilization of phytase p by poultry - a review. Poult. Sci. 46:862-872. https://doi.org/10.3382/ps.0460862
Pond, W. G., K. R. Pond, W. C. Ellis and J. H. Matis. 1986. Markers for estimating digesta flow in pigs and the effects of dietary fiber. J. Anim. Sci. 63:1140-1149.
SAS Institute. 1988. Sas/stat user's guide. Release 6.03 edition. SAS Institute Inc., Cary, NC.
Simons, P. C. M., H. A. J. Versteegh, A. W. Jongbloed, P. A.Kemme, P. Slump, K. D. Bos, M. G. E. Wolters, R. F.Beudeker and G. J. Verschoor. 1990. Improvement of phosphorus availability by microbial phytase in broilers and pigs. Br. J. Nutr. 64:525-540. https://doi.org/10.1079/BJN19900052
Um, J. S. and I. K. Paik. 1999. Effects of microbial phytase supplementation on egg production, eggshell quality, and mineral retention of laying hens fed different levels of phosphorus. Poult. Sci. 78:75-79. https://doi.org/10.1093/ps/78.1.75
Van Der Klis, J. D., A. Van Voorst and C. Van Cruyningen. 1993.Effect of a soluble polysaccharide (carboxy methyl cellulose) on the physicochemical conditions in the gastrointestinal tract of broilers. Br. Poult. Sci. 34:971-983. https://doi.org/10.1080/00071669308417657
Yi, Z., E. T. Kornegay and D. M. Denbow. 1996. Effect of microbial phytase on nitrogen and amino acid digestibility and nitrogen retention of turkey poults fed corn-soybean meal diets. Poult. Sci. 75:979-990. https://doi.org/10.3382/ps.0750979
|
CommonCrawl
|
Search Results: 1 - 10 of 656128 matches for " J. A. Fernandez-Ontiveros "
The innermost globular clusters of M87
M. Montes,J. A. Acosta-Pulido,M. A. Prieto,J. A. Fernandez-Ontiveros
Physics , 2014, DOI: 10.1093/mnras/stu948
Abstract: We present a comprehensive multiwavelength photometric analysis of the innermost (3x3 square kpc) 110 globular clusters (GCs) of M87. Their spectral energy distributions (SEDs) were built taking advantage of new ground-based high resolution near-IR imaging aided by adaptive optics at the Very Large Telescope (VLT) combined with Hubble Space Telescope (HST) ultraviolet--optical archival data. These GC SEDs are among the best photometrically sampled extragalactic GC SEDs. To compare with our SEDs we constructed equally sampled SEDs of Milky Way GCs. Using both these Milky Way cluster templates and different stellar population models, ages of >10 Gyr and metallicities of [Fe/H] -0.6 dex are consistently inferred for the inner GCs of M87. In addition, the metallicity of these GCs is low (Dif([Fe/H]) 0.8 dex) compared to that of their host galaxy. These results agree with the idea that the GC formation in M87 ceased earlier than that of the bulk of the stars of the central part of the galaxy. The ages of the inner GCs of M87 support the idea that these central parts of the galaxy formed first. Our data do not support evidence of recent wet merging.
The central parsecs of M87: jet emission and an elusive accretion disc
M. A. Prieto,J. A. Fernandez-Ontiveros,S. Markoff,D. Espada,O. Gonzalez-Martin
Abstract: We present the first time-simultaneous high angular resolution spectral energy distribution (SED) of the core of M87 at a scale of 0.4 arcsecs across the electromagnetic spectrum. Two activity periods of the core of M87 are sampled: a quiescent mode, representative of the most common state of M87, and an active one, represented by the outburst occurring in 2005. The main difference between both SEDs is a shift in flux in the active SED by a factor of about two, their shapes remaining similar across the entire spectrum. The shape of the compiled SEDs is remarkably different from those of active galactic nuclei (AGN). It lacks three major AGN features: the IR bump, the inflection point at about 1 micron and the blue bump. The SEDs also differ from the spectrum of a radiatively inefficient accretion flow. Down to the scales of ~12 pc from the centre, we find that the emission from a jet gives an excellent representation of the spectrum over ten orders of magnitude in frequency for both the active and the quiescent phases of M87. The inferred total jet power is one to two orders of magnitude lower than the jet mechanical energy inferred from various methods in the literature. This discrepancy cannot easily be ascribed to variability. Yet, our measurements regard the inner few parsecs which might provide a genuine account of the jet power at the base. We derive a strict upper limit to the accretion rate of 6 x 10E-5 Mo / yr, assuming 10% efficiency. The inferred accretion power can account for M87 radiative luminosity at the jet-frame assuming boosting factors larger than 10, it is however two orders of magnitude below that required to account for M87 jet kinetic power. We thus propose that energy tapped from the black hole spin may be a complementary source to power the jet of M87, a large supply of accreting gas becoming thus unnecessary.
The spectral energy distribution of the central parsecs of the nearest AGN
M. A. Prieto,J. Reunanen,K. R. W. Tristram,N. Neumayer,J. A. Fernandez-Ontiveros,M. Orienti,K. Meisenheimer
Abstract: Spectral energy distributions (SEDs) of the central few tens of parsec region of some of the nearest, most well studied, active galactic nuclei (AGN) are presented. These genuine AGN-core SEDs, mostly from Seyfert galaxies, are characterised by two main features: an IR bump with the maximum in the 2-10 micron range, and an increasing X-ray spectrum in the 1 to ~200 keV region. These dominant features are common to Seyfert type 1 and 2 objects alike. Type 2 AGN exhibit a sharp drop shortward of 2 micron, with the optical to UV region being fully absorbed, while type 1s show instead a gentle 2 micron drop ensued by a secondary, partially-absorbed optical to UV emission bump. Assuming the bulk of optical to UV photons generated in these AGN are reprocessed by dust and re-emitted in the IR in an isotropic manner, the IR bump luminosity represents >70% of the total energy output in these objects while the high energies above 20 keV are the second energetically important contribution. Galaxies selected by their warm IR colours, i.e. presenting a relatively-flat flux distribution in the 12 to 60 micron range have often being classified as AGN. The results from these high spatial resolution SEDs question this criterion as a general rule. It is found that the intrinsic shape of the IR SED of an AGN and inferred bolometric luminosity largely depart from those derived from large aperture data. AGN luminosities can be overestimated by up to two orders of magnitude if relying on IR satellite data. We find these differences to be critical for AGN luminosities below or about 10^{44} erg/s. Above this limit, AGNs tend to dominate the light of their host galaxy regardless of the aperture size used. We tentatively mark this luminosity as a threshold to identify galaxy-light- vs AGN- dominated objects.
Polyamide Fibers Covered with Chlorhexidine: Thermodynamic Aspects [PDF]
E. Giménez-Martín, M. López-Andrade, J. A. Moleón-Baca, M. A. López, A. Ontiveros-Ortega
Journal of Surface Engineered Materials and Advanced Technology (JSEMAT) , 2015, DOI: 10.4236/jsemat.2015.54021
Abstract: Results of dynamic and equilibrium of sorption of a reactive dye Remazol Brilliant Blue, and a bactericidal agent, Digluconate of Chlorhexidine over Polyamide fibers are presented with the aim of supplying the fiber with bactericidal properties. However, adsorption of Chlorhexidine onto Polyamide is scarce due to the lack of interactions between the reactive groups of the fiber and the antiseptic molecule. Therefore, in order to provide the fiber surface with anionic groups, fiber has been previously dyed with Remazol Brilliant Blue which increases the negative charge of the fiber surface due to the presence of its sulfonate end groups. Thermodynamic parameters of equilibrium sorption in the two situations, fiber/dye and fiber-dye/Chlorhexidine, have been analyzed, as function of the temperature, pH and concentration of the dye in the pretreatment. Results show that when sorption of Remazol Brilliant Blue reaches the value of about 50 mmol/ kg at the higher temperature and concentration tested, the amount of Chlorhexidine adsorbed exhibits its maximum value which is 6 mmol/kg. Both processes, adsorption of Remazol Brilliant Blue and adsorption of Chlorhexidine, fit well to Langmuir adsorption model, suggesting the existence of some kinds of specific interactions between adsorbent and adsorbate. Thermodynamic functions show that the interaction is endothermic and spontaneous in all the rage of temperature tested. The kinetic studies show that sorption of Remazol Brilliant Blue is better described by pseudo-first order model, while sorption of Chlorhexidine fits better to pseudo-second order model, and seems to be quicker process. According to the obtained results, chemical interaction between the vinyl-sulfone group of Remazol Brilliant Blue and the amine groups of Polyamide fiber, followed by electrostatic interactions between the guanine group of the Chlorhexidine and the sulfonate group of the dye must be considered in order to explain the adsorption process.
Status of neonatal intensive care units in India.
Fernandez A,Mondkar J
Journal of Postgraduate Medicine , 1993,
Abstract: Neonatal mortality in India accounts for 50% of infant mortality, which has declined to 84/1000 live births. There is no prenatal care for over 50% of pregnant women, and over 80% deliver at home in unsafe and unsanitary conditions. Those women who do deliver in health facilities are unable to receive intensive neonatal care when necessary. Level I and Level II neonatal care is unavailable in most health facilities in India, and in most developing countries. There is a need in India for Level III care units also. The establishment of neonatal intensive care units (NICUs) in India and developing countries would require space and location, finances, equipment, staff, protocols of care, and infection control measures. Neonatal mortality could be reduced by initially adding NICUs at a few key hospitals. The recommendation is for 30 NICU beds per million population. Each bed would require 50 square feet per cradle and proper climate control. Funds would have to be diverted from adult care. The largest expenses would be in equipment purchase, maintenance, and repair. Trained technicians would be required to operate and monitor the sophisticated ventilators and incubators. The nurse-patient ratio should be 1:1 and 1:2 for other infants. Training mothers to work in the NICUs would help ease the problems of trained nursing staff shortages. Protocols need not be highly technical; they could include the substitution of radiant warmers and room heaters for expensive incubators, the provision of breast milk, and the reduction of invasive procedures such as venipuncture and intubation. Nocosomial infections should be reduced by vacuum cleaning and wet mopping with a disinfectant twice a day, changing disinfectants periodically, maintaining mops to avoid infection, decontamination of linen, daily changing of tubing, and cleaning and sterilizing oxygen hoods and resuscitation equipment, and maintaining an iatrogenic infection record book, which could be used to study the infection patterns and to apply the appropriate antibiotics.
The SED of Low-Luminosity AGNs at high-spatial resolution
J. A. Fernández-Ontiveros,M. A. Prieto,J. A. Acosta-Pulido,M. Montes
Abstract: The inner structure of AGNs is expected to change below a certain luminosity limit. The big blue bump, footprint of the accretion disk, is absent for the majority of low-luminosity AGNs (LLAGNs). Moreover, recent simulations suggest that the torus, a keystone in the Unified Model, vanishes for nuclei with L_bol < 10^42 erg/s. However, the study of LLAGN is a complex task due to the contribution of the host galaxy, which light swamps these faint nuclei. This is specially critical in the IR range, at the maximum of the torus emission, due to the contribution of the old stellar population and/or dust in the nuclear region. Adaptive optics imaging in the NIR (VLT/NaCo) together with diffraction limited imaging in the mid-IR (VLT/VISIR) permit us to isolate the nuclear emission for some of the nearest LLAGNs in the Southern Hemisphere. These data were extended to the optical/UV range (HST), radio (VLA, VLBI) and X-rays (Chandra, XMM-Newton, Integral), in order to build a genuine spectral energy distribution (SED) for each AGN with a consistent spatial resolution (< 0.5") across the whole spectral range. From the individual SEDs, we construct an average SED for LLAGNs sampled in all the wavebands mentioned before. Compared with previous multiwavelength studies of LLAGNs, this work covers the mid-IR and NIR ranges with high-spatial resolution data. The LLAGNs in the sample present a large diversity in terms of SED shapes. Some of them are very well described by a self-absorbed synchrotron (e.g. NGC 1052), while some other present a thermal-like bump at ~1 micron (NGC 4594). All of them are significantly different when compared with bright Seyferts and quasars, suggesting that the inner structure of AGNs (i.e. the torus and the accretion disk) suffers intrinsic changes at low luminosities.
The central parsecs of active galactic nuclei: challenges to the torus
M. A. Prieto,M. Mezcua,J. A. Fernández-Ontiveros,M. Schartmann
Physics , 2014, DOI: 10.1093/mnras/stu1006
Abstract: Type 2 AGN are by definition nuclei in which the broad-line region and continuum light are hidden at optical/UV wavelengths by dust. Via accurate registration of infrared (IR) Very Large Telescope adaptive optics images with optical \textit{Hubble Space Telescope} images we unambiguously identify the precise location of the nucleus of a sample of nearby, type 2 AGN. Dust extinction maps of the central few kpc of these galaxies are constructed from optical-IR colour images, which allow tracing the dust morphology at scales of few pc. In almost all cases, the IR nucleus is shifted by several tens of pc from the optical peak and its location is behind a dust filament, prompting to this being a major, if not the only, cause of the nucleus obscuration. These nuclear dust lanes have extinctions $A_V \geq 3-6$ mag, sufficient to at least hide the low-luminosity AGN class, and in some cases are observed to connect with kpc-scale dust structures, suggesting that these are the nuclear fueling channels. A precise location of the ionised gas H$\alpha$ and [\textsc{Si\,vii}] 2.48 $\mu$m coronal emission lines relative to those of the IR nucleus and dust is determined. The H$\alpha$ peak emission is often shifted from the nucleus location and its sometimes conical morphology appears not to be caused by a nuclear --torus-- collimation but to be strictly defined by the morphology of the nuclear dust lanes. Conversely, [\textsc{Si\,vii}] 2.48 $\mu$m emission, less subjected to dust extinction, reflects the truly, rather isotropic, distribution of the ionised gas. All together, the precise location of the dust, ionised gas and nucleus is found compelling enough to cast doubts on the universality of the pc-scale torus and supports its vanishing in low-luminosity AGN. Finally, we provide the most accurate position of the NGC 1068 nucleus, located at the South vertex of cloud B.
The concentration-compactness principle for variable exponent spaces and applications
J. Fernandez Bonder,A. Silva
Mathematics , 2009,
Abstract: In this paper we extend the well-known concentration -- compactness principle of P.L. Lions to the variable exponent case. We also give some applications to the existence problem for the $p(x)-$Laplacian with critical growth.
The warm molecular gas and dust of Seyfert galaxies: two different phases of accretion?
M. Mezcua,M. A. Prieto,J. A. Fernández-Ontiveros,K. Tristram,N. Neumayer,J. K. Kotilainen
Physics , 2015, DOI: 10.1093/mnras/stv1408
Abstract: The distribution of warm molecular gas (1000--3000 K), traced by the near-IR H$_2$ 2.12 $\mu$m line, has been imaged with a resolution $<0.5$ arcsec in the central 1 kpc of seven nearby Seyfert galaxies. We find that this gas is highly concentrated towards the central 100 pc and that its morphology is often symmetrical. Lanes of warm H$_2$ gas are observed only in three cases (NGC\,1068, NGC\,1386 and Circinus) for which the morphology is much wider and extended than the dust filaments. We conclude that there is no one-to-one correlation between dust and warm gas. This indicates that, if the dust filaments and lanes of warm gas are radial streaming motions of fuelling material, they must represent \textit{two different phases of accretion}: the dust filaments represent a colder phase than the gas close to the nucleus (within $\sim$100 pc). We predict that the morphology of the nuclear dust at these scales should resemble that of the cold molecular gas (e.g. CO at 10--40 K), as we show for CenA and NGC\,1566 by Atacama Large Millimeter/submillimeter Array (ALMA) observations, whereas the inner H$_2$ gas traces a much warmer phase of material identified with warmer (40-500 K) molecular gas such as CO(6-5) or HCN (as shown by ALMA for NGC\,1068 and NGC\,1097). We also find that X-ray heating is the most likely dominant excitation mechanism of the H$_{2}$ gas for most sources.
COMPORTAMIENTO DE ALGUNAS PROPIEDADES FíSICO-QUíMICAS DEL SUELO CON DIFERENTE SISTEMA SILVOPASTORIL EN LA LLANURA NORTE DE NAYARIT
Bugarín,J; Bojórquez,J. I; Lemus,C; Murray,R. M; Ontiveros,H; Aguirre,J; Hernández,A;
Cultivos Tropicales , 2010,
Abstract: a silvopastoral system was established on a haplic (eutric, cromic) cambisol in the northern coastal plain of nayarit. the application of silvopastoral system for ovine production was evaluated and its impact on soil properties. treatments were leucaena leucocephala+brachiaria brizantha (t1), l. glauca+b. brizantha (t2), l. leucocephala+clitoria ternatea+b. brizantha (t3), l. glauca+c. ternatea+b. brizantha (t4) and b. brizantha (t5), arranged in randomized blocks with four repetitions of 256 m2 each. the experiment was settled in october, 2007 with irrigation and without fertilizers, according to the season. the soil was characterized at the beginning and its physicochemical properties were determined. in the upper 20 cm, bulk density was 1.33 mg.m-3, moisture 12.83 %, ph 6.3 and low organic matter content (1.68 %). five evaluations were performed for bulk density, ph and organic matter content; samples were taken with vegetation cover and without it, regarding the treatments. the main results indicate a bulk density increment and no statistical differences among treatments when finishing the evaluation; ph increased towards more neutral values and also organic matter content, in which arboreous and herbaceous leguminous were mostly employed, as well as pasture (lgcb treatment); arboreous leguminous disappeared at the end of the experiment, so that it had a negative influence on the results. the use of silvopastoral systems is recommended as a mechanism to decrease soil degradation, considering the characteristics of the species to be established and the land
|
CommonCrawl
|
Inter-laboratory assessment of different digital PCR platforms for quantification of human cytomegalovirus DNA
Jernej Pavšič1,2,
Alison Devonshire3,
Andrej Blejec1,
Carole A. Foy3,
Fran Van Heuverswyn4,
Gerwyn M. Jones3,
Heinz Schimmel4,
Jana Žel1,
Jim F. Huggett3,5,
Nicholas Redshaw3,
Maria Karczmarczyk4,
Erkan Mozioğlu6,
Sema Akyürek6,
Müslüm Akgöz6 &
Mojca Milavec1
Analytical and Bioanalytical Chemistry volume 409, pages 2601–2614 (2017)Cite this article
Quantitative PCR (qPCR) is an important tool in pathogen detection. However, the use of different qPCR components, calibration materials and DNA extraction methods reduces comparability between laboratories, which can result in false diagnosis and discrepancies in patient care. The wider establishment of a metrological framework for nucleic acid tests could improve the degree of standardisation of pathogen detection and the quantification methods applied in the clinical context. To achieve this, accurate methods need to be developed and implemented as reference measurement procedures, and to facilitate characterisation of suitable certified reference materials. Digital PCR (dPCR) has already been used for pathogen quantification by analysing nucleic acids. Although dPCR has the potential to provide robust and accurate quantification of nucleic acids, further assessment of its actual performance characteristics is needed before it can be implemented in a metrological framework, and to allow adequate estimation of measurement uncertainties. Here, four laboratories demonstrated reproducibility (expanded measurement uncertainties below 15%) of dPCR for quantification of DNA from human cytomegalovirus, with no calibration to a common reference material. Using whole-virus material and extracted DNA, an intermediate precision (coefficients of variation below 25%) between three consecutive experiments was noted. Furthermore, discrepancies in estimated mean DNA copy number concentrations between laboratories were less than twofold, with DNA extraction as the main source of variability. These data demonstrate that dPCR offers a repeatable and reproducible method for quantification of viral DNA, and due to its satisfactory performance should be considered as candidate for reference methods for implementation in a metrological framework.
Nucleic acid amplification-based tests offer an important method for rapid and reliable diagnosis of infectious diseases. Quantitative polymerase chain reaction (qPCR) has become a valuable tool for routine microbiology testing, as it allows rapid, specific and sensitive detection and quantification of viral and bacterial nucleic acids in a broad range of samples [1–3]. However, clinical laboratories often use different qPCR platforms, different commercial or in-house calibration materials (e.g. secondary reference materials), and different PCR components, which can in turn result in significant variability of the reported quantitative (i.e. concentration of pathogen) and qualitative (i.e. presence or absence of pathogen) data between laboratories [2, 4, 5]. Lack of agreement in reported quantitative data hampers the definition of generally accepted clinical thresholds for initiation or termination of anti-pathogen therapies, while disparities in qualitative data can result in misdiagnosis of a disease [4].
The establishment of a reference measurement system for nucleic acid amplification-based tests through the development of reference measurement procedures and suitable reference materials would facilitate standardisation of quantitative and qualitative measurements within the international clinical community. Metrological traceability of end-user measurements to reference measurement procedures, that have been used to value assign reference materials of higher metrological order, could enhance equivalence of measurements over time and space [6]. Suitable reference materials with assigned values that are traceable to the International System of Units (SI units) or to other internationally accepted standards (e.g. international units; IU) would facilitate accurate and reproducible characterisation of reference materials at a lower level in the calibration hierarchy and/or of calibrators produced by different manufacturers [6, 7]. This should, in turn, increase agreement between end-user quantitative measurements and improve assessment of the analytical performance characteristics of quantitative and qualitative nucleic acid amplification-based methods [4]. Additionally, reference measurement procedures that have well-defined measurement uncertainty and are independent of external calibrators are needed to provide reproducible, precise and accurate enumeration and characterisation of reference materials of higher metrological order, including assessment of their homogeneity and stability over time [4, 6].
For a small number of more important viruses, reference materials that are composed of either cultured whole virus (World Health Organisation, WHO International Reference Materials) or plasmids (National Institute of Standards and Technology; NIST) have already been developed, traceable to arbitrary assigned international units (IU, by WHO), or SI units in terms of plasmid copy numbers (NIST) [8–10]. However, for the great majority of viruses and bacteria, suitable reference materials for calibration purposes have not been developed yet, while with human cytomegalovirus (HCMV), there remains discordance for the viral loads between the different commercial reference materials [7]. As the value assignments of the existing WHO reference materials were performed in collaborative studies using various qPCR-based methods calibrated against arbitrary assigned external calibrators [8], there is a challenge as neither the qPCR method nor the external calibrator are anchored in an unbiased way to a uniform reference. Consequently, value assignment of different batches of calibrators and in vitro diagnostics is challenging; a factor that could be rectified if a suitable reference method was available and provided the intermediate reference materials and calibrators are suited to allow unbiased value transfer.
Digital (d)PCR has the potential to provide accurate and robust end-point measurements of nucleic acid copy number concentration and is therefore a promising candidate for a reference measurement procedure. It has been demonstrated that dPCR is more resistant (although not completely insensitive) to PCR inhibitors than qPCR [11, 12]; hence, it is expected to deliver more robust and accurate nucleic acid quantification than qPCR [13]. However, the reported over-estimation and under-estimation of nucleic acid copy number concentration indicate the need for optimisation of the analytical process of dPCR, so that homogenous distributions of nucleic acids can be achieved during partitioning, along with their successful amplification during the PCR cycling [14, 15]. dPCR has already been used for different applications, such as quantification of viruses and bacteria [16–18], quantification of virus reference materials [7, 19, 20] and value assignment of certified reference materials that consist of plasmid DNA [10, 14, 21]. However, as with every novel technology, comprehensive inter-laboratory assessments with different dPCR platforms need to be conducted to confirm the suitability of dPCR-based methods for value assignment of reference materials for viruses and bacteria.
In the present study, inter-laboratory comparisons were conducted to determine the intermediate precision and reproducibility of different dPCR platforms for quantification of DNA extracted from a reference material from HCMV. Two HCMV test materials were used in four National Metrology Institutes (Fig. 1): whole-virus material (WVM), which was purchased from the manufacturer individually by each laboratory for subsequent 'local' DNA extraction (henceforth referred to as the 'locally extracted WVM units'); and genomic DNA (gDNA), which was 'centrally' extracted from a single WVM unit in the National Institute of Biology (NIB) laboratory (Fig. 1, Table 1, laboratory 1), and aliquoted into the 'gDNA units' that were distributed to the collaborating laboratories (henceforth referred to as the 'centrally prepared gDNA units'). In each participating laboratory, DNA quantification of the locally extracted WVM units and the centrally prepared gDNA units was performed using one or two of the following dPCR platforms: a droplet-based dPCR system (QX100 Droplet Digital PCR; Bio-Rad); and two chip-based dPCR systems (Biomark HD, Fluidigm; QuantStudio 3D Digital PCR, Thermo Fisher Scientific). Three units of each HCMV test material (as both the locally extracted WVM units and the centrally prepared gDNA units) were analysed on each dPCR instrument as three consecutive experiments, to determine the inter-unit variability and inter-experiment variability (i.e. intermediate precision, considering single unit measured in three experiments in duplicate) within each instrument. Additionally, to determine the reproducibility of the dPCR for each test material, and thus the potential for characterisation and value assignment of reference materials, the mean HCMV DNA copy number concentrations and corresponding measurement uncertainties were estimated using the data from all five dPCR instruments.
Schematical overview of the inter-laboratory assessment for A whole-virus material (WVM) and B genomic DNA (gDNA). With both HCMV test materials, the procedure presented for unit 5 was also applied to all of the other units (for WVM and gDNA; omitted here for clarity). For each HCMV test material within each laboratory, at least one of the two platforms shown was used
Table 1 Participating laboratories, dPCR platforms and HCMV test materials analysed in this study
Test materials
Whole HCMV material for local DNA extraction
For the quantification of the locally extracted HCMV DNA from WVM, laboratory 1 obtained five units of the First World Health Organisation International Standard for Human Cytomegalovirus for Nucleic Acid Amplification Techniques (i.e. four WVM units) from the WHO (code, 09/162; UK), and each collaborating laboratory obtained four WVM units (Fig. 1, Table 1, laboratories 2–4). Each of these WVM units of the HCMV standard comprised the lyophilised equivalent of 1 mL of a whole-virus preparation of the HCMV 'Merlin' strain that was resuspended in 10 mM Tris-HCl buffer (pH 7.4) with 0.5% (v/v) human serum albumin. The material had been assigned a nominal HCMV concentration of 5 × 106 IU/mL when reconstituted in 1 mL nuclease-free water, based on data from an international collaborative study. The uncertainty of the vial content determined by the manufacturer was ±0.23%. The material was shown to be stable during its shipment at ambient temperatures.
Centrally prepared HCMV genomic DNA
HCMV gDNA was centrally prepared at laboratory 1 (NIB; Fig. 1B, Table 1) by extraction of the gDNA from one WVM unit, followed by circulation of the prepared gDNA units to the participating laboratories. To prepare the gDNA units, one of WVM units (i.e., Fig. 1B, unit W10) obtained by laboratory 1 from WHO (code, 09/162; UK) was used. After opening of WVM unit, the contents were reconstituted in 1 mL double-distilled water, followed by fivefold dilution in phosphate-buffered saline (PBS) (137 mM NaCl, 2.7 mM KCl, 8 mM Na2HPO4, 2 mM KH2PO4, pH 7.4). This material was then divided into 22× 200-μL aliquots, from where the gDNA was extracted on the same day using QIAamp DNA Mini kits (Qiagen). These extractions were carried out according to the manufacturer instructions, except for the DNA elution, where only 50 μL elution buffer was used (instead of 200 μL). The extracted HCMV gDNA was then pooled, mixed and aliquoted into 50-μL aliquots (i.e. the gDNA units). The concentration of the HCMV gDNA was assigned during the homogeneity assessment, for which five of these gDNA units were tested (see Electronic Supplementary Material (ESM), Method S1, gDNA units H1–H5). The remaining gDNA units were stored at −20 °C for 4 months. Eight of these gDNA units were then sent on dry ice to two of the collaborating laboratories (Fig. 1, Table 1, laboratories 2, 4), as four gDNA units for each laboratory, while four of these gDNA units remained at laboratory 1.
Participating laboratories and dPCR platforms
Four National Metrology Institutes participated in this collaborative study: laboratory 1, National Institute of Biology (NIB), Slovenia (the 'central' laboratory); laboratory 2, Joint Research Centre, European Commission, Directorate F. Retieseweg, European Union; laboratory 3, LGC (formerly Laboratory of the Government Chemist), United Kingdom; and laboratory 4, National Metrology Institute of Turkey (TUBITAK UME), Turkey. The dPCR platforms used and the materials tested (i.e., WVM units and gDNA units) by each of these participating laboratories are shown in Table 1.
Experimental workflow
Three of the collaborating laboratories (Fig. 1, Table 1, laboratories 1–3) obtained at least four WVM units from the manufacturer and two participating laboratories (Fig. 1, Table 1, laboratories 2 and 4) also received four gDNA units from laboratory 1. One of each of these WVM units and gDNA units of the test materials was used for preliminary analysis of the analytical protocol, with the remaining three WVM units (e.g. Fig. 1A, units W1–W9) and gDNA units (e.g. Fig. 1B, units G1–G9) of each test material used in the collaborative study. Two vials with mixed primers and probes were also received by each collaborating laboratory (from laboratory 1) on dry ice (see below). The complete experimental workflow is shown schematically in Fig. 1. In each laboratory for each dPCR instrument, three WVM units and/or three gDNA units were included in this analysis. For each dPCR instrument, three experiments were performed on different days over a short period of time. In each experiment, three aliquots were tested simultaneously, with each derived from a different WVM or gDNA unit (e.g. Fig. 1B, experiment 1, aliquot 1 from unit G1, aliquot 1 from unit G2, aliquot 1 from unit G3). Laboratories 1 and 2 examined both test materials in the same experiment.
DNA extraction of WVM units and aliquot preparation
In laboratory 1 and upon arrival at the collaborating laboratories (laboratories 2, 3), each WVM unit (e.g. Fig. 1A, units W1–W9) was opened and diluted in 1 mL double-distilled water, as stated by the manufacturer (final nominal HCMV concentration, 5 × 106 IU/mL). At least 200 μL of this prepared material was additionally diluted fivefold in PBS (each collaborating laboratory used either the same PBS as for the laboratory 1 units, or purchased PBS from a manufacturer), to reach the nominal HCMV concentration of 1 × 106 IU/mL. For each diluted material, DNA extraction was performed using High Pure Viral Nucleic Acid kits (Roche), according to the manufacturer instructions. Immediately after the elution of the DNA, each of three tubes with 50-μL eluted DNA was aliquoted as described below. Additionally, in each participating laboratory (laboratories 1, 2, 3), 200 μL negative extraction control (40 μL double-distilled diluted in 160 μL PBS) was extracted using High Pure Viral Nucleic Acid kits (Roche) simultaneously with the extraction of the WVM units. Following the extraction, the negative extraction control was aliquoted into nine 5 μL aliquots for subsequent analysis.
Aliquot preparation and dilutions of test materials
In each laboratory, from each locally extracted gDNA (50-μL eluted volume) from each WVM unit, and from each centrally prepared 50-μL gDNA unit, six aliquots were prepared (e.g. Fig. 1A, aliquots 1–6 from unit W5), by dividing the 50-μL volume into six low-binding microcentrifuge tubes (aliquot volumes used for dPCR platforms: QX100, ∼6 μL; Biomark, ∼9 μL; QuantStudio 3D, ∼9 μL). All of the aliquots were then stored at −20 °C. For each dPCR instrument, three aliquots from the same WVM unit or gDNA unit were each analysed in one of three consecutive experiments on the same dPCR instrument, over a short period of time. Each aliquot was thawed, gravimetrically diluted in double-distilled water and analysed on the same day.
On the QX100 system, for quantification of the aliquots (derived from both the WVM units and the gDNA units), 10-fold gravimetric dilutions were made of each one on the day of the analysis. For the analysis on the Biomark system, undiluted aliquots derived from the WVM units and the gDNA units were used. In case of QuantStudio 3D system, 2.3-fold diluted aliquots of gDNA units were used. Each participating laboratory reported all of their gravimetric dilutions in Excel 2007 data submission spreadsheets (ESM, Tables S1-S6).
PCR assay for dPCR
In all of the experiments, the same UL54 assay was used for quantification of the HCMV DNA in the WVM units and the gDNA units, which targets the DNA polymerase (UL54) gene of HCMV [22] (ESM, Table S7). The UL54 assay had been assessed previously for its robustness on the QX100 system and the Biomark system, with well-defined dynamic ranges and limits of quantification and detection obtained [23]. Laboratory 1 prepared 20-fold concentrated mixtures of 600 nM oligonucleotide primers, and 200 nM probes which were mixed, aliquoted (75-μL aliquots) and stored at −20 °C. Two aliquots were then shipped to each of the collaborating laboratories (laboratories 2–4) on dry ice, where they were stored at −20 °C.
dPCR
Three different dPCR platforms were used in this study (Table 1). Two laboratories used a droplet-based dPCR platform (laboratories 1, 2; QX100 system, Bio-Rad), three laboratories used a chip-based dPCR platform from Fluidigm (laboratories 1–3; qdPCR 37K Integrated Fluidic Circuits for the Biomark system; henceforth referred to as the Biomark 37K array), and one laboratory used a chip-based dPCR platform from Thermo Fisher Scientific (laboratory 4; QuantStudio 3D system). For each instrument, the experiments were performed according to the guidelines from the minimum information for publication of quantitative digital PCR experiments (ESM, Table S8).
For the Biomark 37K array, 8-μL reactions with excess volume were used, which comprised 2 μL 4× TaqMan Fast Virus 1-Step Master Mix (Thermo Fisher Scientific); 0.4 μL 20× UL54 assay; 0.8 μL GE Sample Loading Reagent (Fluidigm); 2 μL double-distilled water; and 2.8 μL sample. In each experiment, three no template controls (NTCs) and three aliquots of the negative extraction control were included. As only 24 samples were pipetted into the 48-inlet arrays, the remaining 24 inlets were filled with no template reaction mix, to avoid baseline problems (5-μL reactions with excess volume composed of 1.25 μL 4× TaqMan Fast Virus 1-Step Master Mix, 0.5 μL GE Sample Loading Reagent, and 3.25 μL double-distilled water). During the array loading, only 4-μL reactions were loaded into the 770 chambers of each inlet, while the excess reaction volume served to reduce the bias that can arise from small pipetting volumes. The reactions were performed using universal conditions: 2 min at 50 °C, 10 min at 95 °C, followed by 45 cycles of 15 s at 95 °C and 1 min at 60 °C. Ramp rate was set to 2 °C/s. The analyses were performed using different versions of Biomark HD Data Collection Software (Fluidigm). Each participating laboratory reported the version of the analysis software, the fluorescence threshold, the quality threshold, the accepted Cq range, the baseline correction method and the number of positive amplifications per panel, with all included in their data submission Excel 2007 spreadsheets (ESM, Table S9).
For the QX100 system, 20-μL reactions were used, composed of 10 μL 2× ddPCR Supermix for Probes (Bio-Rad Laboratories, USA); 1 μL 20× UL54 assay; 1 μL double-distilled water; and 8 μL sample. Three NTCs were included in each experiment. The reactions were performed using the same universal conditions as for the Biomark system, except for the addition of a single incubation of 10 min at 98 °C at the end of the cycling. Each participating laboratory reported the version of QuantaSoft analysis software used (Bio-Rad), the method for determination of the fluorescence threshold (manual or automatic), the fluorescence threshold, the number of accepted droplets and the number of positive droplets, with all included in their data submission Excel 2007 spreadsheets (ESM, Table S10). Additionally, each laboratory reported 2D charts of each experiment to allow exclusion of measurements with abnormally increased droplet fluorescence [14] (ESM, Fig. S1).
For the QuantStudio 3D (TUBITAK UME only), 15-μL reactions were used, composed of 7.5 μL 2× QuantStudio 3D Digital PCR Master Mix kits (Thermo Fisher Scientific.); 0.75 μL 20× UL54 assay; and 6.75 μL sample. These were loaded into the QuantStudio 3D Digital Chips (version 1). Two NTCs were included in each experiment. The reactions were carried out under the following cycling conditions: 10 s at 96 °C, 39 cycles of 60 s at 60 °C and 30 s at 96 °C and 10 min at 60 °C. For each chip/reaction, three readings were used and the means of the three readings are given in ESM, Table S6. The analyses were performed with QuantStudio 3D AnalysisSuite Software version 1.1.1 (Thermo Fisher Scientific), the quality threshold was set to 0.5 in the 'colour by quality' mode and an automatically calculated threshold was used in the 'colour by calls' mode to separate the positive and negative signals. The number of negative chambers and the number of qualified chambers after application of the quality threshold setting were reported, with all included in the data submission Excel 2007 spreadsheets. A more recent update from manufacturer on the partition volume (0.809 nL) was used for estimation of the DNA copy number concentration, instead of the partition volume of 0.865 nL that was the announced chip-partition volume of the first version of the QuantStudio 3D Chip kit.
Analysis of results
Calculation of DNA copy number concentration
For each of the laboratories and dPCR instruments, the estimated DNA copy number concentration (cp/μL) for each of the aliquots derived from the WVM units and the gDNA units were based on the reported dilutions, numbers of positive partitions and analysed partitions. For the QX100 system, the droplet volume was determined by Corbisier et al. to be 0.834 nL [14], with the QuantStudio 3D, the updated chamber volume of 0.809 nL was used, while the partition volumes of the Biomark 37K array were defined by the manufacturer (ESM, Table S11). The equations used to calculate the initial DNA copy number concentration were reported in our previous study [23]. For each aliquot, the estimated DNA copy number concentration was calculated according to the means of two replicates analysed in a single experiment.
The homogeneity and stability of the gDNA units were determined using single-factor analysis of variance (ANOVA) in R studio, version 0.98.977. For the homogeneity study, the mean DNA copy number concentrations for each of the five gDNA units were measured (ESM, Method S1, gDNA units H1-H5), with duplicate measurements for each gDNA unit taken into account. For the stability study, six replicates were taken into account for each of three gDNA units (ESM, Method S1, gDNA units G1-G3) that were also tested as a part of the inter-laboratory study. For the inter-laboratory study and stability study, outliers were determined for each instrument independently, using Grubbs tests (R studio, package 'outliers'), and these were excluded from the further analysis. For the intra-experiment variability, the coefficients of variation (CVs) were calculated for the duplicates in Excel 2007, by dividing the standard deviation by the mean DNA copy number concentration. With the intermediate precision, CVs were calculated from all measurements performed in three experiments on the same instrument using one gDNA or WVM unit (n = 6). Statistical analyses of inter-unit variability within a single instrument, and those that assessed the reproducibility between the instruments and laboratories, were performed using ANOVA and Tukey's tests (R studio). For each instrument and HCMV test material, the estimated mean DNA copy number concentrations were calculated by taking the means of all of the data from every unit (WVM units or gDNA units) and experiment (Microsoft Excel 2007). To evaluate the differences between the reported mean DNA copy number concentrations from the different dPCR instruments, ANOVA and Tukey's tests were used, in R studio. In addition, the standard measurement uncertainties were calculated for every instrument and HCMV test material, based on the bottom-up approach in Eq. (1) [24]:
The measurement uncertainty was calculated as a combined measurement uncertainty (MU) according to Eq. (1):
$$ M U=\sqrt{{u_r}^2 + {u_{ip}}^2\ } $$
where u r is the uncertainty associated with the repeatability, and u ip is the uncertainty associated with the intermediate precision.
u r and u ip were calculated using Eqs. (2), (3) and (4):
$$ {u}_r=\frac{\sqrt{M{ S}_{\mathrm{within}}}\ }{\sqrt{n}} $$
$$ {u}_{ip1}=\sqrt{\frac{M{S}_{\mathrm{between}} - M{S}_{\mathrm{within}}}{n \times N}} $$
$$ {u}_{ip2}=\frac{\sqrt{\frac{M{ S}_{\mathrm{between}}\ }{n}} \times \sqrt[4]{\frac{2}{N\times \left( n-1\right)}}}{\sqrt{N}} $$
where n is the number of independent replicates per experiment, N is the number of experiments performed on one instrument, MS within is the mean square value within groups and MS between is the mean square value between groups. Both mean squares were calculated using ANOVA in Microsoft Excel 2007, with all of the measurements taken into account for each experiment, including duplicates. If MS between > MS within, Eq. (3) was used to calculate the intermediate precision. In contrast, if MS between < MS within, Eq. (4) was used to calculate the intermediate precision. To obtain the expanded measurement uncertainty, the coverage factor (k) was applied. The value of the coverage factor was chosen at the 95% level of confidence, based on the degrees of freedom (i.e., k = 2.2 for two experiments, each with six independent replicates [n = 12]; k = 2.11 for three experiments, each with six independent replicates [n = 18]).
For both of the HCMV test materials, the mean DNA copy number concentrations and the corresponding expanded measurement uncertainties, which took into account the mean DNA copy number concentration of each instrument, were estimated according to the guidelines from the Consultative Committee for Amount of Substance: Metrology in Chemistry and Biology (CCQM) [25]. To select the most appropriate estimator, the following were performed: Grubbs test for outliers (R studio); preliminary graphical inspection to check for over-dispersion of mean values; mutual consistency check (chi-squared test in Excel 2007) [25]; and Birge ratio calculation (ESM, Method S2). For both of the HCMV test materials, estimation of the mean DNA copy number concentration and the corresponding measurement was performed using the Vangel-Ruhkin estimator (R studio; package 'metrology'), which was characterised as the most appropriate estimation method based on the CCQM guidelines (ESM, Method S3) [25].
Central laboratory monitoring of concentration, homogeneity and stability of gDNA
To assign nominal DNA copy number concentration to the gDNA material and to define the homogeneity of the gDNA units, initial monitoring was carried out in laboratory 1. Here, five gDNA units (units H1–H5) were analysed in duplicate on a QX100 system immediately after the gDNA extraction from a single WVM unit (unit W10). The stability of the test batch was checked during the inter-laboratory study with the analysis of three gDNA units (units G1–G3) that had been stored for 4 months at −20 °C. The mean DNA copy number concentration of the gDNA test material (combined DNA concentrations from units H1–H5 and G1–G3) (±expanded standard error; k = 2.78) was estimated at 979 (±59) cp/μL (ESM, Fig. S2), and the gDNA units H1–H5 were homogenous in terms of the DNA copy number concentration (p > 0.41; ANOVA 95% confidence level). By comparing units G1–G3 with units H1–H5, the stability of the gDNA units after the 4 months of storage at −20 °C was also confirmed (p > 0.9; ANOVA 95% confidence interval) (ESM, Fig. S2).
Inter-laboratory assessment
The two different HCMV test materials comprised the locally extracted gDNA from the purchased WVM units (e.g. Fig. 1A, units W1–W9), and the centrally prepared gDNA units (e.g. Fig. 1B, units G1–G9) that were distributed to the participating laboratories. These were each quantified in three different laboratories using two or three different dPCR platforms (Table 1). In each laboratory, three WVM units and three gDNA units were tested. From each WVM unit and gDNA unit, six aliquots were locally prepared, three aliquots for subsequent analysis on one dPCR platform (laboratories 1–4) and three aliquots for analysis on another dPCR platform (laboratories 1, 2). For each dPCR instrument, three aliquots derived from each of the three WVM units and three gDNA units were each tested in duplicate in one of three consecutive experiments (e.g. Fig. 1A, aliquots 1–3 from unit W5 were tested in experiments 1–3), to determine the intermediate precision and the inter-unit variability within each dPCR instrument and laboratory. Additionally, for each dPCR instrument, the mean DNA copy number concentration and the corresponding expanded measurement uncertainty were estimated. For each of the three WVM units and three gDNA units, three of the four participating laboratories reported complete data for the three units tested in three experiments on one or two dPCR platforms (ESM, Tables S1-S6). The exception here was laboratory 1, which reported only two complete sets of data due to technical problems with the Biomark system (ESM, Table S1). With all of the instruments, the NTCs and negative extraction controls were negative, except for the QuantStudio3D system, where one false positive partition was noted in two NTCs. This can occur due to cross-contamination between samples or because of non-specific binding of primers [13]. However, due to the high DNA copy number concentrations used in this inter-laboratory study, the occurrence of a single false positive partition in the NTCs did not bias the subsequent interpretation of the data.
Intra-experiment variability, intermediate precision and agreement between experiments
For each dPCR instrument, three experiments were performed, each with one of three aliquots derived from each of three WVM units and three gDNA units, analysed in duplicates. Grubbs tests were used to determine the outliers and exclude these from the further analysis (ESM, Table S12). With each of these HCMV test materials, low CVs related to intra-experiment variability were observed among the different dPCR instruments, as the great majority of the duplicates (e.g. Fig. 1A, two replicates of aliquot 1 from unit W5) had CV <10%, with some CVs between 10 and 40% mostly for the Biomark system. The low CVs related to the intra-experiment variability of the QX100 system and the Biomark 37K array are in agreement with other reports using HCMV DNA and bacterial DNA [26, 27]. The higher CVs observed with the Biomark 37K array compared to the other two dPCR platforms might have been due to the >25-fold smaller number of analysed partitions, and/or to pipetting errors related to the smaller sample volumes [23, 27].
With all of the dPCR instruments, low CVs related to intermediate precision were noted (CVs below 25%). Moreover, with each HCMV test material tested with each dPCR instrument, there were no statistically significant differences in the mean DNA copy number concentration between the three consecutive experiments, when for each all six measurements (three units in duplicate) were taken into account (Fig. 2, ESM, Table S13). The low CVs related to intermediate precision are in agreement with previous reports from individual laboratory data for HCMV DNA and different bacterial DNA template types [26, 28]. This finding provides additional support for the indications that dPCR is suitable as a reference measurement procedure, as it provides very precise quantification of viral DNA within and between experiments.
Mean DNA copy number concentration estimated on each of dPCR instrument involved in quantification of A whole-virus material (WVM) and B genomic DNA (gDNA). Each symbol denotes a single measurement of the dPCR platform, whereas black triangles represent outliers. Dashed lines, mean DNA copy number concentration obtained on each dPCR instrument in all experiments; dotted lines, expanded measurement uncertainty considering all experiments (Lab 1-Biomark, k = 2.2; all other instruments, k = 2.11)
Inter-unit variability
With the gDNA test material, each of the four participating laboratories measured three different gDNA units that were centrally prepared in laboratory 1. In contrast, for the WVM test material, the DNA from three WVM units was locally extracted and analysed in each of the three laboratories. With all of the dPCR instruments, the centrally prepared gDNA units showed only minor inter-unit variability, as the differences in mean DNA copy number concentration between these gDNA units were below 14% and were mostly not statistically significant (Table 2; ESM, Fig. S3). On the other hand, for the WVM units, where the gDNA was extracted locally, there was higher inter-unit variability compared to the centrally prepared gDNA units, with the highest difference between the WVM units with the Biomark 37 array from laboratory 2, with 55% higher mean DNA copy number concentration obtained from unit W4 compared to unit W6 (Table 2; ESM, Fig. S4). The low CVs related to inter-unit variability of the centrally prepared gDNA units confirms their homogeneity when analysed in each of the participating laboratories. This is in agreement with other reports where simple DNA templates that did not require DNA extraction were used (e.g. gDNA, plasmid DNA) [28, 29]. Despite the statistically significant differences between the centrally prepared gDNA units tested on the QuantStudio 3D system, there were low CVs related to intra-experiment variability (CVs for duplicates, <10%) and intermediate precision (CVs for three experiments, <1%), which explains the statistical significance of <12% difference between these gDNA units. With the WVM material tested in laboratories 1 and 2, most of the differences between the three analysed WVM units were constant, as they were observed with both dPCR platforms. With the low filling-volume related uncertainty and high stability of the WVM units that are claimed by the manufacturer, it is reasonable to assume that the inter-unit variability was introduced during the DNA extraction, as the DNA was locally extracted from each individual WVM unit. This is in agreement with previous studies where up to 50% difference was noted between DNA-extraction replicates quantified using the same qPCR assay [30, 31]. With PCR-based DNA quantification, estimation of DNA copy number concentration can be influenced by variable DNA recovery and/or insufficient removal of PCR inhibitors during DNA extractions [32]. However, as dPCR platforms are considered to be relatively robust to potential inhibitory substances that might have remained during the DNA extraction [11, 12], it is likely that the differences in the estimated mean DNA copy number concentration between the WVM units were mostly caused by variable DNA recovery upon extraction. This is in agreement with previously reported data with HCMV, where intermediate variability was noted between extraction replicates analysed by dPCR within a single laboratory [20, 31]. With the QX100 system, the differences between the WVM units had higher statistical significance in comparison to the Biomark 37K array. This finding suggests that the QX100 system provides more precise discrimination between the WVM units. This is due to the lower CVs related to the intra-experiment variability of the QX100 system compared with the Biomark 37K array, as previously discussed.
Table 2 Inter-unit variability and intermediate precision obtained on different platforms and instruments. For every unit in each laboratory, mean DNA copy number concentrations were calculated considering all duplicate measurements from all experiments (n = 6 or less). p value denotes statistical significance of differences between units. For each unit from every laboratory, intermediate precision is calculated as CV considering all measurements performed in three experiments (n = 6 or less)
Measurement uncertainties of each dPCR instrument
With each HCMV test material and dPCR instrument, the mean DNA copy number concentrations and corresponding expanded measurement uncertainties were calculated by taking into account the three WVM units or gDNA units, each of which was divided into three aliquots that were each measured in one of three experiments. For every dPCR instrument, small expanded measurement uncertainties (<18%) were obtained for the centrally prepared gDNA test material, whereas with a more complex material (i.e. WVM) that requires local DNA extraction, higher expanded measurement uncertainties (<28%) were noted (Fig. 2, Table 3). This is in agreement with a previous inter-laboratory study on bacterial DNA [28]. Additionally, in two other assessments using simple and well-defined DNA templates [29, 33], lower expanded measurement uncertainties (<6%) were observed for the QX200 system, the Biomark 12.765 arrays and other dPCR platforms compared to this study. Hence, dPCR offers a very precise estimation of the DNA copy number concentration. However, the final measurement uncertainty is dependent on the complexity of the DNA material, with the local DNA extraction resulting in additional uncertainty components, i.e., leading to higher measurement uncertainty. In the present study, within laboratories 1 and 2, smaller measurement uncertainties were seen for the QX100 system compared to the Biomark 37K array (Fig. 2, Table 3), which is in agreement with another report where a QX100 system and a Biomark 12.765 array were compared [33]. This might arise from the higher CVs related to the intra-experiment variability that was noted on the Biomark 37K system. As no such differences between the QX100 and the Biomark 37K arrays were observed in the inter-laboratory study on bacterial DNA, this might be attributed to the study setup and pipetting errors [28].
Table 3 Estimated mean DNA copy number concentrations and expanded measurement uncertainties
Intra-laboratory agreement
In laboratories 1 and 2, two different dPCR platforms were used. For both HCMV test materials analysed on the Biomark 37K array in laboratory 1, the mean DNA copy number concentration was approximately 8% higher than that measured on the QX100 system (Table 3). The opposite was noted in laboratory 2, where both of the HCMV test materials showed 17% lower mean DNA copy number concentrations when measured on the Biomark 37K array compared with those measured using the QX100 system. The high intra-laboratory agreement between the QX100 system and the Biomark 37K array has already been observed in two other studies [23, 27]. Additionally, low discrepancies were observed between the other dPCR platforms [14, 20, 33].
Within laboratories 1 and 2, the differences between the QX100 system and the Biomark 37K array were very consistent, as the differences in the DNA copy number concentration between both of these platforms were similar, regardless of the HCMV test material used. Similar consistency between these two platforms has already been noted for three different types of bacterial DNA [28].
However, there was disagreement noted between laboratories 1 and 2, where for the QX100 system, higher (laboratory 2) and lower (laboratory 1) mean DNA copy number concentrations were measured compared to the Biomark 37K array. Although in both laboratories differences between those two platforms were smaller than expanded measurement uncertainty of each platform, this pattern was noted with both test materials. Similar inconsistent data have been reported previously for plasmid DNA [28], which suggests that such discrepancies between platforms are not always systematic, but can be random; however, the reasons for such random discrepancies are not yet completely understood. As the same assay, and the same HCMV test materials and cycling conditions were used in all of the laboratories, over-estimation and under-estimation of the DNA copy number concentrations and discrepancies between laboratories might be due to either the use of different master mixes, or to incorrectly assigned partition volumes [15, 23, 27]. For both dPCR platforms, various lot numbers of the particular master mixes were used in the different laboratories, which might partially contribute to these observed discrepancies between the platforms. With the QX100 system and the Biomark 12.765 array, >10% difference in partition volume was reported from an independent assessment carried out in several laboratories [14, 33–35]. Furthermore, with the Biomark 12.765 array, around a 7% difference in chamber volume was found between two arrays measured in the same laboratory [36]. The discrepancy between the QX100 system and the Biomark 37K array observed in the present study might therefore arise from variable partition volumes of the different Biomark 37K array lots and/or differences between droplet volumes generated and analysed for the QX100 systems from different laboratories.
Inter-laboratory agreement and mean DNA copy number concentrations
The mean DNA copy number concentrations for each HCMV test material were measured on each dPCR instrument (i.e. two QX100 systems, two Biomark systems, one QuantStudio 3D) from the four laboratories. With the gDNA test material, the differences between the laboratories did not exceed the differences within each laboratory, as the maximum difference in mean DNA copy number concentration between the dPCR instruments from two laboratories was <20% (Table 3). Furthermore, no statistically significant differences were observed between the majority of the instrument pairs (ESM, Fig. S5A). Between laboratories 1 and 2, there was only a minor difference in mean DNA copy number concentrations when the mean DNA copy number concentrations from the dPCR platforms within each laboratory were taken into account. Conversely, with the more complex DNA material of WVM, a 62% difference in the mean DNA copy number concentration was noted between the two Biomark instruments from different laboratories (Table 3), with statistically significant differences between most of these instrument pairs (ESM, Fig. S5B). Furthermore, in laboratory 1, approximately 40% higher mean DNA copy number concentrations were noted when compared to laboratory 2.
With the gDNA test material, the good agreement between the laboratories additionally demonstrated the high stability of the gDNA units distributed to the participating laboratories. The reasons for minor discrepancies between instruments and laboratories are still not well understood; however, they were probably primarily caused by each individual dPCR instrument, due to over-estimation or under-estimation of DNA copy number concentration, and are not directly influenced by factors related to the different laboratories. In contrast to the centrally prepared gDNA test material, the WVM required local DNA extraction before DNA quantification. DNA extraction has already been demonstrated to introduce an additional variability (CV up to 50%) due to differences in DNA recoveries from extraction columns of the same manual extraction kit used by one operator within one laboratory [20, 30, 31]. To the best of our knowledge, no inter-laboratory assessment of the same DNA extraction method for quantification of viral DNA has been performed. However, it can be speculated that different operators from different laboratories would contribute to this variability. Another source of variability might be differences in composition between the suggested in-house prepared PBS (laboratory 1) and the purchased commercial PBS (laboratories 2–4), as it has been shown that the matrix can have an impact on the DNA recovery and the variability of DNA extractions [20]. Therefore, it is reasonable to assume that in the present study, the local DNA extractions from WVM performed individually in each laboratory contributed to the higher discordance between the laboratories and instruments than for those observed with the centrally prepared gDNA.
To additionally demonstrate the suitability of dPCR as a candidate reference measurement procedure of higher metrological order, its applicability for value assignments of different virus reference materials was determined. For each material, the Vangel-Ruhkin estimator was selected based on several criteria published in the CCQM guidelines [25] (ESM, Method S3, Tables S14, S15). With both materials, the data from all five of the dPCR instruments fell within narrow expanded measurement uncertainties (WVM, 15%; gDNA, 6%) of the mean DNA copy number concentrations (Fig. 3).
Estimated mean DNA copy number concentration and corresponding expanded measurement uncertainty for A whole-virus material (WVM) and B genomic DNA (gDNA). Each dot represents the mean DNA copy number concentration measured with one dPCR instrument, while vertical bars depict their corresponding expanded measurement uncertainty considering all material units tested in all experiments (Lab 1-Biomark, k = 2.2; all other instruments, k = 2.11). Dashed line, estimated mean DNA copy number concentration according to Vangel-Ruhkin estimator; two dotted lines, corresponding expanded measurement uncertainty
As the low measurement uncertainties observed in the present study are in agreement with two other inter-laboratory assessments that used bacteriophage DNA and different types of bacterial DNA [28, 29], we can conclude that dPCR offers good reproducibility for quantification of DNA of different complexities (e.g. plasmid DNA, gDNA, whole bacteria and viruses) and from different sources (e.g. viruses, bacteria, bacteriophages). To determine the suitability of dPCR as a reference measurement procedure and for characterisation of certified reference materials, the performance of dPCR should be compared to that of the qPCR method that is currently used for characterisation of virus reference materials [8]. The use of qPCR in several inter-laboratory studies for the quantification of HCMV resulted in more than 100-fold differences between laboratories in terms of the DNA copy number concentration [2, 8]. The main reason for this variability is most probably the disparity in the quantification procedures between the participating laboratories, as different assays and DNA extraction methods, and variable calibration procedures, can exacerbate the agreement between laboratories in terms of estimated DNA copy number concentration. In contrast to qPCR, several dPCR platforms have already been shown to be resilient to inhibitors and resistant to the influence of different PCR components, hence allowing for more accurate and robust quantification of DNA than is possible with qPCR [11, 23, 27]. The variability caused by the DNA extraction method in particular should receive special attention when dPCR is considered as a candidate for a reference measurement procedure and for characterisation of reference materials. The accuracy and robustness of dPCR-based DNA quantification can be further improved by preliminary selection of the DNA extraction method with the highest DNA recovery, while extraction replicates would probably reduce the influence of inter-column variability. Moreover, DNA recovery of the selected extraction method should be assessed to allow the inclusion of extraction related variability into the measurement uncertainty of quantification of the whole-virus reference material. Furthermore, where possible, dPCR-based direct quantification can bypass most of the mentioned problems, including inter-laboratory variability caused by the different operators of the extraction and clean up procedures, as this does not require DNA extraction and has been demonstrated to provide accurate and repeatable quantification of DNA derived from different whole-virus reference materials [20].
To the best of our knowledge, the present study represents the first inter-laboratory assessment of different dPCR platforms for quantification of viral DNA. Two HCMV test materials, as WVM for local extraction and centrally prepared (extracted) gDNA, were selected to determine the repeatability, intermediate precision, and agreement in the quantification within and between laboratories (reproducibility) when using different dPCR platforms and instruments. For each dPCR instrument, good precision was seen. Furthermore, less than twofold differences in the estimated mean DNA copy number concentration were observed between dPCR platforms from different laboratories. As a consequence, when measurements from all participating dPCR instruments were considered, with both of these HCMV test materials, the mean DNA copy number concentrations were estimated with small expanded measurement uncertainties. All of these findings indicate that dPCR offers very repeatable (i.e. within instrument) and reproducible (i.e. between instruments, platforms and laboratories) quantification of viral DNA. This demonstrates that dPCR has the potential for implementation in the metrological framework as a measurement reference procedure, if correctly validated DNA extraction and quantification can be assured. In their current format, such methods could enable reproducible production of reference materials. They could also be applied to value assign secondary reference materials for calibration of the qPCR methods that are widely used.
37K array:
qdPCR 37K integrated fluidic circuits for the Biomark system
ANOVA:
Coefficient of variation
dPCR:
Digital polymerase chain reaction
gDNA:
HCMV:
Human cytomegalovirus
IU:
International units
NIST:
NTC:
No template control
PCR:
UL54 :
DNA polymerase gene of human cytomegalovirus
WVM:
Whole-virus material
Espy MJ, Uhl JR, Sloan LM, Buckwalter SP, Jones MF, Vetter EA, et al. Real-time PCR in clinical microbiology: applications for routine laboratory testing. Clin Microbiol Rev. 2006;19:165–256. doi:10.1128/CMR.19.1.165.
Hayden RT, Yan X, Wick MT, Rodriguez AB, Xiong X, Ginocchio CC, et al. Factors contributing to variability of quantitative viral PCR results in proficiency testing samples: a multivariate analysis. J Clin Microbiol. 2012;50:337–45. doi:10.1128/JCM.01287-11.
Bustin SA, Benes V, Garson JA, Hellemans J, Huggett J, Kubista M, et al. The MIQE guidelines: minimum information for publication of quantitative real-time PCR experiments. Clin Chem. 2009;55:611–22. doi:10.1373/clinchem.2008.112797.
Pavšič J, Devonshire AS, Parkes H, Schimmel H, Foy CA, Karczmarczyk M, et al. Standardising clinical measurements of bacteria and viruses using nucleic acid tests. J Clin Microbiol. 2015;53:2008–14. doi:10.1128/JCM.02136-14.
Dorman SE, Chihota VN, Lewis JJ, Shah M, Clark D, Grant AD, et al. Performance characteristics of the Cepheid Xpert MTB/RIF test in a tuberculosis prevalence survey. PLoS One. 2012;7:e43307. doi:10.1371/journal.pone.0043307.
International Organisation of Standardisation. In vitro diagnostic medical devices—measurement of quantities in biological samples—metrological traceability of values assigned to calibrators and control materials. Internatiotinal Standard ISO 17511:2003. Geneva, Switzerland: ISO; 2003.
Hayden RT, Gu Z, Sam SS, Sun Y, Tang L, Pounds S, et al. Comparative evaluation of three commercial quantitative cytomegalovirus standards using digital and real-time PCR. J Clin Microbiol. 2015;53:1500–5. doi:10.1128/JCM.03375-14.
Fryer JF, Heath AB, Anderson R, Minor PD, collaborative study group (2010) Collaborative study to evaluate the proposed 1st WHO International Standard for human cytomegalovirus (HCMV) for nucleic acid amplification (NAT)-based assays. 1–40. http://apps.who.int/iris/bitstream/10665/70521/1/WHO_BS_10.2138_eng.pdf Accessed 14.02.2016.
Madej RM, Davis J, Holden MJ, Kwang S, Labourier E, Schneider GJ. International standards and reference materials for quantitative molecular infectious disease testing. J Mol Diagn. 2010;12:133–43. doi:10.2353/jmoldx.2010.090067.
Haynes RJ, Kline MC, Toman B, Scott C, Wallace P, Butler JM, et al. Standard reference material 2366 for measurement of human cytomegalovirus DNA. J Mol Diagn. 2013;15:177–85. doi:10.1016/j.jmoldx.2012.09.007.
Rački N, Dreo T, Gutierrez-Aguirre I, Blejec A, Ravnikar M. Reverse transcriptase droplet digital PCR shows high resilience to PCR inhibitors from plant, soil and water samples. Plant Methods. 2014;10:307. doi:10.1186/s13007-014-0042-6.
Nixon G, Garson JA, Grant P, Nastouli E, Foy CA, Huggett JF. A comparative study of sensitivity, linearity and resistance to inhibition of digital and non-digital PCR and LAMP assays for quantification of human cytomegalovirus. Anal Chem. 2014;86:4387–94. doi:10.1021/ac500208w.
Huggett JF, Foy CA, Benes V, Emslie K, Garson JA, Haynes R, et al. The digital MIQE guidelines: minimum information for publication of quantitative digital PCR experiments. Clin Chem. 2013;59:892–902. doi:10.1373/clinchem.2013.206375.
Corbisier P, Pinheiro L, Mazoua S, Kortekaas A-M, Chung PYJ, Gerganova T, et al. DNA copy number concentration measured by digital and droplet digital quantitative PCR using certified reference materials. Anal Bioanal Chem. 2015;407:1831–40. doi:10.1007/s00216-015-8458-z.
Huggett JF, Cowen S, Foy CA. Considerations for digital PCR as an accurate molecular diagnostic tool. Clin Chem. 2015;61:79–88. doi:10.1373/clinchem.2014.221366.
Sedlak RH, Jerome KR. Viral diagnostics in the era of digital polymerase chain reaction. Diagn Microbiol Infect Dis. 2013;75:1–4. doi:10.1016/j.diagmicrobio.2012.10.009.
Gutiérrez-Aguirre I, Rački N, Dreo T, Ravnikar M. Droplet digital PCR for absolute quantification of pathogens. Methods Mol Biol. 2015;1302:331–47. doi:10.1007/978-1-4939-2620-6_24.
Dreo T, Pirc M, Ramšak Ž, Pavšič J, Milavec M, Zel J, et al. Optimising droplet digital PCR analysis approaches for detection and quantification of bacteria: a case study of fire blight and potato brown rot. Anal Bioanal Chem. 2014;406:6513–28. doi:10.1007/s00216-014-8084-1.
Hayden RT, Gu Z, Ingersoll J, Abdul-Ali D, Shi L, Pounds S, et al. Comparison of droplet digital PCR to real-time PCR for quantitative detection of cytomegalovirus. J Clin Microbiol. 2013;51:540–6. doi:10.1128/JCM.02620-12.
Pavšič J, Žel J, Milavec M. Digital PCR for direct quantification of viruses without DNA extraction. Anal Bioanal Chem. 2016;408:67–75. doi:10.1007/s00216-015-9109-0.
Deprez L, Mazoua S, Corbisier P, Trapmann S, Schimmel H, White H, Cross N, Emons H (2012) The certification of the copy number concentration of solutions of plasmid DNA containing a BCR-ABL b3a2 transcript fragment Certified Reference Materials : ERM®-AD623a, ERM®-AD623b, ERM®-AD623c, ERM®-AD623d, ERM®-AD623e, ERM®-AD623f. doi: 10.2787/59675.
Sassenscheidt J, Rohayem J, Illmer T, Bandt D. Detection of beta-herpesviruses in allogenic stem cell recipients by quantitative real-time PCR. J Virol Methods. 2006;138:40–8. doi:10.1016/j.jviromet.2006.07.015.
Pavšič J, Žel J, Milavec M. Assessment of the real-time PCR and different digital PCR platforms for DNA quantification. Anal Bioanal Chem. 2016;408:107–21. doi:10.1007/s00216-015-9107-2.
Corbisier P, Zobell O, Trapmann S, Auclair G, Emons H (2014) Training manual on GMO quantification: proper calibration and estimation of measurement uncertainty. doi: 10.2787/10085.
CCQM (2013) CCQM Guidance note: estimation of a concensus KCRV and associated degrees of equivalence. http://www.bipm.org/cc/CCQM/Allowed/19/CCQM13-22_Consensus_KCRV_v10.pdf Accessed 13.03.2016.
Pavšič J, Žel J, Milavec M. Assessment of the real-time PCR and different digital PCR platforms for DNA quantification. Anal Bioanal Chem. 2015. doi:10.1007/s00216-015-9107-2.
Devonshire AS, Honeyborne I, Gutteridge A, Whale AS, Nixon G, Wilson P, et al. Highly reproducible absolute quantification of Mycobacterium tuberculosis complex by digital PCR. Anal Chem. 2015;87:3706–13. doi:10.1021/ac504617.
Devonshire AS, O'Sullivan DM, Honeyborne I, Jones G, Karczmarczyk M, Pavšič J, et al. The use of digital PCR to improve the application of quantitative molecular diagnostic methods for tuberculosis. BMC Infect Dis. 2016;16:366. doi:10.1186/s12879-016-1696-7.
Chung PYJ, Corbisier P, Mazoua S, Zegers I, Auclair G, Trapmann S, Emons H (2015) The certification of the mass of lambda DNA in a solution-certified reference material: ERM ® -AD442k. doi: 10.2787/58344.
Laus S, Kingsley LA, Green M, Wadowsky RM. Comparison of QIAsymphony automated and QIAamp manual DNA extraction systems for measuring Epstein-Barr virus DNA load in whole blood using real-time PCR. J Mol Diagn. 2011;13:695–700. doi:10.1016/j.jmoldx.2011.07.006.
Perandin F, Pollara PC, Gargiulo F, Bonfanti C, Manca N. Performance evaluation of the automated NucliSens easyMAG nucleic acid extraction platform in comparison with QIAamp Mini kit from clinical specimens. Diagn Microbiol Infect Dis. 2009;64:158–65. doi:10.1016/j.diagmicrobio.2009.02.013.
Verheyen J, Kaiser R, Bozic M, Timmen-Wego M, Maier BK, Kessler HH. Extraction of viral nucleic acids: comparison of five automated nucleic acid extraction platforms. J Clin Virol. 2012;54:255–9. doi:10.1016/j.jcv.2012.03.008.
Dong L, Meng Y, Sui Z, Wang J, Wu L, Fu B. Comparison of four digital PCR platforms for accurate quantification of DNA copy number of a certified plasmid DNA reference material. Sci Rep. 2015;5:13174. doi:10.1038/srep13174.
Pinheiro LB, Coleman VA, Hindson CM, Herrmann J, Hindson BJ, Bhat S, et al. Evaluation of a droplet digital polymerase chain reaction format for DNA copy number quantification. Anal Chem. 2012;84:1003–11. doi:10.1021/ac202578x.
Dagata JA, Farkas N, Kramer JA (2016) Method for measuring the volume of nominally 100 μm diameter spherical water-in-oil emulsion droplets. doi: 10.6028/NIST.SP.260-184.
Bhat S, Herrmann J, Armishaw P, Corbisier P, Emslie KR. Single molecule detection in nanofluidic digital array enables accurate measurement of DNA copy number. Anal Bioanal Chem. 2009;394:457–67. doi:10.1007/s00216-009-2729-5.
This study was supported financially by INFECT MET (the EMRP project; jointly funded by the EMRP participating countries within EURAMET and the European Union) and the Slovenian Research Agency (contract nos. P4-0165 and 1000-13-0105). The QX100 system used in this study was financed by the Metrology Institute of the Republic of Slovenia, with financial support from the European Regional Development Fund. The results were discussed by the courtesy of Heinz Zeichhardt, of Charité-University Medicine Berlin, and Hans-Peter Grunert, of Gesellschaft für Biotechnologische Diagnostik mbH, Berlin. The manuscript was edited for scientific language by Dr. Christopher Berrie.
Department of Biotechnology and Systems Biology, National Institute of Biology, Večna pot 111, 1000, Ljubljana, Slovenia
Jernej Pavšič, Andrej Blejec, Jana Žel & Mojca Milavec
Jožef Stefan International Postgraduate School, Jamova 39, 1000, Ljubljana, Slovenia
Jernej Pavšič
Molecular and Cell Biology, LGC Ltd., Queens Road, Teddington, Middlesex, TW11 0LY, UK
Alison Devonshire, Carole A. Foy, Gerwyn M. Jones, Jim F. Huggett & Nicholas Redshaw
European Commission, Joint Research Centre (JRC), Directorate F. Retieseweg 111, 2440, Geel, Belgium
Fran Van Heuverswyn, Heinz Schimmel & Maria Karczmarczyk
School of Biosciences and Medicine, Faculty of Health and Medical Science, University of Surrey, Guildford, Surrey, GU2 7XH, UK
Jim F. Huggett
Bioanalysis Laboratory, National Metrology Institute of Turkey, (TUBITAK UME), PO Box 54, 41470, Gebze, Kocaeli, Turkey
Erkan Mozioğlu, Sema Akyürek & Müslüm Akgöz
Alison Devonshire
Andrej Blejec
Carole A. Foy
Fran Van Heuverswyn
Gerwyn M. Jones
Heinz Schimmel
Jana Žel
Nicholas Redshaw
Maria Karczmarczyk
Erkan Mozioğlu
Sema Akyürek
Müslüm Akgöz
Mojca Milavec
Correspondence to Jernej Pavšič.
The authors declare that they have no conflicts of interest.
Pavšič, J., Devonshire, A., Blejec, A. et al. Inter-laboratory assessment of different digital PCR platforms for quantification of human cytomegalovirus DNA. Anal Bioanal Chem 409, 2601–2614 (2017). https://doi.org/10.1007/s00216-017-0206-0
Revised: 04 January 2017
DNA quantification
Virus reference materials
|
CommonCrawl
|
CYP2B6*6, CYP2B6*18, Body weight and sex are predictors of efavirenz pharmacokinetics and treatment response: population pharmacokinetic modeling in an HIV/AIDS and TB cohort in Zimbabwe
Milcah Dhoro1,2,
Simbarashe Zvada3,
Bernard Ngara1,
Charles Nhachi2,
Gerald Kadzirange4,
Prosper Chonzi5 &
Collen Masimirembwa1
BMC Pharmacology and Toxicology volume 16, Article number: 4 (2015) Cite this article
Efavirenz (EFV) therapeutic response and toxicity are associated with high inter-individual variability attributed to variation in its pharmacokinetics. Plasma concentrations below 1 μg/ml may result in virologic failure and above 4 μg/ml, may result in central nervous system adverse effects. This study used population pharmacokinetics modeling to explore the influence of demographic and pharmacogenetic factors including efavirenz-rifampicin interaction on EFV pharmacokinetics, towards safer dosing of EFV.
Patients receiving an EFV-based regimen for their antiretroviral therapy and a rifampicin-containing anti-TB regimen were recruited. EFV plasma concentrations were measured by HPLC and genomic DNA genotyped for variants in the CYP2B6, CYP2A6 and ABCB1 genes. All patients were evaluated for central nervous system adverse effects characterised as sleep disorders, hallucinations and headaches using the WHO ADR grading system. A pharmacokinetic model was built in a forward and reverse procedure using nonlinear mixed effect modeling in NONMEM VI followed by model-based simulations for optimal doses.
CYP2B6*6 and *18 variant alleles, weight and sex were the most significant covariates explaining 55% of inter-individual variability in EFV clearance. Patients with the CYP2B6*6TT genotype had a 63% decrease in EFV clearance despite their CYP2B6*18 genotypes with females having 22% higher clearance compared to males. There was a 21% increase in clearance for every 10 kg increase in weight. The effect of TB/HIV co-treatment versus HIV treatment only was not statistically significant. No clinically relevant association between CYP2B6 genotypes and CNS adverse effects was seen, but patients with CNS adverse effects had a 27% lower clearance compared to those without. Model- based simulations indicated that all carriers of CYP2B6*6 TT genotype would be recommended a dose reduction to 200 mg/day, while the majority of extensive metabolisers may be given 400 mg/day and still maintain therapeutic levels.
This study showed that screening for CYP2B6 functional variants has a high predictability for efavirenz plasma levels and could be used in prescribing optimal and safe EFV doses.
Treatment success with Efavirenz (EFV) requires maintenance of an optimal plasma concentration to ensure a balance between adverse drug reactions (ADRs) and possible treatment failure. Steady state concentrations below 1 μg/ml in plasma have been reported to be associated with an increased risk for virological failure and drug resistance, while concentrations above 4 μg/ml have been reported to be associated with an increased risk for the development of central nervous system (CNS) adverse effects, hepatic toxicity, and necessity for treatment discontinuation [1,2]. High rates of CNS adverse effects characterized by hallucinations, vivid dreams and insomnia have been reported in more than 50% of the patients who initiate EFV with up to a fifth of all individuals on an EFV based regimen discontinuing the drug and switching therapy primarily due to the unbearable neurotoxicity [3].
The current Zimbabwean guidelines for antiretroviral therapy (ART) recommend first line therapy of EFV at a dosage of 600 mg daily combined with two nucleoside reverse transcriptase inhibitors [4]. Reduced EFV doses of 200 and 400 mg daily have been shown to be effective in patients with good virologic response [5-7]. A randomized, double-blind, placebo-controlled trial (Encore 1) which was conducted in antiretroviral-naive adults showed that a daily dose of 400 mg EFV is non-inferior to the standard 600 mg dose and should be considered for initial ARV treatment [8]. The co-administration of EFV with standard anti-TB therapy that includes rifampicin, a potent drug enzyme inducer, isoniazid, ethambutol and pyrazinamide is recommended for all patients with HIV/AIDS and active TB co-infection [4]. TB is the most frequent life-threatening opportunistic infection among people living with HIV and a leading cause of death [9]. Zimbabwe is ranked among high burden countries for both TB and HIV [10].
The large inter-individual variability in EFV pharmacokinetics (PK) compromises the prediction of associated adverse effects as well as clinical outcomes. The effects of genetics, gender and weight on the variability of EFV PK have been explored previously [11-14]. EFV is primarily metabolized to its main metabolite, 8-hydroxyefavirenz by CYP2B6 [15] and to a lesser extent by CYP3A4 [15] and CYP2A6 [16]. P-glycoprotein, encoded by ABCB1, is the major efflux transporter at the blood brain barrier that limits entry into the CNS for a large number of drugs. There are conflicting reports in literature as to whether EFV is a substrate for Pgp [17,18]. Genetic polymorphisms in these drug metabolizing enzymes and transporter proteins have been associated with variability in EFV PK [14].
Of all the CYP2B6 variants identified, the CYP2B6*6 haplotype (516G > T and 785A > G) is the most frequent and functionally relevant variant across several populations [19,20], associated with reduced EFV clearance [21,22] and increased CNS adverse effects [23]. A less frequent polymorphism CYP2B6*18 (983 T > C), has also been shown to predict plasma EFV exposure [24]. There is limited data available on the additional functional CYP2B6 polymorphisms that have been suggested to affect EFV PK. Polymorphisms in CYP2A6, in particular CYP2A6*9b (1836G > T) and CYP2B6*17 (5065G > A), have been associated with variability in EFV PK [16,25]. There are conflicting reports on the effects of common polymorphisms in the ABCB1 gene on EFV PK [13,14] with some suggesting a favorable virologic response and CD4-cell recovery in patients carrying the ABCB1 3435TT genotype while others failing to replicate this association. There is also contradiction on the effect of EFV-rifampicin interaction on EFV PK with some studies showing an increased metabolism of EFV in the presence of rifampicin [26,27], while others report the opposite [28,29]. Some authors have suggested that isoniazid may play a role in counteracting the inducing effects of rifampicin on EFV metabolism [30]. In contrast, pyrazinamide has been shown not to affect CYP activities thereby not affecting EFV PK [31] and no effects have been reported to date with ethambutol.
Identifying the sources of EFV PK variability may improve therapeutic efficacy while decreasing EFV-induced adverse effects. We recently reported a high incidence of CNS adverse effects associated with carriage of CYP2B6*6TT genotype and male gender in Zimbabwean HIV positive patients on an EFV-based regimen [32]. Due to the large inter-patient variability in EFV concentrations, in combination with a narrow therapeutic index, therapeutic drug monitoring (TDM) has been suggested as a clinically useful monitoring tool during EFV treatment [33]. An alternative and less costly strategy to TDM aims to use patient specific factors (genetic, demographic) to guide dosing so as to achieve optimal drug exposure and effect. Therefore the aim of this study was to investigate the contribution of demographic and pharmacogenetic factors as well as EFV-rifampicin drug interactions on EFV PK using population pharmacokinetic modeling in Zimbabwean patients with HIV/AIDS and TB co-infection. Consequently, the final covariate model was used to simulate optimal EFV doses at various conditions. This study forms a basis for integrating pharmacogenetic testing in routine clinical practice as a step in evaluating drug safety and efficacy.
Study population and sample collection
A total of 95 HIV positive patients receiving an EFV-based ART regimen and 90 HIV/TB co-infected patients receiving an EFV-based ART regimen and a rifampicin-containing anti-TB therapy were recruited and enrolled into the study. Patient recruitment took place at two major hospitals in Zimbabwe; Wilkins and Chitungwiza hospitals. All patients were evaluated for CNS adverse effects in terms of sleep disorders, hallucinations and headaches using a score chart and classified into cases (presence of CNS adverse effects) and controls (no CNS adverse effects). The classification and determination of severity of the CNS side effects was done according to WHO guidelines [34]. Patient demographics were also collected. Blood samples for genotyping and EFV plasma concentration determination were collected at enrollment. The study was approved by the local Joint Research Ethics Committee and Medical Research Council of Zimbabwe. A written informed consent was obtained from each study participant.
DNA extraction and TaqMan Genotyping
Genomic DNA was isolated from peripheral blood leukocytes using the QIAamp DNA Midi Kit (QIAGEN GmbH.Hilden. Germany). All participants were genotyped for CYP2B6 G516T (rs3745274), CYP2B6*18 (rs28399499), CYP2A6*9 (rs8192726), CYP2A6*17 (rs28399454) and ABCB1 1236C/T (rs1128503). Allelic discrimination reactions were performed using TaqMan genotyping assays (Applied Biosystems, CA, USA) on the ABI 7500 System (Applied Biosystems, Foster City, CA). The final volume for each reaction was 25 μl, consisting of a 2x TaqMan genotyping master mix, 20 × genotyping assay mix and 10 ng genomic DNA. The PCR profile consisted of an initial step at 50°C for 2 min and 50 cycles with 95°C for 10 minutes and 92°C for 15 sec.
Efavirenz plasma concentration determination
Plasma EFV concentrations were determined 12–15 hrs post dose by reverse phase HPLC with UV-detection as previously described [35] with minor changes. Briefly, the reverse-phase chromatography with column: C18 (150 x 4.6 mm, 5 μm particle size) and UV/VIS detector (DAD) were used. Stock solutions for the calibration standards (0.5μΜ - 60 μM) were prepared using a mixture of acetonitrile (ACN) and water (dH2O) in the ratio 60:40. The quality control (QC) samples were prepared in the same way as the calibration standards to give final concentrations of 2μΜ (Low QC), 30μΜ (Medium QC) and 50μΜ (High QC). Felodipine was used as the internal standard with a retention time of 6.2 minutes. The mobile phase consisted of a mixture of solutions A and B in a 65:35 proportion. Both solutions contained glacial acetic acid, ACN and 25 mM ammonium acetate buffer in proportions 1:900:100 and 1:100:900, respectively. Plasma proteins were precipitated with ACN before centrifuging. Elution was performed at 1 ml/min giving a retention time for EFV of 5.2 min as detected at UV–VIS 1, 247 nm for a total run time of 10mins. Analysis of chromatograms was performed on the Agilent HP1100 HPLC System and data processing was done using the Chemstation Software (Agilent Technologies, CA, USA).
Statistical analysis of the data
Descriptive analysis of the data was performed using Genstat 8.1 to determine the means and standard deviations for continuous variables and percentages for categorical variables. ANOVA, linear regression and Chi-square/Fisher tests were used to assess the relationship between independent and dependent variables where appropriate. The Shapiro-Wilk test was used to assess for normality and the appropriate data transformation methods used where necessary. All tests perfomed in this section were carried at 95% confidence level and p < 0.05.
Population pharmacokinetic modeling
Pharmacokinetic data was analyzed using population mixed effects non-linear regression modeling in NONMEM VI. The estimation of typical population PK parameters, along with their random inter-individual and inter-occasional (IO) variability was performed using first-order conditional estimation method with interaction (FOCE INTER) [36]. The base model was built with all covariates and tested for significant relationships between parameters and covariates. The baseline EFV PK model parameters were adopted from a study in Zimbabwean patients by Nyakutira et al. [37]. Clearance (CL/F) was the only parameter that was estimated while the first-order absorption rate constant (ka) and volume of distribution in plasma (Vd) were fixed. A stepwise regression method was used and the statistical significance set at 5% (change in objective function value (ΔOFV) > 3.84,1.degrees of freedom [d.f] ) and 1% significance level (ΔOFV > 6.63, 1.d.f) for the forward and backward inclusion of covariates respectively [38]. Clinical significance was set at 20%.The effect of continuous covariates was parameterized centred on the median value using the following equation:
$$ PAR={\uptheta}_P\times \left(1 + {\uptheta}_{cov}\times \left(COV-CO{V}_{med}\right)\right) $$
where θ P is the parameter (PAR) estimate in a typical individual, COV med is a median covariate value while θ cov is the fractional change in PAR with each unit change in the covariate (COV).
For categorical covariates, such as genotype and sex, the covariate model was expressed as a fractional change (θ cov ) from the estimate for a typical value (θ P ) due to the covariate (COV) using the following equation:
$$ PAR={\uptheta}_P\times \left(1+{\uptheta}_{cov}\times (COV)\right) $$
Monte-carlo simulations
To propose dose adjustment the PK data was simulated in NONMEM VI on 1000 individuals using the final model parameters mimicking EFV drug concentration on demographic and genetic data from the 185 individuals at different doses: 200, 300, 400, 500, 600, 700 and 800 mg oral per day. A dose was selected that minimized the proportion of patients outside the 1 - 4 μg/ml therapeutic range.
A total of 185 patients; 60 males and 125 females; were recruited into the study and used for data analysis. The mean weight and height were significantly higher for males than females (61.5 kg vs 57.9 kg; p=0.0372 and 1.72 m vs 1.61 m; p<0.001, respectively). There was no statistically significant difference in occurrence of CNS toxicity and mean EFV concentration between males and females. The summary characteristics of the study participants are presented in Table 1.
Table 1 Demographic characteristics of the study population
All identified SNPs were tested for deviations from Hardy-Weinberg Equilibrium. Analysis of the log transformed EFV concentration and categorical explanatory variables revealed significantly higher mean log EFV concentration for patients carrying the homozygous mutant genotypes for CYP2B6*6, CYP2B6*18 and CYP2A6*9 compared to the other genotypes (Table 2). Figure 1 shows the mean log EFV concentration among the combined genotypes of CYP2B6*6 and *18. Patients carrying the homozygous mutant genotypes and at least two of the mutant alleles showed a fourfold higher plasma EFV concentration than those carrying the homozygous wild type genotypes. In addition most of the patients carrying the homozygous wild type genotypes had EFV levels that were in the therapeutic range log10 (0 – 0.5 μg/ml), corresponding to 1 - 4 μg/ml.
Table 2 Association between log transformed EFV concentration and categorical explanatory variables.
Log EFV concentration among the CYP2B6*6 and *18 composite genotypes.
However, the mean log EFV concentration for patients who experienced CNS adverse effects was comparable to that of patients without. Analysis of the EFV concentration against the continuous variables revealed no significant association with age and height but there was a significant decrease in concentration for each unit increase in the weight (p<0.001). The association between log transformed EFV concentration and categorical explanatory variables is summarized in Table 2.
Pharmacokinetic parameter estimation
A one compartmental PK model was used to estimate the impact of multi-covariates to the fixed parameter EFV CL/F. The ka and V/d were fixed in the model based on literature results. Covariates that resulted in statistically significant decreases in the baseline PK model were polymorphisms CYP2B6*6, CYP2B6*18, body weight and sex resulting in ∆OFV from 1.098 to 0.494; (p>0.001), explaining up to 55% of between subject variability in clearance. The parameter estimates for the final PK model for a daily dose of 600 mg EFV are shown in Table 3.
Table 3 Parameter estimates for the final pharmacokinetic model for daily 600 mg EFV
The most significant covariate was CYP2B6*18 with ∆OFV from 1.098 to 0.901 accounting for up to 18% variance in EFV clearance. The contribution of the covariate towards explaining between subject variability is shown in Table 4. As a result quantification of EFV oral clearance was fixed on CYP2B6*18. For the extensive metabolisers, CL/F was 7.01 L/h which significantly decreased to 2.26 L/h and 0.539 L/h in intermediate and poor metabolisers, respectively. Carriers of the CYP2B6*6 wild type had a 93% higher CL/F (CV=24%) while the poor metabolisers, CYP2B6*6 TT had 63% lower CL/F (CV=9%). For every 10 kg increase in weight the CL/F increased by 21% (CV=21%). Females showed a 22% higher CL/F (CV= 67%) compared to males. The final model adequately explained the observed data as shown by the basic goodness-of-fit plots for the model evaluation shown in Figure 2.
Table 4 Table showing contribution of each covariate on improving the model fit and percentage of inter-individual variability on EFV clearance accounted for
Basic goodness of fit plots for the final EFV PK model. The observations are plotted versus the population predictions. Upper right panel: The observations are plotted against the individual predictions. Lower left panel: The individually weighted residuals are plotted versus time after dose. Lower right panel: The absolute values of the individually weighted residuals are seen versus the individual predictions. The predictions match the observations and the residuals are distributed evenly around the reference line over time and do not give a pronounced slope over the predicted concentration range.
Monte-Carlo dose simulations
A reduction in EFV dose from 600 mg/dy to 400 mg/dy for the CYP2B6 extensive metabolisers would still result in an effective EFV exposure (1 - 4 μg/ml) for most patients. CYP2B6*6 GT carriers would require doses between 200 – 400 mg/dy depending on the CYP2B6*18 genotype, their gender and weight. All CYP2B6*6 TT carriers irrespective of their CYP2B6*18 genotype, weight and gender would require a reduced daily dose of 200 mg. The proposed optimal doses obtained from the simulation studies are summarized in Table 5.
Table 5 Proposed optimal doses given CYP2B6 genotypes, weight and gender
In the present study, the use of mixed effects modelling enabled the assessment of potential demographic and pharmacogenetic factors on EFV Cl. Consequently the final PK model was used to simulate therapeutic EFV doses associated with reduced occurrence of CNS side effects. The CYP2B6*6 and *18 variant alleles have been reported to show significant correlation with high EFV concentrations, with the CYP2B6*6 allele as a main risk factor for plasma EFV levels above 4 μg/ml [24,37]. Our results showed that the combined CYP2B6 SNPs had a clinically significant additive effect on reducing EFV CL and were associated with an almost four-fold higher EFV concentration, a finding in agreement with reports by Wyen et al. [39] and Maimbo et al. [24]. Similar observations were made in Caucasians, Africans and Asians [11,23,40,41].
An earlier report by Nemaura et al. showed that weight and gender combined with CYP2B6*6 polymorphisms can explain 22% of variability in EFV PK [42]. This was replicated in our study and we further showed that addition of more clinically significant factors can increase the percentage of variability being explained. Our final model was able to explain up to 55% of IIV. Although previous reports have indicated that females have a lower EFV CL compared to males [37,42], our results showed that females had a 22% higher clearance of EFV compared to the males. A study done in Hispanic women showed that they had increased CYP2B6 metabolic capacity due to some SNPs in the regulatory regions of the gene resulting in more CYP2B6 mRNA [43]. This may also explain our finding. There is need therefore, to determine these SNPs before a conclusion can be made regarding gender differences in the expression and activity of CYP2B6.
Our results did not identify polymorphisms of CYP2A6 and ABCB1 as significant covariates in the final model. There are currently conflicting reports in literature over the effect of EFV interaction with rifampicin. Both EFV and rifampicin are inducers of CYP2B6 and CYP3A4, which can lead to drug–drug interactions and decreased exposure of the drug. Some reports show increased metabolism of EFV in the presence of rifampicin consequently lowering EFV exposure [44]. Some authors suggest that the interaction may be modified by other anti-tubercular agents such as isoniazid which has been shown to inhibit many CYP P450 enzymes including CYP3A thereby counter balancing the inducing effect of rifampicin [30]. Our present results revealed no statistically significant difference in EFV concentration for patients on HIV treatment only and for those on HIV/TB co-treatment containing rifampicin.
Another observation from our study is that an increase in body weight after the time of initial measurement results in a decrease in EFV concentration, which agrees with a study in Thais which showed that body weight was an independent predictive factor for plasma EFV concentration [45,46], although some previous studies have not demonstrated this effect [29,47]. Since patients' body weights may increase over time while on treatment, a weight-based cutoff for EFV dosing is a practical therapeutic approach. To date, a body weight cutoff of 60 kg for the standard EFV dosing is recommended.
With regards the occurrence of CNS adverse effects, our analysis did not show a clinically significant association between the CYP2B6*6 and *18 genotypes and occurrence of CNS adverse effects although patients with CNS side effects had a 27% lower EFV Cl compared to those without. This result shows that other non-genetic factors play a role in development of these side effects. The occurrence and progression of symptoms for CNS side effects after administration of EFV also pose a challenge due to the wide range in time of symptoms onset and persistence. A report by Rodriguez-Novoa et al. showed that carriers of the CYP2B6 516 T allele have greater plasma EFV levels during the first 24 weeks of ART, and they experienced frequent CNS-related side effects during the first week of treatment [41]. Other studies show that symptoms may emerge after weeks of treatment and persist for several months [48]. In other reports, patients develop tolerance of side effects despite continued high EFV concentrations. A blinded, placebo-controlled study by Clifford et al., showed that with optimal use of EFV, stable or improved neurological performance is generally achieved for patients who remain on treatment over more than 3 years [49]. Similar patterns were observed in our study where patients developed symptoms from four weeks after EFV initiation and in some patients symptoms persisted for up to 72 months. Given this challenge, it is difficult to optimize a sampling window period for CNS side effects which may result in failure to associate their occurrence with the CYP2B6*6 and *18 genotypes. It is therefore crucial to replicate findings of the phenotype-genotype association study in a well controlled clinical study with sensitive screening tests for detection of the CNS side effects.
In order to minimize the occurrence of CNS side effects, a gradual reduction in the dose of efavirenz from 600 to 400 or 200 mg/day for intermediate and poor metaboliser patient groups, respectively, have been proposed [50]. Earlier studies have recommended increasing the EFV dosage to 800 mg/day in patients receiving EFV and rifampicin concomitantly [51] but later studies have demonstrated the efficacy of the recommended 600 mg/day. Recently some studies have suggested that the dosage be increased to 800 mg/day in patients weighing >50 kg [52]. Our simulation results show that reductions in EFV dose from 600 mg/day to 400 mg/day would still maintain the therapeutic range of the drug for most of the extensive metaboliser patient groups. This is in agreement with an earlier modeling study on the effectiveness of 400-mg efavirenz vs a 600 mg dose [8]. Daily doses of between 200–400 mg may be recommended for the poor metaboliser patient group but there is still need to closely monitor these patients to avoid sub-therapeutic concentrations leading to virologic failure.
CYP2B6*6 and *18 polymorphisms, gender and weight are predictors of EFV PK variability, and can explain up to 55% of the inter-individual variability. Our findings form a basis to start addressing EFV efficacy and safety in our population through carefully planned clinical trials to validate these predictive factors. There is need for a thorough investigation on the EFV-rifampicin interaction by also including the polymorphisms in the N-acetyltransferase 2 (NAT2) and their implications on isoniazid metabolism. Perhaps inclusion of more factors, genetic and non-genetic may help to explain the remaining 45% of inter individual variability. Close follow up and regular TDM of plasma EFV concentrations during early therapy is recommended, especially in patients with the underlying risk factors for early diagnosis and management of efavirenz-based ART induced CNS adverse effects.
Marzolini C, Telenti A, Decosterd LA, Greub G, Biollaz J, Buclin T. Efavirenz plasma levels can predict treatment failure and central nervous system side effects in HIV-1-infected patients. AIDS. 2001;15:71–5.
Scourfield A, Zheng J, Chinthapalli S. Discontinuation of Atripla as first-line therapy in HIV-1 infected individuals. AIDS. 2012;26:1399–401.
Ward DJ, Curtin JM. Switch from efavirenz to nevirapine associated with resolution of efavirenz-related neuropsychiatric adverse events and improvement in lipid profiles. AIDS Patient Care STDS. 2006;20:542–8.
Guidelines for Antiretroviral Therapy in Zimbabwe. The National Drug and Therapeutics Policy Advisory Committee (NDTPAC) and The AIDS and TB Unit, Ministry of Health and Child Welfare (MOHCW), Harare, Republic of Zimbabwe. 2010
Gatanaga H, Hayashida T, Tsuchiya K, Yoshino M, Kuwahara T, Tsukada H, et al. Successful efavirenz dose reduction in HIV type 1-infected individuals with cytochrome P450 2B6 *6 and *26. Clin Infect Dis. 2007;45:1230–7.
Van Luin M, Gras L, Richter C. Efavirenz dose reduction is safe in patients with high plasma concentrations and may prevent efavirenz discontinuations. J Acquir Immune Defic Syndr. 2009;52:240–5.
Figueroa S, Iglesias Gomez A, Sanchez Martin A. Long-term efficacy and safety of efavirenz dose reduction to 200 mg once daily in a Caucasian patient with HIV. Clin Drug Investig. 2010;30:405–11.
Puls R. Encore1 Study Group. A daily dose of 400 mg efavirenz (EFV) is non-inferior to the standard 600 mg dose: week 48 data from the ENCORE1 study, a randomised, double-blind, placebo controlled, non-inferiority trial. In 7th IAS Conference on HIV Pathogenesis, Treatment and Prevention; June 30-July 3; Kuala Lumpur. 2013; Abstract WELBB01
World Health Organization. WHO Report. Global Tuberculosis Control: Country Profile: South Africa. [apps.who.int/iris/bitstream/10665/44728/1/9789241564380_eng.pdf]
Ministry of Health and Child Welfare Zimbabwe National HIV Estimates. 2010.
Arab-Alameddine M, Di Iulio J, Buclin T, Rotger M, Lubomirov R, Cavassini M. Pharmacogenetics-based population pharmacokinetic analysis of efavirenz in HIV-1-infected individuals. Clin Pharmacol Ther. 2009;85:485–94.
Ribaudo HJ, Liu H, Schwab M, Schaeffeler E, Eichelbaum M, Motsinger-Reif AA, et al. Effect of CYP2B6, ABCB1, and CYP3A5 polymorphisms on efavirenz pharmacokinetics and treatment response: an AIDS Clinical Trials Group study. J Infect Dis. 2010;202:717–22.
Mukonzo JK, Roshammar D, Waako P. A novel polymorphism in ABCB1 gene, CYP2B6*6 and sex predict single-dose efavirenz population pharmacokinetics in Ugandans. Br J Clin Pharmacol. 2009;68:690–9.
Elens L, Vandercam B, Yombi JC, Lison D, Wallemacq P. Influence of host genetic factors on efavirenz plasma and intracellular pharmacokinetics in HIV-1-infected patients. Pharmacogenomics. 2010;11:1223–34.
Ward BA, Gorski JC, Jones DR, Hall SD, Flockhart DA, Desta Z. The cytochrome P450 2B6 (CYP2B6) is the main catalyst of efavirenz primary and secondary metabolism: implication for HIV/AIDS therapy and utility of efavirenz as a substrate marker of CYP2B6 catalytic activity. J Pharmacol Exp Ther. 2003;306:287–300.
Di Iulio J, Fayet A, Arab-Alameddine M, Rotger M, Lubomirov R, Cavassini M, et al. In vivo analysis of efavirenz metabolism in individuals with impaired CYP2A6 function. Pharmacogenet Genomics. 2009;19:300–9.
Winzer RLP, Zilly M, Tollmann F, Schubert J, Klinker HWB. No influence of the P-glycoprotein genotype (MDR1 C3435T) on plasma levels of lopinavir and efavirenz during antiretroviral treatment. Eur J Med Res. 2003;8:531–4.
Dirson G, Fernandez C, Hindlet P, Roux F, German-Fattal M, Gimenez F. Efavirenz does not interact with the ABCB1 transporter at the blood–brain barrier. Pharm Res. 2006;23:1525–32.
Zanger UM, Klein K, Saussele T, Blievernicht J, Hofmann MH. Polymorphic CYP2B6: molecular mechanisms and emerging clinical significance. Pharmacogenomics. 2007;8:743–59.
Uttayamakul S, Likanonsakul S, Manosuthi W, Wichukchinda N, Kalambaheti T, Nakayama EE, Shioda T, Khusmith S: Effects of CYP2B6 G516T polymorphisms on plasma efavirenz and nevirapine levels when co-administered with rifampicin in HIV/TB co-infected Thai adults. AIDS Research and Therapy 2010; 7.
Tsuchiya K, Gatanaga H, Tachikawa N, Teruya K, Kikuchi Y, Yoshino M, et al. Homozygous CYP2B6 *6 (Q172H and K262R) correlates with high plasma efavirenz concentrations in HIV-1 patients treated with standard efavirenz-containing regimens. Biochem Biophys Res Commun. 2004;319:1322–6.
Rotger M: CYP2B6: Explaining Efavirenz Pharmacokinetics. HIV Pharmacogenetics 2007; 2.
Haas DW, Ribaudo HJ, Kim RB, Tierney C, Wilkinson GR, Gulick RM, et al. Pharmacogenetics of efavirenz and central nervous system side effects: an Adult AIDS Clinical Trials Group study. AIDS. 2004;18:2391–400.
Maimbo M, Kiyotani K, Mushiroda T, Masimirembwa C, Nakamura Y. CYP2B6 genotype is a strong predictor of systemic exposure 5 to efavirenz in HIV-infected Zimbabweans. Eur J Clin Pharmacol. 2012;68:267–71.
Kwara A, Lartey M, Sagoe KW, Kenu E, Court MH. CYP2B6, CYP2A6 and UGT2B7 genetic polymorphisms are predictors of efavirenz mid-dose concentration in HIV-infected patients. AIDS. 2009;23:2101–6.
Lopez-Cortes LF, Ruiz-Valderas R, Viciana P, Alarcon-Gonzalez A, Gomez-Mateos J, Leon-Jimenez E. Pharmacokinetic interactions between efavirenz and rifampicin in HIV-infected patients with tuberculosis. Clin Pharmacokinet. 2002;41:681–90.
Matteelli A, Regazzi M, Villani P, De Iaco G, Cusato M, Carvalho AC. Multiple-dose pharmacokinetics of efavirenz with and without the use of rifampicin in HIV-positive patients. Curr HIV Res. 2007;5:349–53.
Brennan-Benson P, Lyus R, Harrison T, Pakianathan M, Macallan D. Pharmacokinetic interactions between efavirenz and rifampicin in the treatment of HIV and tuberculosis: one size does not fit all. AIDS. 2005;19:1541–3.
Friedland G, Khoo S, Jack C. Administration of efavirenz (600 mg/day) with rifampicin results in highly variable levels but excellent clinical outcomes in patients treated for tuberculosis and HIV. J Antimicrob Chemother. 2006;58:1299–302.
Desta Z, Soukhova NV, Flockhart DA. Inhibition of cytochrome P450 (CYP450) isoforms by isoniazid: potent inhibition of CYP2C19 and CYP3A. Antimicrob Agents Chemother. 2001;45:382–92.
Nishimura Y, Kurata N, Sakurai E, Yasuhara H. Inhibitory effect of antituberculosis drugs on human cytochrome P450-mediated activities. J Pharmacol Sci. 2004;96:293–300.
Dhoro M, Ngara B, Kadzirange G, Nhachi C, Masimirembwa C. Genetic variants of drug metabolizing enzymes and drug transporter (ABCB1) as possible biomarkers for adverse drug reactions in an HIV/AIDS cohort in Zimbabwe. Curr HIV Res. 2013;11:481–90.
Gutierrez F, Navarro A, Padilla S. Prediction of neuropsychiatric adverse events associated with long-term efavirenz therapy, using plasma drug level monitoring. Clin Infect Dis. 2005;41:1648–53.
Division of aids table for grading the severity of adult and pediatric adverse events. 2004.
Nyakutira C, Roshammar D, Chigutsa E, Chonzi P, Ashton M, Nhachi C, et al. High prevalence of the CYP2B6 516G>T (asterisk 6) variant and effect on the population pharmacokinetics of efavirenz in HIV/AIDS outpatients in Zimbabwe. Eur J Clin Pharmacol. 2007;64:357–65.
Karlsson MO, Sheiner LB. The importance of modeling interoccasion variability in population pharmacokinetic analyses. J Pharmacokinet Biopharm. 1993;21:735–50.
Nyakutira C, Roshammar D, Chigutsa E, Chonzi P, Ashton M, Nhachi C, et al. High prevalence of the CYP2B6 516GT(asterisk 6) variant and effect on the population pharmacokinetics of efavirenz in HIV/AIDS outpatients in Zimbabwe. Eur J Clin Pharmacol. 2008;64:357–65.
Wahlby U, Jonsson EN, Karlsson MO. Assessment of actual significance levels for covariate effects in NONMEM. J Pharmacokinet Pharmacodyn. 2001;28:231–52.
Wyen C, Hendra H, Vogel M, Hoffmann C, Knechten H, Brockmeyer NH, et al. Impact of CYP2B6 983 T>C polymorphism on non-nucleoside reverse transcriptase inhibitor plasma concentrations in HIV-infected patients. J Antimicrob Chemother. 2008;61:914–8.
Rotger M, Tegude H, Colombo S, Cavassini M, Furrer H, Decosterd L, et al. Predictive value of known and novel alleles of CYP2B6 for efavirenz plasma concentrations in HIV-infected individuals. Clin Pharmacol Ther. 2007;81:557–66.
Rodriguez-Novoa S, Barreiro P, Rendon A, Jimenez-Nacher I, Gonzalez-Lahoz J, Soriano V. Influence of 516G>T polymorphisms at the gene encoding the CYP450-2B6 isoenzyme on efavirenz plasma concentrations in HIV-infected subjects. Clin Infect Dis. 2005;40:1358–61.
Nemaura T, Nhachi C, Masimirembwa C. Impact of gender, weight and CYP2B6 genotype on efavirenz exposure in patients on HIV/AIDS and TB treatment: Implications for individualising therapy. African J Pharm Pharmacol. 2012;6:2188–93.
Lamba V, Lamba J, Yasuda K, Strom S, Davila J, Hancock ML. Hepatic CYP2B6 expression: gender and ethnic differences and relationship to CYP2B6 genotype and CAR (constitutive androstane receptor) expression. J Pharmacol Exp Ther. 2003;307:906–22.
Cohen K, Meintjes G. Management of individuals requiring antiretroviral therapy and TB treatment. Curr Opin HIV AIDS. 2010;5:61–9.
Manosuthi W, Sungkanuparph S, Tantanathip P. Body weight cutoff for daily dosage of efavirenz and 60-week efficacy of efavirenz-based regimen in human immunodeficiency virus and tuberculosis coinfected patients receiving rifampin. Antimicrob Agents Chemother. 2009;53:4545–8.
Stöhr W, Back D, Dunn D. Factors influencing efavirenz and nevirapine plasma concentration: effect of ethnicity, weight and co-medication. Antivir Ther. 2008;13:675–85.
Csajka C, Marzolini C, Fattinger K, Decosterd LA, Fellay J, Telenti A. Population pharmacokinetics and effects of efavirenz in patients with human immunodeficiency virus infection. Clin Pharmacol Ther. 2003;73:20–30.
Haas DW, Smeaton LM, Shafer RW, Robbins GK, Morse GD, Labbe L, et al. Pharmacogenetics of long-term responses to antiretroviral regimens containing Efavirenz and/or Nelfinavir: an Adult Aids Clinical Trials Group Study. J Infect Dis. 2005;192:1931–42.
Clifford DB, Evans S, Yang Y, Acosta EP, Ribaudo H. Team* aRMGAsS: Long-Term Impact of Efavirenz on Neuropsychological Performance and Symptoms in HIV-Infected Individuals (ACTG 5097 s). HIV Clin Trials. 2009;10:343–55.
Cabrera SE, Santos D, Valverde MP, Dominguez-Gil A, Gonzalez F, Luna G. Influence of the cytochrome P450 2B6 genotype on population pharmacokinetics of efavirenz in human immunodeficiency virus patients. Antimicrob Agents Chemother. 2009;53:2791–8.
Rekic D, Roshammar D, Mukonzo J, Ashton M. In silico prediction of efavirenz and rifampicin drug-drug interaction considering weight and CYP2B6 phenotype. Br J Clin Pharmacol. 2011;71:536–43.
Thompson MA, Aberg JA, Hoy JF. Antiretroviral treatment of adult HIV infection: 2012 recommendations of the International Antiviral Society-USA panel. JAMA. 2012;308:387–402.
We thank all the study participants who donated their samples for this study and the two Hospital authorities who gave us permission to use the hospitals as sample collection sites. Special mention goes to the AiBST staff; Zibusiso Mafaiti, Dennis Adu-Gyasi, Rebecca Govathson, Sarudzai Muyambo and Roslyn Thelingwani for carrying out the Efavirenz concentration determination work. Funding for this work was received from EDCTP through project TA 2011,40200.052. The ISP (Sweden) funding for Milcah's PhD scholarship is also acknowledged.
Department of Molecular Sciences, African Institute of Biomedical Science and Technology, Dominion House, 211 Herbert Chitepo Street, P.O. Box 2294, Harare, Zimbabwe
Milcah Dhoro, Bernard Ngara & Collen Masimirembwa
Department of Clinical Pharmacology, College of Health Sciences, University of Zimbabwe, Harare, Zimbabwe
Milcah Dhoro & Charles Nhachi
Department of Medicine, Division of Clinical Pharmacology, Faculty of Medicine and Health Sciences, Stellenbosch University, Stellenbosch, South Africa
Simbarashe Zvada
Department of Medicine, College of Health Sciences, University of Zimbabwe, Harare, Zimbabwe
Gerald Kadzirange
Department of Harare City Health, Harare, Zimbabwe
Prosper Chonzi
Milcah Dhoro
Bernard Ngara
Charles Nhachi
Collen Masimirembwa
Correspondence to Milcah Dhoro.
MD participated in design of the study, carried out the molecular genetic studies, participated in EFV plasma concentration determination, acquisition, analysis and interpretation of data and drafting of the manuscript. SZ carried out the pharmacokinetic modeling of EFV data and participated in drafting of the manuscript. BN performed the statistical analysis of the data and participated in the pharmacokinetic modeling of the EFV data and drafting of the manuscript. CN was involved in critical revision of the manuscript for important intellectual content and supervision of the project. GK participated in data acquisition of the patients and was involved in critical revision of the manuscript. PC was involved in study design and implementation logistics at Wilkins Hospital. CM conceived the study and participated in its design and co-ordination, also participated in drafting of the manuscript and supervision of the work. All authors read and approved the final manuscript.
Dhoro, M., Zvada, S., Ngara, B. et al. CYP2B6*6, CYP2B6*18, Body weight and sex are predictors of efavirenz pharmacokinetics and treatment response: population pharmacokinetic modeling in an HIV/AIDS and TB cohort in Zimbabwe. BMC Pharmacol Toxicol 16, 4 (2015). https://doi.org/10.1186/s40360-015-0004-2
Central nervous system adverse effects
CYP2B6
|
CommonCrawl
|
anova exercises with answers
Posted by December 18, 2021 andover minnesota events on anova exercises with answers
DOC ANOVA Exercises Title: ANOVA Exercises Author: Bryan W. Griffin Last modified by: Bryan W. Griffin Created Date: 4/29/2009 12:02:00 AM Company: Georgia Southern University Analyse the data in File 08 ANOVA.xlsx worksheet Ex 8.5.1 and answer the questions shown below. The results of running a 2*3 ANOVA on Minitab are presented below. Types of categorical variables include: Ordinal: represent data with an order (e.g. There were 100 people available for the study. 28. This chapter describes how to compute and . 5 10 9 Y..= 8 i −Y Y. A two-way ANOVA is performed and the f-ratio for factor A is calculated to be 4.8. In that case we always come to the same conclusions regardless of which method we use. ANOVA in Excel. 1. r/AskStatistics. Introduction to ANOVA and Experimental Design. ! Repeated Measures ANOVA Practice Problems 1. Exercise 36 Analysis Of Variance Anova I. Print results. In this example, we find that there is a statistically significant difference in mean weight loss among the four diets considered. Oneway ANOVA practice problems with solutions. The ANOVA Interpretation Exercise assignment Must be two to three double-spaced pages in length (not including title and references pages) and formatted according to APA Style as outlined in the Writing Center's APA Style (Links to an . 13.7 Using the data from the example in Exercise 13.6 and the ANOVA table from that exercise, determine the p-value for the test (use the F statistic and the appropriate degrees of freedom based on the within and between sum of squares). If the truth is linearity, the regression will have a bit more power. An ANOVA (Analysis of Variance), sometimes called an F test, is closely related to the t test. SS is the Sum of Squares or also known as SS(between). ANOVA-1 Source of Variation SS df MS F Between Groups 121.8 3 40.6 5.905 Within Groups 110 16 6.875 Total 231.8 19 من٤ ٨ 5. For each Conduct and interpret ANOVA. Such models include the one-way Analysis of Variance . a. LAYERED LEARNING. Study infants were grouped into four groups, according to reinforcement of walking and placement: (1) active (2) passive (3) no exercise; and (4) 8 week control. Answer. Your expectation is that each of the 3 experimental treat- Answers Two Factor ANOVA A two factor ANOVA (sometimes called 'two-way' ANOVA) is a hypothesis test on means for which groups are placed in groups that vary along two factors instead of one. D. the null hypothesis H0 is accepted if p <0.05 . a. provide no evidence for, or against, the null hypothesis of ANOVA b. represent evidence for the null hypothesis of ANOVA c. represent evidence against the null hypothesis of ANOVA d. can be very misleading, you should not be looking at box plots in this setting 29. Members. It is often used in hypothesis testing to determine whether a process or treatment actually has an effect on the population of interest, or whether two groups are different from one another. totsatis. Two-way Analysis of Variance (two-way ANOVA, "àP−px ') with interaction One-way ANOVA may be used to determine whether the means of a re-sponse under di erent levels of a factor are all the same. Transcribed image text: Use the following information to answer all ANOVA questions from part a to parte: (For ANOVA questions, answer in the boxes. The critical ingredient for a one-factor, between-subjects ANOVA, is that you have one independent variable, with at least two-levels. Developed by Ronald Fisher in 1918, this test extends the t and the . In this module, we will introduce the basic conceptual framework for experimental design and define the models that will allow us to answer meaningful questions about the differences between group means with respect to a continuous variable. Some readers may want to skip more advanced sections or those that explain Optional the mathematics behind the methods in more detail; these are marked with an sections asterisk '*' in the section title and in the table of . The largest 500 companies in the world were ranked according to their number of employees, with groups defined as follows: Small = Under 25,000 employees, Medium = 25,000 to 49,999 emplo (1972) investigated the variability in age at first walking in infants. Example Treatment 1 Treatment 2 Treatment 3 4 9 8 5 10 11 6 11 8 Yi. You'll have to do t-tests to find out. 172. DO NOT upload any file). The second edition has 243 worked examples and 646 exercises for students, increases of 76% and 63% over the first edition. 2. ANOVA and an independent samples t-test is when the explanatory variable has exactly two levels. ANOVA in Excel; How to Find Anova Add-ins In excel? THE MODEL BEHIND LINEAR REGRESSION 217 One-Way ANOVA •Simplest case is for One-Way (Single Factor) ANOVA The outcome variable is the variable you're comparing The factor variable is the categorical variable being used to define the groups-We will assume k samples (groups) The one-way is because each value is classified in exactly one way •ANOVA easily generalizes to more factors If we define s = MSE, then s i s a n e s t i m a t e o f t h e common population standard deviation, σ, of the populations under consideration. ANOVA Exercises Pages ANOVA, like GOF, is inconclusive when you get a small (less than 5%) P - value; it doesn't tell you which means are different…just that at least one is.You should always do post hoc testing once you're gotten a small P - value after an ANOVA ( ), just like you did with 2 GOF tests. On each night of testing the subject is allowed a total of four hours of sleep. Just like last time, I want to make sure you had the steps for ANOVA post hoc . QNT 561 Week 4 Weekly Learning Assessments - Assignment Chapter 10 Exercise 2 [The following information applies to the questions displayed below.] brands or species names). Groups Comparison With ANOVA: Exercises (Part 2) On this 2nd part of groups comparison exercise, we will focus on nested ANOVA application in R, particularly the application on ecology. Sleep researchers decide to test the impact of REM sleep deprivation on a computerized assembly line task. In that case we always come to the same conclusions regardless of which method we use. For exercises 5-8, decide whether the statement makes sense (or is clearly true) or does not make sense (or is clearly false) Explain clearly. D. Kruskal-Wallis . ANOVA Practice Problems 1. Their weights are recorded after a few days. a. provide no evidence for, or against, the null hypothesis of ANOVA b. represent evidence for the null hypothesis of ANOVA c. represent evidence against the null hypothesis of ANOVA d. can be very misleading, you should not be looking at box plots in this setting 29. F Distribution and ANOVA 13.1 F Distribution and ANOVA1 13.1.1 Student Learning Objectives By the end of this chapter, the student should be able to: Interpret the F probability distribution as the number of groups and the sample size change. Revised on December 14, 2020. A sample of 36 observations is selected from a normal population. 9.1. If you are proficient with the . Each group will contain n observations. Amanager of a department store wants to compare the average number of items being sold in each 3 sections of the store during the weekdays (5 days). Opening an Existing Data File Existing data les are usually in either SPSS format, Excel format, or Text Solution for For each of the studies below, please identify the following (or indicate N/A "not applicable" if the feature is not relevant to the design): a)… The effect of the exercises on the 5 groups of men is compared. Vergil 23 June 2021 at 06:45 on Examining Data Exercises Hello, could you tell me where I can find the reference answers to the exercises? Let's Spread the Word about R-exercises! But looking at the numbers above, you can guess that Blue Label is most likely to be the big winner. For example, we might want to measure BMI for subjects with di erent diets AND for di erent levels of exercise. How to Use ANOVA in Excel? Study infants were grouped into four groups, according to reinforcement of walking and placement: (1) active (2) passive (3) no exercise; and (4) 8 week control. This is the last part of groups comparison exercise.Previous exercise can be found here Answers to the exercises are available here. Critical value of F = 3.68. A t-test is a statistical test that is used to compare the means of two groups. 10 IBM SPSS Statistics 19 Step by Step Answers to Selected Exercises 12. Using the helping3.sav file, select females (gender = 1) who give more than the average amount of Exercise tests were performed up to maximal effort until symptoms, such as angina, were present. Exercises #1-6 Zelazo et al. Sample sizes were 6 per group, for a total of n=24. This is data from an experimental study evaluating are some questions to answer, based on the output of the task. The following data are consistent with summary information on price per. If we define s = MSE, then of which parameter is s an estimate? In ANOVA we use variance-like quantities to study the equality or non-equality of population means. Summary statistics and the ANOVA results are shown below. Questions and Answers. A study was done to see if meditation would reduce blood pressure in patients with high blood pressure. Is there a statistically significant difference in the ages among the groups? The result of a statistical test, denoted p, shall be interpreted as follows: A. the null hypothesis H0 is rejected if p <0.05 . ANOVA is a test that provides a global assessment of a statistical difference in more than two independent means. 2.12.2 Levene's Test To perform Levene's Test: 1. B. Levene's Test is robust because the true signi cance level is very close to the nominal signi cance (answers will vary, of course) A sample of 107 patients with one-vessel coronary artery disease was given percutaneous transluminal coronary angioplasty (PTCA). Nominal: represent group names (e.g. An exercise in the prior chapter looked at bronchial reactivity in asthmatics following exposure to noxious gases. ! Now conduct post hoc comparisons using the LSD method (bronch-react.sav). The one-factor ANOVA is sometimes also called a between-subjects ANOVA, an independent factor ANOVA, or a one-way ANOVA (which is a bit of a misnomer as we discuss later). rankings). Exercise 8.5.2 Conduct a StDev Test This time at Evelee golf balls someone has the bright idea that it might also be a good idea to find out if any of the moulding machines are producing golf balls at different standard deviations. by Matthew McBee. Textbook Authors: Triola, Mario F. , ISBN-10: 0321836960, ISBN-13: 978--32183-696-0, Publisher: Pearson Elementary Statistics (12th Edition) answers to Chapter 12 - Analysis of Variance - 12-2 One-Way ANOVA - Basic Skills and Concepts - Page 614 15 including work step by step written by community members like you. ANOVA Examples STAT 314 1. The sample mean is 12, and the population standard deviation is 3. › Download all Solutions. One-Way ANOVA Calculations: In-Class Exercise Psychology 311 Spring, 2013 1. But you'll know more after you do the post tests. An introduction to t-tests. Transcribed image text: ANOVA (Include ANOVA table including source of variation, degrees of freedom, sum of squares, mean squares, the F value and the P value; and a Tukey HSD test if appropriate. Interpret your answer. 4. Since the test statistic = F = 5.905 > 3.239, reject the null hypothesis. C. the alternate hypothesis H1 is rejected if p> 0.05 . 3. Published on January 31, 2020 by Rebecca Bevans. t C. Wilcoxon . Anyway, "conducting ANOVA" is an exercise, but what's more important is to understand what question the ANOVA will answer. 2 CHAPTER 0. Sign In. Side-by-side boxplots like these in both gur es reveal differences between samples For the ANOVA table please include the values for each letter and not the formulas) For the answer to letter L. only state whether the ANOVA is significant or not. One-way Analysis of Variance (Abbreviated one-way ANOVA) is a technique used to compare means of two or more samples (using the F distribution). The critical f-ratio at a significance level of alpha = 0.05 is 4.3. Which is the best coffee. If you enjoy our free exercises, we'd like to ask you a small favor: Please help us spread the word about R-exercises. A. ANOVA . . Now ANOVA and regression give different answers because ANOVA makes no assumptions about the relationships of the three population means, but regression assumes a linear relationship. New exercises and examples. Take a look at Figures 12.2(a) and 12.2(b) (p. 746) in your text. Sign In. The data was There are two different data analytic situations that you must become familiar with. For example, an ANOVA can examine potential differences in IQ scores by Country (US vs. Canada vs. Italy vs. Spain). ch 06 - example 01 - AnoVA.sav. Subjects are required to participate in two nights of testing. Cancel. Discuss two uses for the F distribution, ANOVA and the test of two variances. Explain your answer. 15 30 27 Y..= 72 Yi. Practice Final Exam on ANOVA, Statistics 110/201, page 2 of 3 3. The term \analysis of variance" is a bit of a misnomer. The effect of one dependent variable differs based on the level of a second dependent variable. ANOVA SPSS Hw. The major di erence is that, where the t test measures the di erence between the means of two groups, an ANOVA tests the di erence between the means of two or more groups A one-way ANOVA, or single factor ANOVA as mentioned in EXCEL, Depressed patients were randomly assigned to one of three groups: a placebo group, a group that received a low dose of the drug, and a group that received a moderate dose of the drug. 35.57 6.489 .495 34.59 36.55 19 50 127 33.34 6.558 .582 32.19 34.49 18 48 Title: ANOVA Exercises Author: Bryan W. Griffin Last modified by: Bryan W. Griffin Created Date: 4/29/2009 12:02:00 AM Company: Georgia Southern University Thus, the decision rule is: If the test statistic F > 3.239, reject the null hypothesis Otherwise do not reject The samples are independent and the data level . University of Missouri, St. Louis IRL @ UMSL Open Educational Resources Collection Open Educational Resources 11-13-2018 An Introduction to Psychological Statistics Interpret your answer. ANOVA and an independent samples t-test is when the explanatory variable has exactly two levels. Username or Email. . Answer the following questions: Explain why the ANOVA is called an omnibus test, and how the omnibus test is different from comparing specific means with t-tests. The effect of one independent variable differs based on the level of a second independent variable. Her weight is the only one factor. t test and ANOVA (analysis of variance) are so similar that this chapter will use the same example and the same 10 exercises used in Chapter 5 (t Test); the only difference is that the data sets have been enhanced to include a third or fourth group. Chapter 13 F Distribution and One-Way ANOVA Alternate Sequencing IntroductoryStatistics was conceived and written to fit a particular topical sequence, but it can be used flexibly to Run an ANOVA on the set of z ij values. Answer. The term \analysis of variance" is a bit of a misnomer. ii) within-subjects factors, which have related categories also known as repeated measures (e.g., time: before/after treatment). Excel ANOVA (Table of Contents). More Practice Problem ANSWERS: 1-Way ANOVA. ANSWERS TO EXERCISES AND REVIEW QUESTIONS. Compare the means of three or more samples using a one-way ANOVA (Analysis of Variance) test to calculate the F statistic. Interpret the ANOVA results in terms of statistical significance and in relation to the research question. Explain the reason for the word variance in the phrase analysis of variance. Understanding Two Way ANOVA Minitab output. Chapter 6 now focuses on block designs and the additive ANOVA model, with interaction coming in Chapter 7, and additional ANOVA topics in Chapter 8. Binary: represent data with a yes/no or 1/0 outcome (e.g. ANOVA is a statistical technique that assesses potential differences in a scale-level dependent variable by a nominal-level variable having 2 or more categories. 1.3 Basic Idea of ANOVA Analysis of variance is a perfectly descriptive name of what is actually done to analyze sample data ac-quired to answer problems such as those described in Section 1.1. The Mixed ANOVA is used to compare the means of groups cross-classified by two different types of factor variables, including: i) between-subjects factors, which have independent categories (e.g., gender: male/female). (1 point) Explain the general similarity between the ratios that are used to calculate the F value and the t-value (1 point) General grading. Last updated about 5 years ago. (1972) investigated the variability in age at first walking in infants. ANOVA (Analysis of Variance) in Excel is the single and two-factor method used to perform the null hypothesis test, which says if the test will be PASSED for Null Hypothesis if all the population values are exactly equal to each other. View Repeated+Measures+ANOVA+Practice+ANSWERS+CORRECTED.docx from PSY 2301 at Texas State University. Select all of the students in the grades.savfile whose previous GPA's are less than 2.00, and whose percentages for the class are greater than 85. Chapter 12.1 Exercises: One-Way Analysis of Variance The F critical value from the F- distribution for = 0.05 and with D1 = 4 - 1 = 3 and D2 = 20 - 4 = 16 degrees of freedom is 3.239. (This presumes, of course, that the equal-standard-deviations assumption holds.) A research study was conducted to examine the clinical efficacy of a new antidepressant. If p-value , reject H oand conclude the variances are not all equal. Choose the test that fits the types of predictor and outcome variables you have collected (if you are doing an . Practice Problems: TWO-FACTOR ANOVA. 3.5 In this exercise a variable with eight different responses will be recoded into another . $$\begin{array}{r|cccccc} \hbox{Area 1} &6.2 &9.3 &6.8 &6.1 &6.7 . Patients were given exercise tests at baseline and after 6 months of follow up. C8057 (Research Methods II): One-Way ANOVA Exam Practice Dr. Andy Field Page 2 4/18/2007 Banana Reward Observing Monkey Observing Human 17 15 115 8 71 13 13 8 13 13 9 6 Mean 7.00 8.00 11.00 Variance 36.00 25.00 14.50 Grand Mean Grand Variance 8.67 24.67 • Carry out a one-way ANOVA by hand to test the hypothesis that some forms of learning Complete Solutions. One-way ANOVA Central:ANOVA-table Df Sum Sq Mean Sq F value Pr(>F) (Intercept) 1 1764789.14 1764789.14 844.27 0.0000 group 2 15515.77 7757.88 3.71 0.0436 Residuals 19 39716.10 2090.32 ANOVA decomposes variance of the observations (\total") into contributions of the single sources (sources of variation): Solutions to Chapter Exercises and SPSS Exercises. -Using this model we can estimate τi or εij for any observation if we are given Yij and μ. Practice Problems: ANOVA. In ANOVA we use variance-like quantities to study the equality or non-equality of population means. Sample sizes were 6 per group, for a total of n=24. INTRODUCTION TO SPSS Figure 0.1: Dialog box for opening a data le or entering data. Two way ANOVA (d) Celia would like to know which is a better predictor of negative affect: optimism or self-esteem. ANSWERS TO EXERCISES AND REVIEW QUESTIONS . Are large companies more profitable per dollar of assets? Exercises #1-6 Data set: anova_infants.Rdata Zelazo et al. › Sample. DAT 565 - Week 4 Exercise. 13. The . Ask a question about statistics (other than homework) 48.6k. Half of the patients were randomly chosen to learn meditation and told to practice it for half an hour a day. Conduct a one-way ANOVA with post hoc tests to compare staff satisfaction scores (totsatis) across each of the length of service categories (use the servicegp3 variable). Multiple Regression Example of One Way ANOVA. F is significant. Exercises - One Way Analysis of Variance (ANOVA) Per-pupil costs (in thousands of dollars) for cyber charter school tuition for school districts in three areas are shown. DF is degrees of freedom, Coupon Level has 1 DF(2 levels - 1=1) and In Store Promotion has 2 DF(3 levels-1=2). Descriptives. Assumptions Sample answers are found at the end of each section. QUESTION 1. The analysis revealed significant differences in airway reactivity. Password. Forgot your password? ANOVA was used to test the outcomes of three drug treatments. Test the claim that there is a difference in means for the three areas, using an appropriate parametric test. Not all of these statements have definitive answers so. b. Calculate each z ij= jy ij y ij: 2. ANOVA was used to test the outcomes of three drug treatments. Chapter 12 Exercises: Analysis of Variance 12-2 page 517: [Another way] Answer (2) : , , ! The second is when you have a data set with several variables in it and must select among them to answer each of the research questions/hypotheses. Solutions to all Chapter Exercises and SPSS Exercises are provided here for reference. This technique can be used only for numerical data. Suppose that a random sample of n = 5 was selected from the vineyard properties for sale in Sonoma County, California, in each of three years. You are planning an experiment that will involve 4 equally sized groups, including 3 experimental groups and a control. This is the variation in Y related to the variation in the means of . It consists of a single factor with several levels and multiple observations at each level. The first is when you have only the data you need for a specific analysis. The question will govern what to do with covariates. Student . In Factorial ANOVA, you have an interaction effect when: Select one: a. 20 people are selected to test the effect of five different exercises. 2-way ANOVA Example Solution: Researchers have sought to examine the effects of various types of music on agitation levels in patients who are in the early and middle stages of Alzheimer's disease. On the nights of testing EEG, EMG, EOG measures are taken. 20 people are divided into 4 groups with 5 members each. One-Way ANOVA •Simplest case is for One-Way (Single Factor) ANOVA The outcome variable is the variable you're comparing The factor variable is the categorical variable being used to define the groups-We will assume k samples (groups) The one-way is because each value is classified in exactly one way •ANOVA easily generalizes to more factors b. Construct an ANOVA table. Answers to additional business exercises Chapter 18 One-way ANOVA . ..-3 2 1 -We can now write the linear model for each observation (Yij ).-Write in μ for each observation. Two-way ANOVA may be used when there are two factors, and we want to determine (i) whether the means of a response under di erent levels of . A research study was conducted to examine the impact of eating a high protein breakfast on adolescents' performance during a physical education physical fitness test. win or lose). This video shows one method for de. B. the null hypothesis H0 is rejected if p> 0.05 . Half of the subjects received a high protein breakfast and half were given a low protein breakfast. In addition to reporting the results of the statistical test of hypothesis (i.e., that there is a . Four alternative advertizing campaigns were considered: A TV . d. Main effects of the dependent variable only. Patients were selected to participate in the study based on their stage of Alzheimer's disease. Regular exercises in the design and analysis of experiments: Exercise 10 A cornflakes company wishes to test the market for a new product that is intended to be eaten for breakfast. Primarily two factors are of interest, namely an advertizing campaign and the type of emballage used.
Pure Wave Massager, 2001 Iowa Wrestling Roster, Wwl New Orleans Saints Radio, Moroccanoil Intense Curl Cream Vs Curl Defining Cream, Cuisipro 6 Sided Box Grater, ,Sitemap,Sitemap
anova exercises with answerswhat to make with pork stew meat besides stew
|
CommonCrawl
|
A magazine for the mathematically curious
Order Issue 16 now!
The first negative dimension
Crossnumber winners, Issue 15
Chalkdust issue 16 – Coming 9 November
Chalkdust issue 15 – Coming 25 May
Chalkdust Book of the Year 2021
Chalkdust issue 14 – Coming 22 November
+ more articles
In conversation with Martin Hairer
I don't like science fiction
Who's your font soulmate?
A canal runs through it
Quarto in higher dimensions
On the cover: Some assembly required
Things with silly names
Dog leads, mirrors and Hermann Minkowski
Spam calls and number blocking
2022 Fields medal winners
Formal sums
In conversation with Sammie Buzzard
Surviving the bridge in Squid Game
On the cover: Regular tilings
The power of curly brackets
My favourite LaTeX package
The hidden harmonies of Hamming distance
Significant figures: Gladys West
Accidentally mathematical songs
Slaying the dragon
Covering infinity
An uncomfortable truth
Big argument
Conference bingo
Dear Dirichlet
How to make...
Page 3 model
What's hot and what's not
Hall of lame
Crossnumber
Order copies
Nik Alexandrakis explains what they are and what they can tell us
Nik Alexandrakis
Some two thousand years ago in ancient Greece, philosophers Aristotle and Zeno asked some interesting thought-provoking questions, including a case of what are today known as Zeno's paradoxes. The most famous example is a race, known as Achilles and the tortoise. The setup is as follows:
Achilles and a tortoise are having a race, but (in the spirit of fairness) the tortoise is given a headstart. Then Achilles will definitely lose: he can never overtake the tortoise, since he must first reach the point where the tortoise started, so the tortoise must always hold a lead.
Since there are virtually infinitely many such points to be crossed, Achilles should not be able to reach the tortoise in finite time.
This argument is obviously flawed, and to see that we consider the point of view of the tortoise. From the tortoise's perspective, the problem is equivalent to just Achilles heading towards it at the speed equal to the difference between their speeds in the first version of the problem.
Since $\text{distance} = \text{speed}\times\text{time}$, we can say that after time $t$, Achilles has travelled a distance equal to $v_At$ and the tortoise $v_Tt$. The distance between them is
\begin{equation*}
D-v_At+v_Tt = D – (v_A-v_T)t,
\end{equation*}
and so Achilles catches the tortoise—ie the distance between them is $0$—when the time, $t$, is equal to $D/(v_A-v_T)$.
There is another way to see this problem that satisfies better the purpose of this article and directly tackles the problem Aristotle posed. To get to where the tortoise was at the start of the race, Achilles is going to travel the distance $D$ in time $t_1 = D/v_A$. By that time the tortoise will have travelled a distance equal to
D_1 = v_T t_1,
which is the new distance between them.
Travelling this distance will take Achilles time
t_2 = \frac{D_1}{v_A} = \left(\frac{v_T}{v_A}\right)t_1.
Then the tortoise will have travelled a distance
D_2 = v_Tt_2 = \left(\frac{v_T^2}{v_A}\right)t_1
and Achilles will cover this distance after time $t_3 = D_2/v_A = (v_T/v_A)^2t_1$.
Repeating this process $k$ times we notice that the distance between Achilles and the tortoise is
D_k &= v_T\left(\frac{v_T}{v_A}\right)^{k-1}t_1 \\ &= \left(\frac{v_T}{v_A}\right)^kD.
Summing up all these distances we get how far Achilles has to move before catching the tortoise: if we call this $D_A$ it's
D_A =
\lim_{n\to\infty}\sum_{k=0}^{n}D_k =\sum_{k=0}^{\infty}D_k = D \sum_{k=0}^{\infty}\left(\frac{v_T}{v_A}\right)^k.
This is probably the simplest example of an infinite convergent sum. In particular, this is the simplest example of a class of sums called geometric series, which are sums of the form
\sum_{k=0}^{n}a^k.
If $|a| < 1$, the sum tends to $(1-a)^{-1}$ as $n$ tends to $\infty$ and diverges otherwise, meaning that it either goes to $\pm\infty$, or a limiting value just doesn't exist. By 'doesn't exist', see for example what happens if $a=-1$: we get \begin{equation*} 1 - 1 + 1 - 1 + 1 - 1 + \cdots + (-1)^n \end{equation*} and the sum oscillates between $0$ (if $n$ is odd) and $1$ (if $n$ is even). In our case, $|a|=|v_T/v_A|<1$ so \begin{equation*} D_A = \frac{D}{1-v_T/v_A} = \frac{D v_A}{v_A-v_T}, \end{equation*} which, when divided by the speed of Achilles, $v_A$, gives exactly the time we found before. So Achilles and the tortoise will meet after Achilles has crossed a distance $D_A$ in time $t_A = D_A/v_A$. Thousands of years later, Leonhard Euler was thinking about evaluating the limit of \begin{equation*} \sum_{k=1}^{n}\frac{1}{k^2} = 1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + \cdots + \frac{1}{n^2} \end{equation*} as $n\to\infty$, which is named as the Basel problem after Euler's hometown. This sum is convergent and it equals $\mathrm{\pi}^2/6$, as Euler ended up proving in 1734. He was one of the first people to study formal sums—which we will try to define shortly—and concretely develop the related theory. In his 1760 work De seriebus divergentibus he says
Whenever an infinite series is obtained as the development of some closed
expression, it may be used in mathematical operations as the equivalent of
that expression, even for values of the variable for which the series diverges.
So let's think about series which diverge. One way a series can diverge is simply by its terms getting bigger. One such example is the sum
\sum_{k=1}^{n}k = 1 + 2 + 3 + 4 + \cdots + n = \frac{n(n+1)}{2},
the limit of which when $n\to\infty$ is, of course, infinite.
But now let's think about the harmonic series,
\sum_{k=1}^{n}\frac{1}{k} = 1 + \frac12 + \frac13 + \frac14 + \cdots + \frac1n.
This time, although the terms themselves get smaller and smaller, the series still diverges as $n\to\infty$. But we can still describe the sum and its behaviour. It turns out that
\sum_{k=1}^{n}\frac1k = \ln(n)+\gamma + O(1/n),
where $\gamma$ is the Euler–Mascheroni constant, which approximately equals 0.5772, and $O(1/n)$ means 'something no greater than a constant times $1/n$'. You can see the sum and its approximation here:
Historically, the development of the seemingly unconventional theory of divergent sums has been debatable, with Abel, who at some point made contributions to the area, once describing them as shameful, calling them "an invention of the devil". Later contributions include works of Ramanujan and Hardy in the 20th century, about which more information can be found in the latter's book, Divergent Series.
More recently, a video on the YouTube channel Numberphile was published, and attempted to deduce the 'equality'
1+2+3+4+\cdots=-\frac{1}{12}.
This video sparked great controversy, and indicates one of the dangers of dealing with divergent sums. One culprit here is the Riemann zeta function, which is defined for $\operatorname{Re}(s)>1$ as
\zeta(s)=\sum_{k=1}^{\infty}\frac{1}{k^s}.
When functions are only defined on certain domains, it is sometimes possible to 'analytically continue' them outside of these original domains. Specifically at $-1$, doing so here gives $\zeta(-1)=-1/12$. The other culprit here is matrix summation—another method to give some value to divergent sums. By sheer (though neat) coincidence, these methods, such as the Cesáro summation method they use in the video, also give $-1/12$!
The main problem is this: at this point we no longer have an actual sum in the traditional sense.
Instead, we have a divergent sum which is formal, and by that, we mean that it is a symbol that denotes the addition of some quantities, regardless of whether it is convergent or not: it simply has the form of a sum.
These sums are not just naive mathematical inventions, instead, they show up in science and technology quite frequently and they can give us good approximations as they often emerge from standard manipulations, such as (as we'll see) integration by parts.
Applications in physics can be found in the areas of quantum field theory and quantum electrodynamics. In fact, formal series derived from perturbation theory can give very accurate measurements of physical phenomena like the Stark effect and the Zeeman effect, which characterise changes in the spectral lines of atoms under the influence of an external magnetic and electric field respectively.
In 1952, Freeman Dyson gave an interesting physical explanation of the divergence of formal series in quantum electrodynamics, explaining it via the stability of the physical system versus the spontaneous, explosive birth of particles in a scenario where the corresponding series that describes it is convergent. Essentially he argues that divergence is, in some sense, inherent in these types of systems otherwise we would have systems in pathological states. His paper from that year in Physical Review contains more information.
Euler's motivation
Sometimes, such assignments of formal sums to finite values (constants or functions) can be useful. The fact that they sometimes diverge does not make much difference in the end, if certain conditions are met.
An example that follows Euler's line of thought as described earlier emerges when trying to find an explicit formula for the function
\operatorname{Ei}(x):=\int_{-\infty}^{x}\frac{\mathrm{e}^t}{t}\, \mathrm{d} t,
for which repeated integration by parts yields
\operatorname{Ei}(x) = \int_{-\infty}^x\frac{\mathrm{e}^t}{t}\, \mathrm{d} t =& \left[\frac{\mathrm{e}^t}{t}\right]_{-\infty}^{x}-\int_{-\infty}^{x}\mathrm{e}^{t}\frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{1}{t}\right) \mathrm{d} t \\
=& \left[\frac{\mathrm{e}^x}{x} – 0\right] + \int_{-\infty}^{x}\frac{\mathrm{e}^{t}}{t^2} \mathrm{d} t \\
= \cdots =& \frac{\mathrm{e}^{x}}{x}\sum_{k=0}^{n-1}\frac{k!}{x^{k}}+O\left(\frac{\mathrm{e}^x}{x^{n+1}}\right),
where we've been able to say $\mathrm{e}^x/x\to 0$ on the second line as $x\to-\infty$. Dividing through by $\mathrm{e}^x$, this allows us to say
\mathrm{e}^{-x}\operatorname{Ei}(x) &= \sum_{k=0}^{n-1}\frac{k!}{x^{k+1}}+O\left(\frac{1}{x^{n+1}}\right)\\
&\sim\sum_{k=0}^{\infty}\frac{k!}{x^{k+1}}\text{ as }x\to\infty.
Now swap $x$ for $-1/x$ in this equation:
\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)&\sim\sum_{k=0}^{\infty}k!(-x)^{k+1}\text{ as }x\to0\\
&=-x+x^2-2x^3+6x^4+\cdots.
As you can see below, this series now diverges as $x\to\infty$, but we still see convergence of the partial (truncated) sums as $x\to0$, even as we add more terms:
Euler noticed that $\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)$, in its original integral form, solves the equation
x^2\frac{\mathrm{d}y}{\mathrm{d}x}+y=-x
(for $x\neq0$). Now here's the thing: the formal sum
\sum_{k=0}^{\infty}k!(-x)^{k+1},
to which $\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)$ is asymptotic as $x\to0$, also (formally) 'solves' the same equation for any $x$.
This solution is not unique, and in fact, adding any constant multiple of $\mathrm{e}^{1/x}$ to $\mathrm{e}^{1/x}\operatorname{Ei}(-1/x)$ would still solve the equation; and the resulting solution would still be asymptotic to the same formal sum.
However, the coefficients of the powers of $x$ are unique. So there may be something in the formal sum that can give away the actual solution of the equation (which is often difficult to find via standard methods—unlike formal solutions that are easier to compute like the one above), at least up to some class of solutions and under certain conditions. In fact, this seems to actually be the case, at least for certain classes of formal sums—the ones that attain 'at most' factorial over power rate of divergence.
Solving a differential equation
To elaborate further, let's consider one more example, the differential equation
-\frac{\mathrm{d}y}{\mathrm{d}x}+y = \frac{1}{x}, \quad \text{where } y(x)\to 0 \text{ as }x \to \infty.
\label{fs1}
\tag{*}
Thinking about the boundary condition there, we could substitute in the (formal) sum of powers of $x$ which decay away as $x\to\infty$,
y(x) = \sum_{k=0}^\infty a_k x^{-k-1} = \frac{a_0}{x} + \frac{a_1}{x^2} + \frac{a_2}{x^3} + \cdots.
Doing so, we get
-\frac{\mathrm{d}}{\mathrm{d}x}\left[\sum_{k=0}^{\infty}a_kx^{-k-1}\right]+\sum_{k=0}^{\infty}a_kx^{-k-1} &= \frac{1}{x}
\\ \implies \sum_{k=0}^{\infty}(k+1)a_kx^{-k-2}+\sum_{k=0}^{\infty}a_kx^{-k-1} &=
\frac{1}{x} \\
\implies a_0x^{-1}+\sum_{k=0}^{\infty}\big[(k+1)a_k+a_{k+1}\big] x^{-k-2} &=\frac{1}{x}.
Then for our differential equation to be satisfied the coefficients have to satisfy
a_0=1 \qquad \text{and}
(k+1)a_k+a_{k+1}=0\implies a_{k+1} = -(k+1)a_k
which recursively means that $a_k=(-1)^kk!$ and our formal sum solution is
y(x) = \sum_{k=0}^{\infty}(-1)^k k!x^{-k-1}.
Now that we have a sum that solves the equation formally, we can obtain an actual solution assuming that it is asymptotic to the sum we found as $x\to\infty$ by using the repeated integration by parts result
\int_{0}^{\infty}\mathrm{e}^{-xs} s^k \,\mathrm{d} s = k! x^{-k-1} \text{ for }x>0,
which implies
y(x) &= \sum_{k=0}^{\infty}(-1)^{k}k!x^{-k-1} \\
&= \sum_{k=0}^{\infty}(-1)^k\int_{0}^{\infty}\mathrm{e}^{-xs}s^k \,\mathrm{d} s \\
&= \int_{0}^{\infty}\mathrm{e}^{-xs}\sum_{k=0}^{\infty}(-1)^ks^k \,\mathrm{d} s.
How is that helpful? Well, for $s:|s|<1$ we know that \begin{equation*} \sum_{k=0}^{\infty}(-1)^ks^{k} = 1-s+s^2+\cdots = \frac{1}{1+s}, \end{equation*} by the formula for geometric series for $a=-s$ from our discussion of Achilles and the tortoise. This is a nice function on the real line, having all the fine properties that we need in order to define \begin{equation*} y(x)=\int_{0}^{\infty}\frac{\mathrm{e}^{-xs}}{1+s}\,\mathrm{d}s, \end{equation*} which is the solution to our differential equation, \eqref{fs1}, we are looking for, and is also asymptotic to the formal sum $\sum_{k=0}^{\infty}(-1)^k k!x^{-k-1}$ as $x\to\infty$:
Notice that any linear combination of these formal sums will result from the same linear combination of the respective convergent (for $s:|s|<1$) series $1-s+s^2-s^3+\cdots$ inside the integral. In conclusion, it is possible to obtain a solution in closed form to a differential equation just by finding a formal power series to which the solution is asymptotic.
Not just reinventing the wheel
The aforementioned example is, of course, quite simple and trying to find a solution in the way we just described might look like we're reinventing the wheel using modern-era technology. However, the true potential of the method described above can be seen in nonlinear equations, to which we generally cannot find solutions in standard ways. In my own research I used formal sums to study an equation with applications in fluid mechanics.
In one of the first talks I gave about this topic, I remember noticing several of my peers tilting their heads in distrust when I mentioned that the emerging sums are divergent. This reaction was almost expected and for obvious reasons. It took an hour-long talk and several questions later to convince them that the mathematics involved is genuine.
Controversial as it may sound, at first sight, this concept is even more realistic than imaginary numbers, which are simply symbols with properties that we just accept and use. The idea is that, although imaginary, these numbers can demonstrably give us, when interpreted properly, very real results such as solutions to differential equations like
\frac{\mathrm{d}^2y}{\mathrm{d}x^2}+y=0.
The same is true for formal sums too.
Why do we assign actual numbers to formal sums in the first place? Because they are sometimes easier to work with and can lead to interesting results (such as solutions to differential equations) if interpreted properly. The underlying mechanisms should be well-defined mathematical processes and well-understood in order to avoid any serious mistakes when working with such sums. An example of erroneous use of such sums is Henri Poincaré's attempt to solve the three-body problem in order to win the King Oscar prize in 1889. He managed, however, in the next decade to spark the development of chaos theory. But that's for another time.
Nik Alexandrakis is a fourth-year PhD student at Lancaster University. Apart from differential equations, he is sometimes interested in environmental and animal rights activism. He hates writing but usually likes the outcome.
All articles by Nik
More from Chalkdust
We talk to the Fields medallist about his life, his work and his advice to his younger self.
Peach Semolina admits her true feelings about science fiction, and delves into the maths of quantum teleportation.
Colin Beveridge barges in and admires some curious railway bridges
Peter Rowlett is gonna need a bigger board
Albert Wood returns to introduce the work that has won mathematics' most famous award this year.
Michael Wendl really wants those spammers to stop calling him
← The big argument: Is Matlab better than Python?
Spam calls and number blocking →
@chalkdustmag
Tweets by @chalkdustmag
Chalkdust is published by Chalkdust Magazine, UCL, Gower Street, London WC1E 6BT, United Kingdom. ISSN 2059-3805 (Print). ISSN 2059-3813 (Online).
|
CommonCrawl
|
2022 CMS Winter Meeting
Toronto, December 2 - 5, 2022
meeting home
cms home
Plenary, Prize and Public Lectures
By speaker
Mini-Courses
AARMS - CMS Student Poster Session
CMS Code of Conduct
Scientific Sessions Student Poster Session
CMS Student Poster Session
STÉPHANIE ABO, University of Waterloo
Can the clocks tick together despite the noise? Stochastic simulations and analysis of the mean-field limit [PDF]
The suprachiasmatic nucleus (SCN), also known as the circadian master clock, consists of a large population of oscillator neurons. Together, these neurons produce a coherent signal that drives the body's circadian rhythms. What properties of the cell-to-cell communication allow the synchronization of these neurons, despite a wide range of environmental challenges such as fluctuations in photoperiods? To answer that question, we present a mean-field description of globally coupled neurons modeled as Goodwin oscillators with standard Gaussian noise. Provided that the initial conditions of all neurons are independent and identically distributed, any finite number of neurons becomes independent and has the same probability distribution in the mean-field limit, a phenomenon called propagation of chaos. This probability distribution is a solution to a Vlasov-Fokker-Planck type equation, which can be obtained from the stochastic particle model. We study, using the macroscopic description, how the interaction between external noise and intercellular coupling affects the dynamics of the collective rhythm, and we provide a numerical description of the bifurcations resulting from the noise-induced transitions. Our numerical simulations show a noise-induced rhythm generation at low noise intensities, while the SCN clock is arrhythmic in the high noise setting. Notably, coupling induces resonance-like behavior at low noise intensities, and varying coupling strength can cause period locking and variance dissipation even in the presence of noise.
MARYAM ALHAWAJ, University of Toronto
Generalized pseudo-Anosov Maps and Hubbard Trees [PDF]
The Nielsen-Thurston classification of the mapping classes proved that every orientation preserving homeomorphism of a closed surface, up to isotopy is either periodic, reducible, or pseudo-Anosov. Pseudo-Anosov maps have particularly nice structure because they expand along one foliation by a factor of $\lambda >1$ and contract along a transversal foliation by a factor of $\frac{1}{\lambda}.$ The number $\lambda $ is called the dilatation of the pseudo-Anosov. Thurston showed that every dilatation $\lambda$ of a pseudo-Anosov map is an algebraic unit, and conjectured that every algebraic unit $\lambda$ whose Galois conjugates lie in the annulus $A_\lambda =\{z: \frac{1}{\lambda} <|z|<\lambda\}$ is a dilatation of some pseudo-Anosov on some surface $S.$
Pseudo-Anosovs have a huge role in Teichmuller theory and geometric topology. The relation between these and complex dynamics has been well studied inspired by Thurston.
In this project, I develop a new connection between the dynamics of quadratic polynomials on the complex plane and the dynamics of homeomorphisms of surfaces. In particular, given a quadratic polynomial, we show that one can construct an extension of it which is generalized pseudo-Anosov homeomorphism. Generalized pseudo-Anosov means the foliations have infinite singularities that accumulate on finitely many points. We determine for which quadratic polynomials such an extension exists. My construction is related to the dynamics on the Hubbard tree which is a forward invariant subset of the Julia set that contains the critical orbit.
CINDY CHEN, University of Saskatchewan
SIR Infectious Disease Modelling with Vaccination [PDF]
The new coronavirus attacked the world in 2019, causing harm to people's lives and society in multiple aspects. It is therefore of high importance to develop reliable mathematical models that would be able to predict the development of similar pandemics under different scenarios, including vaccination strategies, to help inform governments and health care systems and facilitate optimal policy making.
In this work, we study an SIR ("Susceptible-Infected-Recovered") epidemic model that considers the time evolution of the three respective groups of population. Transitions between Susceptible, Infected, and Recovered groups are usually defined by constant coefficients, such as infection and recovery rates. The novel aspect of our model is making the coefficients time-dependent. This allows a significantly larger freedom in building the models and predicting the outcomes under different scenarios. As an example we choose the model coefficients to reflect a situation when, at a certain time, a vaccine is introduced. In this situation, it is shown that under the same parameters, vaccination leads to a significantly faster transition to a recovered population.
K. BHASKARA, A. COOK, McMaster University
Hadamard Product and Binomials Ideals [PDF]
We study the Hadamard product of two varieties $V$ and $W$, with particular attention to the situation when one or both of $V$ and $W$ is a binomial variety. The main result of this paper shows that when $V$ and $W$ are both binomial varieties, and the binomials that define $V$ and $W$ have the same binomial exponents, then the defining equations of $V \star W$ can be computed explicitly and directly from the defining equations of $V$ and $W$. This result recovers known results about Hadamard products of binomial hypersurfaces and toric varieties. Moreover, as an application of our main result, we describe a relationship between the Hadamard product of the toric ideal $I_G$ of a graph $G$ and the toric ideal $I_H$ of a subgraph $H$ of $G$. We also derive results about algebraic invariants of Hadamard products: assuming $V$ and $W$ are binomial with the same exponents, we show that $\deg(V\star W) = \deg(V)=\deg(W)$ and $\dim(V\star W) = \dim(V)=\dim(W)$. Finally, given any (not necessarily binomial) projective variety $V$ and a point $p \in \mathbb{P}^n \setminus \mathbb{V}(x_0x_1\cdots x_n)$, subject to some additional minor hypotheses, we find an explicit binomial variety that describes all the points $q$ that satisfy $p \star V = q\star V$.
JENNY LAWSON, University of Calgary
Optimality and Sustainability of Delayed Impulsive Harvesting [PDF]
Optimal and sustainable management of natural resources requires knowledge about the behaviour of mathematical models of harvesting under many different types of conditions. In this talk, we will be investigating the sustainability and optimality of delayed impulsive harvesting. Impulses describe an instantaneous change in a system due to some external effect (like harvesting in a fishery), which has a duration that is negligible compared to the overall time scale of the process. These impulses can then be combined with differential equations (DEs) to form impulsive DEs.
Delays within harvesting can represent a dependency on information that is out of date. Since it is likely that most data used to make harvesting decisions will be at least somewhat out of date, including delays within impulsive conditions is a topic of current interest. A close connection to the dynamics of high-order difference equations is used to conclude that while the inclusion of a delay in the impulsive condition does not impact the optimality of the yield, sustainability may be highly affected and is once again delay-dependent. Maximum and other types of yields are explored, and sharp stability tests are obtained for the model. It is also shown that persistence of the solution is not guaranteed for all positive initial conditions, and extinction in finite time is possible, which provides a possible explanation for observed but unforeseen population collapses. Overall, the results imply that delays within harvesting should be kept short to maintain the sustainability of resources.
LAILA MAHRAT, Lewis University
An Agent-Based Model of Environmental Transmission of Clostridioides difficile in Healthcare Settings [PDF]
Clostridioides difficile (C. difficile) is one of the most frequently identified healthcare-acquired infections in United States hospitals. Colonized patients, both symptomatic and asymptomatic, shed C. difficile endospores that can survive for long periods on surfaces outside the host and are resistant to many commonly-used disinfectants. Transmission pathways can include contact with both endospores on fomites, objects likely to carry infection, and endospore-carrying individuals. Our agent-based model simulates the spread of C. difficile within a hospital ward, focusing on transmission originating from environmental pathways and healthcare workers. Simulations can help determine effective control strategies to mitigate the spread of C. difficile in healthcare settings.
ANA MUCALICA, McMaster University
Solitons on the rarefaction wave background via the Darboux transformation [PDF]
Rarefaction waves and dispersive shock waves are generated from the step-like initial data in many nonlinear evolution equations including the classical example of the Korteweg-de Vries (KdV) equation. When a solitary wave is injected on the step-like initial data, it is either transmitted over or trapped inside the rarefaction wave background. We show that the transmitted soliton can be obtained by using the Darboux transformation for the KdV equation. On the other hand, we show with the help of numerical simulations that the trapped soliton disappears in the long-time dynamics of the rarefaction wave.
GAVIN OROK, University of Waterloo
Determining where Monte Carlo Outperforms Quasi-Monte Carlo for Functions Monotone in Each Coordinate in Dimensions 3 and Above [PDF]
The Quasi-Monte Carlo methods are one way to estimate the integrals of functions over high-dimensional cubes. They are a variation of standard Monte Carlo methods; instead of choosing random points inside the cube to calculate an estimate of the result, Quasi-Monte Carlo scrambles a deterministic set of points that are sufficiently uniform inside of the cube. This is often desirable as it limits gaps and clusters of points that can harm the quality of the estimate.
One problem of interest to researchers of Quasi-Monte Carlo is to determine cases where these methods will outperform standard Monte Carlo methods, by having a lower theoretical variance in the final result. Previous work by Lemieux and Wiart showed that for two-dimensional functions monotone in each coordinate, Quasi-Monte Carlo will always outperform Monte Carlo in this way.
In this presentation, we will consider the extension of this problem to functions monotone in each coordinate in dimensions three and above. First, using computer searches we will find cases in higher dimensions where Monte Carlo has a lower theoretical variance than Quasi-Monte Carlo. Then, we will extend these cases to higher dimensions and determine relationships between them using equivalence classes and translations defined on sets of vectors called antichains.
KEVIN MIN SEONG PARK, University of Toronto
Deep Reinforcement Learning for Viscous Incompressible Flow [PDF]
Numerical methods for approximating the solution to the incompressible Navier-Stokes equations typically solve discretized equations on a finite mesh of the domain, a computationally expensive process. We present a mesh-free method which can be easily scaled to irregular 3D geometries as we encode the domain and boundary through signed distance functions. The numerical solution is provided by a deep neural network trained on an objective that is derived from the expectation of a martingale stochastic process of the viscous Burgers equation, similar to Monte Carlo methods through the Feynman-Kac formula. We adopt a reinforcement learning paradigm of iterating the optimization step at every simulated increment of the It\^o process. The vector potential is encoded into the neural network architecture, thereby automatically satisfying the incompressibility condition without requiring the pressure term. Simulation of the It\^o process requires the true velocity, which we replace with the current approximation during the training procedure and we prove that this process is a fixed-point iteration in a simplified setting. This method is capable of numerically solving solutions to elliptic and parabolic partial differential equations. Deep learning is parallelizable and hyperparameters can be incorporated to solve a family of problems. We provide an example of flow past disk with a range of input flow speeds and viscosities, all provided by a single neural network, to highlight these advantages.
KALEB D. RUSCITTI, McGill University
The Verlinde formula for flat SU(2) connections using a toric degeneration [PDF]
The moduli space M of flat SU(2) connections has a prequantum line bundle L and a polarization, the data required for geometric quantization. Jeffrey and Weitsman have shown the moduli space M of flat SU(2) connections has Hamiltonian functions which almost exhibit M as a toric variety. If it were toric, the theory of toric varieties tells us that the space of global sections of L, which is the quantum data, has dimension computed by the Verlinde formula. Hurtubise and Jeffrey have constructed a "master space" P with both a symplectic and a holomorphic description, which is toric and should contain all the information of M. Holomorphically, P is a space of framed parabolic sheaves over a punctured Riemann surface, and by degenerating the original Riemann surface to the punctured one, the moduli space M degenerates to the master space P. The aim now is to see how the recent work of Harada, Kaveh and Khovansky makes rigorous the justification of the Verlinde formula obtained by point counting by Jeffrey and Weitsman, hence giving a new proof of the formula.
KATARINA SACKA, McMaster University
Applications of Next-Iterate Operators to Discrete Planar Maps. [PDF]
Two applications of next-iterate operators for discrete planar maps defined in the work by S.H. Streipert and G.S.K. Wolkowicz are explored. The time-delay equation \begin{equation*} x_{n+1}=\frac{\alpha+x_{n-1}}{A+x_{n}} \end{equation*} for $n\in\mathbb{N}$, $\alpha\geq0$, $A\in[0, 1)$, $x_0>0$, and $x_1>0$ has a unique positive equilibrium which is a saddle point. Applying the change of variables, $y_n=x_{n-1}$, we write this equation as the planar system, \begin{equation*} x_{n+1} =\frac{\alpha+y_n}{A+x_{n}}, \quad y_{n+1}=x_n. \end{equation*} We show that there exists a nontrivial positive solution which decreases monotonically to the equilibrium, proving Conjecture 5.4.6 from M. Kulenovic and G. Ladas. By using the augmented phase plane with nullclines and their associated root-curves, we can show the general behaviour of solutions in the plane. Using the tangent vector to the stable manifold at the equilibrium, we can show that solutions in a particular region defined by the nullclines and their associated root-curves, will decreases monotonically to the equilibrium along the tangent vector to the stable manifold. While Conjecture 5.4.6 has been previously proven, our proof provides a more elementary solution.
The second application of next-iterate operators regards the time delay equation, \begin{equation*} x_{n+1}=\frac{\alpha+x_n+x_{n-1}}{A+x_n+x_{n-1}} \end{equation*} for $n\in\mathbb{N}$, $A>\alpha>0$, $x_0>0$, and $x_1>0$. This equation has a unique positive equilibrium which is locally stable. Using the same change of variables as before, $y_{n}=x_{n-1}$, we write this equation as the planar system, \begin{align*} x_{n+1}=\frac{\alpha+x_n+y_{n}}{A+x_n+y_{n}},\quad y_{n+1}=x_n. \end{align*} By applying the augmented phase portrait, in addition to two new next-iterate operators defined in this work, we can expand this result to prove global stability.
DAYANNA SANCHEZ, Lewis University
Analyzing the Impact of Alternative Assessments and Growth Mindset [PDF]
Alternate assessment techniques such as mastery grading, specifications grading, and standards-based grading are assessment techniques professors are implementing in order to support a growth mindset of learning. This proposal will support a multi-institutional collaboration that studies the impact of mastery grading assessment techniques on the growth mindset of students in a variety of mathematics classes. By analyzing pre- and post-surveys with questions adapted from Dweck's Mindset survey, we will explore whether there is a difference in the growth mindset between various cross-sections of student populations between classes (mastery and non-mastery, specific courses, universities, etc.) and whether the growth mindset of students changed by the end of the semester. This research will explore whether there is a difference in students' mindset of learning mathematics between various cross-sections of student populations between classes (mastery and non-mastery, specific courses, universities, etc.) and whether the growth mindset of students changed by the end of the semester.
GUSTAVO CICCHINI SANTOS, Toronto Metropolitan University
UNDERSTANDING NON-EQUILIBRIUM STEADY STATES [PDF]
Physical systems are characterized by their response to perturbations. The Fluctuation Dissipation Theorem predicts the behavior of systems in equilibrium. Can an expression be derived using methods from quantum field theory to describe the vertex response to a perturbation, and is the Fluctuation Dissipation Theorem modified as a result of these perturbations. Using Berezin integration and properties of determinants we derive said expression. The derivation yields the same result as the less rigorous methods. We learn the Fluctuation Dissipation Theorem has an equilibrium-like response to a vertex perturbation making the Fluctuation Dissipation theorem a bad indicator of whether a system is in equilibrium or out of equilibrium. We then apply our result to a biochemical problem.
MELISSA MARIA STADT, University of Waterloo
Impact of feedforward and feedback controls on potassium homeostasis: Mathematical modelling and analysis [PDF]
Dysregulation of potassium is a common and dangerous side effect of many pathologies and medications. Potassium homeostasis is primarily mediated by (i) uptake of potassium into the cells via the sodium-potassium pump and (ii) renal regulation of urinary potassium excretion. Due to the importance of potassium in cellular function and the daily challenge of undergoing variations in potassium intake, mammals have evolved several regulatory mechanisms to ensure proper potassium balance between the extra- and intracellular fluids. The multitude of physiological processes involved in potassium regulation makes its study well suited for investigation with mathematical modelling. In this project, we developed a compartmental model of extra- and intracellular potassium regulation. We included a detailed kidney compartment with the effects of aldosterone and potassium intake on renal potassium handling as well as intracellular potassium uptake stimulation by both insulin and aldosterone. Model simulations were conducted and analyzed to quantify the impact of individual regulatory mechanisms on whole-body potassium regulation. Additionally, we used this model to simulate and give evidence for a newly hypothesized signal, muscle-kidney cross talk, on potassium loading and depletion.
YUN-CHI TANG, University of Toronto
On Knots That Divide Ribbon Knotted Surfaces [PDF]
We define a knot to be half ribbon if it is the cross-section of a ribbon $2$-knot, and observe that ribbon implies half ribbon implies slice. We introduce the half ribbon genus of a knot $K$, the minimum genus of a ribbon knotted surface of which $K$ is a cross-section. We compute this genus for all prime knots up to $12$ crossings, and many $13$-crossing knots. The same approach yields new computations of the doubly slice genus. We also introduce the half fusion number of a knot $K$, that measures the complexity of ribbon $2$-knots of which $K$ is a cross-section. We show that it is bounded from below by the Levine-Tristram signatures, and differs from the standard fusion number by an arbitrarily large amount.
WILLIAM VERREAULT, Université Laval
Series expansion via unwinding [PDF]
We present a general unwinding scheme for analytic functions as well as convergence theorems for the unwinding series expansion, extending results on the Blaschke unwinding series, a nonlinear analogue of Fourier series with a wide range of practical applications.
YUMING ZHAO, University of Waterloo
There is no sum-of-squares certificate for positivity in tensor product of free algebras [PDF]
In quantum information, the algebra $\mathbb{C}\mathbb{Z}_m^{*n}\otimes \mathbb{C}\mathbb{Z}_m^{*n}$ models a physical system with two spatially separated subsystems, where in each subsystem we can make $n$ different measurements, each with $m$ outcomes. The recent $\text{MIP}^*=\text{RE}$ result shows that it is undecidable to determine whether an element of $\mathbb{C}\mathbb{Z}_m^{*n}\otimes \mathbb{C}\mathbb{Z}_m^{*n}$ (for varying $n$ and $m$) is positive in all finite-dimensional representations. In this poster, I will present joint work with Arthur Mehta and William Slofstra, in which we show that it is undecidable to determine whether an element of $\mathbb{C}\mathbb{Z}_2^{*n}\otimes \mathbb{C}\mathbb{Z}_2^{*n}$ (for some sufficiently large $n$) is positive in all representations. As a consequence, there is no sum-of-squares certificate for positivity in tensor product of free algebras.
EUGENE ZIVKOV, Toronto Metropolitan University
Thin liquid film stability in the presence of bottom topography and surfactant [PDF]
We consider the stability of gravity-driven fluid flow down a wavy inclined surface in the presence of surfactant. The periodicity of the bottom topography allows us to leverage Floquet theory to determine the correct form for the solution to the linearized governing partial differential equations. The result is that perturbations from steady state are wavelike, and a dispersion relation is identified which relates the wavenumber of an initial perturbation, $\kappa$, to its complex frequency, $\omega$. The real part of $\omega$ ultimately determines the stability of the flow. We observe that the addition of surfactant generally has a stabilizing effect on the flow, but has a destabilizing effect for small wavenumbers. These results are compared and validated against nonlinear results, which are obtained by numerically solving the governing equations directly. The linear and nonlinear analyses show good agreement, except at small wavenumbers, where the linear results could not be replicated.
© Canadian Mathematical Society
© Canadian Mathematical Society : http://www.cms.math.ca/
|
CommonCrawl
|
Recent questions tagged class11
Questions from:
Speed of a planet in an elliptical orbit with semi major axis 'a' about sum of mass M at a distance r from the sun is
jeemain
asked Aug 29, 2013 by meena.p
A smooth tunnel is dug along the radius of earth that ends at center. A ball is released from the surface of earth along tunnel. What is the velocity when it strike the center?
The density of case of a planet is $s_1$ and that of the outer shell is $s_2$.The radius of case and that of the planet are R and 2R respectively. Gravitational acceleration at the surface of the planet is same as at a depth R. Find $\large\frac{s_1}{s_2}$
A uniform rod of length $'l'$ whose mass per unit length is $\lambda$. The gravitational force on a particle of mass m located at a distance d from one end of the rod as shown is
Two lines intersect at O. Points $A_1,A_2,........A_n$ are taken on one of them and $B_1,B_2,........B_n$ on other. The no. of triangles that can be drawn from with the help of these 2n points and point 'O' is ?
permutations-and-combinations
asked Aug 27, 2013 by rvidyagovindarajan_1
The no. of rectangles in a chess board is ?
In a tournament, each participant should play one game with other .Two participants fell ill after playing 3 games.each. If total games played is 84, the no. of participants in the tournament is ?
In an election no. of candidates exceeds the number to be elected by 2. If a man can vote in 56 ways, then the no. of candidates is
The number of ways in which we can select a committee from 4 men and 6 women so that the committee includes at least two men and at least twice as many women as men is?
Ten persons are to speak in a meeting. The no. of ways in which this can be arranged if A wants to speak before B and B wants to speak before C, is ?
Two satellites are moving in a common plane along circular paths. with one having angular velocity $w_1=1.09 \times 10^{-3} rad/s$ and another $w_2=1.08 \times 10^{-3}rad /s$. Find time interval which seperates the periodic approaches of the satellites to each other over minimum distance, if they are revolving in opposite sense.
If a planet orbiting the sun in a circular orbit suddenly stops, it will fall onto the sun in a time n(T) where T is the period of planets revolution , then n is
An artificial satellite of mass m moves in an orbit whose radius is n times the radius of earth. Assuming the resistance to the motion is proportional to square of velocity ie $F= av^2$ where a is a constant. How long will the satellite take to fall to earth. M-mass of earth R- radius of earth
A body of mass m ascends from the earth's surface with zero initial velocity due to action of two forces as shown. The force $F$ varying with $h$ as $\hat F=-2m \vec {g} (1-ah)$ where a is constant. Find the maximum height atained by the body t
If a body is to be projected vertically upwards from the surface of the earth to reach a height nR then the velocity with which it is to be projected is
Mass density of a solid sphere is $\rho$. Radius of sphere is R. The gravitational field at a distance $r$ from the center of the sphere inside it is
If the period of revolution of an artificial satellite just above the earth's surface is T and density of earth is $\rho$ then $\rho T ^2$
The work done in slowly lifting a body from earth's surface to a height R (radius of earth) is equal to two times the work done in lifing the same body from earth's surface to height h there h is equal to
A uniform ring of mass $M$ and radius $R$ is placed directly above a uniform sphere of mass $8M$ and of radius $R$. The center of the ring is at a distance of $d= \sqrt 3 R$ from the center of the sphere. The gravitational attraction between the sphere and ring is
A particle of mass M is placed at the center of a uniform sperical shell of mass 2M and radius R. The gravitational potential on the surface of the shell is
Three particles each having a mass 100 gm are placed on the verticles of an equilateral triangle of side 20 cm. The work done in increasing the side of their triangle to 40 cm is $(G= 6.67 \times 10^{-11}\;Nm^2/kg^2)$
Two bodies of masses m and 4m are placed at a distance r . The gravitational potential at a point on the line joining them where the gravitational field is zero, is
Suppose the gravitational force varies inversely as the n-th power of distance. Then the time period of a planet in circular orbit of radius 'r' around the sun will be proportional to
Mean distance of mars from sun is 1.524 times the distance of earth from sun. The period of revolution of mars around the sun is
If a graph is plotted between $T^2$ and $r^3$ for a planet the slope will be
A particle is suspended from a spring and it stretches the spring by 1 cm on the surface of the earth. The same particle will stretch the same spring at a place 800 Km above the earth by
The radius of the earth $R_e$ and the acceleration due to gravity at its surface is g. The work required to rising a body of mass m to height h from the surface of the earth will be
Three particles of equal mass m are situated at verticles of a equilateral triangle of side l. What should be velocity of each particle so that they move on a circular path without changing l
What would be the angular speed of earth, so that bodies lying on equator may appear weightless? $(g=10 m/s^2\; R=6400km)$
Two bodies of mass $10^2 kg$ and $10^3kg$ are lying 1m apart. The gravitational potential at mid point of line joining them is
A mass M is split into two parts m and (M-m) which are then separated by a certain distance. What ratio of m/M maximise the gravitational force between the parts?
A geostationary satellite is orbiting the earth at a height of 6R where R is radius of earth. The time period of another satellite at a height of $2.5 R$ from the surface of earth is
The ratio of radius of two planets is K and ratio of acceleration due to gravity of both planet is G. What will be the ratio of their escape velocity ?
The mass of earth is $6.00 \times 10^{24} \;kg $ and that of the moon is $7.40 \times 10 ^{22} \;kg \;$ and $G= 6.67 \times 10^{-11} Nm^2 /kg^2$. potential energy of the system is $-7.79 \times 10^{28} J$. The mean distance of between earth and moon is
The gravitational field due to mass distribution is $ E= k/x^3$ in the x direction where k is a constant. Taking gravitational potential to be zero at infinity, its value at a distance x is
What is the acceleration due to gravity at surface of mass if its diameter is $6760\;km$ and mass is one tenth that of earth. The diameter of earth is $12742\;km$ acceleration due to gravity on earth $=9.8 m/s^2$
A stone is dropped freely into a tunnel along the diameter of the earth. When it reaches the center of earth then it has only
At what height from the ground will the value of 'g' be the same as that in 10 km deep mine below the surface of earth
A planet moves in an elliptical orbit with sun at one of its focii <br> 1) In path QRP work done by gravity is positive . <br> 2) In path PSQ work done by gravity is negative . <br> 3) Linear velocity of rotation from Q to P via R decreases. <br> 4) Gravitational potential energy increases in going from P to Q via R
The Planet mercury is revolving in an elliptical orbit around the sun as shown. Kinetic energy will be greatest at
As measured by an observer on earth, what would be the difference, if any, in the period of two satellite each in a circular orbit near the earth's equatorial plane, but one moving eastward and other moving westward.
A satellite of mass m is orbiting the earth at a height 'h' from its surface. If M is Mass of the earth and R its radius then how much energy must be spent to pull the satellite out of the earth's gravitational field?
If the distance between Earth and the sun were half its present value, the number of days in a year would be
The mean radius of earth is R, its angular velocity on its own axis is 'w' and acceleration due to gravity is g. What will be the radius of orbit of a geostationary satellite ?
A satellite revolving in a circular equatorial orbit of radius $r= 2 \times 10 ^4 km$ from west to East appears over a certain point at the equator every $11.6$ hours. Calculate the actual angular velocity of the satellite
A geo stationary satellite orbits around the earth in a circular orbit of radius $36000km$. Then the time period of a spy satellite orbiting a few Kilometers above the earth's surface $(R_{earth}=6400 Km)$ will be approximately
A satellite of mass $m_s$ revolving in a circular orbit of radius $r_s$ round the earth of mass M has a total energy E. Then its angular momentum will be
A satellite is seen after each 8 hours over equator at a place on earth when its sence of rotation is opposite to the earth. The time interval after which it can be seen at the same place when sense of rotation of earth and satellite is same , will be
A satellite can be in a geostationary orbit around the earth at a distance r from the center. If the angular velocity of earth about its axis doubles, a satellite can now be in a geostationary orbit around earth if its distance from the center is
The height to which the acceleration due to gravity becomes $\large\frac{g}{9}$ (where g= acceleration due to gravity at surface of earth ) in terms of $R$, is
|
CommonCrawl
|
Enumerating the economic cost of antimicrobial resistance per antibiotic consumed to inform the evaluation of interventions affecting their use
Poojan Shrestha1,2,
Ben S. Cooper2,3,
Joanna Coast4,
Raymond Oppong5,
Nga Do Thi Thuy6,7,
Tuangrat Phodha8,
Olivier Celhay3,
Philippe J. Guerin1,2,
Heiman Wertheim6,9 &
Yoel Lubell2,3
Antimicrobial Resistance & Infection Control volume 7, Article number: 98 (2018) Cite this article
136 Altmetric
Antimicrobial resistance (AMR) poses a colossal threat to global health and incurs high economic costs to society. Economic evaluations of antimicrobials and interventions such as diagnostics and vaccines that affect their consumption rarely include the costs of AMR, resulting in sub-optimal policy recommendations. We estimate the economic cost of AMR per antibiotic consumed, stratified by drug class and national income level.
The model is comprised of three components: correlation coefficients between human antibiotic consumption and subsequent resistance; the economic costs of AMR for five key pathogens; and consumption data for antibiotic classes driving resistance in these organisms. These were used to calculate the economic cost of AMR per antibiotic consumed for different drug classes, using data from Thailand and the United States (US) to represent low/middle and high-income countries.
The correlation coefficients between consumption of antibiotics that drive resistance in S. aureus, E. coli, K. pneumoniae, A. baumanii, and P. aeruginosa and resistance rates were 0.37, 0.27, 0.35, 0.45, and 0.52, respectively. The total economic cost of AMR due to resistance in these five pathogens was $0.5 billion and $2.9 billion in Thailand and the US, respectively. The cost of AMR associated with the consumption of one standard unit (SU) of antibiotics ranged from $0.1 for macrolides to $0.7 for quinolones, cephalosporins and broad-spectrum penicillins in the Thai context. In the US context, the cost of AMR per SU of antibiotic consumed ranged from $0.1 for carbapenems to $0.6 for quinolones, cephalosporins and broad spectrum penicillins.
The economic costs of AMR per antibiotic consumed were considerable, often exceeding their purchase cost. Differences between Thailand and the US were apparent, corresponding with variation in the overall burden of AMR and relative prevalence of different pathogens. Notwithstanding their limitations, use of these estimates in economic evaluations can make better-informed policy recommendations regarding interventions that affect antimicrobial consumption and those aimed specifically at reducing the burden of AMR.
Human antimicrobial consumption, whether or not clinically warranted, is associated with propagation of antimicrobial resistance (AMR) [1, 2]. This and other key drivers of AMR are listed in Fig. 1, notably widespread antibiotic use prophylactically and as growth promoters in agriculture [3].
Drivers and costs associated with antimicrobial resistance. Adapted: Holmes et al. [2] and McGowan [10]
Treatment of resistant infections is associated with higher costs for second line drugs, additional investigations, and longer hospitalisation [4]. Other indirect costs associated with AMR include productivity losses due to excess morbidity and premature mortality. These costs can be conceptualised as a negative externality to antimicrobial consumption accrued by all members of society, which are not reflected in the market price of antimicrobials [5, 6].
In addition to curative use in infectious diseases, antimicrobials are widely used presumptively, in mass treatment programmes (anti-helminths, antimalarials), and as prophylactics in surgical procedures and alongside immunocompromising treatments [2, 7]. Many other healthcare interventions such as vaccinations, diagnostics, and treatments for infectious diseases affect antimicrobial consumption, and consequently increase or decrease the risks of AMR. Economic evaluations of such interventions, however, have failed to internalise the potential costs of AMR into the analyses, leaving policymakers to intuitively consider these alongside more tangible costs and benefits in the evaluation [4, 8]. This can result in uninformed decision making, as the cost of AMR is likely to be under- or over-estimated by policymakers, if it is considered at all [4, 8, 9].
In 1996 Coast et al. argued that the omission of the cost of AMR in economic evaluation is partly explained by the challenges to its quantification [4], with extensive uncertainties surrounding resistance mechanisms, paucity and poor quality of relevant data, and other methodological challenges [5, 10]. The (mis)perception that the impact of AMR will only be felt in future years might also deter analysts from including them in the evaluation, assuming policymakers operate with a myopic view of health gains and costs. As confirmed in a recent review, very few attempts have since been made to quantify the externality of AMR [11].
Policymakers and key stakeholders, however, appear increasingly concerned with AMR, with unprecedented funding being allocated to interventions to mitigate its impact. In late 2016 the UN General Assembly held a special meeting on the topic, passing a unanimous resolution from Member States committing to adopt such measures [12]. Without enumerating the cost of AMR per antimicrobial consumed, it will be difficult to determine the allocative efficiency of these investments, and particularly so in low/middle income countries (LMICs) with more tangible causes of ill-health to invest in.
Therefore, despite the challenges, there is a clear need for costing the negative externality of AMR that can be affixed to the consumption of antimicrobials. The rare occasions where this has been done indicate the importance of such efforts. In a German hospital setting, for example, the use of a single defined daily dose of a second or third generation cephalosporin was associated with €5 and €15 respectively in costs of AMR [6]. The current analysis produced a menu of economic costs of AMR per antibiotic consumed for a variety drug classes, stratified into LMICs and high-income country settings. The output can be applied in future economic evaluations of interventions that involve or affect antibiotic consumption.
Economic costs of resistance
The economic cost of AMR is narrowly defined as the incremental cost of treating patients with resistant infections as compared with sensitive ones, and the indirect productivity losses due to excess mortality attributable to resistant infections. We therefore make a fundamental conservative assumption that resistant infections replace, rather than add to the burden of sensitive infections, even though there are strong indications that for Methicillin resistant Staphylococcus aureus (MRSA), for instance, the burden is additive to that of Methicillin sensitive Staphylococcus aureus (MSSA) [13]. We estimate these direct and indirect costs for the following key pathogens:
Staphylococcus aureus (S. aureus) resistant to Oxacillin
Escherichia coli (E. coli) resistant to 3rd generation cephalosporin
Klebsiella pneumoniae (K. pneumonia) resistant to 3rd generation cephalosporin
Acinetobacter baumanii (A. baumanii) resistant to carbapenems
Pseudomonas aeruginosa (P. aeruginosa) resistant to carbapenems
We focus our analysis on Thailand and the United States as representatives of low/middle and high-income country settings, respectively.
Total economic loss
This is captured through the addition of the direct and indirect economic effects of AMR. The direct economic cost refers to the direct medical cost attributable to the treatment of a resistant infection as compared with the costs of treating a susceptible strain of the pathogen, and the indirect cost refers to the cost to society due to productivity losses attributable to premature excess deaths due to resistance.
Direct cost to the provider
We use the product of the number of resistant infections due to each of the above organisms, and the direct incremental medical cost attributable to resistance in the respective infections (Table 1). The number of infections and deaths per infection for the US was obtained from the Centers for Disease Control and Prevention (CDC) [14]. The unit cost per infection was obtained from a study reporting the incremental cost of resistant bacterial infections based on the Medical Expenditure Panel Survey, with data available for 14 million bacterial infections of which 1.2 million were estimated to be antibiotic resistant [15]. These costs were inflation adjusted to 2016 US$ using the US consumer price index [16].
Table 1 Incidence and mortality of resistant infections per 100,000, and the excess direct cost per resistant infection
Estimates for the number of resistant infections and deaths in Thailand were available from two studies deriving their estimates from hospital records. The first report, published in 2012, estimated the number of AMR deaths at 38,000 [17], but we opted for the more conservative estimates in a 2016 study reporting approximately 19,000 AMR attributable deaths annually [18]. We obtained the unit cost per infection from the first of these studies, which included only the costs for antibiotics. We used an estimated excess length of stay (LoS) of 5 days for all gram negative bacteria based on the excess LoS for resistant E. coli infections [19] while for MRSA we assumed no excess LoS as compared with MSSA [20]. We then applied a cost of $38 per bed-day in a secondary hospital in Thailand to any excess LoS [21, 22]. Costs were adjusted to 2016 US$ by converting to US$ at the year they were reported, and inflation adjusted using the World Bank Gross Domestic Product (GDP) deflator for Thailand.
Mortality figures were converted into productivity losses taking the human capital approach, by multiplying them by an assumed ten productive life years lost per death, based on a study of survival post intensive care unit (ICU) admission in Thailand, which reported similar results for high income settings [22], with a sensitivity analysis of 5–20 productive years lost per death. The number of years lost was then multiplied by GDP per capita to generate the productivity losses per death. A 3% discount rate along with a 1% annual productive growth rate was applied to these values.
Resistance modulating factor (RMf)
As illustrated in Fig. 1, human antimicrobial consumption is one of a host of factors driving AMR, and different drug classes are implicated in propagating resistance in different pathogens. The Resistance Modulating factor (RMf) approximates the proportional contribution of human antimicrobial consumption towards the total cost of AMR. Correlation coefficients were calculated to study the strength of the relationship between consumption of antibiotic classes assumed to be implicated in driving resistance in each pathogen, and the rates of resistance observed to their first line treatments. It was assumed that drug classes that were implicated in driving resistance in each pathogen (Table 2) did so equally [23, 24]. Data points for consumption (from 2008 to 2014) and resistance (from 2008 to 2015) were obtained from 44 countries and included total consumption in both hospital and community settings [25].
Table 2 Drug classes implicated in increasing the risk of resistance in each organism
The ecological association between the consumption of antibiotics implicated in driving resistance and the level of resistance was measured using Pearson's correlation coefficient, ρp for each pathogen p, considering the correlation between average resistance rates from 2008 to 2015 and the average of antibiotic consumption between 2008 and 2014. This is given by
$$ \frac{\operatorname{cov}\left({\mathrm{R}}_{\mathrm{p}},{\mathrm{Q}}_{\mathrm{p}}\right)}{\sigma_{R_p}{\sigma}_{Q_p}} $$
where Rp is the log transformed average annual measure of resistance for pathogen p (defined as the proportion of non-susceptible isolates), and Qp is the log-transformed mean consumption of implicated antibiotics. The denominators represent corresponding standard deviations. The lower and upper bounds of the 95% coefficient confidence intervals (CI) were used in the sensitivity analysis.
Model for the economic cost of AMR per antibiotic consumed
Putting together the costs of AMR, the RMf, and the consumption of antibiotics that drive resistance in each pathogen, we established the cost of AMR attributable to the use of a Standard Unit (SU) and a full course of eight antibiotic drug classes. One SU is a measure of volume based on the smallest identifiable dose given to a patient, dependent on the pharmaceutical form (a pill, capsule, tablet or ampoule) [26]. The cost of AMR per SU is thus calculated as.
$$ {cAMR}_d=\sum \frac{\rho_p\ast \left({DC}_p+{IC}_p\right)}{Q} $$
where cAMR is the cost of AMR per standard unit of antibiotic d consumed, DC the direct cost of treatment and IC the indirect costs for each pathogen p, and Q is the annual consumption of antibiotics assumed to be implicated in driving resistance in the pathogen p. For each drug d the costs on the right of the equation are summed up for all pathogens in which it is implicated in driving resistance, as shown in Eq. 2.
The resulting economic costs per SU of antibiotic consumed in each pathogen were then aggregated to calculate the cumulative economic cost per antibiotic consumed for each drug class in each country, including only the infections in which the particular drug class was assumed to propagate resistance. For example, as quinolones are assumed to drive resistance in all 5 pathogens the cost of resistance per SU of quinolones would be the sum of the cost of resistance shown in Eq. 2 for all 5 pathogens.
Model outputs are also presented in terms of the cost of AMR per full course of treatment. While in reality there will be much variation in the number of SUs per course depending on the indication, patient age and other factors, we use a pre-specified number of SU per adult full course of antibiotics according to the British National formulary (BNF) [27]. The number of SU per full course ranged from 3 SU for a full course of macrolide antibiotics to 28 SU per full course of quinolones. The number of SUs per course for all classes is presented in Additional file 1: Table S1.
The lower and the upper bound costs of AMR are calculated using the confidence intervals of the RMf and a range of 5–20 productive life years assigned to each excess death to calculate the indirect cost.
Data entry, verification, and analysis were done in Microsoft Excel 2016. Calculation of the correlation coefficients was done in R version 3.2.2 (R Foundation for Statistical Computing, Vienna, Austria). A web interface for the model where readers can vary parameter estimates and test model assumptions was developed using R-Shiny (RStudio, Boston, US) [28].
The resistance modulating factor
As shown in Table 3, a positive relationship was confirmed between consumption of antibiotics assumed to be implicated in resistance, and the average resistance rates in all pathogens, with correlation coefficients ranging from 0.27 in E. coli (p = 0.07) to 0.52 in P. aerginosa (p = 0.0006).
Table 3 Pearson's correlation coefficient showing ecological associations between average consumption (2008–14) and corresponding resistance (2008–15)
Direct and indirect costs of AMR
The total economic cost of AMR due to drug resistance in the five pathogens was $0.5 billion and $2.8 billion in Thailand and the United States, respectively. This is disaggregated into direct and the indirect costs for each of the organisms in the two countries in Tables 4 and 5, respectively. As an illustration, the direct and indirect annual cost of AMR in Thailand due to MRSA was estimated at $29 million and $151 million, respectively. After adjusting for the relative contribution of human consumption using the RMf, the direct and indirect economic loss was estimated to be $11 million and $56 million.
Table 4 Direct cost to the providers due to human antibiotic consumption in each resistant infection
Table 5 Productivity losses due to excess deaths attributable to resistant infection (Indirect Cost)
Economic cost of AMR per antibiotic consumed
With the total economic cost of AMR for each pathogen multiplied by its RMf in the numerator, and the consumption data for the relevant drug classes in the denominator, the economic cost of AMR of one SU of antibiotic for each pathogen was calculated (Table 6). Thus any antibiotic implicated in driving resistance in S. aureus (Table 2) would have an economic cost of AMR of $0.07 per SU in the Thai setting, and if a full course of the same drug consisted of 10 units this would imply a cost of $0.69 per full course.
Table 6 Cost per Standard Unit (SU) and full course antibiotic consumed per resistant organism
As most antibiotics are assumed to drive resistance in more than one infection, the costs need to be aggregated for all relevant pathogens to obtain the cumulative cost of AMR attributable to the consumption of one SU of that antibiotic. For a broad spectrum penicillin that is assumed to drive resistance in all pathogens, the estimated cost of AMR would be $6.95 per course of 10 SU in Thailand. The costs in Table 6 were therefore aggregated for each drug class where it was assumed to drive resistance in each of the organisms. Table 7 presents the cumulative economic cost per SU and per full course by drug class.
Table 7 Cumulative cost per SU and per antibiotic course by drug class (US$)
The lower and the upper bound costs of AMR were calculated using the confidence intervals of the RMf (Table 3) and a range of 5–20 productive life years assigned to each excess death for the indirect cost of AMR. Table 8 shows the resulting range of economic costs for a SU and a full course of antibiotic consumed in Thailand and US. Hence, in Thailand, the best case scenario would see a cost of AMR of $2.93 per course of co-amoxiclav and the worst would be $32.16.
Table 8 Range of economic costs per full course of antibiotics using outputs from the sensitivity analysis (US$)
Evidence-based policy draws on economic evaluation to allocate resources most efficiently [29], but this is entirely dependent on the inclusion of all pertinent costs and benefits associated with interventions under consideration. This is, to our knowledge, a first attempt at estimating the costs of AMR per antibiotic consumed by drug class and across national income brackets. We chose simple and transparent methods and restricted our assessment to the current burden of AMR, rather than more uncertain future projections, and to tangible factors including only direct medical costs and productivity losses due to AMR attributable deaths. Even within this restrictive framework there is much uncertainty surrounding interactions between antibiotic consumption, development of resistance, and its economic implications, but our underlying assumptions and parameter estimates were conservative.
The cost per SU of antibiotic differed between the US and Thailand for several reasons. First, the burden of AMR is considerably higher in Thailand, with a total of 28 AMR associated deaths per 100,000 as compared with 4.6 per 100,000 in the US (Table 1). Furthermore, the two countries had different epidemiological profiles, such as a higher burden of Acinetobacter associated mortality in Thailand as compared with the dominance of MRSA in the US. There were also notable differences in the cost data between the two countries; as the unit costs per infection for Thailand were only available from hospital settings, they tended to be higher than those in the US, which included both hospital and community settings. Other factors contributing to this difference are the higher GDP per capita and lower per capita consumption of antibiotics in the US.
The costs of AMR for drug classes also varied widely, driven primarily by the degree to which they were assumed to propagate resistance in the selected infections; NSPs were assumed to drive resistance only in S. aureus, while cephalosporins were implicated in resistance in all pathogens. The costs per full course of antibiotics were mostly determined by the number of SU per course, which for glycopeptides is high - a full course of vancomycin being 56 SU (four daily over 14 days) as compared with three daily units for a course of azithromycin (Additional file 1: Table S1).
Very few attempts have been made to quantify the cost of AMR per antibiotic consumed and internalise them in evaluations of interventions that involve or affect the use of antimicrobials. A recent study by Oppong et al. was one of the first attempts to do so in an evaluation focusing on antibiotic treatment of respiratory infections, demonstrating the decisive impact this had on outcomes [30]. Their estimate for the cost of AMR, however, assumed that resistance is driven exclusively by human antimicrobial consumption and that consumption of all drug classes contribute to resistance in all pathogens equally. It also ignored the considerable differences in the burden of resistance across countries, as apparent in the much higher burden of AMR in Thailand compared with that in the US. An earlier study evaluating the cost-effectiveness of malaria rapid tests used a similarly crude estimate for the cost of antimalarial resistance, also showing the large impact this had in swaying results and conclusions [31]. Elbasha, building on previous work by Phelps [32] estimated the deadweight loss of resistance due to overtreatment and found a higher cost of AMR of $35 (2003) per course of amoxicillin in the US context [33].
Several studies have explored the correlation between antimicrobial consumption and resistance [34,35,36]. The correlation coefficients in the current study are smaller than prior estimates. For example, the coefficient for resistance in E. coli in this analysis was 0.27 (Table 4) in comparison to 0.74 from Goossens et al. [34]. This could be explained by the latter using 14 European countries in contrast to 44 countries from different regions in our study, and more abundant data for European countries that enabled correlating between the consumption and resistance of specific drugs, rather than drug classes as done here. The smaller coefficients imply a conservative assessment of the cost of AMR attributable to human antibiotic consumption.
Kaier et al. derived measures of association between antibiotic consumption and resistance from a time-series analysis using a multivariate regression model with different drug classes [37]. This would be a better approach for calculating the RMf, rather than the ecological associations used here. We were restricted, however, by having only 10 years of consumption data and even sparser and more heterogeneous resistance data.
There were many assumptions and limitations in the analysis (see Additional file 1: Table S2). One key limitation was the inclusion of a limited number of organisms, while consumption of the same antibiotics could drive resistance in other organisms with additional costs. The Thai estimates also focused only on the burden of AMR within hospital settings, excluding the possible excess burden in primary care and the community. These and other listed limitations result in a conservative estimate of the economic costs of AMR in our model.
Taking the human capital approach to productivity losses implies much higher estimates than would have been derived using friction costs; given the context of this analysis, trying to capture the full societal costs of AMR, this was deemed appropriate. This is essentially equivalent to the widespread use of GDP/capita as a proxy for the ceiling ratio in cost-effectiveness analyses to classify interventions as cost-effective.
The direct medical costs assigned to resistant infections were derived very differently in each country; the US estimates were taken from a recent study providing a national estimate of the incremental healthcare cost of treating millions of patients with antibiotic sensitive and resistant infections [15]. The Thai estimates used rudimentary costing methods, largely relying on expert opinion to estimate the cost of antibiotics required to treat resistant infections.
The selection of drug classes implicated in propagation of resistance in the respective organisms were based on limited available evidence [24]. This might explain some apparent anomalies, like the relatively low costs for NSPs, which were assumed to drive resistance only in S. aureus. Another reason for this anomaly relates to the entire framework of the analysis, whereby the cost of AMR is approximated from its current (or recent) estimated burden, rather than projections of what will happen if resistance to last line drugs, such as carbapenem, were to spread, for which there are alarming early indications. Such an approach is arguably more relevant than focusing on the present burden of AMR, but it requires many more strong and contestable assumptions.
The data on consumption and resistance levels used to derive the RMf were limited to 10 years and a causal relationship was assumed. For many pathogens and types of infections, however, this is not realistic as increasing resistance could alter consumption patterns as patients and physicians adapt their behaviour in order to provide the best possible treatment in a changing environment of resistance and therefore counteract the assumed dose-response relationship.
These rudimentary estimates for the economic cost of AMR per antibiotic consumed could be improved upon in several ways in future work as better data become available. In addition to addressing the above limitations, the link between human antibiotic consumption and resistance can be disaggregated into hospital vs. community use. The model can be further extended to other organisms including parasites and viruses and their varying distribution in different health sectors and geographical locations (global/regional/country/hospital/community).
The estimates of the economic costs of AMR per antibiotic consumed in this analysis were high. Incorporation of such estimates in economic evaluation of interventions that affect the use of antibiotics will better portray their true costs and benefits, and could act as a catalyst for more efficient deployment of interventions to mitigate the burden of AMR. We highlight the limitations of the analysis to emphasise the need for further development of the methods, and point to the notable differences in the costs of AMR per antibiotic consumed between the two countries and within the different drug classes to encourage their adaptation to other settings as relevant data become available.
AMR:
CDC:
GDP:
LMICs:
Low/middle income countries
MRSA:
Methicillin resistant Staphylococcus aureus
MSSA:
Methicillin sensitive Staphylococcus aureus
RMf:
Resistance modulating factor
Davies J, Davies D. Origins and evolution of antibiotic resistance. Microbiol Mol Biol Rev. 2010;74:417–33.
Holmes AH, Moore LSP, Sundsfjord A, Steinbakk M, Regmi S, Karkey A, et al. Understanding the mechanisms and drivers of antimicrobial resistance. Lancet. 2015;387:176–87.
Landers TF, Cohen B, Wittum TE, Larson EL. A review of antibiotic use in food animals: perspective, policy, and potential. Public Health Rep. 2012;127:4–22.
Coast J, Smith RD, Millar MR. Superbugs: should antimicrobial resistance be included as a cost in economic evaluation? Health Econ. 1996;5:217–26.
Coast J, Smith RD, Millar MR. An economic perspective on policy to reduce antimicrobial resistance. Soc Sci Med. 1998;46:29–38.
Kaier K, Frank U. Measuring the externality of antibacterial use from promoting antimicrobial resistance. PharmacoEconomics. 2010;28:1123–8.
Do NTT, Ta NTD, Tran NTH, Than HM, Vu BTN, Hoang LB, et al. Point-of-care C-reactive protein testing to reduce inappropriate use of antibiotics for non-severe acute respiratory infections in Vietnamese primary health care: a randomised controlled trial. Lancet Glob Heal. 2016;4:e633–41.
Coast J, Smith R, Karcher AM, Wilton P, Millar M. Superbugs II: How should economic evaluation be conducted for interventions which aim to contain antimicrobial resistance? Health Econ. 2002;11:637–47.
Gandra S, Barter DM, Laxminarayan R. Economic burden of antibiotic resistance: how much do we really know? Clin Microbiol Infect. 2014;20:973–9.
McGowan JE. Economic impact of antimicrobial resistance. Emerg Infect Dis. 2001;7:286–92.
Leal JR, Conly J, Henderson EA, Manns BJ. How externalities impact an evaluation of strategies to prevent antimicrobial resistance in health care organizations. Antimicrob Resist Infect Control. 2017;6:53.
Friedrich MJ. UN leaders commit to fight antimicrobial resistance. JAMA. 2016;316:1956.
Mostofsky E, Lipsitch M, Regev-yochay G. Is methicillin-resistant Staphylococcus aureus replacing methicillin-susceptible S. Aureus? J Antimicrob Chemother. 2011;66:2199–214.
Centers for Disease Control and Prevention. Antibiotic resistance threats in the United States, 2013; 2013. p. 114.
Thorpe KE, Joski P, Johnston KJ. Antibiotic-Resistant Infection Treatment Costs Have Doubled Since 2002, Now Exceeding $2 Billion Annually. Health Aff. 2018;37:662–9.
US Department of Labor Bureau of Labor Statistics. Inflation calculator. CPI Inflation Calculator. https://data.bls.gov/cgi-bin/cpicalc.pl. Accessed 15 Aug 2017.
Pumart P, Phodha T, Thamlikitkul V, Riewpaiboon A, Prakongsai P, Limwattananon S. Health and economic impacts of antimicrobial resistance in Thailand. J Health Serv Res Policy. 2012;6:352–60.
Lim C, Takahashi E, Hongsuwan M, Wuthiekanun V, Thamlikitkul V, Hinjoy S, et al. Epidemiology and burden of multidrug-resistant bacterial infection in a developing country. eLife. 2016;5:e18082.
de Kraker MEA, Wolkewitz M, Davey PG, Koller W, Berger J, Nagler J, et al. Burden of antimicrobial resistance in European hospitals: excess mortality and length of hospital stay associated with bloodstream infections due to Escherichia coli resistant to third-generation cephalosporins. J Antimicrob Chemother. 2011;66:398–407.
de Kraker MEA, Wolkewitz M, Davey PG, Grundmann H. Clinical impact of antimicrobial resistance in European hospitals: excess mortality and length of hospital stay related to methicillin-resistant Staphylococcus aureus bloodstream infections. Antimicrob Agents Chemother. 2011;55:1598–605.
Riewpaiboon A. Standard cost lists for health economic evaluation in Thailand. J Med Assoc Thail. 2014;97(SUPPL. 5):S127–34.
Luangasanatip N, Hongsuwan M, Lubell Y, Limmathurotsakul D, Teparrukkul P, Chaowarat S, et al. Long-term survival after intensive care unit discharge in Thailand: a retrospective study. Crit Care. 2013;17:R219.
MacAdam H, Zaoutis TE, Gasink LB, Bilker WB, Lautenbach E. Investigating the association between antibiotic use and antibiotic resistance : impact of different methods of categorising prior antibiotic use. Int J Antimicrob Agents. 2006;28:325–32.
Tacconelli E. Antimicrobial use: risk driver of multidrug resistant microorganisms in healthcare settings. Curr Opin Infect Dis. 2009;22:352–8.
The Center for Disease Dynamics Economics and Policy. ResistanceMap beta. http://resistancemap.cddep.org. Accessed 22 Jun 2016.
Van Boeckel TP, Gandra S, Ashok A, Caudron Q, Grenfell BT, Levin SA, et al. Global antibiotic consumption 2000 to 2010 : an analysis of national pharmaceutical sales data. Lancet Infect Dis. 2014;14:742–50.
NICE National Institute for Health and Care Excellence. British National Formulary. https://bnf.nice.org.uk/drug/. Accessed 3 Aug 2016.
AMR Costing App. https://moru.shinyapps.io/amrcost/. Accessed 9 Feb 2018.
Chisholm D, Evans DB. Economic evaluation in health: saving money or improving care? J Med Econ. 2007;10:325–37.
Oppong R, Smith RD, Little P, Verheij T, Butler CC, Goossens H, et al. Cost effectiveness of amoxicillin for lower respiratory tract infections in primary care: an economic evaluation accounting for the cost of antimicrobial resistance. Br J Gen Pract. 2016;66:e633–9.
Lubell Y, Reyburn H, Mbakilwa H, Mwangi R, Chonya S, Whitty CJM, et al. The impact of response to the results of diagnostic tests for malaria: cost-benefit analysis. BMJ. 2008;336:202–5.
Phelps CE. Bug / drug resistance sometimes less is more. Med Care. 1989;27:194–203.
Elbasha EH. Deadweight loss of bacterial resistance due to overtreatment. Health Econ. 2003;12:125–38.
Goossens H, Ferech M, Vander Stichele R, Elseviers M. Outpatient antibiotic use in Europe and association with resistance: a cross-national database study. Lancet. 2005;365:579–87.
Albrich WC, Monnet DL, Harbarth S. Antibiotic selection pressure and resistance in Streptococcus pneumoniae and Streptococcus pyogenes. Emerg Infect Dis. 2004;10:514–7.
Van De Sande-Bruinsma N, Grundmann H, Verloo D, Tiemersma E, Monen J, Goossens H, et al. Antimicrobial drug use and resistance in Europe. Emerg Infect Dis. 2008;14:1722–30.
Kaier K, Hagist C, Frank U, Conrad A, Meyer E. Two time-series analyses of the impact of antibiotic consumption and alcohol-based hand disinfection on the incidences of nosocomial methicillin-resistant Staphylococcus aureus infection and Clostridium difficile infection. Infect Control Hosp Epidemiol. 2009;30:346–53.
We thank Ms. Nistha Shrestha for her contribution in the data compiling process. We also thank Professor Lisa White, Dr. Pan-Ngum Wirichada and other members of the Mathematical and Economic Modelling group at the Mahidol Oxford Tropical Medicine Research Unit for their helpful feedback for a presentation of this analysis.
YL and PS conceptualised and designed the study. PS, YL and BC analysed and interpreted the data. PS and YL drafted the manuscript. JC, RO, OC, NDTT, TP, PG and HW revised the manuscript for intellectual content. OC designed the web-interface. All authors read and approved the final manuscript.
This work was supported by the Wellcome Trust Major Overseas Programme in SE Asia [grant number 106698/Z/14/Z]. The initial analysis formed the basis of the dissertation project funded by the MSc in International Health and Tropical Medicine programme at the University of Oxford, undertaken by PS. PS was funded by the Weidenfeld – Hoffmann Trust for the MSc.
The antibiotic consumption (2008–2014) and pathogen resistance (2008–2015) data used in this study was obtained from https://resistancemap.cddep.org/ and is openly accessible [25].
Infectious Diseases Data Observatory, University of Oxford, Oxford, UK
Poojan Shrestha & Philippe J. Guerin
Centre for Tropical Medicine and Global Health, Nuffield Department of Medicine, University of Oxford, Oxford, UK
Poojan Shrestha, Ben S. Cooper, Philippe J. Guerin & Yoel Lubell
Mahidol Oxford Tropical Medicine Research Unit Faculty of Tropical Medicine, Mahidol University, 420/6 Rajvithi Road, Bangkok, 10400, Thailand
Ben S. Cooper, Olivier Celhay & Yoel Lubell
School of Social and Community Medicine, University of Bristol, Bristol, UK
Joanna Coast
Health Economics Unit, School of Health and Population Sciences, University of Birmingham, Birmingham, UK
Raymond Oppong
Oxford University Clinical Research Unit-Ha Noi, Ha Noi, Vietnam
Nga Do Thi Thuy & Heiman Wertheim
National Hospital for Tropical Diseases, Hanoi, Vietnam
Nga Do Thi Thuy
Faculty of Pharmacy, Mahidol University, Bangkok, Thailand
Tuangrat Phodha
Department of Medical Microbiology, Radboud Center of Infectious Diseases, Radboudumc, Nijmegen, Netherlands
Heiman Wertheim
Poojan Shrestha
Ben S. Cooper
Olivier Celhay
Philippe J. Guerin
Yoel Lubell
Correspondence to Yoel Lubell.
Table S1. Standard units per course by antibiotic drug class. Table S2. Summary of assumptions and limitation. (DOCX 22 kb)
Shrestha, P., Cooper, B.S., Coast, J. et al. Enumerating the economic cost of antimicrobial resistance per antibiotic consumed to inform the evaluation of interventions affecting their use. Antimicrob Resist Infect Control 7, 98 (2018). https://doi.org/10.1186/s13756-018-0384-3
Cost of resistance
Economic cost
|
CommonCrawl
|
About Subscribe
Unsupervised learning, one notion or many?
Sanjeev Arora and Andrej Risteski • Jun 26, 2017 • 16 minute read
Unsupervised learning, as the name suggests, is the science of learning from unlabeled data. A look at the wikipedia page shows that this term has many interpretations:
(Task A) Learning a distribution from samples. (Examples: gaussian mixtures, topic models, variational autoencoders,..)
(Task B) Understanding latent structure in the data. This is not the same as (a); for example principal component analysis, clustering, manifold learning etc. identify latent structure but don't learn a distribution per se.
(Task C) Feature Learning. Learn a mapping from datapoint $\rightarrow$ feature vector such that classification tasks are easier to carry out on feature vectors rather than datapoints. For example, unsupervised feature learning could help lower the amount of labeled samples needed for learning a classifier, or be useful for domain adaptation.
Task B is often a subcase of Task C, as the intended user of "structure found in data" are humans (scientists) who pour over the representation of data to gain some intuition about its properties, and these "properties" can be often phrased as a classification task.
This post explains the relationship between Tasks A and C, and why they get mixed up in students' mind. We hope there is also some food for thought here for experts, namely, our discussion about the fragility of the usual "perplexity" definition of unsupervised learning. It explains why Task A doesn't in practice lead to good enough solution for Task C. For example, it has been believed for many years that for deep learning, unsupervised pretraining should help supervised training, but this has been hard to show in practice.
The common theme: high level representations.
If $x$ is a datapoint, each of these methods seeks to map it to a new "high level" representation $h$ that captures its "essence." This is why it helps to have access to $h$ when performing machine learning tasks on $x$ (e.g. classification). The difficulty of course is that "high-level representation" is not uniquely defined. For example, $x$ may be an image, and $h$ may contain the information that it contains a person and a dog. But another $h$ may say that it shows a poodle and a person wearing pyjamas standing on the beach. This nonuniqueness seems inherent.
Unsupervised learning tries to learn high-level representation using unlabeled data. Each method make an implicit assumption about how the hidden $h$ relates to the visible $x$. For example, in k-means clustering the hidden $h$ consists of labeling the datapoint with the index of the cluster it belongs to. Clearly, such a simple clustering-based representation has rather limited expressive power since it groups datapoints into disjoint classes: this limits its application for complicated settings. For example, if one clusters images according to the labels "human", "animal" "plant" etc., then which cluster should contain an image showing a man and a dog standing in front of a tree?
The search for a descriptive language for talking about the possible relationships of representations and data leads us naturally to Bayesian models. (Note that these are viewed with some skepticism in machine learning theory – compared to assumptionless models like PAC learning, online learning, etc. – but we do not know of another suitable vocabulary in this setting.)
A Bayesian view
Bayesian approaches capture the relationship between the "high level" representation $h$ and the datapoint $x$ by postulating a joint distribution $p_{\theta}(x, h)$ of the data $x$ and representation $h$, such that $p_{\theta}(h)$ and the posterior $p_{\theta}(x \mid h)$ have a simple form as a function of the parameters $\theta$. These are also called latent variable probabilistic models, since $h$ is a latent (hidden) variable.
The standard goal in distribution learning is to find the $\theta$ that "best explains" the data (what we called Task (A)) above). This is formalized using maximum-likelihood estimation going back to Fisher (~1910-1920): find the $\theta$ that maximizes the log probability of the training data. Mathematically, indexing the samples with $t$, we can write this as
\[\max_{\theta} \sum_{t} \log p_{\theta}(x_t) \qquad (1)\]
where \(p_{\theta}(x_t) = \sum_{h_t}p_{\theta}(x_t, h_t).\)
(Note that $\sum_{t} \log p_{\theta}(x_t)$ is also the empirical estimate of the cross-entropy $E_{x}[\log p_{\theta}(x)]$ of the distribution $p_{\theta}$, where $x$ is distributed according to $p^*$, the true distribution of the data. Thus the above method looks for the distribution with best cross-entropy on the empirical data, which is also log of the perplexity of $p_{\theta}$.)
In the limit of $t \to ∞$, this estimator is consistent (converges in probability to the ground-truth value) and efficient (has lowest asymptotic mean-square-error among all consistent estimators). See the Wikipedia page. (Aside: maximum likelihood estimation is often NP-hard, which is one of the reasons for the renaissance of the method-of-moments and tensor decomposition algorithms in learning latent variable models, which Rong wrote about some time ago.)
Toward task C: Representations arise from the posterior distribution
Simply learning the distribution $p_{\theta}(x, h)$ does not yield a representation per se. To get a distribution of $x$, we need access to the posterior $p_{\theta}(h \mid x)$: then a sample from this posterior can be used as a "representation" of a data-point $x$. (Aside: Sometimes, in settings when $p_{\theta}(h \mid x)$ has a simple description, this description can be viewed as the representation of $x$.)
Thus solving Task C requires learning distribution parameters $\theta$ and figuring out how to efficiently sample from the posterior distribution.
Note that the sampling problems for the posterior can be #-P hard for very simple families. The reason is that by Bayes law, $p_{\theta}(h \mid x) = \frac{p_{\theta}(h) p_{\theta}(x \mid h)}{p_{\theta}(x)}$. Even if the numerator is easy to calculate, as is the case for simple families, the $p_{\theta}(x)$ involves a big summation (or integral) and is often hard to calculate.
Note that the max-likelihood parameter estimation (Task A) and approximating the posterior distributions $p(h \mid x)$ (Task C) can have radically different complexities: Sometimes A is easy but C is NP-hard (example: topic modeling with "nice" topic-word matrices, but short documents, see also Bresler 2015); or vice versa (example: topic modeling with long documents, but worst-case chosen topic matrices Arora et al. 2011)
Of course, one may hope (as usual) that computational complexity is a worst-case notion and may not apply in practice. But there is a bigger issue with this setup, having to do with accuracy.
Why the above reasoning is fragile: Need for high accuracy
The above description assumes that the parametric model $p_{\theta}(x, h)$ for the data was exact whereas one imagines it is only approximate (i.e., suffers from modeling error). Furthermore, computational difficulties may restrict us to use approximately correct inference even if the model were exact. So in practice, we may only have an approximation $q(h|x)$ to the posterior distribution $p_{\theta}(h \mid x)$. (Below we describe a popular methods to compute such approximations.)
How good of an approximation to the true posterior do we need?
Recall, we are trying to answer this question through the lens of Task C, solving some classification task. We take the following point of view:
For $t=1, 2,\ldots,$ nature picked some $(h_t, x_t)$ from the joint distribution and presented us $x_t$. The true label $y_t$ of $x_t$ is $\mathcal{C}(h_t)$ where $\mathcal{C}$ is an unknown classifier. Our goal is classify according to these labels.
To simplify notation, assume the output of $\mathcal{C}$ is binary. If we wish to use $q(h \mid x)$ as a surrogate for the true posterior $p_{\theta}(h \mid x)$, we need to have $\Pr_{x_t, h_t \sim q(\cdot \mid x_t)} [\mathcal{C}(h_t) \neq y_t]$ is small as well.
How close must $q(h \mid x)$ and $p(h \mid x)$ be to let us conclude this? We will use KL divergence as "distance" between the distributions, for reasons that will become apparent in the following section. We claim the following:
CLAIM: The probability of obtaining different answers on classification tasks done using the ground truth $h$ versus the representations obtained using $q(h_t \mid x_t)$ is less than $\epsilon$ if $KL(q(h_t \mid x_t) \parallel p(h_t \mid x_t)) \leq 2\epsilon^2.$
Here's a proof sketch. The natural distance these two distributions $q(h \mid x)$ and $p(h \mid x)$ with respect to accuracy of classification tasks is total variation (TV) distance. Indeed, if the TV distance between $q(h\mid x)$ and $p(h \mid x)$ is bounded by $\epsilon$, this implies that for any event $\Omega$,
\[\left|\Pr_{h_t \sim p(\cdot \mid x_t)}[\Omega] - \Pr_{h_t \sim q(\cdot \mid x_t)}[\Omega]\right| \leq \epsilon .\]
The CLAIM now follows by instantiating this with the event $\Omega = $ "Classifier $\mathcal{C}$ outputs a different answer from $y_t$ given representation $h_t$ for input $x_t$", and relating TV distance to KL divergence using Pinsker's inequality, which gives
\[\mbox{TV}(q(h_t \mid x_t),p(h_t \mid x_t)) \leq \sqrt{\frac{1}{2} KL(q(h_t \mid x_t) \parallel p(h_t \mid x_t))} \leq \epsilon\]
as we needed. This observation explains why solving Task A in practice does not automatically lead to very useful representations for classification tasks (Task C): the posterior distribution has to be learnt extremely accurately, which probably didn't happen (either due to model mismatch or computational complexity).
The link between Tasks A and C: variational methods
As noted, distribution learning (Task A) via cross-entropy/maximum-likelihood fitting, and representation learning (Task C) via sampling the posterior are fairly distinct. Why do students often conflate the two? Because in practice the most frequent way to solve Task A does implicitly compute posteriors and thus also solves Task C.
The generic way to learn latent variable models involves variational methods, which can be viewed as a generalization of the famous EM algorithm (Dempster et al. 1977).
Variational methods maintain at all times a proposed distribution $q(h | x)$ (called variational distribution). The methods rely on the observation that for every such $q(h \mid x)$ the following lower bound holds \begin{equation} \log p(x) \geq E_{q(h \mid x)} \log p(x,h) + H(q(h\mid x)) \qquad (2). \end{equation} where $H$ denotes Shannon entropy (or differential entropy, depending on whether $x$ is discrete or continuous). The RHS above is often called the ELBO bound (ELBO = evidence-based lower bound). This inequality follows from a bit of algebra using non-negativity of KL divergence, applied to distributions $q(h \mid x)$ and $p(h\mid x)$. More concretely, the chain of inequalities is as follows,
\[KL(q(h\mid x) \parallel p(h \mid x)) \geq 0 \Leftrightarrow E_{q(h|x)} \log \frac{q(h|x)}{p(h|x)} \geq 0\] \[\Leftrightarrow E_{q(h|x)} \log \frac{q(h|x)}{p(x,h)} + \log p(x) \geq 0\] \[\Leftrightarrow \log p(x) \geq E_{q(h|x)} \log p(x,h) + H(q(h\mid x))\]
Furthermore, equality is achieved if $q(h\mid x) = p(h\mid x)$. (This can be viewed as some kind of "duality" theorem for distributions, and dates all the way back to Gibbs. )
Algorithmically observation (2) is used by foregoing solving the maximum-likelihood optimization (1), and solving instead
\[\max_{\theta, q(h_t|x_t)} \sum_{t} E_{q(h_t\mid x_t)} \log p(x_t,h_t) + H(q(h_t\mid x_t))\]
Since the variables are naturally divided into two blocks: the model parameters $\theta$, and the variational distributions $q(h_t\mid x_t)$, a natural way to optimize the above is to alternate optimizing over each group, while keeping the other fixed. (This meta-algorithm is often called variational EM for obvious reasons.)
Of course, optimizing over all possible distributions $q$ is an ill-defined problem, so $q$ is constrained to lie in some parametric family (e.g., " standard Gaussian transformed by depth $4$ neural nets of certain size and architecture") such the above objective can be easily evaluated at least (typically it has a closed-form expression).
Clearly if the parametric family of distributions is expressive enough, and the (non-convex) optimization problem doesn't get stuck in bad local minima, then variational EM algorithm will give us not only values of the parameters $\theta$ which are close to the ground-truth ones, but also variational distributions $q(h\mid x)$ which accurately track $p(h\mid x)$. But as we saw above, this accuracy would need to be very high to get meaningful representations.
In the next post, we will describe our recent work further clarifying this issue of representation learning via a Bayesian viewpoint.
Theme available on Github.
|
CommonCrawl
|
On the determinant of a Toeplitz-Hessenberg matrix
I am having trouble proving that
$$\det \begin{pmatrix} \dfrac{1}{1!} & 1 & 0 & 0 & \cdots & 0 \\ \dfrac{1}{2!} & \dfrac{1}{1!} & 1 & 0 & \cdots & 0 \\ \dfrac{1}{3!} & \dfrac{1}{2!} & \dfrac{1}{1!} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ \dfrac{1}{(n-1)!} & \dfrac{1}{(n-2)!} & \dfrac{1}{(n-3)!} & \cdots & \dfrac{1}{1!} &1\\ \dfrac{1}{n!} & \dfrac{1}{(n-1)!} & \dfrac{1}{(n-2)!} & \dfrac{1}{(n-3)!} & \cdots & \dfrac{1}{1!} \end{pmatrix} =\dfrac{1}{n!}. $$
linear-algebra matrices determinant factorial
Héctor YelaHéctor Yela
$\begingroup$ Welcome to MSE. You'll get a lot more help, and fewer votes to close, if you show that you have made a real effort to solve the problem yourself. What are your thoughts? What have you tried? How far did you get? Where are you stuck? This question is likely to be closed if you don't add more context. Please respond by editing the question body. Many people browsing questions will vote to close without reading the comments. $\endgroup$ – saulspatz Jul 9 '19 at 21:39
$\begingroup$ I'll reformulate my comment: Would you mind writting the next-to-last row? Is it: $\left(\frac{1}{(n-1)!}\cdots\frac{1}{1!} 1\right)$ or just $\left(\frac{1}{(n-1)!}\cdots\frac{1}{1!} 0\right)$? $\endgroup$ – miraunpajaro Jul 9 '19 at 22:13
$\begingroup$ @miraunpajaro It is supposed to be the former. All the elements on the superdiagonal are $1$. $\endgroup$ – saulspatz Jul 9 '19 at 22:24
$\begingroup$ @saulspatz Thanks! I was confused about the weird behaviour of the last row $\endgroup$ – miraunpajaro Jul 9 '19 at 22:27
Hint. In general, let $d_0=d_1=1$ and let $(a_k)_{k=1,2,\ldots}$ be any sequence of numbers. For every $n\ge2$, denote by $d_n$ the determinant of the $n\times n$ Toeplitz-Hessenberg matrix $$ \begin{pmatrix} a_1 &1 &0 &0 &\cdots &0\\ a_2 &a_1 &1 &0 &\cdots &0\\ a_3 &a_2 &a_1 &1 &\cdots &0\\ \vdots &\vdots &\vdots &\ddots &\ddots &\vdots\\ a_{n-1} &a_{n-2} &a_{n-3} &\cdots &a_1 &1\\ a_n &a_{n-1} &a_{n-2} &a_{n-3} &\cdots &a_1 \end{pmatrix}. $$ If one expands the determinant by the first column, one obtains $$ d_n=-\sum_{k=1}^n(-1)^ka_kd_{n-k}. $$
Prove it by induction.
At each step, expand by minors along the top row.
At the end, think about the binomial theorem.
saulspatzsaulspatz
$\begingroup$ I don't see how the expansion along the first row helps. Did you mean expansion along the bottom row or the first column? $\endgroup$ – user1551 Jul 10 '19 at 0:15
$\begingroup$ I meant the first row, so at the first stage, there are two determinants, one of which is known by the induction hypothesis. $\endgroup$ – saulspatz Jul 10 '19 at 0:18
HINT.-By property of determinants, we lower the order of n to (n-1) as follows $$\Delta_n=\det\begin{pmatrix} 1 &1 &0 &0 &\cdots &0\\ a_2 &1 &1 &0 &\cdots &0\\ a_3 &a_2 &1 &1 &\cdots &0\\ \vdots &\vdots &\vdots &\ddots &\ddots &\vdots\\ a_{n-1} &a_{n-2} &a_{n-3} &\cdots &1 &1\\ a_n &a_{n-1} &a_{n-2} &a_{n-3} &\cdots &1 \end{pmatrix}$$
$$\Delta_n=\det\begin{pmatrix} 1 &0 &0 &0 &\cdots &0\\ a_2 &1-a_2 &1 &0 &\cdots &0\\ a_3 &a_2-a_3 &1 &1 &\cdots &0\\ \vdots &\vdots &\vdots &\ddots &\ddots &\vdots\\ a_{n-1} &a_{n-2}-a_{n-1} &a_{n-3} &\cdots &1 &1\\ a_n &a_{n-1}-a_n &a_{n-2} &a_{n-3} &\cdots &1 \end{pmatrix}$$ $$\Delta_n=\det\begin{pmatrix}1-a_2 &1 &0 &\cdots &0\\ a_2-a_3 &1 &1 &\cdots &0\\ \vdots &\vdots &\vdots &\ddots &\vdots &\\ a_{n-2}-a_{n-1} &a_{n-3} &\cdots &1 &1\\ a_{n-1}-a_n &a_{n-2} &a_{n-3} &\cdots &1 \end{pmatrix}$$
On the other hand one has $\Delta_n=\dfrac{1}{n!}=\dfrac{1}{n}\dfrac{1}{(n-1)!}=\dfrac 1n\Delta_{n-1}$ so we have to prove that
$$\det\begin{pmatrix}1-a_2 &1 &0 &\cdots &0\\ a_2-a_3 &1 &1 &\cdots &0\\ \vdots &\vdots &\vdots &\ddots &\vdots &\\ a_{n-2}-a_{n-1} &a_{n-3} &\cdots &1 &1\\ a_{n-1}-a_n &a_{n-2} &a_{n-3} &\cdots &1 \end{pmatrix}=\dfrac 1n\Delta_{n-1}$$ $$\dfrac 1n\Delta_{n-1}=\det\begin{pmatrix}\dfrac 1n &1 &0 &\cdots &0\\ \dfrac{1}{n}a_2 &1 &1 &\cdots &0\\ \vdots &\vdots &\vdots &\ddots &\vdots &\\ \dfrac 1na_{n-2} &a_{n-3} &\cdots &1 &1\\ \dfrac 1na_{n-1} &a_{n-2} &a_{n-3} &\cdots &1 \end{pmatrix}$$
Note that the columns in these two last determinants are all equal excepting the first.
Can you apply comfortably induction now to prove that both two last determinants are equal?
PiquitoPiquito
Not the answer you're looking for? Browse other questions tagged linear-algebra matrices determinant factorial or ask your own question.
Diagonalization of a Toeplitz matrix
Expressing determinant as a linear combination of minors of fixed dimension
Determinant of block tridiagonal Toeplitz matrices
Proof about determinant for a special matrix?
Determinant of $N \times\ N$ matrix
Formula for determinant of diagonals block matrix
|
CommonCrawl
|
bit-player
An amateur's outlook on computation and mathematics
The banner graphic
My God, It's Full of Dots!
Counting Your Chickens Before They're Pecked
The short arm of coincidence
Prime After Prime
Traffic Jams in Javascript
Sunshine In = Earthshine Out
Dancing with the Spheres
The abc game
Statistical mechanics of magnet balls
World3, the public beta
The n-ball game
A slight discrepancy
Googling the lexicon
Kenken-friendly numbers
Swapping Lies
The Middle of the Square
Jotto
Words for the Wordle-Weary
Does having prime neighbors make you more composite?
Riding the Covid coaster
Three Months in Monte Carlo
Foldable Words
We Gather Together…
More Questions About Trees
Questions About Trees
April Fool Redux
MathJax turns 3.0
computing (159)
mathematics (171)
modern life (78)
off-topic (11)
problems and puzzles (30)
social science (13)
The Hilbert curve
Drivel
Playing in traffic
Climate model
Gene flipping
Aglorithms
Limits to growth
Toothpick tree
Group Theory in the Bedroom
Dancing with the Spheres →
College ties
Posted on 5 October 2012 by Brian Hayes
Nate Silver's fivethirtyeight blog for the New York Times applies computational statistics to U.S. presidential politics. A recent post discusses the possibility of a tie vote in the Electoral College.
If the votes on November 6 should come out according to the map above, we'll have a 269-to-269 deadlock, leaving it to the House of Representatives to choose the next president. Silver ran 25,001 simulated elections, and the result shown above appeared 137 times. Eight other tied arrangements also turned up, though more rarely. The overall probability of an even split was about 0.6 percent. (The simulations were based on polls taken before last week's Obama-Romney debate.)
Looking at Silver's maps, I began to muse about a purely combinatorial question: Setting aside all considerations of political plausibility, in how many different ways can the Electoral College vote come out a tie? To put it another way, how many two-colorings of the U.S. map give each party 269 Electoral College votes?
I now have an answer to this question, which I'll give below. But first, for the benefit of my non-U.S. readers, I should say a word about that curious institution the Electoral College. It was the wisdom of the Founders that the POTUS should not be elected directly by the people but rather by the states, with each state getting as many votes as it has senators (two each) plus members of the House of Representatives (from 1 to 53, depending on population). Since 1964 the District of Columbia (which isn't a state, but I'm going to call it one anyway) has also had three votes. Thus there are 51 voting states, with vote allocations ranging from 3 to 55, and the total number of votes is 538.
California 55 Maryland 10 Utah 6
Texas 38 Minnesota 10 Nebraska 5
Florida 29 Missouri 10 New Mexico 5
New York 29 Wisconsin 10 West Virginia 5
Illinois 20 Alabama 9 Hawaii 4
Pennsylvania 20 Colorado 9 Idaho 4
Ohio 18 South Carolina 9 Maine 4
Georgia 16 Kentucky 8 New Hampshire 4
Michigan 16 Louisiana 8 Rhode Island 4
North Carolina 15 Connecticut 7 Alaska 3
New Jersey 14 Oklahoma 7 Delaware 3
Virginia 13 Oregon 7 District of Columbia 3
Washington 12 Arkansas 6 Montana 3
Arizona 11 Iowa 6 North Dakota 3
Indiana 11 Kansas 6 South Dakota 3
Massachusetts 11 Mississippi 6 Vermont 3
Tennessee 11 Nevada 6 Wyoming 3
Generally, each state's votes are cast as a winner-take-all block (but see the note on Maine and Nebraska at the end of this post).
I assume here that only two parties receive Electoral College votes. Thus any partitioning of the elector list into two subsets—red and blue—is a possible election outcome. The number of distinguishable outcomes is simply the product of all these binary choices: 251, or 2,251,799,813,685,248. We want to know how many of these cases yield exactly 269 red and 269 blue votes. It turns out there are 16,976,480,564,070 ways of arriving at a drawn election, which is roughly 0.75 percent of the total.
Let me emphasize that this number is of no political consequence; in particular, it says nothing about the probability of a tie—at least not unless the states choose their votes by flipping fair coins. I can't see where the number holds any notable mathematical significance either. What attracted my interest was simply the challenge of computing it.
Doing it the direct and obvious way—enumerating all 251 partitions and summing the votes for each case—is not utterly unthinkable. Amazingly enough, we live in a world where you can count to a quadrillion and live to tell the tale. But doing so would require more programming effort—or more patience or more hardware—than I'm willing to invest for the sake of idle curiosity.
It's interesting that some bigger problems are actually easier. Suppose we keep the number of electors constant at 538 but we subdivide the states into smaller territories, each of which gets fewer votes. In the limiting case there are 538 regions with one vote each. Now the problem is immensely larger: There are 2538 ways of two-coloring a map with 538 regions. Nevertheless, with very little fuss we can calculate the number of tied configurations, without having to count them all one by one. The number is:
$$\binom{538}{269} = \frac{538!}{(269!)^2} \approx 3 \times 10^{160}.$$
(The exact value is 30937469012875859932471852213149074559 46754979248232932201743920079668590495281866215749684213 30476315882464242716565222046883627439960129220374560486 58575478800.)
Is there some similar mathematical magic that will solve the original Electoral College problem? Not quite so neatly, as far as I know. But the same underlying principle offers a measure of help.
Let's focus for a moment on the eight states in the Electoral College that get three votes each. Considering just those states in isolation, they have 28 = 256 possible red-blue configurations, but only nine different ways of contributing to the election outcome. If red wins none of the states, it gets zero votes. If red wins any one of the states, it gets three votes, and there are eight ways for this to happen. Winning any two of the states yields six votes, and there are \(\binom{8}{2} = 28\) combinations with this result. In the map above, red wins five of the three-vote states (Alaska, Montana, North Dakota, South Dakota and Wyoming) while blue wins three (Delaware, the District of Columbia and Vermont). This combination is one of 56 that produce a 15–9 vote split in favor of red. By weighting the vote sums according to the appropriate binomial coefficients, we can deal with just nine cases instead of 256.
The same trick works for the five states that get four votes apiece, the three states that have five votes each, and so on. A neat way of organizing the computation is to count through all the possible values of a mixed-radix number (where each digit has its own radix, independent of its neighbors). For the Electoral College data, the mixed-radix number has a digit for each set of states that are allocated the same number of votes; the radix of this digit is one more than the number of states in the set. The number of 19 digits overall, with radices ranging from 2 to 9.
We can survey the full spectrum of election outcomes by cycling through all the values of this number, starting at 0000000000000000000 and counting up to 1122121111443236358. For each value we calculate the corresponding vote total s:
$$s = \sum_i k_i m_i$$
Then we use s as an index into an array H where we keep track of how many ways the total s can be achieved. We increment Hs by the appropriate number of combinations:
$$H_s = H_s + \prod_i \binom{r_i}{k_i}$$
With this algorithm, the number of configurations for which we need to sum up the votes is 6,270,566,400, which is a lot less than 251. Even allowing for the extra work of calculating binomial coefficients, the mixed-radix strategy reduces the size of the computation by a factor of 10,000 or so. For a Lisp program running on a laptop, it becomes a job of hours rather than years.
Since I was riffling through all of those 6,270,566,400 configurations anyway, I figured I might as well record not just the number of ties but the full spectrum of vote totals. It's no surprise that the result looks like a binomial distribution:
What did surprise me a little is the smoothness of the curve. I was expecting a bit of lumpiness because of the minimum allocation of three votes and because a few states constitute large, indivisible blocks of votes. If you zoom in on the extreme tails of the distribution, there is indeed some jaggedness:
votes: 0 1 2 3 4 5 6 7 8 9 10
combinations: 1 0 0 8 5 3 34 43 36 122 201
There's just one way that a party can receive zero votes; winning either one or two votes is impossible; seven votes are more likely than either six or eight. But fluctuations of this kind become undetectable toward the middle of the distribution.
Is there some other approach that would improve on the mixed-radix algorithm? Almost surely. But I would bet against the possibility of a polynomial-time algorithm. Counting Electoral College ties is a variation on another problem for which I harbor a special fondness, called the number partitioning problem (NPP). I was introduced to the NPP by my friend Stephan Mertens, and I wrote about it in a 2002 American Scientist column. The NPP asks whether a set of positive integers has at least one partitioning into two sets that have the same sum. In this form, the problem is NP-complete. The counting version of the problem is surely no easier.
Finally, I have to confess that I haven't really computed all the ways that the Electoral College can wind up with a tied vote. The snag is that pesky pair of states Maine and Nebraska, which do not necessarily award all their votes to the winner of the state contest. Instead they allocate one elector to the winner in each Congressional district, with the two senatorial electors going to the winner of the entire state. Thus Maine could be viewed as comprising three voting territories with allocations of 1, 1 and 2 electoral votes; Nebraska breaks down into four districts with 1, 1, 1 and 2 votes. This fragmentation of the vote expands the combinatorial challenge of counting ties. Another complication is even more troublesome: The intrastate voting territories are not independent. Unless vote counting is even more treacherous than it seems, you can't lose all of a state's Congressional districts but win the state overall. In my calculations I've simply ignored the possibility of split votes in Maine and Nebraska; perhaps someone else would like to patch this up.
Yet another caveat is that the Electoral College is not just a mathematical artifice but also a human institution. The electors are people who pledge to vote according to the mandate of their state, but they are not compelled to do so. An elector who ignores his or her instructions and votes for Mickey Mouse may well be punished after the fact, but the vote for Mickey stands, nonetheless.
Update 2012-10-10: I wrote: "Is there some other approach that would improve on the mixed-radix algorithm? Almost surely." Well, I got that right, and I've never felt quite so chagrinned about being correct. An anonymous commenter suggested trying dynamic programming, and soon after that Iain Murray posted an elegant, terse Python program based on that principle. His program answers the how-many-ties? question in milliseconds. It's hard to fathom how I could have spent several days noodling over the tie-counting problem and missed a solution that's so clearly superior. Especially since I'd seen it before. I mentioned above that Stephan Mertens introduced me to the number-partitioning problem. His lovely book The Nature of Computation (co-authored with Cristopher Moore) includes a detailed discussion of a dynamic-programming algorithm for the NPP.
One final point: Although Murray's program is fast, it is not polynomial-time (nor does he make any claims to that effect). The anonymous commenter correctly states that the running time of the algorithm is proportional to the product of the number of states and the number of electors. But the size of the input is the logarithm of this number. Thus the running time is an exponential function of the number of bits in the input.
This entry was posted in computing, mathematics.
5 Responses to College ties
anonymous moose says:
9 October 2012 at 4:35 am
I didn't work out the details, but I think that using dynamic programming you can count the number of tied configurations in time O(#electors*#states), where #electors stands for the total number of electors and #states is the total number of states.
Iain Murray says:
9 October 2012 at 11:22 am
I fear that the Python code will get mangled, but yes a simple recursion (or "dynamic programming") can quickly give the answer that you got:
from collections import defaultdict
ways = {0: 1}
vote_nums = [55,10,6,38,10,5,29,10,5,29,10,5,20,9,4,20,\
9,4,18,9,4,16,8,4,16,8,4,15,7,3,14,7,3,13,7,3,\
12,6,3,11,6,3,11,6,3,11,6,3,11,6,3]
for num in vote_nums:
new_ways = defaultdict(lambda: 0)
for w in ways:
new_ways[w+num] += ways[w]
new_ways[w-num] += ways[w]
ways = new_ways
print ways[0]
10 October 2012 at 9:47 am
Aha! The Right Answer. Why didn't I think of that? I'm going to add an update to the post.
Jim Ward says:
9 October 2012 at 3:20 pm
After the 269 tie, the House deadlocks, the Senate deadlocks, and John Boehner becomes President.
ZZMike says:
10 October 2012 at 1:44 pm
If there's a tie, the House of Reps breaks the tie, one vote/state. That's the newly-elected House (and Senate, for the VP.) Given the larger number of Red states. the outcome would (under those exceedingly unlikely circumstances) be obvious.
In addition to the basic HTML formatting options offered by the buttons above, you can also enter LaTeX math commands. Enclose LaTeX content in \( ... \) for inline mode or \[ ... \] for display mode.
Follow @bit_player on Twitter
Subscribe to an RSS feed: Posts, Comments
|
CommonCrawl
|
Haematological analysis of Japanese macaques (Macaca fuscata) in the area affected by the Fukushima Daiichi Nuclear Power Plant accident
Yusuke Urushihara1,2,
Toshihiko Suzuki3,
Yoshinaka Shimizu3,
Megu Ohtaki4,
Yoshikazu Kuwahara5,
Masatoshi Suzuki6,
Takeharu Uno7,
Shiori Fujita3,
Akira Saito8,
Hideaki Yamashiro9,
Yasushi Kino10,
Tsutomu Sekine11,
Hisashi Shinoda3 &
Manabu Fukumoto1,8
Scientific Reports volume 8, Article number: 16748 (2018) Cite this article
337 Altmetric
Several populations of wild Japanese macaques (Macaca fuscata) inhabit the area around Fukushima Daiichi Nuclear Power Plant (FNPP). To measure and control the size of these populations, macaques are captured annually. Between May 2013 and December 2014, we performed a haematological analysis of Japanese macaques captured within a 40-km radius of FNPP, the location of a nuclear disaster two years post-accident. The dose-rate of radiocaesium was estimated using the ERICA Tool. The median internal dose-rate was 7.6 μGy/day (ranging from 1.8 to 219 μGy/day) and the external dose-rate was 13.9 μGy/day (ranging from 6.7 to 35.1 μGy/day). We performed multiple regression analyses to estimate the dose-rate effects on haematological values in peripheral blood and bone marrow. The white blood cell and platelet counts showed an inverse correlation with the internal dose-rate in mature macaques. Furthermore, the myeloid cell, megakaryocyte, and haematopoietic cell counts were inversely correlated and the occupancy of adipose tissue was positively correlated with internal dose-rate in femoral bone marrow of mature macaques. These relationships suggest that persistent whole body exposure to low-dose-rate radiation affects haematopoiesis in Japanese macaques.
The Great Japan Earthquake and subsequent tsunami of March 2011 caused the Fukushima Daiichi Nuclear Power Plant (FNPP) accident, which released large amounts of artificial radioactive substances into the environment1. After people were evacuated, wild animals inhabiting the area became contaminated with artificial radionuclides2. These animals were exposed to long-term irradiation with a low-dose-rate of internally and externally deposited radionuclides. Seven years have passed since the FNPP disaster, but the biological effect caused by long-term exposure to radioactive caesium, a persistent nuclide, remains a major concern. Various biological impacts have been reported after the FNPP accident3,4,5,6. In some studies, the estimations of dose and dose-rate were carried out and were associated with the effect of wild life and livestock7,8,9,10,11,12. However, the extent to which radiation exposure is contributing to these findings remains unclear, due to the uncertainty attributed to fieldwork that is not controlled13,14.
Japanese macaques are a species of non-human primate with a life span of more than 20 years in a wild state. They form troops comprised of several dozen individuals with a mean home range size of 8.99 km2 (0.29–39.7 km2), irrespective of evergreen or deciduous habitat15. Japanese macaques are omnivorous and feed on plant leaves, fruits, insects and other small animals16. Those living in the Fukushima prefecture become sexually mature at around 5 years old17. Several troops of macaques had settled in certain areas around FNPP before the accident18, making them suitable subjects for determining the effect of long-term exposure to low-dose-rate radiation on humans. Leucocyte count in peripheral blood has been reported to have a significant inverse correlation with the muscle radiocaesium concentration in immature macaques captured in Fukushima city, located approximately 70 km northwest of FNPP19.
Haematopoiesis is one of the main vital processes in the body of mammals and is one of the most radiosensitive systems20. Bone marrow is the primary haematopoietic tissue in mammals, given that it produces all blood cells. Caused by moderate to high levels of radiation, acute whole-body irradiation reduces bone marrow cellularity and blood cell count21,22. The haematopoietic system is highly sensitive to radiation; whether at a large or medium dose, acute exposure damages its function23. However, the effect of long-term exposure to low-dose-rate radiation on the haematopoietic system, especially that of bone marrow, remains to be elucidated.
In this study, we collected haematological data from 95 Japanese macaques captured within a 40-km radius of FNPP (the exposed group) and within a range of 60-km to 100-km of FNPP (the non-exposed group). We estimated internal and external radiocaesium dose-rates using the ERICA Tool, and performed multiple regression analyses to evaluate the dose-rate dependency on haematological values, adjusting for possible background (confounding) covariates. While the environmental dose-rate from radionuclides in the soil is approximately 0.33 mSv/year in Japan24, the air dose-rate of several areas inhabited by the exposed group in this study was more than 100 times higher than the general background level25; therefore, haematological analysis of macaques inhabiting the area affected by the FNPP accident would provide insight into the effect of persistent exposure to low-dose-rate radiation on humans.
Dose-rate and haematological values in peripheral blood
We obtained peripheral blood samples from 42 exposed and 23 non-exposed Japanese macaques (Fig. 1). The response of chemokine and cytokine family genes to gamma irradiation is reportedly different between infant and adult mouse bone marrow tissues26. Ochiai, et al., previously divided Japanese monkeys into immature (0–4 years) and mature (\(\geqq \)5 years) groups19. Therefore, we adopted their classification in this study. The haematological values of peripheral blood are shown in Table 1. In members of the exposed group, the median radiocaesium (134Cs + 137Cs) activity concentrations in mature and immature macaques were 2,250 Bq/kg (ranging from 285 to 34,600 Bq/kg) and 1,280 Bq/kg (ranging from 376 to 24,500 Bq/kg), respectively, in femoral muscle. In members of the non-exposed group, the concentrations were 72.3 Bq/kg (ranging from 36.9 to 269.5 Bq/kg) for mature macaques and 57.3 Bq/kg for only one immature macaque. Red blood cell (RBC) count and hematocrit (Hct) were significantly lower in mature macaques of the exposed group than those of the non-exposed group.
Map of the sampling point. The black circle indicates the location of Fukushima Daiichi Nuclear Power Plant (FNPP). X-marks and cross-marks indicate the sampling points of Japanese macaques in the exposed and the non-exposed areas, respectively.
Table 1 Haematological values in peripheral blood.
Using the ERICA Tool, we calculated internal and external dose-rates from the concentration of combined 134Cs and 137Cs (radiocaesium) in skeletal muscle and in soil, respectively (Table 2). The body size is different between mature and immature macaques, affecting the determination of dose conversion coefficient. Therefore, we estimated the dose-rate from dose conversion coefficients of four sizes of spheroids, as described in the Methods section. Minor and major axes of each spheroid were calculated by the plots of body length and body weight of 65 macaques (Supplementary Fig. S1a). According to the home range size of Japanese macaques15, we calculated an average 137Cs concentration in soil within a 3-km radius (corresponding to the area of a 30-km2 circle) from the capture point based on the mesh data of radiocaesium deposition in soil by the airborne monitoring survey27. 137Cs concentration in soil of the capture point showed significantly positive correlation with mean 137Cs concentration in soil within 30-km2 around the capture point (N = 79, r = 0.97, p < 0.001, Supplementary Fig. S2). We therefore propose that the external dose-rate calculated from that of the capture point in this study can be used as real external dose-rate. The radioactivity concentrations of radiocaecium in soil collected from the non-exposed area on December 28, 2012 were lower than 10,000 Bq/m2, which was undetectable by the airborne survey27. On the other hand, radiocaesium was detected in the skeletal muscle of members of the non-exposed group. The median internal dose-rate in the non-exposed group was 0.45 μGy/day (0.24–1.73 μGy/day). The median internal dose-rate was 7.6 μGy/day (1.8–219 μGy/day) and the external dose-rate was 13.9 μGy/day (6.7–35.1 μGy/day) in the exposed group. A weak positive correlation was found between external and internal dose-rates in the same individual (N = 69, r = 0.38, p < 0.001), whereas the combined internal and external (total) dose-rate was strongly associated with internal dose-rate (N = 69, r = 0.98 and p < 0.001, Supplementary Fig. S3).
Table 2 Estimated dose-rates of Japanese macaques (μGy/day).
We performed multiple regression analyses to evaluate dose-rate effects on haematological values in the peripheral blood of macaques, adjusting for the effect of confounding covariates such as sex, age, season of capture, and altitude of the capture point (adjusted by every 100 m). A significant contribution of internal dose-rate (a tendency to decrease with dose-rate) was detected in white blood cell (WBC) count and platelet (PLT) count in mature macaques. On the contrary, no correlation was observed between external dose-rate and any variable (Table 3 and Supplementary Table S1).
Table 3 Estimated coefficients of dose-rate effects on the haematological values in peripheral blood of macaques using multiple regression with covariates sex, age, season of capture date, and altitude of capture.
Dose-rate and haematopoietic cells in bone marrow
We obtained bone marrow samples from the femur of 18 mature and 20 immature macaques of the exposed group. We counted the cell number of each haematopoietic cell lineage in sampled bone marrow and analyzed a correlation of haematopoietic cells and dose-rate (Figs 2 and 3; Supplementary Table S2). Using similar methods, we also performed multiple regression analyses to examine dose-rate effects on haematopoietic values in bone marrow of macaques (Table 4 and Supplementary Table S3). In mature macaques, significant inverse correlations with internal dose-rate were detected in myeloid, megakaryocyte, and haematopoietic cell counts. In addition, the opposite tendency was detected in adipose tissue from bone marrow of mature macaques. On the other hand, no consistent correlation was observed between external dose-rate and haemotological values.
Correlation between dose-rate and haematological values in bone marrow of the exposed group. White circles: immature macaques (N = 20), black circle: mature macaques (N = 18). Cell number is presented per 115,600 μm2 of bone marrow. Solid lines and dashed lines indicate linear trend lines which were drawn by the scatter plots of mature and immature animals, respectively. r and p indicate Pearson's correlation coefficient and p value, respectively. All Pearson's correlation coefficients and p values are shown in Supplementary Table S2.
Representative histology of bone marrow of the exposed macaques. (a) A 9-year-old male captured on August 27, 2013 with 479 Bq/kg (134Cs + 137Cs) of the skeletal muscle, of which the estimated internal dose-rate was 4.90 μGy/day and the external dose-rate was 24.8 μGy/day. (b) An 8-year-old female captured on January 24, 2014 with 11,400 Bq/kg (137Cs + 134Cs) of the skeletal muscle, of which the estimated internal dose-rate was 74.5 μGy/day and the external dose-rate was 24.9 μGy/day.
Table 4 Estimated coefficients of dose-rate effects on the haematological values in bone marrow using multiple regression with covariates sex, age, season of capture date, and altitude of capture.
Total dose-rate was highly associated with internal dose-rate (Supplementary Fig. S3), suggesting that the quantified effects found in this study reflect internal dose-rate. The soil radiocaesium concentration from the capture points showed a significantly positive correlation with mean radiocaesium concentration in soil within 30 km2 around the capture point (Supplementary Fig. S2). The ratio of the mean radiocaesium concentration in soil within 30 km2 from the capture point to the radiocaesium concentration of capture point was 1.12 ± 0.18 (S.D.), ranging from 0.92 to 1.57. We consider the error of the calculated external dose-rate to be within an allowable range. However, external dose-rate was not measured directly for each individual. In comparison with the internal dose-rate, more uncertainty may be mediated in the calculation of external dose-rate in this study.
It is important to investigate the living conditions and temporal dose-rate changes of macaques, in order to determine whether or not low-dose-rate radiation affects haematopoiesis. The variance of internal dose-rate was larger than that of external dose-rate in the exposed group (Table 2). Japanese macaques are omnivorous16. The variance of internal dose-rate in insects and small animals has been reported to be greater than the external dose-rate variance28. These data suggest that the variance of radiocaesium concentration in Japanese macaques is reflected by that of their prey. In our previous study, we analyzed the correlation of oxidative stress markers in calf peripheral blood with either internal or external dose, or with internal or external dose-rate. We found that oxidative stress status is closely related to internal dose-rate but to no other conditions11. We estimated the whole body dose-rate using the ERICA Tool and the haematopoietic cell count in bone marrow and correlated with the internal dose-rate (Table 4 and Supplementary Table S3). Various studies have reported that radiocaesium concentration in organs, including peripheral blood, is highly correlated with that in skeletal muscles in animals around FNPP after the accident29,30. Haematopoietic bone marrow is surrounded by skeletal muscles. Radiocaesium concentration tends to be highest in skeletal muscles, more than 20-fold that of peripheral blood. Furthermore, skeletal muscles compromise most of the body weight. Taken together, these suggest that internal dose-rate of bone marrow is derived from skeletal muscles, which can be estimated using radiocaesium concentration of femoral muscle. However, it is important to measure radionuclides in each organ for estimation of marrow dose-rate.
We chose populations of macaques residing at the same latitude and in adjacent prefectures to ensure that the environments were as identical as possible. Wild Japanese macaques may be infected with various pathogens. This can result in modest to marked effects on the intestine and its physiologic and immunologic functions, including peripheral blood cell count, so we carefully chose the members of the non-exposed group. We did not observe any macroscopic differences between the exposed and the non-exposed macaques, including coat and the amount of subcutaneous fat which are both affected by climate, nor did we observe any anomaly during dissection. Compared with the non-exposed group, the exposed group showed a lower RBC count and Hct, but these were still within the normal ranges. The mean altitude of capture points in the exposed group was significantly lower than that of the non-exposed group (Table 1). The natural environment of the exposed group may be different from that of the non-exposed group. Therefore, to exclude the effect of confounding covariates like sex, age, and the condition of capture, we applied multiple regression analyses and adjusted for possible effects of the confounders to evaluate dose-rate effects on the haematological values in peripheral blood. WBC and PLT counts in the peripheral blood of mature macaques were revealed to be inversely correlated with internal dose-rate (Table 3 and Supplementary Table 1). It should be noted, however, that the present study was not performed under strictly controlled conditions, but used wild macaques, and we could not therefore rule out environmental factors as confounders. Data collected from macaques, a closely-related species to humans, will contribute to radiation protection in humans. Therefore, it is necessary to vigilantly monitor both exposed and non-exposed macaque groups over time to elucidate the ultimate effect of very low-dose-rate radiation on the haemopoietic system, which may indicate a similar response in humans.
The haematopoietic system is one of the most radiosensitive tissues in the body20. Bone marrow is composed of haematopoietic cells and adipocytes and radiation suppresses the haematopoietic system, leading to increased concentrations of adipocytes. Total lipids of rat bone marrow reach maximum levels 1 week after exposure to radiation as a function of dose, but total lipids slowly decrease with time. Similarly, 8 days after 8-Gy irradiation, monkeys show a decrease in total marrow31. In this study, the ratio of the occupancy area of haematopoietic cells to adipocytes in bone marrow of mature macaques was positively correlated with internal dose-rate (Table 4 and supplementary Table S3). This shows that persistent exposure to low-dose-rate radiation, as well as acute high-dose radiation, is toxic to haematopoietic systems. The haematopoietic system is organized in a hierarchical manner. For example, damaged multipotent progenitors and haematopoietic progenitor cells result in myelosuppression, which is consistent with the suppression of all lineages of haematopoietic cells. In mature macaques, myeloid cell, megakaryocyte, and haematopoietic cell counts in bone marrow were inversely correlated with internal dose-rate (Table 4 and Supplementary Table S3). Furthermore, WBC and PLT counts in peripheral blood were inversely correlated with internal dose-rate (Table 3 and Supplementary Table S1). However, WBC and PLT counts in peripheral blood showed no significant difference between exposed group and non-exposed group (Table 1). Adult C57BL/6 mice exposed to gamma-ray radiation show a significantly decreased formation of mixed types of colonies of haematopoietic/progenitor cells in bone marrow, even under daily exposure to 10 mGy for 1 month32. Being exposed to 22.6 μGy/h of gamma-radiation for 2.5 years, free-ranging meadow vole populations have higher counts of neutrophils in peripheral blood than either the controls or high-dose voles (3,840 μGy/h) but have lower hematocrit than the controls33. We recently reported that spermatogenesis was accelerated in large Japanese wild mice residing the ex-evacuation zone of the FNPP accident, but sperm number remained constant12. These findings suggest that the compensatory mechanism in haematopoiesis maintains homeostasis under persistent exposure to very low-dose-rate radiation and that radiation effects are different between laboratory animals and field animals. We need to continue to carefully observe and interpret the biological effects induced by long-term exposure to very low-dose-rate radiation.
Ochiai, et al., reported that WBC counts in peripheral blood are inversely correlated with radiocaesium activity concentrations in the muscle, even though the mean values are lower than 1,000 Bq/kg in immature macaques captured in Fukushima city19. In our study, radioactivity concentrations of 137Cs in the femoral muscle of immature macaques of the exposed group ranged from 308 to 24,500 Bq/kg (median 1,200 Bq/kg), which is a remarkably high level of internal radiocaesium compared with their study. In addition, we estimated internal and external dose-rates and performed multiple regression analyses for the dose-rate dependence of haematological values. However, we did not observe an association between any haematological value in peripheral blood and internal dose-rate in immature macaques (Table 3 and Supplementary Table S1). We previously reported that, based on the γH2AX foci, cattle from the ex-evacuation zone had significantly higher levels of DNA damage in lymphocytes compared to levels in non-affected cattle34. However, the damage levels gradually decreased from 500 to 700 days after the FNPP accident. Our sampling was performed between May 2013 and December 2014, but Ochiai, et al., collected samples between April 2012 and March 2013. The contradiction in findings may be caused by the difference in the sampling period.
Our study revealed that the myeloid cell, megakaryocyte, and haematopoietic cell counts in bone marrow were inversely correlated with the internal dose-rate in mature macaques of the exposed group. The main cause of myelosuppression by persistent exposure to very low-dose-rate radiation remains unclear. We previously reported that the level of oxidative stress was significantly correlated with the internal dose-rate of radiocaesium. The levels were less than 50 μGy/day in cattle captured from the ex-evacuation zone of the FNPP accident11. Hypersensitivity to irradiation has been reported in the haematopoietic stem cells (HSCs) of adult mice. Even irradiation of 0.02 Gy X-rays causes an immediate increase in reactive oxygen species (ROS). However, total body irradiation with 0.02 Gy does not decrease HSC numbers unless the HSC microenvironment is altered by an inflammatory insult35. Health effects of people exposed to persistent low-dose gamma-ray derived from Iridium-192 were studied, in which the subjects received 0.05–0.65 Gy for 72 days and, over the course of 10 years, various degrees of immune dysfunction and abnormalities of blood cells and bone marrow are identified. Participants recovered within 1–3 years of exposure36. Levels of Chernobyl liquidators at or above 200 mGy result in an increased risk of leukemia, after excluding smoking as a major confounding factor37. An international cohort study of nuclear workers reveal positive associations between chronic low-dose radiation exposure and excess relative risk of leukaemia mortality in nuclear workers exposed to very low-dose (mean 1.1 mGy/year) radiation. The mean follow-up period of the study is 27 years38. Ramsar, Iran is among the places in the world with the highest levels of natural radiation. This has had a non-detrimental effect on the people living there, who seem to have adapted to mean annual exposure levels of 10 (and up to 260) mGy/year39,40. This suggests that the effect of long-term very low-dose-rate radiation is a consequence of complicated biological responses. We therefore propose that continuous observation on the macaques inhabiting around FNPP is necessary to determine whether long-term exposure to low-dose-rate radiation induces irreversible myelosuppression, maintains basal haematopoiesis, or causes haematological malignancies. Cytogenetic dosimetry is recognized as a valuable assessment method in measuring acute exposure to more than 100 mGy of low linear energy transfer (LET) radiation41. Collaborative efforts are now underway to identify chromosome aberrations in peripheral blood lymphocytes of Japanese macaques for biological dose evaluation.
This study is the first to report that haematopoiesis may be adversely affected by low-dose-rate radiation in primates. To understand the human response to long-term exposure to very low-dose-rate radiation, Japanese macaques inhabiting the area affected by the FNPP accident are the most suitable wild animal, since they do not recognize or fear radiation and do not smoke, which is the most confounding factor in analyzing the effects of low-dose radiation on humans. Therefore, the present study provides extremely crucial data for understanding the effect of chronic, very low-dose-rate radiation on humans. Radiocaesium concentrations in the organs of macaques belonging to the exposed group have been continuously high, attributable to the intake of radiocontaminated foods. The life span of Japanese macaques is approximately 20 years, and many individuals born after the FNPP accident have been observed. Therefore, macaques inhabiting the exposed area are suitable to study in order to identify and measure the effects, including transgenerational effects, of chronic very low-dose-rate irradiation on humans. We should vigilantly monitor these populations of macaques over time.
Animals and Ethics statements
Japanese macaques were culled to prevent damage to crops according to the Japanese Monkey Management Plan based on the Wildlife Protection and Hunting Management Law. Macaques were captured using box traps and killed by licensed hunters at the request of each local government. The method of capture and killing was carried out according to the guidelines published by the Primate Research Institute of Kyoto University42. Japanese macaques inhabiting the areas of this study were not listed as endangered species on the Japanese Red List revised by the Ministry of Environment, Japan43.
This entire study was approved by the Institutional Animal Care and Use Committee of the Center for Laboratory Animal Research, Tohoku University (Approved number: 2014 IDAC-037). All experiments were performed in accordance with relevant guidelines and regulations. The macaques analyzed in this study were captured from May 2013 to December 2014. The exposed group consisted of 72 macaques (71 from Minamisoma city and 1 from Iitate village) in Fukushima prefecture. The non-exposed (control) group consisted of 23 macaques (17 from Shichikasyuku Town, 4 from Sendai City, and 2 from Kawasaki Town) in Miyagi prefecture (Fig. 1). We obtained 42 peripheral blood samples, 69 femoral muscles, and 38 femurs from the exposed group. From the non-exposed group, we collected 23 peripheral blood samples and 15 femoral muscles. The age of each macaque was determined by counting growth layers in the dental cementum44. The macaques were divided into 2 groups: the immature group (0–4 years) and the mature group (\(\geqq \)5 years) according to Ochiai, et al.19.
Measurement of radioactivity concentration
Radioactivity was determined by gamma-ray spectrometry, specifically by using high-purity Germanium detectors, as described previously29. In brief, duration of the measurement varied from 3,600 to 86,400 seconds, depending on the radioactivity concentration of the sample. Standard volume sources of different sizes were prepared by diluting stock solutions of 137Cs and 152Eu and by gelling with a highly absorbent polymer. Samples were homogenized and scaled in a polyethylene tube. The nuclide was identified by its characteristic photopeaks (greater than 3σ above the baseline). Samples were stored at −30 °C until radioactivity measurements were taken. Therefore, the radioactivity concentration of radiocaesium was corrected to the capture date.
Haematological analysis
Peripheral blood was collected from the hearts of the carcasses and was immediately mixed in a tube containing ethylenediaminetetraacetic acid disodium salt (EDTA-2Na). The numbers of WBC, RBC, and PLT, haemoglobin concentration (Hb), and Hct were measured with a full automatic blood cell counter (PCE-310, ERMA Inc., Tokyo, Japan).
We examined bone marrow from the femurs of 18 mature (8–10 years) and 20 immature macaques (2–4 years) of the exposed group. The femur was cut at its central part immediately after sampling, fixed in 4% formaldehyde, and decalcified in 10% EDTA-2Na at room temperature. Paraffin-embedded bone marrow tissue from the femur was sectioned and stained with Giemsa solution. The image of stained specimens was captured by a Virtual Slide System VS120-L100 (OLYMPUS, Tokyo, Japan). The cell number of each haematopoietic cell lineage, such as erythroid cells, myeloid cells, and megakaryocytes, was counted in bone marrow image of a 2000 pixel (170 nm/pixel) square. The occupancy of haematopoietic cells and adipose tissue was calculated using an open source image processing program, ImageJ45.
Dose-rate estimation
Dose-rate by radiocaesium exposure in Japanese macaques was calculated using the ERICA Tool (version 1.2)46 with some modifications. The ERICA Tool's default radiation weighting factors of 1 for β/γ-rays and 3 for low energy β-rays were used. The following conditions were used in this study: Tier 2 assessment, "Terrestrial", "Mammal", "Ground-living animal", "On-soil," and an occupancy factor value of 1.0. For the calculation of external dose-rate, the infinite plane isotopic source at 0.5 g/cm2 depth was defined as the radionuclide distribution patterns in soil. To determine dose conversion coefficients (DCCs) using the equation provided by the ERICA Tool, we used a spheroid to approximate the shape of the macaque's body, with the body length spanning across the long axis. We plotted the body length and the body weight of 65 macaques. Assuming that the specific gravity of the body is 1.0, we calculated the body width, which was imputed as the minor axis of the equation. For the body length, we used 30 cm for individuals between 25 and 35 cm. Likewise, 40, 50, and 60 cm were used instead of the actual body length (Supplementary Fig. S1). DCCs were calculated and are shown in Supplementary Table S4.
Several studies have reported that radiocaesium activity concentration is the highest in the skeletal muscle among the organs measured in animals inhabiting areas near the Fukushima Daiichi Nuclear Power Plant (FNPP) accident29,30. Assuming that the whole body of the macaque was composed of skeletal muscle, the internal dose-rate was calculated using the radioactivity concentration of radiocaesium in the femoral muscle. Calculations of external dose-rate were based on deposited radiocaesium in soil, where Japanese macaques were captured. The available mesh data of radiocaesium deposited in soil, provided by the airborne monitoring survey, was closest temporally (December 28, 2012) to our study (May 2013 to December 2014)27. Using these mesh data, the radiocasesium deposit density was calculated and the external dose-rate was corrected, based on physical delay, to the sampling day. Total dose-rate indicates the sum of internal and external dose-rate.
Welch's t-test was used to compare haematological values in peripheral blood between the exposed and non-exposed groups.
We performed multiple regression analyses for the dose-rate dependence of peripheral blood and bone marrow cells. As explanatory variables, we chose internal dose-rate, external dose-rate, age, sex, season of capture and altitude of the capture site (adjusted by every 100 m). The regression models can be elaborated as follows:
Given a set of data {(Yi, agei, mature, sexi, seaseoni, altitudei), i = 1, …, n (serial number of individual)}, we assumed the regression model was specified as:
$$\begin{array}{lll}{Y}_{i} & = & \mu +\{{\beta }_{1}\cdot (1\,-\,{{\rm{m}}{\rm{a}}{\rm{t}}{\rm{u}}{\rm{r}}{\rm{e}}}_{i})+{\beta }_{2}\cdot {{\rm{m}}{\rm{a}}{\rm{t}}{\rm{u}}{\rm{r}}{\rm{e}}}_{i}\}\cdot {x}_{Ii}+\{{\beta }_{3}\cdot (1\,-\,{{\rm{m}}{\rm{a}}{\rm{t}}{\rm{u}}{\rm{r}}{\rm{e}}}_{i})+{\beta }_{4}\cdot {{\rm{m}}{\rm{a}}{\rm{t}}{\rm{u}}{\rm{r}}{\rm{e}}}_{i}\}\cdot {x}_{Ei}\\ & & +{\gamma }_{1}\cdot {{\rm{a}}{\rm{g}}{\rm{e}}}_{i}+{\gamma }_{2}\cdot {{\rm{s}}{\rm{e}}{\rm{x}}}_{i}+{\gamma }_{3}\cdot {{\rm{s}}{\rm{e}}{\rm{a}}{\rm{s}}{\rm{o}}{\rm{n}}}_{i}+{\gamma }_{4}\cdot {{\rm{a}}{\rm{l}}{\rm{t}}{\rm{i}}{\rm{t}}{\rm{u}}{\rm{d}}{\rm{e}}}_{i}+{\varepsilon }_{i},\,i=1,\,\ldots ,\,n,\end{array}$$
where Yi denotes the logarithmic transformed observed value of the i th sample, agei indicates age of "i" at capture, maturei defined as maturei = 1 if agei ≥ 5.0, or 0, if agei < 5.0, sexi denotes sex of "i" (1:male, 0:female), and seasoni is defined by seasoni = 1, if the "i" was captured in the period from April to September, or 0, if the "i" was captured in the period from October to March, altitudei denotes the sea level altitude of "i" at capture in 100 m units, εi's denote error terms, consisting of measurement error and individual variation, which are realizations of independent random variables from a normal distribution with mean 0 and variance σ2. The parameters μ, β1, …, β4, γ1, …, γ4 are unknown regression coefficients to be estimated, which represent the intercept (μ), dose-rate effect factor by internal exposure in immature individuals (β1), dose-rate effect factor by internal exposure in mature individuals (β2), dose-rate effect factor by external exposure in immature individuals (β3), dose-rate effect factor by external exposure in mature individuals (β4), and effects of the confounding background covariates concerning age (γ1), sex (γ2), season (γ3), and altitude (γ4), respectively.
The least squares method was applied for estimating the unknown parameters. Since the number of samples (n) is small, based on AIC (Akaike's Information Criterion)47, we optimized the model with selecting variables of confounding background factor candidates. The R software (ver. 3.4)48 was used for implementing statistical data analyses. Using the parallel method with the same explanatory variables to the analyses of peripheral blood cells, we also performed multiple regression analyses for dose-rate dependency of bone marrow cells. We set our significance level at <0.05 for all statistical procedures.
Kinoshita, N. et al. Assessment of individual radionuclide distributions from the Fukushima nuclear accident covering central-east Japan. Proc. Natl. Acad. Sci. USA 108, 19526–19529, https://doi.org/10.1073/pnas.1111724108 (2011).
Takahashi, S. et al. A comprehensive dose evaluation project concerning animals affected by the Fukushima Daiichi Nuclear Power Plant accident: its set-up and progress. J. Radiat. Res. 56(Suppl 1), i36–41, https://doi.org/10.1093/jrr/rrv069 (2015).
Hiyama, A. et al. The biological impacts of the Fukushima nuclear accident on the pale grass blue butterfly. Sci. Rep. 2, 570, https://doi.org/10.1038/srep00570 (2012).
Akimoto, S. Morphological abnormalities in gall-forming aphids in a radiation-contaminated area near Fukushima Daiichi: selective impact of fallout? Ecol. Evol. 4, 355–369, https://doi.org/10.1002/ece3.949 (2014).
Watanabe, Y. et al. Morphological defects in native Japanese fir trees around the Fukushima Daiichi Nuclear Power Plant. Sci. Rep. 5, 13232, https://doi.org/10.1038/srep13232 (2015).
Bonisoli-Alquati, A. et al. Abundance and genetic damage of barn swallows from Fukushima. Sci. Rep. 5, 9432, https://doi.org/10.1038/srep09432 (2015).
Garnier-Laplace, J., Beaugelin-Seiller, K. & Hinton, T. G. Fukushima wildlife dose reconstruction signals ecological consequences. Environ. Sci. Technol. 45, 5077–5078, https://doi.org/10.1021/es201637c (2011).
Yamashiro, H. et al. Effects of radioactive caesium on bull testes after the Fukushima nuclear plant accident. Sci. Rep. 3, 2850, https://doi.org/10.1038/srep02850 (2013).
Strand, P. et al. Assessment of Fukushima-Derived Radiation Doses and Effects on Wildlife in Japan. Environ. Sci. Technol. Lett. 1, 198–203, https://doi.org/10.1021/ez500019j (2014).
Garnier-Laplace, J. et al. Radiological dose reconstruction for birds reconciles outcomes of Fukushima with knowledge of dose-effect relationships. Sci. Rep. 5, 16594, https://doi.org/10.1038/srep16594 (2015).
Urushihara, Y. et al. Analysis of Plasma Protein Concentrations and Enzyme Activities in Cattle within the Ex-Evacuation Zone of the Fukushima Daiichi Nuclear Plant Accident. PloS one 11, e0155069, https://doi.org/10.1371/journal.pone.0155069 (2016).
Takino, S. et al. Analysis of the Effect of Chronic and Radiation Exposure on Spermatogenic Cells of Male Large Japanese Field Mice (Apodemus speciosus) after the Fukushima Daiichi Nuclear Power Plant Accident. Radiat. Res. 187, 161–168, https://doi.org/10.1667/rr14234.1 (2017).
Vives, I. B. J. et al. The impact of the Fukushima nuclear accident on marine biota: retrospective assessment of the first year and perspectives. Sci. Total Environ. 487, 143–153, https://doi.org/10.1016/j.scitotenv.2014.03.137 (2014).
Brechignac, F. et al. Addressing ecological effects of radiation on populations and ecosystems to improve protection of the environment against radiation: Agreed statements from a Consensus Symposium. J. Environ. Radioact. 158–159, 21–29, https://doi.org/10.1016/j.jenvrad.2016.03.021 (2016).
Hanya, G. et al. Not only annual food abundance but also fallback food quality determines the Japanese macaque density: evidence from seasonal variations in home range size. Primates 47, 275–278, https://doi.org/10.1007/s10329-005-0176-2 (2006).
Yamato, T., Y., I. T., Kazuo, W. & Kunio, W. Spatial patterns in the diet of the Japanese macaque Macaca fuscata and their environmental determinants. Mamm. Rev. 45, 227–238, https://doi.org/10.1111/mam.12045 (2015).
Hayama, S., Nakiri, S. & Konno, F. Pregnancy rate and conception date in a wild population of Japanese monkeys. J. Vet. Med. Sci. 73, 809–812 (2011).
Hayama, S. et al. Concentration of radiocesium in the wild Japanese monkey (Macaca fuscata) over the first 15 months after the Fukushima Daiichi nuclear disaster. PloS one 8, e68530, https://doi.org/10.1371/journal.pone.0068530 (2013).
Ochiai, K. et al. Low blood cell counts in wild Japanese monkeys after the Fukushima Daiichi nuclear disaster. Sci. Rep. 4, 5793, https://doi.org/10.1038/srep05793 (2014).
Stewart, F. A. et al. ICRP publication 118: ICRP statement on tissue reactions and early and late effects of radiation in normal tissues and organs–threshold doses for tissue reactions in a radiation protection context. Annals of the ICRP 41, 1–322, https://doi.org/10.1016/j.icrp.2012.02.001 (2012).
Fliedner, T. M., Graessle, D., Paulsen, C. & Reimers, K. Structure and function of bone marrow hemopoiesis: mechanisms of response to ionizing radiation exposure. Cancer biother. Radiopharm. 17, 405–426, https://doi.org/10.1089/108497802760363204 (2002).
Eric, J. H, Amato, J. Giaccia. 7th edition of Radiobiology for the Radiologist (ed. Charles W. Mitchell) (lippincott williams & wilkins, 2011).
Till, J. E. & Mc, C. E. A direct measurement of the radiation sensitivity of normal mouse bone marrow cells. Radiat. Res. 14, 213–222 (1961).
Abe, S., Fujitaka, K., Abe, M. & Fujimoto, K. Extensive field survey of natural radiation in Japan. J. Nucl. Sci. Technol. 18, 21–45 (1981).
Minister of Education, Culture, Sports, Science and Technology (MEXT). Air Dose Rate Measurement Results at the Height of 1 m above the Ground in Fukushima Prefecture and the Neighboring Prefectures (As of November 19, 2013 (32nd Month After the Accident)) (Decay correction: November 19, 2013). http://emdb.jaea.go.jp/emdb/assets/site_data/en/kml/10300000001/10300000001_00.kmz, Accessed 30 January 2018.
Ariyoshi, K. et al. Age dependence of hematopoietic progenitor survival and chemokine family gene induction after gamma irradiation in bone marrow tissue in C3H/He mice. Radiat. Res. 3, 302–313, https://doi.org/10.1667/RR13466. (2014).
Minister of Education, Culture, Sports, Science and Technology (MEXT). Results of Deposition of Radioactive Cesium of the Sixth Airborne Monitoring Survey and Airborne Monitoring Survey Outside 80 km from the Fukushima Dai-ichi NPP (Decay correction: December 28, 2012), http://emdb.jaea.go.jp/emdb/assets/site_data/en/kml/765/765_00_kmz.zip, Accessed 8 February 2018.
Fuma, S. et al. Radiocaesium contamination and dose rate estimation of terrestrial and freshwater wildlife in the exclusion zone of the Fukushima Dai-ichi Nuclear Power Plant accident. J. Environ. Radioact. 171, 176–188, https://doi.org/10.1016/j.jenvrad.2017.02.013 (2017).
Fukuda, T. et al. Distribution of artificial radionuclides in abandoned cattle in the evacuation zone of the Fukushima Daiichi nuclear power plant. PloS one 8, e54312, https://doi.org/10.1371/journal.pone.0054312 (2013).
Tanoi, K. et al. Investigation of radiocesium distribution in organs of wild boar grown in Iitate, Fukushima after the Fukushima Daiichi nuclear power plant accident. J. Radioanal. Nucl. Ch. 307, 741–746, https://doi.org/10.1007/s10967-015-4233-z (2016).
Snyder, F. & Cress, E. A. Bone marrow lipids in rats exposed to total-body irradiation. Radiat. Res. 19, 129–141 (1963).
Guo, C. Y. et al. Sensitivity and dose dependency of radiation-induced injury in hematopoietic stem/progenitor cells in mice. Sci. Rep. 5, 8055, https://doi.org/10.1038/srep08055 (2015).
Boonstra, R., Manzon, R. G., Mihok, S. & Helson, J. E. Hormetic effects of gamma radiation on the stress axis of natural populations of meadow voles (Microtus pennsylvanicus). Environ. Toxicol. Chem. 24, 334–343 (2005).
Nakamura, A. J. et al. The Causal Relationship between DNA Damage Induction in Bovine Lymphocytes and the Fukushima Nuclear Power Plant Accident. Radiat. Res. 187, 630–636, https://doi.org/10.1667/RR14630.1 (2017).
Rodrigues-Moreira, S. et al. Low-Dose Irradiation Promotes Persistent Oxidative Stress and Decreases Self-Renewal in HematopoieticStem Cells. Cell reports 20, 3199–3211, https://doi.org/10.1016/j.celrep.2017.09.013 (2017).
Li, H. et al. Long-term health effects of persistent exposure to low-dose lr192 gamma-rays. Exp. Tther. Med. 12, 2695–2701, https://doi.org/10.3892/etm.2016.3682 (2016).
Kesminiene, A. et al. Risk of hematological malignancies among Chernobyl liquidators. Radiat. Res. 170, 721–735, https://doi.org/10.1667/RR1231.1 (2008).
Leuraud, K. et al. Ionising radiation and risk of death from leukaemia and lymphoma in radiation-monitored workers (INWORKS): an international cohort study. Lancet Haematol. 2, e276–281, https://doi.org/10.1016/S2352-3026(15)00094-0 (2015).
Mortazavi, S. M. J. & Mozdarani, H. Non-linear phenomena in biological findings of the residents of high background radiation areas of Ramsar. Int. J. Radiat. Res. 11, 3–9 (2013).
Ghiassi-nejad, M., Mortazavi, S. M., Cameron, J. R., Niroomand-rad, A. & Karam, P. A. Very high background radiation areas of Ramsar, Iran: preliminary biological studies. Health phys. 82, 87–93 (2002).
International Atomic Energy Agency (IAEA). IAEA Annual Report 2011. IAEA. org. Accessed 1 January 2018.
Primate Research Institute, Kyoto University Guideline for field research for nonhuman primates. http://www.pri.kyoto-u.ac.jp/research/guide-e2008.html. Accessed 15 March, 2015.
Japanese Ministry of the Environment. State of Japan's Environment at a Glance: Extinct and Endangered Species Listed in the Red Data Book. https://www.env.go.jp/en/nature/biodiv/reddata.html, Accessed 28 March, 2018.
Wada, K., Ohtaishi, N. & Hachiya, N. Determination of age in the Japanese monkey from growth layers in the dental cementum. Primates 19, 775–784, https://doi.org/10.1007/bf02373645 (1978).
Rasband, W. S., ImageJ, U. S. National Institutes of Health, Bethesda, Maryland, USA, https://imagej.nih.gov/ij/, 1997–2016.
Brown, J. E. et al. A new version of the ERICA tool to facilitate impact assessments of radioactivity on wild plants and animals. J. Environ. Radioact. 153, 141–148, https://doi.org/10.1016/j.jenvrad.2015.12.011 (2016).
Akaike, H. Information theory and an extension of the maximum likelihood principle. Proceedings of the 2nd International Symposium on Information Theory (eds Petrov, B. N. & Caski, F.) 267–281 (Akadimiai Kiado, Budapest, 1973).
R Development Core Team (2010) R: a language and environment for statistical computing, R Foundation for Statistical Computing, Vienna, Austria, http://www.R-project.org.
Digital elevation model (DEM) according to an open source from the Geospatial Authority of Japan, https://fgd.gsi.go.jp/download/menu.php.
We would like to thank Drs. E. Isogai and T. Fukuda, along with the graduate and medical students of Tohoku University. This work was partly supported by grants-in-Aid for Scientific Research from JSPS (KAKENHI), Nuclear Energy Science & Technology and Human Resource Development Project (through concentrating wisdom) from MEXT of Japan, the Emergency Budget for the Reconstruction of Northeastern Japan, and Discretionary Expense of the President of Tohoku University.
Institute of Development, Aging and Cancer, Tohoku University, Miyagi, Japan
Yusuke Urushihara
& Manabu Fukumoto
Department of Radiation Biology, Tohoku University, Miyagi, Japan
Graduate School of Dentistry, Tohoku University, Miyagi, Japan
Toshihiko Suzuki
, Yoshinaka Shimizu
, Shiori Fujita
& Hisashi Shinoda
Research Institute for Radiation Biology and Medicine, Hiroshima University, Hiroshima, Japan
Megu Ohtaki
Faculty of Medicine, Tohoku Medical and Pharmaceutical University, Miyagi, Japan
Yoshikazu Kuwahara
Institute for Disaster Reconstruction and Regeneration Research, Tohoku University, Miyagi, Japan
Masatoshi Suzuki
Tohoku Wildlife Management Center, Miyagi, Japan
Takeharu Uno
Department of Molecular Pathology, Tokyo Medical University, Tokyo, Japan
Akira Saito
Graduate School of Science and Technology, Niigata University, Niigata, Japan
Hideaki Yamashiro
Department of chemistry, Tohoku University, Miyagi, Japan
Yasushi Kino
Institute for Excellence in Higher Education, Tohoku University, Miyagi, Japan
Tsutomu Sekine
Search for Yusuke Urushihara in:
Search for Toshihiko Suzuki in:
Search for Yoshinaka Shimizu in:
Search for Megu Ohtaki in:
Search for Yoshikazu Kuwahara in:
Search for Masatoshi Suzuki in:
Search for Takeharu Uno in:
Search for Shiori Fujita in:
Search for Akira Saito in:
Search for Hideaki Yamashiro in:
Search for Yasushi Kino in:
Search for Tsutomu Sekine in:
Search for Hisashi Shinoda in:
Search for Manabu Fukumoto in:
Conceived and designed the experiments: Y.U., M.F. Performed experiments: Y.U., T.S., Y.S., Y. Kuwahara, M.S., T.U., S.F., H.Y., H.S., M.F. Analyzed the data: Y.U., M.O., S.F., A.S., Y. Kino, T.S., M.F. Contributed reagents/materials/analysis tools: M.O., A.S., Y. Kino., T.S. Wrote the paper: Y.U., M.O., M.F.
Correspondence to Manabu Fukumoto.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Figures and Tables
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Urushihara, Y., Suzuki, T., Shimizu, Y. et al. Haematological analysis of Japanese macaques (Macaca fuscata) in the area affected by the Fukushima Daiichi Nuclear Power Plant accident. Sci Rep 8, 16748 (2018) doi:10.1038/s41598-018-35104-0
Accepted: 28 October 2018
Developmental and hemocytological effects of ingesting Fukushima's radiocesium on the cabbage white butterfly Pieris rapae
Wataru Taira
, Mariko Toki
, Keisuke Kakinohana
, Ko Sakauchi
& Joji M. Otaki
Scientific Reports (2019)
Tolerance of High Oral Doses of Nonradioactive and Radioactive Caesium Chloride in the Pale Grass Blue Butterfly Zizeeria maha
Raj D. Gurung
, Wataru Taira
, Masaki Iwata
, Atsuki Hiyama
Insects (2019)
Big data in radiation biology and epidemiology; an overview of the historical and contemporary landscape of data and biomaterial archives
Paul N. Schofield
, Ulrike Kulka
, Soile Tapio
& Bernd Grosche
International Journal of Radiation Biology (2019)
Overwintering States of the Pale Grass Blue Butterfly Zizeeria maha (Lepidoptera: Lycaenidae) at the Time of the Fukushima Nuclear Accident in March 2011
Sakauchi
, Taira
, Toki
, Iraha
& Otaki
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.
Scientific Reports menu
About Scientific Reports
Guest Edited Collections
Scientific Reports Top 100 2017
Scientific Reports Top 10 2018
Editorial Board Highlights
|
CommonCrawl
|
Caring for parents: an evolutionary rationale
J. Garay1,5,
S. Számadó2,5,
Z. Varga3 &
E. Szathmáry ORCID: orcid.org/0000-0001-5227-29974,5
BMC Biology volume 16, Article number: 53 (2018) Cite this article
The evolutionary roots of human moral behavior are a key precondition to understanding human nature. Investigations usually start with a social dilemma and end up with a norm that can provide some insight into the origin of morality. We take the opposite direction by investigating whether the cultural norm that promotes helping parents and which is respected in different variants across cultures and is codified in several religions can spread through Darwinian competition.
We show with a novel demographic model that the biological rule "During your reproductive period, give some of your resources to your post-fertile parents" will spread even if the cost of support given to post-fertile grandmothers considerably decreases the demographic parameters of fertile parents but radically increases the survival rate of grandchildren. The teaching of vital cultural content is likely to have been critical in making grandparental service valuable. We name this the Fifth Rule, after the Fifth Commandment that codifies such behaviors in Christianity.
Selection for such behavior may have produced an innate moral tendency to honor parents even in situations, such as those experienced today, when the quantitative conditions would not necessarily favor the maintenance of this trait.
Darwin [1] raised the possibility that morality has an evolutionary origin. Several models rooted in evolutionary theory shed light on some basic moral issues [2,3,4,5]. In contrast, we start with a moral commandment, and investigate whether a phenotype corresponding to this moral commandment wins in a Darwinian struggle for existence, like the investigation of the conditions under which spiteful behavior will die out [6]. Here we investigate the cultural norm that promotes helping parents. We refer to this norm as the Fifth Commandment (see the supplementary information in Additional file 1). This norm has obvious links to biology, and variants of it can also be found in various cultures and religions ranging from those from the East to those from the West (Additional file 1). There is widespread evidence not just for the existence of such a norm but also for the actual support given to parents as well. The form of this support varies across cultures (emotional, instrumental, financial, etc.) and can be a function of other factors, such as the health of elderly parents. This kind of help is readily observed in different cultures: eastern, western [7,8,9,10,11] as well as hunter-gatherer societies [12]. For example, in !Kung hunter-gather community, "old people are highly valued and respected" ([12] p. 78). Moreover, elderly people in a family receive help and this help is key for their high life expectancy: "The death of a spouse and the lack of children or other close relatives to provide care may make it unlikely for a person to survive into old age" ([12] p. 84). Based on the above, we introduce the so-called Fifth Rule, which is a translation of the Fifth Commandment into biological terms and is inherent in the above interpretations: "During your reproductive period, give some of your resources to your post-fertile parents." Investigating the dynamics of this norm may shed light on the evolutionary roots of religion also [13].
During the standard life history of humans, infants grow to become parents who age into grandparents. Thus, longevities permitting, respect and help for parents become targeted at the grandparents of one's children. This truism has important consequences for the possible spread of such a behavioral trait. Behaviors can be inherited, by either genetic or cultural transmission. This inheritance assumption immediately implies that if the support given to grandparents spreads by Darwinian selection, then that ensures longer life for the parents, as their children will inherit their behavior. Like classical evolutionary game theory, we will not consider the genetic or potential cultural background of the behavior [14]. We assume that this behavior evolved when the potential for horizontal cultural transfer was negligible due to the low population density of humans [15]; thus, the success of genetically determined behaviors and the success of culturally determined behaviors were tightly linked. An adaptive phenotype will outperform its rivals on a Darwinian selection time scale, regardless of whether it is coded genetically or culturally. Here, Darwinian fitness is the average growth rate of a phenotype.
The establishment of a post-fertile period is critical for our case. Several hypotheses deal with the origin of the menopause.
Shanley and Kirkwood [16] investigate two alternative theories that might explain the origin of the menopause. The first, called the altriciality hypothesis, observes that maternal mortality increases with age. It implies a trade-off between rearing existing, still altricial children and giving birth to a new one. The second is the mother hypothesis, which states that a post-fertile grandmother can help her fertile daughter [17]. They found that neither of these ideas alone is sufficient to explain the evolution of menopause under a realistic range of life-history parameters; however, a combined model can explain it [16, 18]. Their conclusion is corroborated by other studies, both on altriciality [19] as well as on kin selection [20].
According to the grandmother hypothesis [21,22,23,24,25,26], the advantage of the post-fertile stage is that grandmothers enhance the survival of their grandchildren [22, 26, 27], by increasing either the survival rate or the fecundity of the latter [28, 29]. A third hypothesis is the embodied capital model, which emphasizes that the intergenerational transfer (IT) of skills, knowledge, and social ability needs time, and both grandmothers and grandfathers could help in the training of their grandchildren [30]. The skills and knowledge attained during childhood can increase the survival rate and fecundity for the whole adult life of the grandchildren; see Fig. 1a–c for a comparison of these alternatives. These three hypotheses do not necessary exclude each other, since the care for pre-fertile individuals includes breastfeeding, transport, feeding, and protection as well as affection and education [27, 31, 32].
The different theories. Grey arrows denote parental help, purple arrows denote forward help from grandmothers to a grandchild, and finally, yellow arrows denote the backward transfer of resources from parents to grandparents. Upward blue and downward red arrows denote the positive and negative effects from the trade-offs, respectively. a Standard life-history model with no menopause, no forward help, and no backward help. b Grandmother (purple arrow from VI to I) and altriciality (grey arrow from VI to I) hypotheses. The menopause has evolved and there is no backward help from parents to grandmothers. c Mother and embodied capital hypotheses. The menopause has evolved and there is no backward help from parents to grandmothers. d Filial piety. The menopause has evolved, there is a synergistic division of labor with backward help from parents to grandmothers, and there are no trade-offs; e Fifth rule. The menopause has evolved, there is backward help from parents to grandmothers, and there is a three-way trade-off for the parents between survival, fecundity, and helping their grandmothers. f Fifth Rule. The menopause has evolved, there is backward help from parents to grandmothers, and there is a two-way trade-off for the parents between fecundity and helping their grandmothers
All these hypotheses are aimed at explaining the evolutionary advantage of the long post-fertile life period of Homo sapiens. However, none of them assumes that there is a transfer of resources from the parents to the grandparents, thus none of them investigates the trade-off between parental reproduction or survival and the support given to grandparents. The central question is this: Will the support given to post-fertile grandmothers spread even if there is a trade-off between this support and either the fecundity or the survival rate of fertile parents?
Cyrus & Lee [3] investigated the evolution of IT from parents to grandparents in the framework of a cooperative game. They showed that filial piety can evolve through the division of labor. A fertile female transfers some of her energy to her mother, enabling the latter to redirect her efforts from inefficient foraging to the care of her grandchildren, allowing the fertile female to forage, doing so with higher efficiency than her mother. In other words, this model describes a synergistic situation where everyone does the task she is the most efficient at. However, the authors do not consider the trade-off we wish to investigate (see Fig. 1d).
We strongly concur with the statement that "Even to demonstrate, for example, that post reproductive women result in a reduction in grandchild mortality does not establish that menopause is adaptive unless it can be demonstrated that overall fitness is actually enhanced." [18] (p. 27, their emphasis). In establishing the selective advantage of caring for grandmothers, we consider the effect of the overall fitness of the family.
Since in our problem, pre-fertile, fertile, and post-fertile individuals live together in a family, we have to consider a kin demographic selection model [3, 33, 34], in which the survival and the fecundity parameters depend on the costs and benefits of intra-familiar supports. After setting up the model, we investigate whether the Fifth Rule (as a biological distillation of the Fifth Commandment; see Additional file 1) wins in a Darwinian struggle for existence. Finally, we discuss our results.
We consider a Leslie matrix model (see Table 1 for notation). Our model strictly follows the Darwinian view: fitness is determined by fecundity and the survival rate. The fecundity of a family is determined by the intergenerational help, which modifies the demographic parameters within the family. Furthermore, the carrying capacity also has an effect on survival. Thus, the survival of an individual depends on intra-familial help and on the survival probability according to the carrying capacity: our model combines these two factors.
Table 1 Model notation
We consider the following age-structured model with two sub-models. The development of a family is described by the following Leslie matrix, which contains the survival and fecundity parameters of pre-fertile and fertile individuals, and all entries depend on the level of the intra-familiar (backward) help, denoted by y:
$$ \left(\begin{array}{cccccccc}0& 0& \dots & 0& {\alpha}_{k+1}(y)& \dots & {\alpha}_{K-1}(y)& {\alpha}_K(y)\\ {}{\omega}_1(y)& 0& \dots & 0& 0& \dots & 0& 0\\ {}0& {\omega}_2(y)& \dots & \dots & & .\dots & & \\ {}0& 0& \dots & 0& & .\dots & & \\ {}& & \dots & {\omega}_k(y)& 0& \dots & & \\ {}& & \dots & 0& {\omega}_{k+1}(y)& .\dots & & \\ {}& & \dots & \dots & 0& \dots & 0& \\ {}0& 0& \dots & 0& & \dots & {\omega}_{K-1}(y)& 0\end{array}\right), $$
where ω i (y) (i = 1, …, k) denote the survival rates of children, and ω j (y) and α j (y) (j = k + 1, …, K) are the survival rate and fecundity of fertile parents, respectively. Figure 2 depicts an example. The product of the age-structured population vector and this Leslie matrix describes the dynamics of the family. The age classes of grandparents will be handled separately, since the development of the family depends on the survival rate of pre-fertile family members and the fecundity of the fertile family members. Formally, x l = ωl − 1(y)xl − 1 where ω l (y) (l = K + 1, …, H) are the survival rates of the grandparents, and x l is the number of grandparents in age class l.
The Leslie matrix. Yellow, green, and red represent the pre-fertile, fertile, and post-fertile age classes, respectively. The fecundities of the reproductive age classes are denoted by the α's and the survival rates of the age classes are represented by the ω's
In the framework of the Leslie model, it is widely accepted that the Darwinian fitness is the long-term growth rate of the phenotype (i.e., the dominant positive eigenvalue of the Leslie matrix). Surprisingly, we could not find in the literature a Darwinian explanation of this. Below, adapting our recent reasoning from [35], we propose a strictly Darwinian rationale to show that the long-term growth rate is maximized by natural selection (see Additional file 1, Section 1 for details).
What is the effect of the Fifth Rule on the entries of the above Leslie matrix? For the simplest mathematical formulation, we assume that the cost of supporting grandmothers does not depend on the age class of either parents or grandmothers. Let y ∈ [0, 1] be the cost spent on supporting grandparents. If grandmothers help in child care, the survival rates of children ω i increase with increasing y, and based on the grandmother and the mother hypotheses, ω j decreases and α j increases with increasing y, where ω i (y) (i = 1, …, k) denote the survival rates of children, and ω j (y) and α j (y) (j = k + 1, …, K) are the survival rate and fecundity of fertile parents, respectively.
Since there is a difference in intra-familiar support between families, the Leslie matrices of different family types are different. What kind of intra-familiar support ensures the highest long-term growth rate for the family? For simplicity. we denote help from grandmothers to children as forward help and help from parents to grandparents as backward help. (See Fig. 1 for a comparison of the different models.) Under well-known conditions (fulfilled in our case), the unique positive eigenvalue of the Leslie matrix is the long-term growth rate of the family; thus, we consider this eigenvalue to be the fitness [36, 37]. Formally, the fitness λ(y) is the unique positive eigenvalue of the y-dependent Leslie matrix; hence, other things being equal, families in which grandmothers are helped are competitively superior to those without this behavior.
Grandmother hypothesis
Consider the case when fertile individuals do not support grandmothers (see Fig. 1e for a general depiction of the idea). We consider the following two cases (see Additional file 1 and Fig. 3 for details):
Grandmother hypothesis without (a) and with (b) child care. Green arrows denote parental help and purple arrows denote forward help from grandparents to a grandchild
(i) If grandmothers do not help in child care but survival linearly reduces fecundity of the fertile age class, then the optimal strategy is not to spend on one's own survival to post-fertile age.
(ii) If grandmothers help in child care then the menopause is evolutionarily successful if the effect of grandchild care (\( {a}_{21}{\overline{\omega}}_2{\omega}_3 \)) on a grandchild's survival (\( {\overline{\omega}}_1 \)) is greater than the survival rate without this care, i.e., \( {a}_{21}{\overline{\omega}}_2{\omega}_3>{\overline{\omega}}_1 \) (see Table 1 for notation, where \( {\overline{\omega}}_{\ast } \) denotes averages).
In summary, the grandmother hypothesis concerns the way a female of reproductive age allocates her resources between her own survival and her own fecundity. Note that we have adopted the hypothesis that the cost spent on living to the post-fertile age reduces fecundity. Without this trade-off, living to the post-fertile age is a neutral property in the first case and a benefit in the second case.
The Fifth Rule
The Fifth Rule requires us to support our elderly (see Fig. 1f for a general depiction of the idea), which may occur when the menopause has already become evolutionarily fixed (see Additional file 1 and Fig. 4 for details). We show (Additional file 1) that the Fifth Rule (backward help) evolves when\( {a}_{21}{\omega}_2\left(b-{\omega}_3\right)>{\overline{\omega}}_1 \). This condition is satisfied if, for example, the efficiency of the support given to post-fertile parents is sufficiently large compared to the basic post-fertile survival rate (if the latter were high, than grandmothers would be around even if they were not helped).
The Fifth Rule. There is forward help in child care and backward help given to grandparents. Green arrows denote parental help, purple arrows denote forward help from grandparents to a grandchild, and finally, red arrows denote the backward transfer of resources from parents to grandparents
Of course, the coevolution of two traits—a long life after the menopause and an effective Fifth Rule—is also possible. The analytical study presented in Additional file 1 is based on two conditions. First, the development of the Fifth Rule is conditional on the existence of the menopause, since one can help a grandmother only if she is alive. (Based on this conditionality, in Additional file 1, we suppose that traits s and y determine both the increase of the grandmother's survival probability and the decrease of fecundity in multiplicative form.) Consequently, the rarity of an effective Fifth Rule is hardly surprising. Second, if a fertile mother were to give all her resources to help the survival of her mother, her fecundity would drop to zero. In Additional file 1, in terms of a fitness landscape, we show that, if the fitnessλ(s, y) has a strict global maximum (s∗, y∗) (e.g., in our case, if λ(s, y)) were strictly concave), then there exists a unique evolutionarily optimal behavior (s∗, y∗); hence, the species evolves into this state.
The Fifth Rule will spread if the cost of the support given to post-fertile grandmothers slightly decreases the demographic parameters of fertile parents, but sufficiently increases the survival rate of grandchildren. However, in general, there is a threshold over which support given to grandmothers has no evolutionary advantage. If the cost of support given to post-fertile grandmothers only decreases the demographic parameters of the family but offers no increase in the survival rate of the grandchildren, then the Fifth Rule has no evolutionary advantage. The mother hypothesis and embodied capital model should imply that grandmothers increase the survival rate of their children and that of grandchildren during their lives. Thus, if these ideas also work in human evolution, then it is even easier for the Fifth Rule to evolve (see Additional file 1).
To investigate the effects of different cost–benefit parameters on the evolvability of IT, we constructed a general example, which we analyzed numerically (see Additional file 1 and Fig. 5 for details). Our conclusions from the model are as follows. IT evolves most readily when grandparental help increases both the survival and the number of offspring [22, 26, 27] (Fig. 6, Additional file 1: Figures S1–S3). Linear cost and benefit functions do not favor the evolution of IT (Additional file 1: Figures S1, S4, and S6). Conversely, convex benefit and concave cost functions promote the evolution of IT (Additional file 1: Figures S2, S3, S5, and S7). It is possible to find cost parameters (c, d) for which IT evolves even if the efficacy of parental transfer and grandparental help (a21 and b, respectively) is low (Additional file 1: Figures S2 and S3). Conversely, it is possible to find (high) a21 and b parameters for which IT evolves even if it imposes a high cost on the survival of the parents or on the number of offspring (d and c, respectively, see Additional file 1: Figures S1 and S2).
General two-age-class model. Green arrows denote parental help, purple arrows denote forward help from grandparents to a grandchild, and finally, red arrows denote the backward transfer of resources from parents to grandparents
Numerical example for a 2 × 2 Leslie matrix (see Additional file 1 for details). a Maximum family long-term growth rate (fitness), b optimal level of backward help (y*), c average number of offspring at y*, and d offspring survival at y* all as a function of b (effectiveness of backward help) and a21 (effectiveness of forward help on offspring survival). Parameters: α2 = 6, ω1 = 0.45, ω2 = 0.62, ω3 = 0.25, d = 0.3, h = 1, c = 0.2, 0.6, 1, and a12 = 10
Since we are dealing with family issues, the natural conceptual framework is that of kin selection. Although some works incorporate demography into inclusive fitness analyses, and also consider intergenerational resource transfers (e.g., Johnstone & Cant 2010), our model also involves, in addition, an unusual loop from a parent to a grandparent to a grandchild. There are different contributions to a female's fitness from the three stages of her life history, as girl, mother, and grandmother. Our demographic model can account for these complications in a straightforward manner. Our analysis applies to grandfathers as well, provided the menopause in grandmothers constrains the realized fertility of the former in a similar way.
We have shown that the biological version of the Fifth commandment called the "Fifth rule" can spread by means of natural selection under fairly general conditions. Our argument presented in the paper focuses on grandmothers. However, helping elderly parents is not constrained to females. Does our argument hold for males as well? To understand the argument, it is important to differentiate between fertility or loss of it (i.e., the male menopause or andropause) and reproductive success in general. It is well established that testosterone levels decrease with age in men [38, 39] and that this decrease is very often paralleled with depression, nervousness, decreased libido, erectile dysfunction, poor concentration, and memory [40]. The collection of these symptoms is called the male menopause [40]. Both the usefulness of the term and the idea that these symptoms can be traced back to one cause are hotly debated [40,41,42,43]. It is clear that the decline in fertility is not as sharp and not as general as in females [40, 41]. A recent study concludes that "the existence of the clinical and biochemical syndrome known as [late-onset hypogonadism] has been confirmed, but its incidence appears to be notably lower than originally estimated" [42]. However, we think that the existence of the male menopause (a sharp decline of fertility) is not crucial to our argument. Even if a grandfather's fertility remains unchanged, his reproductive success is expected to drop for several reasons as fertility is just one (necessary) component of reproductive success:
The menopause of the grandmother denies the grandfather the most obvious reproductive opportunity.
It is probable that grandfathers will not be as successful as young males in the competition for younger females (i.e., they will enjoy fewer mating opportunities).
Even if the grandfather is successful in mating, the child might not be counted as his child, thus it requires no further resources from the grandfather, which in turn implies the grandfather will be free to provide care for his (official) grandchildren.
Older men are also more prone to cuckoldry [44]. Bribiescas [44] concludes in a review on male reproductive senescence that "while the physiological potential for fathering offspring remains intact well into the later stages of a male's life, somatic degradation that results in a decline in attractiveness, sexual motivation, energy availability, and a compromised ability to acquire resources may indeed result in a form of male reproductive senescence that severely restricts male fitness at older ages" (p. 138). Overall, most elderly males will be forced out of reproduction, either because of the loss of fertility or because of the loss of mating opportunities; hence, our arguments as presented for grandmothers apply to grandfathers as well.
The useful contribution of the grandfather to his grandchildren (especially grandsons) might manifest itself later in childhood, due to, among others, the teaching of survival practices (e.g., successful hunting). For example, amongst the hunter-gatherers of the eastern boreal forests of North America "older males acquire, harbor, and are reservoirs for enormous scales of spatial information on both resources and mobility" ([45],abstract).
We demonstrate that an essential part of the Fifth Commandment (supporting the elderly) can confer a selective advantage under the right conditions; hence some kind of evolutionary moral sense might be genetically endowed. This holds if grandparents have a positive effect on the growth rate of their family. However, this is not necessarily true nowadays [46]. It is very well possible that this support is rooted in past human evolution. The Darwinian success of the Fifth Rule cannot completely explain the present-day Fifth Commandment. Human moral rules, although rooted in Darwinian evolution, are more than what that theory supports. It seems that the main difference is that moral commandments are unconditional rules, while in Darwinian evolution there must be a selective condition determining whether a behavior is adaptive or not.
Darwin C. The Descent of Man and Seletion in Relation to Sex. London: John Murray; 1871.
Boehm C. Moral origins: The evolution of virtue, altruism, and shame. New York: Basic Books; 2012.
Ridley M. The origins of virtue. London: Penguin; 1997.
Harms W, Skyrms B: Evolution of moral norms. 2008.
Ridley M. The origins of virtue: Penguin UK; 1997.
Garay J, Csiszar V, Mori TF. Under multilevel selection: "When shall you be neither spiteful nor envious?". J Theor Biol. 2014;340:73–84.
Lowenstein A, Daatland SO. Filial norms and family support in a comparative cross-national context: evidence from the OASIS study. Ageing Soc. 2006;26(2):203–23.
Mureşan C, Hărăguş P-T. Norms of Filial Obligation and Actual Support to Parents in Central and Eastern Europe. Rom J Popul Stud. 2015;9(2):49–82.
Qi X. Filial Obligation in Contemporary China: Evolution of the Culture-System. J Theory Soc Behav. 2015;45(1):141–61.
Silverstein M, Gans D, Yang FM. Intergenerational support to aging parents: The role of norms and needs. J Fam Issues. 2006;27(8):1068–84.
Schans D. 'They ought to do this for their parents': perceptions of filial obligations among immigrant and Dutch older people. Ageing Soc. 2008;28(1):49–66.
Biesele M, Howell N. The old people give you life': Aging among! Kung hunter-gatherers. Other ways of growing old; 1981. p. 77–98.
Wilson DS. Darwin's cathedral: Evolution, religion, and the nature of society. Chicago: University of Chicago Press; 2010.
Smith JM. Evolution and the Theory of Games. Cambridge: Cambridge University Press; 1982.
Stiner MC, Munro ND, Surovell TA, Tchernov E, Bar-Yosef O. Paleolithic population growth pulses evidenced by small animal exploitation. Science. 1999;283(5399):190–4.
Shanley DP, Kirkwood TBL. Evolution of the human menopause. BioEssays. 2001;23(3):282–7.
Madrigal L, Melendez-Obando M. Grandmothers' longevity negatively affects daughters' fertility. Am J Phys Anthropol. 2008;136(2):223–9.
Kirkwood TBL, Shanley DP. The connections between general and reproductive senescence and the evolutionary basis of menopause. In: Weinstein M, Oconnor K, editors. Reproductive Aging, vol. 1204; 2010. p. 21–9.
Rogers AR. WHY MENOPAUSE. Evol Ecol. 1993;7(4):406–20.
Hill K, Hurtado AM. The evolution of premature reproductive senescence and menopause in human females. Hum Nat. 1991;2(4):313–50.
Hawkes K. Human longevity - The grandmother effect. Nature. 2004;428(6979):128–9.
Sear R, Mace R, IA MG. Maternal grandmothers improve nutritional status and survival of children in rural Gambia. Proc R Soc B Biol Sci. 2000;267(1453):1641–7.
Williams GC. Pleiotropy, natural selection, and the evolution of senescence. Sci SAGE KE. 2001;2001(1):13.
Kim PS, Coxworth JE, Hawkes K. Increased longevity evolves from grandmothering. Proc R Soc B Biol Sci. 2012;279(1749):4880–4.
Hawkes K, O'Connell JF, Blurton Jones NG. Hardworking hadza grandmothers. In: Comparative socioecology: the behavioural ecology of humans and other mammals. Standen V, Foley RA. eds. Boston: Blackwell Scientific Publications; 1989:341–66.
Lahdenpera M, Lummaa V, Helle S, Tremblay M, Russell AF. Fitness benefits of prolonged post-reproductive lifespan in women. Nature. 2004;428(6979):178–81.
Shanley DP, Sear R, Mace R, Kirkwood TBL. Testing evolutionary theories of menopause. Proc R Soc B Biol Sci. 2007;274(1628):2943–9.
Bereczkei T. Kinship network, direct childcare, and fertility among Hungarians and Gypsies. Evol Hum Behav. 1998;19(5):283–98.
Pavard S, Sibert A, Heyer E. The effect of maternal care on child survival: a demographic, genetic, and evolutionary perspective. Evolution. 2007;61(5):1153–61.
Gurven M, Kaplan H. Beyond the grandmother hypothesis: evolutionary models of human longevity. The cultural context of aging: worldwide perspectives. 2008;3:53–66.
Hames R, Draper P. Women's work, child care, and helpers-at-the-nest in a hunter-gatherer society. Hum Nat. 2004;15(4):319–41.
Sear R, Mace R. Who keeps children alive? A review of the effects of kin on child survival. Evol Hum Behav. 2008;29(1):1–18.
Garay J, Varga Z, Gamez M, Cabello T. Sib cannibalism can be adaptive for kin. Ecol Model. 2016;334:51–9.
Pavard S, Branger F. Effect of maternal and grandmaternal care on population dynamics and human life-history evolution: A matrix projection model. Theor Popul Biol. 2012;82(4):364–76.
Mace R. Evolutionary ecology of human life history. Anim Behav. 2000;59(1):1–10.
Korpelainen H. Human life histories and the demographic transition: a case study from Finland, 1870–1949. Am J Phys Anthropol. 2003;120(4):384–90.
Caswell H. Matrix population models. Sunderland: Wiley Online Library; 2001.
Ferrini RL, Barrett-Connor E. Sex hormones and age: a cross-sectional study of testosterone and estradiol and their bioavailable fractions in community-dwelling men. Am J Epidemiol. 1998;147(8):750–4.
Bremner WJ, Vitiello MV, Prinz PN. Loss of circadian rhythmicity in blood testosterone levels with aging in normal men. J Clin Endocrinol Metab. 1983;56(6):1278–81.
Gould DC, Jacobs HS, Petty R. The male menopause—does it exist? ForAgainst. BMJ. 2000;320(7238):858–61.
Matsumoto AM. Andropause: clinical implications of the decline in serum testosterone levels with aging in men. J Gerontol Ser A Biol Med Sci. 2002;57(2):M76–99.
Marshall BL. Climacteric Redux? (Re) medicalizing the Male Menopause. Men Masculinities. 2007;9(4):509–29.
Jakiel G, Makara-Studzińska M, Ciebiera M, Słabuszewska-Jóźwiak A. Andropause–state of the art 2015 and review of selected aspects. Przeglad menopauzalny. 2015;14(1):1.
Bribiescas RG. On the evolution, life history, and proximate mechanisms of human male reproductive senescence. Evol Anthropol. 2006;15(4):132–41.
Lovis WA, Donahue RE. Space, Information and Knowledge: Ethnocartography and North American Boreal Forest Hunter-Gatherers, Information and its Role in Hunter-Gatherer Bands, Santa Fe (Ideas Debates and Perspectives 5); 2011. p. 59–84.
Lahdenperä M, Russell AF, Tremblay M, Lummaa V. Selection on menopause in two premodern human populations: no evidence for the mother hypothesis. Evolution. 2011;65(2):476–89.
We would like to thank István Scheuring and Tamás Czárán for helpful comments.
JG and SS were supported by OTKA grant K 108974. ES, JG, and SS acknowledge support from Gazdaságfejlesztési és Innovációs Operatív Program (GINOP) 2.3.2-15-2016-00057 (Az evolúció fényben: elvek és megoldások). ES was supported by the European Research Council (ERC) under the European Community's Seventh Framework Program (FP7/2007-2013) under ERC grant agreement 294,332 (Project EvoEvo). SS was supported by the ERC under the European Union's Horizon 2020 research and innovation program (grant agreement 648693).
MTA-ELTE Theoretical Biology and Evolutionary Ecology Research Group and Department of Plant Systematics, Ecology and Theoretical Biology, L. Eötvös University, Pázmány P. sétány 1/C, Budapest, H-1117, Hungary
J. Garay
RECENS "Lendület" Research Group, MTA Centre for Social Science, Tóth Kálmán u. 4, Budapest, H-1097, Hungary
S. Számadó
Department of Mathematics, Szent István University, Páter K. u. 1, Gödöllő, H-2103, Hungary
Z. Varga
Parmenides Center for the Conceptual Foundations of Science, Kirchplatz 1, 82049, Pullach/Munich, Germany
E. Szathmáry
MTA Centre for Ecological Research, Evolutionary Systems Research Group, Klebelsberg Kuno utca 3, Tihany, 8237, Hungary
J. Garay, S. Számadó & E. Szathmáry
JG and ES designed the study. JG and ZV analyzed the model. ES, ZV, and SS helped write the article. SS calculated the numerical examples and made the figures. All authors read and approved the final manuscript.
Correspondence to E. Szathmáry.
Competing interest
Supporting material. Supplementary discussion, supplementary methods, Figures. S1–S7, and Tables. S1 and S2 (PDF 1152 kb)
Garay, J., Számadó, S., Varga, Z. et al. Caring for parents: an evolutionary rationale. BMC Biol 16, 53 (2018). https://doi.org/10.1186/s12915-018-0519-2
5th commandment
Intra-familiar resource transfer
Kin demography
In the Light of Evolution
|
CommonCrawl
|
What is the intuition for ECDSA?
I understand DH and ElGamal and RSA encryption/signatures. But when I look at ECDSA (or plain DSA), it seems like the formulas are just pulled out of thin air. I can verify that the algebra used in the verification formula does in fact work out, but I have no clue why forgeries are hard, why these formulas were used, why something simpler wouldn't work, and where the inventor(s) got all this stuff. Can someone explain the intuition behind ECDSA?
signature dsa
FixeeFixee
$\begingroup$ Asking for "intuition" makes it hard to know in advance exactly what kind of answer you are hoping for. Could you perhaps relate your question to exactly what you find unclear e.g. in the Wikipedia article en.wikipedia.org/wiki/Digital_Signature_Algorithm? $\endgroup$ – Henrick Hellström Mar 10 '16 at 2:12
$\begingroup$ @HenrickHellström The wikipedia article doesn't give any answers to the questions I gave above: why are forgeries hard? Where did the formulas come from? Why won't a simpler scheme work? What idea(s) did the inventor(s) use to come up with this? Wikipedia just gives what most sources give: a rote recapitulation of the formulas with no explanation. $\endgroup$ – Fixee Mar 10 '16 at 2:16
$\begingroup$ I'd suggest to remove/split away the parts on the how it works and why its hard as multiple questions in one will less likely be answered. $\endgroup$ – lathspell Mar 10 '16 at 6:31
It is possible to view DSA/ECDSA as an identification scheme (like Schnorr) but with a different variant of Fiat-Shamir. This gives the intuition that you are perhaps looking for. I will include an excerpt from Intro to Modern Cryptography 2nd edition (Section 12.5.2) which gives this explanation:
Begin Excerpt -- Section 12.5.2 DSA and ECDSA
The Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) are based on the discrete-logarithm problem in different classes of groups. They have been around in some form since 1991, and are both included in the current Digital Signature Standard (DSS) issued by NIST.
Both schemes follow a common template and can be viewed as being constructed from an underlying identification scheme (see the previous section). Let $ \mathbb{G}$ be a cyclic group of prime order $q$ with generator $g$. Consider the following identification scheme in which the prover's private key is $x$ and public key is $( \mathbb{G}, q, g, y)$ with $y=g^x$:
The prover chooses uniform $k \in \mathbb{Z}_{q}^*$ and sends $I:=g^k$.
The verifier chooses and sends uniform $\alpha, r \in \mathbb{Z}_{q}$ as the challenge.
The prover sends $s:=[k^{-1} \cdot (\alpha+xr) \bmod q]$ as the response.
The verifier accepts if $s\neq 0$ and $g^{\alpha s^{-1}} \!\cdot y^{r s^{-1}}=I$.
Note $s \neq 0$ unless $\alpha=-xr \bmod q$, which occurs with negligible probability. Assuming $s \neq 0$, the inverse $s^{-1} \bmod q$ exists and $$ g^{\alpha s^{-1}} \! \cdot y^{r s^{-1}} = g^{\alpha s^{-1}} \! \cdot g^{xrs^{-1}} = g^{(\alpha+xr) \cdot s^{-1}} = g^{(\alpha+xr) \cdot k \cdot (\alpha+xr)^{-1}} = I. $$ We thus see that correctness holds with all but negligible probability.
One can show that this identification scheme is secure if the discrete-logarithm problem is hard in the group. We merely sketch the argument, assuming familiarity with the results of the previous section (i.e., Schnorr and identification schemes). First of all, transcripts of honest executions can be simulated: to do so, simply choose uniform $\alpha, r \in \mathbb{Z}_{q}$ and $s \in \mathbb{Z}_{q}^*$, and then set $I:=g^{\alpha s^{-1}} \!\cdot y^{r s^{-1}}$. (This no longer gives a perfect simulation, but it is close enough.) Moreover, if an attacker outputs an initial message $I$ for which it can give correct responses $s_1, s_2 \in \mathbb{Z}_{q}^*$ to distinct challenges $(\alpha, r_1), (\alpha, r_2)$ then $$ g^{\alpha s_1^{-1}} \!\cdot y^{r_1 s_1^{-1}} = I = g^{\alpha s_2^{-1}} \!\cdot y^{r_2 s_2^{-1}}, $$ and so $g^{\alpha (s_1^{-1} - s_2^{-1})} = h^{r_1s_1^{-1} - r_2s_2^{-1}}$ and $\log_g h$ can be computed as in the previous section. The same holds if the attacker gives correct responses to distinct challenges $(\alpha_1, r), (\alpha_2, r)$.
The DSA/ECDSA signature schemes are constructed by ``collapsing'' the above identification scheme into a non-interactive algorithm run by the signer. In contrast to the Fiat--Shamir transform, however, the transformation here is carried out as follows:
Set $\alpha:=H(m)$, where $m$ is the message being signed and $H$ is a cryptographic hash function.
Set $r:=F(I)$ for a (specified) function $F: \mathbb{G}\rightarrow \mathbb{Z}_{q}$. Here, $F$ is a ``simple'' function that is not intended to act like a random oracle.
The function $F$ depends on the group $ \mathbb{G}$, which in turn depends on the scheme. In DSA, $ \mathbb{G}$ is taken to be an order-$q$ subgroup of $\mathbb{Z}_{p}^*$, for $p$ prime, and $F(I) =[I \bmod q]$. In ECDSA, $ \mathbb{G}$ is an order-$q$ subgroup of an elliptic-curve group $E({\mathbb Z}_p)$, for $p$ prime. (ECDSA also allows elliptic curves over other fields.) Any element of such a group can be represented as a pair $(x, y) \in {\mathbb Z}_p \times {\mathbb Z}_p$. The function $F$ in this case is defined as $F((x,y)) = [x \bmod q]$.
DSA and ECDSA - abstractly: Let $\cal G$ be a group -generator algorithm.
$\sf gen$: on input $1^n$, run ${\cal G}(1^n)$ to obtain $( \mathbb{G}, q, g)$. Choose uniform $x \in \mathbb{Z}_{q}$ and set $y:=g^x$. The public key is $(\mathbb{G}, q, g, y)$ and the private key is $x$.
As part of key generation, two functions $H: \{0,1\}^* \rightarrow \mathbb{Z}_{q}$ and $F: \mathbb{G}\rightarrow \mathbb{Z}_{q}$ are specified, but we leave this implicit.
$\sf sign$: on input the private key $x$ and a message $m \in \{0,1\}^*$, choose uniform $k \in \mathbb{Z}_{q}^*$ and set $r:=F(g^k)$. Then compute $s:= [k^{-1} \cdot (H(m) + xr) \bmod q]$. (If $r=0$ or $s=0$ then start again with a fresh choice of $k$.) Output the signature \mbox{$(r, s)$}.
$\sf vrfy$: on input a public key $(\mathbb{G}, q, g, y)$, a message $m \in \{0,1\}^*$, and a signature $(r,s)$ with $r, s \neq 0 \bmod q$, output 1 if and only if $$ r = F\left(g^{H(m) \cdot s^{-1}} y^{r \cdot s^{-1}}\right). $$
Assuming hardness of the discrete-logarithm problem, DSA and ECDSA can be proven secure if $H$ and $F$ are modeled as random oracles. As we have discussed above, however, while the random-oracle model may be reasonable for $H$, it is not an appropriate model for $F$. No proofs of security are known for the specific choices of $F$ in the standard. Nevertheless, DSA and ECDSA have been used and studied for decades without any attacks being found.
Proper generation of $k$. The DSA/ECDSA schemes specify that the signer should choose a uniform $k \in \mathbb{Z}_{q}^*$ when computing a signature. Failure to choose $k$ properly (e.g., due to poor random-number generation) can lead to catastrophic results. For starters, if an attacker can predict the value of $k$ used to compute a signature $(r, s)$ on a message $m$, then they can compute the signer's private key. This is true because $s = k^{-1} \cdot (H(m)+xr) \bmod q$, and if $k$ is known then the only unknown is the private key $x$.
Even if $k$ is unpredictable, the attacker can compute the signer's private key if the same $k$ is ever used to generate two different signatures. The attacker can easily tell when this happens because then $r$ repeats as well. Say $(r, s_1)$ and $(r, s_2)$ are signatures on messages $m_1$ and $m_2$, respectively. Then \begin{eqnarray*} s_1 & = & k^{-1} \cdot (H(m_1)+ x r) \bmod q \\ s_2 & = & k^{-1} \cdot (H(m_2) + x r) \bmod q. \end{eqnarray*} Subtracting gives $s_1-s_2 = k^{-1} \left(H(m_1)-H(m_2)\right) \bmod q$, from which $k$ can be computed; given $k$, the attacker can determine the private key $x$ as in the previous paragraph. This very attack was used by hackers to extract the master private key from the Sony PlayStation (PS3) in 2010.
$\begingroup$ This is very helpful, but without the underlying material (ie, Schnorr signatures and identification scheme discussion) it still requires some work. But perhaps the most useful thing about your answer is that you show that ECDSA is the result of an evolution (from FS, FFS, Schnorr, DSA, then ECDSA) and that helps me know what legwork I need to undertake. Thanks Yehuda! $\endgroup$ – Fixee Mar 10 '16 at 20:01
EC-Schnorr or ECDSA, there is no important difference. The addition of $k$ (Schnorr) or the muliplication with $k^{-1}$ an can be seen as encryption of message and private key. That's why it's secure.
puzzlepalace
Franz ScheererFranz Scheerer
Not the answer you're looking for? Browse other questions tagged signature dsa or ask your own question.
Justification for the way s is computed in DSA?
What is the signature scheme with the fastest batch verification protocol for multiple signers?
Elliptic curves for ECDSA
What curve and key length to use in ECDSA?
ECDSA for encryption
BER or DER X9.62 for ECDSA signature
preferred method for generating k for DSA / ECDSA
How should ECDSA handle the null hash?
What is customizable in ECDSA signature and verification?
What is the difference between ECDSA and EdDSA
What is the recommended minimum key length for ECDSA signature
|
CommonCrawl
|
DNA Fingerprint Bands Correlated with the Egg Weight Performance of Hens
Huang, Haigen;Meng, Anming;Qi, Shunzhang;Gong, Guifen;Li, Junying;Wang, Hongwei;Chou, Baoqin 1
https://doi.org/10.5713/ajas.1999.1 PDF
Beijing White Chickens laying larger eggs and smaller eggs were respectively used as parental individuals for mating to produce the F1 progeny and then the F1 progeny individuals mated to produce 125 individuals of the F2 progeny. Three bands associated with the egg weight performance were identified from DNA fingerprints of the 125 individuals generated with a bovine minisatellite probe BM6.5B. The simple linear correlation analysis showed that the coefficients of correlation between frequencies of the three bands (DB1, DB2 and DB3) and egg weights were -0.6, -0.6 and 0.9, respectively.
Effects of Sire Birth Weight on Calving Difficulty and Maternal Performance of Their Female Progeny
Paputungan, U.;Makarechian, M.;Liu, M.F. 5
Weight records from birth to calving and calving scores of 407 two-year old heifers and weights of their offspring from birth to one year of age were used to study the effects of sire birth weight on maternal traits of their female progeny. The heifers ($G_1$) were Ihe progeny of 81 sires ($G_0$) and were classified into three classes based on their sires' birth weights (High, Medium and Low). The heifers were from three distinct breed-groups and were mated to bulls with medium birth weights within each breed-group to produce the second generation ($G_2$). The data were analyzed using a covariance model. The female progeny of high birth-weight sires were heavier from birth to calving than those sired by medium and low birth-weight bulls. The effect of sire birth weight on calving difficulty scores of their female progeny was not significant. Grand progeny (G2) of low birth-weight sires were lighter at birth than those from high birth-weight sires (p < 0.05) but they did not differ significantly in weaning and yearling weights from the other two Grand progeny groups. The results indicated that using low birth weight sires would not result in an increase in the incidence of dystocia among their female progeny calving at two-year of age and would not have an adverse effect On weaning and yearling weights of their grand progeny.
Gene Gun-Mediated Human Erythropoietin Gene Expression in Primary Cultured Oviduct Cells from Laying Hens
Ochiai, H.;Park, H.M.;Sasaki, R.;Okumura, J.;Muramatsu, T. 9
Factors affecting gene gun-mediated expression of the human erythropoietin gene were investigated in primary cultured oviduct cells from laying hens. The human erythropoietin gene was transfected by a gene gun method at $1.25{\mu}g$ per dish, and cultured in a synthetic serum-free medium for 72 hrs. The concentration of human erythropoietin mRNA was determined by RNA : RNA solution hybridization. In experiment 1, the effect of changing the shooting pressure of DNA-coated microparticles with nitrogen gas was tested at 20 and $60kgf/cm^2$. The results showed that the erythropoietin mRNA concentration was significantly higher at 60 than $20kgf/cm^2$. In experiment 2, the effects of supplementing the medium with fetal calf serum at 10%, and raising the shooting pressure from 60 to $80kgf/cm^2$ on the cell number and erythropoietin gene expression were examined. Although supplementation with fetal calf serum significantly increased the cell numbes compared with no supplemented controls (p < 0.05), erythropoietin mRNA concentration per $10^3$ cells was not affected. Raising the shooting pressure from 60 to $80kgf/cm^2$ did not affect either of the parameters, In experiment 3, the effect of supplementing ascorbate 2-phosphate at 0.5 mM was tested. The results indicated that the ascorbate supplementation significantly increased the cell number (p < 0.05), and tended to increase erythropoietin mRNA concentration (p < 0.1). Thus, for human erythropoietin gene expression by using the gene gun method, shooting pressure with nitrogen gas should be sufficient at $60kgf/cm^2$ and supplementation with ascorbate phosphate would be useful to enhance not only the cell proliferation but also erythropoietin gene expression.
Sex Linked Developmental Rate Differences in Murrah Buffalo (Bubalus bubalis) Embryos Fertilized and Cultured In Vitro
Sood, S.K.;Chauhan, M.S.;Tomer, O.S. 15
https://doi.org/10.5713/ajas.1999.15 PDF
The aim of the present study was to determine the effect of paternal sex chromosome on early development of buffalo embryos fertilized and cultured in vitro. Embryos were produced in vitro from abattoir derived buffalo oocytes. The cleaved embryos were cocultured with buffalo oviductal epithelial cells and evaluated on day 7 under the phase contrast microscope to classify development. The embryos which reached the morula/blastocyst stage were fast developing, the embryos which were at 16-32 cell stage were medium developing and the embryos below 16 cell stage were slow developing. The embryos which showed some fragmentation in the blastomeres or degenerated blastomeres, were degenerating. Sex of emberyos (n=159) was determined using PCR for amplification of a male specific BRY. 1 (301 bp) and a buffalo specific satellite DNA (216 bp) fragments. The results thus obtained show that 1) X and Y chromosome bearing sperms fertilize oocytes to give almost equal numbers of cleaved XX and XY embryos, 2) male embryos develop faster than female embryos to reach advanced stage and 3) degeneration of buffalo embryos is not linked with the paternal sex chromosome. We suggest that faster development of males is due to differential processing of X and Y chromosome within the zygote for its activation and / or differential expression of genes on paternal sex chromosome sex chromosome during development of buffalo embryos fertilized and cultured in vitro which may be attributed to a combination of genetic and environmental factors.
Effect of Cycloheximide on Bovine Oocyte Nuclear Progression and Sperm Head Transformation after Fertilization In Vitro
Liu, L.;Zhang, H.W.;Qian, J.F.;Fujihara, N. 22
Bovine oocytes with compact and complete cumulus cells were cultured in 6 groups for up to 24h in TCM199 buffered with 25 mmol/1 HEPES and supplemented with 10% FCS, 1 mg/ml $17{\beta}$-estradiol, 20 IU/ml hCG. Half of the oocytes at each group cultured in the presence of $25{\mu}g/ml$ cycloheximide at different times during maturation (0, 6, 12, 18, 20, 22 h) were fixed at 24 h of maturation to examine the nuclear progression. The rests of them were inseminated with frozen-thawed spermatozoa in medium BO with 10 mg/ml BSA and 10 mg/ml heparin and fixed after additional 18-20 h culture to evaluate the sperm head transformation. When a protein synthesis inhibitor was added at the onset of the maturation, the oocytes were prevented to proceed GVBD. A few of the oocytes (16%) were able to be penetrated and sperm head decondensation was inhibited either. Addition of cycloheximide after 6-12 h of culture resulted in an increasing percentage of GVBCD (more than 80%), but the oocytes became arrested in M-I (69.2%). More than half of the oocytes was penetrated with a decondensing sperm head. Formation of male pronucleus was first obtained at 12 h of culture in the presence of cycloheximide. When cycloheximide was added from 18 h of culture onwards, nuclear progression to M-II was increasingly restored (80.4-85.5%). The proportion of male and female pronuclear formation increased from 17.9% to 46.2%. It is concluded that protein synthesis is necessary not only for GVBD and development from M-I to M-II, but also for sperm head decendensation and male pronuclear formation in bovine oocytes.
Effect of Additives, Storage Temperature and Regional Difference of Ensiling on the Fermentation Quality of Napier Grass (Pennisetum purpureum Schum.) Silage
Tamada, J.;Yokota, H.;Ohshima, M.;Tamaki, M. 28
The effects of addition of celulases (A cremonium cellulolyticus and Trichoderma viride, CE), a commercial inoculum containing lactic acid bacteria (Lactobacillus casei, LAB), fermented green juice (macerated napier grass with water was incubated anaerobically with 2% glucose for 1 day, FGJ) and glucose (G), and regional difference of ensiling on napier grass (Pennisetum purpureum Schum.) silage were studied by using 900 ml laboratory glass bottle silos under 30 and $40^{\circ}C$ storage conditions in 1995 and 1996. Experiment 1 was carried out to compare the addition of CE, LAB, FGJ and the combinations. Silages were stored for 45 days after ensiling. Experiment 2 studied the effects of applications of CE, LAB, FGJ and G. Experiment 3 was carried out using the similar additives as experiment 2 except for LAB. Silages were stored for 60 days in the experiments 2 and 3. Experiments 1 and 2 were done in Nagoya, and experiment 3 in Okinawa. Sugar addition through CE or G improved the fermentation quality in all the experiments, which resulted in a greater decrease in the pH value and an increased level of lactic acid, while butyric acid contents increased under $30^{\circ}C$ storage condition in CE addition. LAB and FGJ additions hardly affected the silage fermentation quality without additional fermentable carbohydrate. But the combination of LAB, FGJ and glucidic addition (CE and G) improved the fermentation quality. The effect of the regional difference of ensiling between temperate (Nagoya; $35^{\circ}$ N) and subtropical (Okinawa; $26.5^{\circ}$ N) zones on silage fermentation quality was not shown in the present study.
Effects of Kemzyme, Phytase and Yeast Supplementation on the Growth Performance and Pollution Reduction of Broiler Chicks
Piao, X.S.;Han, In K.;Kim, J.H.;Cho, W.T.;Kim, Y.H.;Liang, Chao 36
An experiment was conducted to evaluate the effects of dietary Kemzyme, phytase, yeast and a combination of Kemzyme, phytase and yeast (KPY) supplementation on the growth performance, nutrient utilizability and the nutrients excretion in broiler chicks. Experimental diets based on corn-soybean meal were supplemented with 0.05% Kemzyme, 0.1% phytase, 0.1% yeast, 0.25% KPY (0.05% Kemzyme + 0.1% phytase + 0.1% yeast), respectively. Each treatment had six replicates of six male birds each. A total of 180 Arbor Ares broiler chicks were fed these diets for a period of six weeks. Numerically better body weight gain was found in chicks fed Kemzyme, phytase, yeast of KPY supplemented diet. Feed conversion rate was improved by the addition of KPY compared with control group (p < 0.05). Mortality was successfully reduced by supplementation of enzymes, yeast or a combination of enzymes and yeast. The excretions of N and P were considerably reduced by supplementation of dietary enzymes, yeast or combination of all three substances, especially for KPY fed group in starting period. The nutrient excretions in the finishing period were not significantly different. It appeared that the use of Kemzyme, phytase and yeast simultaneously had an additive effect on growth rate and nutrient excretion.
Prediction of Carcass Fat, Protein, and Energy Content from Carcass Dry Matter and Specific Gravity of Broilers
Wiernusz, C.J.;Park, B.C.;Teeter, R.G. 42
Three experiments were conducted to develop and test equations for predicting carcass composition. In the first study using 52 d-old Cobb ${\times}$ Cobb male broilers, twenty four carcasses were selected from 325 processed birds based upon visual appraisal for abdominal fat (low, medium, high) and assayed for specific gravity (SG), dry matter (DM), fat, protein, and ash. In experiment 2, 120 birds were fed rations containing 2 caloric densities (2,880 and $3,200kcal\;ME_n/kg$ diet) and assayed as described above on weeks 2,3,4,5, and 6. Carcass fat was elevated (p < 0.05) with increased caloric density. In both studies predictive variables were significantly correlated with chemically determined carcass fat, protein, and ash contents. Pooled across the 2 studies, data were used to form SG, DM, and or age based equations for predicting carcass composition. Results were tested in experiment 3, where 576 birds reared to 49-d consumed either 2,880, 3,200, or $3,574kcal\;ME_n/kg$ diet while exposed to constant $24^{\circ}C$ or cycling 24 to $35^{\circ}C$ ambient temperatures. Both dietary and environmental effects impacted (p < 0.05) carcass composition. The fat content analyzed chemically was enhanced from 12.4 to 15.7%, and predicted fat was also elevated from 13.4 to 14.8% with increasing caloric density. Heat distress reduced (p < 0.05) analyzed carcass protein (18.9 vs 18.3%) and predicted protein (18.2 vs 17.5%). Predicted equation values for carcass fat, protein, ash, and energy were correlated with the chemically analyzed values at r=0.96, 0.77, 0.86, and 0.79, respectively. Results suggest that prediction equations based on DM and SG may be used to estimate carcass fat, protein, ash, and energy contents of broilers consuming diets that differ in caloric density (2,800 to $3,574kcal\;ME_n/kg$) and for broilers exposed to either constant ($24^{\circ}C$) or cycling high (24 to $35^{\circ}C$) ambient temperatures during 49-d rearing period tested in the present study.
Effects of High Dietary Calcium and Fat Levels on the Performance, Intestinal pH, Body Composition and Size and Weight of Organs in Growing Chickens
Shafey, T.M. 49
The effect of fat supplementation of high calcium (Ca) diets on the performance, intestinal pH, body composition and size and weight of organs in growing chickens were investigated in two experiments. Growing chickens tolerated a high dietary level of Ca (22.5 vs 12.1 g/kg) in the presence of 6.3 g/kg of available phosphorus without any significant effect on performance. Intestinal pH was significantly increased by the addition of excess Ca and fat which probably created the right pH for the formation of insoluble Ca soaps. Excess dietary Ca increased carcass linoleic acid concentration at the expense of palmitic and stearic acid contents, whilst the addition of sunflower oil (80 g/kg diet) to the diet increased carcass linoleic acid concentration at the expense of palmitic acid content of the carcass. Intestinal and visceral organ size and weight were not influenced by excess Ca or fat. However, there was a non significant increase in the intestinal dry weight per unit of length caused by excess dietary Ca. It was concluded that excess dietary Ca of 22.5 g/kg did not significantly influence the performance of meat chickens. However, excess Ca increased intestinal pH and altered carcass fatty acid composition. Fat supplementation did not alter intestinal pH with high Ca diets. Excess dietary fat altered carcass fatty acid composition and reduced protein content. Intestinal and visceral organ size and weights were not influenced by excess dietary levels of Ca of fat.
Production Characteristics of Nili-Ravi Buffaloes
Khan, R.N.;Akhtar, S. 56
Production and reproduction data of 47 Nili-Ravi buffaloes (162 records) were analyzed with regression techniques. Average lactation milk yield was $2,020.04{\pm}44.59$ liters, lactation length $277.42{\pm}5.70$ d and calving interval $467.10{\pm}11.58$ d. The ranges for these parameters respectively were : 609-3591 lit, 122-614 d and 228-982 d. Year of calving and lactation length had significant effect on total milk yield (p < 0.01), whereas other factors such as month of calving, lactation number and calving interval had no effect on total lactation milk yield. Year of calving had influenced significantly other traits (p < .01) such as calving interval and lactations completed. This indicated considerable environment role in buffalo productivity. Effect of month of calving on total lactation milk yield and other traits was however, found to be non-significant. Nili-Ravi buffaloes produced maximum milk during their first three lactations as compared to subsequent lactations. Regression model explained 40 percent variation in total lactation milk yield due to factors analyzed : animal (dam), year and month of calving lactation length and calving interval.
Productive and Reproductive Performance of Kajli and Lohi Ewes
Nawaz, M.;Khan, M.A.;Qureshi, M.A.;Rasool, E. 61
Data from 22837 lambings of Lohi and Kajli ewes from 1962 through 1994 were used to analyse productive and reproductive traits and wool production, Overall litter size at birth averaged 1.33 being 1.45 for Lohi and 1.21 for Kajli ewes. The corresponding values at weaning were 1.23, 1.32 and 1.14, respectively. Litter size was consistently lowest for one year old, with a substantial increase at two, three and four years of ewe age and marginal increase thereafter, Ewes lambing in spring weaned 0.08 more lambs per parturition than ewes lambed in Autumn (p<0,01). Lamb birth weights were affected by ewe breed (p<0.01) and increased with ewe age. Overall lamb weaning weight (120 d) of 17993 lambs was 20.3 kg. Weaning weight was affected by breed, sire, year of birth, sex, rearing rank and weaning age (p<0.01). The highest mean weaning weight was 21.9 kg for Lohi lambs followed by Kajli lambs (18.8 kg), Lambs from Kajli ewes were 9% heavier at birth but 14% lighter at weaning. Twin born lambs were 18% lighter at birth and 13% at weaning than single born lambs. Male lambs were 3% heavier at birth and 4.5% heavier at weaning than female lambs. Overall annual mean wool production was 2,64 kg, Kajli ewes were heavier at breeding than Lohi ewes (i.e. 46.2 vs 44.8 kg). Lohi ewes being 3% less body weight produced 38% more wool and 18% more litter weaning weight than Kajli ewes, When average weight of lamb weaned per ewe weaning lambs was adjusted for ewe average metabolic body size, Lohi ewes were most efficient (i.e. arbitrary assigned value of 100) compared to Kajli ewes achieving only 83% of Lohi level.
Calf Rearing Systems in Smallholder Dairy Farming Areas of Zimbabwe : A Diadnostic Study of the Nharira-Lancashire Area
Mandibaya, W.;Mutisi, C.;Hamudikuwanda, H. 68
A formal survey was carried out in Nharira-Lancashire areas located in Chivhu to assess the calf rearing systems practiced in smallholder dairy farming areas of Zimbabwe. A total of 47 farmers, collectively owning 305 cows and 194 calves of various breeds, participated in the survey. All the farmers allowed their calves to suckle their dams all day to obtain colostrum. The colostrums intake period was significantly (p < 0.05) shorter (5.2 vs 4.1 days) in the small scale commercial area (SSCA) compared to communal area (CA). Milk was first sold to the Nharira-Lancashire Milk Centre a day after the colostrum intake period ended. Most of the CA (91.3%) and SSCA (77.8%) farmers penned their cows and calves together at night during the colostrum intake period. Thereafter the calves were penned separate from their dams. After colostrum intake, two types of calf suckling systems were practised; twice a day suckling and twice a day then changed to once a day suckling. In both systems, suckling was allowed for 30 minutes after the cows had been hand milked. There was no significant (p < 0.05) difference in the mean weaning age of calves between the CA and SSCA (5.8 vs 5.4 months). The most common weaning method was through separation of the calves from the dams. The limitaitions to calf production in Chivhu were the prohibitively high costs of calf meals, poor feed resources during the dry season, a general lack of knowledge on calf rearing diseases and inappropriate calf housing.
Application of Molecular Biology to Rumen Microbes -Review-
Kobayashi, Y.;Onodera, R. 77
Molecular biological techniques that recently developed, have made it possible to realize some of new attempts in the research field of rumen microbiology. Those are 1) cloning of genes from rumen microorganisms mainly in E. coli, 2) transformation of rumen bacteria and 3) ecological analysis with nonculturing methods. Most of the cloned genes are for polysaccharidase enzymes such as endoglucanase, xylanase, amylase, chitinase and others, and the cloning rendered gene structural analyses by sequencing and also characterization of the translated products through easier purification. Electrotransformation of Butyrivibrio fibrisolvens and Prevotella ruminicola have been made toward the direction for obtaining more fibrolytic, acid-tolerant, depoisoning or essential amino acids-producing rumen bacterium. These primarily required stable and efficient gene transfer systems. Some vectors, constructed from native plasmids of rumen bacteria, are now available for successful gene introduction and expression in those rumen bacterial species. Probing and PCR-based methodologies have also been developed for detecting specific bacterial species and even strains. These are much due to accumulation of rRNA gene sequences of rumen microbes in databases. Although optimized analytical conditions are essential to reliable and reproducible estimation of the targeted microbes, the methods permit long term storage of frozen samples, providing us ease in analytical work as compared with a traditional method based on culturing. Moreover, the methods seem to be promissing for obtaining taxonomic and evolutionary information on all the rumen microbes, whether they are culturable or not.
Industrial Applications of Rumen Microbes - Review -
Cheng, K.J.;Lee, S.S.;Bae, H.D.;Ha, J.K. 84
The rumen microbial ecosystem is coming to be recognized as a rich alternative source of genes for industrially useful enzymes. Recent advances in biotechnology are enabling development of novel strategies for effective delivery and enhancement of these gene products. One particularly promising avenue for industrial application of rumen enzymes is as feed supplements for nonruminant and ruminant animal diets. Increasing competition in the livestock industry has forced producers to cut costs by adopting new technologies aimed at increasing production efficiency. Cellulases, xylanases, ${\beta}$-glucanases, pectinases, and phytases have been shown to increase the efficiency of feedstuff utilization (e.g., degradation of cellulose, xylan and ${\beta}$-glucan) and to decrease pollutants (e.g., phytic acid). These enzymes enhance the availability of feed components to the animal and eliminate some of their naturally occurring antinutritional effects. In the past, the cost and inconvenience of enzyme production and delivery has hampered widespread application of this promising technology. Over the last decade, however, advances in recombinant DNA technology have significantly improved microbial production systems. Novel strategies for delivery and enhancement of genes and gene products from the rumen include expression of seed proteins, oleosin proteins in canola and transgenic animals secreting digestive enzymes from the pancreas. Thus, the biotechnological framework is in place to achieve substantial improvements in animal production through enzyme supplementation. On the other hand, the rumen ecosystem provides ongoing enrichment and natural selection of microbes adapted to specific conditions, and represents a virtually untapped resource of novel products such as enzymes, detoxificants and antibiotics.
Recent Advances in Biotechnology of Rumen Bacteria - Review -
Forsberg, C.W.;Egbosimba, E.E.;MacLellan, S. 93
Recent advances in the biotechnology of ruminal bacteria have been made in the characterization of enzymes involved in plant cell wall digestion, the exploration of mechanisms of gene transfer in ruminal bacteria, and the development of vectors. These studies have culminated in the introduction and expression of heterologous glucanase and xylanase genes and a fluoroacetate dehalogenase gene in ruminal bacteria. These recent studies show the strategy of gene and vector construction necessary for the production of genetically engineered bacteria for introduction into ruminants. Molecular research on proteolytic turnover of protein in the rumen is in its infancy, but a novel protein high in essential amino acids designed for intracellular expression in ruminal organisms provides an interesting approach for improving the amino acid profile of ruminal organisms.
The Role of Rumen Fungi in Fibre Digestion - Review -
Ho, Y.W.;Abdullah, N. 104
https://doi.org/10.5713/ajas.1999.104 PDF
Since the anaerobic rumen fungi were discovered in the rumen of a sheep over two decades ago, they have been reported in a wide range of herbivores fud on high fibre diets. The extensive colonisation and degradation of fibrous plant tissues by the fungi suggest that they have a role in fibre digestion. All rumen fungi studied so far are fibrolytic. They produce a range of hydrolytic enzymes, which include the cellulases, hemicellulases, pectinases and phenolic acid esterases, to enable them to invade and degrade the lignocellulosic plant tissues. Although rumen fungi may not seem to be essential to general rumen function since they may be absent in animals fed on low fibre diets, they, nevertheless, could contribute to the digestion of high-fibre poor-quality forages.
The Role of Protozoa in Feed Digestion - Review -
Jouany, J.P.;Ushida, K. 113
Protozoa can represent as half of the total rumen microbial biomass. Around 10 genera are generally present on the same time in the rumen. Based on nutritional aspects they can be divided in large entodiniomorphs, small entodiniomorphs and isotrichs. Their feeding behaviour and their enzymatic activities differ considerably. Many comparisons between defaunated and refaunated animals were carried out during the last two decades to explain the global role of protozoa at the ruminal or animal levels. It is now generally considered that a presence of an abundant protozoal population in the rumen has a negative effect on the amino acid (AA) supply to ruminants and contribute to generate more methane but, nevertheless, protozoa must not be considered as parasites. They are useful for numerous reasons. They stabilise rumen pH when animal are fed diets rich in available starch and decrease the redox potential of rumen digesta. Because cellulolytic bacteria are very sensitive to these two parameters, protozoa indirectly stimulate the bacterial cellulolytic activity and supply their own activity to the rumen microbial ecosystem. They could also supply some peptides in the rumen medium which can stimulate the growth of the rumen microbiota, but this aspect has never been considered in the past. Their high contribution to ammonia production has bad consequences on the urinary nitrogen excretion but means also that less dietary soluble nitrogen is necessary when protozoa are present. Changes in the molar percentages of VFA and gases from rumen fermentations are not so large that they could alter significantly the use of energy by animals. The answer of animals to elimination of protozoa (defaunation) depends on the balance between energy and protein needs of animals and the supply of nutrients supplied through the diet. Defaunation is useful in case of diets short in protein nitrogen but not limited in energy supply for animals having high needs of proteins.
Molecular Analysis of Archaea, Bacteria and Eucarya Communities in the Rumen - Review-
White, B.A.;Cann, I.K.O.;Kocherginskaya, S.A.;Aminov, R.I.;Thill, L.A.;Mackie, R.I.;Onodera, R. 129
If rumen bacteria can be manipulated to utilize nutrients (i.e., ammonia and plant cell wall carbohydrates) more completely and efficiently, the need for protein supplementation can be reduced or eliminated and the digestion of fiber in forage or agricultural residue-based diets could be enhanced. However, these approaches require a complete and accurate description of the rumen community, as well as methods for the rapid and accurate detection of microbial density, diversity, phylogeny, and gene expression. Molecular ecology techniques based on small subunit (SSU) rRNA sequences, nucleic acid probes and the polymerase chain reaction (PCR) can potentially provide a complete description of the microbial ecology of the rumen of ruminant animals. The development of these molecular tools will result in greater insights into community structure and activity of gut microbial ecosystems in relation to functional interactions between different bacteria, spatial and temporal relationships between different microorganisms and between microorganisms and reed panicles. Molecular approaches based on SSU rRNA serve to evaluate the presence of specific sequences in the community and provide a link between knowledge obtained from pure cultures and the microbial populations they represent in the rumen. The successful development and application of these methods promises to provide opportunities to link distribution and identity of gastrointestinal microbes in their natural environment with their genetic potential and in situ activities. The use of approaches for assessing pupulation dynamics as well as for assessing community functionality will result in an increased understanding and a complete description of the gastrointestinal communities of production animals fed under different dietary regimes, and lead to new strategies for improving animal growth.
Role of Peptides in Rumen Microbial Metabolism - Review -
Wallace, R.J.;Atasoglu, C.;Newbold, C.J. 139
Peptides are formed in the rumen as the result of microbial proteinase activity. The predominant type of activity is cysteine ptoteinase, but others, such as serine proteinases, are also present. Many species of protozoa, bacteria and fungi are involved in ptoteolysis; large animal-to-animal variability is found when proteinase activities in different animals are compared. The peptides formed from proteolysis are broken down to amino acids by peptidases. Different peptides are broken down at different rates, depending on their chemical composition and particularly their N-terminal structure. Indeed, chemical addition to the N-terminus of small peptides, such as by acetylation, causes the peptides to become stable to breakdown by the rumen microbial population; the microorganisms do not appear to adapt to hydrolyse acetylated peptides even after several weeks exposure to dietary acetylated peptides, and the amino acids present in acetylated peptides are absorbed from the small intestine. The amino acids present in some acetylated peptides remain available in nutritional trials with rats, but the nutritive value of the whole amino acid mixture is decreased by acetylation. The genus Prevotella is responsible for most of the catabolic peptidase activity in the rumen, via its dipeptidyl peptidase activities, which release dipeptides rather than free amino acids from the N-terminus of oligopeptides. Studies with dipeptidyl peptidase mutants of Prevotella suggest that it may be possible to slow the rate of peptide hydrolysis by the mixed rumen microbial population by inhibiting dipeptidyl peptidase activity of Prevotella or the rate of peptide uptake by this genus. Peptides and amino acids also stimulate the growth of rumen microorganisms, and are necessary for optimal growth rates of many species growing on tapidly fermented substrates; in rich medium, most bacteria use pre-formed amino acids for more than 90% of their amino acid requirements. Cellulolytic species are exceptional in this respect, but they still incorporate about half of their cell N from pre-formed amino acids in rich medium. However, the extent to which bacteria use ammonia vs. peptides and amino acids for protein synthesis also depends on the concentrations of each, such that preformed amino acids and peptides are probably used to a much lesser extent in vivo than many in vitro experiments might suggest.
|
CommonCrawl
|
Regions of ryanodine receptors that influence activation by the dihydropyridine receptor β1a subunit
Robyn T. Rebbeck1,
Hermia Willemse2,
Linda Groom3,
Marco G. Casarotto2,
Philip G. Board2,
Nicole A. Beard4,
Robert T. Dirksen3 and
Angela F. Dulhunty2Email author
© Rebbeck et al. 2015
Received: 5 April 2015
Although excitation-contraction (EC) coupling in skeletal muscle relies on physical activation of the skeletal ryanodine receptor (RyR1) Ca2+ release channel by dihydropyridine receptors (DHPRs), the activation pathway between the DHPR and RyR1 remains unknown. However, the pathway includes the DHPR β1a subunit which is integral to EC coupling and activates RyR1. In this manuscript, we explore the isoform specificity of β1a activation of RyRs and the β1a binding site on RyR1.
We used lipid bilayers to measure single channel currents and whole cell patch clamp to measure L-type Ca2+ currents and Ca2+ transients in myotubes.
We demonstrate that both skeletal RyR1 and cardiac RyR2 channels in phospholipid bilayers are activated by 10–100 nM of the β1a subunit. Activation of RyR2 by 10 nM β1a was less than that of RyR1, suggesting a reduced affinity of RyR2 for β1a. A reduction in activation was also observed when 10 nM β1a was added to the alternatively spliced (ASI(−)) isoform of RyR1, which lacks ASI residues (A3481-Q3485). It is notable that the equivalent region of RyR2 also lacks four of five ASI residues, suggesting that the absence of these residues may contribute to the reduced 10 nM β1a activation observed for both RyR2 and ASI(−)RyR1 compared to ASI(+)RyR1. We also investigated the influence of a polybasic motif (PBM) of RyR1 (K3495KKRRDGR3502) that is located immediately downstream from the ASI residues and has been implicated in EC coupling. We confirmed that neutralizing the basic residues in the PBM (RyR1 K-Q) results in an ~50 % reduction in Ca2+ transient amplitude following expression in RyR1-null (dyspedic) myotubes and that the PBM is also required for β1a subunit activation of RyR1 channels in lipid bilayers. These results suggest that the removal of β1a subunit interaction with the PBM in RyR1 could contribute directly to ~50 % of the Ca2+ release generated during skeletal EC coupling.
We conclude that the β1a subunit likely binds to a region that is largely conserved in RyR1 and RyR2 and that this region is influenced by the presence of the ASI residues and the PBM in RyR1.
Excitation-contraction coupling
Dihydropyridine receptor β1a subunit
Ryanodine receptor isoforms
Cardiac muscle
Contraction in skeletal and cardiac muscle depends on Ca2+ release from the intracellular sarcoplasmic reticulum (SR) Ca2+ store through ryanodine receptor (RyR) Ca2+ release channels embedded in the SR membrane. This Ca2+ release is crucial to excitation-contraction (EC) coupling. During EC coupling, cardiac RyRs (RyR2) are activated by an influx of extracellular Ca2+ through depolarization-activated dihydropyridine receptor (DHPR) L-type channels located in the surface and transverse-tubule membranes. In contrast, EC coupling in skeletal muscle is independent of extracellular Ca2+, apparently requiring a physical interaction between skeletal isoforms of the RyR (RyR1) and DHPR [1, 2]. However, despite exhaustive investigation, the physical components of this interaction still remain unclear [3, 4] and are investigated in this manuscript.
It is well established that the skeletal isoforms of both the membrane spanning α1S subunit and the cytoplasmic β1a subunit of the DHPR heteropentamer are essential for skeletal EC coupling [5, 6]. The α1S subunit contains the voltage sensor for EC coupling [7, 8] and the "critical" region for skeletal EC coupling (residues L720-764/5) in its intracellular II-III loop [9–11]. The β1a subunit is responsible for the targeting of the DHPR to the triad and assembly into tetrads that are closely aligned with RyR1 in the SR [12–14]. There is also evidence that the β1a subunit also plays an active role in the EC coupling process. The β1a subunit directly activates RyR1 channels incorporated into lipid bilayers and enhances voltage-activated Ca2+ release in skeletal muscle fibers [5, 15, 16]. The C-terminal region of β1a (V490-M524) supports β1a binding to RyR1 in vitro and influences voltage-induced Ca2+ release in mouse myotubes [15, 17, 18]. A peptide corresponding to the same residues mimics full length β1a subunit activation of RyR1 channels in lipid bilayers [15] and a truncated peptide of the same region enhances voltage-induced Ca2+ release to the same degree as the full length β1a subunit in intact adult mouse muscle fibers [16, 19]. Furthermore, overexpression of a β subunit interacting protein, Rem, in adult mouse skeletal muscle fibers was recently shown to reduce voltage-induced Ca2+ transients by ~65 % without substantially altering α1S subunit membrane targeting or intramembrane gating charge movement or SR Ca2+ store content [20]. This suggests that the DHPR-RyR1 interaction may be uncoupled by virtue of direct interference of β1a subunit. Residues in RyR1 that influence binding to the β1a subunit have also been identified. The M3201-W3661 fragment of RyR1 binds to β1a and the strength of binding is substantially reduced by replacing the six basic residues in a polybasic motif (PBM; K3495KKRRDGR3502) with glutamines [18]. Replacement of the same six residues with glutamines in the full-length RyR protein substantially reduces depolarization-dependent Ca2+ release [18]. The in vitro studies indicate a high-affinity interaction between the isolated RyR1 and the β1a subunit that is influenced by the PBM. However, the basic residues unlikely bind directly to the hydrophobic residues in the β1a C-terminus, although they could contribute to the overall conformation of the binding domain [21]. Similarly, it is unlikely that basic residue binding to the hydrophobic residues could contribute to EC coupling, although both basic residues and hydrophobic residues in the β1a C-terminus influence EC coupling [16, 19].
The fact that skeletal DHPR and RyR isoforms are critical for skeletal-type EC coupling [22–26] suggests that isoform-specific regions of these proteins enable unique interactions in skeletal muscle. Also, in the context of isoform dependence, we reported that an alternatively spliced region of RyR1 (A3481-Q3485), located close to the PBM, is significant in setting the gain of EC coupling [27]. It is notable that RyR2 lacks the equivalent sequence to the ASI residues in ASI(+)RyR1 and, in this respect, more closely resembles the ASI(−)RyR1 isoform. Therefore, here we examined the RyR isoform dependence of the in vitro interaction with the β1a subunit. We use the RyR isoforms as tools to explore regions of the RyR1 that influence its interaction with the C-tail of the β1a subunit. Interactions between RyR2 and the cardiac β subunit were not examined as they have no physiological significance, and there is little sequence homology between the C-terminal tails of the cardiac and skeletal β isoforms [12–14].
The results indicate that while β1a activates RyR1 and RyR2 isolated from the skeletal muscle and the heart and activates recombinant ASI(−)RyR1 and ASI(+)RyR1, β1a activation of RyR2 and AS1(−)RyR1 requires higher β1a concentrations than that required to activate RyR1 or ASI(+)RyR1. In addition, we show that neutralization of the basic residues in the RyR1 PBM abolishes β1a activation of RyR1 in lipid bilayers and confirm that this also markedly reduces voltage-dependent Ca2+ release in skeletal myotubes. Together, the results reinforce the conclusion that β1a binding to RyR1 contributes to EC coupling and suggest that the region encompassing the adjacent ASI residues and PBM is a determinant of β1a binding to and regulation of RyR1.
The work was approved by The Australian National University Animal Experimentation Ethics Committee (Australian Capital Territory, Australia) and by the University Committee on Animal Resources at the University of Rochester (New York, USA).
Preparation of RyR1 ASI (−) and K-Q cDNA
The ASI(−)RyR1 variant was introduced into rabbit RyR1 cDNA (accession #X15750) using two-step site-directed mutagenesis as described previously [28]. The K-Q mutant (K3495KKRRGDR3502) was similarly introduced into a rabbit RyR1 cDNA by two-step site-directed mutagenesis in the following manner: using a BsiWI/BamHI subclone of RyR1, residues R3498Q and R3499Q were introduced via mutagenesis to create a double mutation (R3498Q/R3499Q). This mutant was used as a template to introduce a third mutation R3502Q. Finally, glutamine substitutions for residues K3495, K3496, and K3497 were introduced into the triple mutated plasmid to generate the PBM mutant Q3495QQQQGDQ3502 (K-Q mutant). The entire PCR-modified cDNA portion of the BswiWI/BamHI mutant subclone was confirmed by sequence analysis and then cloned back into full-length RyR1.
Preparation of SR vesicles
Skeletal muscle SR vesicles were prepared from back and leg muscles (fast twitch skeletal muscle) from New Zealand white rabbits [29–31] and cardiac SR vesicles collected from sheep hearts [32, 33]. Vesicles were stored at −70 °C.
Transfection and preparation of microsomal protein
Microsomal vesicles were collected from HEK293 transfected with recombinant rabbit RyR1 ASI(+), ASI(−), or K-Q RyR1 mutant cDNAs in mammalian expression vector (pCIneo) as described previously [28] with minor modifications. HEK cells were grown in 175-mm2 flasks at 37 °C, 5 % CO2 in 10 % fetal calf serum in MEM. At 50–60 % confluence, cells were transfected with 80 μg cDNA in a phosphate buffer solution (125 mM CaCl2, 70 mM NaH2PO4, 140 mM NaCl, 76 mM HEPES, 7 mM Na2HPO4, pH 7.2) using a calcium phosphate precipitation method. Cells were maintained for 48 h and then harvested in phosphate buffer (137 mM NaCl; 7 mM Na2HPO4; 2.5 mM NaH2PO4.H2O; and, 2 mM EGTA, pH 7.4). The pellet was resuspended in homogenizing buffer (300 mM sucrose, 5 mM imidazole, 1× complete EDTA-free protease inhibitor cocktail, pH 7.4), homogenized and centrifuged at 11,600 × g for 20 min. The resulting pellet was resuspended in homogenizing buffer, further homogenized and centrifuged at 91,943 × g for 2 h. The pellet was resuspended in homogenizing buffer, homogenized, and then briefly sonicated. The microsomal mixture was separated into 15 μL aliquots and stored at −70 °C.
Preparation and injection of dyspedic myotubes
Primary cultures of myotubes were obtained from skeletal myoblasts isolated from newborn RyR1-null (dyspedic) mice as previously described [34, 35]. Four to 6 days after initial plating of myoblasts, nuclei of dyspedic myotubes were microinjected with cDNAs encoding CD8 (0.1 μg/μl) and the appropriate RyR1 expression plasmid (0.5 μg/μl) [36]. Expressing myotubes were identified 2–4 days after cDNA microinjection by incubation with CD8 antibody beads (Dynabeads, Dynal USA). All animals were housed in a pathogen-free area at the University of Rochester and experiments performed in accordance with procedures reviewed and approved by the local University Committees on Animal Resources.
Preparation of β1a subunit
The β1a protein was expressed in transformed Escherichia coli BL21(DE3) and purified as described previously [15]. The proteins were dialyzed against a phosphate buffer (50 mM Na3PO4, 300 mM NaCl, pH 8) and stored at −70 °C.
Single-channel recording and analysis
Channels from cardiac, skeletal, or HEK293 microsomal vesicles were incorporated into lipid bilayers with solutions containing (mM): cis (20 CsCl, 230 CsCH3O3S, 10 TES, and 1 CaCl2) and trans (20 CsCl, 30 CsCH3O3S, 10 mM TES, and 1 CaCl2), pH 7.2. After RyR incorporation, 200 mM CsMS was added to the trans solution for symmetrical [Cs+]. BAPTA was added to the cis solution as determined with a Ca2+ electrode to achieve 10 μM Ca2+, and 2 mM ATP was added. Bilayer potential, Vcis-Vtrans, was switched between −40 and +40 mV. Channel activity under each condition was analyzed over 180 s using the program Channel 2 (developed by P. W. Gage and M. Smith). Threshold levels for channel opening were set to exclude baseline noise at ~20 % of the maximum single-channel conductance and open probability (P o ), mean open time (T o ), and closed open time (T c ) measured. Dwell-time distributions for each channel were obtained using the log-bin method [37–39]. Event frequency (probability) was plotted against equally spaced bins (on a logarithmic scale) for open or closed durations (seven bins per decade). The time constants are indicated by the frequency peaks. The area under each peak indicates the fraction of single-channel open or closed events falling into each time constant component.
Simultaneous measurements of macroscopic Ca2+ currents and transients in myotubes
The whole-cell patch clamp technique was used to simultaneously measure voltage-gated L-type Ca2+ currents (L currents) and Ca2+ transients in expressing myotubes [36]. Patch clamp experiments were conducted using an external solution consisting of (in millimolar): 145 TEA-Cl, 10 CaCl2, and 10 HEPES, pH 7.4 with TEA-OH and an internal pipette solution consisting of (in millimolar): 145 Cs-aspartate, 10 CsCl, 0.1 Cs2-EGTA, 1.2 MgCl2, 5 Mg-ATP, 0.2 K5-fluo-4, and 10 HEPES, pH 7.4 with CsOH. Peak L-current magnitude was normalized to cell capacitance (pA/pF), plotted as a function of the membrane potential (I-V curves in Fig. 6c), and fitted according to:
$$ I = {G}_{\mathbf{max}}*\ \left({V}_m - {V}_{\mathbf{rev}}\right)\ /\ \left(1 + \exp \left[\left({V}_{\mathbf{G1}/\mathbf{2}} - {V}_m\right)\ /\ {k}_G\right]\right) $$
where G max is the maximal L-channel conductance, V m is test potential, V rev is the L-channel reversal potential, V G1/2 is the potential for half-maximal activation of G max, and k G is a slope factor. Relative changes in fluo-4 fluorescence (ΔF/F) were measured at the end of each 200-ms depolarization, plotted as a function of the membrane potential, and fitted according to:
$$ \varDelta F/F = \left(\varDelta F\ /\ {F}_{\mathbf{max}}\right)/\left\{1 + \exp\ \left[\left({V}_{\mathbf{F1}/\mathbf{2}}-{V}_m\right)\ /\ {k}_F\right]\right\} $$
where ΔF/F max is the maximal fluorescence change, V F1/2 is the potential for half-maximal activation of ΔF/F max, and k F is a slope factor. The bell-shaped voltage dependence of ΔF/F measurements obtained in RyR1 K-Q mutant-expressing myotubes were fitted according to the following equation:
$$ \varDelta F/F = \left({\left(\varDelta F\ /\ F\right)}_{\mathbf{max}}\left(\left({V}_m - {V}_{\mathbf{rev}}\right)/k^{\prime}\right)\right)/\left(1 + \exp\ \left(\left({V}_{\mathbf{F1}/\mathbf{2}}-{V}_m\right)\ /\ {k}_F\right)\right) $$
where (ΔF/F)max, V m , V rev, V F1/2, and k F have their usual meanings. The additional variable k′ is a scaling factor that varies with (ΔF/F)max [40, 41]. The maximal rate of voltage-gated SR Ca2+ release was approximated from the peak of the first derivative of the fluo-4 fluorescence trace (dF/dt) elicited during the test depolarization at 30 mV. Pooled current-voltage (I-V) and fluorescence-voltage (ΔF/F-V) data in Table 1 are expressed as mean ± SEM.
Parameters of fitted I-V and [ΔF/F]-V curves
I-V data
[∆F/F]-V data
G max (nS/nF)
k (mV)
V half (mV)
Vrev (mV)
(∆F/F)max
WT RyR1 (n = 12)
264 ± 16
5.4 ± 0.4
10.5 ± 1.8
71 ± 1.8
-4.7 ± 1.6
RyR1 K-Q (n = 10)
201 ± 17*
1.6 ± 0.3*
4.1 ± 04.
Maximal L-channel conductance (G max), the potential for half-maximal G max (V half), slope factor (k), and reversal potential (V rev). Values presented as mean ± SEM for I-V data presented in Fig. 6c. Maximal Ca2+ transient [(ΔF/F)max], the potential at half maximal fluorescence (V half) and slope factor (k). Values presented as mean ± SE for [ΔF/F]-V data presented in Fig. 6d
*p < 0.05 vs WT RyR1-expressing dyspedic myotubes
Immunofluorescence labeling
RyR-null (dyspedic) myotubes expressing either WT RyR or RyR K-Q mutant that were plated on glass coverslips were fixed and immunostained with a mouse monoclonal anti-RyR antibody (34C, 1:10; Developmental Studies Hybridoma Bank) and a sheep polyclonal anti-DHPR antibody (1:200; Upstate Biotechnology) overnight at 4 °C as previously described [41]. On the following day, coverslips were washed with PBS three times each for 5 min and then incubated for 1 h at room temperature in blocking buffer containing a 1:500 dilution of Alexa Fluor 488–labeled donkey anti-mouse IgG (Molecular Probes) and 1:500 dilution of rhodamine-labeled donkey anti-sheep IgG (Jackson ImmunoResearch Laboratories Inc.) and washed with PBS (three times for 5 min each). Coverslips were mounted on glass slides and images obtained using a Nikon Eclipse-C1 confocal microscope (Nikon Instruments Inc.) and a 40× oil objective. All confocal images were sampled at a spatial resolution (pixel diameter) of 100 nm.
Average data are given as the mean ± SEM. Statistical significance was evaluated by a paired or unpaired two-way Student's t-test or analysis of variance (ANOVA) with Fisher's post hoc test, as appropriate. The numbers of observations (N) are given in the figure legends. To reduce the effects of variability in control single-channel activity parameters (P oC , T cC , T oC ) and to evaluate parameters after β1a subunit (P oB , T cB , T oB ) addition, data were expressed as the difference between the logarithmic values, i.e., log10 rel P o = log10 P oB –log10 P oC . The difference from control was assessed with a paired t-test applied to log10 P oC and log10 P oB . Variance in P o parameter values was assessed with an unpaired t-test. A p value of <0.05 was considered significant.
Ability of the β1a subunit to activate different RyR isoforms
The β1a subunit activates RyR1 and RyR2 channels
As we reported previously [15], when added to the cytoplasmic cis chamber, the full-length β1a subunit increases the activity of native RyR1 channels incorporated into planar lipid bilayers (Fig. 1a). Both 10- or 100-nM concentrations of β1a subunit maximally activate RyR1 channels in the presence of 10 μM Ca2+ and 2 mM Na2 ATP [15]. The records in Fig. 1b show that RyR2 channel activity also increases upon cytoplasmic exposure to 10 nM β1a subunit, but in contrast to RyR1, greater activation of RyR2 is observed with 100 nM β1a. On average, addition of 10 nM or 100 nM β1a to the cis solution significantly increased the relative P o of RyR2 by 1.8-fold and 2.6-fold, respectively (Fig. 2a, left). Data is presented as average relative P o which is the average of the logarithm to the base 10 of P o of each individual channel in the presence of β1a, relative to the logarithm of the P o of its internal control activity measured before application of β1a. Use of relative P o eliminates any effect of the normal variability between individual RyR channels [39, 42]. The logarithm is used to reveal the extent of variation of the effects of β1a. The average of the P o parameter values are also shown to indicate absolute level of each parameter (Fig. 2a–c, right), however, the relative changes should be used as the most accurate indicator of effects of β1a on RyRs. The effects on RyR2 channel activity were similar at +40 and −40 mV (relative P o with 10 nM β1a increasing by ~2-fold at +40 mV and ~1.7-fold at −40 mV), and these values were combined in the average data in Fig. 2a. It has been established that the activation of RyR1 by β1a is maximal at 10 nM and does not increase between 10 and 1000 nM [15]. Therefore, the reduced efficacy of 10 nM β1a on RyR2 suggests that affinity of RyR2 for β1a is lower than that of RyR1.
β1a subunit increases RyR1 and RyR2 channel activity in lipid bilayers. a and b Three second (3 s) traces of representative activity from native (a) RyR1 or (b) RyR2 channels recorded at a test potential of +40 mV. Openings are shown as upward inflections from the closed (c) state to the maximum open (o) level. Results are shown before (top panel; control, cis 10 μM [Ca2+] and 2 mM ATP) and after addition of 10 nM β1a subunit (middle panel) and then 100 nM β1a subunit (bottom panel) to the cis chamber. Open probability (P o ) is shown at the right hand corner of each trace
β1a subunit increases RyR1 and RyR2 channel activity in lipid bilayers. Single-channel gating parameters of RyR1 and RyR2 in response to 10 or 100 nM β1a subunit. a (left) Average relative P o (log10 rel P o ) is the average of the differences between the logarithm of P o following addition of β1a subunit (log10 P oB ) and the logarithm of the control P o (log10 P oC ), where P oC was measured before β1a subunit addition. b (left) Average relative mean open time (log10 rel T c ). c (left) Average relative mean closed time (log10 rel T o ) were calculated in the same way as the average log10 rel P o (above). a–c (right) The average single-channel parameter values are shown right of the corresponding relative values. a–c Single-channel parameters were calculated from ~180 s of channel activity (at +40 and −40 mV). Data are shown for 0 nM β1a (black bar), 10 nM β1a subunit (dark shade bar), and 100 nM β1a subunit (light shade bar), when examined. Error bars indicate ± SEM., n = 7−15 experiments/bar. *p < 0.05 vs control determined using paired (left) or un-paired (right) Student's t-test, # p < 0.05 vs 10 nM β1a subunit with RyR2 determined by ANOVA
The action of β1a on single-channel gating parameters (Fig. 2b, c) reflected the changes in P o (Fig. 2a, left and right). Both RyR1 and RyR2 activity increased with 10 and 100 nM β1a as a result of increases in mean channel open time and an abbreviation of mean channel closed time (Fig. 2b, c). There was also a trend towards a greater increase in mean open time in RyR2 with the higher β1a concentration that is consistent with the greater RyR2 open probability in the presence of 100 nM β1a. In contrast, RyR1 mean open time was similar at both β1a concentrations. Mean closed times were similarly reduced for both RyR isoforms by 10 and 100 nM β1a.
The effects of β1a on the open (τ o ) and closed (τ c ) time constant components and the relative distribution of events between time constants is presented in Figs. 3 and 4. Open events in RyR1 and RyR2 channels were well described by the sum of three time constants of ~1 (τ o1), ~10 (τ o2), and ~100 ms (τ o3) (Fig. 3). Closed times were also characterized by three time constants of ~1 (τ c1), ~10 (τ c2), and ~100 ms (τ c3) (Fig. 3). Figure 4 shows plots of the average probability of open (Fig. 4a, b, upper plots) and closed (Fig. 4a, b, lower plots) events as a function of the average time constant in the absence (control) and presence of either 10 or 100 nM β1a. Neither the time constants nor the relative probability of events for each time constant varied significantly (p = 0.12–0.99) between +40 and −40 mV and thus were combined in the average data.
Effects of β1a subunit on the distribution of representative RyR channel open and closed dwell times. Exponential open and closed time constants determined for RyR1 (a–c) and RyR2 (d–f). Open and closed times were collected into logged bins and the square root of the relative frequency of events (probability1/2) was plotted against the logarithm of open (open circles) or closed times (filled circles) in milliseconds. Examples are shown for the data from representative individual channels under control (a, d) and after exposure to 10 nM β1a subunit (b, e) and then 100 nM β1a subunit (c, f). The solid lines represent the fit of multiple exponentials to the data. The individual open time constants (τ o1, τ o2, and τ o3) and individual closed time constants (τ c1, τ c2, and τ c3) are indicated by arrows
Effects of β1a subunit on the distribution of RyR channel open and closed dwell times. The probability of open and closed events falling into each time constant is plotted against the respective time constant. The open (τ o , top graphs) and closed (τ c , bottom graphs) time constants and the probability of events in each time constant component were calculated from ~180 s of single channel activity (at +40 and −40 mV). Data is shown for a RyR1 and b RyR2 before (open circle) and after addition of 10 nM β1a subunit (open triangle) or 100 nM β1a subunit (open square), n = 6–12 channel traces. Error bars indicate ± SEM. The individual open time constants (τ o1, τ o2, and τ o3) and individual closed time constants (τ c1, τ c2, and τ c3) are indicated on the top and bottom graphs, respectively. *p < 0.05 vs the probability of events in each time constant in control with 10 nM β1a subunit, determined by ANOVA. # p < 0.05 vs the probability of events in each time constant in control with 100 nM β1a subunit, determined by ANOVA
Both 10 and 100 nM concentrations of β1a subunit decreased the fraction of RyR2 openings in τ o1 by 18.7 ± 1.8 % (p = 0.003) and 16.3 ± 2.0 % (p = 0.012), respectively (Fig. 4b). There was a corresponding increase in the fraction of events for the longer open time constant components at both β1a concentrations (Fig. 4b). In contrast to RyR2, the maximal increases in RyR1 activity after exposure to 10 or 100 nM β1a subunit were reflected in a reduction in the fraction of RyR1 open events in τ o1 and increases in events in the longer time constant group (τ o2) at both β1a concentrations (Fig. 4a). The closed time constant distributions in RyR2 and RyR1 were also altered by both 10 and 100 nM β1a, albeit in slightly different ways. There was an apparent transfer of 14.9 ± 3.1 % of closed events in RyR2 from τ c2 to τ c1 with 10 nM β1a and 13.8 ± 4.7 % with 100 nM β1a (Fig. 4b). In contrast, for RyR1, there were fewer long closed events in τ c3 and more short closed events in τ c1 with both 10 and 100 nM β1a than in control (Fig. 4a).
Overall, the results indicate that 10 and 100 nM β1a increase both RyR1 and RyR2 activity but with a reduced activation of RyR2 by 10 nM β1a. The dwell-time distributions indicate subtle differences between RyR1 and RyR2 in the effects of β1a in redistribution between the different time constant components. In particular, β1a induced a significant increase in events in the longest open time constant component in RyR2 but not RyR1 activity, while significantly reducing the number of events in the longest closed time constant component of RyR1 but not RyR2 activity.
The alternatively spliced ASI residues impact the functional interaction between β1a and RyR1
There is a curious similarity between the cardiac RyR2 isoform and the ASI(−) splice variant of RyR1 in that both lack ASI residues. This may be relevant to the effect of β1a on RyR1 and its contribution to EC coupling as we have shown that the presence of the alternatively spliced AS1 residues influences the gain of EC coupling in skeletal myotubes [27] and modulates RyR1 activity in vitro [28]. Therefore, we determined the impact of the alternatively spliced ASI residues on the activation of RyR1 by β1a. Recombinant ASI(-)RyR1 and ASI(+)RyR1 constructs [28, 43] were incorporated into lipid bilayers and the actions of the β1a subunit on channel activity examined (Fig. 5). The ASI(+) isoform is the adult isoform of RyR1 and its sequence is equivalent to the adult rabbit RyR1 used in the previous section and to the cloned wild type (WT) rabbit RyR1 sequence described in the following section. It is notable in the single-channel activity, as shown in Fig. 5a, b (and in Fig. 7 below), that the recombinant channels (both ASI(-)RyR1 and ASI(+)RyR1) display strong sub-conductance (or sub-state) activity, with long channel openings to levels at ~50 % of the maximal conductance. Channel activity was measured as usual ("Methods" section) with an open threshold set at ~20 % of the maximum single-channel conductance to exclude baseline noise but to include sub-conductance openings to levels >20 % of the maximum. It is important to note that similar amounts of sub-conductance activity were seen in HEK293-expressed WT and ASI(−) compared in Fig. 5 and in WT and RyR1 K-Q channels compared in Fig. 8. Similarly, the smaller amounts of sub-conductance activity were comparable in RyR1 and RyR2 isolated from muscle tissue and compared in Fig. 1. In each case, sub-state activity was similar in constructs being compared.
ASI residues enhance the effect of β1a on recombinant RyR1 channel activity in lipid bilayers. a, b Three second (3 s) traces of ASI(+)RyR1 (a) or ASI(−)RyR (b) activity at +40 mV, opening upwards from the closed (c) to maximum open (o) level, before (top panel; control, cis 10 μM [Ca2+], no ATP) and after addition of 10 nM β1a subunit (middle panel) or 50 nM β1a subunit (bottom panel) to the cis chamber. c Average relative P o (log10 rel P o ) were calculated in the same ways as described for averaged relative P o in Fig. 2a, left. d Average P o . c and d Single channel parameters were calculated from ~180 s of channel activity (at +40 and −40 mV). Data in d is shown for 0 nM β1a (black bar), 10 nM β1a subunit (dark grey bar), and 50 nM β1a subunit (light grey bar). Error bars indicate + SEM, n = 9–12 experiments/bar. *p < 0.05 vs control or 0 nM β1a subunit determined using paired (c) or un-paired (d) Student's t-test, # p < 0.05 vs 10 nM β1a subunit on RyR1 ASI(+) determined by ANOVA
Subconductance activity has been associated with full or partial depletion of FKBP12 from RyRs [29, 30]. Densitometry measurements of immunoprobed RyR and FKBP following co-immunoprecipitation of the RyR1 complex indicate a 65 % reduction (p = 0.014) in FKBP bound to RyR1 in recombinant ASI(+) RyR1 when compared to native RyR1 isolated from muscle. Thus, the sub-conductance activity observed for the recombinant channels is consistent with reduced FKBP12 expression in HEK293 cells and reduced amounts associated with the recombinant RyR1 channels.
Cytoplasmic addition of 10 nM β1a to ASI(+)RyR1 channels produced a significant ~4.4-fold increase in relative P o and a significantly smaller ~2.3-fold increase in relative P o of ASI(-)RyR1 (Fig. 5c). There was no significant difference between the degree of activation of the two RyR1 splice variants following application of 50 nM β1a (Fig. 5c), so that the efficacy of 10 nM β1a on ASI(−)RyR1 isoform appears to be less than that on ASI(+)RyR1. Therefore, the responses of both ASI(-)RyR1 and RyR2 that lack the ASI sequence to application of 10 nM β1a are significantly reduced compared with RyR proteins that contain the ASI sequence, i.e., ASI(+)RyR1 or adult RyR1 isolated from rabbit skeletal muscle.
The impact of the polybasic K3495-R3502 residues on EC coupling and β1a activation of RyR1
The RyR1 polybasic motif facilitates EC coupling in expressing dyspedic myotubes
The PMB (residues K3495-R3502) in RyR1, located immediately downstream from the ASI region (A3481-Q3485), has been implicated in β1a binding to RyR1 and EC coupling [18]. To assess the effect of the PMB on the interaction between β1a and RyR1 channels in bilayers, a mutant of RyR1 in which all six polybasic residues were substituted with glutamines (RyR1 K-Q) was constructed. The functional effects of the RyR1 K-Q mutant on voltage-gated SR Ca2+ release and DHPR L-type currents were confirmed following expression in dyspedic myotubes [18].
Depolarization-dependent Ca2+ release was measured simultaneously with DHPR L-type Ca2+ currents (Fig. 6a, b). Peak L-current density (Fig. 6c) and maximal DHPR Ca2+ conductance (G max) were significantly reduced in RyR1 K-Q mutant-expressing myotubes compared to WT RyR1-expressing myotubes (Table 1). Consistent with an earlier report [18], maximal voltage-induced SR Ca2+ release was also significantly reduced in RyR1 K-Q mutant-expressing myotubes (Fig. 6d and Table 1). In addition, the maximum rate of depolarization-induced Ca2+ release (approximated from the peak of the first derivative of the fluo-4 fluorescence trace elicited during a test depolarization at 30 mV) was significantly reduced in RyR1 K-Q-expressing myotubes compared to WT RyR1-expressing myotubes (Fig. 6e). These findings indicate that the RyR1 K-Q mutation substantially reduces voltage-induced SR Ca2+ release, with a small effect on maximal L-channel conductance. It should be noted that the reduced Ca2+ release is unlikely to result from reduced expression of RyR1 K-Q as it was previously shown that peak 4-chloro-m-cresol stimulated SR Ca2+ release was similar in WT RyR1- and RyR1 K-Q-expressing myotubes [18]. In addition, we found that WT RyR1 and RyR1 K-Q exhibited a similar punctate pattern and DHPR co-localization in expressing myotubes, consistent with similar levels of WT and K-Q expression and junctional localization (Additional file 1: Figure S1 and Additional file 2).
PBM mutation diminishes depolarization-induced SR Ca2+ release and DHPR Ca2+ currents in dyspedic myotubes. Dyspedic myotubes were transfected with either WT RyR1 or RyR1 K-Q mutant. a, b Representative L-type currents (lower trace) and Ca2+ transient (upper trace) obtained following depolarization to the indicated potentials of dyspedic myotubes expressing either a WT RyR1 or b RyR1 K-Q mutant. c Voltage dependence of average (±SEM) peak L-type Ca2+ current density (pA/pF) as a function of voltage. The data were fit (continuous line) with a modified Boltzmann function. d Voltage dependence of average (±SEM) peak Ca2+ transient amplitude as a function of voltage. The data were fit (continuous line) with a Boltzmann function. c, d Average (±SEM) values of the parameters from individual fits to each myotube are shown in Table 1. e The average (±SEM) peak of the first derivative of the fluo-4-fluorescence trace elicited during the test depolarization at 30 mV. c–e n = 10–12 myotubes
The small reduction in G max is unlikely to fully account for the large reduction in voltage-induced SR Ca2+ release observed in RyR1 K-Q-expressing myotubes (Fig. 6d). This is supported by the sigmoidal voltage dependence of peak Ca2+ release, a feature of skeletal-type EC coupling demonstrating that Ca2+ release is independent of Ca2+ influx. The reduction in depolarization-induced DHPR currents and SR Ca2+ release could have resulted from poor targeting of the RyR1 K-Q mutant to the triad junction. However, double immunofluorescence labeling of RyR1 and the DHPR α1 subunit in expressing myotubes indicates that the DHPR and RyR1 proteins similarly co-localized as indicated by the yellow puncta in the overlays shown in Additional file 1. Therefore, compared to WT RyR1, the efficacy of voltage-induced SR Ca2+ release is reduced in RyR1 K-Q-expressing myotubes.
The polybasic motif in RyR1 is required for β1a activation of RyR1
We explored the possibility that the reduction in efficiency of depolarization-induced Ca2+ release was due to an effect of the K-Q substitution on gating properties of the RyR channel or to its response to cytoplasmic [Ca2+] or ATP. RyR1 K-Q mutant channels exhibited unitary conductance of 222.3 ± 18.5 pS at +40 and 247.3 ± 33.1 pS at −40 mV, similar (p = 0.07–0.78) to that of WT RyR1 (311.9 ± 25.2 pS at +40 and 236.5 ± 12.5 pS at −40 mV), or ~220 pS as previously reported for WT RyR1 expressed in HEK293 cells under the recording conditions used in this study [44].
The effects of cytoplasmic Ca2+ and ATP were similar (p = 0.356–0.894) between +40 and -40 mV, and the data were combined. A decrease in cis free [Ca2+] from 1 mM to 10 μM caused a 1.7-fold increase in WT RyR1 P o and a similar 1.6-fold increase in RyR1 K-Q P o , (log10 rel P o of 0.22 ± 0.06 [p = 0.013] and 0.20 ± 0.09 [p = 0.048], respectively, n = 7 for each). Similar increases in P o with a decrease in cis free [Ca2+] from 1 mM to 10 μM have been reported previously for recombinant WT RyR1 channels in lipid bilayers [44] and for [3H]ryanodine binding to RyR1 [33]. Addition of 2 mM Na2 ATP to the cis solution increased WT RyR1 activity by 2.2-fold and RyR1 K-Q activity by 2.5-fold (log10 rel P o of 0.34 ± 0.12 [p = 0.032] and 0.40 ± 0.14 [p = 0.037], respectively, n = 7 for each). As observed for recombinant WT RyR1, prominent sub-state activity was also observed for recombinant RyR1 K-Q channels (Fig. 7a, b). The similar conductance, sub-state activity, and regulation by Ca2+ and ATP between WT RyR1 and RyR1 K-Q channels indicate that the K-Q mutation does not markedly alter RyR1 function in the absence of the β1a subunit.
The K-Q mutation abolishes β1a activation of RyR1 activity. a, b Three second (3 s) traces of WT RyR1 (a) or RyR K-Q mutant (b) activity at +40 mV, opening upward from the closed (c) state to the maximum open (o) level, before (top panel; control, cis 10 μM [Ca2+] and 2 mM ATP) and after addition of 100 nM β1a subunit (bottom panel) to the cis chamber. Open probability (P o ) is shown at the right hand corner of each trace
As before (Figs. 1, 2, 3, 4 and 5), cis addition of 100 nM β1a significantly increased WT RyR1 channel activity (Fig. 7a). In marked contrast, the activity of RyR1 K-Q channels was unaffected by addition of 100 nM β1a (Fig. 7a). As effects of the β1a subunit on WT or RyR1 K-Q channels did not differ (p = 0.677–0.991) between +40 and −40 mV, values at these two potentials were again combined in the average data. On average, addition of 10 or 100 nM β1a significantly increased WT RyR1 relative P o by 1.8- and 1.9-fold, respectively (Fig. 8a), due to a significant increase in mean open time (Fig. 8b) and decrease in mean closed time (Fig. 8c). On the other hand, neither the relative P o nor the mean open or closed times of RyR1 K-Q channels were significantly altered by addition of either 10 or 100 nM β1a subunit (Fig. 8a–c). Thus, the ability of the β1a subunit to activate RyR1 was abolished by neutralizing the polybasic residues within the K4395-R3502 region, indicating that the PBM is required for the functional effect of β1a subunit on RyR1 activity.
Effect of β1a subunit on RyR1 in lipid bilayers is abolished for the K-Q mutation. a–c (left) Average relative P o (log10 rel P o ; a), mean open time (log10 To; b) or mean closed time (log10 rel T c ; c) were calculated in the same ways as described for averaged relative P o in Fig. 2a, left. a–c (right) The average of the single channel parameter values shown to the right of the corresponding relative values. a–c Single-channel parameters were calculated from ~180 s of channel activity (at +40 and −40 mV). Data are shown without β1a (0 nM β1a) (black bar), 10 nM β1a subunit (dark grey bar), and 100 nM β1a subunit (light grey bar), where applicable. Error bars indicate ± −SEM. n = 5–14 experiments/bar. *p < 0.05 vs control determined by paired (left) or un-paired (right) Student's t-test, # p < 0.05 vs WT RyR2 determined by ANOVA
The results presented here provide novel insight into the regions of RyR1 that influence the action of the β1a subunit on RyR1 activity and have implications for the role of the β1a subunit in skeletal muscle EC coupling. Our results demonstrate that the functional effect of 100 nM β1a subunit is conserved between RyR1 and RyR2, although the activation by 10 nM β1a was lower in RyR2 than in RyR1. Interestingly, a difference was also observed for the activation of ASI(−)RyR1 and ASI(+)RyR1 isoforms by 10 nM β1a, in that the lower concentration of β1a was also less effective in activating ASI(−)RyR1 than ASI(+)RyR1. In contrast to the maintained, although different, activation of the two RyR isoforms by the β1a subunit, neutralization of the PBM in RyR1 abolished β1a activation of RyR1. One interpretation of this finding is that the ~50 % reduction in depolarization-dependent Ca2+ release results from disruption of direct β1a activation of RyR1 during EC coupling.
The action of β1a subunit on RyR1 and RyR2 channel activity is largely conserved
The activation of RyR1 and RyR2 by β1a suggests that the β1a binding site is conserved across these RyR isoforms. The small concentration-dependent differences between effects on RyR1 and RyR2 suggest minor differences in either the binding residues or the binding pocket that reduces the affinity of β1a for RyR2 (and ASI(−)RyR1). It is difficult to identify specific sequences that could account for the different affinities for β1a as there is a 13.2 % (>600 residues) sequence disparity between RyR1 and RyR2 isoforms, according to a CLUSTALW multiple alignment [45] of rabbit RyR1 [Swiss-Prot: P11716.1] and rabbit RyR2 [Swiss-Prot: P30957.3]. Given that the string of positive residues is reduced from six to five, this variation is unlikely to account for the observed concentration-dependent difference between β1a modulation of the two isoforms, although such a possibility cannot be fully excluded. Interestingly, except for one additional positive charge in the RyR1, the PBM is conserved in RyR2 (RyR1: K3495KKRR_ _ R3502 and RyR2: K3452MKRK_ _R3459) and thus is unlikely to account for the observed concentration-dependent difference between β1a modulation of the two isoforms. However, just upstream from the PBM, four of the five ASI residues in RyR1 (A3481-Q3485) are missing from the rabbit (and predicted pig) RyR2 sequence. It may be significant that the lack of ASI residues in full-length ASI(−)RyR1 reduces the efficacy of 10 nM β1a-mediated activation. Thus, it is plausible that the difference between β1a modulation of RyR1 and RyR2 is partially due to the presence or absence of the ASI residues, respectively. The conservation of the modulatory effect of β1a on RyR1 and RyR2 does not reflect the in vivo studies showing that RyR2 is unable to replace RyR1 in skeletal muscle EC coupling [25, 46]. However, the lack of skeletal muscle EC coupling in RyR2-expressing dyspedic myotubes is most likely due to the fact that DHPR tetrads are not restored in RyR2-expressing dyspedic myotubes [46], indicating that β1a is unable to correctly align DHPRs with RyR2 in order to ensure a direct interaction between the two proteins. It is also possible that the II-III loop critical region is unable to engage with RyR2 through β1a.
The importance of the RyR1 polybasic motif for β1a subunit increase in RyR1 channel activity
The role of the RyR1 PBM in the β1a-mediated increase in channel activity was assessed from the response of recombinant RyR1 K-Q channels in bilayers to the addition of the β1a subunit. RyR1 K-Q and WT RyR1 channel conductance and regulation by cytoplasmic modulators were similar, indicating that RyR1 K-Q channels function normally. However, RyR1 K-Q channel activity was unaltered by the β1a subunit. Therefore, the reduction in voltage-gated Ca2+ release observed in RyR1 K-Q-expressing myotubes is likely to reflect a specific effect of the polybasic residues on β1a subunit regulation of RyR1 channel activity during EC coupling rather than a general effect on RyR1 channel function. However, we cannot rule out the possibility that modest differences in RyR1 expression contribute to the reduced L-channel conductance and voltage-gated Ca2+ release in K-Q-expressing myotubes, although this seems unlikely given previous reports of 4-chloro-m-cresol stimulated SR Ca2+ release in myotubes RyR1 K-Q [18] and the data in Additional file 1.
Given that the PBM in the larger M3201-W3661 fragment of RyR1 is required for pull down of the β1a subunit [18], it is likely that the lack of an effect of β1a on RyR1 K-Q channels is due to the inability of β1a to bind to the PBM mutant channel. Alternatively, the PBM may be important for maintaining RyR1 in a conformation permissive for β1a binding, rather than directly contributing to binding, as the RyR1 basic residues would be unlikely to interact with the hydrophobic residues in the β1a C-terminal domain (L496, L500, and W503) previously shown to bind RyR1 [21]. In addition, although the PBM is implicated in ASI-mediated inter-domain inhibition of RyR1 [27, 43], the structure of this motif is not altered by substituting three of the six basic residues with alanine residues [27]. Thus, neutralization of the PBM more likely disrupts the inter-domain interaction rather than changes the intrinsic structure of the ASI-polybasic region. In this case, disruption of the RyR1 PBM inter-domain interaction may alter an essential conformation of the β1a binding site or prevent β1a access to its binding site on RyR1.
The β1a subunit is unlikely to be the sole signaling conduit between the DHPR and RyR1 during EC coupling. Consistent with this, expression of the RyR1 K-Q mutant in dyspedic myotubes partially restored sigmoidal, depolarization-dependent Ca2+ release even though β1a modulation of RyR1 in bilayers was abolished. In addition, previous studies have also shown that truncation of β1a C-terminal residues, essential for β1a modulation of RyR1, also reduces but does not abolish depolarization-induced Ca2+ release [17, 40], an outcome that was also observed in adult skeletal muscle fibers that overexpressed a β1a subunit interacting protein, Rem [20]. Finally, alanine substitution of β1a subunit hydrophobic triplet residues (L496, L500, and W503) only partially reduces depolarization-induced Ca2+ release in β1a null myotubes [14], despite this mutation fully abolishing β1a modulation of RyR1 activity in vitro [21].
The role of the RyR1 ASI residues in β1a subunit increase in RyR1 channel activity
It is curious that 10 nM β1a subunit increased ASI(−)RyR1 activity less than ASI(+)RyR1 given that EC coupling is enhanced in dyspedic myotubes that express ASI(−)RyR1 relative to ASI(+)RyR1 [27]. The greater activation of ASI(+)RyR1 by 10 nM β1a is consistent with effects reported previously of agonists of RyR1, including caffeine and 4-chloro-m-cresol [27, 28]. Thus, the increased gain of EC coupling observed for ASI(-)RyR1 may not reflect a contribution of the β1a subunit to EC coupling. However, it is possible that activation of RyR1 by agonist binding includes a common mechanism for activation by agonists that differs from that involved in EC coupling. As it is likely that more than one interaction between RyR1 and the DHPR is involved in EC coupling, the combined result of these interactions may produce different effects on the two alternatively spliced variants such that ASI(−)RyR1 channels are activated more strongly by depolarization than ASI(+)RyR1 channels.
The possibility that the ASI region is involved in an inhibitory inter-domain interaction was previously investigated using peptides corresponding to the ASI region from T3471–G3500 [43]. The peptide corresponding to the ASI(−) sequence was more effective in activating ASI(-)RyR1 than ASI(+)RyR1. Together with the finding that ASI(−)RyR1 channels were generally less active than ASI(+)RyR1 channels, these findings suggest that stronger inhibitory inter-domain interactions may exist in ASI(−)RyR1. It is possible then that the triggering mechanism activated during EC coupling disrupts this inhibitory inter-domain interaction giving rise to greater activation of ASI(-)RyR1. This disruption may not occur with RyR1 agonist binding and indeed a stronger inhibitory inter-domain interaction in ASI(-)RyR1 may even oppose activation by β1a and other agonists, allowing for these triggers to more strongly activate ASI(+)RyR1 channels.
The results presented in this study suggest that a functional β1a interaction is conserved between RyR1 and RyR2 and that β1a activation of RyRs is regulated by the presence of the ASI residues. Importantly, we also show that the PBM residues are essential for direct activation of RyR1 by β1a subunit in vitro. This suggests that the ~50 % reduction in Ca2+ release during EC coupling in dyspedic myotubes expressing RyR1 with a neutralized PBM is due to removal of β1a activation of RyR1, and hence, that other DHPR-RyR1 coupling elements (e.g., II-III loop critical domain) contribute to transmission of the remaining Ca2+ release during EC coupling.
ASI:
alternatively splicing region I
DHPR:
dihydropyridine receptor
excitation-contraction
PBM:
polybasic motif
RyR1:
skeletal ryanodine receptor
cardiac ryanodine receptor
sarcoplasmic reticulum
The authors are grateful to Suzy Pace and to Joan Stivala for assistance with the preparation of skeletal and cardiac muscle SR vesicles. We also thank Dr. PD. Allen for providing access to the dyspedic mice used in this study. The work was supported by grants from the Australian National Health and Medical Research Council, APP1020589 and APP APP1002589 to AFD, MGC, and PGB, Muscular Dystrophy Association (MDA275574) and National Institutes of Health (AR059646) to RTD, a Career development award (APP1003985) to NAB, an Australian Postgraduate Award to RTR, and an Australia National University postgraduate award to HW.
Additional file 1: Figure S1. Description of data: a figure with two parts showing immune-fluorescent labeling of DHPR and RyR1 in myotubes.
Additional file 2: A legend to additional Figure S1.
All authors participated in study design, data interpretation, and preparation and critical revision of the manuscript for important intellectual content. RR contributed to the design of the native RyR1 and RyR2 experiments and to the recombinant WT RyR1 and RyR1 K-Q. She carried out lipid bilayer experiments and analysis of data and expressed and purified RyR1 K-Q and WT RyR1 constructs. HW contributed to the design of the ASI(+) and ASI(−) experiments, expressed and isolated channel protein, and carried out single-channel recording and analysis with recombinant ASI(+) and ASI(−) RyR1. LG designed the RyR1 K-Q construct and carried out simultaneous measurements of macroscopic Ca2+ currents and Ca2+ transients in myotubes and immunofluorescence labeling of myotubes and data analysis. MGC and PGB provided major input into the design and data interpretation. PGB also contributed to recombinant protein expression and purification. NB performed the experiments and analyzed the data showing the presence and level of FKBP12 expression in HEK cells. RD participated in the design of the study, particularly to RyR1 K-Q construct design, measurements of macroscopic Ca2+ currents, and Ca2+ transients in myotubes and immunofluorescence labeling of myotubes, and contributed to the analysis and interpretation of the data. AD contributed to the concepts, design and coordination of all aspects of the experiments, interpretation of the data, and coordination of manuscript preparation and submission. All authors read and approved the final manuscript.
Department of Biochemistry, Molecular Biology and Biophysics, University of Minnesota, Minneapolis, MN, USA
John Curtin School of Medical Research, Australian National University, Canberra, Australian Capital, PO Box 334, Canberra, ACT 2601, Australia
Department of Physiology and Pharmacology, University of Rochester Medical Center, Rochester, NY, USA
Discipline of Biomedical Sciences, Centre for Research in Therapeutic Solutions, University of Canberra, Canberra, ACT 2601, Australia
Dirksen RT. Bi-directional coupling between dihydropyridine receptors and ryanodine receptors. Front Biosci. 2002;7:d659–670.PubMedView ArticleGoogle Scholar
Dulhunty AF, Haarmann CS, Green D, Laver DR, Board PG, Casarotto MG. Interactions between dihydropyridine receptors and ryanodine receptors in striated muscle. Prog Biophys Mol Biol. 2002;79:45–75.PubMedView ArticleGoogle Scholar
Rebbeck RT, Karunasekara Y, Board PG, Beard NA, Casarotto MG, Dulhunty AF. Skeletal muscle excitation-contraction coupling: who are the dancing partners? Int J Biochem Cell Biol. 2014;48:28–38.PubMedView ArticleGoogle Scholar
Beam KG, Bannister RA. Looking for answers to EC coupling's persistent questions. J Gen Physiol. 2010;136:7–12.PubMed CentralPubMedView ArticleGoogle Scholar
Gregg RG, Messing A, Strube C, Beurg M, Moss R, Behan M, et al. Absence of the beta subunit (cchb1) of the skeletal muscle dihydropyridine receptor alters expression of the alpha 1 subunit and eliminates excitation-contraction coupling. Proc Natl Acad Sci U S A. 1996;93:13961–6.Google Scholar
Tanabe T, Beam KG, Powell JA, Numa S. Restoration of excitation-contraction coupling and slow calcium current in dysgenic muscle by dihydropyridine receptor complementary DNA. Nature. 1988;336:134–9.PubMedView ArticleGoogle Scholar
Rios E, Brum G. Involvement of dihydropyridine receptors in excitation-contraction coupling in skeletal muscle. Nature. 1987;325:717–20.PubMedView ArticleGoogle Scholar
Schneider MF, Chandler WK. Voltage dependent charge movement of skeletal muscle: a possible step in excitation-contraction coupling. Nature. 1973;242:244–6.PubMedView ArticleGoogle Scholar
Nakai J, Tanabe T, Konno T, Adams B, Beam KG. Localization in the II-III loop of the dihydropyridine receptor of a sequence critical for excitation-contraction coupling. J Biol Chem. 1998;273:24983–6.PubMedView ArticleGoogle Scholar
Takekura H, Paolini C, Franzini-Armstrong C, Kugler G, Grabner M, Flucher BE. Differential contribution of skeletal and cardiac II-III loop sequences to the assembly of dihydropyridine-receptor arrays in skeletal muscle. Mol Biol Cell. 2004;15:5408–19.PubMed CentralPubMedView ArticleGoogle Scholar
Wilkens CM, Kasielke N, Flucher BE, Beam KG, Grabner M. Excitation-contraction coupling is unaffected by drastic alteration of the sequence surrounding residues L720-L764 of the alpha 1S II-III loop. Proc Natl Acad Sci U S A. 2001;98:5892–7.PubMed CentralPubMedView ArticleGoogle Scholar
Schredelseker J, Dayal A, Schwerte T, Franzini-Armstrong C, Grabner M. Proper restoration of excitation-contraction coupling in the dihydropyridine receptor beta1-null zebrafish relaxed is an exclusive function of the beta1a subunit. J Biol Chem. 2009;284:1242–51.PubMed CentralPubMedView ArticleGoogle Scholar
Dayal A, Bhat V, Franzini-Armstrong C, Grabner M. Domain cooperativity in the beta1a subunit is essential for dihydropyridine receptor voltage sensing in skeletal muscle. Proc Natl Acad Sci U S A. 2013;110:7488–93.PubMed CentralPubMedView ArticleGoogle Scholar
Eltit JM, Franzini-Armstrong C, Perez CF. Amino acid residues 489–503 of dihydropyridine receptor (DHPR) beta1a subunit are critical for structural communication between the skeletal muscle DHPR complex and type 1 ryanodine receptor. J Biol Chem. 2014;289:36116–24.PubMedView ArticleGoogle Scholar
Rebbeck RT, Karunasekara Y, Gallant EM, Board PG, Beard NA, Casarotto MG, et al. The beta(1a) subunit of the skeletal DHPR binds to skeletal RyR1 and activates the channel via its 35-residue C-terminal tail. Biophys J. 2011;100:922–30.Google Scholar
Garcia MC, Carrillo E, Galindo JM, Hernandez A, Copello JA, Fill M, et al. Short-term regulation of excitation-contraction coupling by the beta1a subunit in adult mouse skeletal muscle. Biophys J. 2005;89:3976–84.Google Scholar
Beurg M, Ahern CA, Vallejo P, Conklin MW, Powers PA, Gregg RG, et al. Involvement of the carboxy-terminus region of the dihydropyridine receptor beta1a subunit in excitation-contraction coupling of skeletal muscle. Biophys J. 1999;77:2953–67.Google Scholar
Cheng W, Altafaj X, Ronjat M, Coronado R. Interaction between the dihydropyridine receptor Ca2+ channel beta-subunit and ryanodine receptor type 1 strengthens excitation-contraction coupling. Proc Natl Acad Sci U S A. 2005;102:19225–30.PubMed CentralPubMedView ArticleGoogle Scholar
Hernandez-Ochoa EO, Olojo RO, Rebbeck RT, Dulhunty AF, Schneider MF. beta1a490-508, a 19-residue peptide from C-terminal tail of Cav1.1 beta1a subunit, potentiates voltage-dependent calcium release in adult skeletal muscle fibers. Biophys J. 2014;106:535–47.PubMed CentralPubMedView ArticleGoogle Scholar
Beqollari D, Romberg CF, Filipova D, Meza U, Papadopoulos S, Bannister RA. Rem uncouples excitation-contraction coupling in adult skeletal muscle fibers. J Gen Physiol. 2015;46(1):97–108.View ArticleGoogle Scholar
Karunasekara Y, Rebbeck RT, Weaver LM, Board PG, Dulhunty AF, Casarotto MG. An alpha-helical C-terminal tail segment of the skeletal L-type Ca2+ channel beta1a subunit activates ryanodine receptor type 1 via a hydrophobic surface. FASEB J. 2012;26:5049–59.PubMedView ArticleGoogle Scholar
Beurg M, Sukhareva M, Ahern CA, Conklin MW, Perez-Reyes E, Powers PA, et al. Differential regulation of skeletal muscle L-type Ca2+ current and excitation-contraction coupling by the dihydropyridine receptor beta subunit. Biophys J. 1999;76:1744–56.Google Scholar
Tanabe T, Mikami A, Numa S, Beam KG. Cardiac-type excitation-contraction coupling in dysgenic skeletal muscle injected with cardiac dihydropyridine receptor cDNA. Nature. 1990;344:451–3.PubMedView ArticleGoogle Scholar
Protasi F, Takekura H, Wang Y, Chen SR, Meissner G, Allen PD, et al. RYR1 and RYR3 have different roles in the assembly of calcium release units of skeletal muscle. Biophys J. 2000;79:2494–508.Google Scholar
Nakai J, Ogura T, Protasi F, Franzini-Armstrong C, Allen PD, Beam KG. Functional nonequality of the cardiac and skeletal ryanodine receptors. Proc Natl Acad Sci U S A. 1997;94:1019–22.PubMed CentralPubMedView ArticleGoogle Scholar
Fessenden JD, Wang Y, Moore RA, Chen SR, Allen PD, Pessah IN. Divergent functional properties of ryanodine receptor types 1 and 3 expressed in a myogenic cell line. Biophys J. 2000;79:2509–25.PubMed CentralPubMedView ArticleGoogle Scholar
Kimura T, Lueck JD, Harvey PJ, Pace SM, Ikemoto N, Casarotto MG, et al. Alternative splicing of RyR1 alters the efficacy of skeletal EC coupling. Cell Calcium. 2009;45:264–74.Google Scholar
Kimura T, Nakamori M, Lueck JD, Pouliquin P, Aoike F, Fujimura H, et al. Altered mRNA splicing of the skeletal muscle ryanodine receptor and sarcoplasmic/endoplasmic reticulum Ca2+-ATPase in myotonic dystrophy type 1. Hum Mol Genet. 2005;14:2189–200.Google Scholar
Ahern GP, Junankar PR, Dulhunty AF. Single channel activity of the ryanodine receptor calcium release channel is modulated by FK-506. FEBS Lett. 1994;352:369–74.PubMedView ArticleGoogle Scholar
Ahern GP, Junankar PR, Dulhunty AF. Subconductance states in single-channel activity of skeletal muscle ryanodine receptors after removal of FKBP12. Biophys J. 1997;72:146–62.PubMed CentralPubMedView ArticleGoogle Scholar
Saito A, Seiler S, Chu A, Fleischer S. Preparation and morphology of sarcoplasmic reticulum terminal cisternae from rabbit skeletal muscle. J Cell Biol. 1984;99:875–85.PubMedView ArticleGoogle Scholar
Chamberlain BK, Fleischer S. Isolation of canine cardiac sarcoplasmic reticulum. Methods Enzymol. 1988;157:91–9.PubMedView ArticleGoogle Scholar
Laver DR, Roden LD, Ahern GP, Eager KR, Junankar PR, Dulhunty AF. Cytoplasmic Ca2+ inhibits the ryanodine receptor from cardiac muscle. J Membr Biol. 1995;147:7–22.PubMedView ArticleGoogle Scholar
Nakai J, Dirksen RT, Nguyen HT, Pessah IN, Beam KG, Allen PD. Enhanced dihydropyridine receptor channel activity in the presence of ryanodine receptor. Nature. 1996;380:72–5.PubMedView ArticleGoogle Scholar
Avila G, Dirksen RT. Functional impact of the ryanodine receptor on the skeletal muscle L-type Ca(2+) channel. J Gen Physiol. 2000;115:467–80.PubMed CentralPubMedView ArticleGoogle Scholar
Avila G, O'Connell KM, Groom LA, Dirksen RT. Ca2+ release through ryanodine receptors regulates skeletal muscle L-type Ca2+ channel expression. J Biol Chem. 2001;276:17732–8.PubMedView ArticleGoogle Scholar
Laver DR, van Helden DF. Three independent mechanisms contribute to tetracaine inhibition of cardiac calcium release channels. J Mol Cell Cardiol. 2011;51:357–69.PubMedView ArticleGoogle Scholar
Sigworth FJ, Sine SM. Data transformations for improved display and fitting of single-channel dwell time histograms. Biophys J. 1987;52:1047–54.PubMed CentralPubMedView ArticleGoogle Scholar
Tae HS, Cui Y, Karunasekara Y, Board PG, Dulhunty AF, Casarotto MG. Cyclization of the intrinsically disordered alpha1S dihydropyridine receptor II-III loop enhances secondary structure and in vitro function. J Biol Chem. 2011;286:22589–99.PubMed CentralPubMedView ArticleGoogle Scholar
Sheridan DC, Cheng W, Ahern CA, Mortenson L, Alsammarae D, Vallejo P, et al. Truncation of the carboxyl terminus of the dihydropyridine receptor beta1a subunit promotes Ca2+ dependent excitation-contraction coupling in skeletal myotubes. Biophys J. 2003;84:220–37.Google Scholar
Goonasekera SA, Chen SR, Dirksen RT. Reconstitution of local Ca2+ signaling between cardiac L-type Ca2+ channels and ryanodine receptors: insights into regulation by FKBP12.6. Am J Physiol Cell Physiol. 2005;289:C1476–1484.PubMedView ArticleGoogle Scholar
Copello JA, Barg S, Onoue H, Fleischer S. Heterogeneity of Ca2+ gating of skeletal muscle and cardiac ryanodine receptors. Biophys J. 1997;73:141–56.PubMed CentralPubMedView ArticleGoogle Scholar
Kimura T, Pace SM, Wei L, Beard NA, Dirksen RT, Dulhunty AF. A variably spliced region in the type 1 ryanodine receptor may participate in an inter-domain interaction. Biochem J. 2007;401:317–24.PubMed CentralPubMedView ArticleGoogle Scholar
Goonasekera SA, Beard NA, Groom L, Kimura T, Lyfenko AD, Rosenfeld A, et al. Triadin binding to the C-terminal luminal loop of the ryanodine receptor is important for skeletal muscle excitation contraction coupling. J Gen Physiol. 2007;130:365–78.Google Scholar
Combet C, Blanchet C, Geourjon C, Deleage G. NPS@: network protein sequence analysis. Trends Biochem Sci. 2000;25:147–50.PubMedView ArticleGoogle Scholar
Protasi F, Paolini C, Nakai J, Beam KG, Franzini-Armstrong C, Allen PD. Multiple regions of RyR1 mediate functional and structural interactions with alpha(1S)-dihydropyridine receptors in skeletal mudarw1scle. Biophys J. 2002;83:3230–44.Google Scholar
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
|
CommonCrawl
|
Ground state solutions for a class of generalized quasilinear Schrödinger–Poisson systems
Liejun Shen ORCID: orcid.org/0000-0003-4412-26941,2
Boundary Value Problems volume 2018, Article number: 44 (2018) Cite this article
This paper is concerned with the existence of ground state solutions for a class of generalized quasilinear Schrödinger–Poisson systems in \(\mathbb {R}^{3}\) which have appeared in plasma physics, as well as in the description of high-power ultrashort lasers in matter. By employing a change of variables, the generalized quasilinear systems are reduced to a semilinear one, whose associated functionals are well defined in the usual Sobolev space and satisfy the mountain-pass geometric. Finally, we use Ekeland's variational principle and the mountain-pass theorem to obtain the ground state solutions for the given problem.
Introduction and main results
The aim of this paper is to establish the existence of ground state solutions to the following generalized quasilinear Schrödinger–Poisson system:
$$ \textstyle\begin{cases} -\operatorname{div}(g^{2}(u)\nabla u)+g(u)g^{\prime}(u)|\nabla u|^{2}+a(x)u+\phi G(u)g(u)=k(x,u), & x\in \mathbb {R}^{3}, \\ -\Delta\phi=G^{2}(u), & x\in \mathbb {R}^{3}, \end{cases} $$
where \(g:\mathbb {R}\to \mathbb {R}^{+}=[0,\infty)\) is an even differential function, \(g^{\prime}(s)\geq0\) for all \(s\geq0\), and \(G(t)=\int_{0}^{t}g(s)\,ds\), \(k:\mathbb {R}^{3}\times \mathbb {R}\to \mathbb {R}\) is continuous, \(a(x):\mathbb {R}^{3}\to \mathbb {R}^{+}\) is continuous.
If \(\phi=0\) in (1.1), solutions of this type are related to the existence of solitary wave solutions for quasilinear Schrödinger equations of the form
$$ i\partial_{t}z=-\Delta z+W(x)z-k(x,z)-\omega\Delta l \bigl(|z|^{2}\bigr)l^{\prime }\bigl(|z|^{2}\bigr)z,\quad x\in \mathbb {R}^{N}, $$
where ω is a real constant, \(N\geq3\), \(z:\mathbb {R}\times \mathbb {R}^{N}\to \mathbb{C}\), \(W:\mathbb {R}^{N}\to \mathbb {R}\) is a given potential, \(l:\mathbb {R}\to \mathbb {R}\) and \(k:\mathbb {R}^{N}\times \mathbb {R}\to \mathbb {R}\) are suitable functions.
The semilinear case corresponding to \(\omega=0\) has been studied extensively by many scholars in recent years (see [1–5]). Quasilinear equations of the form (1.2) have been derived as models of several physical phenomena corresponding to various types of \(l(s)\). For instance, the case \(l(s)=s\) models the time evolution of the condensate wave function in a superfluid film [6, 7], and it is called the superfluid film equation in fluid mechanics by Kurihara [6]. In the case \(l(s)=(1+s)^{1/2}\), problem (1.2) models the self-channeling of a high-power ultra short laser in matter, the propagation of a high-irradiance laser in a plasma creates an optical index depending nonlinearly on the light intensity and this leads to an interesting new nonlinear wave equation (see [8–11]). Equation (1.2) also appears in plasma physics and fluid mechanics [12–15], in dissipative quantum mechanics [16] and in condensed matter theory [17].
Recently, Deng–Peng–Yan [18] introduced a class of generalized quasilinear critical Schrödinger equations,
$$ -\operatorname{div}\bigl(g^{2}(u)\nabla u \bigr)+g(u)g^{\prime}(u)|\nabla u|^{2}+a(x)u =k(x,u),\quad x\in \mathbb {R}^{N} , $$
to study the existence of positive soliton solutions. The reason we call Eq. (1.3) a generalized quasilinear Schrödinger equation is that if we take
$$g^{2}(u)=1+\frac{ [l^{\prime}(u^{2}) ]^{2}}{2}, $$
then the following quasilinear equation:
$$-\Delta u+a(x)u-\Delta l\bigl(u^{2}\bigr)l^{\prime} \bigl(u^{2}\bigr)u=k(x,u),\quad x\in \mathbb {R}^{N}, $$
turns into it (see [18, 19]). Equation (1.3) also arises in biological models and propagation of laser beams when \(g(u)\) is a positive constant. If we set \(g^{2}(u) =1 +2u^{2}\), i.e. \(l(s) =s\), we get the superfluid film equation in plasma physics:
$$-\Delta u+V(x)u-\Delta\bigl(u^{2}\bigr)u=k(x,u),\quad x\in \mathbb {R}^{N}. $$
If we set \(g^{2}(u) =1 + \frac{u^{2}}{2(1+u^{2})}\), i.e. \(l(s)=(1+s)^{1/2}\), we get the equation
$$-\Delta u+V(x)u- \bigl[\Delta\bigl(1+u^{2}\bigr)^{\frac{1}{2}} \bigr] \frac {1}{2(1+u^{2})^{\frac{1}{2}}}u=k(x,u), \quad x\in \mathbb {R}^{N}, $$
which models the self-channeling of a high-power ultrashort laser in matter. For the related and important results on quasilinear Schrödinger equations, we refer the reader to [19–26] and the references therein.
We call problem (1.1) the generalized quasilinear Schrödinger–Poisson system because of the coupling of the Poisson equation with (1.3). Indeed, if we choose \(g(t)=1\) for all \(t\in \mathbb {R}\), then (1.1) transforms to the following classical Schrödinger–Poisson system:
$$\textstyle\begin{cases} -\Delta u+a(x)u+\phi u=k(x,u), & x\in \mathbb {R}^{3}, \\ -\Delta\phi=u^{2}, & x\in \mathbb {R}^{3}, \end{cases} $$
proposed by Benci–Fortunato [27, 28] to represent solitary waves for nonlinear Schrödinger type equations and look for the existence of standing waves interacting with an unknown electrostatic field. We refer the reader to [29–34] for some related and important results. In view of this, it is also reasonable to consider the generalized quasilinear Schrödinger–Poisson system.
According to Ruiz [35], for any \(u\in H^{1}(\mathbb {R}^{3})\) we can define
$$\phi_{u}(x)=\frac{1}{4\pi} \int_{\mathbb {R}^{3}}\frac{u^{2}(y)}{|x-y|}\, dy, $$
which is a weak solution to \(-\Delta\phi=u^{2}\) in \(\mathbb {R}^{3}\). Therefore the weak solution of \(-\Delta\phi=G^{2}(u)\) can be represented as
$$\phi_{G(u)}(x)=\frac{1}{4\pi} \int_{\mathbb {R}^{3}}\frac{G^{2}(u(y))}{|x-y|}\,dy $$
and then (1.1) can be reduced to a single equation:
$$ -\operatorname{div}\bigl(g^{2}(u)\nabla u \bigr)+g(u)g^{\prime}(u)|\nabla u|^{2}+a(x)u+\phi _{G(u)} G(u)g(u)=k(x,u),\quad x\in \mathbb {R}^{3}. $$
In this paper, we establish the existence of ground state solutions for problem (1.1). To this end, we assume \(k(x,t)=b(x)|G(u)|^{p-2}G(u)g(u)-c(x)|G(u)|^{q-2}G(u)g(u)\). Hence the problem (1.4) can be rewritten in the following form:
$$\begin{aligned}& -\operatorname{div}\bigl(g^{2}(u)\nabla u\bigr)+g(u)g^{\prime}(u) \vert \nabla u \vert ^{2}+a(x)u+\phi _{G(u)} G(u)g(u) \\& \quad =b(x) \bigl\vert G(u) \bigr\vert ^{p-2}G(u)g(u)-c(x) \bigl\vert G(u) \bigr\vert ^{q-2}G(u)g(u), \end{aligned}$$
whose corresponding variational functional is given by
$$\begin{aligned} I(u) =&\frac{1}{2} \int_{\mathbb {R}^{3}}g^{2}(u) \vert \nabla u \vert ^{2}\,dx+\frac{1}{2} \int _{\mathbb {R}^{3}}a(x)u^{2}\,dx +\frac{1}{4} \int_{\mathbb {R}^{3}}\phi_{G(u)}G^{2}(u)\,dx \\ &{}-\frac{1}{p} \int_{\mathbb {R}^{3}}b(x) \bigl\vert G(u) \bigr\vert ^{p} \,dx+\frac{1}{q} \int_{\mathbb {R}^{3}}c(x) \bigl\vert G(u) \bigr\vert ^{q} \,dx. \end{aligned}$$
Unfortunately, the above functional I may be not well defined in \(H^{1}(\mathbb {R}^{3})\). To overcome this difficulty, we make a change of variable constructed by Shen–Wang [19],
$$v=G(u)= \int^{u}_{0}g(\tau)\,d\tau. $$
Then we get
$$\begin{aligned} J(v) =&\frac{1}{2} \int_{\mathbb {R}^{3}} \vert \nabla v \vert ^{2}\,dx+ \frac {1}{2} \int_{\mathbb {R}^{3}}a(x) \bigl\vert G^{-1}(v) \bigr\vert ^{2}\,dx + \frac{1}{4} \int_{\mathbb {R}^{3}}\phi_{v}v^{2}\,dx \\ &{}-\frac{1}{p} \int_{\mathbb {R}^{3}}b(x) \vert v \vert ^{p}\,dx+ \frac{1}{q} \int_{\mathbb {R}^{3}}c(x) \vert v \vert ^{q} \,dx. \end{aligned}$$
Since g is a nondecreasing positive function, we get \(|G^{-1}(v)|\leq |v|/g(0)\). It is clear that J is well defined in \(H^{1}(\mathbb {R}^{3})\) and \(J\in C^{1}\) if assumption (H1) holds.
If u is a nontrivial solution of (1.5), then it should satisfy
$$\begin{aligned}& \int_{\mathbb {R}^{3}} \bigl[g^{2}(u)\nabla u\nabla \varphi+g(u)g^{\prime }(u) \vert \nabla u \vert ^{2} \varphi+a(x)u\varphi +\phi_{G(u)}G(u)g(u)\varphi \\& \quad {}-b(x) \bigl\vert G(u) \bigr\vert ^{p-2}G(u)g(u)\varphi+c(x) \bigl\vert G(u) \bigr\vert ^{q-2}G(u)g(u)\varphi \bigr]\,dx=0, \end{aligned}$$
for any \(\varphi\in C^{\infty}_{0}(\mathbb {R}^{3})\). Let \(\varphi=\psi/g(u)\), we know that the above formula is equivalent to
$$\begin{aligned} \bigl\langle J^{\prime}(v),\psi\bigr\rangle =& \int_{\mathbb {R}^{3}} \biggl[ \nabla v \nabla\psi+a(x)\frac{G^{-1}(v)}{g(G^{-1}(v))} \psi+\phi_{v} v\psi -b(x)|v|^{p-2}v\varphi+c(x)|v|^{q-2}v \varphi \,dx \biggr] \\ =&0,\quad \forall\psi\in C^{\infty}_{0}\bigl(\mathbb {R}^{3} \bigr). \end{aligned}$$
Therefore, in order to find the nontrivial solutions of (1.5), it suffices to study the existence of the nontrivial solutions of the following equations:
$$ -\Delta v+a(x)\frac{G^{-1}(v)}{g(G^{-1}(v))}+\phi_{v} v=b(x)|v|^{p-2}v-c(x)|v|^{q-2}v. $$
It is easy to verify that the problem (1.5) is equivalent to problem (1.7) and the nontrivial critical points of \(J(v)\) are the nontrivial solutions of problem (1.7). Inspired by all the work described above, particularly, by the results in [18, 25], we intend to show the existence of ground state solutions of problem (1.7). To this end, we first give some assumptions on g, a, b and c.
(g):
\(g\in C^{1}(\mathbb {R})\) is an even positive function and \(g^{\prime}(t)\geq0\) for all \(t\geq0\) and \(g(0) =1\);
(H1):
\(a(x)\), \(b(x)\) and \(c(x)\) are continuous and nonnegative and bounded;
\(a(x)\leq\lim_{|x|\to\infty}a(x)\triangleq a_{\infty}\), \(b(x)\geq\lim_{|x|\to\infty}b(x)\triangleq b_{\infty}\) and \(c(x)\leq\lim_{|x|\to\infty}c(x)\triangleq c_{\infty}\) and one of these inequalities is strict on a set of positive measure.
Our main result is as follows.
Suppose (g) and (H1)–(H2) hold. Problem (1.1) admits at least a ground state solution if \(2< q<4<p<6\).
To prove our main theorem, we need to introduce the limiting equation at infinity related to problem (1.7)
$$ -\Delta v+a_{\infty}\frac{G^{-1}(v)}{g(G^{-1}(v))}+\phi_{v} v=b_{\infty}|v|^{p-2}v-c_{\infty}|v|^{q-2}v, $$
which plays a vital role.
The outline of this paper is as follows. In Sect. 2, we introduce and provide several lemmas. In Sect. 3, we prove the limiting equation (1.8) has a ground state solution. The proof of Theorem 1.1 is completed in Sect. 4.
Throughout this paper we shall denote by C and \(C_{i}\) (\(i=1, 2,\ldots \)) various positive constants whose exact value may change from line to line but are not essential to the analysis of the problem. \(L^{p}(\mathbb {R}^{3})\) (\(1\leq p\leq+\infty\)) is the usual Lebesgue space with the standard norm \(|u|_{p}\). We use "→" and "⇀" to denote the strong and weak convergence in the related function space, respectively. For any \(\rho>0\) and any \(x\in \mathbb {R}^{3}\), \(B_{\rho}(x)\) denotes the ball of radius ρ centered at x, that is, \(B_{\rho}(x):=\{y\in \mathbb {R}^{3}:|y-x|<\rho \}\).
Variational settings and preliminaries
In this section, we will give some lemmas which are useful for the main results. To solve problem (1.1), we firstly introduce some function spaces. Throughout the paper, we consider the Hilbert space \(H^{1}(\mathbb {R}^{3})\) with the inner product and the norm as follows:
$$\begin{aligned}& (u,v)= \int_{\mathbb {R}^{3}}\nabla u\nabla v+a(x)uv\,dx\quad \text{and} \\& \|u\|= \biggl( \int_{\mathbb {R}^{3}}|\nabla u|^{2}+ a(x)u^{2}\,dx \biggr)^{\frac{1}{2}}, \quad \forall u,v\in H^{1}\bigl(\mathbb {R}^{3} \bigr) \end{aligned}$$
which are equivalent to the usual inner product and the norm in \(H^{1}(\mathbb {R}^{3})\) because of the assumption (H1).
Assume (g), the functions \(g(s)\) and \(G(s)\) have the following properties:
both G and \(G^{-1}\) are odd and for all \(s\geq0\), \(t\geq0\), we have
$$G(t)\leq g(t)t,\qquad s/g\bigl(G^{-1}(s)\bigr)\leq G^{-1}(s) \leq s; $$
for all \(s\geq0\), \(G^{-1}(s)/s\) is non-increasing and
$$\lim_{s\to0}\frac{G^{-1}(s)}{s}=\frac{1}{g(0)}=1\quad \textit{and}\quad \lim_{s\to\infty}\frac{G^{-1}(s)}{s}= \textstyle\begin{cases} \frac{1}{g(\infty)}, & \textit{if } g \textit{ is bounded}, \\ 0, & \textit{if } g \textit{ is unbounded}. \end{cases} $$
The proof is standard, see [18, 25] for example. □
$$ f(x,s)\triangleq b(x)|s|^{p-2}s-c(x)|s|^{q-2}s+a(x)s-a(x) \frac {G^{-1}(s)}{g(G^{-1}(s))} $$
$$ F(x,s)\triangleq\frac{1}{p}b(x)|s|^{p}- \frac{1}{q}c(x)|s|^{q}+\frac {1}{2}a(x)s^{2}- \frac{1}{2}a(x) \bigl\vert G^{-1}(s) \bigr\vert ^{2}. $$
The functions \(f(x,s)\) and \(F(x,s)\) satisfy the following properties under the assumptions (g) and (H1)–(H2).
\(f(x,s)=o(s)\) and \(F(x,s)=o(s^{2})\) as \(s\to0^{+}\) uniformly in \(x\in \mathbb {R}^{3}\);
\(f(x,s)=o(s^{5})\) and \(F(x,s)=o(s^{6})\) as \(s\to+\infty\) uniformly in \(x\in \mathbb {R}^{3}\);
\(\frac{1}{4}f(x,s)s-F(x,s)+\frac{1}{4}a(x)s^{2}\geq\frac {1}{4}a(x)|G^{-1}(s)|^{2}\) uniformly in \(x\in \mathbb {R}^{3}\);
\(\lim_{|x|\to\infty}f(x,s)=f_{\infty}(s)\) exists and
$$f_{\infty}(s)\triangleq b_{\infty}|s|^{p-2}s-c_{\infty}|s|^{q-2}s+a_{\infty}s-a_{\infty}\frac{G^{-1}(s)}{g(G^{-1}(s))}. $$
Furthermore, we have
$$ 3f_{\infty}(s)s-f^{\prime}_{\infty}(s)s^{2}-2a_{\infty}s^{2}\leq0\quad \textit {for any } s\in \mathbb {R}. $$
Points (1)–(2) are obvious, see [18] for example. Recalling that \(2< q<4<p<6\) and using Lemma 2.1(1), we have
$$ \begin{aligned}[b] \frac{1}{4}f(x,s)s-F(x,s)+ \frac{1}{4}a(x)s^{2}&=\frac{p-4}{4p}b(x) \vert s \vert ^{p} +\frac{4-q}{4q}c(x) \vert s \vert ^{q} + \frac{1}{4}a(x) \bigl\vert G^{-1}(s) \bigr\vert ^{2} \\ &\quad {}+\frac{1}{4}a(x) \biggl[ \bigl\vert G^{-1}(s) \bigr\vert ^{2}-\frac {G^{-1}(s)s}{g(G^{-1}(s))} \biggr], \end{aligned} $$
which yields the point (3). The first part of point (4) follows from our assumption (H2). Using \(2< q<4<p<6\) and Lemma 2.1(1) again, we obtain
$$\begin{aligned}& 3f_{\infty}(s)s-f^{\prime}_{\infty}(s)s^{2}-2a_{\infty}s^{2} \\& \quad = (4-p)b_{\infty}|s|^{p}+(q-4)c_{\infty}|s|^{q} \\& \qquad {}+a_{\infty}\frac{g(G^{-1}(s))s^{2}-G^{-1}(s)g^{\prime }(G^{-1}(s))s^{2}-3G^{-1}(s)g^{2}(G^{-1}(s))s}{g^{3}(G^{-1}(s))} \\& \quad \leq a_{\infty}\frac{g(G^{-1}(s))s^{2}-G^{-1}(s)g^{\prime }(G^{-1}(s))s^{2}-3G^{-1}(s)g^{2}(G^{-1}(s))s}{g^{3}(G^{-1}(s))} \\& \quad \leq a_{\infty}\frac{-2g(G^{-1}(s))s^{2}-G^{-1}(s)g^{\prime }(G^{-1}(s))s^{2}}{g^{3}(G^{-1}(s))}\leq0, \end{aligned}$$
which gives the second part of point (4). The proof is complete. □
We collect some properties of the functions \(\phi_{v}\).
(see [29, Lemmas 2.1–2.2])
For any \(v\in H^{1}(\mathbb {R}^{3})\), we have:
There exists a constant \(C>0\) such that \(\int_{\mathbb {R}^{3}}\phi _{v}v^{2}\,dx\leq C|v|_{12/5}^{4}\);
\(\phi_{v}(x)\geq0\), \(\phi_{tv}(x)=t^{2}\phi_{v}(x)\) and \(\phi _{v(\cdot+y)}=\phi_{v}(\cdot+y)\);
If \(v_{n}\rightharpoonup v\) in \(H^{1}(\mathbb {R}^{3})\) and \(v_{n}\to v\) a.e. in \(\mathbb {R}^{3}\), we have
$$ \lim_{n\to\infty} \biggl[ \int_{\mathbb {R}^{3}}\phi_{v_{n}}v_{n}^{2}\,dx- \int_{\mathbb {R}^{3}}\phi_{v_{n}-v}(v_{n}-v)^{2} \,dx- \int_{\mathbb {R}^{3}}\phi_{v}v^{2}\,dx \biggr]=0 $$
$$ \lim_{n\to\infty} \biggl[ \int_{\mathbb {R}^{3}}\phi_{v_{n}}v_{n}\varphi \,dx- \int _{\mathbb {R}^{3}}\phi_{v}v\varphi \,dx \biggr]=0 \quad \textit{for any } \varphi\in C^{\infty}_{0}\bigl(\mathbb {R}^{3} \bigr). $$
We now introduce some definitions. Let \((X,\|\cdot\|)\) be a Banach space with its dual space \((X^{-1},\|\cdot\|_{*})\), and Φ be its functional on X. The \((\mathit{Ce})\) sequence (or \((\mathit{PS})\) sequence) at level \(c\in \mathbb {R}\) (\((\mathit{Ce})_{c}\) sequence (\((\mathit{PS})_{c}\) sequence) in short) corresponding to Φ assumes that \(\Phi(x_{n})\to c\) and \((1+\|x_{n}\| )\|\Phi^{\prime}(x_{n})\|_{*}\to0\) (\(\Phi^{\prime}(x_{n})\to0\)) as \(n\to\infty\), where \(\{x_{n}\}\subset X\). If for any \((\mathit{Ce})_{c}\) sequence \(\{x_{n}\}\) in X, there exists a subsequence \(\{x_{n_{k}}\}\) such that \(x_{n_{k}}\to x_{0}\) in X for some \(x_{0}\in X\), then we say that the functional Φ satisfies the so called \((\mathit{Ce})_{c}\) condition.
The functional \(J(v)\) satisfies the mountain-pass geometry, that is,
there exist \(\eta,\rho>0\) such that \(J(v)\geq\eta>0\) when \(\| v\|=\rho\);
there exists \(e\in H^{1}(\mathbb {R}^{3})\) with \(\|e\|>\rho\) such that \(J(e)<0\).
(i) From Lemma 2.2(1)–(2), for any \(\varepsilon >0\), there exists \(C_{\varepsilon }>0\) such that
$$\begin{aligned} J(v) =& \frac{1}{2}\|v\|^{2}+\frac{1}{4} \int_{\mathbb {R}^{3}}\phi_{v}v^{2}\,dx- \int _{\mathbb {R}^{3}}F(x,v)\,dx \\ \geq& \frac{1}{2}\|v\|^{2}-\varepsilon \|v\|^{2}-C_{\varepsilon }\|v\|^{6}. \end{aligned}$$
It follows that
$$J(v) \geq C\|v\|^{2}-C\|v\|^{6} $$
if we choose sufficiently small \(\varepsilon >0\) and \(\rho>0\), which implies the result (i).
(ii) Choosing \(v_{0}\in H^{1}(\mathbb {R}^{3})\setminus \{0\}\) and using Lemma 2.1(1), one has
$$\begin{aligned} J(tv_{0}) \leq&\frac{t^{2}}{2}\|v_{0}\|^{2}+ \frac{t^{4}}{4} \int_{\mathbb {R}^{3}}\phi _{v}v^{2}\,dx- \frac{t^{p}}{p} \int_{\mathbb {R}^{3}}b(x)|v_{0}|^{p}\,dx \\ &{}+ \frac{t^{q}}{q} \int _{\mathbb {R}^{3}}c(x)|v_{0}|^{q}\,dx \to-\infty \end{aligned}$$
as \(t\to+\infty\). Hence letting \(e=t_{0}v_{0}\in H^{1}(\mathbb {R}^{3})\setminus \{ 0\}\) with \(t_{0}\) sufficiently large, we have \(\|e\|>\rho\) and \(J(e)<0\). □
By Lemma 2.4 and the variant mountain-pass theorem [36, Theorem 1], a \((\mathit{Ce})_{c}\) sequence of the functional \(J(v)\) at the level
$$ c:=\inf_{\gamma\in\Gamma}\max_{t\in[0,1]}J\bigl( \gamma(t)\bigr)>0 $$
can be constructed, where the set of paths is defined as
$$\Gamma:= \bigl\{ \gamma\in C \bigl([0,1],H^{1}\bigl(\mathbb {R}^{3} \bigr) \bigr):\gamma (0)=0, J\bigl(\gamma(1)\bigr)< 0 \bigr\} . $$
In other words, there exists a sequence \(\{v_{n}\}\subset H^{1}(\mathbb {R}^{3})\) such that
$$ J(v_{n})\to c, \bigl(1+ \Vert v_{n} \Vert \bigr) \bigl\Vert J^{\prime}(v_{n}) \bigr\Vert _{*}\to0 \quad \text{as } n\to \infty. $$
Any sequence \(\{v_{n}\}\subset H^{1}(\mathbb {R}^{3})\) verifying (2.8) is bounded.
Since \(\{v_{n}\}\subset H^{1}(\mathbb {R}^{3})\) is a \((\mathit{Ce})_{c}\) sequence, we have
$$\begin{aligned} c+1 \geq& J(v_{n})-\frac{1}{4}\bigl\langle J^{\prime }(v_{n}),v_{n}\bigr\rangle \\ =&\frac{1}{4} \int_{\mathbb {R}^{3}}|\nabla v_{n}|^{2}\,dx+ \int_{\mathbb {R}^{3}} \biggl[\frac{1}{4}f(x,v_{n})v_{n}-F(x,v_{n})+ \frac{1}{4}a( x)v_{n}^{2} \biggr]\,dx \\ \stackrel{\text{(2.4)}}{\geq}&\frac{1}{4} \int_{\mathbb {R}^{3}}|\nabla v_{n}|^{2}\,dx+ \frac{1}{4} \int_{\mathbb {R}^{3}}a( x) \bigl\vert G^{-1}(v_{n}) \bigr\vert ^{2} \end{aligned}$$
and by Lemma 2.1(1),
$$\begin{aligned} \begin{aligned}[b] \int_{\mathbb {R}^{3}}a(x)v_{n}^{2}\,dx &= \int _{|G^{-1}(v_{n})|>1}a(x)v_{n}^{2}\,dx+ \int_{|G^{-1}(v_{n})|\leq1}a(x)v_{n}^{2}\,dx \\ & \leq C \int_{\mathbb {R}^{3}}v_{n}^{6}\,dx+g^{2}(1) \int_{|G^{-1}(v_{n})|\leq 1}a(x)\frac{v_{n}^{2}}{g^{2}(G^{-1}(v_{n}))}\,dx \\ &\leq C \int_{\mathbb {R}^{3}}v_{n}^{6}\,dx+g^{2}(1) \int_{\mathbb {R}^{3}}a(x) \bigl\vert G^{-1}(v_{n}) \bigr\vert ^{2}\,dx. \end{aligned} \end{aligned}$$
Combining (2.9)–(2.10) and the Sobolev inequality, the sequence \(\{v_{n}\}\) is bounded. □
Let \(\{v_{n}\}\) be a \((\mathit{Ce})\) sequence of J. Going to a subsequence if necessary, we may assume that \(v_{n}\rightharpoonup v\) in \(H^{1}(\mathbb {R}^{3})\), \(v_{n}\to v\) in \(L^{r}_{\mathrm{loc}}(\mathbb {R}^{3})\) with \(r\in[1,6)\) and \(v_{n}\to v\) a.e. in \(\mathbb {R}^{3}\). Set \(w_{n}=v_{n}-v\), by Lemma 2.2 we have the following lemma.
(see [37, Lemma 1.3])
If \(v_{n}\rightharpoonup v\) in \(H^{1}(\mathbb {R}^{3})\) and \(v_{n}\to v\) a.e. in \(\mathbb {R}^{3}\), then
$$\lim_{n\to\infty} \biggl[ \int_{\mathbb {R}^{3}}F(x,v_{n})\,dx- \int_{\mathbb {R}^{3}}F(x,v)\,dx- \int_{\mathbb {R}^{3}}F(x,w_{n})\,dx \biggr]=0. $$
As a consequence of Lemma 2.6, we have the following lemma.
Let \(\{v_{n}\}\) be a \((\mathit{Ce})\) sequence of J at the level c, and set \(w_{n}=v_{n}-v\), then \(\{w_{n}\}\) is a \((\mathit{PS})\) sequence of J at the level \(c-J(v)\).
We claim first that \(J^{\prime}(v)=0\). In fact, it is enough to show that \(\langle J^{\prime}(v),\varphi\rangle=0\) for any \(\varphi\in C^{\infty}_{0}(\mathbb {R}^{3})\). By Lemma 2.2(1)–(2), it is easy to verify
$$\lim_{n\to\infty} \int_{\mathbb {R}^{3}}f(x,w_{n})\varphi \,dx=\lim _{n\to\infty } \biggl[ \int_{\mathbb {R}^{3}}f(x,v_{n})\varphi \,dx- \int_{\mathbb {R}^{3}}f(x,v)\varphi \,dx \biggr]=0. $$
Using the above formula and (2.6), we have
$$ 0=\lim_{n\to\infty}\bigl\langle J^{\prime}(v_{n}), \varphi\bigr\rangle =\bigl\langle J^{\prime}(v),\varphi\bigr\rangle $$
which yields the claim. By (2.11), we derive
$$\begin{aligned}& \lim_{n\to\infty}J(w_{n})=\lim_{n\to\infty} \bigl[J(v_{n})-J(v) \bigr]=c-J(v), \\& \lim_{n\to\infty}\bigl\langle J^{\prime}(w_{n}), \varphi\bigr\rangle =\lim_{n\to\infty} \bigl[\bigl\langle J^{\prime}(v_{n}),\varphi\bigr\rangle -\bigl\langle J^{\prime}(v),\varphi\bigr\rangle \bigr]=0, \end{aligned}$$
which show that \(\{w_{n}\}\) is a \((\mathit{PS})\) sequence of J at the level \(c-J(v)\). □
The proofs of the following lemmas can be found in the corresponding references.
(see [38, 39])
Let \(\{\rho_{n}\}\) be a sequence of nonnegative functions satisfying \(|\rho_{n}|_{1}=\lambda \) and \(\lambda >0\) is fixed, then there exists a subsequence, still denoted by \(\{\rho_{n}\}\), satisfying one of the following two possibilities:
(Vanishing) for any fixed \(R > 0\), we have
$$\lim_{n\to\infty}\sup_{y\in \mathbb {R}^{N}} \int_{B_{R}(y)}\rho_{n}(x)\,dx=0. $$
(Nonvanishing) there exist \(\beta>0\), \(\overline{R}\in (0,+\infty)\) and \(\{y_{n}\}\subset \mathbb {R}^{N}\) such that
$$\lim_{n\to\infty} \int_{B_{\overline{R}}(y_{n})}\rho_{n}(x)\,dx\geq \beta>0. $$
Assume that \(\{u_{n}\}\) is bounded in \(H^{1}(\mathbb {R}^{3})\) and satisfies
$$\lim_{n\to\infty}\sup_{y\in \mathbb {R}^{3}} \int_{B_{R}(y)}|u_{n}|^{2}\,dx=0, $$
for some \(R>0\). Then \(u_{n}\to0\) in \(L^{r}(\mathbb {R}^{3})\) for every \(2< r<6\).
The existence of ground state solution for limit equation at infinity
In this section, by employing Ekeland's variational principle [40], we prove the existence of ground state solution for problem (1.8) which is the limit equation of problem (1.7) at infinity. To establish the ground sate solution of problem (1.8), we set
$$m_{\infty}=\inf_{u\in\mathcal{N}_{\infty}}J_{\infty}(u) \quad \text{and}\quad \mathcal{N}_{\infty}= \bigl\{ u\in H^{1}\bigl( \mathbb {R}^{3}\bigr)\setminus \{0\}:\bigl\langle J_{\infty}^{\prime}(u),u \bigr\rangle =0 \bigr\} . $$
Since \(a_{\infty}\) is positive, the norm and inner product in this section are not distinguished by the norm and inner product used in the previous section. To show that the Nehair manifold \(\mathcal{N}_{\infty}\) is nonempty and \(m_{\infty}\) is well defined, we prove the following lemma.
Assume (g) and (H1)–(H2), then we have the following properties:
For any \(v\in H^{1}(\mathbb {R}^{3})\setminus \{0\}\), there exists a unique \(t_{v}>0\) such that \(t_{v}v\in\mathcal{N}_{\infty}\) and \(J_{\infty}(t_{v}v)=\max_{t\geq0}J_{\infty}(tv)\). In particular, if \(v\in\mathcal {N}_{\infty}\) we have \(J_{\infty}(v)=\max_{t\geq0}J_{\infty}(tv)\);
There exists \(\alpha> 0\) such that \(\|u\|\geq\alpha\) for all \(u\in\mathcal{N}_{\infty}\);
\(J_{\infty}\) is bounded from below on \(\mathcal{N}_{\infty}\) by a positive constant;
\(J_{\infty}\) is coercive on \(\mathcal{N}_{\infty}\), i.e. \(J_{\infty}(v)\to\infty\) as \(\|v\|\to\infty\) when \(v\in\mathcal {N}_{\infty}\).
(a) Given \(v\in H^{1}(\mathbb {R}^{3})\setminus \{0\}\), by [41, Lemma 2.4] and Lemma 2.2(1)–(2) we derive that
$$\xi(t)=c_{1}t^{2}+c_{2}t^{4}- \int_{\mathbb {R}^{3}}F_{\infty}(tv)\,dx $$
has a unique positive critical point which corresponds to its maximum, where the fact \(F_{\infty}(tv)/t^{2}\to\infty\) as \(t\to\infty\) is used. Let \(\xi(t)=J_{\infty}(tv)\), then we can conclude the result (a).
(b) If \(v\in\mathcal{N}_{\infty}\), using Lemma 2.2(1)–(2) one has
$$\begin{aligned} \|v\|^{2} &\leq\|v\|^{2}+ \int_{\mathbb {R}^{3}}\phi_{v}v^{2}\,dx= \int_{\mathbb {R}^{3}}f_{\infty}(v)v\,dx\leq \varepsilon \int_{\mathbb {R}^{3}}v^{2}\,dx+C_{\varepsilon }\int_{\mathbb {R}^{3}}v^{6}\,dx \\ & \leq C\varepsilon \|v\|^{2}+CC_{\varepsilon }\|v\|^{6}= \frac{1}{2}\|v\|^{2}+C\|v\|^{6} \end{aligned}$$
if we choose \(C\varepsilon =1/2\) and the result (b) follows.
(c) If \(v\in\mathcal{N}_{\infty}\), then
$$J_{\infty}(v)=J_{\infty}(v)-\frac{1}{4}\bigl\langle J_{\infty}^{\prime }(v),v\bigr\rangle \geq \frac{1}{4} \int_{\mathbb {R}^{3}}|\nabla v|^{2}\,dx+\frac{1}{4} \int_{\mathbb {R}^{3}}a_{\infty}\bigl\vert G^{-1}(v) \bigr\vert ^{2}\,dx, $$
which together with the result (b) and (2.10) gives the result (c).
(d) Combining the above formula and (2.10), the result (d) is obvious. □
Assume (g), then \(m_{\infty}=\inf_{u\in\mathcal{N}_{\infty}}J_{\infty}(u)\) can be attained.
If it is possible to verify that a minimizing sequence of \(m_{\infty}\) is radially symmetric, the minimizer may be easily obtained. In particular, the minimizer is a critical point of \(J_{\infty}\) restricted on \(\mathcal{N}_{\infty}\). Finally, we proceed as follows.
Step 1: Any minimizing sequence of \(m_{\infty}\) can be radially symmetric.
Let \(\{v_{n}\}\subset H^{1}(\mathbb {R}^{3})\) be a minimizing sequence of \(m_{\infty}\), that is, \(\langle J^{\prime}_{\infty}(v_{n}),v_{n}\rangle=0\) and \(J_{\infty}(v_{n})\to m_{\infty}\) as \(n\to\infty\). According to the Schwarz symmetrization \(v_{n}^{*}\) of \(v_{n}\), we know that \(v_{n}^{*}\) is continuous and nonnegative and satisfies
$$\begin{aligned}& \int_{\mathbb {R}^{3}} \bigl\vert \nabla v_{n}^{*} \bigr\vert ^{2}\,dx\leq \int_{\mathbb {R}^{3}} \vert \nabla v_{n} \vert ^{2}\,dx \quad \text{and} \\& \int_{\mathbb {R}^{3}}h\bigl(v_{n}^{*}\bigr)\,dx= \int_{\mathbb {R}^{3}}h(v_{n})\,dx\quad \text{for any } h(v_{n})\in L^{1}\bigl(\mathbb {R}^{3}\bigr), \end{aligned}$$
which give \(\langle J^{\prime}_{\infty}(v_{n}^{*}),v_{n}^{*}\rangle\leq \langle J^{\prime}_{\infty}(v_{n}),v_{n}\rangle=0\). It is obvious that \(\langle J^{\prime}_{\infty}(tv_{n}^{*}),tv_{n}^{*}\rangle>0\) for sufficiently small \(t>0\). Hence there is \(t_{0}\in(0,1]\) satisfying \(\langle J^{\prime}_{\infty}(t_{0}v_{n}^{*}),t_{0}v_{n}^{*}\rangle=0\) and then
$$\begin{aligned} m_{\infty} \leq&J_{\infty}\bigl(t_{0}v_{n}^{*} \bigr)=J_{\infty}\bigl(t_{0}v_{n}^{*}\bigr)- \frac {1}{4}\bigl\langle J^{\prime}_{\infty}\bigl(t_{0}v_{n}^{*}\bigr),t_{0}v_{n}^{*} \bigr\rangle \\ =&\frac{t_{0}^{2}}{4} \int_{\mathbb {R}^{3}} \bigl\vert \nabla v_{n}^{*} \bigr\vert ^{2}\,dx+\frac{1}{4} \int _{\mathbb {R}^{3}}a_{\infty}\bigl\vert G^{-1} \bigl(t_{0}v_{n}^{*}\bigr) \bigr\vert ^{2}\,dx+ \frac{p-4}{4p}t_{0}^{p} \int_{\mathbb {R}^{3}}b_{\infty}\bigl\vert v_{n}^{*} \bigr\vert ^{p}\,dx \\ &{}+\frac{4-q}{4q}t_{0}^{q} \int_{\mathbb {R}^{3}}c_{\infty}\bigl\vert v_{n}^{*} \bigr\vert ^{q}\,dx \\ \leq&\frac{1}{4} \int_{\mathbb {R}^{3}} \bigl\vert \nabla v_{n}^{*} \bigr\vert ^{2}\,dx+\frac{1}{4} \int _{\mathbb {R}^{3}}a_{\infty}\bigl\vert G^{-1} \bigl(v_{n}^{*}\bigr) \bigr\vert ^{2}\,dx+\frac{p-4}{4p} \int_{\mathbb {R}^{3}}b_{\infty}\bigl\vert v_{n}^{*} \bigr\vert ^{p}\,dx \\ &{}+\frac{4-q}{4q} \int_{\mathbb {R}^{3}}c_{\infty}\bigl\vert v_{n}^{*} \bigr\vert ^{q}\,dx \\ \leq&\frac{1}{4} \int_{\mathbb {R}^{3}} \vert \nabla v_{n} \vert ^{2}\,dx+\frac{1}{4} \int_{\mathbb {R}^{3}}a_{\infty}\bigl\vert G^{-1}(v_{n}) \bigr\vert ^{2}\,dx+\frac{p-4}{4p} \int_{\mathbb {R}^{3}}b_{\infty} \vert v_{n} \vert ^{p}\,dx \\ &{}+\frac{4-q}{4q} \int_{\mathbb {R}^{3}}c_{\infty} \vert v_{n} \vert ^{q}\,dx \\ =&J_{\infty}(v_{n})-\frac{1}{4}\bigl\langle J^{\prime}_{\infty}(v_{n}),v_{n}\bigr\rangle =J_{\infty}(v_{n})\to m_{\infty} \end{aligned}$$
which yields \(t_{0}=1\). Therefore we conclude that \(\langle J^{\prime }_{\infty}(v_{n}^{*}),v_{n}^{*}\rangle=0\) and \(J_{\infty}(v_{n}^{*})\to m_{\infty}\) as \(n\to\infty\). So the proof of Step 1 is complete.
Step 2: Any minimizing sequence of \(m_{\infty}\) contains a strongly convergent subsequence.
Let \(\{v_{n}\}\subset H^{1}(\mathbb {R}^{3})\) be a minimizing sequence of \(m_{\infty}\), then similar to Lemma 2.5 we know that \(\{v_{n}\}\) is bounded in \(H^{1}(\mathbb {R}^{3})\). From the Step 1, we know that \(\{v_{n}\}\) is radially symmetric. Up to a subsequence, there exists \(v\in H^{1}(\mathbb {R}^{3})\) such that \(v_{n}\rightharpoonup v\) in \(H^{1}(\mathbb {R}^{3})\), \(v_{n}\to v\) in \(L^{r}(\mathbb {R}^{3})\) with \(r\in(2,6)\) and \(v_{n}\to v\) a.e. in \(\mathbb {R}^{3}\). By using Fatou's lemma, one has
$$\begin{aligned}& \int_{\mathbb {R}^{3}}|\nabla v|^{2}\,dx+ \int_{\mathbb {R}^{3}}a_{\infty}\bigl|G^{-1}(v)\bigr|^{2} \,dx+ \int _{\mathbb {R}^{3}}\phi_{v}v^{2}\,dx \\& \quad \leq\liminf_{n\to\infty} \biggl( \int_{\mathbb {R}^{3}}|\nabla v_{n}|^{2}\,dx+ \int_{\mathbb {R}^{3}}a_{\infty}\bigl\vert G^{-1}(v_{n}) \bigr\vert ^{2}\,dx+ \int_{\mathbb {R}^{3}}\phi _{v_{n}}{v_{n}}^{2} \,dx \biggr) \\& \quad = \liminf_{n\to\infty} \biggl( \int_{\mathbb {R}^{3}}b_{\infty}| v_{n}|^{p}\,dx- \int_{\mathbb {R}^{3}}c_{\infty}|v_{n}|^{q}\,dx \biggr) \\& \quad = \int_{\mathbb {R}^{3}}b_{\infty}|v|^{p}\,dx- \int_{\mathbb {R}^{3}}c_{\infty}|v|^{q}\,dx, \end{aligned}$$
which gives \(\langle J^{\prime}_{\infty}(v),v\rangle\leq0\). It is easy to check that \(\langle J^{\prime}_{\infty}(tv),tv\rangle>0\) for sufficiently small \(t>0\). Hence there exists \(\overline{t}\in(0,1]\) such that \(\langle J^{\prime}_{\infty}(\overline{t}v),\overline {t}v\rangle=0\) and then
$$\begin{aligned} m_{\infty} \leq&J_{\infty}(\overline{t}v)=J_{\infty}(\overline {t}v)-\frac{1}{4}\bigl\langle J^{\prime}_{\infty}( \overline{t}v),\overline {t}v\bigr\rangle \leq J_{\infty}(v)- \frac{1}{4}\bigl\langle J^{\prime}_{\infty}(v),v\bigr\rangle \\ \leq& \liminf_{n\to\infty} \biggl[J_{\infty}(v_{n})- \frac {1}{4}\bigl\langle J^{\prime}_{\infty}(v_{n}),v_{n} \bigr\rangle \biggr]=m_{\infty}, \end{aligned}$$
which indicates that \(\overline{t}=1\). Thus we have \(\langle J^{\prime }_{\infty}(v),v\rangle=0\) and \(J_{\infty}(v)=m_{\infty}\). □
Proposition 3.3
Assume (g), any minimizer of \(m_{\infty}\) is a critical point of \(J_{\infty}\) in \(H^{1}(\mathbb {R}^{3})\).
If v is a minimizer of \(m_{\infty}\), according to Lemma 3.2 we know that v is a critical point of \({J_{\infty }}|_{\mathcal{N}_{\infty}}\), that is, \(v\in\mathcal{N}_{\infty}\) and \({J_{\infty}^{\prime}}|_{\mathcal{N}_{\infty}}(v)=0\). Hence there is a Lagrange multiplier \(\lambda\in \mathbb {R}\) such that \(J_{\infty}^{\prime }(v)=\lambda\Psi_{\infty}^{\prime}(v)\), where \(\Psi_{\infty}(v)=\langle J_{\infty}^{\prime}(v),v\rangle\). To end the proof, it is enough to show that \(\lambda=0\).
In fact, using \(0=\langle J_{\infty}^{\prime}(v),v\rangle=\lambda \langle\Psi_{\infty}^{\prime}(v),v\rangle\) and \(v\in H^{1}(\mathbb {R}^{3})\setminus \{0\}\) we have
$$\begin{aligned} \bigl\langle \Psi_{\infty}^{\prime}(v),v\bigr\rangle =&2 \int_{\mathbb {R}^{3}}|\nabla v|^{2}+a_{\infty}v^{2}\,dx+4 \int_{\mathbb {R}^{3}}\phi_{v}v^{2}\,dx - \int_{\mathbb {R}^{3}} \bigl[f_{\infty}^{\prime}(v)v^{2}+f_{\infty}(v)v \bigr]\,dx \\ =&-2 \int_{\mathbb {R}^{3}}|\nabla v|^{2}\,dx+ \int_{\mathbb {R}^{3}} \bigl[3f_{\infty}(v)v-f_{\infty}^{\prime}(v)v^{2}-2a_{\infty}v^{2} \bigr]\,dx \\ \stackrel{\text{(2.3)}}{\leq}&-2 \int_{\mathbb {R}^{3}}|\nabla v|^{2}\,dx< 0, \end{aligned}$$
which implies that \(\lambda=0\). Hence the proof is complete. □
Assume (g), the system (1.8) has a ground state solution.
In view of Lemma 3.1, we know that the Nehair manifold \(\mathcal{N}_{\infty}\) is nonempty and \(m_{\infty}\) is well defined. It follows from Lemma 3.2 that \(m_{\infty}\) is attained by some \(v\in\mathcal{N}_{\infty}\) and v is a critical point of \({J_{\infty}}|{\mathcal{N}_{\infty}}\). By Proposition 3.4, we have \(J_{\infty}^{\prime}(v)=0\) in the whole space \(H^{1}(\mathbb {R}^{3})\). Consequently, we have shown that \(J(v)=m_{\infty}>0\) and \(J_{\infty}^{\prime}(v)=0\), which show that v is a ground state solution of problem (1.8). The proof is complete. □
Proof of Theorem 1.1
In this section, we prove Theorem 1.1 by applying the variant mountain-pass theorem [36]. In the following, we will verify that the mountain-pass value c given by (2.7) is satisfied.
Suppose that (g) and (H1)–(H2) hold, then \(0< c< m_{\infty}\).
By Theorem 3.4, we know that there exists \(v\in H^{1}(\mathbb {R}^{3})\) such that \(J_{\infty}^{\prime}(v)=0\) and \(J_{\infty}(v)=m_{\infty}>0\). Since \(J(0)=0\) and \(\lim_{t\to\infty}J(tv)=-\infty\), there exists \(\tilde{t}>0\) such that
$$J(\tilde{t}v)=\max_{t\geq0}J(tv). $$
Choosing a sufficient large \(t_{0}>0\) to satisfy \(J(t_{0}v)<0\), then \(\gamma_{0}(t)=tt_{0}v\in\Gamma\) and hence
$$c\leq\max_{t\in[0,1]}J\bigl(\gamma_{0}(t)\bigr)\leq\max _{t\geq 0}J(tv)=J(\tilde{t}v)< J_{\infty}(\tilde{t}v) \leq J_{\infty}(v)=m_{\infty}, $$
where (H2) yields the strict inequality. The proof is complete. □
Assume that (g) and (H1)–(H2) hold, then \(J(v)\) satisfies the \((\mathit{Ce})_{c}\) condition if \(c\in(0,m_{\infty})\).
Let \(\{v_{n}\}\) be a \((\mathit{Ce})_{c}\) sequence of \(J(v)\). Similar to Lemma 2.5, we find that \(\{v_{n}\}\) is bounded in \(H^{1}(\mathbb {R}^{3})\) and there exists a subsequence, still denoted by \(\{v_{n}\}\), such that
$$\textstyle\begin{cases} v_{n}\rightharpoonup v, &\text{in } H^{1}(\mathbb {R}^{3}), \\ v_{n}\to v, &\text{in } L^{r}_{\mathrm{loc}}(\mathbb {R}^{3}) \text{ for } 1\leq r< 6, \\ v_{n}\to v, &\text{a.e. in } \mathbb {R}^{3}, \end{cases} $$
and \(J^{\prime}(v)=0\) with \(J(v)\geq0\) by (2.4). Denote \(w_{n}=v_{n}-v\), then it follows from Lemma 2.7 that \(\{w_{n}\}\) is a \((\mathit{PS})\) sequence of \(J(v)\) at the level \(c-J(v)\). To prove that \(\|w_{n}\|\to0\) as \(n\to\infty\), we divide our proof into the following two steps.
Step 1: The nonvanishing case for \(\rho_{n}=|w_{n}|_{2}^{2}\) in Lemma 2.8 can never occur.
Arguing indirectly, by Lemma 2.8 we know that there exist \(\beta>0\), \(\overline{R}\in(0,+\infty)\) and \(\{y_{n}\}\subset \mathbb {R}^{N}\) such that
$$ \liminf_{n\to\infty} \int_{B_{\overline{R}}(y_{n})}|w_{n}|^{2}\,dx\geq \beta>0. $$
Without loss of generality, we choose \(|y_{n}|\to\infty\) as \(n\to \infty\). Otherwise, \(\{w_{n}\}\) is tight, and thus \(|w_{n}|_{2}\to0\) as \(n\to\infty\), which yields a contradiction to (4.1). Denote \(\overline{w}_{n}(x)=w_{n}(x+y_{n})\). Since \(\limsup_{n\to\infty}\| \overline{w}_{n}\|=\limsup_{n\to\infty}\|w_{n}\|\leq C<+\infty \), we may assume that there exists \(w_{0}\in H^{1}(\mathbb {R}^{3})\) such that \(\overline{w}_{n}\rightharpoonup w_{0}\) in \(H^{1}(\mathbb {R}^{3})\), \(\overline{w}_{n}\to w_{0}\) in \(L^{r}_{\mathrm{loc}}(\mathbb {R}^{3})\) with \(r\in[1,6)\) and \(\overline{w}_{n}\to w_{0}\) a.e. in \(\mathbb {R}^{3}\). We then claim that \(\{ \overline{w}_{n}\}\) is a \((\mathit{PS})\) sequence of \(J_{\infty}(v)\) at the level \(c-J(v)\). In fact, in view of \(\lim_{n\to\infty}a(x+y_{n})=a_{\infty}\), we have
$$\int_{\mathbb {R}^{3}}a(x) \bigl\vert G^{-1}(w_{n}) \bigr\vert ^{2}\,dx= \int_{\mathbb {R}^{3}}a(x+y_{n}) \bigl\vert G^{-1}( \overline{w}_{n}) \bigr\vert ^{2}\,dx = \int_{\mathbb {R}^{3}}a_{\infty}\bigl\vert G^{-1}( \overline{w}_{n}) \bigr\vert ^{2}\,dx+o(1) $$
and by \(\lim_{n\to\infty}b(x+y_{n})=b_{\infty}\) and \(\lim_{n\to\infty }c(x+y_{n})=c_{\infty}\),
$$\begin{aligned}& \int_{\mathbb {R}^{3}}b(x)|w_{n}|^{p}\,dx= \int_{\mathbb {R}^{3}}b_{\infty}|\overline{w}_{n}|^{p} \,dx+o(1), \\& \int_{\mathbb {R}^{3}}c(x)|w_{n}|^{q}\,dx= \int_{\mathbb {R}^{3}}c_{\infty}|\overline{w}_{n}|^{q} \,dx+o(1), \end{aligned}$$
which give
$$\begin{aligned} J(w_{n}) &=\frac{1}{2} \int_{\mathbb {R}^{3}}|\nabla w_{n}|^{2}+a(x)w_{n}^{2} \,dx+\frac {1}{4} \int_{\mathbb {R}^{3}}\phi_{w_{n}}w_{n}^{2}\,dx- \int_{\mathbb {R}^{3}}F(x,w_{n})\,dx \\ & =\frac{1}{2} \int_{\mathbb {R}^{3}}|\nabla\overline{w}_{n}|^{2}+a_{\infty}\overline{w}_{n}^{2}\,dx+\frac{1}{4} \int_{\mathbb {R}^{3}}\phi_{\overline{w}_{n}}\overline{w}_{n}^{2} \,dx- \int_{\mathbb {R}^{3}}F_{\infty}(\overline{w}_{n}) \,dx+o(1) \\ &=J_{\infty}(\overline{w}_{n})+o(1). \end{aligned}$$
On the other hand, since \(\lim_{n\to\infty}a(x+y_{n})=a_{\infty}\), for any \(\phi\in C^{\infty}_{0}(R^{N})\) we have
$$\begin{aligned} &\biggl\vert \int_{\mathbb {R}^{3}} \bigl[a(x+y_{n})-a_{\infty}\bigr] \frac {G^{-1}(\overline{w}_{n})}{g(G^{-1}(\overline{w}_{n}))}\phi \,dx \biggr\vert \\ &\quad \leq \int_{\mathbb {R}^{3}} \bigl\vert \bigl[a(x+y_{n})-a_{\infty}\bigr]\overline{w}_{n}\phi \bigr\vert \,dx \\ &\quad \leq C \biggl( \int_{\mathbb {R}^{3}} \bigl\vert a(x+y_{n})-a_{\infty}\bigr\vert ^{2}|\phi|^{2} \biggr)^{\frac{1}{2}}\to0. \end{aligned}$$
Denote \(\phi_{n}(x)=\phi(x-y_{n})\), we can deduce that
$$\begin{aligned} &\int_{\mathbb {R}^{3}}a(x)\frac{G^{-1}(w_{n})}{g(G^{-1}(w_{n}))}\phi _{n}\,dx \\ &\quad = \int_{\mathbb {R}^{3}}a(x+y_{n})\frac{G^{-1}(\overline{w}_{n})}{g(G^{-1}(\overline{w}_{n}))}\phi \,dx \\ &\quad = \int_{\mathbb {R}^{3}}a_{\infty}\frac{G^{-1}(\overline{w}_{n})}{g(G^{-1}(\overline{w}_{n}))}\phi \,dx+ \int_{\mathbb {R}^{3}} \bigl[a(x+y_{n})-a_{\infty}\bigr] \frac{G^{-1}(\overline{w}_{n})}{g(G^{-1}(\overline{w}_{n}))}\phi \,dx \\ &\quad = \int_{\mathbb {R}^{3}}a_{\infty}\frac{G^{-1}(\overline{w}_{n})}{g(G^{-1}(\overline{w}_{n}))}\phi \,dx+o(1). \end{aligned}$$
Similarly, we derive
$$ \begin{aligned} & \int_{\mathbb {R}^{3}}b(x)|w_{n}|^{p-2}w_{n} \phi_{n}\,dx= \int_{\mathbb {R}^{3}}b_{\infty}|\overline{w}_{n}|^{p-2} \overline{w}_{n}\phi \,dx +o(1), \\ & \int_{\mathbb {R}^{3}}c(x)|w_{n}|^{q-2}v_{n} \phi_{n}\,dx= \int_{\mathbb {R}^{3}}c_{\infty}|\overline{w}_{n}|^{q-2} \overline{w}_{n}\phi \,dx +o(1). \end{aligned} $$
Combining (4.2) and (4.3), we have
$$\bigl\langle J_{\infty}^{\prime}(\overline{w}_{n}),\phi \bigr\rangle = \bigl\langle J^{\prime}(w_{n}),\phi_{n} \bigr\rangle +o(1)=o(1). $$
Hence the claim is true. Furthermore, we can conclude that \(J_{\infty}^{\prime}(w_{0})=0\). We now use (4.1) to show that \(w_{0}\not \equiv0\). In the contrary case \(\overline{w}_{n}\to0\) in \(L^{2}_{\mathrm{loc}}(\mathbb {R}^{3})\), we have
$$\begin{aligned} 0 =&\lim_{n\to\infty} \int_{B_{\overline{R}}(0)}|\overline{w}_{n}|^{2}\,dx= \lim _{n\to\infty} \int_{B_{\overline{R}}(y_{n})}|w_{n}|^{2}\,dx \\ \geq&\liminf _{n\to\infty} \int_{B_{\overline {R}}(y_{n})}|w_{n}|^{2}\,dx\geq\beta>0, \end{aligned}$$
which yields a contradiction. Thus \(w_{0}\not\equiv0\).
Denote \(z_{n}=\overline{w}_{n}-w_{0}\), by the Brézis–Lieb lemma [42] we easily get
$$J_{\infty}(\overline{w}_{n})=J_{\infty}(z_{n})+J_{\infty}(w_{0})+o(1) $$
$$J_{\infty}^{\prime}(\overline{w}_{n})=J_{\infty}^{\prime}(z_{n})+J_{\infty}^{\prime}(w_{0})+o(1)=J_{\infty}^{\prime}(z_{n})+o(1). $$
Hence we have
$$\liminf_{n\to\infty}J_{\infty}(z_{n}) = \liminf _{n\to\infty} \biggl[J_{\infty}(z_{n})- \frac{1}{4}\bigl\langle J_{\infty}^{\prime}(z_{n}),z_{n} \bigr\rangle \biggr] \stackrel{\text{(2.4)}}{\geq} 0 $$
$$\begin{aligned} c & =J(v_{n})+o(1)=J(v)+J(w_{n})+o(1)=J(v)+J_{\infty}( \overline{w}_{n})+o(1) \\ &=J(v)+J_{\infty}(z_{n})+J_{\infty}(w_{0})+o(1), \end{aligned}$$
which implies that
$$c\geq J_{\infty}(w_{0})\geq m_{\infty}, $$
a contradiction. So the proof of Step 1 is complete.
Step 2: \(\|w_{n}\|\to0\) as \(n\to\infty\).
In fact, as a consequence of Step 1 and Lemma 2.8, for any fixed \(R>0\), we have
$$\lim_{n\to\infty}\sup_{y\in \mathbb {R}^{3}} \int_{B_{R}(y)}w_{n}^{2}\,dx=0. $$
By Lemma 2.9, we have
$$\lim_{n\to\infty} \int_{\mathbb {R}^{3}}|w_{n}|^{r}\,dx=0 \quad \text{for all } 2< r< 6. $$
Since \(2< q< p<6\), by (H1) we have
$$\lim_{n\to\infty} \int_{\mathbb {R}^{3}}b(x)|w_{n}|^{p}\,dx=\lim _{n\to\infty} \int _{\mathbb {R}^{3}}c(x)|w_{n}|^{q}\,dx=0. $$
$$\lim_{n\to\infty} \biggl( \int_{\mathbb {R}^{3}}|\nabla w_{n}|^{2}\,dx + \int_{\mathbb {R}^{3}}a(x) \bigl\vert G^{-1}(w_{n}) \bigr\vert ^{2}\,dx \biggr)=\lim_{n\to\infty}\bigl\langle J^{\prime}(w_{n}),w_{n}\bigr\rangle =0, $$
which together with (2.10) yields \(\|w_{n}\|\to0\) as \(n\to \infty\).
Summing the above two steps, we obtain \(v_{n}\to v\) in \(H^{1}(\mathbb {R}^{3})\) as \(n\to\infty\). □
Complement of proof of Theorem 1.1
By Lemma 2.4 and the variant mountain-pass theorem [36], a sequence verifying (2.8) can be obtained. Using Lemma 4.1 and Proposition 4.2, there exists \(v\in H^{1}(\mathbb {R}^{3})\) such that \(J^{\prime}(v)=0\) and \(J(v)=c>0\), which imply that v is a nontrivial solution to (1.1). To show the existence of ground state solution, we set
$$m=\inf_{u\in\mathcal{N}}J(u) \quad \text{and}\quad \mathcal{N}= \bigl\{ u \in H^{1}\bigl(\mathbb {R}^{3}\bigr)\setminus \{0\}:\bigl\langle J^{\prime}(u),u\bigr\rangle =0 \bigr\} . $$
Obviously, we have \(c\geq m\). On the other hand, for any \(v\in\mathcal {N}\), choosing a sufficient large \(t_{0}>0\) to satisfy \(J(t_{0}v)<0\), then \(\gamma_{0}(t)=tt_{0}v\in\Gamma\) and similar to the proof of Lemma 3.1 \((a)\) we obtain
$$c\leq\max_{t\in[0,1]}J\bigl(\gamma_{0}(t)\bigr)\leq\max _{t\geq0}J(tv)=J(v), $$
which indicates that \(m\geq c\). Hence \(J(v)=m>0\). The proof is complete. □
In this paper, we consider the existence of ground state solutions for a class of generalized quasilinear Schrödinger–Poisson systems in \(\mathbb {R}^{3}\). By employing a change of variables constructed by Shen–Wang [19] and used in [18, 25] and the references therein, the generalized quasilinear systems are reduced to a semilinear one, whose associated functionals are well defined in the usual Sobolev space and satisfy the mountain-pass geometric. To obtain a ground state solution, the nonlinearities in [18, 25] are assumed to satisfy the monotone condition which is unnecessary in this paper and we believe that it is a partial extension which can reduce the restrictions on the nonlinearity.
Berestycki, H., Lions, P.L.: Nonlinear scalar field equations I, existence of a ground state. Arch. Ration. Mech. Anal. 84, 313–346 (1983)
Berestycki, H., Lions, P.L.: Nonlinear scalar field equations II, existence of infinitely many solutions. Arch. Ration. Mech. Anal. 82, 347–375 (1983)
Rabinowitz, P.: On a class of nonlinear Schrödinger equations. Z. Angew. Math. Phys. 43, 270–291 (1992)
Bahrouni, A., Ounaies, H., Radulescu, V.D.: Infinitely many solutions for a class of sublinear Schrödinger equations with indefinite potentials. Proc. R. Soc. Edinb., Sect. A 145, 445–465 (2015)
Chaieb, M., Dhifli, A., Zermani, S.: Existence and asymptotic behavior of positive solutions of a semilinear elliptic system in a bounded domain. Opusc. Math. 36, 315–336 (2016)
Kurihara, S.: Large-amplitude quasi-solitons in superfluid films. J. Phys. Soc. Jpn. 50, 3262–3267 (1981)
Laedke, E., Spatschek, K., Stenflo, L.: Evolution theorem for a class of perturbed envelope soliton solutions. J. Math. Phys. 24, 2764–2769 (1983)
Borovskii, A., Galkin, A.: Dynamical modulation of an ultrashort high-intensity laser pulse in matter. J. Exp. Theor. Phys. 77, 562–573 (1983)
Chen, X., Sudan, R.: Necessary and sufficient conditions for self-focusing of short ultraintense laser pulse in underdense plasma. Phys. Rev. Lett. 70, 2082–2085 (1993)
Ritchie, B.: Relativistic self-focusing and channel formation in laser-plasma interactions. Phys. Rev. E 50, 687–689 (1994)
De Bouard, A., Hayashi, N., Saut, J.: Global existence of small solutions to a relativistic nonlinear Schrödinger equation. Commun. Math. Phys. 189, 73–105 (1997)
Takeno, S., Homma, S.: Classical planar Heisenberg ferromagnet, complex scalar field and nonlinear excitations. Prog. Theor. Phys. 65, 172–189 (1981)
Quispel, G., Capel, H.: Equation of motion for the Heisenberg spin chain. Physica A 110, 41–80 (1982)
Kosevich, A., Ivanov, B., Kovalev, A.: Magnetic solitons in superfluid films. Phys. Rep. 194, 117–238 (1990)
Poppenberg, M., Schmitt, K., Wang, Z.: On the existence of soliton solutions to quasilinear Schrödinger equations. Calc. Var. Partial Differ. Equ. 14, 329–344 (2002)
Hasse, R.: A general method for the solution of nonlinear soliton and kink Schrödinger equations. Z. Phys. B 37, 83–87 (1980)
Makhankov, V.G., Fedyanin, V.K.: Nonlinear effects in quasi-one-dimensional models of condensed matter theory. Phys. Rep. 104, 1–86 (1984)
Deng, Y., Peng, S., Yan, S.: Positive soliton solutions for generalized quasilinear Schrödinger equations with critical growth. J. Differ. Equ. 258, 115–147 (2015)
Shen, Y., Wang, Y.: Soliton solutions for generalized quasilinear Schrödinger equations. Nonlinear Anal. 80, 194–201 (2013)
Ding, Y., Lin, F.: Solutions of perturbed Schrödinger equations with critical nonlinearity. Calc. Var. Partial Differ. Equ. 30, 231–249 (2007)
Ding, Y., Liu, X.: Semiclassical solutions of Schrödinger equations with magnetic fields and critical nonlinearities. Manuscr. Math. 140, 51–82 (2013)
He, X., Qian, A., Zou, W.: Existence and concentration of positive solutions for quasilinear Schrödinger equations with critical growth. Nonlinearity 26, 3137–3168 (2013)
He, Y., Li, G.: Concentration soliton solutions for quasilinear Schrödinger equations involving critical Sobolev exponents. Discrete Contin. Dyn. Syst. 36, 731–762 (2016)
Li, F., Zhu, X., Liang, Z.: Multiple solutions to a class of generalized quasilinear Schrödinger equations with a Kirchhoff-type perturbation. J. Math. Anal. Appl. 443, 11–38 (2016)
Zhu, X., Li, F., Liang, Z.: Existence of ground state solutions to a generalized quasilinear Schrödinger–Maxwell system. J. Math. Phys. 57, 101505 (2016)
Li, Z., Zhang, Y.: Solutions for a class of quasilinear Schrödinger equations with critical Sobolev exponents. J. Math. Phys. 58, 021501 (2017)
Benci, V., Fortunato, D.: An eigenvalue problem for the Schrödinger–Maxwell equations. Topol. Methods Nonlinear Anal. 11, 283–293 (1998)
Benci, V., Fortunato, D.: Solitary waves of the nonlinear Klein–Gordon coupled with Maxwell equations. Rev. Math. Phys. 14, 409–420 (2002)
Zhao, L., Zhao, F.: On the existence of solutions for the Schrödinger–Poisson equations. J. Math. Anal. Appl. 346, 155–169 (2008)
Jiang, Y., Zhou, H.: Schrödinger–Poisson system with steep potential well. J. Differ. Equ. 251, 582–608 (2011)
Huang, L., Rocha, E., Chen, J.: Two positive solutions of a class of Schrödinger–Poisson system with indefinite nonlinearity. J. Differ. Equ. 255, 2463–2483 (2013)
Sun, J., Ma, S.: Ground state solutions for some Schrödinger–Poisson systems with periodic potentials. J. Differ. Equ. 260, 2119–2149 (2016)
Cortázar, C., Elgueta, M., García-Melián, J.: Analysis of an elliptic system with infinitely many solutions. Adv. Nonlinear Anal. 6, 1–12 (2017)
Filippucci, R., Vinti, F.: Coercive elliptic systems with gradient terms. Adv. Nonlinear Anal. 6, 165–182 (2017)
Ruiz, D.: On the Schrodinger–Poisson–Slater system: behavior of minimizers, radial and nonradial cases. Arch. Ration. Mech. Anal. 198, 349–368 (2010)
Costa, D., Miyagaki, O.: Nontrival solutions for pertubations of the p-Laplacian on unbounded domains. J. Math. Anal. Appl. 193, 737–755 (1995)
Chabrowski, J.: Weak Convergence Methods for Semilinear Elliptic Equations. World Scientific, Singapore (1999)
Lions, P.L.: The concentration-compactness principle in the calculus of variation. The locally compact case. Part I. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 1, 109–145 (1984)
Lions, P.L.: The concentration-compactness principle in the calculus of variation. The locally compact case. Part II. Ann. Inst. Henri Poincaré, Anal. Non Linéaire 1, 223–283 (1984)
Ekeland, I.: Nonconvex minimization problems. Bull. Am. Math. Soc. 1, 443–473 (1979)
Guo, Z.: Ground states for Kirchhoff equations without compactcondition. J. Differ. Equ. 259, 2884–2902 (2015)
Brézis, H., Lieb, E.: A relation between pointwise convergence of functions and convergence of functionals. Proc. Am. Math. Soc. 88, 486–490 (1983)
The author would like to thank the handling editors and anonymous referee for the help in the processing of the paper.
The author was supported by NSFC (Grant Nos. 11371158, 11771165), the program for Changjiang Scholars and Innovative Research Team in University (No. IRT13066).
Hubei Key Laboratory of Mathematical Sciences, Central China Normal University, Wuhan, P.R. China
Liejun Shen
School of Mathematics and Statistics, Central China Normal University, Wuhan, P.R. China
The author read and approved the final manuscript.
Correspondence to Liejun Shen.
The author declares that he has no interests.
Shen, L. Ground state solutions for a class of generalized quasilinear Schrödinger–Poisson systems. Bound Value Probl 2018, 44 (2018). https://doi.org/10.1186/s13661-018-0957-3
Received: 11 September 2017
Ground state
Generalized quasilinear
Mountain-pass theorem
|
CommonCrawl
|
Asian Economics Letters
https://a-e-l.scholasticahq.com/feed
Vol. 4, Issue Early View, 2023January 31, 2023 AEST
Market Efficiency and Volatility Persistence of Green Investments Before and During the COVID-19 Pandemic
OlaOluwa Yaya, Rafiu Akano, Oluwasegun Adekoya,
Green investment Volatility persistence COVID-19 pandemic JEL: C22 Q47
Copyright Logoccby-sa-4.0
Yaya, O., Akano, R., & Adekoya, O. (2023). Market Efficiency and Volatility Persistence of Green Investments Before and During the COVID-19 Pandemic. Asian Economics Letters, 4(Early View).
Data Sets/Files (4)
Download all (4)
Figure 1. Plots of price and log-returns of Green investments
Table 1. Results of I(d) based on Chebyshev polynomial in time
Table 2. Results of persistence of Log-returns based on Robinson's (1994) linear models
Table 3. Results on absolute returns
If this problem reoccurs, please contact Scholastica Support
Using a nonlinear \(I(d)\) framework with Chebyshev polynomial in time, we investigate the market efficiency and volatility persistence of five green investments before and during the COVID-19 pandemic. Our results show that, except for the MSCI global green building index, green investments are more efficient and exhibit higher volatility persistence before the crisis, as compared to the crisis period. Thus, green investors are likely to make arbitrage profits during the pandemic.
The clamor for a low-carbon economy to support friendly environmental projects to alleviate the negative effects of climate change led to the introduction of green investments and, since 2007, their markets have grown from $0.8 billion to $257.7 billion in 2019 (Climate Bonds Initiative, 2019; Hammoudeh et al., 2020). The launch of "Principles of Green Bond" by the International Capital Markets Association in 2014 further created more awareness of green bonds and green stocks among scholars, investors, and policymakers. Green investments are known to be useful in rating a low carbon economy (Larcker & Watts, 2019), and for reducing global coal consumption leading to low CO2 emissions (Glomsrød & Wei, 2018).
Green finance is a future-oriented type of finance that targets the financial industry by improving the environment and enhancing economic growth due to its low-carbon initiative. The current COVID-19 pandemic has affected global finance, quite more than the global financial crisis of 2008/09, with the market fearing more during the health crisis (Yaya et al., 2021). The pandemic led to the further disentanglement of international financial markets, which affected the level of market integration. Quite several papers investigate the impact of the pandemic on financial markets (Darjana et al., 2022; Salisu & Sikiru, 2020), and on energy and oil (Narayan, 2020), among others. The global concern on green finance for economic growth is growing rapidly amid economic and geo-politically-induced uncertainties (Adekoya et al., 2022), particularly the current global health concern as it imparts on green investments.
This paper, therefore, investigates the level of market efficiency and volatility persistence of green investments before and during the COVID-19 pandemic, using a two-year daily data window in each case. While the determination of market efficiency will render useful information for market players in terms of the possibility of trading for excess gains (Gil-Alana et al., 2018; Yaya et al., 2021), the assessment of volatility persistence will help policymakers to know how best to tackle market disruptions caused by a one-time shock to keep the green investment market in shape towards the fulfillment of its environmental sustainability objective. Fractional integration techniques are employed on the datasets to test: (a) the white noise hypothesis in prices and returns; and (b) the persistence in absolute returns used as a proxy for volatility in the series. Thus, market efficiency in price series requires that price series are \(I(d = 1)\) as in the case of random walk, which further implies that the first differences of price series (i.e. the log-returns) are \(I(d = 0).\) Evidence of market inefficiency, thus, means that \(I(d < 1),\) which is the case of long-range dependency of the series.
II. The \(I(d)\) model for testing market efficiency
The persistence analysis conducted in this paper is based on Cuestas and Gil-Alana's (2016) nonlinear \(I(d)\) framework. The authors introduced the Chebyshev polynomials in time to the fractionally integrated model of Robinson (1994) to form a non-linear deterministic test for testing non-linearity in \(I(d)\) processes. The setup of the test is as follows. Consider a general model,
\[ y_t=f\left(\theta ; z_t\right)+x_t, \quad t=1,2, \ldots, \tag{1} \]
where \(y_t\) is the observed time series and \(x_t\) follows an \(I(d)=(1-L)^d\) process, with \(x_t = 0\) for \(t \leq 0\) and \(d>0\) where \(L\) is the lag-operator \((Lx_t=x_{t-1})\) and \(u_t\) is \(I(0)\) series. The function \(f(.)\) is a non-linear function that depends on the unknown parameter vector of dimensions \(m,\ \theta,\ z_{t},\) which is a vector of deterministic terms. Then, Eq. (1) can be re-written as,
\[ y_t=\sum_{i=0}^m \theta_i P_{i, N}(t)+x_t, \quad t=0, \pm 1, \ldots, \tag{2} \]
where the order of the Chebyshev polynomial is m. The Chebyshev polynomial \(P_{i, N}(t)\) in Eq. (2) is defined as,
\[ \begin{align} P_{i, N}(t)&=\sqrt{2} \cos [i \pi(t-0.5) / N], \\ t&=1,2, \ldots, N ; i=1,2, \ldots, \tag{3} \end{align} \]
with \(P_{0, N}(t)=1\). From the polynomial, whenever m = 0, the model is expressed with an intercept only; if m = 1, it contains an intercept and a linear trend, and when m > 1, it becomes non-linear, and the higher m is the less linear the approximated deterministic component becomes. The choice of value for m then depends on the significance of the Chebyshev coefficients.
The non-linear deterministic approach of long-range dependence by Chebyshev polynomials is a modification and improvement of Robinson's (1994) fractional integration technique. Robinson (1994) considers the same setup as in Eqs. (1) and (2) with \(f(.)\) in Eq. (2) of the linear form, \(\theta^{\prime} z_t\), testing the null hypothesis,
\[ H_0: d=d_0, \tag{4} \]
for any real value \(d_0\). Under \(H_0\) and using the two equations,
\[ y_t^*=\theta^{\prime} z_t^*+u_t, \quad t=1,2, \ldots, \tag{5} \]
where \(y_t^*=(1-L)^{d_0} y_t\) and \(z_t^*=(1-L)^{d_0} z_t\). Then, given the linear nature of the above relationship and the \(I(0)\) nature of the error term \(u_t\), the coefficients in Eq. (5) can be estimated by standard Ordinary Least Square (OLS) or Generalized Least Squares (GLS) methods. The same applies to the case of \(f(.)\) containing the Chebyshev polynomials, noting that the relationship is linear in parameters. Thus, combining Eqs. (1) and (3), we obtain,
\[ y_t^*=\sum_{i=0}^m \theta_i P_{i, N}^*(t)+u_t, \quad t=0, \pm 1, \ldots, \tag{6} \]
where \(P_{i, N}^*(t)=(1-L)^{d_0} P_{i, N}(t)\)and using OLS/GLS methods, under the null hypothesis in (4), the residuals \(\hat{u}_t\) are,
\[ \begin{align} \hat{u}_t&=y_t^*-\sum_{i=0}^m \theta_i P_{i, N}^*(t) ; \\ \hat{\theta}&=\left(\sum_{t=1}^N P_t P_t^{\prime}\right)^{-1}\left(\sum_{t=1}^N P_t y_t^*\right), \tag{7} \end{align} \]
and \(P_t\) is the \((m \times 1)\) vector of Chebyshev polynomials. Based on the above residuals \(\hat{u}_t\), we estimate the variance,
\[ \hat{\sigma}^2(\tau)=\frac{2 \pi}{N} \sum_{j=1}^N g\left(\lambda_j ; \hat{\tau}\right)^{-1} I_{\hat{u}}\left(\lambda_j\right) ; \quad \lambda_j=\frac{2 \pi j}{N}, \tag{8} \]
where \(I_{\hat{u}}\left(\lambda_j\right)\) is the periodogram of \(\hat{u}_t\); \(g\) is a function related to the spectral density function of \(u_t\); and the nuisance parameter \(\tau\) is estimated by \(\hat{\tau}=\arg \min _{\tau \in N^*} \sigma^2(\tau)\), where \(N^*\) is a suitable subset of the \(R^{q}\) Euclidean space.
III. Data and Empirical Results
We obtain daily data on green investments from Datastream. We consider a two-year data window before the COVID-19 pandemic, i.e. before the World Health Organization's pandemic declaration date of 11 March 2020, and another two-year data window after this date. Thus, the entire sample analyzed spans 1 March 2018 to 13 January 2022. Five green investment indices, i.e. bonds and stocks, are analyzed. The green bond indices are the S&P Green bond select index (SPGRSLL), and the S&P Green bond index (SPGRBND), while the green stock indices are the Morgan Stanley Capital International (MSCI) global alternative energy index (MSGLAEL), MSCI global pollution prevention index (MSGLPPL), and the MSCI global green building index (MSGLGBL). The MSCI indices for green investments take up about half of the revenue from securities on environmental-friendly projects, such as those of green building, alternative energy, clean water, or pollution prevention. Thus, the five variables analyzed in this paper represent global green investments.
Plots of prices of these green investments are given in Figure 1 with the corresponding log-returns superimposed. The green assets are seen to exhibit significant volatility in both prices and returns, with stronger evidence since 2020. The relative stable trend enjoyed by the assets at the beginning of the sample period halted with a sharp drop in their prices around the first quarter of 2020, coinciding with the period when the pandemic outbreak news peaked (Umar et al., 2021).
Figure 1.Plots of price and log-returns of Green investments
We start the main results with the logged prices, as reported in Table 1. the d estimates for both green bond indices, SPGRSLL (1.0117) and SPGRBND (0.9943), are not significantly different from unity before the COVID-19 pandemic, implying that the null hypothesis of random walk, which is consequently associated with market efficiency, cannot be rejected. This is unlike other green assets whose d estimates exceed one. During the pandemic, however, the green bond market tends to lose its efficiency in favour of persistence, following an increase in the d estimates of both green bond indices beyond the region of d = 1. Other green assets still maintain their initial status, except the global green building index (MSGLGBL), which is now demonstrating a random walk, given its d estimate is 1.0123.
Table 1.Results of I(d) based on Chebyshev polynomial in time
Series d (95% CI) c cos1 cos2 cos3
Before COVID-19 pandemic
SPGRSLL 1.0117 (0.9409, 1.0825) 5.2709 (0.246) -3.0474
(-2.22) 0.9282
(1.37) 0.7049
SPGRBND 0.9943 (0.9149, 1.0737) 51.2094
(0.140) -0.2556
(-0.175) 1.3346
MSGLAEL 1.0687 (0.9928, 1.1446) 6.6794
(0.743) 0.9613
MSGLPPL 1.0732 (0.9942, 1.1522) 0.7738 (0.022) -13.7793
MSGLGBL 1.0773 (1.0003, 1.1543) -0.4082
(-0.004) 31.4861
(0.589) 19.4720
(0.780) -12.8924
(-0.803)
During COVID-19 pandemic
SPGRSLL 1.0551 (0.9740, 1.1362) 31.2183
(1.57) -1.3711
(-0.587) -1.6601
(-1.53) -1.3421
(-0.761) -10.9694
MSGLPPL 1.0643 (0.9779, 1.1505) 233.903
(1.57) -32.8907
MSGLGBL 1.0123 (0.9660, 1.1086) -390.537
(42.8) -95.7030
(-1.62) -42.8160
(-1.49) 24.8326
Notes: Significant parameter estimates of \(d\) and Chebyshev polynomial at 5% level are in bold
We next turn to the log-returns and volatility (absolute returns) results. The consideration of volatility persistence is an extension of the conventional weak-form efficiency hypothesis that merely relies on asset prices or returns. Volatility persistence is important in determining how long-lasting the effect of shocks that increase the riskiness of a financial asset would be. As shown in Tables 2 and 3 for the log-returns and volatility results, respectively, the significance of the fractional parameter, d, tends to vary for some assets both across the series (returns and volatility) and periods (before and during the pandemic). Nonetheless, significance is established in most cases, and there is clear evidence that the estimates of d fall in the 0<d<0.5 range. This suggests that the green assets' returns and volatilities demonstrate long memory and mean-reverting features. Therefore, the effect of shocks will only be transitory, dying out in no distant time. Besides, the values of d seem to be greater during the pandemic, as an indication that the rate at which it will die out will be slower in this period. This is consistent with the finding of Adekoya et al. (2021) that the green bond market shows evidence of stronger persistence during the pandemic. One probable reason for this is that, apart from the pandemic affecting the individual financial market, it resulted in significant risk transmissions, induced high fear and pessimism in investors (Umar et al., 2021), and erratic speculative behaviour. Based on these factors, adjusting to a normal market state could require a longer recovery time.
Table 2.Results of persistence of Log-returns based on Robinson's (1994) linear models
Series d (95% CI) c t
SPGRSLL 0.0352 (-0.0314, 0.1018) -0.0066
(-0.948) 2.60E-05
SPGRBND 0.0211 (-0.0526, 0.0948) -0.0097
(-1.12) 3.79E-05
MSGLAEL 0.0687 (-0.0087, 0.1461) -0.0549
MSGLPPL 0.0721 (-0.0057, 0.1499) -0.0198
SPGRSLL 0.1485 (0.0646, 0.2324) -0.0030
(-0.163) -1.97E-05
(-0.0311)
SPGRBND 0.1762 (0.0872, 0.2652) -0.0012
MSGLAEL 0.0155 (-0.0609, 0.0919) 0.1539
MSGLPPL 0.1281 (0.0379, 0.2183) -0.0432
MSGLGBL -0.0197 (-0.0956, 0.0562) 0.1389
Notes: Significant parameter estimates of \(d\) are in bold
Table 3.Results on absolute returns
SPGRBND 0.0325 (-0.0371, 0.1021) 0.0057
(1.06) -2.17E-05
(0.110) -1.86E-05
MSGLPPL 0.0647 (-0.0029, 0.1323) 0.0162
SPGRSLL 0.2539 (0.1871, 0.3207) 0.0566
SPGRBND 0.1687 (0.0966, 0.2408) 0.0489
MSGLPPL 0.1895 (0.1264, 0.2526) 0.4445
MSGLGBL 0.0480 (-0.0173, 0.1133) 0.1624
IV. Conclusion
This study examines the market efficiency and volatility of green investments before and after the COVID-19 pandemic. Using fractional integration methods, we find that the green bond market, which was efficient before the pandemic, demonstrates inefficiency during the crisis. However, other green markets are inefficient in both periods, except for MSGLGBL. In addition, the green assets' returns and volatilities are found to observe mean-reverting behaviour, indicating that the effect of shocks will be temporary, although will die out more slowly during the health crisis.
Green investors can glean from these findings that they can make abnormal profits following the inefficient states of the markets, except for green bonds during tranquil periods. However, they should be aware that any shock that adversely affects returns during a similar crisis will have a relatively slower time of disappearance unlike when the market is normal.
Submitted: January 28, 2022 AEST
Accepted: July 07, 2022 AEST
Adekoya, O. B., Oliyide, J. A., Asl, M. G., & Jalalifar, S. (2021). Financing the green projects: Market efficiency and volatility persistence of green versus conventional bonds, and the comparative effects of health and financial crises. International Review of Financial Analysis, 78, 101954. https://doi.org/10.1016/j.irfa.2021.101954
Adekoya, O. B., Oliyide, J. A., Yaya, O. S., & Al-Faryan, M. A. S. (2022). Does oil connect differently with prominent assets during war? Evidence from intra-day data during the Russia-Ukraine saga. Resources Policy, 77, 102728. https://doi.org/10.1016/j.resourpol.2022.102728
Climate Bonds Initiative. (2019). Green Bond market summary. https://www.climatebonds.net/market/data/
Cuestas, J. C., & Gil-Alana, L. A. (2016). Testing for long memory in the presence of non- linear deterministic trends with Chebyshev polynomials. Studies in Nonlinear Dynamics and Econometrics, 20(1), 57–74.
Darjana, D., Wiryono, S. K., & Koesrindartoto, D. P. (2022). The COVID-19 Pandemic Impact on Banking Sector. Asian Economics Letters, 3(3). https://doi.org/10.46557/001c.29955
Gil-Alana, L. A., Gupta, R., Shittu, O. I., & Yaya, O. S. (2018). Market Efficiency of Baltic Stock Markets: A Fractional Integration Approach. Physica A: Statistical Mechanics and Its Applications, 511, 251–262. https://doi.org/10.1016/j.physa.2018.07.029
Glomsrød, S., & Wei, T. (2018). Business as unusual: The implications of fossil divestment and Green Bonds for financial flows, economic growth and energy market. Energy for Sustainable Development, 44, 1–10. https://doi.org/10.1016/j.esd.2018.02.005
Hammoudeh, S., Ajmi, A. N., & Mokni, K. (2020). Relationship between Green Bonds and financial and environmental variables: A novel time-varying causality. Energy Economics, 92, 104941. https://doi.org/10.1016/j.eneco.2020.104941
Larcker, D. F., & Watts, E. (2019). Where's the Greenium? SSRN Electronic Journal, 69(239), 2-3,. https://doi.org/10.2139/ssrn.3333847
Narayan, P. K. (2020). Oil price news and COVID-19—Is there any connection? Energy Research Letters, 1(1). https://doi.org/10.46557/001c.13176
Robinson, P. M. (1994). Efficient tests of nonstationary hypotheses. Journal of the American Statistical Association, 89(428), 1420–1437. https://doi.org/10.1080/01621459.1994.10476881
Salisu, A. A., & Sikiru, A. A. (2020). Pandemics and the Asia-Pacific Islamic Stocks. Asian Economics Letters, 1(1). https://doi.org/10.46557/001c.17413
Umar, Z., Adekoya, O. B., Oliyide, J. A., & Gubareva, M. (2021). Media sentiment and short stocks performance during a systemic crisis. International Review of Financial Crisis, 78, 101896. https://doi.org/10.1016/j.irfa.2021.101896
Yaya, O. S., Gil-Alana, L. A., Vo, X. V., & Adekoya, O. B. (2021). How fearful are Commodities and US stocks in response to Global fear? Persistence and Cointegration analyses. Resources Policy, 74, 102273. https://doi.org/10.1016/j.resourpol.2021.102273
|
CommonCrawl
|
Modelling transmission and control of the COVID-19 pandemic in Australia
Sheryl L. Chang ORCID: orcid.org/0000-0002-6119-828X1,
Nathan Harding ORCID: orcid.org/0000-0003-1707-92481,
Cameron Zachreson ORCID: orcid.org/0000-0002-0578-40491,
Oliver M. Cliff ORCID: orcid.org/0000-0001-5041-40901 &
Mikhail Prokopenko ORCID: orcid.org/0000-0002-4215-03441,2
Nature Communications volume 11, Article number: 5710 (2020) Cite this article
Computational models
There is a continuing debate on relative benefits of various mitigation and suppression strategies aimed to control the spread of COVID-19. Here we report the results of agent-based modelling using a fine-grained computational simulation of the ongoing COVID-19 pandemic in Australia. This model is calibrated to match key characteristics of COVID-19 transmission. An important calibration outcome is the age-dependent fraction of symptomatic cases, with this fraction for children found to be one-fifth of such fraction for adults. We apply the model to compare several intervention strategies, including restrictions on international air travel, case isolation, home quarantine, social distancing with varying levels of compliance, and school closures. School closures are not found to bring decisive benefits unless coupled with high level of social distancing compliance. We report several trade-offs, and an important transition across the levels of social distancing compliance, in the range between 70% and 80% levels, with compliance at the 90% level found to control the disease within 13–14 weeks, when coupled with effective case isolation and international travel restrictions.
The coronavirus disease 2019 (COVID-19) pandemic is an ongoing crisis caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The first outbreak was detected in December 2019 in Wuhan, the capital of Hubei province, rapidly followed by the rest of Hubei and all other provinces in China. Within mainland China the epidemic was largely controlled by mid- to late March 2020, having generated >81,000 cases (cumulative incidence on 20 March 20201). This was primarily due to intense quarantine and social distancing (SD) measures, including: isolation of detected cases; tracing and management of their close contacts; closures of potential zoonotic sources of SARS-CoV-2; strict traffic restrictions and quarantine on the level of entire provinces (including suspension of public transportation, closures of airports, railway stations and highways within cities); cancellation of mass gathering activities; and other measures aimed to reduce transmission of the infection2,3,4.
Despite the unprecedented domestic control measures, COVID-19 was not completely contained and the disease reached other countries. On 31 January 2020, the epidemic was recognised by the World Health Organisation (WHO) as a public health emergency of international concern, and on 11 March 2020, the WHO declared the outbreak a pandemic5. Effects of the COVID-19 pandemic have quickly spilled over from the healthcare sector into international trade, tourism, travel, energy and finance sectors, causing profound social and economic ramifications6. While worldwide public health emergencies have been declared and mitigated in the past—for example, the "swine flu" pandemic in 20097,8,9,10—the scale of socioeconomic disruptions caused by the unfolding COVID-19 pandemic is unparalleled in recent history.
Australia began to experience most of these consequences, with the number of confirmed COVID-19 cases crossing 1000 by 21 March 2020, while (at that time) doubling every 3 days, and the cumulative incidence growth rate averaging 0.20 per day during the first 3 weeks of March 2020 (Appendix A in Supplementary information). In response, the Australian government introduced strict intervention measures in order to prevent the epidemic from continuing along such trends and to curb the devastating growth seen in other COVID-19-affected nations. Nevertheless, there is an ongoing debate on the utility of specific interventions (e.g. school closures), the low compliance with SD measures (e.g. reduction of mass gatherings), and the optimal combination of particular health intervention options balanced against social and economic ramifications, and restrictions on civil liberties. In the context of this debate, there is an urgent requirement for rigorous and unbiased evaluations of available options. The present study makes a contribution towards this requirement and provides timely input into the Australian pandemic response discussion. Specifically, we develop a large-scale Agent-Based Model (ABM) capturing salient features of COVID-19 transmission in Australia, and use it to evaluate the effectiveness of non-pharmaceutical interventions with respect to the population's compliance with the suggested measures.
Governments around the world are presently fighting the spread of COVID-19 within their jurisdictions by developing, applying and adjusting multiple variations on pandemic intervention strategies. While these strategies vary across nations, they share fundamental approaches that are adapted by national healthcare systems, aiming at a broad adoption within societies. In the absence of a COVID-19 vaccine, as pointed out by Ferguson et al.11, mitigation policies may include case isolation (CI) of patients and home quarantine (HQ) of their household (HH) members, SD of the individuals within specific age groups (e.g. the elderly, defined as >75 years), as well as people with compromised immune systems or other vulnerable groups. In addition, suppression policies may require an extension of CI and HQ with SD of the entire population. Often, such SD is supplemented by school and university closures.
Our primary objective is an evaluation of several intervention strategies that have been deployed in Australia, or have been considered for a deployment: restriction on international arrivals ("travel ban"); in-home CI of ill individuals; HQ of family members of ill individuals; SD at various population compliance levels up to and including 100%, a full lockdown; school closures (SCs), which affect the behaviour of school children as well as their parents and teachers. We explore these intervention strategies independently and in various combinations, as detailed in "Methods". Each scenario is traced over time and compared to the baseline model in order to quantify its potential to curtail the epidemic in Australia. Our aims are to identify minimal effective levels of SD compliance, and to determine the potential impact of school closures on the effectiveness of intervention measures.
Stochastic ABMs have been established as robust tools for tracing the fine-grained effect of heterogeneous intervention policies in diverse epidemic and pandemic settings7,8,12,13,14,15,16,17,18, including for policy advice currently in place in the USA and the UK11. In this study, we follow the ABM approach to quantitatively evaluate and compare several mitigation and suppression measures, using a high-resolution individual-based computational model calibrated to key characteristics of COVID-19 pandemics. The approach uses a modified and extended agent-based model, ACEMod (Australian Census-based Epidemic Model), previously developed and validated for simulations of pandemic influenza in Australia19,20,21,22. The epidemiological component, AMTraC-19, is developed and calibrated specifically to COVID-19 via reported invariants (outputs) such as the growth rate above. Importantly, our sensitivity analysis shows that key epidemiological outputs from our model (e.g. the growth rate, R0, generation time, etc.) are robust to uncertainty in the input parameters (e.g. the natural history of the disease, fraction of symptomatic cases, etc.).
In investigating possible effects of various intervention policies, we are able to provide clear and tangible goals for the population and government to pursue in order to mitigate the pandemic within Australia. The key result, based on a comparison of several intervention strategies, is an actionable transition across the levels of SD compliance, identified between 70 and 80% levels. A compliance of below 70% is unlikely to succeed for any duration of SD, while a compliance at the 90% level is found to control the disease within 13–14 weeks, when coupled with effective CI, HQ and international travel restrictions. We validate these results by a comparison with the actual epidemic and SD compliance observed in Australia. In doing so, we confirm that the model has successfully predicted the cumulative incidence as well as the timing of both the incidence and prevalence peaks. Moreover, we illustrate trade-offs between these levels and duration of the interventions, and between the interventions' delay and their duration. Specifically, our simulations suggest that a 3-day delay in introducing strict intervention measures lengthens their required duration by over 3 weeks on average, that is, 23.56 days (with standard deviation of 11.167).
We present results of the high-resolution (individual-based) pandemic modelling in Australia, including a comparative analysis of intervention strategies. As discussed above, we performed our analysis using ACEMod, an established Australian Census calibrated ABM that captures fine-grained demographics and social dynamics19,20,21,22. The epidemiological component of our model, AMTraC-19, was developed and calibrated to match key characteristics of COVID-19 (see "Methods").
The input parameters were calibrated to generate key characteristics in line with reported epidemiological data on COVID-19. We primarily calibrated by comparing these epidemiological characteristics to the mean of output variables, inferred from Monte Carlo simulations during non-intervention periods, with confidence intervals (CIs) constructed by bootstrapping (i.e. random sampling with replacement) with the bias-corrected percentile method23.
The key output variables, inferred in concordance with available data, include: a reproductive number R0 of 2.77, 95% CI [2.73, 2.83], N = 6315; a generation period Tgen of 7.62 days, 95% CI [7.53, 7.70], N = 6315; a growth rate of cumulative incidence during a period of sustained and unmitigated local transmission at \(\dot{C}=0.167\) per day, 95% CI [0.164, 0.170], N = 20; and an attack rate in children of Ac = 6.154%, 95% CI [6.15%, 6.16%], N = 20. The relatively narrow CIs reflect the intrinsic stochasticity of the simulations carried out for the default values of input parameters. The broad range of possible variations in response to changes in the input parameters, as well as the robustness of the model and its outcomes, is established by the sensitivity analysis (see Appendix D in Supplementary information). This is followed by validation against actual epidemic timeline in Australia (see Appendix H in Supplementary information), confirming that the adopted parametrisation is acceptable.
A trace of the baseline model—no interventions whatsoever—is shown in Fig. 1, with clear epidemic peaks in both incidence (Fig. 1a) and prevalence (Fig. 1b) evident after 105–110 days from the onset of the disease in Australia, that is, occurring around mid-May 2020. The scale of the impact is very high, with nearly 50% of the Australian population showing symptoms. This baseline scenario is provided only for comparison, in order to evaluate the impact of interventions, most of which were already in place in Australia during the early phase of epidemic growth. To re-iterate, we consider timely intervention scenarios applicable to the situation in Australia at the end of March 2020, with the number of confirmed COVID-19 cases crossing 2000 on 24 March 2020, and the growth rate of cumulative incidence \(\dot{C}\) averaging 0.20 per day during the first 3 weeks of March. We observe that the simulated baseline generates \(\dot{C}\approx 0.17\) per day, in a good agreement with actual dynamics.
Fig. 1: Effects of case isolation, home quarantine and school closures.
A combination of the case isolation (CI) and home quarantine (HQ) measures delays epidemic peaks and reduce their magnitude, in comparison to no interventions (NI), whereas school closures (SCs) have short-term effect. Several baseline and intervention scenarios, traced for a incidence, b prevalence, c cumulative incidence and d the daily growth rate of cumulative incidence \(\dot{C}\), shown as average (solid) and 95% confidence interval (shaded) profiles, over 20 runs. The 95% confidence intervals are constructed from the bias-corrected bootstrap distributions. The strategy with school closures combined with case isolation lasts 49 days (7 weeks), marked by a vertical dashed line. Restrictions on international arrivals are set to last until the end of each scenario. The alignment between simulated days and actual dates may slightly differ across separate runs.
Case isolation and home quarantine
All the following interventions include restrictions on international arrivals, triggered by the threshold of 2000 cases. Three mitigation strategies are of immediate interest:
case isolation,
in-home quarantine of household contacts of confirmed cases,
school closures, combined with (i) and (ii).
These strategies are shown in Fig. 1, with the duration of the SC strategy set as 49 days (7 weeks), starting when the threshold of 2000 cases is reached. The CI strategy coupled with the HQ strategy delays the epidemic peak by ~26 days on average (e.g. shifting the incidence peak from days 97.5 to 123.2, Fig. 1a, and the prevalence peak from days 105 to 130.7, Fig. 1b, on average). In addition, CI combined with HQ reduces the height of the epidemic peak by ~47–49%. The main contributing factor is CI, as adding HQ, with 50% in-home compliance, to CI of 70% symptomatic individuals, delays the epidemic peak by <3 days on average. The overall attack rate resulting from the coupled policy is also reduced in comparison to the baseline scenario (Fig. 1c). However, the CI and HQ strategies, even when coupled together, are not effective for epidemic suppression, with prevalence still peaking in millions of symptomatic cases (1.873M) (Fig. 1b). Such an outcome would have completely overburdened the Australian healthcare system24.
Adding school closures to the CI and HQ approach also does not achieve a significant reduction in the overall attack rate (Fig. 1). The peaks of both incidence (Fig. 1a) and prevalence (Fig. 1b) are delayed by ~4 weeks (~27 days for both incidence and prevalence). However, their magnitudes remain practically the same, due to a slower growth rate of cumulative incidence (Fig. 1d). This is observed irrespective of the commitment of parents to stay home (Appendix G in Supplementary information). We also traced the dynamics resulting from the SC strategy for two specific age groups: children and individuals >65 years old, shown in Appendix G in Supplementary information. The 4-week delays in occurrence of the peaks are observed across both age groups, suggesting that there is a strong concurrence in the disease spread across these age groups. We also observe that under the SC strategy coupled with CI and HQ, the magnitude of the incidence peak for children increases by ~7% shown in Appendix G in Supplementary information (Supplementary Fig. 9a). This may be explained by increased interactions of children in household and community social mixing environments, when schools are closed. Under this strategy, there is no difference in the magnitude of the incidence peak for the older age group (Appendix G in Supplementary information, Supplementary Fig. 10a). We also note that the considered interventions succeed in reducing a relatively high variance in the incidence fraction of symptomatic older adults, thus reducing the epidemic potential to adversely affect this age group specifically.
In short, the only tangible benefit of school closures, coupled with CI and HQ, is in delaying the epidemic peak by 4 weeks, at the expense of a slight increase in the contribution of children to the incidence peak. While school closures are considered an important part of pandemic influenza response, our results suggest that this strategy is much less effective in the context of COVID-19. The gains are further reduced by other societal costs of school closures, for example, drawing their parents employed in healthcare and other critical infrastructure away from work. There is, nevertheless, one more possible benefit of school closures, discussed in the context of the population-wide SD in Appendix G in Supplementary information.
Next, we examine the effects of population-wide SD in combination with CI and restrictions on international arrivals. Here, we present the effects of different compliance levels on the epidemic dynamics. Low compliance levels, set at <70%, did not show any potential to suppress the disease in the considered time horizon (28 weeks), while the total lockdown, that is, complete SD at 100%, managed to reduce the incidence and prevalence to zero, after 49 days of the mitigation. However, because it is unrealistic to expect 100% compliance in the Australian context, we focus on the practically achievable compliance levels: 70, 80 and 90%, with their duration set to 91 days (13 weeks), shown in Fig. 2.
Fig. 2: Effects of social distancing.
Strong compliance with social distancing (at 80% and above) effectively controls the disease during the suppression period, while lower levels of compliance (at 70% or less) do not succeed for any duration of the suppression. A comparison of social distancing strategies, coupled with case isolation, home quarantine and international travel restrictions, across different compliance levels (70, 80 and 90%). Duration of each social distancing (SD) strategy is set to 91 days (13 weeks), shown as a grey shaded area between days 51 and 142 (the start and end days of SD varied across stochastic runs: for 70% SD the last day of suppression was 141.4 on average; for 80% SD it was 144.2; and for 90% SD it was 141.5, see Source data file). Case isolation, home quarantine and restrictions on international arrivals are set to last until the end of each scenario. Traces include a incidence, b prevalence, c cumulative incidence and d the daily growth rate of cumulative incidence \(\dot{C}\), shown as average (solid) and 95% confidence interval (shaded) profiles, over 20 runs. The 95% confidence intervals are constructed from the bias-corrected bootstrap distributions. The alignment between simulated days and actual dates may slightly differ across separate runs.
Importantly, during the time period that the SD level is maintained at 70%, the disease is not controlled, with the numbers of new infected cases (incidence) remaining in hundreds, and the number of active cases (prevalence) remaining in thousands. Thus, 70% compliance is inadequate for reducing the effective reproductive number below 1.0. In contrast, the two higher levels of SD, 80 and 90%, are more effective at suppressing both prevalence and incidence during the 13-week SD period. Figure 2 contrasts these three levels of SD compliance, "zooming in" into the key time period, immediately following the introduction of SD. Crucially, there is a qualitative difference between the lower levels of SD compliance (70%, or less) and the higher levels (80%, or more). For the SD compliance set at 80 and 90%, we observe a reduction in both incidence (Fig. 2a) and prevalence (Fig. 2b), lasting for the duration of the strategy (91 days). With SD compliance of 80%, the disease is not completely eliminated, but incidence is reduced to <100 new cases per day, with prevalence below 1000 by the end of the suppression period (Fig. 2b). It is important to note that while the disease is suppressed during the period over which SD is in effect, resurgence of transmission is likely unless complete or near-complete elimination has been achieved upon cessation of SD measures. Our results suggest that this level of compliance would succeed in eliminating the disease in Australia if the strategy was implemented for a longer period, for example, another 4–6 weeks.
The 90% SD compliance practically controls the disease, bringing both incidence and prevalence to very low numbers of isolated cases (and reducing the effective reproductive number to nearly zero). It is possible for the epidemic to spring back to significant levels even under this level of compliance, as the remaining sporadic cases indicate a potential for endemic conditions. We do not quantify these subsequent waves, as they develop beyond the immediately relevant time horizon. Nevertheless, we do share the concerns expressed by the Imperial College COVID-19 Response Team: "The more successful a strategy is at temporary suppression, the larger the later epidemic is predicted to be in the absence of vaccination, due to lesser build-up of herd immunity"11. Given that the herd immunity threshold is determined by 1 − 1/R025, the extent required to build up collective immunity for COVID-19, assuming R0 = 2.77, may be estimated as 0.64, that is, 64% of the population becoming infected or eventually immunised.
The cumulative incidence for the best achievable scenario (90% SD compliance coupled with CI, HQ, and restrictions on international arrivals) settles in the range of 8000–10,000 cases during the suppression period, with resurgence still possible at some point after intervention measures are relaxed (Fig. 2c). The range of cumulative incidence at the end of the suppression is 8313–10,090 over 20 runs, with the mean of 9122 cases and 95% CI [8898, 9354], constructed from the bias-corrected bootstrap distribution (see Source data file). In terms of case numbers, this is an outcome several orders of magnitude better than the worst -case scenario, developing in the absence of the combined mitigation and suppression strategies.
We compare two sets of scenarios. In our primary scenarios, aligned with the actual epidemic curves in Australia, the SD measures are triggered by 2000 confirmed cases. In alternative scenarios, the strict suppression measures are initiated earlier, being triggered by crossing the threshold of 1000 cases (Appendix H.1 in Supplementary information). The best agreement between the actual and simulation timelines is found to match a delayed but high (90%) SD compliance, appearing to be followed from 24 March 2020, after a 3-day period with a weaker compliance, which commenced on 21 March 2020 when the international travel restrictions were introduced, as shown in Fig. 3 and detailed in Appendix H.2 in Supplementary information. For the 1000 case threshold scenario, we present the effects of different SD compliance levels (70 and 90%) on the spatial distribution of cases on day 60. These are shown in Appendix I in Supplementary information, as choropleth maps of the four largest Australian Capital Cities: Sydney, Melbourne, Brisbane and Perth.
Fig. 3: Model validation with actual data.
A comparison between actual epidemic curves in Australia (black dots, shown until 28 June 2020), and the primary simulation scenario, using a threshold of 2000 cases (crossed on 24 March 2020) and following 90% of social distancing (SD), coupled with case isolation, home quarantine and international travel restrictions, shown until early July 2020 (yellow colour). Duration of the SD strategy is set to 91 days (13 weeks), shown as a grey shaded area. Case isolation, home quarantine and restrictions on international arrivals are set to last until the end of the scenario. Traces include a incidence, b prevalence, c cumulative incidence and d daily growth rate of cumulative incidence, shown as average (solid), 95% confidence interval (thin solid) profiles, as well as the ensemble of 20 runs (scatter). The 95% confidence intervals are constructed from the bias-corrected bootstrap distributions. The alignment between simulated days and actual dates may slightly differ across separate runs. Data sources: refs. 64,67.
It is clear that there is a trade-off between the level of SD compliance and the duration of the SD strategy: the higher the compliance, the more quickly incidence is suppressed. Both 80 and 90% compliance levels control the spread within reasonable time periods: 18–19 and 13–14 weeks, respectively. In contrast, lower levels of compliance (at 70% or less) do not succeed for any duration of the imposed SD limits. This quantitative difference is of major policy setting importance, indicating a sharp transition in the performance of these strategies in the region between 70 and 80%.
Referring to Fig. 4, the identified transition across the levels of compliance with SD may also be interpreted as a tipping point or a phase transition26. Various critical phenomena have been discovered previously in the context of epidemic models, often interpreting epidemic diffusion in statistical–mechanical terms, for example, as percolation within a network27,28,29,30. The transition across the levels of SD compliance is similar to percolation transition in a forest-fire model with immune trees31. Distinct epidemic phases are evident in Fig. 4 at a certain percolation threshold between the SD compliance of 70 and 80%, at which the critical regime exhibits the effective reproductive number Reff = 1.0. That is, crossing this regime signifies moving into the phase where the epidemic is controlled, that is, reducing Reff below 1.0.
Fig. 4: Phase transition across the levels of social distancing compliance.
Colour image plot of disease prevalence as a function of time (horizontal axis) and social distancing (SD) compliance (vertical axis). A phase transition is observed between 70 and 80% SD compliance (marked by a dotted line). For SD compliance levels below 80%, the prevalence continues to grow after social distancing is implemented, while for compliance levels at or above 80% the prevalence declines, following a peak formed after ~2 months. The colours correspond to log-prevalence, traced from the epidemic's onset until the end of the suppression period. The isolines trace contours with constant values of log-prevalence. Vertical dashes mark the time when threshold of 2000 is crossed, triggering SD, averaged over 20 runs for each SD level. Social distancing is coupled with case isolation, home quarantine and international travel restrictions. The alignment between simulated days and actual dates may slightly differ across separate runs.
We do not attempt to establish a more precise level of required compliance between 70 and 80%. Such a precision would be of lesser practical relevance than the identification of 80% compliance as the minimal acceptable level of SD, with 90% providing a shorter timeframe. The robustness of these results is established by sensitivity analysis presented in Appendix D.2 in Supplementary information.
In addition, a 3-day delay in introducing strong SD measures is projected to extend the required suppression period by ~3 weeks, beyond the 91-day period considered in the primary scenario (see Appendix H in Supplementary information). Finally, we report fractions of symptomatic cases across mixing contexts (Appendix J in Supplementary information), with the infections through HHs being predominant. Notably, the HH fractions steadily increase with the strengthening of SD compliance, while the corresponding fractions of infections in the workplace and school environments decrease.
In short, the best intervention approach identified in our study is a combination of international travel restrictions, CI, HQ and SD with at least 80%–90% compliance for a duration of ~91 days (13 weeks). These measures have been implemented in Australia to a reasonable degree; however, it is unclear if testing throughput and contact tracing resources are sufficient to facilitate effective interventions if incidence increases substantially. For these reasons, it is our conclusion that SD is likely to continue to be the instrumental line of defense against COVID-19 in Australia. In our study, compliance levels below 80% resulted in higher prevalence at the end of suppression period, and increasing incidence during the SD period.
We point out that our results are relevant only for the duration of the mitigation and suppression, and a resurgence of the disease is possible once these interventions cease, as shown in Fig. 2. We also note that a rebound in the incidence and prevalence post-suppression period is not unavoidable: more efficient and large-scale testing methods are expected to be developed in several months, and so the resultant contact tracing and CI are likely to prevent a resurgence of the disease. The international travel restrictions are assumed to stay in place. Hence, we do not quantify the precise impact of control measures beyond the selected time horizon (28 weeks), aiming to provide immediately relevant insights. Furthermore, our results should not be seen as policies optimised over all possible parameter combinations, but rather as a clear demonstration of the extent of SD required to reduce incidence and prevalence over 2–6 months.
In this study, we simulated several possible scenarios of COVID-19 pandemic's spread in Australia. The model, AMTraC-19, was calibrated to known pandemic dynamics, and accounted for age-dependent attack rates, a range of reproductive numbers, age-stratified and social context-dependent transmission rates, household clusters (HCs) and other social mixing contexts, symptomatic–asymptomatic distinction, and other relevant epidemiological parameters. An important calibration result was the need for age-dependent fractions of symptomatic agents, with the fraction of symptomatic children found to be one-fifth of that of the adults.
We reported several findings relevant to COVID-19 mitigation and suppression policy setting. The first implication is that the effectiveness of school closures is limited (under our assumptions on the age-dependent symptomatic fractions and the infectivity in children), producing a 4-week delay in epidemic peak, without a significant impact on the magnitude of the peak, in terms of incidence or prevalence. The temporal benefit of this delay may be offset not only by logistical complications, but also by some increases in the fractions of both children and older adults during the period around the incidence peak. As the clinical picture of COVID-19 in children continues to be refined32, these findings may benefit from a re-evaluation when more extensive paediatric data become available.
The second implication is related to the SD strategy, which showed little benefit for lower levels of compliance (at 70% or less)—these levels do not produce epidemic suppression for any duration of the SD restrictions. Only when the SD compliance levels exceed 80%, there is a reduction in incidence and prevalence. Our modelling results indicate existence of an actionable transition across these strategies between 70 and 80%. In other words, increasing a compliance level just by 10%, from 70 to 80%, may effectively control the spread of COVID-19 in Australia, by reducing the effective reproductive number to near zero (during the suppression period).
We also reported a trade-off between the compliance levels and the duration of SD mitigation, with 90% compliance significantly reducing incidence and prevalence after a shorter period of 91 days (13 weeks). Although a resurgence of the disease is possible once these interventions cease, we believe that this study could facilitate a timely planning of effective intervention and exit strategies. In particular, this study contributed to the report, "Roadmap to Recovery", presented to the Australian Federal Government on 29 April 2020, providing evidence for a comparison between two options. Rather than recommending "a single dominant option for pandemic response in Australia", the roadmap pointed out considerable and evolving uncertainties, and presented two strategies: (i) a state by state elimination of local community transmissions (with the restrictions remaining for a longer duration, but achieving lower cases and greater public confidence), and (ii) controlled adaptation aimed at some minimal level of symptomatic cases within the health system capacity (with phased and adaptive lifting of restrictions, beginning as early as 15 May 2020, but acknowledging the high likelihood of prolonged global circulation of SARS-CoV-2)33. However, a precise evaluation of detailed exit strategies, as well as the probability of elimination, lies outside the scope of our study.
Future research will address several limitations of our study, including a more fine-grained implementation of natural history of the disease, reducing uncertainty around the transmissibility and infectivity in young people, incorporation of more recent Australian Bureau of Statistics (ABS) data from 2020, and an account of hospitalisations and in-hospital transmissions. We also hope to trace specific spatial pathways and patterns of epidemics, in order to enable a detailed understanding of how the infection spreads in diverse circumstances and localities, with the aim to identify the best ways to locate and curtail the pandemic spread in Australia. It would be interesting to contrast our ABM with network-based approaches: while both frameworks depart from the compartmental fully mixed models in capturing specific interactions affecting the infection spread, there are differences in describing the context dependence and ways to intervene28,34. In network-based models, the most effective interventions have been found to be those which reduce the diversity of interactions35, and can be modelled by changes in the topology of contact networks36. Thus, one future direction would be a comparison of the epidemic and intervention thresholds across the ABM and network-based models. Other avenues lead to analysis of precursors and critical thresholds for possible emergence of new strains, as well as various "change points" in the spreading rate29,37,38, studies of genomic surveillance data interpreted as complex networks39,40,41, dynamic models of social behaviour in times of health crises42,43,44 and investigations of global socioeconomic effects of the COVID-19 pandemic6,45,46.
ACEMod employs a discrete-time and stochastic agent-based model to investigate complex outbreak scenarios across the nation over time. The ACEMod simulator comprises over 24 million software agents, each with attributes of an anonymous individual (e.g. age, gender, occupation, susceptibility and immunity to diseases), as well as contact rates within different social contexts (HHs, HCs, local neighbourhoods, schools, classrooms, workplaces). The set of generated agents captures average characteristics of the real population, for example, ACEMod is calibrated to the Australian Census data (2016) with respect to key demographic statistics. In addition, the ACEMod simulator has integrated layered school attendance data from the Australian Curriculum, Assessment and Reporting Authority, within a realistic and dynamic interaction model, comprising both mobility and human contacts. These social mixing layers represent the demographics of Australia as close as possible to the ABS and other datasets, as described in Appendix F in Supplementary information.
Potential interactions between spatially distributed agents are represented using data on mobility in terms of commuting patterns (work, study and other activities), adjusted to increase precision and fidelity of commute networks47. Each simulation scenario runs in 12-h cycles ("day" and "night") over the 196 days (28 weeks) of an epidemic, and agents interact across distinct social mixing groups depending on the cycle, for example, in working groups and/or classrooms during a "day" cycle, and their HHs, HCs and local communities during the "night" cycle. The interactions result in transmission of the disease from infectious to susceptible individuals: given the contact and transmission rates, the simulation computes and updates agents' states over time, starting from initial infections, seeded in international airports around Australia19,20. The simulation is implemented in C++11, using the g++ compiler (GCC) 4.9.3 and GNU Autotools (autoconf 2.69, automake 1.15), running under CentOS release 6.9 (upstream Red Hat 4.4.7-18) on a High-Performance Computing service and utilising 4264 cores of computing capacity. Post processing of simulation results is carried out with MATLAB R2020a.
Simulating disease transmission in ACEMod requires both (i) specifics of local transmission dynamics, dependent on individual health characteristics of the agents, such as susceptibility and immunity to disease, driven by their transmission and contact rates across different social contexts; and (ii) a natural disease history model for COVID-19, that is, the infectivity profile from the exposure, to the peak of infectivity, and then to recovery, for a single symptomatic or asymptomatic infected individual. The infectivity of agents is set to exponentially rise and peak at 5 days, after 2 days of zero infectivity. The symptoms are set to last up to 12 days post the infectivity peak, during which time infectiousness linearly decreases to zero. The probability of transmission for asymptomatic/presymptomatic agents is set as 0.3 of that of symptomatic individuals; and the age-dependent fractions of symptomatic cases are set as σc = 0.134 for children, and σa = 0.669 for adults. These parameters were calibrated to available estimates of key transmission characteristics of COVID-19 spread, implemented in AMTraC-19, the Agent-based Model of Transmission and Control of the COVID-19 pandemic in Australia.
Despite several similarities with influenza, COVID-19 has a number of notable differences, specifically in relation to transmissions across children, its reproductive number R0, incubation and generation periods, proportion of symptomatic to asymptomatic cases, the infectivity of the asymptomatic and presymptomatic individuals and so on (see Appendix B in Supplementary information). While uncertainty around the reproductive number R0, the incubation and generation periods, as well as the age-dependent attack rates of the disease, have been somewhat reduced3,4,48, there is still an ongoing effort in estimating the extent to which people without symptoms, or exhibiting only mild symptoms, might contribute to the spread of the coronavirus49. Furthermore, the question whether the ratio of symptomatic to total cases is constant across age groups, especially children, has not been explored in studies to date, remaining another critical unknown.
Thus, our first technical objective was to calibrate the AMTraC-19 model for specifics of COVID-19 pandemic, in order to determine key disease transmission parameters of AMTraC-19, so that the resultant dynamics concur with known estimates. In particular, we investigated a range of the reproductive number R0 (the number of secondary cases arising from a typical primary case early in the epidemic). The range 2.0–2.5 has been initially reported by the WHO-China Joint Mission on Coronavirus Disease 20193. Several studies estimated that before travel restrictions were introduced in Wuhan on 23 January 2020, the median daily reproduction number R0 in Wuhan was 2.35, with 95% CI [1.15, 4.77]50. On 15 April 2020, Australian health authorities reported R0 in the range 2.6–2.733, while more recent Australian and international studies investigated R0 in the range 2.5–3.524,33,38,44. For example, a median R0 = 3.4 (CI [2.4, 4.7]) was used in a model of the COVID-19 spread in Germany38, while the estimates reviewed by Liu et al.51 ranged from 1.4 to 6.49, with a mean of 3.28 and a median of 2.79. In our model, R0, our output variable, y1, was investigated between 1.94 and 3.12, see Table 1, by varying a scaling factor κ responsible for setting the contagiousness of the simulated epidemic, as explained in Appendix C in Supplementary information19,21.
Table 1 The reproductive number R0 and the generation period Tgen (with 95% confidence intervals (CIs), constructed from the bias-corrected bootstrap distribution), for various values of the scaling parameter κ.
We aimed for the generation period Tgen, that is, our output variable y2, to stay in the range 6.0–10.018,52,53. This is also in line with the reported mean serial interval of 7.5 days (with 95% CI [5.3, 19])52.
In addition, we aimed to keep the resultant daily growth rate of cumulative incidence \(\dot{C}\), output variable y3, ~0.2 per day, in order to be consistent with the disease dynamics reported in Australia and internationally (see Appendix A in Supplementary information). Our focus was to characterise the rate of a rapid infection increase during the sustained but unmitigated local transmission. This calibration target was chosen at the time, mid-March 2020, to complement R0 and the generation period, given the lack of data on the epidemic peak values, and fragmented patient recovery and prevalence data. By that time, despite different initial conditions and disease surveillance regimes, as well as diversity of case definitions, several countries exhibited a similar growth pattern. This suggested that a steady growth rate of ~0.2 per day may provide a consistent calibration target during the early growth period, with seven out of the top eight affected nations settling around this rate after a noisy transient (except South Korea where the initial growth had the cluster nature, following a superspreading event54).
Another key constraint was a low attack rate in children, Ac, that is, our output variable y4, reported to be in single digits. For example, only 2.4% of all reported cases in China were children, while a study in Japan observed that "it is remarkable that there are very few child cases aged from 0 to 19 years", with only 3.4% of all cases in this age group55.
The calibration was aimed at satisfying our key constraints, given by the expected ranges of output variables. In doing so, we varied several "free" parameters, such as transmission and contact rates, the fraction of symptomatic cases (making it age-dependent), the probability of transmission for both symptomatic and asymptomatic agents, and the infectivity profile from the exposure. Specifically, we explored the time to infectivity peak, our input parameter x1, in proximity to known estimates of the mean incubation period, that is, between 4 and 7 days, calibrating the time to peak to 5.0 days. In several studies, the mean incubation period was reported as 5.2 days, 95% CI [4.1, 7.0]52, while being distributed around a mean of ~5 days within the range of 2–14 days with 95% CI56. We also varied the symptoms' duration after the peak of infectivity, that is, recovery period, our input parameter x2, between 7 and 21 days, and calibrated it at 12.0 days, on a linearly decreasing profile from the peak.
The contact and transmission rates across various mixing contexts detailed in Appendices C and E in Supplementary information. The probability of transmission for asymptomatic/presymptomatic agents, our input parameter x3, was set as 0.3 of that of symptomatic individuals (lower than in the ACEMod influenza model), having been explored between 0.05 and 0.45. Both symptomatic and asymptomatic infectivity profiles were changed to increase exponentially after a latent period of 2 days, reaching the infectivity peak after 5 days, with the onset of symptoms distributed across agents during this period, see Appendix C in Supplementary information.
The fraction of symptomatic cases, our input parameter x4, was investigated between 0.5 and 0.8, and set to two-thirds of the total cases (σa = 0.669), which concurs with several studies. For example, the initial data on 565 Japanese citizens evacuated from Wuhan, China, who were symptom-screened and tested, indicated that 41.6% were asymptomatic, with a lower bound estimated as 33.3% (95% CI [8.3, 58.3])57. The proportion of asymptomatic cases on the Diamond Princess cruise ship was estimated between 17.9 (95% credible interval (CrI): 15.5–20.2%) and 39.9% (95% CrI: 35.7–44.1%)58, noting that most of the passengers were 60 years and older, and more likely to experience more symptoms. The modelling study of Ferguson et al.11 also set the fraction of symptomatic cases to σ = 0.669.
However, we found that our output variables were within the expected ranges only when this fraction is age-dependent, with the fraction of symptomatic cases among children, our input parameter x5, calibrated to one-fifth of the one for adults, that is, σc = 0.134 for children, and σa = 0.669 for adults. This calibration outcome per se, achieved after exploring the range σc ∈ [0.05, 0.25], is in agreement with the reported low symptomaticity in children worldwide, and the observation that "children are at similar risk of infection as the general population, although less likely to have severe symptoms"59. Another study of epidemiological characteristics of 2143 paediatric patients in China noted that over 90% of patients were asymptomatic, mild or moderate cases60.
In summary, this combination of parameters resulted in the dynamics that matched several COVID-19 pandemic characteristics. It produced the following estimates and their CIs, constructed from the bias-corrected bootstrap distribution:
the reproductive number R0 = 2.77, with 95% CI [2.73, 2.83] (sample size N = 6315);
the generation period Tgen = 7.62 days, with 95% CI [7.53, 7.70] (N = 6315);
the growth rate of cumulative incidence, determined at day 50, during a period of sustained unmitigated local transmission, \(\dot{C}=0.167\) per day, with 95% CI [0.164, 0.170] and range 0.156–0.182 (N = 20);
the attack rate in children Ac = 6.154%, with 95% CI [6.15, 6.16%] and range 6.14–6.16% (N = 20).
Both the reproductive number and the generation period correspond to κ = 2.75 (see Table 1 for other values of κ). The resultant dynamics are shown in Figs. 5 and 6. The sensitivity analysis of the output variables to changes in the input parameters is presented in Appendix D.1 in Supplementary information. We point out that, in hindsight, one may choose more comprehensive calibration targets and refine the model with different parametrisations. The model presented in this study was calibrated by 24 March 2020, using Australian and international incidence and prevalence data from two preceding months, as well as constraints on the output variables detailed above. At the time, a limited testing capacity resulting in possible under-reporting of cases (especially paediatric) may have introduced a potential bias in model calibration. Nevertheless, the study is described here as an approach, which succeeded in accurately predicting the epidemic peaks in Australia in early April (both incidence and prevalence), while providing timely advice on relevant pandemic interventions.
Fig. 5: Model calibration with scaling factor κ.
Tracing d the expected growth rate of cumulative incidence \(\dot{C}\) per day, while varying scaling factor κ (proportional to the reproductive number R0), with a incidence, b prevalence and c cumulative incidence. Averages over 20 runs are shown as solid profiles, with 95% confidence intervals shown as shaded profiles. The 95% confidence intervals are constructed from the bias-corrected bootstrap distributions. The alignment between simulated days and actual dates may slightly differ across separate runs.
Fig. 6: Model calibration: epidemic curves for children.
Tracing d the attack rate in children, while varying scaling factor κ (i.e. reproductive number R0), with a incidence, b cumulative incidence and c incidence fraction for children. Averages over 20 runs are shown as solid profiles, with 95% confidence intervals shown as shaded profiles. The 95% confidence intervals are constructed from the bias-corrected bootstrap distributions. The alignment between simulated days and actual dates may slightly differ across separate runs.
Fraction of local community transmissions
We trace scenarios of COVID-19 pandemic spread in Australia, initiated by passenger arrivals via air traffic from overseas. This process maintains a stream of new infections at each time step, set in proportion to the average daily number of incoming passengers at that airport20,21. These infections occur probabilistically, generated by binomial distribution B(P, N), where P and N are selected to generate one new infection within a 50 km radius of the airport, per 0.04% of incoming arrivals on average.
In a separate study41, we directly compared the fractions of local transmissions detected by our ABM with the genomic sequencing of SARS-CoV-2, carried out in a subpopulation of infected patients within New South Wales, the most populous state of Australia, until 28 March 2020. Only a quarter of sequenced cases was deemed to be locally acquired (cases who had not travelled overseas in the 14 days before illness onset), and this was in concordance with the trace obtained from our ABM model. Specifically, having simulated the 5-week period preceding intervention measures, we inferred all local transmission links within HHs, HCs, and local government areas that map to the census statistical areas (SAs). Each directed link connecting two infected individuals in the same mixing context is detected if the infected agents share the same HH, HC or SA identifier, and the direction is inferred using the relevant simulation time steps. Then, the fraction of local community transmissions is determined as the ratio between the number of the inferred transmission links and the number of total infections during the corresponding time period. These fractions ranged between 18.6% (std. dev. 2.9%) for HH and HC combined, and 34.9% (std. dev. 8.2%) for all transmissions within HH, HC and SA, broadly agreeing with the fraction identified through genomic surveillance: 25.8% for all local transmissions41.
We performed our sensitivity analysis using the local (point-based) sensitivity analysis (LSA)61, as well as global sensitivity analysis with the Morris method (the elementary effect method)62. Each method computes the response of an "output" variable of interest, for example, the generation period, to the change in an "input" parameter, for example, the fraction of symptomatic cases. The response Fi,j of the state variable yj to parameter xi from a scaled vector of all k input parameters, X = [0, 1]k, is determined as a finite difference
$${F}_{i,j}=\frac{{y}_{j}({x}_{1},{x}_{2},\ldots ,{x}_{i}+\Delta ,{x}_{i+1},\ldots ,{x}_{k})-{y}_{j}({\bf{X}})}{\Delta },$$
where Δ is a discretisation step, dividing each dimension of the parameter space. The distribution of each response Fi,j is obtained by repeated random sampling with a number of simulation runs per step. In LSA, an input parameter is varied, while keeping other inputs set at their base points, that is, default values. In the Morris method, an input parameter is varied at a number of different points sampled across the domains of other parameters. The mean \({\mu }_{i,j}^{* }\) of the absolute response ∣Fi,j∣ serves to capture the influence of the parameter xi on the output yj: a large mean suggests a higher sensitivity. The standard deviation σi,j of the response Fi,j is a complementary measure of sensitivity: a large deviation indicates that the dependency between the input and output is nonlinear. In the Morris method, a large deviation may also indicate that the input parameter interacts with other parameters63. Importantly, the responses are not directly comparable across the output variables, and instead are ranked across the inputs for each output. A model is generally considered robust if most of the dependencies are characterised by low means and deviations, with the variations contained within acceptable ranges of the output variables. Appendix D in Supplementary information summarises the investigated ranges and results of the sensitivity analysis.
International travel restrictions
In our model, restriction on international arrivals is set to be enforced from the moment when the number of confirmed infections exceeds the threshold of 2000 cases. This concurs well with the actual epidemic timeline in Australia, which imposed a ban on all arrivals of non-residents, non-Australian citizens, from 9 p.m. of 20 March 2020, with a requirement for strict self-isolation of returning citizens. The number of COVID-19 cases crossed 1000 cases on 21 March 2020, and doubled to slightly over 2000 on 24 March 2020, so the 2000 threshold chosen on our model reflects a delay in implementing the measures. The restriction on international arrivals is included in modelling of all other strategies, and is not traced independently, as this mitigation approach is not under debate.
Case isolation
The CI mitigation strategy assumes that 70% of symptomatic cases stay at home, reduce their non-household contacts by 75% (so that their transmission rates decrease to 25% of the baseline rate) and maintain their household contacts (i.e. their transmission rates within household remain unchanged). The assumption that even relatively mild symptomatic cases are identified and isolated is justified by the practice adopted in Australia where a comprehensive disease surveillance regime was consistently implemented. This included screening of syndromic fever and cough in combination with exhaustive case identification and management, thus enabling early detection (e.g. >1% of the Australian population has been tested for the coronavirus by early April 2020, and the numbers of tests conducted in Australia per new confirmed case of COVID-19, as well as per capita, remain among the highest in the world)24,41,64,65.
Home quarantine
In our model of the HQ strategy for household contacts of index cases, we allow compliance to vary within affected households (i.e. at the individual level). In our implementation, 50% of individuals will comply with HQ if a member of their household becomes ill. We simulate this as a reduction to 25% of their usual non-household contact rates, and a consequent doubling of their contact rates within the household. Both CI and HQ strategies are assumed to be in force from the first day of the epidemic, as has been the case in Australia.
If an individual complies with SD, all working group contacts are removed, and all non-household contact rates are set to 50% of the baseline value, while keeping contact rates within households unaltered. To simulate imposition of the intervention policy by the federal government, the SD strategy is triggered by crossing the threshold of 2000 cases (matching the actual timeline on 24 March 2020). An alternative threshold of 1000 cases, matching the actual numbers reported on 21 March 2020, is considered to evaluate a delayed introduction of strong SD measures (Appendix H in Supplementary information). In our study, we vary the SD compliance level from 0 to 100% (full lockdown); the compliance level is simply the percentage of individuals who comply with the measure.
School closure removes students, their teachers and a fraction of parents from daytime interactions (their corresponding transmission rates are set to zero), but increases their interaction rates within households (with a 50% increase in household contact rates). All students and teachers are affected. For each affected household, a randomly selected parent chooses to stay at home, with a varying degree of commitment. Specifically, we compared 25 or 50% commitment, as in Australia there is no legal age for leaving school-age children home alone for a reasonable time, in relevant circumstances. This parameter range is concordant with the report of ABS, summarising a survey of household impacts of COVID-19 during early April 2020: the proportion of adults keeping their children home from school or childcare reached 24.9%66. The upper considered limit, a half of parents, accounts for reasonable scenarios ensuring adequate parental supervision. School closures are assumed to be followed with 100% compliance, and may be concurrent with all other strategies described above. The SC strategy is also triggered by crossing the threshold of 2000 cases. We note that the Australian Federal Government has, so far, not enforced schools closures, and so we investigate the SC intervention separately from, or coupled with, the SD strategy. Hence, the evaluation of school closures provides an input to policy setting, rather than forecasts possible epidemic dynamics.
The agents affected by various compliance choices are determined in the beginning of each simulation run, with dependency between voluntary measures that does not allow an individual to be compliant with HQ if they are not also compliant with CI. Then, the relevant changes in contact behaviour are applied to the selected agents in every 12-h cycle. The restrictions are applied in a specific order: CI, HQ, SD and SC, with only the most relevant distancing assigned during each simulation cycle. For example, if a student is ill and in CI, the contact reduction factors associated with home quarantine, SD, and school closure would not apply to them, even if they are considered compliant with those measures. The micro- and macro-distancing parameters defining the levels of compliance, together with the affected non-household and household contacts are summarised in Table 2.
Table 2 The micro- and macro-distancing parameters: macro-compliance levels and context-dependent micro-distancing levels.
Duration of measures
While the CI and HQ strategies are assumed to last during the full course of the epidemic, we vary the duration of SD and/or SC strategies across a range of intervals, with a specific focus on 49 and 91 days, that is, 7 or 13 weeks.
Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
The data can be made available to approved bona fide researchers after their host institution has signed a Data Access/Confidentiality Agreement with the University of Sydney. Mediated access will enable data to be shared and results to be confirmed without unduly compromising the University's ability to commercialise the software. To the extent that this data sharing does not violate the commercialisation and licensing agreements entered into by the University of Sydney, the data will be made publicly available after the appropriate licensing terms agreed. Post-processing Source Data and Supplementary Data (Supplementary Data 1 and 2) are provided with this paper. Source data are provided with this paper.
Code availability
The code can be made available to approved bona fide researchers after their host institution has signed a Data Access/Confidentiality Agreement with the University of Sydney. Mediated access will enable code to be shared and results to be confirmed without unduly compromising the University's ability to commercialise the software. To the extent that this code sharing does not violate the commercialisation and licensing agreements entered into by the University of Sydney, the code will be made publicly available after the appropriate licensing terms agreed.
National Health Commission (NHC) of the People's Republic of China. NHC daily reports. http://www.nhc.gov.cn/yjb/pzhgli/new_list.shtml (2020).
Wang, C., Horby, P. W., Hayden, F. G. & Gao, G. F. A novel coronavirus outbreak of global health concern. Lancet 395, 470–473 (2020).
WHO. Report of the WHO-China Joint Mission on Coronavirus Disease 2019 (COVID-19) (WHO, 2020).
The Novel Coronavirus Pneumonia Emergency Response Epidemiology Team. Vital surveillances: the epidemiological characteristics of an outbreak of 2019 novel coronavirus diseases (COVID-19)—China, 2020. China CDC Weekly 2, 113–122 (2020).
WHO. WHO Director-General's opening remarks at the media briefing on COVID-19—11 March 2020 (2020). https://www.who.int/dg/speeches/detail/who-director-general-s-opening-remarks-at-the-media-briefing-on-covid-19---11-march-2020 (2020).
Lenzen, M. et al. Global socio-economic losses and environmental gains from the Coronavirus pandemic. PLoS ONE 15, 1–13 (2020).
Longini, I. M. et al. Containing pandemic influenza at the source. Science 309, 1083–1087 (2005).
ADS CAS Article PubMed Google Scholar
Ferguson, N. M. et al. Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature 437, 209–214 (2005).
Nsoesie, E. O., Beckman, R. J. & Marathe, M. V. Sensitivity analysis of an individual-based model for simulation of influenza epidemics. PLoS ONE 7, 0045414 (2012).
ADS Article Google Scholar
Nsoesie, E. O., Brownstein, J. S., Ramakrishnan, N. & Marathe, M. V. A systematic review of studies on forecasting the dynamics of influenza outbreaks. Influenza Other Respir. Viruses 8, 309–316 (2014).
Ferguson, N. M. et al. Imperial College COVID-19 Response Team. Impact of non-pharmaceutical interventions (NPIs) to reduce COVID-19 mortality and healthcare demand. Preprint at https://doi.org/10.25561/77482 (2020).
Halloran, M. E., Longini, I. M., Nizam, A. & Yang, Y. Containing bioterrorist smallpox. Science 298, 1428–1432 (2002).
Eubank, S. et al. Modelling disease outbreaks in realistic urban social networks. Nature 429, 180 (2004).
ADS CAS Article PubMed PubMed Central Google Scholar
Longini, I. M., Halloran, M. E., Nizam, A. & Yang, Y. Containing pandemic influenza with antiviral agents. Am. J. Epidemiol. 159, 623–633 (2004).
Germann, T. C., Kadau, K., Longini, I. M. & Macken, C. A. Mitigation strategies for pandemic influenza in the United States. Proc. Natl Acad. Sci. USA 103, 5935–5940 (2006).
Barrett, C., Bisset, K., Leidig, J., Marathe, A. & Marathe, M. V. An integrated modeling environment to study the co-evolution of networks, individual behavior and epidemics. AI Mag. 31, 75–87 (2010).
Balcan, D. et al. Modeling the spatial spread of infectious diseases: the global epidemic and mobility computational model. J. Comput. Sci. 1, 132–145 (2010).
Chao, D. L., Halloran, M. E., Obenchain, V. J. & Longini Jr, I. M. FluTE, a publicly available stochastic influenza epidemic simulation model. PLoS Comput. Biol. 6, e1000656 (2010).
ADS MathSciNet Article PubMed PubMed Central Google Scholar
Cliff, O. M. et al. Investigating spatiotemporal dynamics and synchrony of influenza epidemics in Australia: an agent-based modelling approach. Simul. Model. Pract. Theory 87, 412–431 (2018).
Zachreson, C. et al. Urbanization affects peak timing, prevalence, and bimodality of influenza pandemics in Australia: results of a census-calibrated model. Sci. Adv. 4, eaau5294 (2018).
ADS Article PubMed PubMed Central Google Scholar
Harding, N., Spinney, R. E. & Prokopenko, M. Phase transitions in spatial connectivity during influenza pandemics. Entropy 22, 133 (2020).
Zachreson, C., Fair, K. M., Harding, N. & Prokopenko, M. Interfering with influenza: nonlinear coupling of reactive and static mitigation strategies. J. R. Soc. Interface 17, 20190728 (2020).
Efron, B. & Tibshirani, R. J. An Introduction to the Bootstrap, Vol. 57 (Chapman & Hall, New York, 1994).
Moss, R. et al. Modelling the impact of COVID-19 in Australia to inform transmission reducing measures and health system preparedness. medRxiv https://doi.org/10.1101/2020.04.07.20056184 (2020).
Anderson, R. M. & May, R. M. Vaccination and herd immunity to infectious diseases. Nature 318, 323–329 (1985).
Yeomans, J. M. Statistical Mechanics of Phase Transitions (Clarendon Press, 1992).
Newman, M. E. & Watts, D. J. Scaling and percolation in the small-world network model. Phys. Rev. E 60, 7332 (1999).
ADS CAS Article Google Scholar
Newman, M. E. Spread of epidemic disease on networks. Phys. Rev. E 66, 016128 (2002).
ADS MathSciNet CAS Article Google Scholar
Harding, N., Nigmatullin, R. & Prokopenko, M. Thermodynamic efficiency of contagions: a statistical mechanical analysis of the SIS epidemic model. Interface Focus 8, 20180036 (2018).
Harding, N., Spinney, R. E. & Prokopenko, M. Population mobility induced phase separation in SIS epidemic and social dynamics. Sci. Rep. 10, 7646 (2020).
Guisoni, N., Loscar, E. & Albano, E. Phase diagram and critical behavior of a forest-fire model in a gradient of immunity. Phys. Rev. E 83, 011125 (2011).
Hoang, A. et al. COVID-19 in 7780 pediatric patients: a systematic review. EClinicalMedicine 24, 100433 (2020).
Group of Eight (Go8) Australian Universities Taskforce. COVID-19 Roadmap to Recovery: A Report for the Nation (Group of Eight (Go8) Australian Universities Taskforce, 2020).
Cauchemez, S. et al. Role of social networks in shaping disease transmission during a community outbreak of 2009 H1N1 pandemic influenza. Proc. Natl Acad. Sci. USA 108, 2825–2830 (2011).
Meyers, L. A., Newman, M., Martin, M. & Schrag, S. Applying network theory to epidemics: control measures for Mycoplasma pneumoniae outbreaks. Emerg. Infect. Dis. 9, 204 (2003).
Article PubMed Central Google Scholar
Small, M. & Cavanagh, D. Modelling strong control measures for epidemic propagation with networks—a COVID-19 case study. IEEE Access 8, 109719–109731 (2020).
Antia, R., Regoes, R. R., Koella, J. C. & Bergstrom, C. T. The role of evolution in the emergence of infectious diseases. Nature 426, 658–661 (2003).
Dehning, J. et al. Inferring change points in the spread of COVID-19 reveals the effectiveness of interventions. Science https://doi.org/10.1126/science.abb9789 (2020).
Piraveenan, M., Prokopenko, M. & Zomaya, A. Y. Assortativeness and information in scale-free networks. Eur. Phys. J. B 67, 291–300 (2009).
Cliff, O. et al. Network properties of Salmonella epidemics. Sci. Rep. 9, 6159 (2019).
Rockett, R. J. et al. Revealing COVID-19 transmission in Australia by SARS-CoV-2 genome sequencing and agent based modelling. Nat. Med. https://doi.org/10.1038/s41591-020-1000-7 (2020).
Mossong, J. et al. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 5, e74 (2008).
Chang, S. L., Piraveenan, M., Pattison, P. & Prokopenko, M. Game theoretic modelling of infectious disease dynamics and intervention methods: a review. J. Biol. Dyn. 14, 57–89 (2020).
MathSciNet Article PubMed Google Scholar
Gros, C., Valenti, R., Schneider, L., Valenti, K. & Gros, D. Containment efficiency and control strategies for the corona pandemic costs. arXiv:2004.00493 (2020).
Walker, P. G. T. et al. Imperial College COVID-19 Response Team. The global impact of COVID-19 and strategies for mitigation and suppression. Science. https://doi.org/10.1126/science.abc0035 (2020).
Dignum, F. et al. Analysing the combined health, social and economic impacts of the corovanvirus pandemic using agent-based social simulation. Minds Mach. 30, 177–194 (2020).
Fair, K. M., Zachreson, C. & Prokopenko, M. Creating a surrogate commuter network from Australian Bureau of Statistics census data. Sci. Data 6, 150 (2019).
Guan, W.-j. et al. Clinical characteristics of coronavirus disease 2019 in China. N. Engl. J. Med. 382, 1708–1720 (2020).
Li, R. et al. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV2). Science https://doi.org/10.1126/science.abb3221 (2020).
Kucharski, A. J. et al. Early dynamics of transmission and control of COVID-19: a mathematical modelling study. Lancet Infect. Dis. https://doi.org/10.1016/S1473-3099(20)30144-4 (2020).
Liu, Y., Gayle, A. A., Wilder-Smith, A. & Rocklöv, J. The reproductive number of COVID-19 is higher compared to SARS coronavirus. J. Travel Med. 27, taaa021 (2020).
Li, Q. et al. Early transmission dynamics in Wuhan, China, of novel coronavirus–infected pneumonia. N. Engl. J. Med. 382, 1199–1207 (2020).
Huang, H. et al. Epidemic features and control of 2019 novel coronavirus pneumonia in Wenzhou, China. Preprints with Lancet (2020).
Shim, E., Tariq, A., Choi, W., Lee, Y. & Chowell, G. Transmission potential and severity of COVID-19 in South Korea. Int. J. Infect. Dis. 93, 339–344 (2020).
Mizumoto, K., Omori, R. & Nishiura, H. Age specificity of cases and attack rate of novel coronavirus disease (COVID-19). medRxiv https://doi.org/10.1101/2020.03.09.20033142 (2020).
Linton, N. M. et al. Incubation period and other epidemiological characteristics of 2019 novel coronavirus infections with right truncation: a statistical analysis of publicly available case data. J. Clin. Med. 9, 538 (2020).
Nishiura, H. et al. Estimation of the asymptomatic ratio of novel coronavirus infections (covid-19). Int. J. Infect. Dis. https://doi.org/10.1016/j.ijid.2020.03.020 (2020).
Mizumoto, K., Kagaya, K., Zarebski, A. & Chowell, G. Estimating the asymptomatic proportion of coronavirus disease 2019 (COVID-19) cases on board the Diamond Princess cruise ship, Yokohama, Japan, 2020. Eurosurveillance 25, 2000180 (2020).
Bi, Q. et al. Epidemiology and transmission of COVID-19 in Shenzhen China: analysis of 391 cases and 1,286 of their close contacts. medRxiv https://doi.org/10.1101/2020.03.03.20028423 (2020).
Dong, Y. et al. Epidemiological characteristics of 2143 pediatric patients with 2019 coronavirus disease in China. Pediatrics https://doi.org/10.1542/peds.2020-0702 (2020).
Cacuci, D. G. Sensitivity and Uncertainty Analysis: Theory, Vol. 1 (Chapman & Hall/CRC, 2003).
Morris, M. D. Factorial sampling plans for preliminary computational experiments. Technometrics 33, 161–174 (1991).
Wu, J., Dhingra, R., Gambhir, M. & Remais, J. V. Sensitivity analysis of infectious disease models: methods, advances and their application. J. R. Soc. Interface 10, 20121018 (2013).
Johns Hopkins University. Coronavirus COVID-19 Global Cases (Johns Hopkins University, Baltimore, 2020).
Lokuge, K. et al. Exit strategies: optimising feasible surveillance for detection, elimination and ongoing prevention of COVID-19 community transmission. medRxiv https://doi.org/10.1101/2020.04.19.20071217 (2020).
Australian Bureau of Statistics. Household Impacts of COVID-19 Survey, 1–6 Apr 2020 (Australian Bureau of Statistics, 2020).
Wikipedia Contributors. 2019–20 coronavirus pandemic in mainland China; 2020 coronavirus pandemic in Wikipedia; nation: Australia; France; Germany; Iran; Italy; South Korea; Spain; the United States. The Free Encyclopedia (2020).
We are grateful to Stuart Kauffman, Edward Holmes, Joel C. Miller, Paul Ormerod, Kristopher Fair, Philippa Pattison, Mahendra Piraveenan, Manoj Gambhir, Joseph Lizier, Peter Wang, John Parslow, Jonathan Nolan, Neil Davey, Vitali Sintchenko, Tania Sorrell, Ben Marais, and Stephen Leeder, for discussions of various intricacies involved in agent-based modelling of infectious diseases, and computational epidemiology in general. We were supported through the Australian Research Council grants DP160102742 (S.L.C., N.H., O.M.C., C.Z., M.P.) and DP200103005 (M.P.). ACEMod is registered under The University of Sydney's invention disclosure CDIP Ref. 2019-123. AMTraC-19 is registered under The University of Sydney's invention disclosure CDIP Ref. 2020-018. We are thankful for a support provided by High-Performance Computing (HPC) service (Artemis) at the University of Sydney.
Centre for Complex Systems, Faculty of Engineering, University of Sydney, Sydney, NSW, 2006, Australia
Sheryl L. Chang, Nathan Harding, Cameron Zachreson, Oliver M. Cliff & Mikhail Prokopenko
Marie Bashir Institute for Infectious Diseases and Biosecurity, University of Sydney, Westmead, NSW, 2145, Australia
Mikhail Prokopenko
Sheryl L. Chang
Nathan Harding
Cameron Zachreson
Oliver M. Cliff
S.L.C., N.H., O.M.C. and M.P. developed and calibrated COVID-19 epidemiological model. C.Z. implemented intervention strategies. S.L.C. carried out computational simulation, prepared figures, source and supplementary data files. S.L.C., C.Z., O.M.C. and M.P. performed sensitivity analysis and tested the model. M.P. conceived the study and drafted the manuscript, with all authors contributing. All authors contributed to analysis and interpretation of the results, and gave final approval for publication.
Correspondence to Mikhail Prokopenko.
Peer review information Nature Communications thanks Michael Small, James Wood, and the other, anonymous reviewer(s) for their contribution to the peer review of this work. Peer review reports are available.
Descriptions of Additional Supplementary Files
Supplementary Data 1
Source data
Chang, S.L., Harding, N., Zachreson, C. et al. Modelling transmission and control of the COVID-19 pandemic in Australia. Nat Commun 11, 5710 (2020). https://doi.org/10.1038/s41467-020-19393-6
Tracking the Wings of Covid-19 by Modeling Adaptability with Open Mobility Data
José Sousa
& João Barata
Applied Artificial Intelligence (2021)
Smart technologies driven approaches to tackle COVID-19 pandemic: a review
Hameed Khan
, K. K. Kushwah
, Saurabh Singh
, Harshika Urkude
, Muni Raj Maurya
& Kishor Kumar Sadasivuni
3 Biotech (2021)
Risk mapping for COVID-19 outbreaks in Australia using mobility data
, Lewis Mitchell
, Michael J. Lydeamore
, Nicolas Rebuli
, Martin Tomko
& Nicholas Geard
Journal of The Royal Society Interface (2021)
Contact network models matching the dynamics of the COVID-19 spreading
Matúš Medo
Journal of Physics A: Mathematical and Theoretical (2021)
COVID-19 PANDEMİSİ EVDE KAL UYGULAMASI: TOPLUMUN TUTUM VE DAVRANIŞLARI
Mahmut KILIÇ
, Güllü USLUKILIÇ
& Şerife OK
Bozok Tıp Dergisi (2020)
Editors' Highlights
Top Articles of 2019
Nature Communications ISSN 2041-1723 (online)
|
CommonCrawl
|
Statistical distributions
Revision as of 07:23, 13 July 2011 by Philip Schrodt (talk | contribs) (→Student's t Distribution)
2.1 Need to do
3 The Normal Distribution
3.1 Bases for the Normal Distribution
3.2 Properties of the Normal Distribution
3.3 The Standard Normal Distribution
3.4 Why do we care about the normal distribution?
3.5 Additional points needed on the normal
4 The <math>\chi^{2}</math> Distribution
4.1 Characteristics of the <math>\chi^{2}</math> Distribution
4.2 Derivation of the <math>\chi^{2}</math> from Gamma functions
4.3 Additional points needed on the chi-square
5 Student's <math>t</math> Distribution
5.1 Additional points needed on the t distribution
6 The <math>F</math> Distribution
6.1 Additional points needed on the F distribution
7 Summary: Relationships Among Continuous Distributions
In the previous chapter we discussed probability theory, which we expressed in terms of a variable $X$. We defined $X$ as a set of realizations of some process, which in turn is governed by rules of probability regarding potential outcomes in the sample space.
The variables we were talking about have been what are called random variables, which means that they have a probability distribution. As we noted before, broadly speaking, there are two kinds of random variables: discrete and continuous.
Discrete variables can take on any one of several distinct, mutually-exclusive values.
Congressperson's ideology score {0,1,2,3...,100}
An individual's political affiliation (Democrat, Republican, Independent}
Whether or not a country is a member of European Union (true/false)
A Continuous variable can take on any value in its range.
Individual income
National population
This chapter focuses on a family of continuous distributions that are the most widely used in statistical inference, and are found in a wide variety of contexts, both applied and theoretical. The <math>Normal</math> distribution is the well-known "bell-shaped curve" that most students usually encounter first in the artificial context of academic testing, but due to a powerful result called the Central Limit Theorem, occurs in a wide variety of uncontrolled situations where the value of a random variables is determined by the average effect of a large number of random variables with any combination of distributions. The <math>\chi^{2}</math>, <math>t</math> and <math>F</math> distributions can be derived from various products of normally-distributed variables, and are used extensively in statistical inference and applied statistics, so it's useful to understand them in a bit of depth.
Need to do
Philip Schrodt 06:57, 13 July 2011 (PDT)
Probably need to get most of the probability chapter---which are the moment hasn't been started---written before this one. In particular, will the pdf and cdf be defined there or here?
Add some of the discrete distributions, particularly the binomial
Add the uniform?
Do we add---or link to on another page---the derivation of the mean and standard errors for these: that code is available in CCL on an assortment of places on the web
The Normal Distribution
We are all used to seeing normal distributions described, and to hearing that something is "normally distributed." We know that a normal distribution is "bell-shaped," and symmetrical, and probably that it has some mean and some standard deviation.
Formally, if <math>X</math> is a normally distributed variate with mean <math>\mu</math> and variance <math>\sigma^{2}</math>, then:
<math>f(x) = \frac{1}{\sigma \sqrt{2\pi}} \text{exp} \left( - \frac{(x - \mu)^{2}}{2 \sigma^{2}} \right)</math>.
We denote this <math>X \sim N(\mu,\sigma^{2})</math>, and say ``<math>X</math> is distributed normally with mean mu and variance sigma squared. The symbol <math>\phi</math> is often used as a shorthand to represent the normal density in \eqref{normalden}:
<math>X \sim \phi_{\mu, \sigma^{2}}</math>.
The corresponding normal CDF -- which is the probability of a normal random variate taking on a value less than or equal to some specified number -- is (as always) the indefinite integral of \eqref{normalden}. This has no simple closed-form solution, so we typically just write:
<math>F(x) \equiv \Phi_{\mu, \sigma^{2}}(x) = \int \phi_{\mu, \sigma^{2}} f(x) d x.</math>
Here are a bunch of normal curves
Bases for the Normal Distribution
The most common justification for the normal distribution has its roots in the 'central limit theorem'. Consider <math>i = {1,2,...N}</math> independent, real-valued random variates $<math>X_{i}</math>$, each with finite mean $<math>\mu_{i}</math>$ and variance <math>\sigma^{2}_{i} > 0</math>. If we consider a new variable $<math>X</math>$ defined as the sum of these variables:
<math>X = \sum_{i=1}^{N} X_{i}</math>
then we know that
<math> \text{E}(X) = \sum_{i=1}^{N} \mu_{i} </math>
<math> \text{Var}(X) = \sum_{i=1}^{N} \sigma^{2}_{i} </math>
The central limit theorem states that:
<math> \underset{N \rightarrow \infty}{\lim} X = \underset{N \rightarrow \infty}{\lim} \sum_{i=1}^{N} X_{i} \overset{D}{\rightarrow} N(\cdot) </math>
where the notation <math>\overset{D}{\rightarrow}</math> indicates convergence in distribution. That is, as <math>N</math> gets sufficiently large, the distribution of the sum of <math>N</math> independent random variates with finite mean and variance will converge to a normal distribution. As such, we often think of a normal distribution as being appropriate when the observed variable <math>X</math> can take on a range of continuous values, and when the observed value of <math>X</math> can be thought of as the product of a large number of relatively small, independent ``shocks or perturbations.
Properties of the Normal Distribution
A normal variate <math>X</math> has support in <math>\mathfrak{R}</math>.
The normal is a two-parameter distribution, where <math>\mu \in (-\infty, \infty)</math> and <math>\sigma^{2} \in (0, \infty)</math>.
The normal distribution is always symmetrical (<math>M_{3} = 0</math>) and mesokurtic.
item The normal distribution is preserved under a linear transformation. That is, if <math>X \sim N(\mu,\sigma^{2})</math>, then <math>aX + b \sim N(a\mu + b, a^{2} \sigma^{2})</math>. (Why? Recall our earlier results on <math>\mu</math> and <math>\sigma^{2}</math>).
The Standard Normal Distribution
One linear transformation is especially useful:
<math> \begin{align}
b & = \frac{-\mu}{\sigma} \\
a & = \frac{1}{\sigma}
\end{align} </math>.
This yields:
ax + b & \sim N(a\mu+b, a^{2} \sigma^{2}) \\
& \sim N(0,1)
\end{align} </math>
This is the standard normal density function. We often denote this <math>\phi(\cdot)</math>, and say that "X is distributed as standard normal." We can also get this by transforming ("standardizing") the normal variate <math>X</math>...
If <math>X \sim N(\mu,\sigma^{2})</math>, then <math>Z = \frac{(x - \mu)}{\sigma} \sim N(0,1)</math>.
The density function then reduces to:
<math> f(z) = \equiv \phi(z) = \frac{1}{\sqrt{2\pi}} \text{exp} \left[ - \frac{(z)^{2}}{2} \right] </math>
Similarly, we often write the CDF for the standard normal as <math>\Phi(\cdot)</math>.
Why do we care about the normal distribution?
The normal distribution's importance lies in its relationship to the central limit theorem. As we'll discuss at more length later, the central limit theorem means that as one's sample size increases, the distribution of sample means (or other estimates) approaches a normal distribution.
Additional points needed on the normal
More extended discussion of the CLT, and a note that if we are dealing with a data generating process where the "error" is the average (or cumulative) effect of a large number of random variables with a variety of distributions, the CLT tells us that the net effect will be normally distributed. This, in turn, explains why linear models that assume Normally distributed error---regression and ANOVA---have proven to be so robust in practice
Link to a number of examples of normally distributed data...should be easy to find these on the web. E.g. the classical height. Maybe SAT scores, though these are artificially normal
ref to the wikipedia article; there is also a nice graphic to snag from there---introductory sidebar---which shows the standard normal
sidebar on the log-normal?
something about the bivariate normal and some nice graphics of this?
sidebar on the issue of fat tails and how these destroyed the economy in 2007?---there is a fairly readable Wired article on this: http://www.wired.com/techbiz/it/magazine/17-03/wp_quant
The <math>\chi^{2}</math> Distribution
The chi-square (<math>\chi^{2}</math>) distribution is a one-parameter distribution defined only how positive values. If <math>Z \sim N(0,1)</math>, then <math>Z^{2} \sim \chi^{2}_{1}</math>. That is, the square of a <math>N(0,1)</math> variable is chi-squared with one degree of freedom. The fact that the square of a standard normal variate is a one-degree-of-freedom chi-square variable also explains why (e.g.) a chi-squared variate is only defined for nonnegative real numbers. If <math>W_{1},W_{2},...W_{k}</math> are all independent <math>\chi^{2}_{1}</math> variables, then <math>\sum_{i=1}^{k}W_{i} \sim \chi^{2}_{k}</math>. (The sum of <math>k</math> independent chi-squared variables is chi-squared with <math>k</math> degrees of freedom). By extension, the sum of the squares of <math>k</math> independent <math>N(0,1)</math> variables are also <math>\sim \chi^{2}_{k}</math>.
The <math>\chi^{2}</math> distribution is positively skewed, with <math>\text{E}(W) = k</math> and <math>\text{Var}(W) = 2k.</math>
Figure below presents five <math>\chi^{2}</math> densities with different values of <math>k</math>.
Need to define degrees of freedom here
Characteristics of the <math>\chi^{2}</math> Distribution
If <math>W_{j}</math> and <math>W_{k}</math> are independent <math>\chi^{2}_{j}</math> and <math>\chi^{2}_{k}</math> variables, respectively, then <math>W_{j} + W_{k}</math> is <math>\sim \chi^{2}_{j+k}</math>; this result can be extended to any number of independent chi-squared variables. This in turn implies the result the sum of the squares of <math>k</math> independent <math>N(0,1)</math> variables are also <math>\sim \chi^{2}_{k}</math>
Derivation of the <math>\chi^{2}</math> from Gamma functions
Gill discusses the <math>\chi^{2}</math> distribution as a special case of the gamma PDF. That's fine, but there's actually a much more intuitive way of thinking about it, and one that comports more closely with how it is (most commonly) used in statistics. Formally, a variable <math>W</math> that is distributed as <math>\chi^{2}</math> with <math>k</math> degrees of freedom has a density of:
<math>\begin{align} f(w) &=& \frac{1}{2^{k} \Gamma(k)} w^{k} \text{exp} \left[ \frac{-w}{2} \right] \\
&=& \frac{w^{\frac{k-2}{2}} \exp(\frac{-w}{2})}{2^{\frac{k}{2}} \Gamma(\frac{k}{2})}
where <math>\Gamma(k) = \int_{0}^{\infty} t^{k - 1} \text{exp}(-t) \, dt</math> is the gamma integral (see, e.g., Gill, p.\ 222). As with the normal distribution, the need to write the distribution in this fashion reflects the fact that it has no closed-form solution. The corresponding CDF is
<math> F(w)=\frac{\gamma(k/2,w/2)}{\Gamma(k/2)} </math>
where <math>\Gamma(\cdot)</math> is as before and <math>\gamma(\cdot)</math> is the \texttt{http://en.wikipedia.org/wiki/Incomplete\_Gamma\_function}{lower incomplete gamma function}. We write this\footnote{One also occasionally sees <math>W \sim \chi^{2}(k)</math>, with the degrees of freedom in parentheses.} as <math>W \sim \chi^{2}_{k}</math>, and say ``<math>W</math> is distributed as chi-squared with <math>k</math> degrees of freedom. \\
Additional points needed on the chi-square
Probably want to mention the use in contingency tables here, since the connection isn't obvious.
Agresti and Finlay state this was introduced by Pearson in 1900, apparently in the context of contingency tables---confirm this, any sort of story here?
As df becomes very large, the chi-square approximates the normal; this is a asymptotic distribution and for practical purposes, can be used if df > 50
Discuss more about the assumption of statistical independence?
Chi-square as the test for comparing whether an observed frequency fits a known distribution
Student's <math>t</math> Distribution
For a variable <math>X</math> which is distributed as <math>t</math> with <math>k</math> degrees of freedom, the PDF function is:
<math> f(x) = \frac{\Gamma(\frac{k+1}{2})} {\sqrt{k\pi}\,\Gamma(\frac{k}{2})} \left(1+\frac{x^2}{k} \right)^{-(\frac{k+1}{2})}\! </math>
where once again <math>\Gamma(\cdot)</math> is the gamma integral. We write <math>X \sim t_{k}</math>, and say ``<math>X</math> is distributed as Student's <math>t</math> with <math>k</math> degrees of freedom. The figure below presents <math>t</math> densities for five different values of <math>k</math>, along with a standard normal density for comparison.
The t-distribution is sometimes known as "Student's t", after a then-anonymous ``student of the statistician Karl Pearson. The story, from Wikipedia,
The t-statistic was introduced in 1908 by William Sealy Gosset, a chemist working for the Guinness brewery in Dublin, Ireland ("Student" was his pen name). Gosset had been hired due to Claude Guinness's innovative policy of recruiting the best graduates from Oxford and Cambridge to apply biochemistry and statistics to Guinness' industrial processes. Gosset devised the t-test as a way to cheaply monitor the quality of stout. He published the test in Biometrika in 1908, but was forced to use a pen name by his employer, who regarded the fact that they were using statistics as a trade secret. In fact, Gosset's identity was unknown to fellow statisticians.
Note a few things about <math>t</math>:
The mean/mode/median of a <math>t</math>-distributed variate is zero, and its variance is <math>\frac{k}{k - 2}</math>.
<math>t</math> looks like a standard normal distribution (symmetrical, bell-shaped) but has thicker ``tails (read: higher probabilities of draws being relatively far from the mean/mode). However...
...as <math>k</math> gets larger, <math>t</math> converges to a standard normal distribution; at or above <math>k = 30</math> or so, the two are effectively indistinguishable.
The importance of the <math>t</math> distribution lies in its relationship to the normal and chi-square distributions. In particular, if <math>Z \sim N(0,1)</math> and <math>W \sim \chi^{2}_{k}</math>, and <math>Z</math> and <math>W</math> are independent, then
<math>\frac{Z}{\sqrt{W/k}} \sim t_{k} </math>
That is, the ratio of an <math>N(0,1)</math> variable and a (properly transformed) chi-squared variable follows a <math>t</math> distribution, with d.f.\ equal to the number of d.f.\ of the chi-squared variable. Of course, this also means that <math>\frac{Z^{2}}{W/k} \sim t_{k}.</math>
Since we know that <math>Z^{2} \sim \chi^{2}_{1}</math>, this means that another derivation of the <math>t</math> distribution is as a ratio of a <math>\chi^{2}_{1}</math> variate and a <math>\chi^{2}_{k}</math> variate.
Additional points needed on the t distribution
May want to note that it is ubiquitous in the inference on regression coefficients
Might want to note somewhere---this might go earlier in the discussion of df---that in most social science research (e.g. survey research and time-series cross-sections), the sample sizes are well above the point where the t is asymtotically normal. The t is actually important only in very small samples, though these can be found in situations such as small subsamples in survey research (are Hispanic ferret owners in Wyoming more likely to support the Tea Party?) and situations where the population itself is small (e.g. state membership in the EU, Latin America, or ECOWAS), and experiments with a small number of subjects or cases (this is commonly found in medical research, for example, and this also motivated Gossett's original development of the test, albeit with yeast and hops---we presume---rather than experimental subjects.). In these instances, using the conventional normal approximation to the t---in particular, the rule-of-thumb of looking for standard errors less than twice the size of the coefficient estimate to establish two-tailed 0.05 significance---will be misleading.
The <math>F</math> Distribution
An <math>F</math> distribution is best understood as the ratio of two chi-squared variates. Formally, if <math>X</math> is distributed as <math>F</math> with <math>k</math> and <math>\ell</math> degrees of freedom, then the PDF of <math>X</math> is:
<math> f(x) = \frac{\left(\frac{k\,x}{k\,x + \ell}\right)^{k/2} \left(1-\frac{k\,x}{k\,x + \ell}\right)^{\ell/2}}{x\; \mathrm{B}(k/2, \ell/2)} </math>
%where <math>\mathrm{B}(\cdot)</math> is the ``beta function.\footnote{That is, <math>\mathrm{B}(x,y) = \int_0^1t^{x-1}(1-t)^{y-1}\,dt</math>.} The corresponding CDF is (once again) complicated, so we'll skip it. We write <math>X \sim F_{k,\ell}</math>, and say ``<math>X</math> is distributed as <math>F</math> with <math>k</math> and <math>\ell</math> degrees of freedom. \\}
The <math>F</math> is a two-parameter distribution, with degrees of freedom parameters (say <math>k</math> and <math>\ell</math>), both of which are limited to the positive integers. An <math>F</math> variate <math>X</math> has support on the non-negative real line; it has expected value equal to <math>\text{E}(X) = \frac{\ell}{\ell - 2}, </math>
which implies that the mean of an <math>F</math>-distributed variable converges on 1.0 as <math>\ell \rightarrow \infty</math>. Likewise, it has variance <math>\text{Var}(X) = \frac{2\,\ell^2\,(k+\ell-2)}{k (\ell-2)^2 (\ell-4)}, </math>
which bears no simple relationship to either <math>k</math> or <math>\ell</math>. It is (generally) positively skewed. Examples of some <math>F</math> densities with different values of <math>k</math> and <math>\ell</math> are presented in Figure \ref{Fs}.
As I noted a minute ago, if <math>W_{1}</math> and <math>W_{2}</math> are independent and <math>\sim \chi^{2}_{k}</math> and <math>\chi^{2}_{\ell}</math>, respectively, then <math>\frac{W_{1}}{W_{2}} \sim F_{k,\ell} </math>
That is, the ratio of two chi-squared variables is distributed as <math>F</math> with d.f.\ equal to the number of d.f.\ in the numerator and denominator variables, respectively. This implies (at least) a couple of interesting things:
If <math>X \sim F(k, \ell)</math>, then <math>\frac{1}{X} \sim F(\ell, k)</math> (because <math>\frac{1}{X} = \frac{1}{(W_{1} / W_{2})} = \frac{W_{2}}{W_{1}}</math>).
The square of a <math>t</math> distributed variable is <math>\sim F(1,k)</math> (\textit{why}? -- take the formula for <math>t</math>, and square it...)
Additional points needed on the F distribution
Summary: Relationships Among Continuous Distributions
The substantive importance of all these distributions will become apparent as we move on to sampling distributions and statistical inference. In the meantime, it is useful to consider the relationship between the four distributions we discussion above
Retrieved from "http://wiki.opossem.org/index.php?title=Statistical_distributions&oldid=2136"
|
CommonCrawl
|
Short-term efficacy of reducing screen media use on physical activity, sleep, and physiological stress in families with children aged 4–14: study protocol for the SCREENS randomized controlled trial
Martin Gillies Banke Rasmussen ORCID: orcid.org/0000-0002-2114-21851,
Jesper Pedersen1,
Line Grønholt Olesen1,
Søren Brage1,2,
Heidi Klakk1,3,
Peter Lund Kristensen1,
Jan Christian Brønd1 &
Anders Grøntved1
BMC Public Health volume 20, Article number: 380 (2020) Cite this article
During the recent decade presence of digital media, especially handheld devices, in everyday life, has been increasing. Survey data suggests that children and adults spend much of their leisure on screen media, including use of social media and video services. Despite much public debate on possible harmful effects of such behavioral shifts, evidence from rigorously conducted randomized controlled trials in free-living settings, investigating the efficacy of reducing screen media use on physical activity, sleep, and physiological stress, is still lacking. Therefore, a family and home-based randomized controlled trial – the SCREENS trial – is being conducted. Here we describe in detail the rationale and protocol of this study.
The SCREENS pilot trial was conducted during the fall of 2018 and spring of 2019. Based on experiences from the pilot study, we developed a protocol for a parallel group randomized controlled trial. The trial is being conducted from May 2019 to ultimo 2020 in 95 families with children 4–14 years recruited from a population-based survey. As part of the intervention family members must handover most portable devices for a 2-week time frame, in exchange for classic mobile phones (not smartphones). Also, entertainment-based screen media use during leisure must be limited to no more than 3 hours/week/person. At baseline and follow-up, 7-day 24-h physical activity will be assessed using two triaxial accelerometers; one at the right hip and one the middle of the right thigh. Sleep duration will be assessed using a single channel EEG-based sleep monitor system. Also, to assess physiological stress (only assessed in adults), parameters of 24-h heart rate variability, the cortisol awakening response and diurnal cortisol slope will be quantified using data sampled over three consecutive days. During the study we will objectively monitor the families' screen media use via different software and hardware monitoring systems.
Using a rigorous study design with state-of-the-art methodology to assess outcomes and intervention compliance, analyses of data from the SCREENS trial will help answer important causal questions of leisure screen media habits and its short-term influence on physical activity, sleep, and other health related outcomes among children and adults.
NCT04098913 at https://clinicaltrials.gov [20-09-2019, retrospectively registered].
Time spent using screen-based media devices is ubiquitous in everyday life of children and adults of the twenty-first century. Rapid technological development and market introduction of handheld screen-based devices, such as smartphones and tablets, to consumers all over the world, has changed the way and the amount of time humans interact with electronic media. To the extent that self-report depicts screen time habits accurately, evidence suggests that British children and youth (8–18 years) engage in four hours and 45 min of screen time a day on average, as a main activity or while engaging in other activities [1]. Furthermore, results from the same study indicate a pronounced increase in screen time from 2010 to 2015 in British children [1] and an increase in computer use during leisure hours from 2001 to 2016 in most age groups in North America [2] has been reported. Based on a 2018 survey of 3660 school children in Denmark, 24% of boys and at least 19% of girls aged 13 and 15 spend at least four hours/day on weekdays watching movies, tv-series, Youtube-movies or entertainment shows [3]. Also, 88% of adult Danes report using the internet daily, as part of their daily routine [4]. Clearly, adults and children spend much of their leisure time engaging in some form of entertainment-based screen media.
In the public debate, there is much discussion about whether use of screen media carries a risk to our mental well-being and physical health. According to a 2016 Technical report from The American Academy Pediatrics, screen-based media use includes some beneficial effects, such as improved knowledge acquisition at an early age, access to important information and creating enhanced opportunities for communication [5]. However, there is also evidence which suggest that screen media use has a negative relationship with children and adolescents' sleep [6], as well a myriad of other aspects of health, including adiposity, unhealthy dietary pattern, symptoms of depression, a poor quality of life [7] and decreased physical activity [8]. Of concern may be the effect of excessive screen time in childhood on children's physical activity habits. Some evidence suggests that childhood physical activity habits track to some degree into young adulthood [9] and concerns may therefore be raised regarding lifelong physical inactivity.
Since just before the turn of the century, experimental studies have been conducted to investigate the effect of change in screen media use on health-related outcomes in children. Among the randomized controlled studies, which have investigated the effect of a reduction in screen media use on physical activity [10,11,12,13,14,15,16,17], only one found significant increase [14]. However, this trial was limited by measuring physical activity by self-report [14]. This and several of the other randomized controlled studies are limited by small samples sizes [10, 11, 13, 14] and only one trial measured change in screen time via an objective instrument [15]. Furthermore, only one study emphasized the impact of a screen media reduction on change in physical activity in adults (the primary caregiver) [12].
In addition to the effect of screen time on physical activity, some recent lab-based experimental evidence suggests that exposure to digital screens may also negatively affect circadian rhythm and sleep in adults [18,19,20]. Although some evidence suggests an impact of limiting screen-based media use on health, several of the studies include noteworthy methodological limitations. Therefore, we are still limited in our basic understanding of the causal relation between screen media use and physical activity, sleep and physiological stress. Furthermore, because of staggering screen media technology changes in the past decade, some existing research on screen media use may have limited generalizability to current screen time behavior and culture. Rigorously conducted randomized controlled trials in free-living settings including adults and children, employing objective measures to detect changes in both exposure and outcomes, are needed to refute or confirm hypotheses of how habitual use of screen media, in its modern form, affects physical activity, sleep and physiological stress. The SCREENS trial is a randomized controlled trial which aims to investigate the short-term efficacy of limiting leisure screen media use on objectively assessed habitual physical activity, sleep duration and quality, in parents and their 4–14-year old children and measures of physiological stress, in adults.
The objectives of the SCREENS trial are to investigate the short-term efficacy of limiting screen media use during leisure on adults' and children's:
Non-sedentary time (all activities not performed in a sitting or lying position) during leisure measured by combined hip- and thigh worn accelerometry
Total sleep time, sleep latency, and wake after sleep onset measured by home-based single channel electroencephalography (EEG) sleep monitoring
Parent reported psychological well-being, in their children
Leisure- and total time engaged with moderate- and vigorous physical activity
Also, in adults, the objectives of the study are to investigate the short-term efficacy of limiting screen media use during leisure on:
Subjectively assessed sleep quality
The cortisol awakening response and diurnal cortisol obtained from saliva sampling, as markers of physiological stress
Heart rate variability using 24-h assessment, also as a marker of physiological stress
Self-reported mental well-being and mood states
The study tests the hypotheses that restricting leisure screen media use to an amount much below habitual levels for a period of two weeks increases leisure time spent being non-sedentary, increases total sleep duration and decreases markers of physiological stress, in families of adults and children.
Methods/design
SCREENS pilot trial
From November of 2018 to March of 2019 we conducted the SCREENS (not an abbreviation) pilot trial (ClinicalTrials.cov ID: NCT03788525) in families residing in the Municipality of Middelfart on the Island of Funen, in Denmark. The purpose of the pilot was to assess degree of compliance to the prescribed intervention, compliance to home-based objective assessments of physical activity, sleep, and measures of physiological stress. The purpose was furthermore to assess feasibility of our recruitment strategy, as well as resources required of participants and researchers involved and other general aspects of conducting such a study, previously outlined in detail [21]. Preliminary results from the pilot study show that the measurement and intervention protocol generally were feasible, although adjustments prior to conducting the full-scale randomized controlled trial of the SCREENS trial, were necessary. These adjustments are included in the protocol for the full-scale SCREENS randomized controlled trial described in this paper.
SCREENS survey and randomized controlled trial
This study was commenced when a survey was sent out medio May 2019 to selected postal districts in the Municipality of Odense. The study is now being expanded to the remaining municipalities on Funen (except Middelfart). The following sections will describe in detail the methodology of the SCREENS trial, including a description of the recruitment processes based on a population-based survey. A home- and family-based screen media use reduction intervention will be evaluated using a two-arm, parallel, randomized controlled superiority trial design. This study protocol was developed in accordance with the SPIRIT 2013 checklist for study protocols of randomized controlled trials (see Additional file 1).
The recruitment consists of two stages; the first stage is sending out a survey to approximately 3000 Danish adults residing in the municipality of Odense (the fourth largest Danish municipality) or neighboring municipalities, via an electronic mailbox system (e-boks) available to Danish citizens. There is a dual purpose of the survey; first, to obtain extensive descriptive data on modern screen media behavior among adults and children 6–10 years of age, and secondly, to serve as a recruitment platform for the SCREENS trial. Survey respondents will at the end of the survey be invited to hear more about the SCREENS trial. The second stage include contacting families who have responded to this invitation. Stage 1 and stage 2 will be repeated, in the different municipalities. Adults whose families are eligible are invited to participate in the trial. A broad overview of the survey send outs and recruitment for and participation in the SCREENS trial is given in Fig. 1.
Overview of surveys and subsequent recruitment for and conduct of the SCREENS trial. A visual overview of the approximately one-and-a-half-year span of the study, which includes digitally mailing out surveys including questions regarding screen media use in children and adults. Following each survey is recruitment for and conduct of the SCREENS trial. The designation of each month on the x-axis denotes the first day of said month. Notice that the duration and timing of each wave (survey and experiment) varies, as some of the depicted waves include periods without any activity because they span holidays. However, for the sake of simplicity, this has not been changed
Stage 1: Recruitment via survey
Figure 2 gives an overview of the flow of participants through the study in its entirety. For the survey, one randomly selected adult and one randomly selected child household member between six and ten years of age will be invited. To receive the survey, the adult and the child must share an address and the adult must have full custody of the child, according to the Danish National Civil Registry. No further restrictions are put on the invitees of the survey and thus all households in the municipalities that meet the criteria above will be invited. The sampling frame of the survey invitees was gathered from the Danish National Civil Registry obtained through the National Health Data Authority. The survey includes questions for the adult pertaining to the adult's and the child's screen media habits, including amount of screen use and questions on other domains of the family screen media home environment, such as rule setting.
Flow chart of participants from recruitment to statistical analyses. The flow chart above gives a broad overview of the recruitment processes via an electronic survey, initial phone contact, meeting in the families' household, participation in the SCREENS trial and, ultimately, the statistical analyses. R; Randomization, *; Possible source of missing data, **; Stages at which participants may choose to discontinue
Respondents are asked in the survey whether they would like to be contacted regarding a different study (the SCREENS trial). Therefore, all potential participants of the SCREENS trial will be survey respondents. Those who answer 'yes' to the question in the survey and who meet the following preliminary inclusion criteria (based on survey questions) will be contacted via phone regarding participation:
The adult must be classified as having high screen media use, which we define as being above the 40th percentile for total screen time during leisure, according to a questionnaire battery, included in the survey. The 40th percentile was estimated as 2 h and ~ 23 min/day (weighed average of week- and weekend days) based on the first 1000 survey respondents (all from the Municipality of Odense). This arbitrary cut-point was defined as a compromise between assuring enough eligible adults such that recruitment into the study would progress at a reasonable pace, whole making sure to include only those with enough screen media use during leisure. This criterion was based only on the adult who completed the survey, as arguably his or her screen media use to some extent could be a proxy for the screen media use of the entire family.
To exclude families who are coping with e.g. disturbed sleep patterns and others stress factors from having either newborns, toddlers or very young children, households must include only children ≥4 years of age.
Adults must not be outside the labor market or educational system.
Adults must not have any regular night shifts.
Recruitment via surveys will continue until we reach the number of participants required for the statistical analyses (see "Justification for sample size"). We expect to have sampled enough participants for the trial following the 8th or second last survey. Thus, in the final survey, we expect to not include the question regarding the SCREENS trial (see Fig. 1, to the right).
Stage 2: Recruitment following survey
Those respondents who meet the initial criteria will receive a phone call from a member of the research team who will explain the content of the SCREENS trial. Further screening will take place in two steps; first, on the phone, we will screen for health-related matters and practical issues regarding the project. The adult must confirm that:
At least one adult and one child, from the household, is willing to participate
The family has the resources, primarily in terms of spare time, to complete the study as outlined. This includes being able to restrict leisure screen time, including in weekends, during a two-week timeframe
At least one participating adult and all participating children must be able to hand-over their smartphone(s) and tablets for the screen time restriction period
The family is motivated to restrict leisure screen time for a short period of time
These exclusion criteria for both adults and children include:
Not being able to engage in regular physical activity during everyday life
Having a diagnosed sleep disorder that continues to affect sleep
Diagnosed with or in the process of being cleared from any neuropsychiatric disorders, such as attention deficit hyperactivity disorder, or developmental disorders, such as autism spectrum disorders
Been on sick leave within the last 3 months due to stress
If families are eligible according to the criteria above, a mandatory information meeting of approximately 45 min will be held at the family's home. Here, the study will be explained in detail, also including a demonstration of the measurement equipment used in the study. At this meeting we will also register the amount of screen media units available in the household. All families will be offered a minimum of 24 h to consider whether they would like to participate. Families are handed a written consent form, which must be filled out, before participation in SCREENS trial. Figure 2 below provides a complete overview of the flow of participants through the study in its entirety.
'Active' and 'passive' participants
Based on our experience from the SCREENS pilot trial, some families choose or are only able to include some family members in the study. Those who participate we define as 'active' participants. Conversely, members of the household who do not take part in the study are defined as 'passive' participants. If a family chooses to include only partial participation of the household, it is an inclusion criterion that 'passive' participants fully support the constraints of the 2-week screen time restriction period that 'active' participants in the household must comply with. Teenagers between 15 and 17 years are by default assigned to be 'passive' participants. 'Passive' participants are not required to hand-over their smartphone(s) and tablet(s).
Participant safety
As the authors are not aware of any side-effects associated with participation in the study, the study participants are informed verbally and in writing that they should notify the researchers if they experience any side-effects or harm during the SCREENS trial. No specific protocol to mitigate adverse events has been developed and there has not been formulated a formal trial termination procedure. All study participants are informed verbally and in writing that they can withdraw their participation from the study at any time, without any need to justify their reason for doing so.
Justification of sample size
The primary outcome is average change in accelerometry-derived non-sedentary time (min/day) during leisure, in children. Based on families, who currently either have or are registered for the SCREENS trial, we expect to include 1.96 children per family. Based on preliminary data from our pilot, we expect a standard deviation of 57 min/week for average change in non-sedentary time from baseline to follow-up in the experimental group and 39.7 min/week for the control group. Based on other internal work in children 0–17 years of age, we expect a 0.3 correlation between sibling non-sedentary time. In regard to clinical relevancy, a cross-over study found that intermittent interruptions of walking, amounting to a total of 18 min during 3 hours of sitting, resulted in favorable metabolic changes, when compared to sitting only, in children 10.2 (1.5) years of age [22]. For the current study, a 24-min change is deemed a clinically relevant effect size, based on the expected between-group difference and project resources. Therefore, assuming an intraclass correlation coefficient of 0.3 for sibling non-sedentary time and a cluster size of 1.96 children per family, to detect a 24 min/day difference with a power of 80% and α = 0.05, a total of 88 families including 174 children is required for the analyses.
Thus far we have experienced a 0 % drop-out. Therefore, the main threat against achieving enough subjects for our analyses may arguably be missing data. Therefore, by sampling 95 families into the study, including a total of 186 children, we have safeguarded our primary analysis against a family dropout rate or data loss for other reasons of approximately 7.4%.
Organization of the SCREENS trial
The study is organized into a planning/working group and a data collection group. The former consists of the entire list of authors, whereas the latter consists of MGR, JP, as well as pre-graduate student SS, and student helpers Stud. Cand. Scient. SM and Stud. Cand. Scient JH.
Study design and structure of the SCREENS trial
Figure 3 gives an overview of the course of the SCREENS trial, as well as the exposure and outcome measurements. As illustrated, the SCREENS trial includes three meetings and one phone call, which takes place in the following chronological order; a baseline meeting, a post-baseline/pre-experiment meeting, a mid-experiment/pre-follow-up phone call and a post experiment/post-follow-up meeting. The meetings will all take place in the families' household and will be held by the same member of the research staff. The baseline and follow-up periods last a week (7 × 24 hours) and span 8 days. If e.g. a period starts at 5 pm on a Wednesday, the period finishes on the same day and at the same time, exactly a week later.
An overview of the SCREENS trial as well as the included measurements. The figure illustrates the course of the SCREENS trial scaled in days, including the experiment phase and the timing and duration of each outcome measurement protocol. Notice that the protocol for baseline measurements and the protocol for follow-up measurements, differ only in that there is one additional day of sleep measurement at baseline (a "test" night to get acquainted with this protocol) and that the questionnaires are administrated at opposite extremes. The first meeting is an information meeting in the families' household, whereas the second through fourth meeting take place during the trial
The intervention takes place in the families' household during everyday activities. The intervention requires that the family make several changes during everyday life, regarding screen time habits, during leisure, for 2 weeks. Two weeks was chosen as a compromise between assuring that there was enough time to detect change (including time to adapt), with how long we could expect families to heavily restrict leisure screen time. One of the main components of the intervention is that portable screen-based devices (smartphones and tablets) must be handed over to the research staff for the duration of the intervention. The devices will be stored in a locked safe at the Department of Sports Science and Clinical Biomechanics at the University of Southern Denmark. Every family member who owns a device with a sim card will be offered a Nokia 130 phone in exchange, in which their sim card will be inserted. The Nokia phone can perform the few operations that arguably are essential during everyday life; calling, text messaging, and setting alarm clocks.
During the screen media reduction period, which only targets leisure hours, a finite amount of entertainment-based screen media use is allowed. Entertainment-based screen time is defined as; watching streaming-based services (Netflix, HBO, Amazon, Youtube etc.), most broadcast television, surfing online, gaming, use of social media to connect with friends and family and more. In the context of this study, it also includes watching the news (although it may be debatable whether this is considered entertainment-based). During the 2-week screen time restriction, adults and children are allowed up to 3 hours per week per person of entertainment-based screen media use. On the contrary, necessary screen time includes brief contact via phone to plan social gatherings, including social arrangements with one's family or play dates for one's children, as well as necessary shopping online, e.g. grocery shopping. Necessary screen time also includes all screen time relating to one's children's school or nursery, e.g. reading information letters from the teacher. Adults are permitted up to 30 min of necessary screen time per day. Children or adolescents, who are required to do homework on digital screens, are permitted to do so with no constrains on time. All entertainment-based and necessary screen time must be noted in sheets that will be handed out at the post-baseline/pre-experiment meeting.
The research staff will place three to five 'intervention reminders' in the household. An 'intervention reminder' is a A5 sheet on which the rules of the intervention are listed. These will act as environmental cues, where the goal is that when family members see the reminder they are reminded of the details regarding their participation in the study. The reminders will be placed strategically in the household, including in the living room by the television, at household computers and in a place where the family often gathers, e.g. in the kitchen. Table 1. gives a summary of the core components of the screen media restriction protocol.
Table 1 Summary of core components of the screen media reduction protocol
The goal is to intervene as little as possible in the potential behavior change of the family, beyond establishing the framework of the intervention. The purpose of this is to increase as much as possible the ecological validity of the data.
The control group will continue with their everyday habits, only interrupted by follow-up measurements. Those who are randomized to the control group will be offered to complete the intervention protocol when they have finished the control group period, although they will not repeat the outcome assessments a third time.
Scheduling meetings
All meetings and phone calls will be scheduled in advance of the study. This is to ensure that meetings and measurements are structured in accordance to the overall protocol. This includes making sure that baseline and follow-up periods are placed at approximately the same days during the week (to ensure comparability). Lastly, early planning is also to ensure that the 2-week screen time restriction is placed at a time where the family can manage to implement the behavior change. Ideally, the course of the trial should be scheduled in a manner, such that baseline and the experiment phase, including follow-up measurements, are placed back to back. However, for the latter point, for practical reasons regarding the family's schedule, it may be necessary to allow a week or more to pass between end of baseline and the beginning of the experiment. Reasons for doing so could be that a family is travelling at this time or if there are upcoming television events, which the family would like watch, e.g. national soccer events, which collide with an experiment phase scheduled at an ideal time. Judgements regarding scheduling conflicts will be considered weighing the importance of assuring progress of the trial against pragmatic considerations.
Baseline meeting
At the baseline meeting, the research staff will instruct the families on how to conduct the protocol for collecting data. Because the families oversee data collection themselves, extensive training will be given regarding the protocols. At this meeting, the family members will be instructed start wearing accelerometer belts and we will also demonstrate the use of the sleep equipment (details on outcome measurement protocols later).
At the meeting we will install instruments to objectively assess screen time on the devices in the household in attempt to accurately monitor compliance. We will install an application (SDU Device Tracker) on the family members' smartphones, and tablets. Furthermore, we will mount tv-monitors on every TV in the household. Finally, tracking software for PCs will be installed on every PC in the household, if possible (the monitoring systems are described in more detail, later).
At the end of the meeting a sheet will be handed out, which outlines some of the practical matters regarding the intervention. The sheet also includes several points for discussion in the family, as they are encouraged to discuss what challenges a 2-week screen media use reduction period could entail. They are also encouraged to discuss how any issues might be resolved. This includes a discussion of how to manage everyday tasks without a smartphone at one's disposal. The family members are encouraged to write down these challenges and their potential solutions.
Post-baseline/pre-experiment meeting and randomization
At the post-baseline/pre-experiment meeting the family will be randomized by research staff using the Odense Patient data Explorative Network (OPEN) Randomize platform (part of a research infrastructure service for researchers in the Region of Southern Denmark), to either the screen media restriction protocol or to control. Staff at the OPEN Randomize platform - whom the research staff do not work with - have generated a randomization table with a hidden allocation sequence using computer generated random permuted blocks of 2–4 families and an allocation ratio of 1:1. After randomization the group allocation will not be masked to the research staff and given the nature of the intervention cannot be concealed from the participants.
At this meeting, we will also handover new accelerometers belts for the follow-up measurements. All other equipment can be re-used.
Mid-experiment/pre-follow-up phone call
Between the post-baseline/pre-experiment meeting and the post experiment/post-follow-up meeting, the research staff will phone the contact person in the family. The purpose of this phone call is twofold; first, to motivate the family to maintain the screen time reduction during the entire 2-week duration. The researcher will ask the family how well they are doing in terms of restricting their screen time according to the protocol. Secondly, the purpose of the phone call is to remind the family that they must soon start up the follow-up measurements and to ask the family if they have any questions relating to the measurement protocol. The control group will also receive a phone call, only for the second purpose.
Post experiment/post-follow-up meeting
At the post experiment/post-follow-up meeting, which takes place immediately or shortly after the end of the experiment phase, the researcher will congratulate the family on their completion of the study. The children will each receive a colorful diploma signed by the member of the research staff in charge of this family. The families will receive a 500 DKK reimbursement for the time they put into completing the study (independent of group allocation).
Theoretical underpinning of intervention
Bandura's social cognitive theory [23] serves as the theoretical framework of the intervention. The reciprocally determined and causal relationship between an individual's environment, his or her personal factors and behavior, as proposed by Albert Bandura [23], serves as a theoretical underpinning of the core components outlined earlier. Specifically, we target personal factors and induce specific changes to the household environment, to encourage the families to change, i.e. decrease the level of their screen time. Conversely, we expect that changes in screen media use may also influence personal factors, such as an individual's biology, as defined by Bandura [23], e.g. sleep quality, mood, and mental stress.
We specifically target the families at the personal level through several means. At an early stage in the trial, we ask the families to discuss and find solutions to issues they may meet during a 2-week screen media use reduction. Although the researchers do not get involved in the details, we encourage planning and goal-setting [24]. The study is emphasized as family-based and therefore we encourage a change in the overall household culture relating to screen media use. We suggest that parents act as models of this new behavior, which might make it easier for the children to adopt the behavior, as well [25].
Meeting only one research staff member might increase the level of commitment to the intervention protocol, via increased self-regulation [26] in terms of decreasing screen media use. This may arguably be because one knows one must face the same member of the research team several times during a short time period.
During the 2-week restriction period we make specific changes to the household environment; we install monitoring systems on the household televisions and computers, as well as on smartphones and tablets. We also place three 'intervention reminders', i.e. environmental cues. The goal of all these environmental changes is to emphasize self-regulatory processes [26] necessary to change and maintain the screen-time reduction. The mid-experiment phone call is included for a similar purpose.
The goal of the reimbursement following completion of the study is to have an incentive structure, an extrinsic reward, i.e. an reward coming from something exterior to the person [27]. By anticipating a reward, this may facilitate the self-regulatory processes [26] required to be compliant to the SCREENS trial protocols.
Below is a table summarizing the elements of Social Cognitive Theory discussed above (see Table 2). The table is structured with inspiration from Table 1 in the study by Hinkley et al. [13] aiming to reduce screen media use in 2–3-year-old children.
Table 2 Bandura's Social Cognitive Theory (SCT) as a theoretical framework for the intervention
Exposure and outcome measurement protocol
As already mentioned, figure three gives an overview of the overall trial protocol, including an overview of the timing of measurements. On the first day of baseline measurement, questionnaires will be administered to the participants. The questionnaires will be sent by e-mail to an adult in the household. Then, on the first night, the first of four sleep measurements during baseline will be carried out. The first sleep measurement is meant as a "test" measurement (not included in statistical analyses and not included at follow-up), as some adjustment to the sleep measurement protocol is necessary. The three remaining consecutive sleep measurements will take place from the night of day five till day six through the night of day seven till day eight. On day one, accelerometers will be mounted onto the participants and worn throughout the given measurement period. On day four, adult participants will be instructed to mount heart rate variability monitors to their upper body at the same time accelerometers were mounted 4 days prior. These monitors will be worn for three consecutive days. Therefore, accelerometry and heart rate variability measurements will finish on the exact same time on day eight. Finally, from the morning of day five till the evening of day seven, adult participants will collect four saliva samples per day; three in the morning upon awakening and one before bedtime.
Primary and secondary outcomes and endpoints
The primary outcome is the mean between-group difference in accelerometry-derived non-sedentary time (min/day) during leisure, in children. The primary endpoint is the primary outcome at follow-up assessment (only one follow-up).
The full list of secondary and exploratory outcomes can be found at: https://clinicaltrials.gov/ct2/show/NCT04098913.
Accelerometry
Both adults and children will undergo 24-h accelerometry for seven consecutive days using two Axivity AX3 (Axivity Ltd., Newcastle upon Tyne, United Kingdom) triaxial accelerometers. The sensors are small (dimensions: 23 mm × 32.5 mm × 7.6 mm) weighing only 11 g. Acceleration is measured in three axes. Sensitivity will be set to +/− 8 g and sampling frequency to 50 Hz.
The accelerometers are worn at two anatomical locations; one is fixated to the body in a pocket attached to a belt worn around the waist, where the sensor is placed such that it is on the right hip facing away from the right of the body. A second belt is worn around the right thigh midway between the hip and the knee, where an accelerometer is placed in a pocket on this belt facing away from the body. The devices will be worn for 1 week (seven consecutive days) at baseline and at follow-up, which has been suggested as the number of days required to estimate habitual physical activity [28].
Time spent in distinct activity types (sitting, moving, standing, biking, stair climbing, running, walking, and lying down) are determined from the acceleration measured with the thigh worn device using the method proposed by Skotte et al. (2014) [29] using 1-s epochs. In this study, the method was validated with adults in a standardized field test and demonstrated a sensitivity > 95% and specificity > 99% for all activities. Also, during almost 6 days of measurement in free-living, sensitivity and specificity were 98 and 93%, respectively, for classification of sitting time [29]. Children and adolescent specific decision thresholds for the method were developed using an internally conducted study (publication in preparation). The results indicate high sensitivity and specificity of measurement. Non-sedentary time is defined based on this method and includes all activities, including standing, other than sitting and lying. We will analyze non-sedentary time as the amount per day (total amount per 7 days divided by seven). In addition, time spent within physical activity intensity domains (sedentary, light, moderate and vigorous) will be estimated using ActiGraph counts generated with the waist worn device [30] using 10-s epochs. The cut-points defining intensity domains are determined using an internal calibration study conducted in a group of children and adults (results not published).
Non-wear periods are identified and marked as missing data by evaluating three signal features generated from acceleration in combination with temperature and predefined expected awake time (06:00 AM to 10:00 PM). Periods of no movement (acceleration below 20 mg) longer than 120 min are always identified as non-wear and shorter periods from 45 to 120 min are identified as non-wear if the average temperature is below an individually estimated non-moving temperature (NMT) threshold. Periods of 10 to 45 min with no movement are only identified as non-wear if the average temperature is below the NMT threshold and if the end of the period is within the expected awake time. Device transportation (periods of device movement when the device is not worn by the subject) is identified as non-wear if the average temperature of the period is below the NMT threshold.
A valid day (restricted to leisure time during wake hours) of measurement will be defined as a day containing no more than a total of 10 % non-wear. A valid baseline and follow-up measurement must include at least three valid weekdays and at least one valid weekend day. We will include a sensitivity analysis of the valid data where we impute for each individual non-wear time based on available data, for the same type of day and timing of non-wear (time of day).
The software OmGUI version 1.0.0.37 will be used in the set-up, download, re-sampling and conversion of the accelerometer data. The raw accelerometry data will be processed using Matlab (Mathworks Inc., Natick, Massachusetts, US) release R2019a version 9.6.0.1099231.
Sleep monitoring
Both adults and children will undergo sleep assessments using the Zmachine® Insight+ model DT-200 (General Sleep Coorporation, Firmware version 5.1.0). The device measures sleep by single channel electroencephalography (EEG) from the differential mastoid (A1–A2) EEG location on a 30-s epoch basis. The sleep apparatus is developed for use in a free-living setting for objective measurement of sleep, including measurement of sleep duration and sleep stage classification (Light Sleep (N1 & N2), Deep Sleep (Slow Wave Sleep) and Rapid-eye movement (REM) Sleep), as well as computation of sleep-specific quantities, e.g. latency to the respective sleep stages. The algorithm in Zmachine insight+ has been compared to polysomnography (PSG) in adults with and without chronic sleep issues within a laboratory setting and shown a high degree of validity for this purpose [31, 32]. The Zmachine device has also been shown to be valid in terms of classification of sleep; detecting of sleep versus awake with the Zmachine algorithm was done with a sensitivity, specificity, positive predictive value and negative predictive value of 95.5, 92.5, 98 and 84.2%, respectively against polysomnography [31]. A second Zmachine algorithm, which further differentiates sleep into specific stages, has also been evaluated; classification of sleep into light, deep and REM was achieved with sensitivities of 84, 74 and 72% and with positive predictive values of 85, 78 and 73%, respectively against polysomnography [32].
Three electrodes (Ambu A/S, type: N-00-S/25) are mounted on the mastoids (signal) and the back of the neck (ground). Thirty minutes before the subjects plan to go to bed to sleep, the skin areas are cleansed with an alcohol swab and then electrodes are attached to the skin. An EEG-cable connects the three electrodes to the Zmachine device, whereafter a sensor check is performed to detect whether one or more electrodes are not mounted correctly. If there are sensor problems, these are solved swiftly by a simple change of said electrodes.
Because the Zmachine device has not been developed for children directly and because some adults may tend to twist and turn during sleep, we developed custom-made pockets, which allow for fixation of the EEG-cable and Zmachine device itself to the accelerometer belt at the hip. This fixation assures e.g. that cables will not wrap around the child's neck during sleep. We also recommend these pockets for adults whose sleep includes much twisting and turning.
Mental stress: heart rate variability, cortisol awakening response and diurnal cortisol
In adult participants, physiological stress will be assessed by measuring three components associated with human stress; the beat-to-beat interval variability (ms) of the heart [33], the saliva cortisol awakening response (a unique feature of cortisol circadian rhythm) [34] and 24-h diurnal saliva cortisol [35]. As habitual excessive screen time may negatively impact both main stress pathways – the sympathetic adrenal medullary (SAM) axis and the hypothalamic pituitary adrenal cortex (HPA) axis [36] – we include multiple measurements of physiological stress.
We will collect 24-h measurements of Heart Rate Variability (HRV) for three consecutive days using the Firstbeat Bodyguard 2 HRV measurement device. The device is non-invasive and allows ambulatory continuous recording of R–R heartbeat intervals in a free-living setting. In addition, after merging and aligning epoch-by-epoch data on intensity and type of physical activity from accelerometry with R-R beat intervals, HRV activity arising from physical exertion will be delineated from non-physical activity associated HRV activity. The latter is proposed to be reflective of states of mental stress. A recent systematic review and meta-analysis provide evidence for construct validity of HRV as an objective measure to quantify psychological stress [33].
Adults will mount the Firstbeat Bodyguard 2 device to the chest by electrodes designed for long-term measurements (Ambu A/S, type: L-00-S/25); one electrode will be attached to the right side of the body, immediately below to collarbone. The second electrode will be attached to the left side of the body, at the level of the rib cage. The Firstbeat device will be attached to the electrode below the collarbone and from the device a cable will connect to the electrode on the rib cage.
Raw R-R heartbeat intervals will be processed using Kubios HRV Premium 3.0.2. including algorithms for beat detection and artifact correction for each data file and subsequent calculation of HRV time-, −frequency and non-linear domain summary data [37, 38].
Salivary cortisol awakening response
A distinct component of diurnal cortisol rhythm in healthy humans is the morning rise in cortisol, which is a steep rise in cortisol during the 45 min following awakening. When the pattern of morning cortisol rise deviates from what is normally observed, it is argued that this is reflective of malfunction in neuroendocrine systems [39].
As recommended as a minimum for measurement of the cortisol awakening response, adult participants will collect saliva samples immediately upon awakening, 30 min and 45 min following awakening [39]. The saliva samples will be collected using Salivette®, code blue (Sarstedt), which contains a swab that absorbs saliva during chewing. Immediately at awakening, the participants will deliver the first saliva sample. Participants are instructed to not remove the electrodes and sleep apparatus (described earlier) until after the first saliva sample has been collected, such that awakening time can be assessed. Also, at awakening, the participants will start a dual timer (S. Brannan & Sons Ltd., England), which is set to count down from 30 and 45 min, simultaneously. When the timers reach 0, a distinct alarm tone rings notifying the participants that they must collect a second and third saliva sample, respectively. According to expert consensus it is recommended that caffeinated and sugared drinks, as well as food/breakfast is not consumed over the measurement period in the morning. Although brushing one's teeth during cortisol saliva sampling is allowed [39], we have chosen a more conservative approach where we disallow this 10–15 min before each saliva sample. After each sample is delivered, the participants must place the sample in their household freezer.
In a checklist (described later) participants must report the exact time at which a saliva sample has been delivered. Also, the participants are handed a series of pairs of barcoded stickers, which contain a unique identifier for the person and the sample. After each sample, one sticker must be put in the personal checklist in a section indicating which saliva sample has be taken (day and time), whereas the other is put onto the Salivette containing the sample. In the checklist, also time of saliva sampling is reported by the subject.
At the final meeting, the samples will be transported in a freezer box to the Department of Sports Science and Clinical Biomechanics at the University of Southern Denmark. Here, they will be stored at − 20 degrees Celsius. When enough samples have been transported to the Department (no more than 500 samples), they will be transported in a freezer box from the University to the Clinical Biochemical Department at Slagelse Hospital (~ 70 km drive) for analysis for cortisol and cortisone content using the LC-MS method (Phenomenex Inc., USA, application 20,655). Before analysis, the samples are stored in the laboratory freezer at − 80 degrees Celcius. The laboratory has external quality control of their chemical analyses of salivary cortisol and cortisol by UK-NEQAS, England and are ranked satisfactory. The research staff will have continued dialogue with biochemists at the laboratory to assure that protocols for correct storage and analysis are met. Following laboratory analysis, the samples will be destroyed.
Salivary diurnal cortisol
Diurnal cortisol is composed of multiple measurable components of cortisol dynamics; first, as described above is the cortisol awakening response, and secondly, the decline of cortisol across the span of the day, i.e. the diurnal cortisol slope. In line with previous research [35], we will assess diurnal cortisol by adding a single saliva sample immediately before bedtime beyond the three saliva samples in the morning hours. By measuring cortisol at four points during daytime, we will estimate diurnal cortisol levels, to assess the state of the HPA axis output [35] during the span of the day.
Overview of questionnaires
In addition to the physical measurements we will also administer questionnaires at baseline and follow-up, to assess the effect of the screen media use restriction on different psychological and physical constructs. The adults will answer the following questionnaires; WHO-5, Profile of Moods Scale [40] and two components of the Leeds sleep evaluation questionnaire [41]. On behalf of the participating children, the adults will answer the Strength and Difficulties Questionnaire [42].
WHO-5
To assess the adult participants' subjective sense of well-being we will administer the WHO-5 [43]. The tool consists of five simple and non-intrusive questions regarding an individual's psychological well-being. The questionnaire has been translated into more than 30 languages and has been used in countless studies around the globe. WHO-5 has been demonstrated to be applicable in a wide array studies; it has been valuable tool in clinical studies, as well as a screening tool for depression. The clinometric validity of WHO-5 has been shown to be high [43].
Leeds sleep evaluation questionnaire
To assess subjective sense of restlessness during sleep and degree of interrupted sleep, we will use two of the 10 items of the Leeds Sleep Evaluation Questionnaire. The questionnaire items are answered on a visual analogue scale. The tool in its entirety has shown to be valid in some populations, both in healthy and in diseased individuals [41].
Profile of mood states
The adults' mood state will be assessed using the original 64-item Profile of Moods States questionnaire, which is comprised of six scales relating to 6 states or emotions. The questionnaire has been used in psychological research for decades and has shown high internal consistency and construct validity for its mood scales [40].
Strengths and difficulties questionnaire
Data on Children's wellbeing and mental health will be collected using the Danish parent reported Strength and Difficulties Questionnaire (SDQ) including the one-month follow-up versions, covering the age band 2–4, 5–6, 4–10 and 11–17 years adapted to kindergarten or school conditions. Parent reported SDQ has in general provided good psychometric properties [44, 45], also reported for Danish children [42]. Besides the total SDQ score and daily function, the broader internalising, externalising and prosocial subscales [46, 47] will be of primary interest in this low-risk population study.
Objective monitoring of screen time: television, smartphone, tablet and computer activity
As shown in Fig. 3, throughout the entire course of the SCREENS trial we will objectively measure screen time on the participating families' devices, including portable devices, such as smartphones and tablets, as well as televisions and computers.
SDU device tracker: monitoring of smartphone, tablet and personal computer activity
We have developed a non-commercial monitoring device (SDU Device Tracker) for monitoring screen time activity on smartphones, tablets and personal computers. For tablets and smartphones, the software is installed as an application using a custom-made QR-code. The app can track screen time activity, as well as the amount of times a device has been picked-up (opened). The app registers data on a second-to-second basis, thus allowing for detailed analysis of timing and amount of screen media use. We have developed a Python-based (Python 3.7.3) data reduction software, which perform data quality control and can summarize individual app data. Based on the processing of the data diagnostics of the application activity can be made, including how many times the app has been closed (force quitted) by the user and for how long, thus quantifying the amount of time where screen media use has not been recorded. Data from the application is continuously sent encrypted to a secure server at the University of Southern Denmark. The application is currently compatible with iOS and Android systems (smartphones and tablets) and OS X and Windows (PCs).
Although SDU Device Tracker will be installed on most devices, it may only be installed on devices where there is a possibility that said device will be used by a family member. Families may own several devices, including devices that they no longer use, e.g. devices stored in their basement. Therefore, due to time constraints during scheduled meetings, we may not install SDU Device Tracker on unused devices. However, we will take a conservative approach and install the application on most devices, where there is a small chance that said device might be used. For devices used for work, we may not be permitted to install the application.
The SDU Device Tracker applications have undergone extensive internal validation and quality control and we are currently conducting ongoing formal validation of the apps.
TV-monitoring device
A small electronic device was developed by engineers at our department (Department of Sports Science and Clinical Biomechanics) to assess the amount of TV usage. Detecting TV usage is assessed by measuring the power cord current using a hall sensor. A hall sensor is a current sensor generating a voltage proportional to the current flowing through the sensor. The voltage is converted using an analog to digital converter with the installed micro-controller, in 1-min epocs. TV usage is detected by using a simple threshold, which is set substantial higher than the stand-by current (equal to or above the signal strength half-way between the minimum and maximal). By using this threshold, we also consider the fact that some TVs exhibit short burst of electrical activity, often during nocturnal hours. These signals, which are multiples above standby signal strength, but also multiples below television activity (which is often a 5-fold signal stronger than stand-by mode). The signals are characteristic of TV, which download information for the TV-users, when the TV is shut off. By using the defined cut-off above, TV-time is easily distinguished from these shorts burst of electrical activity not associated with TV usage.
At baseline, we hand out a television usage checklist, which we place by each television with more than one user. Each checklist will contain a page for each day of the baseline period. Here, each family member must mark whether they have used the television in 15-min intervals of the day (03:00–03:00). It is possible for family members to mark the same time slots. The marked time slots will then be cross-referenced with the objective measures of tv-usage to categorize individual TV usage on shared TVs.
We may refrain from installing a TV-monitor on televisions that may never be used, e.g. TVs tucked away in the basement, which may not even be plugged in.
Sheets and personalized checklists
During the intervention, the subjects must fill out small sheets. First, every member of the family must report the amount and type of entertainment-based screen time that they have used, of the up to three hours per week, which is permitted. Each family member is handed their own personal sheet to fill in these details. Secondly, the subjects are asked to report, on a separate paper, if they have exceeded the limit for entertainment-based screen media use. In this sheet, any screen media use of 'passive' participants, is also reported. Third, for those participants for whom screen time is necessary during everyday life, i.e. during school- or work life, a third sheet is handed out. On this sheet this necessary screen time during the 2-week experiment period must be noted.
Personalized checklists
Each participant will receive a detailed but concise personal checklist, which chronicles the content of the baseline and follow-up days in terms of the respective measurement protocols. Each checklist contains the dates and times which are relevant to the physical measurements, including e.g. when devices should be worn, when existing electrodes must be switched to new ones and when devices must be removed. As described earlier, adult participants must also register the number and timing of the saliva samples that they have collected. Participants must also register any irregularities pertaining to the physical measurements, e.g. issues with timing and reasons for non-wear. Lastly, in the checklist participants must also when they wake up, when they go to work or school (and if they are sick), when they leave work or school, as well as when they go to bed. This information will also be used to time annotate the time series data.
To estimate the degree to which families are compliant to the intervention two calculations will be made for each family: 1) a calculation of a threshold for compliance for entertainment-based screen time and 2) an estimate of total entertainment-based screen time during the intervention. The threshold for compliance will equal the maximum entertainment-based screen time permitted. The threshold is calculated by multiplying the number of participating family members by 3 h/week of permitted screen time and multiplying this by 2 weeks, i.e. the duration of the experiment.
$$ \mathbf{Threshold}\ \mathbf{for}\ \mathbf{family}-\mathbf{wise}\ \mathbf{compliance}=\mathrm{n}\ \mathrm{participating}\ \mathrm{family}\ \mathrm{members}\times 3\kern0.28em \mathrm{hours}/\mathrm{week}\times 2\kern0.28em \mathrm{weeks} $$
In this calculation it is assumed that each family member uses their permitted screen time by themselves. As family members may use their entertainment-based screen time together, we expect that a compliant family will have a level of entertainment-based screen time lower than the compliance threshold.
To estimate the total amount of entertainment-based screen time, multiple sources of data will be used. First, objectively measured TV-, smartphone-, tablet-, ipad- and PC-time will be summarized. Then, entertainment-based screen time by self-report, on devices whose activity we do not measure, e.g. TV-viewing on a TV outside the household, will be added to the summary of objectively assessed data. Next, we will subtract all self-reported necessary screen time on devices that we measure, which otherwise would be considered entertainment based. Lastly, we will subtract all screen time (entertainment-based and necessary) by 'passive' participants.
$$ \mathbf{Family}-\mathbf{wise}\ \mathbf{level}\ \mathbf{of}\ \mathbf{entertainment}-\mathbf{based}\ \mathbf{screen}\ \mathbf{time}=\mathrm{total}\ \mathrm{objective}\mathrm{ly}\ \mathrm{measured}\ \mathrm{screen}\ \mathrm{time}\ \left(\min /2\ \mathrm{weeks}\right)+\mathrm{self}-\mathrm{reported}\kern0.6em \mathrm{entertainment}-\mathrm{based}\ \mathrm{screen}\ \mathrm{time}\ \mathrm{beyond}\ \mathrm{objective}\ \mathrm{measures}\ \left(\min /2\ \mathrm{weeks}\right)-\mathrm{self}-\mathrm{reported}\ \mathrm{necessary}\ \mathrm{screen}\ \mathrm{time}\ \left(\min /2\ \mathrm{weeks}\right)\ \mathrm{in}\ \mathrm{objective}\ \mathrm{measures}-\mathrm{self}-\mathrm{reported}\ \mathrm{screen}\ \mathrm{time}\ \mathrm{in}\ {\mathrm{objective}\ \mathrm{measures}}_{\mathrm{passive}\ \mathrm{participants}}\ \left(\min /2\ \mathrm{weeks}\right) $$
To compute degree of compliance for a family, we will calculate the proportion of entertainment-based screen time of the compliance threshold, where a number below or equal to 1 or 100% will indicate that the family have been compliant to the intervention protocol.
We will also attempt to calculate individual level compliance using all the available data sources. However, this computation may be complicated by the fact that it may not always be clear who the user of a shared device is. For this reason, it may be difficult to parse all screen media use in a household onto distinct users. Also, although we will attempt to describe individual usage on shared televisions based on television usage checklists filled out during baseline (described earlier), we cannot be certain that tv usage profile during baseline can be generalized to the remainder of the study period.
Questionnaire: feasibility of and compliance to the intervention
At the end of the intervention, each adult must complete a questionnaire concerning; 1) the degree to which the adult was compliant to the intervention protocol, 2) the challenges and opportunities the family faced during the 2-week screen time reduction period and 3) the changes that were made during everyday life during the period of the intervention. Questions relating to the latter will mainly be concerned with what activities that were introduced or emphasized, when screen time was no longer possible to the same extent. One of the adults must also complete a similar questionnaire on behalf of the children.
Background data
Basic background information is collected in the survey, including educational attainment according to the International Standard Classification of Education, work experience and current employment, as well as household constellation. Data on ethnicity (ethnic origin) is gathered from the Danish Civil Registry and is available for the selected children and selected adults, as well as other parents of the child not registered at the same address. From the survey and initial meetings with the families, data on screen time habits and culture within the family household regarding screen time is collected. Beyond the amount and timing of screen time, data on the number of devices in the household, the age when the child had his/her own smartphone, rules regarding screen media use and screen media culture around family meals, among others, is also collected. At baseline, each adult on behalf of themselves and their child must report their current bodyweight (kg) and height (cm) to compute body mass index. Also, data on gender and age will be collected for all participants.
Survey data will be collected and stored using REDCap (a secure application for building and managing online surveys and databases), which is managed by the researcher service organization OPEN in the Region of Southern Denmark. This mode of storage is in accordance with the General Data Protection Regulation for data handling. Exposure and outcome data collected during the trial will be stored in the families' households until completion of the study, after which it will be transported back to the University of Southern Denmark. At the University, data will be extracted from the measurement devices and stored in safe folders on the University servers.
Currently MGR, JP, LGO, PLK, JCB, AG and our pre-graduate research scholar and two student helpers have been granted access to the data by the Danish Data Protection Agency. The data will be stored in its raw form and no statistical investigation will be conducted, before the data collection is complete.
A detailed description of data management and our a priori planned statistical analyses can be found at (see 'Study documents'): https://clinicaltrials.gov/ct2/show/NCT04098913?cond=screens&draw=2&rank=1.
This paper describes in detail the protocol for a randomized controlled trial that aims to investigate the short-term efficacy of limiting leisure screen media use on objectively assessed habitual physical activity, sleep, and physiological stress in parents and their 4–14-year old children, in free-living.
The SCREENS trial addresses a research gap and limitations of observational and experimental studies of the effects of screen media use in children and adults. This includes possible lack of generalizability of experimental laboratory findings to free-living conditions, that may limit findings from acute effect cross-over studies of exposure to digital screen devices during evening hours and sleep [18, 19, 48, 49], and of acute effects of availability of screen devices on physical activity behavior [50]. The study also attempts to overcome limitations from experimental studies conducted in free-living that have not documented compliance to screen media during intervention precisely, or have not minimized noncompliance [12, 14]. In our study we will attempt to objectively monitor compliance by installing newly developed monitoring systems on screen time devices in the participating families' households. This will markedly decrease the reliance on memory and influence of social desirability bias in gathering compliance data on screen media use. Also, we have taken important steps in the enrollment phase (e.g. exclusion based on parents' perceived resources for behavior change) to enhance the families' compliance to the screen media restriction protocol and during the intervention by closely following the families and optimizing the participants' experience. In an efficacy trial conducted in free-living condition as the present, these aspects are essential to make valid causal inferences from the data. Also, importantly, much of the available research on screen time and health is, for obvious historical reasons, not based on screen media use in its modern form, but rather television usage, and is also to a large extent based on self-report [7].
Our goal is that results from statistical analyses will move the field forward towards answering causal questions which currently remain unanswered. Furthermore, registration of screen time activity is collected in no higher than 1-min intervals, and opportunities for investigating with high accuracy and detail, not only how amount but also timing of screen time exposure is related to parameters of health, will be rendered possible in secondary exploratory analyses. By merging objective data on screen time consumption, with high-quality 24-h measurements on highly detailed accelerometry and heart-rate-variability, as well as detailed EEG data during nocturnal hours, we will generate a data resource from which detailed investigations are possible. Also, by conducting the study in a familiar setting, i.e. in the families' home during everyday life and not in a laboratory, the ecological validity of the results will be high.
We hope that the findings from this study, will help advance the research frontier within the field of screen time and health. Importantly, it is our vision that the results from the SCREENS trial, alongside results from other new studies in the field, will provide a solid foundation for policy makers to make political decisions regarding usage of screen-based media, e.g. formulate concrete national guidelines or recommendations for screen media use in children, adolescents and adults. Also, as we expect that our findings will be of public interest, the results will be disseminated to practitioners in relevant fields, including practitioners in municipal settings as well as at institutions in the private sector, whose work revolves around childhood health. Also, results will be presented at scientific conferences nationally and internationally and published in peer-reviewed journals (regardless of the direction of the findings). Our work may inspire our scientific colleagues in their future work, including replication of our study in other national and cultural settings, as well as developing intervention studies, whose aims are to facilitate long-term behavior change regarding screen time for specific populations who consume screen media in excessive amounts.
Upon completion of the data collection process and when structured datasets have been developed, the data may be made available for use beyond the research staff involved. Availability of the data and the material, including coding and material, e.g. consent forms and checklist templates, used in the data collection process, can be made available upon application to the head of research and project leader of the SCREENS trial Professor Anders Grøntved ([email protected]), as well following upon approval from the Danish Data Protection Agency, if necessary.
HRV:
Odense patient data explorative network
Mullan K. Technology and children's screen-based activities in the UK: The story of the millennium so far. Child Indicators Research. 2018;11(6):1781–800.
Yang L, Cao C, Kantor ED, Nguyen LH, Zheng X, Park Y, et al. Trends in sedentary behavior among the US population, 2001-2016. Trends in sedentary BehaviorTrends in sedentary behavior. JAMA. 2019;321(16):1587–97.
Folkesundhed SIf. Skolebørnsundersøgelsen 2018 - Helbred, trivsel og sundhedsadfærd blandt 11-, 13- og 15-årige skoleelever i Danmark; 2019.
Danmarks Statistik. It-anvendelse i befolkningen 2018; 2018.
Reid Chassiakos YL, Radesky J, Christakis D, Moreno MA, Cross C. Children and adolescents and digital media. Pediatrics. 2016;138(5):e20162593.
Hale L, Guan S. Screen time and sleep among school-aged children and adolescents: a systematic literature review. Sleep Med Rev. 2015;21:50–8.
Stiglic N, Viner RM. Effects of screentime on the health and well-being of children and adolescents: a systematic review of reviews. BMJ Open. 2019;9(1):e023191.
Gomes TN, Katzmarzyk PT, Hedeker D, Fogelholm M, Standage M, Onywera V, et al. Correlates of compliance with recommended levels of physical activity in children. Sci Rep. 2017;7(1):16507.
Telama R, Yang X, Leskinen E, Kankaanpaa A, Hirvensalo M, Tammelin T, et al. Tracking of physical activity from early childhood through youth into adulthood. Med Sci Sports Exerc. 2014;46(5):955–62.
Todd MK, Reis-Bergan MJ, Sidman CL, Flohr JA, Jameson-Walker K, Spicer-Bartolau T, et al. Effect of a family-based intervention on electronic media use and body composition among boys aged 8--11 years: a pilot study. J Child Health Care. 2008;12(4):344–58.
Ni Mhurchu C, Roberts V, Maddison R, Dorey E, Jiang Y, Jull A, et al. Effect of electronic time monitors on children's television watching: pilot trial of a home-based intervention. Prev Med. 2009;49(5):413–7.
Maddison R, Marsh S, Foley L, Epstein LH, Olds T, Dewes O, et al. Screen-time weight-loss intervention targeting children at home (SWITCH): a randomized controlled trial. Int J Behav Nutr Phys Act. 2014;11:111.
Hinkley T, Cliff DP, Okely AD. Reducing electronic media use in 2-3 year-old children: feasibility and efficacy of the family@play pilot randomised controlled trial. BMC Public Health. 2015;15:779.
Ford BS, McDonald TE, Owens AS, Robinson TN. Primary care interventions to reduce television viewing in African-American children. Am J Prev Med. 2002;22(2):106–9.
Epstein LH, Roemmich JN, Robinson JL, Paluch RA, Winiewicz DD, Fuerch JH, et al. A randomized trial of the effects of reducing television viewing and computer use on body mass index in young children. Arch Pediatr Adolesc Med. 2008;162(3):239–45.
Babic MJ, Smith JJ, Morgan PJ, Lonsdale C, Plotnikoff RC, Eather N, et al. Intervention to reduce recreational screen-time in adolescents: outcomes and mediators from the 'Switch-off 4 healthy Minds' (S4HM) cluster randomized controlled trial. Prev Med. 2016;91:50–7.
Mendoza JA, Baranowski T, Jaramillo S, Fesinmeyer MD, Haaland W, Thompson D, et al. Fit 5 kids TV reduction program for Latino preschoolers: a cluster randomized controlled trial. Am J Prev Med. 2016;50(5):584–92.
Gronli J, Byrkjedal IK, Bjorvatn B, Nodtvedt O, Hamre B, Pallesen S. Reading from an iPad or from a book in bed: the impact on human sleep. A randomized controlled crossover trial. Sleep Med. 2016;21:86–92.
Chang AM, Aeschbach D, Duffy JF, Czeisler CA. Evening use of light-emitting eReaders negatively affects sleep, circadian timing, and next-morning alertness. Proc Natl Acad Sci U S A. 2015;112(4):1232–7.
Bues M, Pross A, Stefani O, Frey S, Anders D, Späti J, et al. LED-backlit computer screens influence our biological clock and keep us more awake. J Soc Inf Disp. 2012;20(5):266–72.
Thabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, et al. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010;10:1.
Belcher BR, Berrigan D, Papachristopoulou A, Brady SM, Bernstein SB, Brychta RJ, et al. Effects of interrupting Children's sedentary behaviors with activity on metabolic function: a randomized trial. J Clin Endocrinol Metab. 2015;100(10):3735–43.
Bandura A. Social cognitive theory. Ann Child Dev. 1989;6:1–60.
Bandura A. Social cognitive theory: an agentic perspective. Annu Rev Psychol. 2001;52:1–26.
Allen BP. Thinking ahead and learning mastery of one's circumstances: Albert Bandura. In: Personality Theories: Development, Growth, and Diversity. New York: Routledge; 2015. p. 305–7.
Bandura A. Social cognitive theory of self-regulation. Organ Behav Hum Decis Process. 1991;50(2):248–87.
Bandura A, Walters RH. Social learning theory. Englewood Cliffs: Prentice-hall; 1977.
Jaeschke L, Steinbrecher A, Jeran S, Konigorski S, Pischon T. Variability and reliability study of overall physical activity and activity intensity levels using 24 h-accelerometry-assessed data. BMC Public Health. 2018;18(1):530.
Skotte J, Korshoj M, Kristiansen J, Hanisch C, Holtermann A. Detection of physical activity types using triaxial accelerometers. J Phys Act Health. 2014;11(1):76–84.
Brond JC, Andersen LB, Arvidsson D. Generating ActiGraph counts from raw acceleration recorded by an alternative monitor. Med Sci Sports Exerc. 2017;49(11):2351–60.
Kaplan RF, Wang Y, Loparo KA, Kelly MR, Bootzin RR. Performance evaluation of an automated single-channel sleep-wake detection algorithm. Nat Sci Sleep. 2014;6:113–22.
Wang Y, Loparo KA, Kelly MR, Kaplan RF. Evaluation of an automated single-channel sleep staging algorithm. Na Sci Sleep. 2015;7:101–11.
Kim HG, Cheon EJ, Bai DS, Lee YH, Koo BH. Stress and heart rate variability: a meta-analysis and review of the literature. Psychiatry Investig. 2018;15(3):235–45.
Clow A, Thorn L, Evans P, Hucklebridge F. The awakening cortisol response: methodological issues and significance. Stress. 2004;7(1):29–37.
Ryan R, Booth S, Spathis A, Mollart S, Clow A. Use of salivary diurnal cortisol as an outcome measure in randomised controlled trials: a systematic review. Ann Behav Med. 2016;50(2):210–36.
McEwen BS. Stress, adaptation, and disease. Allostasis and allostatic load. Ann N Y Acad Sci. 1998;840:33–44.
Tarvainen MP, Niskanen JP, Lipponen JA, Ranta-Aho PO, Karjalainen PA. Kubios HRV--heart rate variability analysis software. Comput Methods Prog Biomed. 2014;113(1):210–20.
Tarvainen MP, Ranta-Aho PO, Karjalainen PA. An advanced detrending method with application to HRV analysis. IEEE Trans Biomed Eng. 2002;49(2):172–5.
Stalder T, Kirschbaum C, Kudielka BM, Adam EK, Pruessner JC, Wust S, et al. Assessment of the cortisol awakening response: expert consensus guidelines. Psychoneuroendocrinology. 2016;63:414–32.
Spielberger CD. Review of profile of mood states. Prof Psychol. 1972;3(4):387–8.
Manzar MD, Salahuddin M, Maru TT, Alghadir A, Anwer S, Bahammam AS, et al. Validation of the adapted Leeds sleep evaluation questionnaire in Ethiopian university students. Health Qual Life Outcomes. 2018;16(1):49.
Niclasen J, Teasdale TW, Andersen AM, Skovgaard AM, Elberling H, Obel C. Psychometric properties of the Danish strength and difficulties questionnaire: the SDQ assessed for more than 70,000 raters in four different cohorts. PLoS One. 2012;7(2):e32025.
Topp CW, Ostergaard SD, Sondergaard S, Bech P. The WHO-5 well-being index: a systematic review of the literature. Psychother Psychosom. 2015;84(3):167–76.
Stone LL, Otten R, Engels RC, Vermulst AA, Janssens JM. Psychometric properties of the parent and teacher versions of the strengths and difficulties questionnaire for 4- to 12-year-olds: a review. Clin Child Fam Psychol Rev. 2010;13(3):254–74.
Theunissen MH, Vogels AG, de Wolff MS, Reijneveld SA. Characteristics of the strengths and difficulties questionnaire in preschool children. Pediatrics. 2013;131(2):e446–54.
Goodman A, Lamping DL, Ploubidis GB. When to use broader internalising and externalising subscales instead of the hypothesised five subscales on the strengths and difficulties questionnaire (SDQ): data from British parents, teachers and children. J Abnorm Child Psychol. 2010;38(8):1179–91.
Niclasen J, Skovgaard AM, Andersen AM, Somhovd MJ, Obel C. A confirmatory approach to examining the factor structure of the strengths and difficulties questionnaire (SDQ): a large scale cohort study. J Abnorm Child Psychol. 2013;41(3):355–65.
Driller M, Uiga L. The influence of night-time electronic device use on subsequent sleep and propensity to be physically active the following day. Chronobiol Int. 2019;36(5):717–24.
Rangtell FH, Ekstrand E, Rapp L, Lagermalm A, Liethof L, Bucaro MO, et al. Two hours of evening reading on a self-luminous tablet vs. reading a physical book does not alter sleep after daytime bright light exposure. Sleep Med. 2016;23:111–8.
Kobak MS, Lepp A, Rebold MJ, Faulkner H, Martin S, Barkley JE. The effect of the presence of an internet-connected Mobile tablet computer on physical activity behavior in children. Pediatr Exerc Sci. 2018;30(1):150–6.
International Committee of Medical Journal Editors. Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals; 2019.
We would like to thank the twelve families who participated in the pilot study, whose participation helps finalize the development of the SCREENS trial and thus the protocol included in this paper.
We would also like to thank the researcher service organization OPEN for their work now and in the future, which includes sending our electronic survey's and cover letters, as well as managing the storage of data from the SCREENS trial under conditions which comply with the General Data Protection Regulation.
We would also like to thank Henrik Olsen and Kristian Jacobsen of the engineer staff at the Department of Sports Science and Clinical Biomechanics, for the many hours they spent creating and testing the tv-monitors. We also want to thank software engineer Kasper Dissing Bargsteen for the development and testing of the tracking software for windows and mac personal computers.
A European Research Council Starting Grant (no. 716657) was the source of funding for the project.
The funders had no role in the design of the study design, nor did they play a role in the collection, management, analysis, and interpretation of the data from the study. Nor did they have a role in the writing of the study protocol as well as in the decision to submit the report for publication in BMC Public Health.
Department of Sports Science and Clinical Biomechanics, Research Unit for Exercise Epidemiology, Centre of Research in Childhood Health, University of Southern Denmark, 5230, Odense, Denmark
Martin Gillies Banke Rasmussen, Jesper Pedersen, Line Grønholt Olesen, Søren Brage, Heidi Klakk, Peter Lund Kristensen, Jan Christian Brønd & Anders Grøntved
MRC Epidemiology Unit, Cambridge School of Clinical Medicine, Institute of Metabolic Science, University of Cambridge, Box 285, Cambridge Biomedical Campus, Cambridge, CB2 0QQ, UK
Søren Brage
Department of Physiotherapy and Research Center for Health Science, University College Lillebælt, Odense, Denmark
Heidi Klakk
Martin Gillies Banke Rasmussen
Jesper Pedersen
Line Grønholt Olesen
Peter Lund Kristensen
Jan Christian Brønd
Anders Grøntved
Initial development of study and acquisition of funding: AG PLK JBC LGO SB. Further development of study design and methods: AG MGR JP LGO PLK HK. Wrote the first draft for the manuscript: MGR. Contributed to the further development and writing of the manuscript: All authors. Approved the final version: All authors. All authors contributed to the current work in a manner which complies with The International Committee of Medical Journal Editors [51].
Correspondence to Martin Gillies Banke Rasmussen or Anders Grøntved.
The SCREENS trial was approved by the Scientific Committee of Southern Denmark (Project-ID: S-20170213 CSF). For the survey, opening the survey window includes consent. Furthermore, the survey concerns an adult and a child, of which the adult has full custody of said child.
Written consent will be gathered before any participation in the SCREENS randomized trial is possible. Although parents will consent verbally and in writing on behalf of their children, signs of child dissent will be contraindicatory to participation in the SCREENS trial. The research staff in charge of each family will gather the filled-out consent forms.
Any amendments, e.g. changes to inclusion criteria or changes to the intervention protocol, will be submitted to the Scientific Committee of Southern Denmark. No such changes will be implemented before written approval of the amendments has been received by Anders Grøntved (Head of research). The amendments will also be added to the clinicaltrial.gov registration and must also be reported to the European Research Council (major funder).
SPIRIT_checklist.doc'. The additional file includes the SPIRIT checklist for study protocols of clinical trials, regarding the SCREENS trial. An indication of where we have addressed each item is noted in the checklist.
Rasmussen, M.G.B., Pedersen, J., Olesen, L.G. et al. Short-term efficacy of reducing screen media use on physical activity, sleep, and physiological stress in families with children aged 4–14: study protocol for the SCREENS randomized controlled trial. BMC Public Health 20, 380 (2020). https://doi.org/10.1186/s12889-020-8458-6
|
CommonCrawl
|
Last edited by Kazratilar
10 edition of Cohomology of sheaves found in the catalog.
Cohomology of sheaves
by Birger Iversen
Published 1986 by Springer-Verlag in Berlin, New York .
Sheaf theory.,
Homology theory.
Bibliography: p. [461]-464.
Statement Birger Iversen.
Series Universitext
LC Classifications QA612.36 .I93 1986
Pagination xi, 464 p. :
Manifolds, Sheaves, and Cohomology / This book explains techniques that are essential in almost all branches of modern geometry such as algebraic geometry, complex geometry, or non-archimedian geometry. It uses the most accessible case, real and complex manifolds, as a model. The author especially emphasizes the difference between loca. Etale cohomology is an important branch in arithmetic geometry. This book covers the main materials in SGA 1, SGA 4, SGA 4 1/2 and SGA 5 on etale cohomology theory, which includes decent theory, etale fundamental groups, Galois cohomology, etale cohomology, derived categories, base change theorems, duality, and l-adic cohomology.
The cohomology of a sheaf S Sh R X on a paracompact space X can b e computed a follows. Choose a soft (or flabby) resolution of S, i.e., a complex of soft sheaves S 0 d S 1Author: Liviu Nicolaescu. 1 The global section functor Let X be a topological space. Denote by ShX the category of sheaves of abelian groups defined on X and by Abgr the category of abelian groups. ShX is an abelian category and has enough injectives since every sheaf F PShX can be mapped F ãÑ ¹ xPX Fx ãÑ ¹ xPX Ix pq where the second map is the direct product of stalkwise injections Fx ãÑIx into injective.
I need a good reference book where I can learn the cohomology of sheaves through the approach of Čech cohomology. The Hartshorne's book, for example, doesn't help me a lot because he choose the "derived functors approach". Buy Manifolds, Sheaves, and Cohomology (Springer Studium Mathematik - Master) 1st ed. by Wedhorn, Torsten (ISBN: ) from Amazon's Book Store. Everyday low prices and free delivery on eligible orders/5(4).
GREEK traditional architecture
Jamaican Sunset (The Buccaneers Series #3)
SAS high-performance forecasting 2.3
Dale S. Rice.
evolution of Indian classical music, 1200-1600 A.D.
Fetal and Neonatal Physiology (2 Volume Set
Unknown reality.
Straight forward English designed to help an ordinary person to write a clear message.
NAHC Member Secrets
High school physical science as preparation for college sciences
The Christmas hirelings
Vision & principles
Cornbread Nation 1
Dynamics of condition parameters and organ measurements in pheasants
Civil government for Porto Rico.
Strategic plan FY 2003-FY 2008.
Lets visit Greece
Eugene District proposed resource management plan/environmental impact statement
The Last of the Badmen
Glenco guaranteed roses
Cohomology of sheaves by Birger Iversen Download PDF EPUB FB2
The fundamental concepts in the study of locally compact spaces is cohomology with compact support and a particular class of sheaves,the so-called soft sheaves. This class plays a double role as the basic vehicle for the internal theory and is the key to applications in : Springer-Verlag Berlin Heidelberg.
The most satis factory general class is that of locally compact spaces and it is the study of such spaces which occupies the central part of this text. The fundamental concepts in the study of locally compact spaces is cohomology with compact support and a particular class of sheaves,the so-called soft sheaves.
13 Cohomology of Sheaves discuss in this book, and we try to provide motivations for the introduction of the concepts and tools involved. These sections introduce topics in the same order in which they are presented in the book.
All historical references are taken from Dieudonn e [8]. This is a. The readership for this book will mostly consist of beginner to intermediate graduate students, and it may serve as the basis for a one-semester course on the cohomology of sheaves and its relation to real and complex manifolds." (Rui Miguel Saramago, zbMATH)Cited by: 4.
The fundamental concepts in the study of locally compact spaces is cohomology with compact support and a particular class of sheaves,the so-called Cohomology of sheaves book sheaves.
This class plays a double role as the basic vehicle for the internal theory and is the key to applications in : Birger Iversen. In mathematics, sheaf cohomology is the application of homological algebra to analyze the global sections of a sheaf on a topological y speaking, sheaf cohomology describes the obstructions to solving a geometric problem globally when it can be solved locally.
The central work for the study of sheaf cohomology is Grothendieck's Tôhoku paper. The general theory of sheaves is very limited and no essential result is obtainable without turn ing to particular classes of topological spaces. The most satis factory general class is that of locally compact spaces and it is the study of such spaces which occupies the central part of this This text exposes the basic features of cohomology of 5/5(1).
The readership for this book will mostly consist of beginner to intermediate graduate students, and it may serve as the basis for a one-semester course on the cohomology of sheaves and its relation to real and complex manifolds." (Rui Miguel Saramago, zbMATH Author: Torsten Wedhorn.
The readership for this book will mostly consist of beginner to intermediate graduate students, and it may serve as the basis for a one-semester course on the cohomology of sheaves and its relation to real and complex manifolds." (Rui Miguel Saramago, zbMATH)Brand: Springer Spektrum.
Singular cohomology. Singular cohomology is a powerful invariant in topology, associating a graded-commutative ring to any topological space. Every continuous map f: X → Y determines a homomorphism from the cohomology ring of Y to that of X; this puts strong restrictions on the possible maps from X to more subtle invariants such as homotopy groups, the cohomology ring tends to be.
The fundamental concepts in the study of locally compact spaces is cohomology with compact support and a particular class of sheaves, the so-called soft sheaves. This class plays a double role as the basic vehicle for the internal theory and is the key to applications in analysis.
Manifolds, Sheaves, and Cohomology. Authors (view affiliations) Torsten Wedhorn This book explains techniques that are essential in almost all branches of modern geometry such as algebraic geometry, complex geometry, or non-archimedian geometry.
and complex manifolds, as a model. The author especially emphasizes the difference between. In the present book, Ueno turns to the theory of sheaves and their cohomology.
Loosely speaking, a sheaf is a way of keeping track of local information defined on a topological space, such as the local holomorphic functions on a complex manifold or the local sections of a vector bundle. COHOMOLOGY OF SHEAVES 6 Surjective.
LetFbeanO∗ X-torsor. Considerthepresheafofsets L 1: U7−→(F(U) ×O X(U))/O∗ X (U)where the action of f ∈O∗ X (U) on (s,g) is (fs,f−1g).Then L 1 is a presheaf of O X-modules by setting (s,g) + (s 0,g) = (s,g+ (s0/s)g0) where s0/sis the local sectionfofO∗ X suchthatfs= s0,andh(s,g) = (s,hg) forhalocalsectionofO X.
In fact, comparing sheaf cohomology to de Rham cohomology and singular cohomology provides a proof of de Rham's theorem that the two cohomology theories are isomorphic. A different approach is by Čech cohomology. Čech cohomology was the first cohomology theory developed for sheaves and it is well-suited to concrete calculations.
The aim of the book is to present a precise and comprehensive introduction to the basic theory of derived functors, with an emphasis on sheaf cohomology and spectral sequences. It keeps the treatment as simple as possible, aiming at the same time to provide a number of examples, mainly from sheaf theory, and also from algebra.
Modern algebraic geometry is built upon two fundamental notions: schemes and sheaves. The theory of schemes is presented in the first part of this book (Algebraic Geometry 1: From Algebraic Varieties to Schemes, AMS,Translations of Mathematical Monographs, Volume ).
In the present book, the author turns to the theory of sheaves and their cohomology.3/5(1). I personally won't recommend Bredon's book, rather Iversen's "Cohomology of sheaves" (especially if you are interested in the topological aspects/applications of sheaf theory).
There is also Dimca's "Sheaves in topology". However I should say that the epigraph to this (very good) book is "Do not shoot the pianist", and maybe not without a reason.
Of course, a lot of the general stuff about sites, sheaves, etc., is the same either way. $\endgroup$ – Keenan Kidwell Nov 10 '11 at 4 $\begingroup$ Both his course notes and his text book are good, but they are quite different.
In fact, one can regard this functor as $\mathcal{F} \mapsto \hom_{\mathrm{sheaves}}(\ast, \mathcal{F})$ where $\ast$ is the constant sheaf with one element (the terminal object in the category of all -- not necessarily abelian -- sheaves, so sheaf cohomology can be recovered from the full category of sheaves, or the "topos:" it is a fairly.In the present book, Ueno turns to the theory of sheaves and their cohomology.
Loosely speaking, a sheaf is a way of keeping track of local information defined on a topological space, such as the local holomorphic functions on a complex manifold or the local sections of a vector bundle. To study schemes, it is useful to study the sheaves 4/5(1).Coherent sheaves; Cohomology of coherent sheaves ; Computations of some Hodge numbers ; Deformations and Hodge theory ; Analogies and conjectures ; Further details can be found at official website for the book at Springer.
Errata My thanks to Lizhen Qin for the first comment, and Sandor Kovacs for the rest.
le-jasmin-briancon.com - Cohomology of sheaves book © 2020
|
CommonCrawl
|
Ultra High Energy Cosmic Rays 2018
8-12 October 2018
Ecole Supérieure de Chimie, Paris
Europe/Paris timezone
First Circular
Second Circular
Mini workshop on Future
Lunch and more
Local Organizing Commitee
Mme Isabelle Lhenry-Yvon
[email protected]
203. Welcome and Opening
80. The Highest Energy Particles in Nature – the Past, the Present and the Future
Prof. Alan Watson (University of Leeds)
The Highest Energy Particles in Nature – the Past, the Present and the Future
Alan Watson
Since the earliest days cosmic-ray physicists have been studying the highest-energy particles in Nature. A basic understanding of the development of electromagnetic cascades led to the first targeted searches for air showers and, soon after the discovery of charged and neutral pions,...
149. TA Spectrum
Dmitri Ivanov (University of Utah)
Telescope Array (TA) is measuring cosmic rays of energies from PeV to 100 EeV and higher in the Northern hemisphere. TA has two parts: main TA and the TA low energy extension (TALE). Main TA is a hybrid detector that consists of 507 plastic scintillation counters on a 1200m - spaced square grid that are overlooked by three fluorescence detector stations. TALE is also a hybrid detector and it...
154. Measurement of energy spectrum of ultra-high energy cosmic rays with the Pierre Auger Observatory
Valerio Verzi (INFN Roma "Tor Vergata")
The energy spectrum of high-energy cosmic rays measured using the Pierre Auger Observatory is presented. The measurements extend over three orders of magnitude in energy from 3 x 10^17 eV up to the very end of the spectrum and they benefit of the almost calorimetric estimation of the shower energies performed with the fluorescence telescopes. The huge amount of data collected with the surface...
160. Auger-TA energy spectrum working group report
The energy spectrum of ultra-high energy cosmic rays is the most emblematic observable for describing these particles. Beyond a few tens of EeV, the Pierre Auger Observatory and the Telescope Array, currently being exploited, provide the largest exposures ever accumulated in the Northern and the Southern hemispheres to measure independently a suppression of the intensity, in a complementary...
78. Minimal model of UHECR and IceCube neutrinos
Mr Dmitri Semikoz (APC, Paris)
In this talk I'll present minimal model, which explain UHECR spectrum and composition and at the same time explain IceCube astrophysical neutrino signal (M.Kachelriess et al, ``Minimal model for extragalactic cosmic rays and neutrinos,'' Phys.Rev.D 96}, 083006 (2017) Also I'll discuss galactic-extragalactic transition in context of this model.
134. NICHE: Air-Cherenkov light observation at the TA site
Prof. Douglas Bergman (University of Utah)
An array of non-imaging Cherenkov light collectors has recently been installed at the Telescope Array Middle Drum site, in the field-of-view of the TALE FD telescopes. This allows for imaging/non-imaging Cherenkov hybrid observations of air showers in the energy range just above 1 PeV. The performance of the array and the first analyses using hybrid measurements will be presented.
143. Data-driven model of the cosmic-ray flux and mass composition over all energies
Hans Dembinski (Max Planck Institute for Nuclear Physics, Heidelberg)
We present a parametrisation of the cosmic-ray flux and its mass composition over an energy range from 1 GeV to $10^{11}$ GeV, which can be used for theoretical calculations. The parametrisation provides a summary of the experimental state-of-the-art for individual elements from proton to nickel. We seamlessly combine measurements of the flux of individual elements from high-precision...
74. Particle Acceleration in Radio Galaxies
Prof. Tony Bell (University of Oxford)
Ultra-high energy cosmic rays pose an extreme challenge to theories of particle acceleration. We discuss the reasons why diffusive acceleration by shocks is a leading contender. A crucial aspect of shock acceleration is that cosmic rays must be efficiently scattered by magnetic field. This requires magnetic field amplification on scales comparable with the cosmic ray Larmor radius, which in...
159. Estimates of the Cosmic-Ray Composition with the Pierre Auger Observatory
Michael Unger (KIT)
We present measurements from the Pierre Auger Observatory related to
mass composition of ultra-high energy cosmic rays.
Using the fluorescence telescopes of the Observatory we determine the
distribution of shower maxima (Xmax) from 10^17.2 to 10^19.6 eV and
derive estimates of the mean and variance of the average logarithmic
mass of cosmic rays. The fraction of p, He, N and Fe nuclei as...
111. Measurements of UHECR Mass Composition by Telescope Array
William Hanlon (University of Utah)
Telescope Array (TA) has recently published results of nearly nine years of $X_{\mathrm{max}}$ observations providing it's highest statistics measurement of UHECR mass composition to date for energies exceeding $10^{18.2}$ eV. This analysis measured agreement of observed data with results expected for four different single elements. Instead of relying only on the first and second moments of...
151. Depth of maximum of air-shower profiles: testing the compatibility of measurements performed at the Pierre Auger Observatory and the Telescope Array experiment
Alexey Yushkov (Institute of Physics AS CR, Prague)
At the Pierre Auger Observatory and the Telescope Array (TA) experiment the measurements of depths of maximum of air-shower profiles, $X_{\rm max}$, are performed using direct observations of the longitudinal development of showers with the help of the fluorescence telescopes. Though the same detection technique is used by both experiments, the straightforward comparison of the characteristics...
83. Search and study of extensive air shower events with the TUS space experiment.
Andrey Grinyuk (Joint Institute for Nuclear Research)
The TUS experiment is designed to investigate the ultra high energy cosmic rays (UHECR) at energy ∼100 EeV from the space orbit by the UV radiation measurement of extensive air showers (EAS). It is the first orbital telescope aimed for such measurements and is taking data since April 28, 2016. TUS detector consists of a modular Fresnel mirror and a photo receiver matrix with a field of view...
72. Ultra high energy cosmic rays simulations with CONEX code
Dr Mohamed Cherif TALAI (Badji Mokhtar University of Annaba, Department of Physics )
Nowadays, ultra high energy cosmic rays (UHECR) are subject to intense research of great interest. The existence of such rays with an energy above $10^{20}$ eV is contradicted by the limit GZK due to photo-pion production, or by nuclei photo-disintegration, in the interaction of UHECR with the cosmic microwave background.
In this work, detailed simulations of extensive air showers have been...
87. A Quality Control of High Speed Photon Detector
Mr Yusuke Inome (Konan University, Icrr), Prof. Tokonatsu Yamamoto (Konan University, ICRR)
High speed photon detectors are one of the most important tools for observations of high energy cosmic rays. As technology of photon detectors and its read-out electronics improved rapidly, it became possible to observe cosmic rays with time resolution better than one nano-second. To utilize such devices effectively, calibration using a short-pulse light source is mandatory. We have developed...
93. Blazar flares as the origin of high-energy astrophysical neutrinos?
Foteini Oikonomou (ESO)
The IceCube Collaboration recently announced the detection of a high-energy astrophysical neutrino consistent with arriving from the direction of the blazar TXS 0506+056 during an energetic gamma-ray flare. In light of this finding, we consider the implications for neutrino emission from blazar flares in general. We discuss the likely total contribution of blazar flares to the diffuse neutrino...
193. Multi-wavelength observation of cosmic-ray air-showers with CODALEMA/EXTASIS
Antony Escudie (Subatech, IMT Atlantique, Nantes, France)
Over the years, significant efforts have been devoted to the understanding of the radio emission of extensive air shower (EAS) in the range [20-80] MHz but, despite some studies led until the nineties, the [1-10] MHz band has remained unused for nearly 30 years. At that time it has been measured by some pioneering experiments but also suggested by theoretical calculations that EAS could...
130. Development of the calibration device using UAV mounted UV-LED light source for the fluorescence detector
Dr Takayuki Tomida (Shinshu University), For TA collaboration
We are developing a standard light source with UV-LED of calibration device for fluorescence detector (FD). This device is called Opt-copter. The standard light source is mounted on the UAV, and it can stay at an arbitrary position within the FOD of the FD. The GPS for surveying is highly accurate (~10 cm) and measures the position of the light source synchronously with the light emission....
176. The Auger@TA Project: Phase II Progress and Plans
Corbin Covault (Case Western Reserve University)
The Auger@TA project is a combined effort involving members of both the Pierre Auger Observatory and the Telescope Array experiment (TA) to cross-calibrate detectors and compare results on air showers detected at one location. We have recently reported results from Phase I of the project, during which we collected and presented data from two Auger water-Cherenkov surface-detector stations...
115. Telescope Array search for ultra-high energy photons and neutrinos
Grigory Rubtsov (Institute for Nuclear Research of the Russian Academy of Sciences)
We report the ultra-high energy (> 1EeV) photon flux limits based on the analysis of the 9 years data from the Telescope Array Surface detector. The multivariate classifier is built upon 16 reconstructed parameters of the extensive air shower. These parameters are related to the curvature and the width of the shower front, the steepness of the lateral distribution function and the timing...
114. High-energy emissions from neutron star mergers
Shigeo Kimura (Pennsylvania State University)
ast year, LIGO-VIRGO collaborations reported detection of the first neutron star merger event, GW170817, which accompanied with observations of electromagnetic counterparts from radio to gamma rays. High-energy gamma rays and neutrinos were not observed. However, the mergers of neutron stars are expected to produce these high-energy particles. Relativistic jets are expected to be launched when...
88. Ultra-high energy neutrinos from neutron-star mergers
Valentin Decoene (Institut d'Astrophysique de Paris)
In the context of the recent multi-messenger observation of neutron-star merger GW170817, we examine whether such objects could be sources of ultra-high energy astroparticles. At first order, the energetics and the population number is promising to envisage the production of a copious amount of high-energy particles, during the first minutes to weeks from the merger. In addition, the strong...
91. Ultra-High-Energy Cosmic Rays and Neutrinos from Tidal Disruptions by Massive Black Holes
Claire Guépin (IAP)
In addition to the emergence of time domain astronomy, the advent of multi-messenger astronomy opens up a new window on transient high-energy sources. Through the multi-messenger study of the most energetic objects in our universe, two fundamental questions can be addressed: what are the sources of ultra-high energy cosmic rays (UHECRs) and the sources of very-high energy neutrinos?
Jetted...
101. Supergalactic Structure of Multiplets with the Telescope Array Surface Detector
Jon Paul Lundquist (University of Utah - Telescope Array)
Evidence of supergalactic structure of multiplets has been found for ultra-high energy cosmic rays (UHECR) with energies above 10$^{19}$ eV using 7 years of data from the Telescope Array (TA) surface detector. The tested hypothesis is that UHECR sources, and intervening magnetic fields, may be correlated with the supergalactic plane, as it is a fit to the average matter density within the GZK...
66. Ultra-High-Energy Cosmic Rays from Radio Galaxies
Björn Eichmann
Radio galaxies are intensively discussed as the sources of cosmic rays observed above about 3 EeV, called ultra-high energy cosmic rays (UHECRs). The talk presents a first, systematic study that takes the individual characteristics of these sources into account, as well as the impact of the galactic magnetic field, as well as the extragalactic magnetic-field structures up to a distance of 120...
107. Cosmogenic neutrinos from a combined fit of the Auger spectrum and composition
Jonas Heinze
We present a combined fit of the Auger spectrum and composition based on a newly developed code for the extragalactic propagation of cosmic ray nuclei (PriNCe). This very efficient numerical solver of the transport equations allows for scans over large ranges of unknown UHECR source parameters.
Here, we present a study of a generalized source population with three parameters...
73. The most updated results of the magnetic field structure of the Milky Way
Prof. JinLin Han (National Astronomical Observatories, Chinese Academy of Sciences)
Magnetic fields are an important agent for cosmic rays to transport. The observed all-sky Faraday rotation distribution implies that the magnetic fields in the Galactic halo have a toroidial structure, but the radius range and scale height as well as the strength of the toroidial fields are totally unknown. In the Galactic disk, the magnetic fields probably follow the spiral structure with a...
6. Ultra-high energy cosmic rays from radio galaxies
James Matthews (University of Oxford)
The origin of ultra-high energy cosmic rays (UHECRs) is an open question, but radio galaxies offer one of the best candidate acceleration sites. Acceleration at the termination shocks of relativistic jets is problematic because relativistic shocks are poor accelerators to high energy. Using hydrodynamic simulations and general physical arguments, I will show that shocks with non- or mildly...
85. UHECR science with ground-based imaging atmospheric Cherenkov telescopes
Dr Iftach Sadeh (DESY-Zeuthen)
Arrays of imaging atmospheric Cherenkov telescopes (IACTs), such as VERITAS and the future CTA observatory, are designed to detect particles of astrophysical origin. IACTs are nominally sensitive to gamma rays and cosmic rays at energies between tens of GeV and hundreds of TeV. As such, they can be used as both direct and indirect probes of particle acceleration to very high energies.
97. Simulation of the optical performance of the Fluorescence detector Array of Single-pixel Telescopes
Dusan Mandat (Institute of Physics of Academy of Science of The Czech Republic), Toshihiro Fujii (ICRR, University of Toyo)
The Fluorescence detector Array of Single-pixel Telescopes (FAST) is a proposed large-area, next-generation experiment for the detection of ultra-high energy cosmic rays via the atmospheric fluorescence technique. The telescope's large field-of-view (30°x30°) is imaged by four 200 mm photomultiplier-tubes at the focal plane of a segmented spherical mirror of 1.6 m diameter. Two prototypes are...
113. New Constraints on the Random Magnetic Field of the Galaxy
The knowledge of the magnitude and coherence length of the random
component of the Galactic Magnetic Field (GMF) is of fundamental
importance for establishing the rigidity threshold above which
astronomy with charged particles is possible.
Here we present a new study of the random component of the GMF using
synchrotron intensity as measured by Planck, WMAP and Haslam et al and
combine it for...
131. Atmospheric transparency measurement on Telescope Array site by the central laser facility
Dr Takayuki Tomida (Shinshu University), For the Telescope Array collaboration
The TA experiment has three FD stations these containing 38 FDs.
In addition, 16 FD was newly added by TAx4 and TALE.
In order to reconstruct FD observation data to air shower information, it is necessary to calibrate the influence of aerosol attenuation. CLF measures atmospheric transparency of TA site.
144. Investigating an angular correlation between nearby starburst galaxies and ultrahigh-energy cosmic rays with the Telescope Array experiment
Armando di Matteo (ULB, Brussels, Belgium)
The arrival directions of cosmic rays detected by the Pierre Auger Observatory (Auger) with energies above 39 EeV were recently reported to correlate with the positions of 23 nearby starburst galaxies (SBGs): in their best-fit model, 9.7% of the cosmic-ray flux originates from these objects and undergoes angular diffusion on a 12.9° scale. On the other hand, some of the SBGs on their list,...
167. Air Shower Structure measured with the Telescope Array Surface Detectors
Ms Rosa Mayta Palacios (Osaka City University)
Telescope Array constructed in Utah USA is a largest air shower observatory in the northern hemisphere aiming at clarifying the origin of UHECRs. In order for better understandings of the air shower phenomenon we report a study on the distribution of arriving signals measured with FADC of the TA Surface detector we use 10 years TA SD data to examine which include delay time to shower front...
103. Latest cosmic-ray results from IceCube and IceTop
Karen Andeen (Marquette University)
The IceCube Neutrino Observatory at the geographic South Pole, with its surface array IceTop, detects three different components of extensive air showers: the total signal at the surface, low energy muons in the periphery of the showers, and high energy muons in the deep array of IceCube. These three components allow for a variety of cosmic ray measurements including the energy spectrum and...
135. The Cosmic-Ray Energy Spectrum between 2 PeV and 2 EeV Observed with the TALE detector in monocular mode
Charles Jui
We present a measurement of the cosmic ray energy spectrum by the Telescope
Array Low-Energy Extension (TALE) air fluorescence detector (FD). The TALE FD
is also sensitive to the Cherenkov light produced by shower particles. Low
energy cosmic rays, in the PeV energy range, are detectable by TALE as
``Cherenkov Events''. Using these events, we measure the energy spectrum from
a low energy...
77. KASCADE-Grande: Post-operation analyses and latest results
Andreas Haungs (KIT), KASCADE-Grande collaboration
The KASCADE-Grande experiment has significantly contributed to the current knowledge about the energy spectrum and composition of cosmic rays for energies between the knee and the ankle. Meanwhile, post-LHC versions of the hadronic interaction models are available and used to interpret the entire data set of KASCADE-Grande. In addition, a new, combined analysis of both arrays, KASCADE and...
198. Primary Energy Spectrum by the Data of EAS Cherenkov Light Arrays Tunka-133 and TAIGA-HiSCORE
Vasily Prosin
Tunka-133 collected data since 2009. The data of 7 winter seasons (2009-2014 and 2015-2017) are processed and analyzed till now. The new TAIGA-HiSCORE array, designed for gamma astronomy tasks mostly, can be used for reconstruction of the all primary particle energy spectrum too. These two arrays provide the very wide range of primary energy measurements 2.10^14 – 2.10^18 eV with the same...
75. Transition from Galactic to Extragalactic Cosmic Rays
Michael Kachelriess (Department of Physics, NTNU)
Additionally to the all-particle cosmic ray (CR) spectrum, data on the
primary composition and anisotropy have become available from the knee region up to few $\times 10^{19}$ eV. These data point to an early Galactic-extragalactic transition and the presence of Peter's cycle, i.e. a rigidity-dependent maximal energy. Theoretical models have to explain therefore the ankle as a feature in the...
98. Ultra High Energy Cosmic Ray Propagation and Source Signatures
Prof. Andrew Taylor
Knowledge about the processes dictating UHECR
losses during their propagation in extragalactic space
allows the secondary species to be used to probe the source
location. In this talk I will cover the state of our knowledge on
these processes, and gives examples about properties of the
sources that may be inferred from the observed secondary
species at Earth. Some suggestion will also be...
190. Galactic and Intergalactic magnetic fields
Prof. Andrii Neronov (University of Geneva & APC, Paris)
I will review the status of measurements and modelling of Galactic and intergalactic magnetic fields in the context of multi-messenger astrophysics and in particular of UHECR observations.
145. The extragalactic gamma-ray background above 100 MeV
Markus Ackermann (DESY)
I will review our knowledge about the properties and the origin of the extragalactic gamma-ray background above 100 MeV. Since the universe is transparent to MeV and GeV gamma rays up to very high redshifts, the extragalactic gamma-ray background contains the imprint of all gamma-ray emission from the beginning of star formation until the present day. Its properties have important implications...
132. Cloud monitoring at Telescope Array site by Visible Fisheye CCD.
The Telescope Array (TA) is an international experiment studying ultra-high energy cosmic rays.
TA uses fluorescence detection technology to observe cosmic rays, and in order to estimate the flux of cosmic rays with the observation of the fluorescence detector (FD), it is necessary to estimate the condition of FD observation area correctly.
Because the cloud has a great influence on the Field...
139. CRPropa 3.2: Improved and extended open-source astroparticle propagation framework from TeV to ZeV energies
Dr Arjen van Vliet (DESY Zeuthen)
Experimental observations of Galactic and extragalactic cosmic rays, neutrinos and gamma rays in the last decade challenge the theoretical description of both the sources and the transport of these particles. The latest version of the publicly available simulation framework CRPropa 3.2 is a Monte-Carlo based software package capable of providing consistent solutions of the cosmic-ray origin...
140. Origins of Extragalactic Cosmic Ray Nuclei by Contracting Alignment Patterns induced in the Galactic Magnetic Field
Mr Marcus Wirtz (RWTH Aachen University)
We present a novel approach to search for origins of ultra-high energy cosmic rays. In a simultaneous fit to all observed cosmic rays we use the galactic magnetic field as a mass spectrometer and adapt the nuclear charges such that their extragalactic arrival directions are concentrated in as few directions as possible. During the fit the nuclear charges are constraint by the individual...
142. The detection of UHECRs with the EUSO-TA telescope
Francesca Bisconti (INFN Sezione di Torino), Mario Bertaina (Univ. of Torino, Italy), Kenji Shinozaki (University of Torino, Italy)
EUSO-TA is a cosmic ray detector developed by the JEM-EUSO Collaboration (Joint Experiment Missions for Extreme Universe Space Observatory), observing during nighttime the fluorescence light emitted through the path of extensive air showers in the atmosphere. It is installed at the Telescope Array site in Utah, USA, in front of the fluorescence detector station in Black Rock Mesa, as...
150. TA SD Spectrum
Telescope Array (TA) is a large cosmic ray detector in the Northern hemisphere that measures cosmic rays of energies from PeV to 100 EeV and higher. Main TA consists of a surface detector (SD) of 507 plastic scintillation counters of 1200 m separation on a square grid that is overlooked by three fluorescence detector stations. We present the cosmic ray energy spectrum measured by the TA SD...
194. The Atmospheric Electricity Studies at the Pierre Auger Observatory
Kevin-Druis Merenda
The Fluorescence Detector (FD) at the Pierre Auger Observatory has triggered on numerous elves since the first observation in 2005, and it has potential for simultaneous Terrestrial Gamma ray Flashes (TGF) detection. In addition, the Surface Detector (SD) observed peculiar events with radially expanding footprints, which are correlated with lightning strikes reconstructed by the World Wide...
200. AugerPrime implementation in the Offline simulation and reconstruction framework
David Schmidt (Karlsruhe Institute of Technology)
The Pierre Auger Observatory is currently upgrading its surface detector array by placing a 3.84 square meter scintillator on top of each of the existing 1660 water-Cherenkov detectors. The differing responses of the two detectors allow for the disentanglement of the muonic and electromagnetic components of extensive air showers, which ultimately facilitates reconstruction of the mass...
81. Studies for High Energy air shower identification using RF measurements with the ASTRONEU array
Mr Stavros Nonis (Malkou)
The Hellenic Open University (HOU) Cosmic Ray Telescope (ASTRONEU) comprises 9 charged particle detectors and 3 RF antennas arranged in three autonomous stations operating at the University Campus of HOU in the city of Patra. In this work, we extend the analysis of very high energy showers that are detected by more than one station and in coincidence with the RF antennas of the Telescope. We...
178. Inductive Particle Acceleration
John KIRK
181. Black hole jets in clusters of galaxies as sources of high-energy cosmic particles
Ke FANG
It has been a mystery that with ten orders of magnitude difference in energy, high-energy neutrinos, ultrahigh-energy cosmic rays, and sub-TeV gamma rays all present comparable energy injection rate, hinting an unknown common origin. Here we show that black hole jets embedded in clusters of galaxies may work as sources of all three messengers. By numerically simulating the propagation of...
153. Multi-messenger Astrophysics at Ultra-High Energy with the Pierre Auger Observatory
Alvarez-Muniz Jaime (Dept. Particle Physics, Univ. Santiago de Compostela)
The study of correlations between observations of fundamentally different nature from extreme cosmic sources promises extraordinary physical insights into the Universe. With the Pierre Auger Observatory we can significantly contribute to multi-messenger astrophysics by searching for ultra-high energy particles, particularly neutrinos and photons which, being electrically neutral, point back to...
79. Recent IceCube results - evidences of neutrino emission from the blazar TXS 0506+056 and searches for Glashow resonance
Lu Lu (Chiba University)
Finally a hundred years after the discovery of cosmic-rays, a blazar has been identified as a source (at ~3 sigma level) of high-energy neutrinos and cosmic-rays thanks to the real-time multimessenger observation lead by the cubic-kilometer IceCube neutrino observatory. In this talk, details of the spatial-timing correlation analysis of the ~290 TeV neutrino event with Fermi light curves will...
89. Latest results on high-energy cosmic neutrino searches with the ANTARES neutrino telescope
Agustín Sánchez Losa (INFN - Sezione di Bari)
The ANTARES detector is currently the largest undersea neutrino telescope. Located in the Mediterranean Sea at a depth of 2.5 km, 40 km off the Southern coast of France, it has been looking for cosmic neutrinos for more than 10 years. High-energy cosmic neutrino production is strongly linked with cosmic ray production. The latest results from IceCube represent a step forward towards the...
156. Search for a correlation between the UHECRs measured by the Pierre Auger Observatory and the Telescope Array and the neutrino candidate events from IceCube and ANTARES
Dr Lorenzo Caccianiga (Università degli studi di Milano)
We present the results of three searches for correlations between UHECR events measured by the Pierre Auger Observatory and Telescope Array and high energy neutrino candidate events from IceCube and ANTARES. A cross-correlation analysis is performed, where the angular separation between the arrival directions of UHECRs and neutrinos is scanned. The same events are also exploited in a separate...
199. Overview and results from the first four flights of ANITA
Amy Connolly (The Ohio State University)
ANITA was designed as a discovery experiment for ultra-high energy (UHE) neutrinos using the radio Askaryan detection technique, launching from McMurdo Station in Antarctica under NASA's long duration balloon program and observing 1.5 million square kilometers of ice at once from an altitude of 40 km. Over ANITA's four flights we set the best constraints on UHE neutrino fluxes above 10^19 eV,...
104. The cosmogenic neutrino flux determines the fraction of protons in UHECRs
When UHECRs propagate through the universe, cosmogenic neutrinos are created via several interactions. In general, the expected flux of these cosmogenic neutrinos depends on multiple parameters describing the sources and propagation of UHECRs. However, using CRPropa, we show that a 'sweet spot' occurs at a neutrino energy of ~1 EeV. At that energy this flux only depends strongly on two...
152. TALE surface detector array and TALE hybrid system
Prof. Shoichi Ogio (Osaka City University)
The Telescope Array Low-energy Extension (TALE) experiment is a hybrid air shower detector for observation of air showers produced by very high energy cosmic rays above 10^16.5 eV.TALE is located at the north part of the Telescope Array (TA) experiment site in the western desert of Utah, USA. TALE has a surface detector (SD) array made up of 103 scintillation counters, including 40 with 400 m...
155. Search for Extreme Energy Cosmic Rays with the TUS telescope and comparison with ESAF
Mario Bertaina (Univ. of Torino, Italy)
The Track Ultraviolet Setup (TUS) detector was launched on April 28, 2016 as a part of the scientific payload of the Lomonosov satellite. TUS is a path-finder mission for future space-based observation of Extreme Energy Cosmic Rays (EECRs, E > 5x10^19 eV) with experiments such as K-EUSO. TUS data offer the opportunity to develop strategies in the analysis and reconstruction of the events which...
157. Cloud distribution evaluated by the WRF model during the EUSO-SPB1 flight
Kenji Shinozaki (University of Torino, Italy)
EUSO-SPB1 was a balloon-borne mission of the JEM-EUSO (Joint Experiment Missions for Extreme Universe Space Observatory) Program aiming at the observation of UHECRs from space. The EUSO-SPB1 telescope was a fluorescence detector with a 1 m2 Fresnel refractive optics and a focal surface covered with 36 multi-anode photomultiplier tubes for a total of 2304 channels covering ~11 degrees FOV. Each...
161. Determination of the invisible energy of extensive air showers from the data collected at Pierre Auger Observatory
Dr Analisa Mariazzi (Universidad Nacional de La Plata and CONICET, La Plata, Argentina)
In order to get the primary energy of cosmic rays from their extensive air showers using the fluorescence detection technique, the invisible energy should be added to the measured calorimetric energy. The invisible energy is the energy carried away by particles that do not deposit all their energy in the atmosphere.
It has traditionally been calculated using Monte Carlo simulations that are...
162. Potential of a scintillator and radio extension of the IceCube surface detector array
Andreas Haungs (KIT)
An upgrade of the present IceCube surface array (IceTop) with scintillation detectors and possibly radio antennas is foreseen. The enhanced array will calibrate the impact of snow accumulation on the reconstruction of cosmic-ray showers detected by IceTop as well as improve the veto capabilities of the surface array. In addition, such a hybrid surface array of radio antennas, scintillators...
170. Study of muons from ultrahigh energy cosmic ray air showers measured with the Telescope Array experiment
Dr Ryuji Takeishi (Sungkyunkwan University, South Korea)
One of the uncertainties in ultrahigh energy cosmic ray (UHECR) observation derives from the hadronic interaction model used for air shower Monte-Carlo (MC) simulations. One may test the hadronic interaction models by comparing the measured number of muons observed at the ground from UHECR induced air showers with the MC prediction.
The Telescope Array (TA) is the largest experiment in the...
204. Direct measurement of the muon density in air showers with the Pierre Auger Observatory
Mrs Sarah Mueller (KIT)
As part of the upgrade of the Pierre Auger Observatory, the AMIGA (Auger Muons and Infill for the Ground Array) underground muon detector extension will allow for direct muon measurements for showers falling into the 750m SD vertical array. We optimized the AMIGA muon reconstruction procedure by introducing a geometrical correction for muons leaving a signal in multiple detector strips due to...
138. TA Anisotropy Summary
Kazumasa Kawata (ICRR, University of Tokyo)
The Telescope Array (TA) is the largest ultra-high-energy cosmic-ray (UHECR) detector in the northern hemisphere, which consists of 507 surface detector (SD) covering a total 700 km^2 and three fluorescence detector stations. In this presentation, we will summarize recent results on the search for directional anisotropy of UHECRs using the latest data set collected by the TA SD array.
195. Study of the arrival directions of ultra-high-energy cosmic rays detected at the Pierre Auger Observatory
Piera Luisa Ghia (IPNO)
The distribution of the arrival directions of ultra-high energy cosmic rays is, together with the spectrum and the mass composition, a harbinger of their nature and origin. As such, it has been the subject of intense studies at the Pierre Auger Observatory since its inception in 2004, with two main lines of analysis being pursued at different angular scales and at different energies. One...
158. Covering the sphere at ultra-high energies: full-sky cosmic-ray maps beyond the ankle and the flux suppression
Jonathan Biteau (IPNO)
Despite deflections by Galactic and extragalactic magnetic fields, the distribution of the flux of ultra-high energy cosmic rays (UHECRs) over the celestial sphere remains a most promising observable for the identification of their sources. This distribution is remarkably close to being isotropic. Thanks to a large number of detected events over the past years, a large-scale anisotropy at...
95. A Close Correlation between TA Hotspot UHECR Events and Local Filaments of Galaxies and its Implication
Dr Jihyun Kim (UNIST)
The Telescope Array (TA) experiment identified a concentration of ultra-high-energy cosmic ray (UHECR) events on the sky, so-called hotspot. Besides the hotspot, the arrival directions of TA events show another characteristic feature, i.e., a deficit of events toward the Virgo cluster. As an effort to understand the sky distribution of TA events, we investigated the structures of galaxies...
90. High energy cosmic ray interactions and UHECR composition problem
Dr Sergey Ostapchenko (Frankfurt Institute for Advanced Studies (FIAS))
I'll discuss the differences between contemporary Monte Carlo generators of high energy hadronic interactions and their impact on the interpretation of experimental data on ultra-high energy cosmic rays (UHECRs). In particular, key directions for model improvements will be outlined. The prospect for a coherent interpretation of the data in terms of the primary composition will be investigated.
172. Measurements and tests of hadronic interactions at ultra-high energies with the Pierre Auger Observatory
Dr Markus Roth (Karlsruhe Institute of Technology, Institut für Kernphysik, Karlsruhe, Germany), Dr Lorenzo Cazon (LIP, Lisbon)
Extensive air showers are complex objects, resulting of billions of
particle reactions initiated by single cosmic ray at ultra-high-energy.
Their characteristics are sensitive both to the mass of the primary
cosmic ray and to the details of hadronic interactions. Many of the
interactions that determine the shower features occur in energy and kinematic
regions beyond those tested by human-made...
137. Hadronic interaction studied by TA
Takashi Sako (ICRR, University of Tokyo)
Telescope Array (TA) is measuring ultra-high energy cosmic rays in the Northern hemisphere since 2008. Using hybrid detectors namely surface detector array (SD) and fluorescence telescopes (FD), TA can measure the lateral and longitudinal developments of extensive air showers, respectively, in detail. Recent analysis of SD data reveals the excess of muons at large distance from the shower core...
165. Report on tests and measurements of hadronic interaction properties with air showers
Unambiguously determining the mass composition of ultra-high energy cosmic rays is a key challenge at the frontier of cosmic ray research. The mass composition is inferred from air shower observables using air shower simulations, which rely on hadronic interaction models. Current hadronic interaction models lead to varying interpretations, therefore tests of hadronic interaction models with...
168. Prospects of testing an UHECR single source class model with the K-EUSO orbital telescope
Mikhail Zotov (Skobeltsyn Institute of Nuclear Physics, Lomonosov Moscow State University)
KLYPVE-EUSO (K-EUSO) is a planned orbital detector of ultra-high energy cosmic rays (UHECRs), which is to be deployed on board the International Space Station. K-EUSO is expected to have an almost uniform exposure over the celestial sphere and register from 120 to 500 UHECRs at energies above ~57 EeV in a 2-year mission. We employ the CRPropa3 package to estimate prospects of testing the...
163. A novel method for the absolute end-to-end calibration of the Auger fluorescence telescopes.
Hermann-Josef Mathes
The fluorescence detector technique is using the atmosphere
as a calorimeter. Besides the precise monitoring of the parameters
of the atmosphere a proper knowledge of the optical properties in
the UV range of all optical components involved in the measurements
of the fluorescence light is vital.
Until now, the end-to-end calibration was performed with a 4.5 m^2 large,
uniformly lit light...
166. Preliminary results of the AMIGA engineering array at the Pierre Auger Observatory
Alvaro Taboada Nunez (IKP, KIT / ITeDA)
The prototype array of the underground muon detector as part of the AMIGA enhancement was built and operated until November 2017. During this engineering phase, the array was composed of seven stations. The detector design as well as its performance for physics deliverables were validated and optimized. The most notable improvement was the selection of silicon photo-multipliers rather than...
173. Average shape of longitudinal shower profiles measured at the Pierre Auger Observatory
Sofia Andringa (LIP)
The average profiles of cosmic ray showers developing with traversed atmospheric depth are measured for the first time, with the Fluorescence Detectors at the Pierre Auger Observatory. The profile shapes are well reproduced by the Gaisser-Hillas parametrization, at the 1% level in a 500 g/cm2 interval around the shower maximum, for cosmic rays with log(E/eV) > 17.8. The results are quantified...
189. On the maximum energy of protons in the hotspots of AGN jets
Anabella Araudo
It has been suggested that relativistic shocks in extragalactic jets may accelerate the highest energy cosmic rays. The maximum energy to which particles can be accelerated via a diffusive mechanism depends on the magnetic turbulence near the shock but recent theoretical advances indicate that relativistic shocks are probably unable to accelerate particles to energies much larger than a...
196. Ultra-high-energy cosmic rays from supermassive black holes
Arman Tursunov
Mechanism of acceleration of charged particles to ultra-high energies above EeV up to ZeV still remains unsolved. Recent multimessenger observations strongly established the source of ultra-high-energy cosmic rays (UHECRs) being extragalactic supermassive black hole (SMBH). I will show that UHECRs can be produced within a neutron beta-decay in a dynamical environment of SMBHs located at the...
201. Radio detection of cosmic rays with the Auger Engineering Radio Array
Tim Huege (Karlsruhe Institute of Technology)
The Auger Engineering Radio Array (AERA) complements the Pierre Auger Observatory with 150 radio-antenna stations measuring in the
frequency range from 30 to 80 MHz. With an instrumented area of 17 km^2, the array constitutes the largest cosmic-ray radio detector
built to date, allowing us to do multi-hybrid measurements of cosmic rays in the energy range of ~10^17 eV up to several 10^18...
202. ATMOSPHERIC AEROSOL EFFECT ON FD DATA ANALYSIS AT THE PIERRE AUGER OBSERVATORY
Laura Valore (Universita' di Napoli Federico II)
The atmospheric aerosol monitoring system of the Pierre Auger Observatory, initiated in 2004, continues to operate smoothly. Two laser facilities (Central Laser Facility, CLF and eXtreme Laser Facility, XLF) each fire sets of 50 laser shots four times per hour during Fluorescence Detector (FD) shifts.
The FD measures these UV laser tracks. Analysis of these tracks yields hourly measurements...
206. CORSIKA upgrade, plans and status
Ralf Ulrich (KIT)
186. LHC results
David d'Enterria (CERN)
82. Probing the hadronic energy spectrum in proton air interactions through the fluctuations of the EAS muon content
felix riehn (LIP, Lisbon)
The average number of muons in air showers and its connection with the development of air showers has been studied extensively in the past. With the upcoming detector upgrades, UHECR observatories will be able to also probe higher moments of the muon distribution. Here we present a study of the physics of the fluctuations of the muon content. In addition to proving that the fluctuations must...
65. EPOS 3
Tanguy Pierog (KIT, IKP)
With the recent results of large hybrid air shower experiments, it is clear that the simulations of the hadronic interactions are not good enough to obtain a consistent description of the observations. Even the most recent models tuned after the first run of LHC show significant discrepancy with air shower data. Since then many more data have been collected at LHC and lower energies which are...
133. Recent results from the LHCf experiment
Hiroaki Menjo (ISEE, Nagoya University, Japan)
The LHCf experiment aims for measurements of the forward neutral particles at an LHC interaction point to test hadronic interaction models which are widely used in cosmic-ray air-shower simulations. The LHCf had an operation with proton-proton collisions at the center of mass collision energy of 13 TeV in 2015. The LHCf detectors were composed of sampling and imaging calorimeters and they were...
177. Overview of the Auger@TA project and preliminary results from Phase I
Fred Sarazin (Colorado School of Mines), and the Pierre Auger and Telescope Array Collaborations
Auger@TA is a joint experimental program of the Telescope Array experiment (TA) and the Pierre Auger Observatory (Auger), the two leading ultra-high energy cosmic-ray experiments located respectively in the northern and southern hemispheres. The aim of the program is to achieve a cross-calibration of the Surface Detector (SD) from both experiments. The first phase of this joint effort is...
102. Air showers, hadronic models, and muon production.
Sergio Sciutto (Departamento de Física - Universidad Nacional de La Plata - Argentina)
We report on a study about the mechanisms of muon production during the development of extended air showers initiated by ultra-high-energy cosmic rays. In particular, we analyze and discuss on the observed discrepancies between experimental measurements and simulated data.
164. Atmospheric Muons Measured with IceCube
Dr Dennis Soldin (University of Delaware), for the Ice Cube collaboration
IceCube is a cubic-kilometer Cherenkov detector in the deep ice at the geographic South Pole. The dominant event yield is produced by penetrating atmospheric muons with energies above several 100 GeV. Due to its large detector volume, IceCube provides unique opportunities to study atmospheric muons with large statistics in great detail. Measurements of the energy spectrum and the lateral...
99. Results of the first orbital ultra-high-energy cosmic ray detector TUS in view of future space mission KLYPVE-EUSO
P. Klimov
The observation of ultra-high energy cosmic rays (UHECR) from Earth orbit relies on the detection of the UV fluorescence tracks of the extensive air shower (EAS). This technique is widely used by ground-based detectors. Analogous measurements from space will allow to achieve the largest instantaneous aperture for observation the whole sky with nearly homogeneous exposure. It is important for...
86. Results from the first missions of the JEM-EUSO program
Mario Bertaina (University & INFN Torino)
The origin and nature of Ultra-High Energy Cosmic Rays (UHECRs) remain unsolved in contemporary astroparticle physics. To give an answer to these questions is rather challenging because of the extremely low flux of a few per km^2 per century at extreme energies such as E > 5 × 10^19eV. The objective of the JEM-EUSO program, Extreme Universe Space Observatory, is the realization of a space...
109. Leading cluster approach to simulations of hadron collisions with GHOST generator
Jean-Noel Capdevielle (APC et IRFU CEA-Saclay)
We present the current version of generator GHOST which can be used in the simulation of Non Diffractive (ND),Non Single Diffractive (NSD), single diffractive (SD) and double diffractive (DD) events at cosmic ray energies.
The generator is based on four-gaussian parameterization of pseudorapidity distribution which is related to the leading cluster approach in distribution of secondary...
136. Status and prospects of the TAx4 experiment
Dr Eiji Kido (Institute for Cosmic Ray Research, University of Tokyo)
The TAx4 experiment is a project to observe highest energy cosmic rays by expanding the detection area of the TA experiment with newly constructed surface detectors (SDs) and fluorescence detectors (FDs). The construction of both SDs and FDs is ongoing. New SDs are arranged in a square grid with 2.08 km spacing at the north east and south east of the TA SD array. Field of view of new FDs...
188. AugerPrime: the Pierre Auger Observatory upgrade.
Antonella Castellina (INFN & INAF-OATo)
The world largest exposure to ultra high energy cosmic rays accumulated by the Pierre Auger Observatory lead to major advances in our understanding of their properties, but the many unknowns about the nature and distribution of the sources, the primary composition and the underlying hadronic interactions prevent the emergence of a uniquely consistent picture.
The new perspectives opened by...
96. A next-generation ground array for the detection of ultrahigh-energy cosmic rays: the Fluorescence detector Array of Single-pixel Telescopes (FAST)
Toshihiro Fujii (ICRR, University of Toyo)
The origin and nature of ultrahigh-energy cosmic rays (UHECRs) is one of the most intriguing mys- teries in astroparticle physics. The two largest observatories currently in operation, the Telescope Array Experiment in central Utah, USA, and the Pierre Auger Observatory in western Argentina, have been steadily observing UHECRs in both hemispheres for over a decade. We highlight the latest...
108. Detection of ultra-high energy cosmic ray air showers by Cosmic Ray Air Fluorescence Fresnel-lens Telescope for next generation
Dr Yuichiro Tameda (Osaka Electro-Communication University)
In the future, ultra-high energy cosmic ray (UHECR) observatory will be expanded due to the small flux. Then, cost reduction is useful strategy to realize a huge scale observatory. For this purpose, we are developing a simple structure cosmic ray detector named as Cosmic Ray Air Fluorescence Fresnel-lens Telescope (CRAFFT). We deployed CRAFFT detectors at the Telescope Array site and performed...
192. Precision measurements of cosmic rays up to the highest energies with a large radio array at the Pierre Auger Observatory
Dr Jörg Hörandel (Radboud University Nijmegen)
High-energy cosmic rays impinging on the atmosphere of the Earth induce cascades of secondary particles, the extensive air showers. Many particles in the showers are electrons and positrons. Due to interactions with the magnetic field of the Earth they emit radiation with frequencies of several tens of MHz. In the last years huge progress has been achieved in this field through strong...
171. In-ice radio arrays for the detection of ultra-high energy neutrinos
Radio techniques show the most promise for measuring and characterizing
the astrophysical neutrino flux above about 10^17 eV. Complementary strategies include observing a target volume from a distance and deploying sensors in the target volume itself. I will focus on the current status of experiments utilizing the latter strategy, in-ice radio arrays. I will give an overview of results from...
84. The GRAND Project
Olivier Martineau (IN2P3)
The Giant Radio Array for Neutrino Detection (GRAND) aims at detecting ultra-high-energy extraterrestrial neutrinos via the extensive air showers induced by the decay of tau leptons created in the interaction of neutrinos under the Earth's surface. Consisting of an array of $\sim200\,000$ radio antennas deployed over $\sim200\,000\,$km$^2$, GRAND plans to reach, for the first time, a...
92. The space road to UHECR observations: challenges and expected rewards
Etienne Parizot (APC - University Paris 7)
Significant progress has been made in the last decade in the
field of Ultra-High-Energy Cosmic Rays (UHECRs), thanks to the operation
of large ground-based detectors and to the renewed theoretical interest
that they triggered. While multi-messenger astronomy is rapidly developing
worldwide, the sources of the charged messengers, namely the cosmic rays,
are still to be determined, and the...
105. POEMMA: Probe Of Multi-Messenger Astrophysics
Dr John Krizmanic (CRESST/NASA//GSFC/UMBC)
Developed as a NASA Astrophysics Probe mission concept study, the Probe Of Multi-Messenger Astrophysics (POEMMA) science goals are to identify the sources of ultra-high energy cosmic rays (UHECRs) and to observe cosmic neutrinos above 10 PeV. POEMMA consists of two satellites flying in loose formation at 525 km altitudes. A novel focal plane design is optimized to observe the UV air...
205. Closing and Concluding Remarks
Ralph Engel (Karlsruhe Institute of Technology)
207. Introduction
Ralph Engel (Karlsruhe Institute of Technology), Shoichi Ogio (Osaka City University)
Mini Worshop on Future of UHECR
208. Status and open problems in ultrahigh-energy cosmic ray and neutrino physics
Paolo Lipari
209. Origin of UHECR anisotropies and what we can learn from them
Prof. Günter Sigl (University of Hamburg)
210. Mixed composition and the chances of finding UHECR sources
217. Towards a Global Cosmic Ray Observatory (GCOS) - requirements for a future observatory
Ralph Engel (Karlsruhe Institute of Technology), Andreas Haungs (KIT), Dr Markus Roth (Karlsruhe Institute of Technology, Institut für Kernphysik, Karlsruhe, Germany)
218. A giant air shower detector
220. Layered surface detector (10 min)
Ioana Maris (Universitaet und Forschungszentrum Karlsruhe)
211. Discussion time
219. Plans for GRAND 200k
Kumiko Kotera (Institut d'Astrophysique de Paris)
212. A "snake array" of fluorescence detectors (10 min)
Pierre Sokolsky (University of Utah)
213. SKA with muon counters as super-cosmic-ray detector in the transition energy region
214. Lower energy TALE, down to 10^14 eV
215. On the importance of analyzing very-high and ultra-high energy data together, towards a new working group for UHECR symposia
216. Discussion
106. A comparative study of the "muon excess" in extensive air showers
Ivan Karpikov (INR RAS)
The excess of muons in observed extensive air showers with respect to Monte-Carlo simulations shows up itself in the data of various experiments and under different conditions. We present a comparative quantitative analysis of the muon content of showers observed at various energies, zenith angles, core distances etc. by several experiments.
112. anisotropy by UHECR at 60 EeV ruled by lightest nuclei mainly originated from CenA, M82, NGC 253 near AGN.
Daniele Fargion (Physics Departm Rome 1 INFN 1)
The very recent anisotropy at highest UHECR energies is smootly clustering in several wide spots (or hot spots) : Cen A, M82, NGC 253 are at a few Mpc distance and are possible the main sources of these anisotropies in AUGER and TA data.
Because the Virgo absence and the UHECR airshower slat depth most of UHECR are lightest nuclei.
Other additional growing clustering may be related to well...
5. Galactic model of ultra-high energy cosmic rays.
Sergey Shaulov (P.N.Lebedev Physical Institute)
The hypothesis of existence of the new stable heavy hadrons in the cosmic rays is proposed. It follows from the comprehensive study of extensive air showers in
the hybrid experiment HADRON which was carried out at the level 685 g/cm^2 of the Tien Shan mountains. The spectra of the high energy hadrons inside the cores of extensive air showers were obtained for the first time by means of the...
191. The Hard QCD Study In Terms of the GZK Limit
Prof. Andrew Koshelkin (National Research Nuclear University)
We consider the pion production in collisions of ultra high energy protons with the MBR . The probability of such a process is calculated, and is found to be in the strong dependence on the quark-gluon vertex at high energies in the hard QCD limit. The relation of the obtained results to the experimental knee in the energy spectrum of ultra high energy protons allows us to get information...
94. UHECR 8 EeV dipole anisotropy hint of galactic pollution sources
The discover in AUGER of a dipole remarkable anisotropy it is statistically the most strong in the whole Ultra High Energy Cosmic Ray history. It implies a dipole anisotropy almost overlapping to the Argo-Hawc one at tens TeV energy. However
the tens TeV anisotropy it must be very local (galactic) one while the UHECR are supposed to be (as their name suggest) Cosmic ones.
We show that there...
|
CommonCrawl
|
Minnesota accessibility code
Minecraft unity resource pack
Enable messages on icloud not showing on mac
Monad terrace address
Electronic Geometry, Molecular Shape, and Hybridization Page 1 The Valence Shell Electron Pair Repulsion Model (VSEPR Model) The guiding principle: Bonded atoms and unshared pairs of electrons about a central atom are as far from one another as possible. Bonded atoms Nonbonded Pairs Total Electronic Geometry Molecular Shape Bond Angle Hybridization
Aug 11, 2016 · Both water and diethyl ether have the central atom O in s {{p}^{3}} hybrid state with two lone pairs of electrons on O. But due to the greater repulsions between two ethyl groups in diethyl ether than between two H-atoms in water result in greater bond angle (110°) in diethyl ether than 104.5° in that of water .
Any central atom surrounded by three regions of electron density will exhibit sp 2 hybridization. This includes molecules with a lone pair on the central atom, such as ClNO (Figure 9), or molecules with two single bonds and a double bond connected to the central atom, as in formaldehyde, CH 2 O, and ethene, H 2 CCH 2.
May 09, 2019 · The hybridization of orbitals on the central atom in a molecule is sp. … The hybridization of the central atom in the XeF4 … (GGG:1) ensures that the hybridization is as a true set of … 0 for BR3 0 for CR4, 1 for CR3 …
Predict the geometry around the central atom in PO4^3-Tetrahedral. ... The hybridization of the central nitrogen atom in the molecule N2O is. sp.
Get college assignment help at Students Paper Help Question hybridization of central atom [PO3]3- Question how do you write 2C2H6 in words? Question how do you write 2C2H6 in words? Question lewis structure of xecl2 . Question lewis structure of xecl2 . Question A gas expands and does P-V work on the surroundings
12. What is the hybridization of the central atom in each of the following? (a) BeH2 (b) SF6 (c) PO43- (d) PC15
6.5 creedmoor 129 interlock load data
In the following questions a statement of Assertion (A) followed by a statement of Reason (R) is given. Choose the correct option out of the choices given below each question. Reason (R) : This is because nitrogen atom has one lone pair and oxygen atom has two lone pairs.
A = the central atom, X = an atom bonded to A, E = a lone pair on A Note: There are lone pairs on X or other atoms, but we don't care. We are interested in only the electron densities or domains around atom A. Total Domains Generic Formula Picture Bonded Atoms Lone Pairs Molecular Shape Electron Geometry Example Hybridi -zation Bond Angles 1 AX
Which of the following species have the same molecular geometry: PO43−, SF4, PF5, and XeF4? ... What is the hybridization of the central atom in SO2? sp2.
The ozone molecule looks like this : The hybridization for the blue oxygen atom will be sp2, the green oxygen atom will be sp2 and the red oxygen atom will be sp3.
# of atoms bonded tocentral atom # lone pairs on central atom trigonal bipyramidal distorted tetrahedron Arrangement ofelectron pairs Molecular Geometry Class VSEPR trigonal bipyramidal trigonal Hybridization - mixing of two or more atomic orbitals to form a new set of hybrid orbitals. •
10. (a) Draw a Lewis structure for SiH4 and predict its molecular geometry. (b) What is the hybridization of the central atom Si in SiH4? (c) What must happen to a ground state Si atom in order for hybridization to occur? (d) Sketch the four bonds that occur between the hybrid orbitals on the Si central atom and the four H 1s orbitals.
hybridization of the central nitrogen atom is _____. a. sp b. sp 2 c. sp 3 d. sp 3d e. sp d2 28. The hybridization of orbitals on the central atom in a molecule is sp. The electron-domain geometry around this central atom is _____. a. octahedral b. linear c. trigonal planar d. trigonal bipyramidal e. tetrahedral 29.
Although the hybridization is sp², the geometry is angular, due to the presence of lone pair in the central Oxygen atom. The bonding can be expressed as a resonance hybrid with a single bond on one side and double bond on the other producing an overall bond order of 1.5 for each side.
The hybridization can be determined by the number of sigma bonds and the number of lone pair. The number of sigma bond is 4 and the number of lone pairs is 0. This implies stearic number is 4 and hence the hybridization is s p 3 and the geometry and shape of the molecule is tetrahedral and two electrons are forming a double bond (1 sigma and 1 ...
a) icl4— b) sf4 c) nh4+ d) ch3cl e) po43— 15. The electron pair geometry of the ion NO2+ is A) triangular B) bent C) linear D) tetrahedral E) octahedral. 16 - 22. Match the hybridization with that of the central atom in each of the following: molecules and ions.
11 Subscripts and Coefficients Give Different Information Subscripts tell the number of atoms of each element in a molecule Coefficients tell the number of molecules. 12 Rules for balancing equations Identify the names of the reactants and the products, and write a word equation.
Atomic structure - AQA. Atoms consist of a nucleus containing protons and neutrons, surrounded by electrons in shells. The numbers of subatomic particles in an atom can be calculated from its atomic number and mass number.
Hno3 Hybridization
Hybridization - Nitrogen, Oxygen, and Sulfur. Nitrogen - sp 3 hybridization. The nitrogen atom also hybridizes in the sp 2 arrangement, but differs from carbon in that there is a "lone pair" of electron left on the nitrogen that does not participate in the bonding.
A. ICl5:Hybridization of iodine atom is sp3d2.Considering that iodine has one pair of unshared electrons and it is bonded to 5 chlorine atoms, the most stable configuration, minimizing the repulsion between the pair of electrons and the electrons from the chemical bonds is that ...
For example, in H 2 O, the electron-pair geometry around the central O atom is approximately tetrahedral. Thus, the four electron pairs can be envisioned as occupying sp 3 hybrid orbitals. Two of these orbitals contain nonbonding pairs of electrons, while the other two are used in forming bonds with hydrogen atoms, as shown in Figure 9.17.
When you draw the lewis structure, you will find that there is no electron pairs on the central atom, and there is 6 atoms surrounding the central atom. hybridization: sp3d2 shape: octahedral
The hybridization process involves taking atomic orbitals and mixing these into hybrid orbitals. These have a different shape, energy and other The sp3d2 hybridization concept involves hybridizing three p, one s and two d-orbitals. This results in the formation of six different sp3d2 orbitals and these...
Hybridization is defined as the intermix-ing of dissimilar orbitals of the same atom but having slightly different orbitals takes place in the presence of more electronega-. tive atom than the central atom due to contraction in the. dicted hybridized state of central atom using our method are.
Mar 14, 2008 · XeF3+ and PO4 -3 You give two answers here. For example if the hybridization of the central atom for the first molecule or ion is sp3, and on the second is sp3d you enter: sp3 sp3d NOTE the SINGLE SPACE between the answers. Enter s first, e.g. sp3d NOT dsp3
Know answer of objective question : The sp3d2 hybridization of central atom of a molecule would lead to?. Answer this multiple choice objective question and get explanation and result.It is provided by OnlineTyari in English
epub library models of molecular shapesvsepr theory and orbital hybridization hybridization and molecular orbital mo theory chapter 10 historical models ovalence bond models of molecular shapesvsepr theory and orbital hybridization separate from chemistry in the laboratory 5e Oct 29, 2020 Posted By Sidney Sheldon Media
Reylight pineapple brass
O (2) One Lewis structure of PO43− is O 3 P O O The molecule has 32 valence electrons. Because the multiple bond is counted as one electron pair (a Solution The Lewis structure shows two central C atoms: one with a tetrahedral electron arrangement (s p 3 hybridization, methyl carbon atom) and...
Apr 05, 2008 · Select the hybridization at the central atom of each of the following molecules or ions. BF3 ---Select--- sp sp2 sp3 sp3d sp3d2 H2O2 ---Select--- sp sp2 sp3 sp3d sp3d2 NCl3 ---Select--- sp sp2 sp3 sp3d sp3d2 HCN ---Select--- sp sp2 sp3 sp3d sp3d2 NO31- ---Select--- sp sp2 sp3 sp3d sp3d2 SCl2 ---Select--- sp sp2 sp3 sp3d sp3d2 OCCl2 ---Select--- sp sp2 sp3 sp3d sp3d2 PO43- ---Select--- sp sp2 ...
Numpy smooth array
2. Based on valence bond theory, which statement best describes the electron geometry and hybridization of the central atom(s) in acetylene C2H2? A. The electron geometry of the 2 carbons in acetylene is tetrahedral with a sp3 hybridization. B. The electron geometry of the 2 carbons in acetylene is trigonal planar with a sp2 hybridization. C.
Slpm to psi
(3) The correct hybridization is the one associated with the highest bond-order, for that atom in any appropriate resonance structure. the hybridization concept is developed by VBT n according to this there is no hybridization in the non central atoms ...
Coordination pdf
The hybridization of the central carbon atom is A) sp2 B sp3 C) sp D) sp3d2 E) sp3d 43) The hybridization of orbitals on the central atom in a molecule is sp2. Now as you want to know hybridization of carbon atom take carbon as central atom, now we can see that carbon is bonded with two oxygen by only a single bond.
Wyolum github
Camping with atv trails near me
Location is not available a device which does not exist was specified
Apr 05, 2020 · The A represents the central atom, the X represents the number of atoms bonded to A and E represents the number of lone electron pairs surrounding the central atom. The sum of X and E is the steric number. The central nitrogen atom in nitrate has three X ligands due to the three bonded oxygen atoms.
Dr dish home
Any central atom surrounded by three regions of electron density will exhibit sp 2 hybridization. This includes molecules with a lone pair on the central atom, such as ClNO (Figure 8.14), or molecules with two single bonds and a double bond connected to the central atom, as in formaldehyde, CH 2 O, and ethene, H 2 CCH 2.
E36 ews bypass
Oct 17, 2012 · This central atom is sp3 hybridized. Methane (CH 4) is an example of a molecule with sp3 hybridization with 4 sigma bonds. Trigonal Pyramid Molecular Geometry. This atom has 3 sigma bonds and a lone pair. There are 4 areas of electron density. It is sp 3 hybridized and the predicted bond angle is less than 109.5 .
How to install u verse app on firestick
Once we know how many valence electrons there are in PO4 3- we can distribute them around the central atom with the goal of filling the outer shells of each atom. In the Lewis structure of PO43- there are a total of 32 valence electrons. For the Lewis structure for PO4 3- you should take formal charges into account to find the best Lewis ...
Kini itoju kokoro inu eje
Top Answer. hybridization of P is sp3. Explanation: central atom in PO43- ion is P. there are three P-O bonds and one P=O bond. number of electrons around P = 5 + 3 = 8 that is four electron pairs. Hence. hybridization is sp3.
In molecular BH3 the molecule is planar with bond angles of 120o so the hybridisation of the central boron atom is sp2. In the dimer B2H6 the molecule has two bridging hydrogens. The hybridisation ...
The nucleus of a radioactive atom disintegrates spontaneously and forms an atom of a different element while emitting radiation in the process. Geologists use a sensitive instrument called a mass spectrometer to detect tiny quantities of the isotopes of the parent and progeny atoms.
Beretta tomcat sights
Sep 07, 2013 · The other hybrids use more complicated sums (not just 50:50 combinations) of the orbitals that are combined to yield the appropriate hybrid orbitals. You can determine the hybridization of an atom from the VSEPR geometry about that atom. Notice that the sum of the exponents is the same as the number of groups about the atom. In this hybridization activity, students fill in the hybridization of chemical compounds given their geometry, shape, and bonds. Students draw the Lewis symbol for elements and determine the number of bonds and lone pairs. The paintings are being filmed for a major Channel 4 series to be screened in December, Jungle Mystery: Lost Kingdoms of the Amazon. These animals were all seen and painted by some of the very first humans ever to reach the Amazon. Their pictures give a glimpse into a lost, ancient civilisation.Know answer of objective question : The sp3d2 hybridization of central atom of a molecule would lead to?. Answer this multiple choice objective question and get explanation and result.It is provided by OnlineTyari in English
Electronic Geometry, Molecular Shape, and Hybridization. Keyword-suggest-tool.com Electronic Geometry, Molecular Shape, and Hybridization Page 1 The Valence Shell Electron Pair Repulsion Model (VSEPR Model) The guiding principle: Bonded atoms and unshared pairs of electrons about a central atom are as far from one another as possible. This hybridization pattern can also explain the resonance structure of the SO3 molecule that results in all three S-O bonds being of equal length and strength. The theory here is that the S atom's unhybridized p orbital forms a "delocalized pi system" with the oxygen atoms p orbitals giving a bond...Hence the hybridization of the central atom Xe is sp3d. The polarity of any given molecule depends on the molecular geometry and the hybridization of the compound. In XeF2 molecule, two fluorine atoms are arranged symmetrically on the outside with the central atom Xenon in the middle.What is the hybridization of the central atom in each of the following? (a) BeH 2 (b) SF 6 (c) \(\ce{PO4^3-}\) (d) PCl 5. A molecule with the formula AB 3 could have one of four different shapes. Give the shape and the hybridization of the central A atom for each. View Answer. Indicate the hybridization of the central atom in (a) BCl3 (b) AlCl4- (c) CS2 (d) GeH4. Shown here are three pairs of hybrid orbitals, with each set at a characteristic angle. For each pair, determine the type of hybridization, if any that could lead to hybrid orbitals at the specified angle.
# of atoms bonded tocentral atom # lone pairs on central atom trigonal bipyramidal distorted tetrahedron Arrangement ofelectron pairs Molecular Geometry Class VSEPR trigonal bipyramidal trigonal Hybridization - mixing of two or more atomic orbitals to form a new set of hybrid orbitals. •Apr 05, 2020 · The A represents the central atom, the X represents the number of atoms bonded to A and E represents the number of lone electron pairs surrounding the central atom. The sum of X and E is the steric number. The central nitrogen atom in nitrate has three X ligands due to the three bonded oxygen atoms. Since organic chemistry and biochemistry rely on the carbon atom, that's the element that we put our attention on first. The diversity of carbon to make complex molecules is only possible because of the hybridization that its electrons undergo. On the left are 3 carbon atoms with their electrons in their ground state (lowest energy level). The hybridization of the S atom is sp hybridized. It has two orbitals 180 degrees apart. Central atom phosphorus in PCl3 is sp3 hybridised and one of sp3 hybridised orbital is occupied by a lone pair of electrone so it geomatery is tetrahedral: pin. ... (102). Просмотры: 43 тыс.May 09, 2019 · The hybridization of orbitals on the central atom in a molecule is sp. … The hybridization of the central atom in the XeF4 … (GGG:1) ensures that the hybridization is as a true set of … 0 for BR3 0 for CR4, 1 for CR3 … Atoms are the fundamental units of chemistry as each of the chemical elements comprises one distinctive type of atom. An atom consists of a positively The most convenient presentation of the elements is in the periodic table, which groups elements with similar chemical properties together.
Apr 06, 2007 · Since it's a symmetric atom with three substituents, the geometry around the central atom will be trigonal planar, and the hybridization will be sp2. Each of nitrogen's three sp2 orbitals overlaps... The hybridizations of the central atom in PO(OH)₃ is sp³. Explanation: The PO(OH)₃ has the central atom P making one double bond with an oxygen atom and three single bonds with the -OH. The ligands are arranged in a tetrahedral geometry and the hybridization of the P atom is sp³. The charge is distributed asymetricaly and the molecule is ... 21) Of the following, only _____ has sp2 hybridization of the central atom. A) ICl 3 B) I 3-C) PF 5 D) CO 3 2-E) PH 3 22) Of the following, _____ is a correct statement of Boyle's law. A) n/P = constant B) P/V = constant C) V/T = constant D) PV = constant E) V/P = constant sp 2 Hybridization in Ethene and the Formation of a Double Bond. Ethene ... When we know the molecular geometry, we can use the concept of hybridization to describe the electronic orbitals used by the central atom in bonding ; Steps in predicting the hybrid orbitals used by an atom in bonding: 1. Draw the Lewis structure . 2. Determine the electron pair geometry using the VSEPR model . 3. Carbon - sp 2 hybridization. A carbon atom bound to three atoms (two single bonds, one double bond) is sp 2 hybridized and forms a flat trigonal or triangular arrangement with 120° angles between bonds. Notice that acetic acid contains one sp 2 carbon atom and one sp 3 carbon atom.
Congratulations! X Well begun is half done. You have joined No matter what your level. You can score higher. Check your inbox for more details.
Sep 07, 2013 · The other hybrids use more complicated sums (not just 50:50 combinations) of the orbitals that are combined to yield the appropriate hybrid orbitals. You can determine the hybridization of an atom from the VSEPR geometry about that atom. Notice that the sum of the exponents is the same as the number of groups about the atom. Apr 12, 2009 · I'm not gonna draw the Lewis structures but I'll give you a key. Draw them yourself and count the total number of groups, including paired and unpaired electrons, around the central atom. A single bond, double bond, triple bond, or free pair are all one group. The number of groups is matched with the hybridization below: 2 groups -- sp. 3 ...
Identify the hybridization of the central atom for both ClF5 and PO4-3 You give two answers here. For example if the hybridization of the central atom for the first molecule or ion is sp3, and on the second is sp3d you enter: sp3 sp3d NOTE the SINGLE SPACE between the answers. Enter s first, e.g. sp3d NOT dsp3
Which of the following species have the same molecular geometry: PO43−, SF4, PF5, and XeF4? ... What is the hybridization of the central atom in SO2? sp2. I was wondering why the compound SO4^2- has three Lewis structures whereas PO4^3- has only two. In the book, one of the structures for SO4^2- has 12 electrons attached to the central S-atom. For the Lewis structures for PO4^3-, the most that the P-atom holds is ten electrons. May 26, 2016 · 3> Give the remaining electrons to the central atom (sulphur here gets 2 additional electrons and has 3 "things" around it so total of 4 "things" around it. Thus sp3. This is not lengthy process at all especially with some practice; although it appears so here During the combination of Iodine atoms, the central atom gains a negative charge whose value will be 1. I− ion is the donor and I2 molecule is the acceptor. Electrons are mostly accommodated in the empty d orbitals. Know answer of objective question : The sp3d2 hybridization of central atom of a molecule would lead to?. Answer this multiple choice objective question and get explanation and result.It is provided by OnlineTyari in English
Hybridization Involving d-Orbitals As we discussed earlier, some 3rd row and larger elements can accommodate more than eight electrons around the central atom. These atoms will also be hybridized and have very specific arrangements of the attached groups in space. The two types of hybridization involved with d orbitals are sp3d and sp3d2. Aug 11, 2016 · Both water and diethyl ether have the central atom O in s {{p}^{3}} hybrid state with two lone pairs of electrons on O. But due to the greater repulsions between two ethyl groups in diethyl ether than between two H-atoms in water result in greater bond angle (110°) in diethyl ether than 104.5° in that of water . Science of the Total Environment.What is the hybridization of the central atom in each of the following? (a) BeH 2 (b) SF 6 (c) \(\ce{PO4^3-}\) (d) PCl 5. A molecule with the formula AB 3 could have one of four different shapes. Give the shape and the hybridization of the central A atom for each. Predict the geometry around the central atom in PO4^3-Tetrahedral. ... The hybridization of the central nitrogen atom in the molecule N2O is. sp.
Congratulations! X Well begun is half done. You have joined No matter what your level. You can score higher. Check your inbox for more details. A. ICl5:Hybridization of iodine atom is sp3d2.Considering that iodine has one pair of unshared electrons and it is bonded to 5 chlorine atoms, the most stable configuration, minimizing the repulsion between the pair of electrons and the electrons from the chemical bonds is that ...
Nov 27, 2011 · 1.)hybridization in BCl3 B(5) = 1s22s22p1 in excited state = 1s22s12px12pz1 hence it has sp2 hybridization. 2.)AlCl4 Al(13)=1s22s22p63s23p1 AlCl4, tetrehedral geometry is present; which implies sp3 hybridization. 3.)CS2 C(6)= 1s22s22p2 The CS2 molecule is linear and the carbon atom is bonded to each sulphur by a double bond. In the following questions a statement of Assertion (A) followed by a statement of Reason (R) is given. Choose the correct option out of the choices given below each question. Reason (R) : This is because nitrogen atom has one lone pair and oxygen atom has two lone pairs.Which of the following statement(s) concerning o and TT bonds is/are correct? a. Sigma bonds are formed from unhybridized s orbitals. b. Pi bonds are formed from unhybridized p orbitals. c. A pi bond has an electron distribution above and below the bond axis. d...11 Subscripts and Coefficients Give Different Information Subscripts tell the number of atoms of each element in a molecule Coefficients tell the number of molecules. 12 Rules for balancing equations Identify the names of the reactants and the products, and write a word equation.Atomic structure - AQA. Atoms consist of a nucleus containing protons and neutrons, surrounded by electrons in shells. The numbers of subatomic particles in an atom can be calculated from its atomic number and mass number.
Know answer of objective question : The sp3d2 hybridization of central atom of a molecule would lead to?. Answer this multiple choice objective question and get explanation and result.It is provided by OnlineTyari in English Hundreds of Britons have fled quarantine in the Swiss ski resort of Verbier, with the country's health minister attributing the exodus to an "impossible situation" where authorities moved at short-notice to contain a new variant of the coronavirus.are various meth- tive atom than the central atom due to contraction in the ods which are generally used to find the hybridized state of size of 'd' orbitals. an atom and these methods are time consuming.5 Recently, (4) After hybridization process we get the same PO43- 5 0 - 3 4 sp3 10.Click hereto get an answer to your question Explain hybridization of central atom in : PO4^3 -. In the given molecule, The P atom have 5 valence electrons. The hybridization can be determined by the number of sigma bonds and the number of lone pair.My query is concerning the hybridization of P in Phosphate (PO43-) I'd like to draw the Lewis dot structure here, but I'm not sure if that's possible. So I drew up some things on my whiteboard, and took a few pictures, there's all the info you need. Look at the left image first, thank you...
M365 tools premium apk
The numbers of ligands and central atoms are indicated by the appropriate numerical prefixes (see of IC12- Ion This ion has symmetrical linear shape which results from sp d hybridisation of I-atom A third approach is to calculate the height of the central atom above a plane defined by the other three...What kind of hybrid orbitals are utilized by the carbon atom in CF 4 molecules? (a) sp (b) sp 2 (c) sp 3 (d) sp 3 d (e) sp 3 d 2. 8. A neutral molecule having the general formula AB 3 has two unshared pair of electrons on A. What is the hybridization of A? (a) sp (b) sp 2 (c) sp 3 (d) sp 3 d (e) sp 3 d 2. 9. What hybridization is predicted for ... E.g. Boron atom gets negative charge when it accepts a lone pair from hydride ion, H-in borohydride ion, BH 4-STEP-4: Calculate the steric number: Steric number = no. of σ-bonds + no. of lone pairs . STEP-5: Assign hybridization and shape of molecule . Now, based on the steric number, it is possible to get the type of hybridization of the atom. May 28, 2020 · Tetrahedral molecules array four atoms around a central atom, every atom oriented 109.5° from the others. Tetrahedral structure is also found in the phosphate ion, PO43-, sulfate ion, SO42 -, and perchlorate ion, ClO4-. In PO4^3- ion, the formal charge on each oxygen atom and P - O bond order respectively asked Dec 19, 2018 in Chemical Bonding and Molecular Structure by pinky ( 74.2k points) chemical bonding
Openvpn connecting to management interface failed windows
Click hereto get an answer to your question Explain hybridization of central atom in : PO4^3 -. In the given molecule, The P atom have 5 valence electrons. The hybridization can be determined by the number of sigma bonds and the number of lone pair.2 days ago · Nh2- Hybridization. The hybridization of a molecule can be determined by a formula. Formulae for hybridization = 1/2(V + M -C +A) M = no of monoatomic atoms connected with the central atom. V = valence electron of central atom. A = anionic charge. C =cationic charge. And, as we know that NH2- is an anion, therefore values are as below: V= 5, M ...
Science of the Total Environment.
Hybridization - Nitrogen, Oxygen, and Sulfur. Nitrogen - sp 3 hybridization. The nitrogen atom also hybridizes in the sp 2 arrangement, but differs from carbon in that there is a "lone pair" of electron left on the nitrogen that does not participate in the bonding. Congratulations! X Well begun is half done. You have joined No matter what your level. You can score higher. Check your inbox for more details. What is the hybridisation state of the central atom in the conjugate base of NH4+ ion? 27. In which of the following molecule, the central atom undergoes sp3d hybridisation with three lone pairs of electrons. 43. The hybrid orbital having an equal amount of s and p - characters is.In molecular BH3 the molecule is planar with bond angles of 120o so the hybridisation of the central boron atom is sp2. In the dimer B2H6 the molecule has two bridging hydrogens. The hybridisation ...
Click hereto get an answer to your question Explain hybridization of central atom in : PO4^3 -. In the given molecule, The P atom have 5 valence electrons. The hybridization can be determined by the number of sigma bonds and the number of lone pair.Dec 29, 2009 · Step 1 Add together the number of monovalent atoms surrounding the central atom and the number of valence electrons surrounding the central atom. Step 2 Add the units of cationic charge to the sum...
The hybridization of the lead atom in PbCl4 is...? biochemistry. Which statements about hydrogen bonds are correct? 1) Hydrogen bonds are the interaction between a hydrogen atom Given that S is the central atom, draw a Lewis structure of OSF4 in which the formal charges of all atoms are zero.10. (a) Draw a Lewis structure for SiH4 and predict its molecular geometry. (b) What is the hybridization of the central atom Si in SiH4? (c) What must happen to a ground state Si atom in order for hybridization to occur? (d) Sketch the four bonds that occur between the hybrid orbitals on the Si central atom and the four H 1s orbitals.
Hybridization - Nitrogen, Oxygen, and Sulfur. Nitrogen - sp 3 hybridization. The nitrogen atom also hybridizes in the sp 2 arrangement, but differs from carbon in that there is a "lone pair" of electron left on the nitrogen that does not participate in the bonding. Hno3 Hybridization
Electronic Geometry, Molecular Shape, and Hybridization. Keyword-suggest-tool.com Electronic Geometry, Molecular Shape, and Hybridization Page 1 The Valence Shell Electron Pair Repulsion Model (VSEPR Model) The guiding principle: Bonded atoms and unshared pairs of electrons about a central atom are as far from one another as possible.
I'm learning how to apply the VSEPR theory to Lewis structures and in my homework, I'm being asked to provide the hybridization of the central atom in each Lewis I've drawn out the Lewis structure for all the required compounds and figured out the arrangements of the electron regions, and figured out...The molecular geometry of NO 2 F is trigonal planar with asymmetric charge distribution on the central atom. Therefore NO 2 F is polar. Nitryl Fluoride on Wikipedia. Return to the molecular geometries: Molecular Geometry. Return to the homepage: www.tutor-pages.com. Get homework help with molecular geometry: www.tutor-homework.com.
NOTE: If more than one bond angle is possible, separate each with a space. For example, trigonal bipyramid geometry can lead to 3 different bond angles. Octahedral geometry can lead to 2.A. What is the hybridization of the central atom in ICl5 ?Hybridization =What are the approximate bond angles in this substance ? Bond angles = °B. What is ... Identify the hybridization of the central atom in each of the following molecules and ions that contain multiple bonds: ClNO (N is the central atom) CS 2 Cl 2 CO (C is the central atom) Cl 2 SO (S is the central atom) SO 2 F 2 (S is the central atom) XeO 2 F 2 (Xe is the central atom) [latex]{\text{ClOF}}_{2}{}^{+}[/latex] (Cl is the central atom) CPK structures represent the atoms as spheres, where the radius of the sphere is equal to the van der Waals radius of the atom; these structures give an To download the pdb files (refers to the protein databank method of saving atomic coordinates for a molecule) for viewing and rotating the ions listed...
Any central atom surrounded by three regions of electron density will exhibit sp 2 hybridization. This includes molecules with a lone pair on the central atom, such as ClNO (Figure \(\PageIndex{9}\)), or molecules with two single bonds and a double bond connected to the central atom, as in formaldehyde, CH 2 O, and ethene, H 2 CCH 2. Las Trojas Cantina. HOME; LOCATIONS. ATHENS; BIRMINGHAM; FAYETTEVILLE; FLORENCE; steric number and hybridization Atoms are the fundamental units of chemistry as each of the chemical elements comprises one distinctive type of atom. An atom consists of a positively The most convenient presentation of the elements is in the periodic table, which groups elements with similar chemical properties together.
Nov 27, 2011 · 1.)hybridization in BCl3 B(5) = 1s22s22p1 in excited state = 1s22s12px12pz1 hence it has sp2 hybridization. 2.)AlCl4 Al(13)=1s22s22p63s23p1 AlCl4, tetrehedral geometry is present; which implies sp3 hybridization. 3.)CS2 C(6)= 1s22s22p2 The CS2 molecule is linear and the carbon atom is bonded to each sulphur by a double bond.
Nitrogen Dioxide (NO 2) involves an sp 2 hybridization type. The simple way to determine the hybridization of NO 2 is by counting the bonds and lone electron pairs around the nitrogen atom and by drawing the Lewis structure. We will also find that in nitrogen dioxide, there are two sigma bonds and one lone electron pair. hybridization of the central nitrogen atom is _____. a. sp b. sp 2 c. sp 3 d. sp 3d e. sp d2 28. The hybridization of orbitals on the central atom in a molecule is sp. The electron-domain geometry around this central atom is _____. a. octahedral b. linear c. trigonal planar d. trigonal bipyramidal e. tetrahedral 29.
Onan p218 spark plug
12. What is the hybridization of the central atom in each of the following? (a) BeH2 (b) SF6 (c) PO43- (d) PC15 If the symbol X represents a central atom, Y represents outer atoms, and Z represents lone pairs on the central atom, the structure. Classify these structures by the hybridization of the central atom. sp sp2 sp3 sp3d sp3d2 XY2, XY3 , XY4, XY5, XY2Z,XY6,XY2Z2,XY2Z3,XY3Z2,XY4Z,XY4Z2,XYZ3XY5Z Jan 24, 2018 · Xenon tetroxide can be prepared from barium perxenate on treatment with anhydrous sulphuric acid. Ba2XeO6 + 2H2SO4 →XeO4 + 2BaSO4 +2H2O Hybridisation in XeO4 * Here the 4 Oxygen atoms form 4 sigma bonds with the 4 sp3 hybridized orbitals of xenon.... May 10, 2018 · The hybridization of the chlorine atom in ##ClF_5## is ##sp^3d^2##. Chlorine pentafluoride ##ClF_5## has a total number of 42 : 7 from the chlorine atom and 7 from each of the five fluorine atoms. The of the chlorine pentafluoride molecule looks like this The five fluorine atoms will be bonded to the central chlorine atom via single bonds. Any central atom surrounded by three regions of electron density will exhibit sp 2 hybridization. This includes molecules with a lone pair on the central atom, such as ClNO (Figure 8.14), or molecules with two single bonds and a double bond connected to the central atom, as in formaldehyde, CH 2 O, and ethene, H 2 CCH 2. 25. The geometry of the hybrid orbitals about a central atom with sp3d2 hybridization is: A) linear D) trigonal bipyramidal B) trigonal planar E) octahedral C) tetrahedral Ans: E 26. N,N-diethyl-m-tolumide (DEET) is the active ingredient in many mosquito repellents. What is the
Any central atom surrounded by three regions of electron density will exhibit sp 2 hybridization. This includes molecules with a lone pair on the central atom, such as ClNO (Figure 9), or molecules with two single bonds and a double bond connected to the central atom, as in formaldehyde, CH 2 O, and ethene, H 2 CCH 2. Original lyrics of Toot Toot Chugga Chugga Big Red Car song by The Wiggles. Wiggles: Yeah, yeah and a wiggly yeah! Wiggles Fruit Salad Lyrics Yummy Yummy Track List. is the second episode of The Wiggles' World!. Remember that we use the hybridization theory for central atoms BECAUSE we need to!!! The bond angles and bond energy that we predict for using the different atomic orbitals of the central atom cannot be explained unless we believe that these valence orbital mix or hybridize!!!NOTE: If more than one bond angle is possible, separate each with a space. For example, trigonal bipyramid geometry can lead to 3 different bond angles. Octahedral geometry can lead to 2.A. What is the hybridization of the central atom in ICl5 ?Hybridization =What are the approximate bond angles in this substance ? Bond angles = °B. What is ...
Dmv handicap placard form california
This central metal atom is significantly smaller than the multiple oxygen atoms in the anion. 43 7. 4 H 4 GeO 4 5. 41 1 The table above shows that the p Kb values of an decrease by 4-5 units for each Effect of ElectronegativityAs the electro negativity of the nonmetal atom decreases the basicity of the...In the Fall 2012 quiz for preparation for quiz 2 in the workbook, #2 asks for the hybridization of the central atom in O3. The answer is sp2 and I am confused as to how the central atom, which would be O, could have an sp2 hybridization. Identify the hybridization of the central atom in each of the following molecules and ions that contain multiple bonds: ClNO (N is the central atom) CS 2 Cl 2 CO (C is the central atom) Cl 2 SO (S is the central atom) SO 2 F 2 (S is the central atom) XeO 2 F 2 (Xe is the central atom) [latex]{\text{ClOF}}_{2}{}^{+}[/latex] (Cl is the central atom) The paintings are being filmed for a major Channel 4 series to be screened in December, Jungle Mystery: Lost Kingdoms of the Amazon. These animals were all seen and painted by some of the very first humans ever to reach the Amazon. Their pictures give a glimpse into a lost, ancient civilisation.
Amc 258 aluminum valve cover
Reference no: EM13311818 . Choose the selection which correctly identifies the sets of hybrid orbitals used by central atoms to form bonds in both of the molecules and/or ions whose formulas are given below: IF4- and BCl4- a) The central atom of IF4- uses sp3d2 hybridization and the central atom of BCl4- uses sp3d hybridization. My query is concerning the hybridization of P in Phosphate (PO43-) I'd like to draw the Lewis dot structure here, but I'm not sure if that's possible. So I drew up some things on my whiteboard, and took a few pictures, there's all the info you need. Look at the left image first, thank you... Give the IUPAC name for each of the following.Because hydrogen must be a terminal atom, nitrogen is the central atom. First find the total number of valence electrons in both atoms: 5+3=8 atoms. Next, determine the number of bonding pairs by dividing the total number of electrons by 2 so 8 divided by two is 4 pairs. Cite this chapter as: Turova N. (2011) Valence State (Hybridization Types) and Coordination Polyhedra of Central Atoms. In: Inorganic Chemistry in Tables. Springer, Berlin, Heidelberg.What is the hybridization of the central atom in each of the following? (a) BeH 2 (b) SF 6 (c) \(\ce{PO4^3-}\) (d) PCl 5. A molecule with the formula AB 3 could have one of four different shapes. Give the shape and the hybridization of the central A atom for each. The word 'atom' actually comes from Ancient Greek and roughly translates as 'indivisible'. The Ancient Greek theory has been credited to several different scholars, but is most often Dalton's outlining of atomic theory was a start, but it still didn't really tell us much about the nature of atoms themselves.21) Of the following, only _____ has sp2 hybridization of the central atom. A) ICl 3 B) I 3-C) PF 5 D) CO 3 2-E) PH 3 22) Of the following, _____ is a correct statement of Boyle's law. A) n/P = constant B) P/V = constant C) V/T = constant D) PV = constant E) V/P = constant sp 2 Hybridization in Ethene and the Formation of a Double Bond. Ethene ... So, the central atom of NOCl is N and its hybridisation is $s{p^2}$. -${N_2}O$: Let us first see the hybridisation of all the elements individually. In this both the N atoms have sp hybridisation because the terminal N forms only 1 triple bond with another N atom and has a lone pair of electron...
Atlassian new office building
84. Hybridization • Some atoms hybridize their orbitals to • maximize bonding • more bonds = more full orbitals = more stability Hybridizing is mixing different types of orbitals in the valence shell to make a new set of degenerate orbitals sp, sp2, sp3, sp3d, sp3d2 • Same type of atom can have different...What is the hybridization of the central atom in each of the following: (a) BeH2 sp (b) SF6 sp^3d^2 (c) PO4 3− sp^3ﰂﰂ (d) PCl5 sp^3d 28. Describe the molecular geometry and hybridization of the N, P, or S atoms in each of the following compounds. 11. Write the Lewis structures that obey the octet rule for each of the following molecules and ions. (In each case the first atom listed is the central atom.) a) b) c) POCl3 NF3 ClO2-SO42- SO32- SCl2. XeO4 PO33- PCl2. PO43- ClO3- The paintings are being filmed for a major Channel 4 series to be screened in December, Jungle Mystery: Lost Kingdoms of the Amazon. These animals were all seen and painted by some of the very first humans ever to reach the Amazon. Their pictures give a glimpse into a lost, ancient civilisation.are various meth- tive atom than the central atom due to contraction in the ods which are generally used to find the hybridized state of size of 'd' orbitals. an atom and these methods are time consuming.5 Recently, (4) After hybridization process we get the same PO43- 5 0 - 3 4 sp3 10.
Peel and stick wood backsplash
Apr 06, 2007 · Since it's a symmetric atom with three substituents, the geometry around the central atom will be trigonal planar, and the hybridization will be sp2. Each of nitrogen's three sp2 orbitals overlaps... Hybridization of ch3oh Hybridization of ch3oh ; Nov 03, 2017 · Sulphur dioxide is SO2 The central sulphur atom is bonded to two oxygen atoms. The structure is like this: O=S=O The sulphur atom forms one sigma and one pi bond with each oxygen atom and has one lone pair. The hybridization of a central atom can be determined from Lewis structures. In a Lewis structure, if there are say two groups of electrons about a central atom, it means that two hybrid orbitals would be required to hold them. Whenever two hybrid orbitals are formed, the hybridization on the central atom is always sp. Similarly, if there are ... Electronic Geometry, Molecular Shape, and Hybridization. Keyword-suggest-tool.com Electronic Geometry, Molecular Shape, and Hybridization Page 1 The Valence Shell Electron Pair Repulsion Model (VSEPR Model) The guiding principle: Bonded atoms and unshared pairs of electrons about a central atom are as far from one another as possible. Las Trojas Cantina. HOME; LOCATIONS. ATHENS; BIRMINGHAM; FAYETTEVILLE; FLORENCE; steric number and hybridization
Azure cli list resource groups
See full list on geometryofmolecules.com Problem: Give the expected hybridization of the central atom for the molecules or ions. a. POCl3, SO42-, XeO4, PO43-, ClO4- FREE Expert Solution We are asked to give the expected hybridization of the central atom for the molecules or ions. Know answer of objective question : The sp3d2 hybridization of central atom of a molecule would lead to?. Answer this multiple choice objective question and get explanation and result.It is provided by OnlineTyari in English sp3d hybridization (5 hybrid orbitals) In order to have an expanded octet, the central atom of the molecule must have at least three shells of electrons in order to accommodate more than eight electrons in its valence shell. It isn't until we reach the third main shell that the "d" sublevel exists to provide enough orbitals for an expanded octet. :atom: The hackable text editor. Contribute to atom/atom development by creating an account on GitHub.11 Subscripts and Coefficients Give Different Information Subscripts tell the number of atoms of each element in a molecule Coefficients tell the number of molecules. 12 Rules for balancing equations Identify the names of the reactants and the products, and write a word equation.The hybridization of a central atom can be determined from Lewis structures. In a Lewis structure, if there are say two groups of electrons about a central atom, it means that two hybrid orbitals would be required to hold them. Whenever two hybrid orbitals are formed, the hybridization on the central atom is always sp. Similarly, if there are ... The molecules in which the central atom is linked to 3 atoms and is sp2 hybridized have a triangular planar shape. Examples of sp 2 Hybridization. All the compounds of Boron i.e. BF 3, BH 3; All the compounds of carbon containing a carbon-carbon double bond, Ethylene (C 2 H 4) sp 3 Hybridization
Normpdf matlab
The central atom of POCl3 that is P has sp3 hybridization. I would also turn your attention towards following learn and practice resources which shall be helpful to you.Hybridization is defined as the intermix- On the basis of above characteristics, we formulate the fol- ing of dissimilar orbitals of the same atom but having slightly lowing facts for a clear understanding and prediction of hy- different energies to form same number of hybrid orbitals of bridized state of an atom in a polyatomic molecule or ion ... May 10, 2018 · The hybridization of the chlorine atom in ##ClF_5## is ##sp^3d^2##. Chlorine pentafluoride ##ClF_5## has a total number of 42 : 7 from the chlorine atom and 7 from each of the five fluorine atoms. The of the chlorine pentafluoride molecule looks like this The five fluorine atoms will be bonded to the central chlorine atom via single bonds. The numbers of ligands and central atoms are indicated by the appropriate numerical prefixes (see of IC12- Ion This ion has symmetrical linear shape which results from sp d hybridisation of I-atom A third approach is to calculate the height of the central atom above a plane defined by the other three...CO 3 2-the central atom C has 3 bond pairs (2 sigle bonds and 1 double bond. π bond does not count in molecular shape), the shape is trigonal planar and habridization is sp 2. SF 5-1 the central atom S has 4 bond pairs and one lone pair, the shape is square pyramidal and habridization is sp 3 d 2 What is the hybridization of the boron atom in BF 4-?. Answer The nucleus of a radioactive atom disintegrates spontaneously and forms an atom of a different element while emitting radiation in the process. Geologists use a sensitive instrument called a mass spectrometer to detect tiny quantities of the isotopes of the parent and progeny atoms.In the following questions a statement of Assertion (A) followed by a statement of Reason (R) is given. Choose the correct option out of the choices given below each question. Reason (R) : This is because nitrogen atom has one lone pair and oxygen atom has two lone pairs.Teletype for Atom makes collaborating on code just as easy as it is to code alone, right from your editor. Share your workspace and edit code together in real time. To start collaborating, open Teletype in Atom and install the package.
Corelle canada outlet store locations
طريقة معرفة شحنة وتهجين الذرة المركزية في الكومبلكس. This is done by forming hybrid orbitals from s, p, and now d orbitals. For trigonal bipyramidal the central atom is bonded through dsp 3 hybrid orbitals. In the case of molecules with an octahedral arrangement of electron pairs, another d-orbital is used and the hybridization of the central atom is d 2 sp 3. In summary hybridization of h3o+ is sp3 and it's geometry is tetrahedral. But as there are three atoms around the central oxygen atom, the fourth position will be occupied by lone pair of electrons.The repulsion between lone and bond pair of electrons is more and hence the molecular geometry will be trigonal...Las Trojas Cantina. HOME; LOCATIONS. ATHENS; BIRMINGHAM; FAYETTEVILLE; FLORENCE; steric number and hybridization In the Fall 2012 quiz for preparation for quiz 2 in the workbook, #2 asks for the hybridization of the central atom in O3. The answer is sp2 and I am confused as to how the central atom, which would be O, could have an sp2 hybridization.The hybridization of B r F 5 can be determined by the number of lone pairs around iodine and the number of sigma bonds formed between Br and F. Since the central atom iodine has 7 valence electrons out of which 5 electrons form 5 sigma bonds with F atoms and and 2 electrons form 1 lone pair making the stearic number 6 which implies the hybridization of the central atom is s p 3 d 2 (involving ... N = number of monovalent atoms bonded to central atom. C = charge of cation. A = charge of anion. Now we have to determine the hybridization of the molecule. Bond pair electrons = 4. Lone pair electrons = 5 - 4 = 1. The number of electrons are 5 that means the hybridization will be and the electronic geometry of the molecule will be trigonal ... The hybridization of B r F 5 can be determined by the number of lone pairs around iodine and the number of sigma bonds formed between Br and F. Since the central atom iodine has 7 valence electrons out of which 5 electrons form 5 sigma bonds with F atoms and and 2 electrons form 1 lone pair making the stearic number 6 which implies the hybridization of the central atom is s p 3 d 2 (involving ... An atom is the smallest unit of ordinary matter that forms a chemical element. Every solid, liquid, gas, and plasma is composed of neutral or ionized atoms. Atoms are extremely small, typically around 100 picometers across.Any central atom surrounded by three regions of electron density will exhibit sp 2 hybridization. This includes molecules with a lone pair on the central atom, such as ClNO (Figure \(\PageIndex{9}\)), or molecules with two single bonds and a double bond connected to the central atom, as in formaldehyde, CH 2 O, and ethene, H 2 CCH 2.
Indoor play areas in cleveland
What if there's a central atom with four electron regions around it? In this case, the atom must be hybridizing all of its s and p atomic orbitals to Here is the summary of hybridization theory. You can look at the number of electron regions around an atom remembering to count a lone pair as one...The hybridizations of the central atom in PO(OH)₃ is sp³. Explanation: The PO(OH)₃ has the central atom P making one double bond with an oxygen atom and three single bonds with the -OH. The ligands are arranged in a tetrahedral geometry and the hybridization of the P atom is sp³. The charge is distributed asymetricaly and the molecule is ... The hybridization process involves taking atomic orbitals and mixing these into hybrid orbitals. These have a different shape, energy and other The sp3d2 hybridization concept involves hybridizing three p, one s and two d-orbitals. This results in the formation of six different sp3d2 orbitals and these...
Communication rfp
are various meth- tive atom than the central atom due to contraction in the ods which are generally used to find the hybridized state of size of 'd' orbitals. an atom and these methods are time consuming.5 Recently, (4) After hybridization process we get the same PO43- 5 0 - 3 4 sp3 10.In PO4^3- ion, the formal charge on each oxygen atom and P - O bond order respectively asked Dec 19, 2018 in Chemical Bonding and Molecular Structure by pinky ( 74.2k points) chemical bonding Predict the geometry around the central atom in PO4^3-Tetrahedral. ... The hybridization of the central nitrogen atom in the molecule N2O is. sp.
Audi s4 b8 headers
Mar 14, 2008 · XeF3+ and PO4 -3 You give two answers here. For example if the hybridization of the central atom for the first molecule or ion is sp3, and on the second is sp3d you enter: sp3 sp3d NOTE the SINGLE SPACE between the answers. Enter s first, e.g. sp3d NOT dsp3 Start studying Hybridization (of central atom). Learn vocabulary, terms and more with flashcards, games and other study tools. Other sets by this creator. Personal Finance Final Exam Part 6. 43 terms.The hybridization of the central atom will change when ` 2:43. Which of the following d-orbital participates in the hybridization of central atom in the molecule of `IF_(7)` ?
Dollar99 move in specials tucson az
Start studying Hybridization (of central atom). Learn vocabulary, terms and more with flashcards, games and other study tools. Other sets by this creator. Personal Finance Final Exam Part 6. 43 terms.
Browser pro library
Since organic chemistry and biochemistry rely on the carbon atom, that's the element that we put our attention on first. The diversity of carbon to make complex molecules is only possible because of the hybridization that its electrons undergo. On the left are 3 carbon atoms with their electrons in their ground state (lowest energy level). Electronic Geometry, Molecular Shape, and Hybridization Page 1 The Valence Shell Electron Pair Repulsion Model (VSEPR Model) The guiding principle: Bonded atoms and unshared pairs of electrons about a central atom are as far from one another as possible. Bonded atoms Nonbonded Pairs Total Electronic Geometry Molecular Shape Bond Angle Hybridization Electronic Geometry, Molecular Shape, and Hybridization. Keyword-suggest-tool.com Electronic Geometry, Molecular Shape, and Hybridization Page 1 The Valence Shell Electron Pair Repulsion Model (VSEPR Model) The guiding principle: Bonded atoms and unshared pairs of electrons about a central atom are as far from one another as possible. ★★★ Correct answer to the question: What is the hybridization of the central atom in the phosphorus pentafluoride (PF3) molecule? - edu-answer.com The molecules in which the central atom is linked to 3 atoms and is sp2 hybridized have a triangular planar shape. Examples of sp 2 Hybridization. All the compounds of Boron i.e. BF 3, BH 3; All the compounds of carbon containing a carbon-carbon double bond, Ethylene (C 2 H 4) sp 3 Hybridization Interpretation: The expected hybridization of the central atom for the given species is to be determined. Concept introduction: The following steps are to be followed to determine the hybridization and the molecular structure of a given compound, The central atom is identified. Its valence electrons are determined.
When to give up on a long distance relationship reddit
Remember that we use the hybridization theory for central atoms BECAUSE we need to!!! The bond angles and bond energy that we predict for using the different atomic orbitals of the central atom cannot be explained unless we believe that these valence orbital mix or hybridize!!!Thus according to the table, the central atom is $s p^{3} d^{2}$ hybridized. c) In $\mathrm{PO}_{4}^{3-}$ , phosphorous atom is So because all of the six bounding Sze wee no issue being the S S P three d too hybridization Okay, foresee P 043 miners. It seems to be a little bit tricky...
…more atoms attached to a central atom than can be accommodated by an octet of electrons. An example is sulfur hexafluoride, SF6, for which writing a Lewis structure with six S―F bonds …the electron pairs of the central atom, disregarding the distinction between bonding pairs and lone pairs.(3) The correct hybridization is the one associated with the highest bond-order, for that atom in any appropriate resonance structure. the hybridization concept is developed by VBT n according to this there is no hybridization in the non central atoms ...
Central atom hybridization? Thread starter UnD3R0aTh. Start date Aug 17, 2013. In addition to the two sigma bonds, two pi bonds are formed with the oxygens, and a lone pair is left on the sulfur, so it is hybridized sp2.1. Use valence-bond theory to predict the hybridization and other properties of these compounds Cmpd CH4 N2 C02 NH3 SF6 NH2- Lewis structure Hybridization of central atom # of o # of bonds bonds Atomic orbits that form the and bonds. Example: (Jsp2-1s Bond order 3 2. Build these compound using moleculav orbital theory and Pi-edict Compound Li2 ... Apr 05, 2020 · The A represents the central atom, the X represents the number of atoms bonded to A and E represents the number of lone electron pairs surrounding the central atom. The sum of X and E is the steric number. The central nitrogen atom in nitrate has three X ligands due to the three bonded oxygen atoms. NUKEMAP and NUKEMAP3D page views, exported from Google Analytics and cleaned up a bit, with a few of the "known" moments of virality indicated. The usage of the site predictably flares up in certain moments of "virality" (for the 70th anniversary of Hiroshima, over 500,000 people used it over two...
Hno3 Hybridization These pairs are still associated with the terminal atom as well as with the central atom. Remember that, in general, carbon, nitrogen, oxygen, and sulfur can form double or triple bonds with the same element or with another element. Space-filling Structure Lewis Structure Ball-and-stick molecular model. NH3 PO4-3 CO2 BH3 Examples
|
CommonCrawl
|
Perl Weekly Challenge 123: Ugly Numbers
You are given an integer $n >= 1.
Write a script to find the $nth element of Ugly Numbers.
Ugly numbers are those number whose prime factors are 2, 3 or 5. For example, the first 10 Ugly Numbers are 1, 2, 3, 4, 5, 6, 8, 9, 10, 12.
Input: $n = 7
Input: $n = 10
The numbers described above are better known as 5-smooth numbers. The 5-smooth numbers are listed as A000079 on the OEIS.
5-smooth numbers are all the numbers of the form \(2^m \, 3^n \, 5^p, 0 \leq m, 0 \leq n, 0 \leq p\). This means that for each 5-smooth number \(n\) which isn't equal to 1, is equal to twice a 5-smooth number, or thrice times a 5-smooth number, or five times a 5-smooth number:
\[ \begin{array}{|r|r|r|r|r|} \hline & & 2 \,\times & 3 \,\times & 5 \,\times \\ \hline 1 & 2^0 \, 3^0 \, 5^0 & & & \\ 2 & 2^1 \, 3^0 \, 5^0 & 1 & & \\ 3 & 2^0 \, 3^1 \, 5^0 & & 1 & \\ 4 & 2^2 \, 3^0 \, 5^0 & 2 & & \\ 5 & 2^0 \, 3^0 \, 5^1 & & & 1 \\ 6 & 2^1 \, 3^1 \, 5^0 & 3 & 2 & \\ 8 & 2^3 \, 3^0 \, 5^0 & 4 & & \\ 9 & 2^0 \, 3^2 \, 5^0 & & 3 & \\ 10 & 2^1 \, 3^0 \, 5^1 & 5 & & 2 \\ 12 & 2^2 \, 3^1 \, 5^0 & 6 & 4 & \\ 15 & 2^0 \, 3^1 \, 5^1 & & 5 & 3 \\ 16 & 2^4 \, 3^0 \, 5^0 & 8 & & \\ 18 & 2^1 \, 3^2 \, 5^0 & 9 & 6 & \\ 20 & 2^2 \, 3^0 \, 5^1 & 10 & & 4 \\ 24 & 2^3 \, 3^1 \, 5^0 & 12 & 8 & \\ 25 & 2^0 \, 3^0 \, 5^2 & & & 5 \\ 27 & 2^0 \, 3^3 \, 5^0 & & 9 & \\ 30 & 2^1 \, 3^1 \, 5^1 & 15 & 10 & 6 \\ \hline \end{array} \]
Now take a look at the last three columns: they are the 5-smooth numbers, in order! This means, we can create the 5-smooth numbers, by taking the 5-smooth numbers, multiplying them with 2, 3 and 5, and merging those lists. (How's that for a recursive definition?)
Using the discussion above, we will create the 5-smooth/ugly numbers one-by-one. We'll keep an array ugly, containing the ugly numbers created so far, and three pointers/indices into the array: next_2, next_3, and next_5.
We will be maintaining the following invariants:
2 * ugly [next_2 - 1] <= N < 2 * ugly [next_2]
where N is the largest (and most recent) generated ugly number. (For the sake of maintaining the invariant, we assume that ugly [-1] equals 0).
We start off with ugly containing one element, 1, and each of next_2, next_3 and next_5 will start off at 0 (or 1 for languages where arrays start at 1).
Then, in a loop, we calculate the next ugly number as the minimum of 2 * ugly [next_2], 3 * ugly [next_3] and 5 * ugly [next_5]. After generating the next ugly number, we will check which of next_2, next_3, and next_5 need to be incremented, and increment those which do. At least one of them needs to be incremented, but it may be all three need to. We never need to increment by more than 1.
Our solution is quite fast. We spend constant time in each iteration, so our running time is \(\mathcal{O}(N)\).
First the initialization:
use List::Util qw [min];
my @ugly = (1);
my $next_2 = 0;
In a loop, which we execute N - 1 times:
Calculating the next ugly number:
push @ugly => min 2 * $ugly [$next_2],
3 * $ugly [$next_3],
5 * $ugly [$next_5];
Updating the pointers:
$next_2 ++ if 2 * $ugly [$next_2] <= $ugly [-1];
We have similar solutions in AWK, Bash, C, Lua, Node.js, Python, R and Ruby.
|
CommonCrawl
|
Jump to: A | B | C | D | E | F | G | H | J | K | L | M | O | P | Q | R | S | T | V | W | X | Y | Z
Abrar, A. A. and Silvers, L. J. ORCID: 0000-0003-0619-6756 (2018). The effect of time-dependent γ-pumping on buoyant magnetic structures. Geophysical and Astrophysical Fluid Dynamics, doi: 10.1080/03091929.2018.1537396
Abu-Nimeh, S. and Chen, T.M. (2010). Proliferation and detection of blog spam. IEEE Security and Privacy, 8(5), doi: 10.1109/MSP.2010.113
Aina, S., Rahulamathavan, Y., Phan, R. C. W. and Chambers, J. A. (2012). Spontaneous expression classification in the encrypted domain. Paper presented at the 9th IMA International Conference on Mathematics in Signal Processing, 17 December - 20 December 2012, Birmingham, UK.
Alessandretti, L. (2018). Individual mobility in context: from high resolution trajectories to social behaviour. (Unpublished Doctoral thesis, City, University of London)
Alessandretti, L., ElBahrawy, A., Aiello, L. M. and Baronchelli, A. ORCID: 0000-0002-0255-0829 (2018). Machine Learning the Cryptocurrency Market. Complexity, 2018, doi: 10.1155/2018/8983590
Alessandretti, L., Lehmann, S. and Baronchelli, A. Individual mobility and social behaviour: Two sides of the same coin. .
Alessandretti, L., Lehmann, S. and Baronchelli, A. ORCID: 0000-0002-0255-0829 (2018). Understanding the interplay between social and spatial behaviour. EPJ Data Science, 7(1), 36.. doi: 10.1140/epjds/s13688-018-0164-6
Alessandretti, L., Sapiezynski, P., Lehmann, S. and Baronchelli, A. (2018). Evidence for a Conserved Quantity in Human Mobility. Nature Human Behaviour, 2, pp. 485-491. doi: 10.1038/s41562-018-0364-x
Alessandretti, L., Sapiezynski, P., Lehmann, S. and Baronchelli, A. (2017). Multi-scale spatio-temporal analysis of human mobility. PLoS One, 12(2), e0171686.. doi: 10.1371/journal.pone.0171686
Alessandretti, L., Sun, K., Baronchelli, A. and Perra, N. (2017). Random walks on activity-driven networks with attractiveness. Physical Review E (PRE), 95(5), 052318.. doi: 10.1103/PhysRevE.95.052318
Alexandre, J. and Bender, C. (2015). Foldy-Wouthuysen transformation for non-Hermitian Hamiltonians. Journal of Physics A: Mathematical and Theoretical, 48(18), p. 185403. doi: 10.1088/1751-8113/48/18/185403
Altland, A., De Martino, A., Egger, R. and Narozhny, B. (2010). Fluctuation relations and rare realizations of transport observables. Physical Review Letters (PRL), 105(17), pp. 170601-170605. doi: 10.1103/PhysRevLett.105.170601
Altland, A., De Martino, A., Egger, R. and Narozhny, B. (2010). Transient fluctuation relations for time-dependent particle transport. Phys. Rev. B 82, 115323 (2010), 82(11), doi: 10.1103/PhysRevB.82.115323
Altman, R., Gray, J., He, Y., Jejjala, V. and Nelson, B. D. (2014). A Calabi-Yau Database: Threefolds Constructed from the Kreuzer-Skarke List. Journal of High Energy Physics, 2015(2), p. 158. doi: 10.1007/JHEP02(2015)158
Altman, R., He, Y. ORCID: 0000-0002-0787-8380, Jejjala, V. and Nelson, B. D. (2018). New large volume Calabi-Yau threefolds. Physical Review D, 97(4), 046003.. doi: 10.1103/PhysRevD.97.046003
Alzahrani, F. M., Sanusi, Y. S., Vogiatzaki, K., Ghoniem, A. F., Habib, M. A. and Mokheimer, E. M. A. (2014). Evaluation of the Accuracy of Selected Syngas Chemical Mechanisms. Journal of Energy Resources Technology, 137(4), 042201. doi: 10.1115/1.4029860
Amato, R., Lacasa, L., Díaz-Guilera, A. and Baronchelli, A. ORCID: 0000-0002-0255-0829 (2018). The dynamics of norm change in the cultural evolution of language. Proceedings of the National Academy of Sciences, 115(33), pp. 8260-8265. doi: 10.1073/pnas.1721059115
Anderson, L. B., Gray, J., Grayson, D., He, Y. and Lukas, A. (2010). Yukawa Couplings in Heterotic Compactification. Communications in Mathematical Physics, 297, pp. 95-127. doi: 10.1007/s00220-010-1033-8
Anderson, L. B., Gray, J., He, Y. and Lukas, A. (2010). Exploring positive monad bundles and a new heterotic standard model. Journal of High Energy Physics, 2010(2), p. 54. doi: 10.1007/JHEP02(2010)054
Anderson, L. B., He, Y. and Lukas, A. (2007). Heterotic compactification, an algorithmic approach. Journal of High Energy Physics, 0707(049), doi: 10.1088/1126-6708/2007/07/049
Anderson, L. B., He, Y. and Lukas, A. (2008). Monad Bundles in Heterotic String Compactifications. Journal of High Energy Physics, 2008(JHEP07), p. 104. doi: 10.1088/1126-6708/2008/07/104
Andrienko, G., Andrienko, N. and Fuchs, G. (2015). Multi-perspective analysis of mobile phone call data records: A visual analytics approach. Paper presented at the Multi-perspective Analysis of Mobile Phone Call Data Records: a Visual Analytics Approach.
Andrienko, N., Andrienko, G. and Rinzivillo, S. (2015). Exploiting spatial abstraction in predictive analytics of vehicle traffic. ISPRS International Journal of Geo-Information, 4(2), pp. 591-606. doi: 10.3390/ijgi4020591
Andrienko, N., Andrienko, G. and Rinzivillo, S. (2016). Leveraging spatial abstraction in traffic analysis and forecasting with visual analytics. Information Systems, 57, pp. 172-194. doi: 10.1016/j.is.2015.08.007
Argasinski, K. and Broom, M. (2012). Ecological theatre and the evolutionary game: how environmental and demographic factors determine payoffs in evolutionary games. Journal of Mathematical Biology, doi: 10.1007/s00285-012-0573-2
Argasinski, K. and Broom, M. (2017). Evolutionary stability under limited population growth: Eco-evolutionary feedbacks and replicator dynamics. Ecological Complexity, doi: 10.1016/j.ecocom.2017.04.002
Argasinski, K. and Broom, M. ORCID: 0000-0002-1698-5495 (2018). Interaction rates, vital rates, background fitness and replicator dynamics: how to embed evolutionary game structure into realistic population dynamics. Theory in Biosciences - Theorie in den Biowissenschaften, 137(1), pp. 33-50. doi: 10.1007/s12064-017-0257-y
Argasinski, K. and Broom, M. (2013). The nest site lottery: How selectively neutral density dependent growth suppression induces frequency dependent selection. Theoretical Population Biology, 90, pp. 82-90. doi: 10.1016/j.tpb.2013.09.011
Arutyunov, G., Pankiewicz, A. and Stefanski, B. (2001). Boundary Superstring Field Theory Annulus Partition Function in the Presence of Tachyons. Journal of High Energy Physics, 2001(JHEP06), 049. doi: 10.1088/1126-6708/2001/06/049
Ashmore, A. and He, Y. (2011). Calabi-Yau Three-folds: Poincare Polynomials and Fractals. In: Strings, Gauge Fields, and the Geometry Behind: The Legacy of Maximilian Kreuzer. (pp. 173-186). World Scientific Publishing Company. ISBN 978-981-4412-54-4
Assis, P. E. G. and Fring, A. (2010). Compactons versus solitons. Pramana, 74(6), pp. 857-865. doi: 10.1007/s12043-010-0078-8
Assis, P. E. G. and Fring, A. (2009). From real fields to complex Calogero particles. Journal of Physics A: Mathematical and General, 42(42), doi: 10.1088/1751-8113/42/42/425206
Assis, P. E. G. and Fring, A. (2009). Integrable models from PT-symmetric deformations. Journal of Physics A: Mathematical and Theoretical, 42(10), doi: 10.1088/1751-8113/42/10/105206
Assis, P. E. G. and Fring, A. (2008). Metrics and isospectral partners for the most generic cubic PT-symmetric non-Hermitian Hamiltonian. Journal of Physics A: Mathematical and General, 41(24), doi: 10.1088/1751-8113/41/24/244001
Assis, P. E. G. and Fring, A. (2008). Non-Hermitian Hamiltonians of Lie algebraic type. Journal of Physics A: Mathematical and Theoretical, 42(1), doi: 10.1088/1751-8113/42/1/015203
Assis, P. E. G. and Fring, A. (2008). The quantum brachistochrone problem for non-Hermitian Hamiltonians. Journal of Physics A: Mathematical and Theoretical, 41(24), doi: 10.1088/1751-8113/41/24/244002
Avan, J., Caudrelier, V., Doikou, A. and Kundu, A. (2016). Lagrangian and Hamiltonian structures in an integrable hierarchy and space-time duality. Nuclear Physics B, 902, pp. 415-439. doi: 10.1016/j.nuclphysb.2015.11.024
Babichenko, A., Stefanski, B. and Zarembo, K. (2010). Integrability and the AdS(3)/CFT2 correspondence. Journal of High Energy Physics, 2010(3), doi: 10.1007/JHEP03(2010)058
Babujian, H., Fring, A., Karowski, M. and Zapletal, A. (1998). Exact Form Factors in Integrable Quantum Field Theories: the Sine-Gordon Model. Nuclear Physics B, 538(3), pp. 535-586. doi: 10.1016/S0550-3213(98)00737-8
Bagarello, F. and Fring, A. (2017). From pseudo-bosons to pseudo-Hermiticity via multiple generalized Bogoliubov transformations. International Journal of Modern Physics B, 31(12), p. 1750085. doi: 10.1142/S0217979217500850
Bagarello, F. and Fring, A. (2015). Generalized Bogoliubov transformations versus D-pseudo-bosons. Journal of Mathematical Physics, 56, p. 103508. doi: 10.1063/1.4933242
Bagarello, F. and Fring, A. (2013). A non self-adjoint model on a two dimensional noncommutative space with unbound metric. Physical Review A: Atomic, Molecular and Optical Physics, 88(4), doi: 10.1103/PhysRevA.88.042119
Bagchi, B. and Fring, A. (2009). Comment on "Non-Hermitian Quantum Mechanics with Minimal Length Uncertainty". Symmetry, Integrability and Geometry: Methods and Applications (SIGMA), 5(089), doi: 10.3842/SIGMA.2009.089
Bagchi, B. and Fring, A. (2009). Minimal length in quantum mechanics and non-Hermitian Hamiltonian systems. Physics Letters A, 373(47), pp. 4307-4310. doi: 10.1016/j.physleta.2009.09.054
Bagchi, B. and Fring, A. (2008). PT-symmetric extensions of the supersymmetric Korteweg-de Vries equation. Journal of Physics A: Mathematical and General, 41(39), doi: 10.1088/1751-8113/41/39/392004
Bagchi, B. and Fring, A. ORCID: 0000-0002-7896-7161 (2019). Quantum, noncommutative and MOND corrections to the entropic law of gravitation. International Journal of Modern Physics B, 33(05), doi: 10.1142/s0217979219500188
Baggio, M., Sax, O. O., Sfondrini, A., Stefanski, B. and Torrielli, A. (2017). Protected string spectrum in AdS(3)/CFT2 from worldsheet integrability. Journal of High Energy Physics(4), 91.. doi: 10.1007/JHEP04(2017)091
Balasubramanian, V., Czech, B., He, Y., Larjo, K. and Simon, J. (2008). Typicality, Black Hole Microstates and Superconformal Field Theories. Journal of High Energy Physics, 2008(JHEP03), 008 - 008. doi: 10.1088/1126-6708/2008/03/008
Balasubramanian, V., de Boer, J., Feng, B., He, Y., Huang, M., Jejjala, V. and Naqvi, A. (2003). Multitrace superpotentials vs. matrix models. Communications in Mathematical Physics, 242, pp. 361-392. doi: 10.1007/s00220-003-0947-9
Banjo, Elizabeth (2013). Representation theory of algebras related to the partition algebra. (Unpublished Doctoral thesis, City University London)
Barbier, S., Cox, A. ORCID: 0000-0001-9799-3122 and De Visscher, M. ORCID: 0000-0003-0617-2818 (2019). The blocks of the periplectic Brauer algebra in positive characteristic. Journal of Algebra,
Barker, A. J., Silvers, L. J., Proctor, M. R. E. and Weiss, N. O. (2012). Magnetic buoyancy instabilities in the presence of magnetic flux pumping at the base of the solar convection zone. Monthly Notices of the Royal Astronomical Society, 424(1), pp. 115-127. doi: 10.1111/j.1365-2966.2012.21174
Barker, H. A., Broom, M. and Rychtar, J. (2012). A game theoretic model of kleptoparasitism with strategic arrivals and departures of beetles at dung pats. Journal of Theoretical Biology, 300, pp. 292-298. doi: 10.1016/j.jtbi.2012.01.038
Baronchelli, A. (2018). The Emergence of Consensus: a primer. Royal Society Open Science, 5, 172189.. doi: 10.1098/rsos.172189
Baronchelli, A. (2014). Modeling is a tool, and data are crucial A comment on "Modelling language evolution: Examples and predictions" by Tao Gong et al.. Physics of Life Reviews, 11(2), pp. 317-318. doi: 10.1016/j.plrev.2014.01.014
Baronchelli, A. (2011). Role of feedback and broadcasting in the naming game. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 83(4), doi: 10.1103/PhysRevE.83.046103
Baronchelli, A. (2016). A gentle introduction to the minimal Naming Game. Belgian Journal of Linguistics, 30(1), pp. 171-192. doi: 10.1075/bjl.30.08bar
Baronchelli, A. (2011). The maturity of modeling. A comment on "Modeling the Cultural Evolution of Language" by Luc Steels.. Physics of Life Reviews, 8(4), pp. 377-378. doi: 10.1016/j.plrev.2011.10.005
Baronchelli, A., Barrat, A. and Pastor-Satorras, R. (2009). Glass transition and random walks on complex energy landscapes. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 80(2), doi: 10.1103/PhysRevE.80.020102
Baronchelli, A., Caglioti, E. and Loreto, V. (2005). Artificial sequences and complexity measures. Journal of Statistical Mechanics: Theory and Experiment new, 2005, doi: 10.1088/1742-5468/2005/04/P04002
Baronchelli, A., Caglioti, E. and Loreto, V. (2005). Measuring complexity with zippers. European Journal of Physics, 26(5), S69 - S77. doi: 10.1088/0143-0807/26/5/S08
Baronchelli, A., Caglioti, E., Loreto, V. and Pizzi, E. (2004). Dictionary-based methods for information extraction. Physica A: Statistical Mechanics and its Applications, 342(1-2), pp. 294-300. doi: 10.1016/j.physa.2004.01.072
Baronchelli, A., Caglioti, E., Loreto, V. and Steels, L. (2006). Complex systems approach to language games. Paper presented at the Second European Conference on Complex Systems ECCS'06, 25-29 Sep 2006, Oxford, UK.
Baronchelli, A., Castellano, C. and Pastor-Satorras, R. (2011). Voter models on weighted networks. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 83(6), doi: 10.1103/PhysRevE.83.066117
Baronchelli, A., Catanzaro, M. and Pastor-Satorras, R. (2008). Bosonic reaction-diffusion processes on scale-free networks. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 78(1), doi: 10.1103/PhysRevE.78.016111
Baronchelli, A., Catanzaro, M. and Pastor-Satorras, R. (2008). Random walks on complex trees. Physical Review E (PRE), 78(1), 011114. doi: 10.1103/PhysRevE.78.011114
Baronchelli, A., Chater, N., Christiansen, M. H. and Pastor-Satorras, R. (2013). Evolution in a Changing Environment. PLoS ONE, 8(1), doi: 10.1371/journal.pone.0052742
Baronchelli, A., Chater, N., Pastor-Satorras, R. and Christiansen, M. H. (2012). The Biological Origin of Linguistic Diversity. PLoS ONE, 7(10), doi: 10.1371/journal.pone.0048029
Baronchelli, A., Dall'Asta, L., Barrat, A. and Loreto, V. (2006). Bootstrapping communication in language games. Paper presented at the 6th International Conference (EVOLANG6), 12-15 April 2006, Rome, Italy.
Baronchelli, A., Dall'Asta, L., Barrat, A. and Loreto, V. (2007). Nonequilibrium phase transition in negotiation dynamics. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 76(5), doi: 10.1103/PhysRevE.76.051102
Baronchelli, A., Dall'Asta, L., Barrat, A. and Loreto, V. (2006). Strategies for fast convergence in semiotic dynamic. In: Rocha, L. M. (Ed.), Artificial Life X: Proceedings of the Tenth International Conference on the Simulation and Synthesis of Living Systems. (pp. 480-485). Cambridge, MA, US: The MIT Press. ISBN 9780262681629
Baronchelli, A., Dall'Asta, L., Barrat, A. and Loreto, V. (2006). Topology-induced coarsening in language games. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 73(1), doi: 10.1103/PhysRevE.73.015102
Baronchelli, A., Dall'Asta, L., Barrat, A. and Loreto, V. (2007). The role of topology on the dynamics of the Naming Game. European Physical Journal: Special Topics, 143(1), pp. 233-235. doi: 10.1140/epjst/e2007-00092-0
Baronchelli, A. and Díaz-Guilera, A. (2012). Consensus in networks of mobile communicating agents. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 85(1), doi: 10.1103/PhysRevE.85.016113
Baronchelli, A., Felici, M., Caglioti, E., Loreto, V. and Steels, L. (2005). Self-organizing communication in language games. Paper presented at the First European Conference on Complex Systems ECCS'05, 14-18 Nov 2005, Paris, France.
Baronchelli, A., Felici, M., Loreto, V., Caglioti, E. and Steels, L. (2006). Sharp transition towards shared vocabularies in multi-agent systems. Journal of Statistical Mechanics: Theory and Experiment new, 2006(6), doi: 10.1088/1742-5468/2006/06/P06014
Baronchelli, A., Ferrer-i-Cancho, R., Pastor-Satorras, R., Chater, N. and Christiansen, M. H. (2013). Networks in cognitive science. Trends in Cognitive Sciences, 17(7), pp. 348-360. doi: 10.1016/j.tics.2013.04.010
Baronchelli, A., Gong, T., Loreto, V. and Puglisi, A. (2010). On the origin of universal categorization patterns: an in-silica experiment. In: Smith, A. D. M., Schouwstra, M. and de Boer, B. (Eds.), The Evolution of Language. (pp. 365-366). USA: World Scientific. ISBN 9814295221
Baronchelli, A., Gong, T., Puglisi, A. and Loreto, V. (2010). Modeling the emergence of universality in color naming patterns. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 107(6), pp. 2403-2407. doi: 10.1073/pnas.0908533107
Baronchelli, A. and Loreto, V. (2006). Ring structures and mean first passage time in networks. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 73(2), doi: 10.1103/PhysRevE.73.026103
Baronchelli, A., Loreto, V. and Puglisi, A. (2015). Individual biases, cultural evolution, and the statistical nature of language universals: the case of colour naming systems. PLoS ONE, 10(5), doi: 10.1371/journal.pone.0125019
Baronchelli, A., Loreto, V. and Steels, L. (2008). In-depth analysis of the naming game dynamics: The homogeneous mixing case. International Journal of Modern Physics C (ijmpc), 19(5), pp. 785-812. doi: 10.1142/S0129183108012522
Baronchelli, A. and Pastor-Satorras, R. (2009). Effects of mobility on ordering dynamics. Journal of Statistical Mechanics: Theory and Experiment, 2009(11), doi: 10.1088/1742-5468/2009/11/L11001
Baronchelli, A. and Pastor-Satorras, R. (2010). Mean-field diffusive dynamics on weighted networks. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 82(1), doi: 10.1103/PhysRevE.82.011111
Baronchelli, A. and Radicchi, F (2013). Lévy flights in human behavior and cognition. Chaos, Solitons and Fractals, 56, pp. 101-105. doi: 10.1016/j.chaos.2013.07.013
Barrat, A., Baronchelli, A., Dall'Asta, L. and Loreto, V. (2007). Agreement dynamics on interaction networks with diverse topologies. Chaos, 17(2), 026111. doi: 10.1063/1.2734403
Bastos, M. T. ORCID: 0000-0003-0480-1078, Mercea, D. ORCID: 0000-0003-3762-2404 and Baronchelli, A. ORCID: 0000-0002-0255-0829 (2018). The Geographic Embedding of Online Echo Chambers: Evidence from the Brexit Campaign. PLoS ONE, 13(11), e0206841. doi: 10.1371/journal.pone.0206841
Bastos, M. T., Mercea, D. and Baronchelli, A. The Spatial Dimension of Online Echo Chambers. .
Beccaria, M., De Angelis, G. F. and Forini, V. ORCID: 0000-0001-9726-1423 (2007). The scaling function at strong coupling from the quantum string Bethe equations. Journal of High Energy Physics, 2007, 066.. doi: 10.1088/1126-6708/2007/04/066
Beccaria, M., Dunne, G. V., Forini, V. ORCID: 0000-0001-9726-1423, Pawellek, M. and Tseytlin, A. A. (2010). Exact computation of one-loop correction to the energy of folded spinning string in AdS(5) x S-5. Journal of Physics A: Mathematical and Theoretical, 43(16), 165402.. doi: 10.1088/1751-8113/43/16/165402
Beccaria, M. and Forini, V. ORCID: 0000-0001-9726-1423 (2007). Anomalous dimensions of finite size field strength operators in N=4 SYM. Journal of High Energy Physics, 2007, 031.. doi: 10.1088/1126-6708/2007/11/031
Beccaria, M. and Forini, V. ORCID: 0000-0001-9726-1423 (2009). Four loop reciprocity of twist two operators in N=4 SYM. Journal of High Energy Physics, 2009, 111.. doi: 10.1088/1126-6708/2009/03/111
Beccaria, M. and Forini, V. ORCID: 0000-0001-9726-1423 (2008). Reciprocity of gauge operators in N=4SYM. Journal of High Energy Physics, 2008, 077.. doi: 10.1088/1126-6708/2008/06/077
Beccaria, M., Forini, V. ORCID: 0000-0001-9726-1423, Lukowski, T. and Zieme, S. (2009). Twist-three at five loops, Bethe ansatz and wrapping. Journal of High Energy Physics, 2009, 129.. doi: 10.1088/1126-6708/2009/03/129
Belavin, A. and Fring, A. (1997). On the Fermionic Quasi-particle Interpretation in Minimal Models of Conformal Field Theory. Physics Letters B, 409(1-4), pp. 199-205. doi: 10.1016/S0370-2693(97)00879-4
Bender, C., Hook, D. W., Meisinger, P. N. and Wang, Q-H. (2010). Complex correspondence principle. Physical Review Letters, 104(6), e061601. doi: 10.1103/PhysRevLett.104.061601
Bender, C. and Klevansky, S. P (2010). Families of particles with different masses in PT-symmetric quantum field theory. Physical Review Letters, 105(3), e031601. doi: 10.1103/PhysRevLett.105.031601
Bender, C. and Mannheim, P. D. (2008). No-ghost theorem for the fourth-order derivative Pais-Uhlenbeck oscillator model. Physical Review Letters, 100(11), e110402. doi: 10.1103/PhysRevLett.100.110402
Bender, C., Moshe, M. and Sarkar, S. (2012). PT-symmetric interpretation of double-scaling. Journal of Physics A: Mathematical and Theoretical, 46(10), e102002. doi: 10.1088/1751-8113/46/10/102002
Benishti, N., He, Y. and Sparks, J. (2010). (Un)Higgsing the M2-brane. Journal of High Energy Physics, 1001, 067 - 067. doi: 10.1007/JHEP01(2010)067
Benson, D. J., Kessar, R. and Linckelmann, M. (2017). On blocks of defect two and one simple module, and Lie algebra structure of HH¹. Journal of Pure and Applied Algebra, doi: /10.1016/j.jpaa.2017.02.010
Benson, D. J. and Linckelmann, M. (2005). Vertex and source determine the block variety of an indecomposable module. Journal of Pure and Applied Algebra, 197(1-3), pp. 11-17. doi: 10.1016/j.jpaa.2004.08.032
Benvenuti, S., Feng, B., Hanany, A. and He, Y. (2007). Counting BPS Operators in Gauge Theories: Quivers, Syzygies and Plethystics. Journal of High Energy Physics, 0711(050), doi: 10.1088/1126-6708/2007/11/050
Bercioux, D. and De Martino, A. (2010). Spin-resolved scattering through spin-orbit nanostructures in graphene. Physical Review B (PRB), 81(16), doi: 10.1103/PhysRevB.81.165410
Bianchi, L., Bianchi, M. S., Bres, A., Forini, V. ORCID: 0000-0001-9726-1423 and Vescovi, E. (2014). Two-loop cusp anomaly in ABJM at strong coupling. Journal of High Energy Physics, 2014, 13.. doi: 10.1007/JHEP10(2014)013
Bianchi, L., Bianchi, M. S., Forini, V. ORCID: 0000-0001-9726-1423, Leder, B. and Vescovi, E. (2016). Green-Schwarz superstring on the lattice. Journal of High Energy Physics, 2016, 14.. doi: 10.1007/JHEP07(2016)014
Bianchi, L., Forini, V. ORCID: 0000-0001-9726-1423 and Hoare, B. (2013). Two-dimensional S-matrices from unitarity cuts. Journal of High Energy Physics, 2013, 88.. doi: 10.1007/JHEP07(2013)088
Bianchini, D. (2016). Entanglement entropy in integrable quantum systems. (Unpublished Doctoral thesis, City, University of London)
Bianchini, D., Castro Alvaredo, O. and Doyon, B. (2015). Entanglement entropy of non-unitary integrable quantum field theory. Nuclear Physics B, 896(July 2), pp. 835-880. doi: 10.1016/j.nuclphysb.2015.05.013
Bianchini, D. and Castro-Alvaredo, O. (2016). Branch Point Twist Field Correlators in the Massive Free Boson Theory. Nuclear Physics B, 913, pp. 879-911. doi: 10.1016/j.nuclphysb.2016.10.016
Bianchini, D., Castro-Alvaredo, O., Doyon, B., Levi, E. and Ravanini, F. (2015). Entanglement entropy of non-unitary conformal field theory. Journal of Physics A: Mathematical and Theoretical, 48(4), 04FT01. doi: 10.1088/1751-8113/48/4/04FT01
Blondeau-Fournier, O., Castro Alvaredo, O. and Doyon, B. (2016). Universal scaling of the logarithmic negativity in massive quantum field theory. Journal of Physics A: Mathematical and Theoretical, 49(12), doi: 10.1088/1751-8113/49/12/125401
Boltje, R., Kessar, R. ORCID: 0000-0002-1893-4237 and Linckelmann, M. (2019). On Picard groups of blocks of finite groups. Journal of Algebra, doi: 10.1016/j.jalgebra.2019.02.045
Bombardelli, D., Stefanski, B. and Torrielli, A. (2018). The low-energy limit of AdS(3)/CFT2 and its TBA. Journal of High Energy Physics(10), doi: 10.1007/JHEP10(2018)177
Bond, R. L., He, Y-H. ORCID: 0000-0002-0787-8380 and Ormerod, T. C. (2018). A quantum framework for likelihood ratios. International Journal of Quantum Information, 16(1), 1850002.. doi: 10.1142/S0219749918500028
Bonnafé, C. and Kessar, R. (2008). On the endomorphism algebra of modular Gelfand-Graev representations. Journal of Algebra, 320(7), pp. 2847-2870. doi: 10.1016/j.jalgebra.2008.05.029
Borsato, R., Ohlsson Sax, O., Sfondrini, A. and Stefanski, B. (2015). The AdS 3 × S 3 × S 3 × S 1 worldsheet S matrix. Journal of Physics A: Mathematical and Theoretical, 48(41), p. 5401. doi: 10.1088/1751-8113/48/41/415401
Borsato, R., Ohlsson Sax, O., Sfondrini, A. and Stefanski, B. (2016). On the spectrum of AdS₃ × S³× T⁴ strings with Ramond–Ramond flux. Journal of Physics A: Mathematical and Theoretical, 49, 41LT03.. doi: 10.1088/1751-8113/49/41/41LT03
Borsato, R., Ohlsson Sax, O., Sfondrini, A. and Stefanski, B. (2014). Towards the All-Loop Worldsheet S Matrix for AdS3 × S3 × T4. Physical Review Letters (PRL), 113, p. 131601. doi: 10.1103/PhysRevLett.113.131601
Borsato, R., Ohlsson Sax, O., Sfondrini, A., Stefanski, B. and Torrielli, A. (2013). Dressing phases of AdS3/CFT2. Physical Review D - Particles, Fields, Gravitation and Cosmology, 88(6), doi: 10.1103/PhysRevD.88.066004
Borsato, R., Ohlsson Sax, O., Sfondrini, A., Stefanski, B. and Torrielli, A. (2016). On the dressing factors, Bethe equations and Yangian symmetry of strings on AdS3 × S3 × T4. Journal of Physics A: Mathematical and Theoretical, 50(2), doi: 10.1088/1751-8121/50/2/024004
Borsato, R., Sax, O. O., Sfondrini, A., Stefanski, B. and Torrielli, A. (2013). The all-loop integrable spin-chain for strings on AdS3 × S 3 × T 4: The massive sector. Journal of High Energy Physics, 2013(8), doi: 10.1007/JHEP08(2013)043
Bose, S., Gundry, J. and He, Y. (2015). Gauge Theories and Dessins d'Enfants: Beyond the Torus. Journal of High Energy Physics, 2015(1), p. 135. doi: 10.1007/JHEP01(2015)135
Bourgoin, Monique (2012). Antilinear deformations of Coxeter groups with application to Hamiltonian systems. (Unpublished Doctoral thesis, City University London)
Bourjaily, J. L., He, Y. ORCID: 0000-0002-0787-8380, McLeod, A. J., von Hippel, M. and Wilhelm, M. (2018). Traintracks through Calabi-Yau Manifolds: Scattering Amplitudes beyond Elliptic Polylogarithms. Physical Review Letter, 121(7), doi: 10.1103/PhysRevLett.121.071603
Bowman, C. and Cox, A. (2014). Decomposition numbers for Brauer algebras of type G(m,p,n) in characteristic zero. Journal of Pure and Applied Algebra, 218(6), pp. 992-1002. doi: 10.1016/j.jpaa.2013.10.014
Bowman, C., Cox, A. and De Visscher, M. (2013). Decomposition numbers for the cyclotomic Brauer algebras in characteristic zero. Journal of Algebra, 378, pp. 80-102. doi: 10.1016/j.jalgebra.2012.12.020
Bowman, C., Cox, A. and Speyer, L. (2016). A Family of Graded Decomposition Numbers for Diagrammatic Cherednik Algebras. International Mathematics Research Notices, doi: 10.1093/imrn/rnw101
Bowman, C., De Visscher, M. and Enyang, J. (2017). Simple modules for the partition algebra and monotone convergence of Kronecker coefficients. International Mathematics Research Notices, doi: 10.1093/irmn/rnx095
Braun, V., He, Y. and Ovrut, B. A. (2006). Stability of the minimal heterotic standard model bundle. Journal of High Energy Physics, 0606(32), doi: 10.1088/1126-6708/2006/06/032
Braun, V., He, Y. and Ovrut, B. A. (2013). Supersymmetric Hidden Sectors for Heterotic Standard Models. Journal of High Energy Physics, 2013(9), p. 8. doi: 10.1007/JHEP09(2013)008
Braun, V., He, Y. and Ovrut, B. A. (2006). Yukawa couplings in heterotic standard models. Journal of High Energy Physics, 0604, 019 - 019. doi: 10.1088/1126-6708/2006/04/019
Braun, V., He, Y., Ovrut, B. A. and Pantev, T. (2006). The Exact MSSM spectrum from string theory. Journal of High Energy Physics, 0605, doi: 10.1088/1126-6708/2006/05/043
Braun, V., He, Y., Ovrut, B. A. and Pantev, T. (2006). Heterotic standard model moduli. Journal of High Energy Physics, 2006(JHEP01), 025 - 025. doi: 10.1088/1126-6708/2006/01/025
Braun, V., He, Y., Ovrut, B. A. and Pantev, T. (2005). A Heterotic standard model. Physics Letters B, 618, pp. 252-258. doi: 10.1016/j.physletb.2005.05.007
Braun, V., He, Y., Ovrut, B. A. and Pantev, T. (2006). Moduli dependent mu-terms in a heterotic standard model. Journal of High Energy Physics, 2006(JHEP03), 006 - 006. doi: 10.1088/1126-6708/2006/03/006
Braun, V., He, Y., Ovrut, B. A. and Pantev, T. (2006). Vector bundle extensions, sheaf cohomology, and the heterotic standard model. Advances in Theoretical and Mathematical Physics, 10(4),
Braun, V., He, Y., Ovrut, B. A. and Pantev, T. (2005). A standard model from the E(8) x E(8) heterotic superstring. Journal of High Energy Physics, 0506(039), doi: 10.1088/1126-6708/2005/06/039
Braun, V. and Stefanski, B. (2002). Orientifolds and K-theory. Paper presented at the NATO Advanced Study Institute on Progress in String Theory and M-Theory, 24-05-1999 - 05-06-1999, Cargese, France.
Broom, M. (2009). Balancing risks and rewards: the logic of violence. Frontiers in Behavioral Neuroscience, 3(51), doi: 10.3389/neuro.08.051.2009
Broom, M. (2003). The use of multiplayer game theory in the modeling of biological populations. Comments on Theoretical Biology, 8(2-3), pp. 103-123.
Broom, M., Borries, C. and Koenig, A. (2004). Infanticide and infant defence by males--modelling the conditions in primate multi-male groups. Journal of Theoretical Biology, 231(2), pp. 261-270. doi: 10.1016/j.jtbi.2004.07.001
Broom, M. and Cannings, C. (2017). Game theoretical modelling of a dynamically evolving network I: general target sequences. Journal of Dynamics and Games, 4(4), pp. 285-318. doi: 10.3934/jdg.2017016
Broom, M. and Cannings, C. (2013). A dynamic network population model with strategic link formation governed by individual preferences. Journal of Theoretical Biology, 335, doi: 10.1016/j.jtbi.2013.06.024
Broom, M., Cannings, C. and Vickers, G. T. (2000). Evolution in Knockout Contests: the Variable Strategy Case. Selection, 1, pp. 5-21.
Broom, M., Crowe, M. L., Fitzgerald, M. R. and Rychtar, J. (2010). The stochastic modelling of kleptoparasitism using a Markov process. Journal of Theoretical Biology, 264(2), pp. 266-272. doi: 10.1016/j.jtbi.2010.01.012
Broom, M., Hadjichrysanthou, C. and Rychtar, J. (2010). Evolutionary games on graphs and the speed of the evolutionary process. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 466(2117), pp. 1327-1346. doi: 10.1098/rspa.2009.0487
Broom, M., Hadjichrysanthou, C., Rychtar, J. and Stadler, B. T. (2010). Two results on evolutionary processes on general non-directed graphs. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 466(2121), pp. 2795-2798. doi: 10.1098/rspa.2010.0067
Broom, M., Hughes, R. N., Burrows, M. T. and Ruxton, G. D. (2012). Evolutionarily stable sexual allocation by both stressed and unstressed potentially simultaneous hermaphrodites within the same population.. Journal of Theoretical Biology, 309, pp. 96-102. doi: 10.1016/j.jtbi.2012.06.004
Broom, M., Johanis, M. and Rychtar, J. (2017). The effect of fight cost structure on fighting behaviour involving simultaneous decisions and variable investment levels. Journal of Mathematical Biology, doi: 10.1007/s00285-017-1149-y
Broom, M., Johanis, M. and Rychtar, J. (2014). The effect of fight cost structure on fighting behaviour. Journal of Mathematical Biology, pp. 979-996. doi: 10.1007/s00285-014-0848-x
Broom, M., Kiss, I. Z. and Rafols, I. (2009). Can epidemic models describe the diffusion of research topics across disciplines?. Paper presented at the 12th International Conference of the International Society for Scientometrics and Informetrics, 14-07-2009 - 17-07-2009, Rio de Janeiro.
Broom, M. ORCID: 0000-0002-1698-5495 and Krivan, V. (2018). Biology and evolutionary games. In: Basar, T. and Zaccour, G. (Eds.), Handbook of Dynamic Game Theory. (pp. 1039-1077). Germany: Springer. ISBN 9783319443737
Broom, M., Krivan, V. and Riedel, F. (2015). Dynamic Games and Applications: Second Special Issue on Population Games: Introduction. Dynamic Games and Applications, 5(2), pp. 155-156. doi: 10.1007/s13235-015-0153-3
Broom, M., Luther, R. M., Ruxton, G. D. and Rychtar, J. (2008). A game-theoretic model of kleptoparasitic behavior in polymorphic populations. Journal of Theoretical Biology, 255(1), pp. 81-91. doi: 10.1016/j.jtbi.2008.08.001
Broom, M., Luther, R. M. and Rychtar, J. (2009). A Hawk-Dove game in kleptoparasitic populations. Journal of Combinatorics, Information and System Sciences, 4, pp. 449-462.
Broom, M. and Ruxton, G. D. (2013). On the evolutionary stability of zero-cost pooled-equilibrium signals. Journal of Theoretical Biology, 323, pp. 69-75. doi: 10.1016/j.jtbi.2013.01.017
Broom, M. and Ruxton, G. D. (2012). Perceptual advertisement by the prey of stalking or ambushing predators. Journal of Theoretical Biology, 315, pp. 9-16. doi: 10.1016/j.jtbi.2012.08.026
Broom, M. and Ruxton, G. D. (2011). Some mistakes go unpunished: the evolution of "all or nothing" signalling.. Evolution, 65(10), pp. 2743-2749. doi: 10.1111/j.1558-5646.2011.01377.x
Broom, M. and Ruxton, G. D. (2004). A framework for modelling and analysing conspecific brood parasitism. Journal of Mathematical Biology, 48(5), pp. 529-544. doi: 10.1007/s00285-003-0244-4
Broom, M., Ruxton, G. D. and Schaefer, H. M. (2013). Signal verification can promote reliable signalling.. Proceedings of the Royal Society B: Biological Sciences, 280(201315), -. doi: 10.1098/rspb.2013.1560
Broom, M. and Rychtar, J. (2014). Asymmetric Games in Monomorphic and Polymorphic Populations. Dynamic Games and Applications, 4(4), pp. 391-406. doi: 10.1007/s13235-014-0112-4
Broom, M. and Rychtar, J. (2016). Evolutionary games with sequential decisions and dollar auctions. Dynamic Games and Applications, doi: 10.1007/s13235-016-0212-4
Broom, M. and Rychtar, J. (2016). Ideal Cost-Free Distributions in Structured Populations for General Payoff Functions. Dynamic Games and Applications, doi: 10.1007/s13235-016-0204-4
Broom, M. and Rychtar, J. (2011). Kleptoparasitic melees--modelling food stealing featuring contests with multiple individuals.. Bulletin of Mathematical Biology, 73(3), pp. 683-699. doi: 10.1007/s11538-010-9546-z
Broom, M. and Rychtar, J. (2016). Nonlinear and Multiplayer Evolutionary Games. Annals of the International Society of Dynamic Games, 14, pp. 95-115. doi: 10.1007/978-3-319-28014-1_5
Broom, M. and Rychtar, J. (2008). An analysis of the fixation probability of a mutant on special classes of non-directed graphs. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 464(2098), pp. 2609-2627. doi: 10.1098/rspa.2008.0058
Broom, M. and Rychtar, J. (2009). A game theoretical model of kleptoparasitism with incomplete information. Journal of Mathematical Biology, 59(5), pp. 631-649. doi: 10.1007/s00285-008-0247-2
Broom, M. and Rychtar, J. (2012). A general framework for analysing multiplayer games in networks using territorial interactions as a case study. Journal of Theoretical Biology, 302, pp. 70-80. doi: 10.1016/j.jtbi.2012.02.025
Broom, M. and Rychtar, J. (2016). A model of food stealing with asymmetric information. Ecological Complexity, 26, pp. 137-142. doi: 10.1016/j.ecocom.2015.05.001
Broom, M., Rychtar, J. and Spears-Gill, T. (2016). The Game-Theoretical Model of Using Insecticide-Treated Bed-Nets to Fight Malaria. Applied Mathematics, 07(09), pp. 852-860. doi: 10.4236/am.2016.79076
Broom, M., Rychtar, J. and Stadler, B. (2009). Evolutionary Dynamics on Small-Order Graphs. Journal of Interdisciplinary Mathematics, 12, pp. 129-140.
Broom, M., Rychtar, J. and Sykes, C. (2008). The Evolution of Kleptoparasitism under Adaptive Dynamics Without Restriction. Journal of Interdisciplinary Mathematics, 11(4), pp. 479-494.
Broom, M., Rychtar, J. and Sykes, D. (2014). Kleptoparasitic Interactions under Asymmetric Resource Valuation. Mathematical Modelling of Natural Phenomena, 9(3), pp. 138-147. doi: 10.1051/mmnp/20149309
Broom, M., Speed, M. P. and Ruxton, G. D. (2006). Evolutionarily stable defence and signalling of that defence. Journal of Theoretical Biology, 242(1), pp. 32-43. doi: 10.1016/j.jtbi.2006.01.032
Broom, M., Speed, M. P. and Ruxton, G. D. (2005). Evolutionarily stable investment in secondary defences. Functional Ecology, 19(5), pp. 836-843. doi: 10.1111/j.1365-2435.2005.01030.x
Budroni, M. A., Baronchelli, A. and Pastor-Satorras, R. (2017). Scale-free networks emerging from multifractal time series. Physical Review E (PRE), 95(5), 052311.. doi: 10.1103/PhysRevE.95.052311
Bull, K., He, Y. ORCID: 0000-0002-0787-8380, Jejjala, V. and Mishra, C. (2018). Machine learning CICY threefolds. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 785, pp. 65-72. doi: 10.1016/j.physletb.2018.08.008
Bytsko, A. G. and Fring, A. (1999). ADE spectra in conformal field theory. Physics Letters B, 454(1-2), pp. 59-69. doi: 10.1016/S0370-2693(99)00300-7
Bytsko, A. G. and Fring, A. (1998). Anyonic Interpretation of Virasoro Characters and the Thermodynamic Bethe Ansatz. Nuclear Physics B, 521(3), pp. 573-591. doi: 10.1016/S0550-3213(98)00222-3
Bytsko, A. G. and Fring, A. (2000). Factorized combinations of virasoro characters. Communications in Mathematical Physics, 209(1), pp. 179-205. doi: 10.1007/s002200050019
Bytsko, A. G. and Fring, A. (1999). A Note on ADE-Spectra in Conformal Field Theory. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 454(1-2), pp. 59-69. doi: 10.1016/S0370-2693(99)00300-7
Bytsko, A. G. and Fring, A. (1998). Thermodynamic Bethe Ansatz with Haldane Statistics. Nuclear Physics B, 532(3), pp. 588-608. doi: 10.1016/S0550-3213(98)00531-8
Cabra, D. C., De Martino, A., Grynberg, M. D., Peysson, S. and Pujol, P. (2000). Random bond XXZ chains with modulated couplings. Physical Review Letters (PRL), 85(22), pp. 4791-4794. doi: 10.1103/PhysRevLett.85.4791
Cabra, D. C., De Martino, A., Honecker, A., Pujol, P. and Simonetti, P. (2000). Doping-dependent magnetization plateaux in p-merized Hubbard chains. Physics Letters A, 268(4-6), doi: 10.1016/S0375-9601(00)00210-3
Cabra, D. C., De Martino, A., Honecker, A., Pujol, P. and Simonetti, P. (2001). Emergence of Irrationality: Magnetization Plateaux in Modulated Hubbard Chains. Physical Review B (PRB), 63(9), doi: 10.1103/PhysRevB.63.094406
Cabra, D. C., De Martino, A., Pujol, P. and Simonetti, P. (2002). Hubbard ladders in a magnetic field. Europhysics Letters, 57(3), doi: 10.1209/epl/i2002-00475-5
Candelas, P., de la Ossa, X., He, Y. and Szendroi, B. (2008). Triadophilia: A special corner in the landscape. Advances in Theoretical and Mathematical Physics, 12(2),
Cardy, J. L., Castro-Alvaredo, O. and Doyon, B. (2008). Form factors of branch-point twist fields in quantum integrable models and entanglement entropy. Journal of Statistical Physics, 130(1), pp. 129-168. doi: 10.1007/s10955-007-9422-x
Castelló, X., Baronchelli, A. and Loreto, V. (2009). Consensus and ordering in language dynamics. European Physical Journal B (The), 71(4), pp. 557-564. doi: 10.1140/epjb/e2009-00284-2
Castro Alvaredo, O. (2017). Massive Corrections to Entanglement in Minimal E8 Toda Field Theory. SciPost Physics, 2(008), doi: 10.21468/SciPostPhys.2.1.008
Castro Alvaredo, O. ORCID: 0000-0003-1876-7341, De Fazio, C., Doyon, B. and Szécsényi, I. M. (2018). Entanglement Content of Quasiparticle Excitations. Physical Review Letters, 121(17), 170602.. doi: 10.1103/PhysRevLett.121.170602
Castro Alvaredo, O. ORCID: 0000-0003-1876-7341, Doyon, B. and Fioravanti, D. (2018). Conical twist fields and null polygonal Wilson loops. Nuclear Physics B, 931, pp. 146-178. doi: 10.1016/j.nuclphysb.2018.04.002
Castro Alvaredo, O. and Fring, A. (2002). Conductance from Non-perturbative Methods II. Journal of High Energy Physics, 2002(10),
Castro-Alvaredo, A. and Fring, A. (2004). On vacuum energies and renomalizability in integrable quantum field theories. Nuclear Physics B, 687(3), pp. 303-322. doi: 10.1016/j.nuclphysb.2004.04.005
Castro-Alvaredo, O. (2006). Boundary form factors of the sinh-Gordon model with Dirichlet boundary conditions at the self-dual point. Journal of Physics A: Mathematical and General, 39(38), pp. 11901-11914. doi: 10.1088/0305-4470/39/38/016
Castro-Alvaredo, O. (2008). Form factors of boundary fields for the A(2)-affine Toda field theory. Journal of Physics A: Mathematical and General, 41(19), doi: 10.1088/1751-8113/41/19/194005
Castro-Alvaredo, O., Chen, Y., Doyon, B. and Hoogeveen, M. (2014). Thermodynamic Bethe ansatz for non-equilibrium steady states: exact energy current and fluctuations in integrable QFT. Journals of Statistical Mechanics [JSTAT], doi: 10.1088/1742-5468/2014/03/P03011
Castro-Alvaredo, O. and Doyon, B. (2009). Bi-partite Entanglement Entropy in Massive QFT with a Boundary: the Ising Model. Journal of Statistical Physics, 134(1), pp. 105-145. doi: 10.1007/s10955-008-9664-2
Castro-Alvaredo, O. and Doyon, B. (2008). Bi-partite entanglement entropy in integrable models with backscattering. Journal of Physics A: Mathematical and General, 41(27), doi: 10.1088/1751-8113/41/27/275203
Castro-Alvaredo, O. and Doyon, B. (2009). Bi-partite entanglement entropy in massive (1+1)-dimensional quantum field theories. Journal of Physics A: Mathematical and General, 42(50), doi: 10.1088/1751-8113/42/50/504006
Castro-Alvaredo, O. and Doyon, B. (2012). Entanglement Entropy of Highly Degenerate States and Fractal Dimensions. Physical Review Letters (PRL), 108(12), doi: 10.1103/PhysRevLett.108.120401
Castro-Alvaredo, O. and Doyon, B. (2013). Entanglement in permutation symmetric states, fractal dimensions, and geometric quantum mechanics. Journal of Statistical Mechanics: Theory and Experiment, 2013, P02016 -. doi: 10.1088/1742-5468/2013/02/P02016
Castro-Alvaredo, O. and Doyon, B. (2011). Permutation operators, entanglement entropy, and the XXZ spin chain in the limit Δ-> -1. Journal of Statistical Mechanics: Theory and Experiment, 2011(2), doi: 10.1088/1742-5468/2011/02/P02001
Castro-Alvaredo, O., Doyon, B. and Levi, E. (2011). Arguments towards a c-theorem from branch-point twist fields. Journal of Physics A: Mathematical and General, 44(49), doi: 10.1088/1751-8113/44/49/492003
Castro-Alvaredo, O., Doyon, B. and Ravanini, F. (2017). Irreversibility of the renormalization group flow in non-unitary quantum field theory. Journal of Physics A: Mathematical and Theoretical, 50(42), 424002.. doi: 10.1088/1751-8121/aa8a10
Castro-Alvaredo, O., Doyon, B. and Yoshimura, T. (2016). Emergent Hydrodynamics in Integrable Quantum Systems Out of Equilibrium. Physical Review X, 6(4), 041065.. doi: 10.1103/PhysRevX.6.041065
Castro-Alvaredo, O., Dreissig, J. and Fring, A. (2002). Integrable scattering theories with unstable particles. The European Physical Journal C - Particles and Fields, 35(3), pp. 393-411. doi: 10.1140/epjc/s2004-01780-x
Castro-Alvaredo, O. ORCID: 0000-0003-1876-7341, Fazio, C. D., Doyon, B. and Szécsényi, I. M. (2018). Entanglement Content of Quantum Particle Excitations I. Free Field Theory. Journal of High Energy Physics, 39, doi: 10.1007/JHEP10(2018)039
Castro-Alvaredo, O. and Fring, A. (2004). Applications of quantum integrable systems. International Journal of Modern Physics A (IJMPA), 19(S2), pp. 92-116. doi: 10.1142/S0217751X04020336
Castro-Alvaredo, O. and Fring, A. (2003). Breathers in the elliptic sine-Gordon model. Journal of Physics A: Mathematical and General, 36(40), doi: 10.1088/0305-4470/36/40/008
Castro-Alvaredo, O. and Fring, A. (2004). Chaos in the thermodynamic Bethe ansatz. Physics Letters A, 334(2-3), pp. 173-179. doi: 10.1016/j.physleta.2004.11.009
Castro-Alvaredo, O. and Fring, A. (2001). Constructing infinite particle spectra. Physical Review D (PRD), 64(8), doi: 10.1103/PhysRevD.64.085005
Castro-Alvaredo, O. and Fring, A. (2001). Decoupling the SU(N)2-homogeneous sine-Gordon model. Physical Review D (PRD), 64(8), doi: 10.1103/PhysRevD.64.085007
Castro-Alvaredo, O. and Fring, A. (2000). Decoupling the SU(N)_2-homogeneous Sine-Gordon model. Physical Review D (PRD), 64(8), doi: 10.1103/PhysRevD.64.085007
Castro-Alvaredo, O. and Fring, A. (2002). Finite temperature correlation functions from form factors. Nuclear Physics B, 636(3), pp. 611-631. doi: 10.1016/S0550-3213(02)00409-1
Castro-Alvaredo, O. and Fring, A. (2001). Form factors from free fermionic Fock fields, the Federbush model. Nuclear Physics B, 618(3), pp. 437-464. doi: 10.1016/S0550-3213(01)00462-X
Castro-Alvaredo, O. and Fring, A. (2001). Form-factors from free fermionic Fock fields, the Federbush model. Nuclear Physics B, 618(3), pp. 437-464. doi: 10.1016/S0550-3213(01)00462-X
Castro-Alvaredo, O. and Fring, A. (2002). From integrability to conductance, impurity systems. Nuclear Physics B, 649(3), pp. 449-490. doi: 10.1016/S0550-3213(02)01029-5
Castro-Alvaredo, O. and Fring, A. (2001). Identifying the operator content, the Homogeneous sine-Gordon models. Nuclear Physics B, 604(1-2), pp. 367-390. doi: 10.1016/S0550-3213(01)00055-4
Castro-Alvaredo, O. and Fring, A. (2005). Integrable models with unstable particles. Progress in Mathematics, 237, pp. 59-87. doi: 10.1007/3-7643-7341-5_2
Castro-Alvaredo, O. and Fring, A. (2003). Rational sequences for the conductance in quantum wires from affine Toda field theories. Journal of Physics A: Mathematical and General, 36(26), L425-. doi: 10.1088/0305-4470/36/26/101
Castro-Alvaredo, O. and Fring, A. (2000). Renormalization group flow with unstable particles. Physical Review D (PRD), 63(2), doi: 10.1103/PhysRevD.63.021701
Castro-Alvaredo, O. and Fring, A. (2002). Scaling functions from q-deformed Virasoro characters. Journal of Physics A: Mathematical and General, 35(3), 609 -. doi: 10.1088/0305-4470/35/3/310
Castro-Alvaredo, O. and Fring, A. (2004). Universal boundary reflection amplitudes. Nuclear Physics B, 682(3), pp. 551-584. doi: 10.1016/j.nuclphysb.2004.01.009
Castro-Alvaredo, O. and Fring, A. (2002). Unstable particles versus resonances in impurity systems, conductance in quantum wires. Journal of Physics: Condensed Matter, 14(47), doi: 10.1088/0953-8984/14/47/101
Castro-Alvaredo, O. and Fring, A. (2009). A spin chain model with non-Hermitian interaction: the Ising quantum spin chain in an imaginary field. Journal of Physics A: Mathematical and Theoretical, 42(46), doi: 10.1088/1751-8113/42/46/465211
Castro-Alvaredo, O., Fring, A. and Faria, C. F. D. M. (2003). Relativistic treatment of harmonics from impurity systems in quantum wires. Physical Review B (PRB), 67(12), doi: 10.1103/PhysRevB.67.125405
Castro-Alvaredo, O., Fring, A. and Göhmann, F. (2002). On the absence of simultaneous reflection and transmission in integrable impurity systems. Physics Letters,
Castro-Alvaredo, O., Fring, A. and Korff, C. (2000). Form factors of the homogeneous sine-Gordon models. Physics Letters B, 484(1-2), pp. 167-176. doi: 10.1016/S0370-2693(00)00565-7
Castro-Alvaredo, O., Fring, A., Korff, C. and Miramontes, J. L. (2000). Thermodynamic Bethe ansatz of the homogeneous sine-Gordon models. Nuclear Physics B, 575(3), pp. 535-560. doi: 10.1016/S0550-3213(00)00162-0
Castro-Alvaredo, O. and Levi, E. (2011). Higher particle form factors of branch point twist fields in integrable quantum field theories. Journal of Physics A: Mathematical and General, 44(25), doi: 10.1088/1751-8113/44/25/255401
Castro-Alvaredo, O. and Maillet, J. M. (2007). Form factors of integrable Heisenberg (higher) spin chains. Journal of Physics A: Mathematical and General, 40(27), pp. 7451-7471. doi: 10.1088/1751-8113/40/27/004
Castro-Alvaredo, O. and Miramontes, J. L. (2000). Massive symmetric space sine-Gordon soliton theories and perturbed conformal field theory. Nuclear Physics B, 581(3), pp. 643-678. doi: 10.1016/S0550-3213(00)00248-0
Caudrelier, V. (2005). Factorization in integrable systems with impurity. Paper presented at the 14th International Colloquium on Quantum Groups, 16 June 2005 - 18 June 2005, Prague, Czech Republic.
Caudrelier, V. (2015). Multisymplectic approach to integrable defects in the sine-Gordon model. Journal of Physics A: Mathematical and Theoretical, 48, p. 195203. doi: 10.1088/1751-8113/48/19/195203
Caudrelier, V. (2008). On a systematic approach to defects in classical integrable field theories. International Journal of Geometric Methods in Modern Physics (ijgmmp), 5(7), pp. 1085-1108. doi: 10.1142/S0219887808003223
Caudrelier, V. (2015). On the Inverse Scattering Method for Integrable PDEs on a Star Graph. Communications in Mathematical Physics, 338(2), pp. 893-917. doi: 10.1007/s00220-015-2378-9
Caudrelier, V. and Crampe, N. (2006). Exact energy spectrum for models with equally spaced point potentials. Nuclear Physics B, 738(3), pp. 351-367. doi: 10.1016/j.nuclphysb.2005.12.014
Caudrelier, V. and Crampe, N. (2007). Exact results for the one-dimensional many-body problem with contact interaction: Including a tunable impurity. Reviews in Mathematical Physics (rmp), 19(4), pp. 349-370. doi: 10.1142/S0129055X07002973
Caudrelier, V. and Crampe, N. (2004). Integrable N-particle Hamiltonians with Yangian or reflection algebra symmetry. Journal of Physics A: Mathematical and General, 37(24), pp. 6285-6298. doi: 10.1088/0305-4470/37/24/007
Caudrelier, V. and Crampe, N. (2008). Symmetries of Spin Calogero Models. Symmetry, Integrability and Geometry: Methods and Applications (SIGMA), 4, pp. 1-16. doi: 10.3842/SIGMA.2008.090
Caudrelier, V., Crampe, N. and Zhang, Q. C. (2014). Integrable boundary for quad-graph systems: Three-dimensional boundary consistency. SIGMA 10, 14, -. doi: 10.3842/SIGMA.2014.014
Caudrelier, V., Crampe, N. and Zhang, Q. C. (2013). Set-theoretical reflection equation: Classification of reflection maps. Journal of Physics A: Mathematical and Theoretical, 46(095203), doi: 10.1088/1751-8113/46/9/095203
Caudrelier, V. and Doyon, B. (2015). The Quench Map in an Integrable Classical Field Theory: Nonlinear Schrödinger Equation. ..
Caudrelier, V. and Kundu, A. (2015). A multisymplectic approach to defects in integrable classical field theory. Journal of High Energy Physics, 88, doi: 10.1007/JHEP02(2015)088
Caudrelier, V., Mintchev, M. and Ragoucy, E. (2014). Exact scattering matrix of graphs in magnetic field and quantum noise. Journal of Mathematical Physics, 55, 083524. doi: 10.1063/1.4893354
Caudrelier, V., Mintchev, M. and Ragoucy, E. (2013). Quantum wire network with magnetic flux. Physics Letters A, 337(31-33), pp. 1788-1793. doi: 10.1016/j.physleta.2013.05.018
Caudrelier, V., Mintchev, M. and Ragoucy, E. (2005). Solving the quantum nonlinear Schrodinger equation with delta-type impurity. Journal of Mathematical Physics, 46(4), doi: 10.1063/1.1842353
Caudrelier, V., Mintchev, M. and Ragoucy, E. (2004). The quantum nonlinear Schrodinger model with point-like defect. Journal of Physics A: Mathematical and General, 37(30), L367. doi: 10.1088/0305-4470/37/30/L02
Caudrelier, V., Mintchev, M., Ragoucy, E. and Sorba, P. (2005). Reflection-transmission quantum Yang-Baxter equations. Journal of Physics A: Mathematical and General, 38(15), pp. 3431-3441. doi: 10.1088/0305-4470/38/15/013
Caudrelier, V. and Ragoucy, E. (2010). Direct computation of scattering matrices for general quantum graphs. Nuclear Physics B, 828(3), pp. 515-535. doi: 10.1016/j.nuclphysb.2009.10.012
Caudrelier, V. and Ragoucy, E. (2003). Lax pair and super-Yangian symmetry of the nonlinear super-Schrodinger equation. Journal of Mathematical Physics, 44(12), pp. 5706-5732. doi: 10.1063/1.1625078
Caudrelier, V. and Ragoucy, E. (2004). Quantum resolution of the nonlinear super-Schrodinger equation. International Journal of Geometric Methods in Modern Physics (ijgmmp), 19(10), pp. 1559-1577. doi: 10.1142/S0217751X0401804X
Caudrelier, V. and Ragoucy, E. (2005). Spontaneous symmetry breaking in the non-linear Schrodinger hierarchy with defect. Journal of Physics A: Mathematical and General, 38(10), pp. 2241-2257. doi: 10.1088/0305-4470/38/10/013
Caudrelier, V. and Zhang, Q. (2012). Vector Nonlinear Schrödinger Equation on the half-line. Journal of Physics A: Mathematical and General, 45(10), doi: 10.1088/1751-8113/45/10/105201
Caudrelier, V. and Zhang, Q. C. (2014). Yang-Baxter and reflection maps from vector solitons with a boundary. Nonlinearity, 27(6), 1081 -. doi: 10.1088/0951-7715/27/6/1081
Cavaglia, A. and Fring, A. (2012). PT-symmetrically deformed shock waves. Journal of Physics A: Mathematical and Theoretical, 45(44), p. 444010. doi: 10.1088/1751-8113/45/44/444010
Cavaglia, A., Fring, A. and Bagchi, B. (2011). PT-symmetry breaking in complex nonlinear wave equations and their deformations. Journal of Physics A: Mathematical and Theoretical, 44(32), p. 325201. doi: 10.1088/1751-8113/44/32/325201
Cavaglià, A. (2015). Nonsemilinear one-dimensional PDEs: analysis of PT deformed models and numerical study of compactons. (Doctoral thesis, City University London)
Cen, J., Correa, F. and Fring, A. (2017). Degenerate multi-solitons in the sine-Gordon equation. Journal of Physics A: Mathematical and Theoretical, 50(43), 435201.. doi: 10.1088/1751-8121/aa8b7e
Cen, J., Correa, F. and Fring, A. (2017). Time-delay and reality conditions for complex solitons. Journal of Mathematical Physics, 58(3), 032901.. doi: 10.1063/1.4978864
Cen, J. and Fring, A. (2016). Complex solitons with real energies. Journal of Physics A: Mathematical and Theoretical, 49(36), 365202.. doi: 10.1088/1751-8113/49/36/365202
Cen, J., Fring, A. ORCID: 0000-0002-7896-7161 and Frith, T. (2019). Time-dependent Darboux (supersymmetric) transformations for non-Hermitian quantum systems. Journal of Physics A: Mathematical and Theoretical, doi: 10.1088/1751-8121/ab0335
Centola, D. and Baronchelli, A. (2015). The spontaneous emergence of conventions: An experimental study of cultural evolution. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 112(7), pp. 1989-1994. doi: 10.1073/pnas.1418838112
Centola, D., Becker, J., Brackbill, D. and Baronchelli, A. ORCID: 0000-0002-0255-0829 (2018). Experimental evidence for tipping points in social convention. Science, 360(6393), pp. 1116-1119. doi: 10.1126/science.aas8827
Chan, Pee Yuaw (1986). Software reliability prediction. (Unpublished Doctoral thesis, The City University London)
Chan-Henry, R.Y. (1992). Design and development of electrochemical gas sensors. (Unpublished Doctoral thesis, City University London)
Chawsheen, T.A. and Broom, M. (2017). Seasonal time-series modeling and forecasting of monthly mean temperature for decision making in the Kurdistan Region of Iraq. Journal of Statistical Theory and Practice, doi: 10.1080/15598608.2017.1292484
Chen, T. and Abu-Nimeh, S. (2011). Lessons from Stuxnet. Computer, 44(4), pp. 91-93. doi: 10.1109/MC.2011.115
Chuang, J. (2001). Derived equivalence in SL2(p2). Transactions of the American Mathematical Society, 353(7), pp. 2897-2913.
Chuang, J. and Kessar, R. (2016). On Perverse Equivalences and Rationality. In: Proceedings of the 7th European Congress of Mathematics (7ECM). . Zürich, Switzerland: European Mathematical Society Publishing House. ISBN 978-3-03719-176-7
Chuang, J. and Kessar, R. (2002). Symmetric groups, wreath products, Morita equivalences, and Broue's abelian defect group conjecture. Bulletin of the London Mathematical Society, 34(2), pp. 174-184. doi: 10.1112/S0024609301008839
Chuang, J. and King, A. Free resolutions of algebras. Journal of Algebra,
Chuang, J., King, A. and Leinster, T. (2016). On the magnitude of a finite dimensional algebra. Theory and Applications of Categories, 31(3), pp. 63-72.
Chuang, J. and Lazarev, A. (2009). Abstract Hodge decomposition and minimal models for cyclic algebras. Letters in Mathematical Physics, 89(1), pp. 33-49. doi: 10.1007/s11005-009-0314-7
Chuang, J. and Lazarev, A. (2013). Combinatorics and Formal Geometry of the Maurer-Cartan Equation. LETTERS IN MATHEMATICAL PHYSICS, 103(1), pp. 79-112. doi: 10.1007/s11005-012-0586-1
Chuang, J. and Lazarev, A. (2007). Dual Feynman transform for modular operads. Communications in Number Theory and Physics, 1(4), pp. 605-649.
Chuang, J. and Lazarev, A. (2010). Feynman diagrams and minimal models for operadic algebras. Journal of the London Mathematical Society, 81(2), pp. 317-337. doi: 10.1112/jlms/jdp073
Chuang, J. and Lazarev, A. (2011). L-infinity maps and twistings. Homology, Homotopy and Applications, 13(2), pp. 175-195.
Chuang, J. and Lazarev, A. (2018). On the perturbation algebra. Journal of Algebra, 519, pp. 130-148. doi: 10.1016/j.jalgebra.2018.10.032
Chuang, J., Lazarev, A. and Braun, C. (2018). Derived localisation of algebras and modules. Advances in Mathematics, 328, pp. 555-622. doi: 10.1016/j.aim.2018.02.004
Chuang, J., Lazarev, A. and Mannan, W. (2016). Cocommutative coalgebras: homotopy theory and Koszul duality. Homology, Homotopy and Applications, 18(2), pp. 303-336. doi: 10.4310/HHA.2016.v18.n2.a17
Chuang, J., Lazarev, A. and Mannan, W. (2016). Koszul-Morita duality. Journal of Noncommutative Geometry, 10(4), pp. 1541-1557. doi: 10.4171/JNCG/265
Chuang, J. and Meng Tan, K. (2002). Some canonical basis vectors in the basic Uq (SI n)module. Journal of Algebra, 248(2), pp. 765-779. doi: 10.1006/jabr.2001.9030
Chuang, J., Miyachi, H. and Tan, K. M. (2008). Kleshchev's decomposition numbers and branching coefficients in the Fock space. Transactions of the American Mathematical Society, 360(3), pp. 1179-1191. doi: 10.1090/S0002-9947-07-04202-X
Chuang, J., Miyachi, H. and Tan, K. M. (2017). Parallelotope tilings and q-decomposition numbers. Advances in Mathematics, 321, pp. 80-159. doi: 10.1016/j.aim.2017.09.024
Chuang, J., Miyachi, H. and Tan, K. M. (2002). Row and column removal in the q-deformed Fock space. Journal of Algebra, 254(1), pp. 84-91. doi: 10.1016/S0021-8693(02)00062-5
Chuang, J., Miyachi, H. and Tan, K. M. (2004). A v-analogue of Peel's theorem. Journal of Algebra, 280(1), pp. 219-231. doi: 10.1016/j.jalgebra.2004.04.017
Chuang, J. and Rouquier, R. (2008). Derived equivalences for symmetric groups and sl2- categorification. Annals of Mathematics, 167(1), pp. 245-298. doi: 10.4007/annals.2008.167.245
Chuang, J. and Tan, K. M. (2016). Canonical bases for Fock spaces and tensor products. Advances in Mathematics, 302, pp. 159-189. doi: 10.1016/j.aim.2016.07.008
Chuang, J. and Tan, K. M. (2001). On certain blocks of Schur algebras. Bulletin of the London Mathematical Society, 33(2), pp. 157-167.
Chuang, J. and Tan, K. M. (2003). Representations of wreath products of algebras. Mathematical Proceedings of the Cambridge Philosophical Society, 135(3), pp. 395-411. doi: 10.1017/S0305004103006984
Chuang, J. and Turner, W. (2008). Cubist algebras. Advances in Mathematics, 217(4),
Ciulla, F., Mocanu, D., Baronchelli, A., Gonçalves, B., Perra, N. and Vespignani, A. (2012). Beating the news using social media: the case study of American Idol. EPJ Data Science, 1(8), doi: 10.1140/epjds8
Cohnitz, L., De Martino, A., Häusler, W. and Egger, R. (2016). Chiral interface states in p-n graphene junctions. Physical Review B, 94(16), 165443.. doi: 10.1103/PhysRevB.94.165443
Cohnitz, L., De Martino, A., Häusler, W. and Egger, R. (2017). Proximity-induced superconductivity in Landau-quantized graphene monolayers. Physical review B: Condensed matter and materials physics, 96, 140506.. doi: 10.1103/PhysRevB.96.140506
Coletti, E., Forini, V. ORCID: 0000-0001-9726-1423, Nardelli, G., Grignani, G. and Orselli, M. (2004). Exact potential and scattering amplitudes from the tachyon non-linear β-function. Journal of High Energy Physics, 2004, 030.. doi: 10.1088/1126-6708/2004/03/030
Colgain, E. O. and Stefanski, B. (2011). A search for AdS(5) X S-2 IIB supergravity solutions dual to N=2 SCFTs. Journal of High Energy Physics, 2011, p. 61. doi: 10.1007/JHEP10(2011)061
Collins, S. (1992). Studies on the natural fluorescence of wool and wool grease. (Unpublished Doctoral thesis, City University London)
Constantin, A., He, Y. ORCID: 0000-0002-0787-8380 and Lukas, A. (2019). Counting string theory standard models. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 792, pp. 258-262. doi: 10.1016/j.physletb.2019.03.048
Cox, A. (2001). Decomposition numbers for distant Weyl modules. Journal of Algebra, 243(2), pp. 448-472. doi: 10.1006/jabr.2001.8857
Cox, A. (1998). Ext(1) for Weyl modules for q-GL(2, k). Mathematical Proceedings of the Cambridge Philosophical Society, 124, pp. 231-251.
Cox, A. (2000). On the blocks of the infinitesimal Schur algebras. The Quarterly Journal of Mathematics, 50(1), pp. 39-56. doi: 10.1093/qmathj/50.1.39
Cox, A. (1998). The blocks of the q-Schur algebra. Journal of Algebra, 207(1), pp. 306-325.
Cox, A. (2007). The tilting tensor product theorem and decomposition numbers for symmetric groups. Algebras and Representation Theory, 10(4), pp. 307-314. doi: 10.1007/s10468-007-9051-8
Cox, A. ORCID: 0000-0001-9799-3122 and Bowman, C. (2018). Modular decomposition numbers of cyclotomic Hecke and diagrammatic Cherednik algebras: a path theoretic approach. Forum of Mathematics, Sigma, doi: 10.1017/fms.2018.9
Cox, A. and De Visscher, M. (2011). Diagrammatic Kazhdan-Lusztig theory for the (walled) Brauer algebra. Journal of Algebra, 340(1), pp. 151-181. doi: 10.1016/j.jalgebra.2011.05.024
Cox, A., De Visscher, M., Doty, S. and Martin, P. (2008). On the blocks of the walled Brauer algebra. Journal of Algebra, 320(1), pp. 169-212. doi: 10.1016/j.jalgebra.2008.01.026
Cox, A., De Visscher, M. and Martin, P. (2011). Alcove geometry and a translation principle for the Brauer algebra. Journal of Pure and Applied Algebra, 215(4), pp. 335-367. doi: 10.1016/j.jpaa.2010.04.023
Cox, A., De Visscher, M. and Martin, P. (2009). The blocks of the Brauer algebra in characteristic zero. Representation Theory, 13, pp. 272-308. doi: 10.1090/S1088-4165-09-00305-7
Cox, A., De Visscher, M. and Martin, P. (2009). A geometric characterisation of the blocks of the Brauer algebra. Journal of the London Mathematical Society, 80(2), pp. 471-494. doi: 10.1112/jlms/jdp039
Cox, A. and Erdmann, K. (2000). On Ext(2) between Weyl modules for quantum GL(n). Mathematical Proceedings of the Cambridge Philosophical Society, 128, pp. 441-463.
Cox, A., Graham, J. and Martin, P. (2003). The blob algebra in positive characteristic. Journal of Algebra, 266(2), pp. 584-635. doi: 10.1016/S0021-8693(03)00260-6
Cox, A., Martin, P., Parker, A. and Xi, C. (2006). Representation theory of towers of recollement: Theory, notes, and examples. Journal of Algebra, 302(1), pp. 340-360. doi: 10.1016/j.jalgebra.2006.01.009
Cox, A. ORCID: 0000-0001-9799-3122 and Parker, A. (2005). Homomorphisms and Higher Extensions for Schur algebras and symmetric groups. Journal of Algebra and Its Applications (jaa), 4(6), pp. 645-670. doi: 10.1142/S0219498805001460
Cox, A. and Parker, A. (2006). Homomorphisms between Weyl modules for SL3(k). Transactions of the American Mathematical Society (TRAN), 358(9), pp. 4159-4207. doi: 10.1090/S0002-9947-06-03861-X
Cox, M G (1975). Numerical methods for the interpolation and approximation of data by spline functions. (Unpublished Post-Doctoral thesis, City, University of London)
Craven, D. A., Eaton, C. W., Kessar, R. and Linckelmann, M. (2011). The structure of blocks with a Klein four defect group. Mathematische Zeitschrift, 268(1-2), pp. 441-476. doi: 10.1007/s00209-010-0679-4
Dall'Asta, L. and Baronchelli, A. (2006). Microscopic activity patterns in the naming game. Journal of Physics A: Mathematical and General, 39(48), pp. 14851-14867. doi: 10.1088/0305-4470/39/48/002
Dall'Asta, L., Baronchelli, A., Barrat, A. and Loreto, V. (2006). Agreement dynamics on small-world networks. Europhysics Letters, 73(6), pp. 969-975. doi: 10.1209/epl/i2005-10481-7
Dall'Asta, L., Baronchelli, A., Barrat, A. and Loreto, V. (2006). Nonequilibrium dynamics of language games on complex networks. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 74(3), doi: 10.1103/PhysRevE.74.036105
Daniels, P.G. (2007). On the boundary layer structure of differentially heated cavity flow in a stably stratified porous medium. Journal of Fluid Mechanics, 586, pp. 347-370. doi: 10.1017/S0022112007007100
Daniels, P.G. (2010). On the boundary-layer structure of high-Prandtl-number horizontal convection. Journal of Fluid Mechanics, 652, pp. 299-331. doi: 10.1017/S0022112009994125
Daniels, P.G. (2006). Shallow cavity flow in a porous medium driven by differential heating. Journal of Fluid Mechanics, 565, pp. 441-459. doi: 10.1017/S0022112006001868
Daniels, P.G. and Lee, A. T. (1999). On the boundary-layer structure of patterns of convection in rectangular-planform containers. Journal of Fluid Mechanics, 393, pp. 357-380.
Daniels, P.G. and Patterson, J.C. (1997). On the long-wave instability of natural-convection boundary layers. Journal of Fluid Mechanics, 335, pp. 57-73.
Daniels, P.G. and Punpocha, M. (2005). On the boundary-layer structure of cavity flow in a porous medium driven by differential heating. Journal of Fluid Mechanics, 532, pp. 321-344. doi: 10.1017/S0022112005004167
Daniels, P.G. and Weinstein, M. (1996). On finite-amplitude patterns of convection in a rectangular-planform container. Journal of Fluid Mechanics, 317, pp. 111-127.
Danilova, N. (2014). Integration of search theories and evidential analysis to Web-wide Discovery of information for decision support. (Unpublished Doctoral thesis, City, University of London)
Dasgupta, T. and Stefanski, B. (2000). Non-BPS States and Heterotic - Type I' Duality. Nuclear Physics B, 572(1-2), pp. 95-111. doi: 10.1016/S0550-3213(00)00039-0
De Martino, A., Dell'Anna, L. and Egger, R. (2007). Magnetic confinement of massless Dirac fermions in graphene. Physical Review Letters (PRL), 98(6), doi: 10.1103/PhysRevLett.98.066802
De Martino, A. and Egger, R. (2003). Acoustic phonon exchange, attractive interactions, and the Wentzel-Bardeen singularity in single-wall nanotubes. Physical Review B (PRB), 67(23), doi: 10.1103/PhysRevB.67.235418
De Martino, A. and Egger, R. (2001). ESR theory for interacting 1D quantum wires. Europhysics Letters, 56(4), pp. 570-575. doi: 10.1209/epl/i2001-00558-3
De Martino, A. and Egger, R. (2004). Effective low-energy theory of superconductivity in carbon nanotube ropes. Physical Review B (PRB), 70, doi: 10.1103/PhysRevB.70.014508
De Martino, A. and Egger, R. (2005). Rashba spin-orbit coupling and spin precession in carbon nanotubes. Journal of Physics: Condensed Matter, 17(36), doi: 10.1088/0953-8984/17/36/008
De Martino, A. and Egger, R. (2017). Two-electron bound states near a Coulomb impurity in gapped graphene. Physical review B: Condensed matter and materials physics, 95(8), 085418.. doi: 10.1103/PhysRevB.95.085418
De Martino, A., Egger, R. and Gogolin, A. O. (2009). Phonon-phonon interactions and phonon damping in carbon nanotubes. Physical Review B (PRB), 79(20), doi: 10.1103/PhysRevB.79.205408
De Martino, A., Egger, R., Hallberg, K. and Balseiro, C. A. (2002). Spin-orbit coupling and electron spin resonance theory for carbon nanotubes. Physical Review Letters (PRL), 88(20), doi: 10.1103/PhysRevLett.88.206402
De Martino, A., Egger, R., Murphy-Armando, F. and Hallberg, K. (2004). Spin-orbit coupling and electron spin resonance for interacting electrons in carbon nanotubes. Journal of Physics Condensed Matter, 16(17), S1437-S1452. doi: 10.1088/0953-8984/16/17/002
De Martino, A., Egger, R. and Tsvelik, A. M. (2006). Nonlinear magnetotransport in interacting chiral nanotubes.. Physical Review Letters (PRL), 97(7), doi: 10.1103/PhysRevLett.97.076402
De Martino, A., Hütten, A. and Egger, R. (2011). Landau levels, edge states, and strained magnetic waveguides in graphene monolayers with enhanced spin-orbit interaction. Physical Review B (PRB), 84(15), doi: 10.1103/PhysRevB.84.155420
De Martino, A., Klöpfer, D., Matrasulov, D. and Egger, R. (2014). Electric-dipole-induced universality for Dirac fermions in graphene. Physical Review Letters (PRL), 112(18), 186603 - ?. doi: 10.1103/PhysRevLett.112.186603
De Martino, A. and Moriconi, M. (1999). Boundary S-matrix for the Gross-Neveu Model. Physics Letter B, 451(3-4), pp. 354-356. doi: 10.1016/S0370-2693(99)00240-3
De Martino, A. and Moriconi, M. (1998). Tricritical Ising Model with a Boundary. Nuclear Physics B, 528(3), pp. 577-594. doi: 10.1016/S0550-3213(98)00379-4
De Martino, A., Moriconi, M. and Mussardo, G. (1998). Reflection Scattering Matrix of the Ising Model in a Random Boundary Magnetic Field. Nuclear Physics B, 509(3), pp. 615-636. doi: 10.1016/S0550-3213(97)00644-5
De Martino, A. and Musto, R (1995). Knizhnik-Zamolodchikov equation and extended symmetry for stable Hall states. Modern Physics Letters A (mpla), 2051, doi: 10.1142/S0217732395002209
De Martino, A. and Musto, R. (1995). Abelian Hall Fluids and Edge States: a Conformal Field Theory Approach. International Journal of Modern Physics B (ijmpb), 2839, doi: 10.1142/S0217979295001063
De Martino, A., Thorwart, M., Egger, R. and Graham, R. (2005). Exact results for one-dimensional disordered bosons with strong repulsion.. Physical Review Letters (PRL), 94(6), doi: 10.1103/PhysRevLett.94.060402
De Visscher, M. (2002). Extensions of modules for SL(2,K). Journal of Algebra, 254(2), pp. 409-421. doi: 10.1016/S0021-8693(02)00084-4
De Visscher, M. (2008). On the blocks of semisimple algebraic groups and associated generalized Schur algebras. Journal of Algebra, 319(3), pp. 952-965. doi: 10.1016/j.jalgebra.2007.11.015
De Visscher, M. (2005). Quasi-hereditary quotients of finite Chevalley groups and Frobenius kernels. The Quarterly Journal of Mathematics, 56(1), pp. 111-121. doi: 10.1093/qmath/hah025
De Visscher, M. (2006). A note on the tensor product of restricted simple modules for algebraic groups. Journal of Algebra, 303(1), pp. 407-415. doi: 10.1016/j.jalgebra.2005.06.010
De Visscher, M., Bowman, C. and Orellana, R. (2015). The partition algebra and the Kronecker coefficients. Transactions of the American Mathematical Society, 367, pp. 3647-3667. doi: 10.1090/S0002-9947-2014-06245-4
De Visscher, M., Bowman, C. and Orellana, R. (2013). The partition algebra and the Kronecker product (Extended Abstract). Paper presented at the DMTCS Proceedings, 25th International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2013), 24 June - 28 June 2013, Paris, France.
De Visscher, M. and Donkin, S. (2005). On projective and injective polynomial modules. Mathematische Zeitschrift, 251(2), pp. 333-358. doi: 10.1007/s00209-005-0805-x
Dell'Anna, L. and De Martino, A. (2011). Magnetic superlattice and finite-energy Dirac points in graphene. Physical Review B (PRB), 83(15), doi: 10.1103/PhysRevB.83.155449
Dell'Anna, L. and De Martino, A. (2009). Multiple magnetic barriers in graphene. Physical Review B (PRB), 79(4), doi: 10.1103/PhysRevB.79.045420
Dell'Anna, L. and De Martino, A. (2009). Wavevector-dependent spin filtering and spin transport through magnetic barriers in graphene. Phys. Rev. B 80, 155416 (2009), 80(15), doi: 10.1103/PhysRevB.80.155416
Dey, S. and Fring, A. (2013). Bohmian quantum trajectories from coherent states. Physical Review A (PRA), 88(022116), -. doi: 10.1103/PhysRevA.88.022116
Dey, S. and Fring, A. (2012). Squeezed coherent states for noncommutative spaces with minimal length uncertainty relations. Physical Review D - Particles, Fields, Gravitation and Cosmology, 86(6), doi: 10.1103/PhysRevD.86.064038
Dey, S., Fring, A. and Gouba, L. (2012). PT-symmetric non-commutative spaces with minimal volume uncertainty relations. Journal of Physics A: Mathematical and Theoretical, 45(38), doi: 10.1088/1751-8113/45/38/385302
Dey, S., Fring, A. and Hussin, V. (2017). Nonclassicality versus entanglement in a noncommutative space. International Journal of Modern Physics B, 31(1), 1650248.. doi: 10.1142/S0217979216502489
Dey, S., Fring, A. and Khantoul, B. (2013). Hermitian versus non-Hermitian representations for minimal length uncertainty relations. Journal of Physics A: Mathematical and Theoretical, 46(33), doi: 10.1088/1751-8113/46/33/335304
Dey, S., Fring, A. and Mathanaranjan, T. (2014). Non-Hermitian systems of Euclidean Lie algebraic type with real energy spectra. Annals of Physics, 346, pp. 28-41. doi: 10.1016/j.aop.2014.04.002
Dey, S., Fring, A. and Mathanaranjan, T. (2014). Spontaneous PT-Symmetry Breaking for Systems of Noncommutative Euclidean Lie Algebraic Type. International Journal of Theoretical Physics, 54(11), pp. 4027-4033. doi: 10.1007/s10773-014-2447-4
Dey, Sanjib (2014). Solvable Models on Noncommutative Spaces with Minimal Length Uncertainty Relations. (Unpublished Doctoral thesis, City University London)
Donagi, R., He, Y., Ovrut, B. A. and Reinbacher, R. (2005). Higgs doublets, split multiplets and heterotic SU(3)(C) x SU(2)(L) x U(1)(Y) spectra. Physics Letters B, 618(1-4), pp. 259-264. doi: 10.1016/j.physletb.2005.05.004
Donagi, R., He, Y., Ovrut, B. A. and Reinbacher, R. (2004). Moduli dependent spectra of heterotic compactifications. Physics Letters B, 598, pp. 279-284. doi: 10.1016/j.physletb.2004.08.010
Donagi, R., He, Y., Ovrut, B. A. and Reinbacher, R. (2004). The Particle spectrum of heterotic compactifications. Journal of High Energy Physics, 0412(54), doi: 10.1088/1126-6708/2004/12/054
Donagi, R., He, Y., Ovrut, B. A. and Reinbacher, R. (2005). The Spectra of heterotic standard model vacua. Journal of High Energy Physics, 0506(070), doi: 10.1088/1126-6708/2005/06/070
Drukker, N. and Forini, V. ORCID: 0000-0001-9726-1423 (2011). Generalized quark-antiquark potential at weak and strong coupling. Journal of High Energy Physics, 2011, 131.. doi: 10.1007/JHEP06(2011)131
Duncan, M., Gu, W., He, Y. and Zhou, D (2014). The Statistics of Vacuum Geometry. Journal of High Energy Physics, 2014(6), p. 42. doi: 10.1007/JHEP06(2014)042
de Morrison Faria, C. F. and Fring, A. (2006). Time evolution of non-Hermitian Hamiltonian systems. Journal of Physics A: Mathematical and General, 39(29), pp. 9269-9289. doi: 10.1088/0305-4470/39/29/018
Eaton, C.W., Kessar, R., Kuelshammer, B. and Sambale, B. (2014). 2-blocks with abelian defect groups. ADVANCES IN MATHEMATICS, 254, pp. 706-735. doi: 10.1016/j.aim.2013.12.024
Egger, R., De Martino, A., Siedentop, H. and Stockmeyer, E. (2010). Multiparticle equations for interacting Dirac fermions in magnetically confined graphene quantum dots. Journal of Physics A: Mathematical and Theoretical, 43(21), p. 215202. doi: 10.1088/1751-8113/43/21/215202
Eisele, F. (2014). Basic Orders for Defect Two Blocks of ℤpΣn. Communications in Algebra, 42(7), pp. 2890-2907. doi: 10.1080/00927872.2013.773336
Eisele, F. (2013). On the IYB-property in some solvable groups. Archiv der Mathematik, 101(4), pp. 309-318. doi: 10.1007/s00013-013-0569-1
Eisele, F. (2012). P-Adic lifting problems and derived equivalences. Journal of Algebra, 356(1), pp. 90-114. doi: 10.1016/j.jalgebra.2012.01.015
Eisele, F. (2014). The p-adic group ring of. Journal of Algebra, 410, pp. 421-459. doi: 10.1016/j.jalgebra.2014.01.036
Eisele, F., Geline, M., Kessar, R. and Linckelmann, M. (2017). On tate duality and a projective scalar property for symmetric algebras. Pacific Journal of Mathematics, 293(2), pp. 277-300. doi: 10.2140/pjm.2018.293.277
Eisele, F., Kiefer, A. and Van Gelder, I. (2015). Describing units of integral group rings up to commensurability. Journal of Pure and Applied Algebra, 219(7), pp. 2901-2916. doi: 10.1016/j.jpaa.2014.09.031
El-Sherbiny, Hamid Moustafa (1987). The Korteweg-De Vries Equation and its Homologues: A Comparative Analysis II. (Unpublished Doctoral thesis, City, University of London)
ElBahrawy, A., Alessandretti, L., Kandler, A., Pastor-Satorras, R. and Baronchelli, A. (2017). Evolutionary dynamics of the cryptocurrency market. Royal Society Open Science, 4, 170623.. doi: 10.1098/rsos.170623
Ellwood, I., Feng, B., He, Y. and Moeller, N. (2001). The identity string field and the tachyon vacuum. Journal of High Energy Physics, 2001(016), doi: 10.1088/1126-6708/2001/07/016
Faria, C. F. D. M. and Fring, A. (2006). Isospectral Hamiltonians from Moyal products. Czechoslovak Journal of Physics, 56(9), pp. 899-908. doi: 10.1007/s10582-006-0386-x
Faria, C. F. D. M. and Fring, A. (2007). Non-Hermitian Hamiltonians with real eigenvalues coupled to electric fields: From the time-independent to the time-dependent quantum mechanical formulation. Laser Physics, 17(4), pp. 424-437. doi: 10.1134/S1054660X07040196
Faria, C. F. D. M., Fring, A. and Schrader, R. (1999). Analytical treatment of stabilization. Laser Physics, 9(1), pp. 379-387.
Faria, C. F. D. M., Fring, A. and Schrader, R. (2000). Existence criteria for stabilization from the scaling behaviour of ionization probabilities. Journal of Physics B: Atomic, Molecular and Optical Physics, 33(8), doi: 10.1088/0953-4075/33/8/316
Faria, C. F. D. M., Fring, A. and Schrader, R. (1998). On the influence of pulse shapes on ionization probability. Journal of Physics B: Atomic, Molecular and Optical Physics, 31(3), p. 449. doi: 10.1088/0953-4075/31/3/013
Faria, C. F. D. M., Fring, A. and Schrader, R. (1999). Stabilization not for certain and the usefulness of bounds. AIP Conference Proceedings, 525, pp. 150-161. doi: 10.1063/1.1291934
Farrell, N. (2017). Rationality of blocks of quasi-simple finite groups. (Unpublished Doctoral thesis, City, University of London)
Favier, B., Louve, L., Edmunds, L. J., Silvers, L. J. and Proctor, M. R. E. (2012). How can large-scale twisted magnetic structures naturally emerge from buoyancy instabilities?. Monthly Notices of the Royal Astronomical Society, 426(4), pp. 3349-3359. doi: 10.1111/j.1365-2966.2012.21920.x
Favier, B. and Proctor, M. R. E. (2013). Growth rate degeneracies in kinematic dynamos. Physical Review E (PRE), 88(031001), doi: 10.1103/PhysRevE.88.031001
Favier, B. and Proctor, M. R. E. (2013). Kinematic dynamo action in square and hexagonal patterns. Physical Review E (PRE), 88(053011), doi: 10.1103/PhysRevE.88.053011
Feng, B., Franco, S., Hanany, A. and He, Y. (2002). Symmetries of toric duality. Journal of High Energy Physics, 2002(76), doi: 10.1088/1126-6708/2002/12/076
Feng, B., Franco, S., Hanany, A. and He, Y. (2003). UnHiggsing the del Pezzo. Journal of High Energy Physics, 2003(08), doi: 10.1088/1126-6708/2003/08/058
Feng, B., Hanany, A. and He, Y. (2007). Counting gauge invariants: The Plethystic program. Journal of High Energy Physics, 0703(090), doi: 10.1088/1126-6708/2007/03/090
Feng, B., Hanany, A. and He, Y. (2001). D-brane gauge theories from toric singularities and toric duality. Nuclear Physics B, 595(1-2), pp. 165-200. doi: 10.1016/S0550-3213(00)00699-4
Feng, B., Hanany, A. and He, Y. (2001). Phase structure of D-brane gauge theories and toric duality. Journal of High Energy Physics, 2001(040), doi: 10.1088/1126-6708/2001/08/040
Feng, B., Hanany, A. and He, Y. (1999). The Z(k) x D(k-prime) brane box model. Journal of High Energy Physics, 1999(JHEP09), doi: 10.1088/1126-6708/1999/09/011
Feng, B., Hanany, A. and He, Y. (2000). Z-D Brane Box Models and Non-Chiral Dihedral Quivers. In: Golfand, Y. and Shifman, M. A. (Eds.), The many faces of the superworld. (pp. 280-306). Singapore: World Scientific Pub Co Inc. ISBN 9810242069
Feng, B., Hanany, A., He, Y. and Iqbal, A. (2003). Quiver theories, soliton spectra and Picard-Lefschetz transformations. Journal of High Energy Physics, 2003, 056 - 056. doi: 10.1088/1126-6708/2003/02/056
Feng, B., Hanany, A., He, Y. and Prezas, N. (2001). Discrete torsion, covering groups and quiver diagrams. Journal of High Energy Physics, 2001(037), doi: 10.1088/1126-6708/2001/04/037
Feng, B., Hanany, A., He, Y. and Prezas, N. (2001). Discrete torsion, non-abelian orbifolds and the Schur multiplier. Journal of High Energy Physics, 2001(JHEP01), doi: 10.1088/1126-6708/2001/01/033
Feng, B., Hanany, A., He, Y. and Prezas, N. (2002). Stepwise projection: toward brane setups for generic orbifold singularities. Journal of High Energy Physics, 2002(040), doi: 10.1088/1126-6708/2002/01/040
Feng, B., Hanany, A., He, Y. and Uranga, A. M. (2001). Toric duality as Seiberg duality and brane diamonds. Journal of High Energy Physics, 2001(035), doi: 10.1088/1126-6708/2001/12/035
Feng, B. and He, Y. (2003). Seiberg duality in matrix models II. Physics Letters B, 562(3-4), pp. 339-346. doi: 10.1016/S0370-2693(03)00597-5
Feng, B., He, Y., Karch, A. and Uranga, A. M. (2001). Orientifold dual for stuck NS5-branes. Journal of High Energy Physics, 2001(065), doi: 10.1088/1126-6708/2001/06/065
Feng, B., He, Y., Kennaway, K.D. and Vafa, C. (2008). Dimer models from mirror symmetry and quivering amoebae. Advances in Theoretical and Mathematical Physics, 12(3), pp. 489-545. doi: 10.4310/ATMP.2008.v12.n3.a2
Feng, B., He, Y. and Lam, F. (2004). On correspondences between toric singularities and (p,q) webs. Nuclear Physics B, 701, pp. 334-356. doi: 10.1016/j.nuclphysb.2004.08.048
Feng, B., He, Y. and Moeller, N. (2001). Testing the uniqueness of the open bosonic string field theory vacuum (MIT-CTP-3097). Cambridge, USA: Massachusetts Institute of Technology.
Feng, B., He, Y. and Moeller, N. (2002). Zeeman spectroscopy of the star algebra. Journal of High Energy Physics, 2002(041), doi: 10.1088/1126-6708/2002/05/041
Feng, B., He, Y. and Moeller, N. (2002). The spectrum of the Neumann matrix with zero modes. Journal of High Energy Physics, 2002(038), doi: 10.1088/1126-6708/2002/04/038
Ferrier, M., De Martino, A., Kasumov, A., Gueron, S., Kociak, M., Egger, R. and Bouchiat, H. (2004). Superconductivity in ropes of carbon nanotubes. Solid State Communications, 131, doi: 10.1016/j.ssc.2004.05.044
Forcella, D., Hanany, A., He, Y. and Zaffaroni, A. (2008). The Master Space of N=1 Gauge Theories. Journal of High Energy Physics, 2008(JHEP08), 012 - 012. doi: 10.1088/1126-6708/2008/08/012
Forcella, D., Hanany, A., He, Y. and Zaffaroni, A. (2008). Mastering the Master Space. Letters in Mathematical Physics, 85(2-3), pp. 163-171. doi: 10.1007/s11005-008-0255-6
Forini, V. ORCID: 0000-0001-9726-1423 (2010). Quark-antiquark potential in AdS at one loop. Journal of High Energy Physics, 2010, 79.. doi: 10.1007/JHEP11(2010)079
Forini, V. ORCID: 0000-0001-9726-1423 and Drukker, N. (2012). Generalized quark-antiquark potential in AdS/CFT. Fortschritte der Physik - Progress of Physics, 60(9-10), pp. 1019-1025. doi: 10.1002/prop.201200022
Forini, V. ORCID: 0000-0001-9726-1423, Grignani, G. and Nardelli, G. (2005). A new rolling tachyon solution of cubic string field theory. Journal of High Energy Physics, 2005, 079.. doi: 10.1088/1126-6708/2005/03/079
Forini, V. ORCID: 0000-0001-9726-1423, Grignani, G. and Nardelli, G. (2006). A solution to the 4-tachyon off-shell amplitude in cubic string field theory. Journal of High Energy Physics, 2006, 053.. doi: 10.1088/1126-6708/2006/04/053
Forini, V. ORCID: 0000-0001-9726-1423, Puletti, V. G. M., Griguolo, L., Seminara, D. and Vescovi, E. (2016). Precision calculation of 1/4-BPS Wilson loops in AdS(5) x S-5. Journal of High Energy Physics, 2016, 105.. doi: 10.1007/JHEP02(2016)105
Forini, V. ORCID: 0000-0001-9726-1423, Puletti, V. G. M., Griguolo, L., Seminara, D. and Vescovi, E. (2015). Remarks on the geometrical properties of semiclassically quantized strings. Journal of Physics A: Mathematical and Theoretical, 48(47), 475401.. doi: 10.1088/1751-8113/48/47/475401
Forini, V. ORCID: 0000-0001-9726-1423, Puletti, V. G. M., Pawellek, M. and Vescovi, E. (2015). One-loop spectroscopy of semiclassically quantized strings: bosonic sector. Journal of Physics A: Mathematical and Theoretical, 48(8), 085401.. doi: 10.1088/1751-8113/48/8/085401
Forini, V. ORCID: 0000-0001-9726-1423, Puletti, V. G. M. and Sax, O. O. (2013). The generalized cusp in AdS(4) x CP3 and more one-loop results from semiclassical strings. Journal of Physics A: Mathematical and Theoretical, 46(11), 115402.. doi: 10.1088/1751-8113/46/11/115402
Forini, V. ORCID: 0000-0001-9726-1423, Tseytlin, A. A. and Vescovi, E. (2017). Perturbative computation of string one-loop corrections to Wilson loop minimal surfaces in AdS(5) x S-5. Journal of High Energy Physics, 2017, 3.. doi: 10.1007/JHEP03(2017)003
Franco, S., Galloni, D. and He, Y. (2012). Towards the continuous limit of cluster integrable systems. Journal of High Energy Physics, 2012(9), p. 20. doi: 10.1007/JHEP09(2012)020
Franco, S., Hanany, A. and He, Y. (2004). A trio of dualities: Walls, trees and cascades. Fortschritte der Physik, 52, pp. 540-547. doi: 10.1002/prop.200310142
Franco, S., Hanany, A., He, Y. and Kazakopoulos, P. (2003). Duality walls, duality trees and fractional branes (MIT-CTP-3386, UPR-1046-T). Cambridge, Massachusetts: Massachusetts Institute of Technology.
Franco, S., He, Y-H., Sun, C. and Xiao, Y. (2017). A comprehensive survey of brane tilings. International Journal of Modern Physics A, 32(23-24), 1750142.. doi: 10.1142/S0217751X17501421
Franco, S., He, Y., Herzog, C. and Walcher, J. (2004). Chaotic cascades for D-branes on singularities. Paper presented at the Cargese Summer School, Cargèse, France, 7th - 19th 2004.
Franco, S., He, Y., Herzog, C. and Walcher, J. (2004). Chaotic duality in string theory. Physical Review D (PRD), 70(4), doi: 10.1103/PhysRevD.70.046006
Fring, A. (1996). Braid Relations in Affine Toda Field Theory. International Journal of Modern Physics A (ijmpa), 11, pp. 1337-1352.
Fring, A. (2015). E2-quasi-exact solvability for non-Hermitian models. Journal of Physics A: Mathematical and General, 48(14), pp. 145301-145320. doi: 10.1088/1751-8113/48/14/145301
Fring, A. (2002). Mutually local fields from form factors. International Journal of Modern Physics B (IJMBP), 16(14-15), pp. 1915-1924. doi: 10.1142/S0217979202011639
Fring, A. (2013). PT-symmetric deformations of integrable models. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(1989), p. 20120046. doi: 10.1098/rsta.2012.0046
Fring, A. (2007). PT-symmetric deformations of the Korteweg-de Vries equation. Journal of Physics A: Mathematical and Theoretical, 40(15), pp. 4215-4224. doi: 10.1088/1751-8113/40/15/012
Fring, A. (2007). PT-symmetry and Integrability. Acta Polytechnica, 47, pp. 44-49.
Fring, A. (2009). Particles versus fields in PT-symmetrically deformed integrable systems. Pramana, 73(2), pp. 363-373. doi: 10.1007/s12043-009-0128-2
Fring, A. (2005). Supersymmetric integrable scattering theories with unstable particles. Journal of High Energy Physics, 2005(JHEP01), doi: 10.1088/1126-6708/2005/01/030
Fring, A. (2016). A Unifying E2-Quasi Exactly Solvable Model. Springer Proceedings in Physics, pp. 235-248. doi: 10.1007/978-3-319-31356-6_15
Fring, A. (2015). A new non-Hermitian E2-quasi-exactly solvable model. Physics Letters A: General, Atomic and Solid State Physics, 379(10-11), pp. 873-876. doi: 10.1016/j.physleta.2015.01.008
Fring, A. (2006). A note on the integrability of non-Hermitian extensions of Calogero-Moser-Sutherland models. Modern Physics Letters A (MPLA), 21(8), pp. 691-699. doi: 10.1142/S0217732306019682
Fring, A., Bender, C. and Komijani, J. (2014). Nonlinear eigenvalue problems. Journal of Physics A: Mathematical and Theoretical, 47(23), p. 235204. doi: 10.1088/1751-8113/47/23/235204
Fring, A. and Dey, S. (2014). Noncommutative quantum mechanics in a time-dependent background. Physical Review D - Particles, Fields, Gravitation and Cosmology, 90, 084005 -084019. doi: 10.1103/PhysRevD.90.084005
Fring, A. and Dey, S. (2013). The Two-dimensional Harmonic Oscillator on a Noncommutative Space with Minimal Uncertainties. Acta Polytechnica, 2013(3), pp. 268-276.
Fring, A., Dey, S. and Gouba, L. (2015). Milne quantization for non-Hermitian systems. Journal of Physics A: Mathematical and Theoretical, 48, p. 40. doi: 10.1088/1751-8113/48/40/40FT01
Fring, A., Dey, S., Gouba, L. and Castro, P. G. (2013). Time-dependent q-deformed coherent states for generalized uncertainty relations. Physical Review D: Particles, Fields, Gravitation and Cosmology, 87(8), doi: 10.1103/PhysRevD.87.084033
Fring, A., Dey, S. and Hussin, V. (2018). A squeezed review on coherent states and nonclassicality for non-Hermitian systems with minimal length. Springer Proceedings in Physics,, 205, pp. 209-242. doi: 10.1007/978-3-319-76732-1
Fring, A. and Frith, T. (2017). Exact analytical solutions for time-dependent Hermitian Hamiltonian systems from static unobservable non-Hermitian Hamiltonians. Physical Review A, 95, 010102(R). doi: 10.1103/PhysRevA.95.010102
Fring, A. and Frith, T. (2017). Mending the broken PT-regime via an explicit time-dependent Dyson map. Physics Letters A, 381(29), pp. 2318-2323. doi: 10.1016/j.physleta.2017.05.041
Fring, A. ORCID: 0000-0002-7896-7161 and Frith, T. (2018). Quasi-exactly solvable quantum systems with explicitly time-dependent Hamiltonians. Physics Letters A, doi: 10.1016/j.physleta.2018.10.043
Fring, A. ORCID: 0000-0002-7896-7161 and Frith, T. (2018). Solvable two-dimensional time-dependent non-Hermitian quantum systems with infinite dimensional Hilbert space in the broken PT-regime. Journal of Physics A: Mathematical and General, 51(26), 265301.. doi: 10.1088/1751-8121/aac57b
Fring, A., Gouba, L. and Bagchi, B. (2010). Minimal areas from q-deformed oscillator algebras. Journal of Physics A: Mathematical and General, 43(42), doi: 10.1088/1751-8113/43/42/425202
Fring, A., Gouba, L. and Scholtz, F. G. (2010). Strings from position-dependent noncommutativity. Journal of Physics A: Mathematical and General, 43(34), doi: 10.1088/1751-8113/43/34/345401
Fring, A., Johnson, P. R., Kneipp, M. A. C. and Olive, D. I. (1994). Vertex operators and soliton time delays in affine Toda field theory. Nuclear Physics B, 430(3), pp. 597-614. doi: 10.1016/0550-3213(94)90161-9
Fring, A. and Korff, C. (2005). Affine Toda field theories related to Coxeter groups of noncrystallographic type. Nuclear Physics B, 729(3), pp. 361-386. doi: 10.1016/j.nuclphysb.2005.08.044
Fring, A. and Korff, C. (2000). Colour valued Scattering Matrices. Physics Letters B, 477(1-3), pp. 380-386. doi: 10.1016/S0370-2693(00)00226-4
Fring, A. and Korff, C. (2004). Exactly solvable potentials of Calogero type for q-deformed Coxeter groups. Journal of Physics A: Mathematical and General, 37(45), pp. 10931-10949. doi: 10.1088/0305-4470/37/45/012
Fring, A. and Korff, C. (2000). Large and small Density Approximations to the thermodynamic Bethe Ansatz. Nuclear Physics B, 579(3), pp. 617-631. doi: 10.1016/S0550-3213(00)00250-9
Fring, A. and Korff, C. (2006). Non-crystallographic reduction of generalized Calogero-Moser models. Journal of Physics A: Mathematical and General, 39(5), pp. 1115-1131. doi: 10.1088/0305-4470/39/5/007
Fring, A., Korff, C. and Schulz, B. J. (2000). On the universal representation of the scattering matrix of affine Toda field theory. Nuclear Physics B, 567(3), pp. 409-453. doi: 10.1016/S0550-3213(99)00578-7
Fring, A., Korff, C. and Schulz, B. J. (1999). The ultraviolet behaviour of integrable quantum field theories, affine Toda field theory. Nuclear Physics B, 549(3), pp. 579-612. doi: 10.1016/S0550-3213(99)00216-3
Fring, A., Kostrykin, V. and Schrader, R. (1997). Ionization probabilities through ultra-intense fields in the extreme limit. Journal of Physics A: Mathematical and Theoretical, 30(24), pp. 8599-8610. doi: 10.1088/0305-4470/30/24/020
Fring, A., Kostrykin, V. and Schrader, R. (1996). On the absence of bound-state stabilization through short ultra-intense fields. Journal of Physics B: Atomic, Molecular and Optical Physics, 29, p. 5651. doi: 10.1088/0953-4075/29/23/011
Fring, A. and Köberle, R. (1994). Affine Toda Field Theory in the Presence of Reflecting Boundaries. Nuclear Physics B, 419(3), pp. 647-662. doi: 10.1016/0550-3213(94)90349-2
Fring, A. and Köberle, R. (1995). Boundary Bound States in Affine Toda Field Theory. International Journal of Modern Physics A (ijmpa), 10, pp. 739-752.
Fring, A. and Köberle, R. (1994). Factorized Scattering in the Presence of Reflecting Boundaries. Nuclear Physics B, 421(1), pp. 159-172. doi: 10.1016/0550-3213(94)90229-1
Fring, A., Liao, H. C. and Olive, D. I. (1991). The mass spectrum and coupling in affine Toda theories. Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, 266(1-2), pp. 82-86.
Fring, A. and Manojlovic, N. (2006). G(2)-Calogero-Moser Lax operators from reduction. Journal of Nonlinear Mathematical Physics, 13(4), pp. 467-478. doi: 10.2991/jnmp.2006.13.4.1
Fring, A. and Moussa, M. (2016). Non-Hermitian Swanson model with a time-dependent metric. Physical Review A, 94, 042128.. doi: 10.1103/PhysRevA.94.042128
Fring, A. and Moussa, M. (2016). Unitary quantum evolution for time-dependent quasi-Hermitian systems with nonobservable Hamiltonians. Physical review. A, General physics, 93, 042114. doi: 10.1103/PhysRevA.93.042114
Fring, A., Mussardo, G. and Simonetti, P. (1993). Form Factors for Integrable Lagrangian Field Theories, the Sinh-Gordon Model. Nuclear Physics B, 393(1-2), pp. 413-441. doi: 10.1016/0550-3213(93)90252-K
Fring, A., Mussardo, G. and Simonetti, P. (1993). Form Factors of the Elementary Field in the Bullough-Dodd Model. Physics Letters B, 307(1-2), pp. 83-90. doi: 10.1016/0370-2693(93)90196-O
Fring, A. and Olive, D. I. (1992). The fusing rule and the scattering matrix of affine Toda theory. Nuclear Physics B, 379(1-2), pp. 429-447.
Fring, A. and Smith, M. (2010). Antilinear deformations of Coxeter groups, an application to Calogero models. Journal of Physics A: Mathematical and Theoretical, 43(32), doi: 10.1088/1751-8113/43/32/325201
Fring, A. and Smith, M. (2012). Non-Hermitian multi-particle systems from complex root spaces. Journal of Physics A: Mathematical and General, 45(8), doi: 10.1088/1751-8113/45/8/085203
Fring, A. and Smith, M. (2011). PT Invariant Complex E (8) Root Spaces. INTERNATIONAL JOURNAL OF THEORETICAL PHYSICS, 50(4), pp. 974-981. doi: 10.1007/s10773-010-0542-8
Fring, A. and Znojil, M. (2008). PT-symmetric deformations of Calogero models. Journal of Physics A: Mathematical and General, 41(19), doi: 10.1088/1751-8113/41/19/194010
Gabella, M., He, Y. and Lukas, A. (2008). An abundance of heterotic vacua. Journal of High Energy Physics, 12(027), doi: 10.1088/1126-6708/2008/12/027
Gaberdiel, M. R. and Stefanski, B. (2000). Dirichlet Branes on Orbifolds. Nuclear Physics B, 578(1-2), pp. 58-84. doi: 10.1016/S0550-3213(99)00813-5
Gao, P., He, Y. and Yau, S-T. (2014). Extremal Bundles on Calabi-Yau Threefolds. Communications in Mathematical Physics, 336(3), pp. 1167-1200. doi: 10.1007/s00220-014-2271-y
Ghosh, T. K., De Martino, A., Häusler, W., Dell'Anna, L. and Egger, R. (2008). Conductance quantization and snake states in graphene magnetic waveguides. Physical Review B (PRB), 77(8), doi: 10.1103/PhysRevB.77.081404
Golbabai,, A. (1983). Axisymmetric Rayleigh-Benard convection. (Unpublished Doctoral thesis, City University London)
Goncalves de Assis, P. E (2009). Non-Hermitian Hamiltonians in Field Theory. (Doctoral thesis, City University London)
Gray, J., Hanany, A., He, Y., Jejjala, V. and Mekareeya, N. (2008). SQCD: A Geometric Apercu. Journal of High Energy Physics, 2008(JHEP05), 099 - 099. doi: 10.1088/1126-6708/2008/05/099
Gray, J., He, Y., Ilderton, A. and Lukas, A. (2009). STRINGVACUA: A Mathematica Package for Studying Vacuum Configurations in String Phenomenology. Computer Physics Communications Package, 180(1), pp. 107-119. doi: 10.1016/j.cpc.2008.08.009
Gray, J., He, Y., Ilderton, A. and Lukas, A. (2007). A new method for finding vacua in string phenomenology. Journal of High Energy Physics, 0707(023), doi: 10.1088/1126-6708/2007/07/023
Gray, J., He, Y., Jejjala, V., Jurke, B., Nelson, B. D. and Simon, J. (2012). Necessary conditions on Calabi-Yau manifolds for large volume vacua. Physical Review D, 86(10), 101901(R). doi: 10.1103/PhysRevD.86.101901
Gray, J., He, Y., Jejjala, V. and Nelson, B. D. (2006). Exploring the vacuum geometry of N=1 gauge theories. Nuclear Physics B, 750, pp. 1-27. doi: 10.1016/j.nuclphysb.2006.06.001
Gray, J., He, Y., Jejjala, V. and Nelson, B. D. (2007). Vacuum Geometry and the Search for New Physics. Physics Letters B, 638(2-3), pp. 253-257. doi: 10.1016/j.physletb.2006.05.026
Gray, J., He, Y. and Lukas, A. (2006). Algorithmic Algebraic Geometry and Flux Vacua. Journal of High Energy Physics, 0609(031), doi: 10.1088/1126-6708/2006/09/031
Hadjichrysanthou, C. (2012). Evolutionary models in structured populations. (Unpublished Doctoral thesis, City University London)
Hadjichrysanthou, C., Broom, M. and Kiss, I. Z. (2012). Approximating evolutionary dynamics on networks using a Neighbourhood Configuration model. Journal of Theoretical Biology, 312, pp. 13-21. doi: 10.1016/j.jtbi.2012.07.015
Hadjichrysanthou, C., Broom, M. and Rychtar, J. (2011). Evolutionary Games on Star Graphs Under Various Updating Rules. Dynamic Games and Applications, 1(3), pp. 386-407. doi: 10.1007/s13235-011-0022-7
Hadjichrysanthou, C., Broom, M. ORCID: 0000-0002-1698-5495 and Rychtar, J. (2018). Models of kleptoparasitism on networks: the effect of population structure on food stealing behaviour. Journal of Mathematical Biology, 76(6), pp. 1465-1488. doi: 10.1007/s00285-017-1177-7
Hakobyan, L., Lumsden, J., O'Sullivan, D. and Bartlett, H. (2013). Mobile assistive technologies for the visually impaired. Survey of Ophthalmology, 58(6), doi: 10.1016/j.survophthal.2012.10.004
Hakobyan, L., Lumsden, J., O'Sullivan, D. and Bartlett, H. (2012). Understanding the IT-Related Attitudes and Needs of Persons with Age-Related Macular Degeneration: A Case Study. Paper presented at the Understanding the IT-Related Attitudes and Needs of Persons with Age-Related Macular Degeneration: A Case Study.
Halu, A., Zhao, K., Baronchelli, A. and Bianconi, G. (2013). Connect and win: The role of social networks in political elections. Europhysics Letters, 102(1), doi: 10.1209/0295-5075/102/16002
Hanany, A. and He, Y. (2011). Chern-Simons: Fano and Calabi-Yau. Advances in High Energy Physics, 2011, doi: 10.1155/2011/204576
Hanany, A. and He, Y. (2008). M2-Branes and Quiver Chern-Simons: A Taxonomic Study (Imperial/TP/08/AH/10). London: Imperial College.
Hanany, A. and He, Y. (2001). A Monograph on the classification of the discrete subgroups of SU(4). Journal of High Energy Physics, 2001(JHEP02), doi: 10.1088/1126-6708/2001/02/027
Hanany, A. and He, Y. (1999). Non-Abelian finite gauge theories. Journal of High Energy Physics, 1999(JHEP02), doi: 10.1088/1126-6708/1999/02/013
Hanany, A., He, Y., Jejjala, V., Pasukonis, J. and Ramgoolam, S. (2011). The Beta Ansatz: A Tale of Two Complex Structures. Journal of High Energy Physics, 2011(6), doi: 10.1007/JHEP06(2011)056
Hanany, A., He, Y., Jejjala, V., Pasukonis, J., Ramgoolam, S. and Rodriguez-Gomez, D. (2012). Invariants of toric seiberg duality. International Journal of Modern Physics A (ijmpa), 27(1), p. 1250002. doi: 10.1142/S0217751X12500029
Hanany, A., He, Y., Sun, C. and Sypsas, S. (2013). Superconformal Block Quivers, Duality Trees and Diophantine Equations. Journal of High Energy Physics, 2013(11), p. 17. doi: 10.1007/JHEP11(2013)017
Harrison, M. D. and Broom, M. (2009). A game-theoretic model of interspecific brood parasitism with sequential decisions. Journal of Theoretical Biology, 256(4), pp. 504-517. doi: 10.1016/j.jtbi.2008.08.033
Hauenstein, J., He, Y. and Mehta, D. (2013). Numerical elimination and moduli space of vacua. Journal of High Energy Physics, 2013(9), p. 83. doi: 10.1007/JHEP09(2013)083
He, Y-H. ORCID: 0000-0002-0787-8380, Huang, Z., Probst, M. and Read, J. (2018). Yang-Mills theory and the ABC conjecture. International Journal of Modern Physics A, 33(13), 1850053.. doi: 10.1142/S0217751X18500537
He, Y-H. ORCID: 0000-0002-0787-8380, Jejjala, V. and Minic, D. (2010). On the Physics of the Riemann Zeros. Journal of Physics: Conference Series, 462, 012036.. doi: 10.1088/1742-6596/462/1/012036
He, Y-H., Jejjala, V. and Pontiggia, L. (2017). Patterns in Calabi-Yau Distributions. Communications in Mathematical Physics, 354(2), pp. 477-524. doi: 10.1007/s00220-017-2907-9
He, Y-H. and McKay, J.M. (2017). Moonshine and the Meaning of Life. Contemporary Mathematics, 694, pp. 1-2. doi: 10.1090/conm/694
He, Y-H. ORCID: 0000-0002-0787-8380, Seong, R-K. and Yau, S-T. (2018). Calabi-Yau Volumes and Reflexive Polytopes. Communications in Mathematical Physics, 361(1), pp. 155-204. doi: 10.1007/s00220-018-3128-6
He, Y. (2012). Bipartita: Physics, Geometry and Number Theory. In: Bai, C., Gazeau, J-P. and Ge, M-L. (Eds.), Symmetries and Groups in Contemporary Physics. (pp. 321-326). World Scientific Publishing. ISBN 978-981-4518-54-3
He, Y. (2013). Calabi-Yau Geometries: Algorithms, Databases, and Physics. International Journal of Modern Physics A (ijmpa), 28(21), p. 1330032. doi: 10.1142/S0217751X13300329
He, Y. (2003). G(2) quivers. Journal of High Energy Physics, 2003(02), doi: 10.1088/1126-6708/2003/02/023
He, Y. (2011). Graph Zeta Function and Gauge Theories. Journal of High Energy Physics, 2011(3), 064 - 064. doi: 10.1007/JHEP03(2011)064
He, Y. (2004). Lectures on D-branes, gauge theories and Calabi-Yau singularities (UPR-1086-T). Phildelphia, USA: University of Pennsylvania.
He, Y. (2017). Machine-learning the string landscape. Physics Letters B, 774, pp. 564-568. doi: 10.1016/j.physletb.2017.10.024
He, Y. (2002). On algebraic singularities, finite graphs and D-brane gauge theories: A String theoretic perspective (UPR-1011-T). Philadelphia, USA: The University of Pennsylvania.
He, Y. (2011). Polynomial Roots and Calabi-Yau Geometries. Advances in High Energy Physics, 2011, p. 719672. doi: 10.1155/2011/719672
He, Y. (2010). An algorithmic approach to string phenomenology. Modern Physics Letters A, 25(2), p. 79. doi: 10.1142/S0217732310032731
He, Y. and Jejjala, V. (2003). Modular matrix models (UPR-1048-T, VPI-IPPAP-03-12). Philadelphia: The University of Pennsylvania.
He, Y., Jejjala, V., Matti, C. and Nelson, B. D. (2016). Testing R-parity with geometry. Journal of High Energy Physics(3), 79.. doi: 10.1007/JHEP03(2016)079
He, Y., Jejjala, V., Matti, C. and Nelson, B. D. (2014). Veronese Geometry and the Electroweak Vacuum Moduli Space. Physics Letters B, 736, pp. 20-25. doi: 10.1016/j.physletb.2014.06.072
He, Y., Jejjala, V. and Minic, D. (2009). Eigenvalue Density, Li's Positivity, and the Critical Strip (VPI-IPNAS-09-03). Blacksburg, USA: Virgina Tech, IPNAS.
He, Y., Jejjala, V. and Minic, D. (2016). From Veneziano to Riemann: A string theory statement of the Riemann hypothesis. International Journal of Modern Physics A, 31(36), 1650201.. doi: 10.1142/S0217751X16502018
He, Y., Jejjela, V. and Rodriguez-Gomez, D. (2012). Brane geometry and dimer models. Journal of High Energy Physics, 2012(6), p. 143. doi: 10.1007/JHEP06(2012)143
He, Y., Kreuzer, M., Lee, S-J. and Lukas, A. (2011). Heterotic bundles on Calabi-Yau manifolds with small Picard number. Journal of High Energy Physics, 2011(12), p. 39. doi: 10.1007/JHEP12(2011)039
He, Y. and Lee, S-J. (2012). Quiver structure of heterotic moduli. Journal of High Energy Physics, 2012(11), p. 119. doi: 10.1007/JHEP11(2012)119
He, Y., Lee, S-J. and Lukas, A. (2010). Heterotic models from vector bundles on toric Calabi-Yau manifolds. Journal of High Energy Physics, 2010(5), p. 71. doi: 10.1007/JHEP05(2010)071
He, Y., Lee, S-J., Lukas, A. and Sun, C. (2014). Heterotic Model Building: 16 Special Manifolds. Journal of High Energy Physics, 2014(6), p. 77. doi: 10.1007/JHEP06(2014)077
He, Y., Matti, C. and Sun, C. (2014). The Scattering Variety. Journal of High Energy Physics, 2014(10), p. 135. doi: 10.1007/JHEP10(2014)135
He, Y. and McKay, J. (2013). N=2 gauge theories: Congruence subgroups, coset graphs, and modular surfaces. Journal of Mathematical Physics, 54(1), 012301. doi: 10.1063/1.4772976
He, Y. and McKay, J.M. (2014). Eta Products, BPS States and K3 Surfaces. Journal of High Energy Physics, 2014(1), p. 113. doi: 10.1007/JHEP01(2014)113
He, Y., McKay, J.M. and Read, J. (2013). Modular Subgroups, Dessins d'Enfants and Elliptic K3 Surfaces. LMS Journal of Computation and Mathematics, 16, pp. 271-318. doi: 10.1112/S1461157013000119
He, Y., Mehta, D., Niemerg, M., Rummel, M. and Valeanu, A. (2013). Exploring the potential energy landscape over a large parameter-space. The Journal of High Energy Physics, doi: 10.1007/JHEP07(2013)050
He, Y., Ovrut, B. A. and Reinbacher, R. (2004). The moduli of reducible vector bundles. Journal of High Energy Physics, 0403, 043 - 043. doi: 10.1088/1126-6708/2004/03/043
He, Y. and Read, J. (2015). Dessins d'enfants in N=2 generalised quiver theories. Journal of High Energy Physics(8), 85.. doi: 10.1007/JHEP08(2015)085
He, Y. and Read, J. (2015). Hecke Groups, Dessins d'Enfants and the Archimedean Solids. Frontiers in Physics, 3, .91. doi: 10.3389/fphy.2015.00091
He, Y., Schwarz, J. H., Spradlin, M. and Volovich, A. (2003). Explicit formulas for Neumann coefficients in the plane wave geometry. Physical Review D (PRD), 67(8), doi: 10.1103/PhysRevD.67.086005
He, Y. and Song, J. S. (2000). Of McKay correspondence, nonlinear sigma model and conformal field theory. Advances in Theoretical and Mathematical Physics, 4(4), pp. 747-790.
He, Y. and van Loon, M. (2014). Gauge Theories, Tessellations & Riemann Surfaces. Journal of High Energy Physics, 2014(6), p. 53. doi: 10.1007/JHEP06(2014)053
He, Yang-Hui ORCID: 0000-0002-0787-8380 (2018). Quiver Gauge Theories: Finitude and Trichotomoty. Mathematics, 6(12), 291.. doi: 10.3390/math6120291
Hewlett, J. and He, Y. (2010). Probing the Space of Toric Quiver Theories. Journal of High Energy Physics, 2010(3), doi: 10.1007/JHEP03(2010)007
Hickey, M., Phillips, J.P. and Kyriacou, P. A. (2015). The effect of vascular changes on the photoplethysmographic signal at different hand elevations. Physiological Measurement, 36(3), pp. 425-440. doi: 10.1088/0967-3334/36/3/425
Holm, T., Kessar, R. and Linckelmann, M. (2007). Blocks with quaternion defect group over a 2-adic ring: the case \tilde{A}_4. Glasgow Mathematical Journal, 49(1), pp. 29-43. doi: 10.1017/S0017089507003394
Holmes, G. R., Anderson, S. R., Dixon, G., Robertson, A. L., Reyes-Aldasoro, C. C., Billings, S. A., Renshaw, S. A. and Kadirkamanathan, V. (2012). Repelled from the wound, or randomly dispersed? Reverse migration behaviour of neutrophils characterized by dynamic modelling. Journal of The Royal Society, Interface, 9(77), pp. 3229-3239. doi: 10.1098/rsif.2012.0542
Huang, R., Rao, J., Feng, B. and He, Y. (2015). An algebraic approach to the scattering equations. Journal of High Energy Physics, 12, 56.. doi: 10.1007/JHEP12(2015)056
Human, Trevor (2014). Landauer's theory of charge and spin transport in magnetic multilayers. (Unpublished Doctoral thesis, City University London)
Häusler, W., De Martino, A., Ghosh, T. K. and Egger, R. (2008). Tomonaga-Luttinger liquid parameters of magnetic waveguides in graphene. Physical Review B (PRB), 78, doi: 10.1103/PhysRevB.78.165402
Héthelyi, L., Kessar, R., Külshammer, B. and Sambale, B. (2015). Blocks with transitive fusion systems. Journal of Algebra, 424, pp. 190-207. doi: 10.1016/j.jalgebra.2014.10.042
Jegan, Mahadevan (2013). Homomorphisms between bubble algebra modules. (Unpublished Doctoral thesis, City University London)
Jhugroo, Eric (2007). Pattern formation in squares and rectangles. (Unpublished Doctoral thesis, City University)
Jäger, H., Steels, L., Baronchelli, A., Briscoe, E., Christiansen, M. H., Griffiths, T., Jager, G., Kirby, S., Komarova, N., Richerson, P. J. and Triesch, J. (2009). What can mathematical, computational and robotic models tell us about the origins of syntax? In: Bickerton, D. and Szathmáry, E. (Eds.), Biological Foundations and Origin of Syntax. (pp. 385-410). USA: MIT Press. ISBN 0262013568
Kandil, M.B.H. (1983). Credibility theory and experience rating in general insurance. (Unpublished Doctoral thesis, City University London)
Kandler, A. and Laland, K. N. (2013). Tradeoffs between the strength of conformity and number of conformists in variable environments. Journal of Theoretical Biology, 332, pp. 191-202. doi: 10.1016/j.jtbi.2013.04.023
Kandler, A. and Powell, A. (2015). Inferring learning strategies from cultural frequency data. In: Mesoudi, A. and Aoki, K. (Eds.), Inferring Learning Strategies from Cultural Frequency Data. (pp. 85-101). Japan: Springer. ISBN 9784431553625
Kandler, A. and Shennan, S. (2015). A generative inference framework for analysing patterns of cultural change in sparse population data with evidence for fashion trends in LBK culture. Journal of the Royal Society Interface, 12(113), e20150905. doi: 10.1098/rsif.2015.0905
Kandler, A. and Sherman, S. (2013). A non-equilibrium neutral model for analysing cultural change. Journal of Theoretical Biology, 330, pp. 18-25. doi: 10.1016/j.jtbi.2013.03.006
Kaparias, I., Liu, P., Tsakarestos, A., Eden, N., Schmitz, P., Hoadley, S. and Hauptmann, S. (2015). Development and testing of a predictive traffic safety evaluation tool for road traffic management and ITS impact assessment. Paper presented at the mobil.TUM 2015 International Scientific Conference on Mobility and Transport, 30-6-2015 - 1-7-2015, Munich.
Karcanias, N. and Galanis, G.E. (2010). Dynamic Polynomial Combinants and Generalised Resultants. Bulletin of Greek Math Society, 57, pp. 229-250.
Kattoua, K. (2003). Floating production storage offloading unit structural fatigue analysis. (Unpublished Doctoral thesis, City University London)
Kerr, O. (2014). Comment on "Nonlinear eigenvalue problems". Journal of Physics A: Mathematical and Theoretical, 47, p. 368001. doi: 10.1088/1751-8113/47/36/368001
Kerr, O. (2016). Critical Rayleigh number of an error function temperature profile with a quasi-static assumption.
Kerr, O. ORCID: 0000-0003-2946-0695 (2018). Double-diffusive instabilities at a horizontal boundary after the sudden onset of heating. Journal of Fluid Mechanics, doi: 10.1017/jfm.2018.821
Kerr, O. and Gumm, Z. (2017). Thermal instability in a time-dependent base state due to sudden heating. Journal of Fluid Mechanics, 825, pp. 1002-1034. doi: 10.1017/jfm.2017.408
Kessar, R. (2012). On blocks stably equivalent to a quantum complete intersection of dimension 9 in characteristic 3 and a case of the abelian defect group conjecture. Journal of the London Mathematical Society, 85(2), pp. 491-510. doi: 10.1112/jlms/jdr047
Kessar, R. (2009). On duality inducing automorphisms and sources of simple modules in classical groups. Journal of Group Theory, 12(3), pp. 331-349. doi: 10.1515/JGT.2008.081
Kessar, R. ORCID: 0000-0002-1893-4237 and Farrell, N. Rationality of blocks of quasi-simple finite groups. City, University of London.
Kessar, R., Koshitani, S. and Linckelmann, M. (2010). Conjectures of Alperin and Broue for 2-blocks with elementary abelian defect groups of order 8. Journal für die reine und angewandte Mathematik, 2012(671), pp. 85-130. doi: 10.1515/CRELLE.2011.162
Kessar, R., Koshitani, S. and Linckelmann, M. (2015). On the Brauer Indecomposability of Scott Modules. The Quarterly Journal of Mathematics, 66(3), pp. 895-903. doi: 10.1093/qmath/hav010
Kessar, R., Kunugi, N. and Mitsuhashi, N. (2011). On saturated fusion systems and Brauer indecomposability of Scott modules. Journal of Algebra, 340(1), pp. 90-103. doi: 10.1016/j.jalgebra.2011.04.029
Kessar, R., Külshammer, B. and Linckelmann, M. (2017). Anchors of irreducible characters. Journal of Algebra, 475, pp. 113-132. doi: 10.1016/j.jalgebra.2015.11.034
Kessar, R. and Linckelmann, M. (2011). Bounds for Hochschild cohomology of block algebras. Journal of Algebra, 337(1), pp. 318-322. doi: 10.1016/j.jalgebra.2011.03.009
Kessar, R. ORCID: 0000-0002-1893-4237 and Linckelmann, M. (2018). Dade's ordinary conjecture implies the Alperin–McKay conjecture. Archiv der Mathematik, doi: 10.1007/s00013-018-1230-9
Kessar, R. ORCID: 0000-0002-1893-4237 and Linckelmann, M. (2018). Descent of Equivalences and Character Bijections. In: Geometric and Topological Aspects of the Representation Theory of Finite Groups, PSSW 2016. Springer Proceedings in Mathematics & Statistics, 242. (pp. 181-212). Cham: Springer. ISBN 9783319940328
Kessar, R. and Linckelmann, M. (2006). On blocks of strongly p-solvable groups. Archiv der Mathematik, 87(6), pp. 481-487.
Kessar, R. and Linckelmann, M. (2002). On blocks with frobenius inertial quotient. Journal of Algebra, 249(1), pp. 127-146. doi: 10.1006/jabr.2001.9058
Kessar, R. and Linckelmann, M. (2002). On perfect isometries for tame blocks. Bulletin of the London Mathematical Society, 34(1), pp. 46-54. doi: 10.1112/S0024609301008633
Kessar, R. and Linckelmann, M. (2010). On stable equivalences and blocks with one simple module. Journal of Algebra, 323(6), pp. 1607-1621. doi: 10.1016/j.jalgebra.2010.01.006
Kessar, R. and Linckelmann, M. (2013). On the Castelnuovo-Mumford regularity of the cohomology of fusion systems and of the Hochschild cohomology of block algebras. London Mathematical Society Lecture Note Series, 422, pp. 324-330. doi: 10.1017/CBO9781316227343.020
Kessar, R. and Linckelmann, M. (2012). On the Hilbert series of Hochschild cohomology of block algebras. Journal of Algebra, 371, pp. 457-461. doi: 10.1016/j.jalgebra.2012.07.020
Kessar, R. and Linckelmann, M. (2009). On two theorems of Flavell. Archiv der Mathematik, 92(1), pp. 1-6. doi: 10.1007/s00013-008-2911-6
Kessar, R. and Linckelmann, M. (2008). ZJ-theorems for fusion systems. Transactions of the American Mathematical Society, 360(6), pp. 3093-3106. doi: 10.1090/S0002-9947-08-04275-X
Kessar, R. and Linckelmann, M. (2003). A block theoretic analogue of a theorem of Glauberman and Thompson. Proceedings of the American Mathematical Society, 131(1), pp. 35-40.
Kessar, R. and Linckelmann, M. (2010). The graded center of the stable category of a Brauer tree algebra. Quarterly Journal of Mathematics, 61(3), pp. 337-349. doi: 10.1093/qmath/han038
Kessar, R., Linckelmann, M. and Navarro, G. (2015). A characterisation of nilpotent blocks. Proceedings of the American Mathematical Society (PROC), 143, pp. 5129-5138. doi: 10.1090/proc/12646
Kessar, R., Linckelmann, M. and Robinson, G. R. (2002). Local control in fusion systems of p-blocks of finite groups. Journal of Algebra, 257(2), pp. 393-413. doi: 10.1016/S0021-8693(02)00517-3
Kessar, R. and Malle, G. (2017). Brauer's height zero conjecture for quasi-simple groups. Journal of Algebra, 475, pp. 43-60. doi: 10.1016/j.jalgebra.2016.05.010
Kessar, R. and Malle, G. (2015). Lusztig induction and ℓ-blocks of finite reductive groups. Pacific Journal of Mathematics, 279(1-2), pp. 269-298. doi: 10.2140/pjm.2015.279.269
Kessar, R. and Schaps, M. (2006). Crossover morita equivalences for blocks of the covering groups of the symmetric and alternating groups. Journal of Group Theory, 9(6), pp. 715-730. doi: 10.1515/JGT.2006.046
Kessar, R. and Stancu, R. (2008). A reduction theorem for fusion systems of blocks. Journal of Algebra, 319(2), pp. 806-823. doi: 10.1016/j.jalgebra.2006.05.039
Khantoul, B. and Fring, A. (2015). Time-dependent massless Dirac fermions in graphene. Physics Letters A, 379(42), pp. 2704-2706. doi: 10.1016/j.physleta.2015.08.011
King, O. (2013). The limiting blocks of the Brauer algebra in characteristic p. Journal of Algebra, 397, pp. 168-189. doi: 10.1016/j.jalgebra.2013.08.040
King, Oliver (2014). The representation theory of diagram algebras. (Unpublished Doctoral thesis, City University London)
Kisha, W., Riley, P. H. ORCID: 0000-0002-1580-7689 and Hann, D. (2018). Development of a low-cost, electricity-generating Rankine cycle, alcohol-fuelled cooking stove for rural communities. Paper presented at the 8th Heat Powered Cycles Conference, 16 - 19 September 2018, University of Bayreuth, Germany.
Kisha, W., Riley, P. H. ORCID: 0000-0002-1580-7689, McKechinie, J. and Hann, D. (2018). The Influence of Heat Input Ratio on Electrical Power Output of a Dual-Core Travelling-Wave Thermoacoustic Engine. Paper presented at the 8th Heat Powered Cycles Conference, 16 - 19 September 2018, University of Bayreuth, Germany.
Kiss, I. Z., Broom, M., Craze, P. G. and Rafols, I. (2010). Can epidemic models describe the diffusion of topics across disciplines?. Journal of Informetrics, 4(1), pp. 74-82. doi: 10.1016/j.joi.2009.08.002
Kloepfer, D., De Martino, A. and Egger, R. (2013). Bound States and Supercriticality in Graphene-Based Topological Insulators. Crystals, 3(1), pp. 14-27. doi: 10.3390/cryst3010014
Klöpfer, D., De Martino, A., Matrasulov, D. and Egger, R. (2014). Scattering theory and ground-state energy of Dirac fermions in graphene with two Coulomb impurities. European Physical Journal B: Condensed Matter and Complex Systems, 87(8), 187 - ?. doi: 10.1140/epjb/e2014-50414-8
Koshitani, S. and Linckelmann, M. (2005). The indecomposability of a certain bimodule given by the Brauer construction. Journal of Algebra, 285(2), pp. 726-729. doi: 10.1016/j.jalgebra.2004.08.031
Laksar, Saroj Kumar (1971). Solutions of certain boundary integral equations in potential theory. (Unpublished Doctoral thesis, City, University of London)
Lee, D. (2018). Documenting Performance and Contemporary Data Models: Positioning Performance within FRBR and LRM. Proceedings from the Document Academy, 5(1),
Lee, D., Robinson, L. ORCID: 0000-0001-5202-8206 and Bawden, D. ORCID: 0000-0002-0478-6456 (2018). Modelling the relationship between scientific and bibliographic classification for music. Journal of the Association for Information Science and Technology, doi: 10.1002/asi.24120
Levi, E., Castro-Alvaredo, O. and Doyon, B. (2013). Universal corrections to the entanglement entropy in gapped quantum spin chains: a numerical study. Physical Review B: Condensed Matter and Materials Physics, 88(9), doi: 10.1103/PhysRevB.88.094439
Levi, Emanuele (2013). Universal properties of the entanglement entropy in quantum integrable models. (Unpublished Doctoral thesis, City University London)
Li, A., Broom, M., Du, J. and Wang, L. (2016). Evolutionary dynamics of general group interactions in structured populations. Physical Review E (PRE), 93(2), 022407. doi: 10.1103/PhysRevE.93.022407
Lin, Y.-R., Margolin, B., Keegan, A., Baronchelli, A. and Lazer, D. (2013). #Bigbirds never die: Understanding social dynamics of emergent hashtag. Paper presented at the The Seventh International AAAI Conference on Weblogs and Social Media (ICWSM-13), 8-11 Jul 2013, MIT Media Lab and Microsoft, Cambridge MA, US.
Linckelmann, M. (2005). Alperin's weight conjecture in terms of equivariant Bredon cohomology. Mathematische Zeitschrift, 250(3), pp. 495-513. doi: 10.1007/s00209-004-0753-x
Linckelmann, M. (2007). Blocks of minimal dimension. Archiv der Mathematik, 89(4), pp. 311-314.
Linckelmann, M. (2011). Finite generation of Hochschild cohomology of Hecke algebras of finite classical type in characteristic zero. Bulletin of the London Mathematical Society, 43(5), pp. 871-885. doi: 10.1112/blms/bdr024
Linckelmann, M. (2004). Fusion category algebras. Journal of Algebra, 277(1), pp. 222-235. doi: 10.1016/j.jalgebra.2003.12.010
Linckelmann, M. (2010). Hochschild and block cohomology varieties are isomorphic. Journal of the London Mathematical Society, 81(2), pp. 389-411. doi: 10.1112/jlms/jdp078
Linckelmann, M. (2002). Induction for interior algebras. Quarterly Journal of Mathematics, 53(2), pp. 195-200. doi: 10.1093/qjmath/53.2.195
Linckelmann, M. (2018). Integrable derivations and stable equivalences of Morita type. Proceedings of the Edinburgh Mathematical Society, 61(2), pp. 343-362. doi: 10.1017/S0013091517000098
Linckelmann, M. (2018). On Automorphisms and Focal Subgroups of Blocks. In: Geometric and Topological Aspects of the Representation Theory of Finite Groups. PSSW 2016. Springer Proceedings in Mathematics & Statistics, 242. (pp. 235-249). Cham: Springer. ISBN 9783319940328
Linckelmann, M. (2009). On H* (C{script}; k×) for fusion systems. Homology, Homotopy and Applications, 11(1), pp. 203-218.
Linckelmann, M. (2009). On dimensions of block algebras. Mathematical Research Letters, 16(6), pp. 1011-1014.
Linckelmann, M. (2016). On equivalences for cohomological Mackey functors. Representation Theory, 20, pp. 162-171. doi: 10.1090/ert/482
Linckelmann, M. (2009). On graded centres and block cohomology. Proceedings of the Edinburgh Mathematical Society, 52(2), pp. 489-514. doi: 10.1017/S0013091507001137
Linckelmann, M. (2015). On stable equivalences with endopermutation source. Journal of Algebra, 434, pp. 27-45. doi: 10.1016/j.jalgebra.2015.03.014
Linckelmann, M. (2002). Quillen stratification for block varieties. Journal of Pure and Applied Algebra, 172(2-3), pp. 257-270. doi: 10.1016/S0022-4049(01)00143-8
Linckelmann, M. (2017). Quillen's stratification for fusion systems. Communications in Algebra, 45(7), pp. 5227-5229. doi: 10.1080/00927872.2017.1301461
Linckelmann, M. (2006). Simple fusion systems and the Solomon 2-local groups. Journal of Algebra, 296(2), pp. 385-401. doi: 10.1016/j.jalgebra.2005.09.024
Linckelmann, M. (2013). Tate duality and transfer in Hochschild cohomology. Journal of Pure and Applied Algebra, 217(12), doi: 10.1016/j.jpaa.2013.04.004
Linckelmann, M. (1999). Transfer in Hochschild Cohomology of Blocks of Finite Groups. Algebras and Representation Theory, 2(2), pp. 107-135. doi: 10.1023/A:1009979222100
Linckelmann, M. (2009). Trivial source bimodule rings for blocks and p-permutation equivalences. Transactions of the American Mathematical Society, 361(3), pp. 1279-1316. doi: 10.1090/S0002-9947-08-04577-7
Linckelmann, M. (1999). Varieties in block theory. Journal of Algebra, 215(2), pp. 460-480. doi: 10.1006/jabr.1998.7724
Linckelmann, M. (2018). The dominant dimension of cohomological Mackey functors. Journal of Algebra and its Applications, doi: 10.1142/S0219498818502286
Linckelmann, M. (2018). A note on the depth of a source algebra over its defect group. International Electronic Journal of Algebra, 24, pp. 68-72. doi: 10.24330/ieja.440216
Linckelmann, M. (2009). The orbit space of a fusion system is contractible. Proceedings of the London Mathematical Society, 98(1), pp. 191-216. doi: 10.1112/plms/pdn029
Linckelmann, M. (2013). A version of alperin's weight conjecture for finite category algebras. Journal of Algebra, 398, pp. 386-395. doi: 10.1016/j.jalgebra.2013.02.010
Linckelmann, M. and Degrassi, L. R. Y. (2018). Block algebras with HH1 a simple Lie algebra. Quarterly Journal of Mathematics, 69(4), pp. 1123-1128. doi: 10.1093/qmath/hay017
Linckelmann, M. and Mazza, N. (2009). The Dade group of a fusion system. Journal of Group Theory, 12(1), pp. 55-74. doi: 10.1515/JGT.2008.060
Linckelmann, M. and Rognerud, B. (2017). On Morita and derived equivalences for cohomological Mackey algebras. Mathematische Zeitschrift, doi: 10.1007/s00209-017-1942-8
Linckelmann, M. and Schroll, S. (2005). A two-sided q-analogue of the Coxeter complex. Journal of Algebra, 289(1), pp. 128-134. doi: 10.1016/j.jalgebra.2005.03.026,
Linckelmann, M. and Stalder, B. (2002). A reciprocity for symmetric algebras. Quarterly Journal of Mathematics, 53(2), pp. 201-205. doi: 10.1093/qjmath/53.2.201
Linckelmann, M. and Stolorz, M (2013). Quasi-hereditary twisted category algebras. Journal of Algebra, 385, pp. 1-13. doi: 10.1016/j.jalgebra.2013.02.036
Linckelmann, M. and Stolorz, M. (2012). On simple modules over twisted finite category algebras. Proceedings of the American Mathematical Society, 140(11), pp. 3725-3737.
Liu, S. Y., Baronchelli, A. and Perra, N. (2013). Contagion dynamics in time-varying metapopulation networks. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 87(3), doi: 10.1103/PhysRevE.87.032805
Lloyd, T., Ohlsson Sax, O., Sfondrini, A. and Stefanski, B. (2015). The complete worldsheet S matrix of superstrings on AdS3×S3×T4 with mixed three-form flux. Nuclear Physics B, 891, pp. 570-612. doi: 10.1016/j.nuclphysb.2014.12.019
Lloyd, T. and Stefanski, B. (2014). AdS3/CFT2, finite-gap equations and massless modes. Journal of High Energy Physics, 2014(4), doi: 10.1007/JHEP04(2014)179
Loreto, V., Baronchelli, A., Mukherjee, A., Puglisi, A. and Tria, F. (2011). Statistical physics of language dynamics. Journal of Statistical Mechanics: Theory and Experiment, 2011(4), P04006. doi: 10.1088/1742-5468/2011/04/P04006
Lunde, A. M., De Martino, A., Schulz, A., Egger, R. and Flensberg, K. (2009). Electron-electron interaction effects in quantum point contacts. New Journal of Physics, 11, doi: 10.1088/1367-2630/11/2/023031
Maiden, J., Shiu, G. and Stefanski, B. (2006). D-brane Spectrum and K-theory Constraints of D=4, N=1 Orientifolds. Journal of High Energy Physics, 2006(4), doi: 10.1088/1126-6708/2006/04/052
Mcbride, C. and Paterson, R. A. (2008). Applicative programming with effects. Journal of Functional Programming, 18(1), pp. 1-13. doi: 10.1017/S0956796807006326
Mehta, D., He, Y. and Hauenstein, J. (2012). Numerical algebraic geometry: a new perspective on gauge and string theories. Journal of High Energy Physics, 2012(7), p. 18. doi: 10.1007/JHEP07(2012)018
Mian, A. and Reyes-Aldasoro, C. C. (2015). Quantification of the Effects of Low Dose Radiation and its Impact on Cardiovascular Risks. Paper presented at the Quantification of the Effects of Low Dose Radiation and its Impact on Cardiovascular Risks.
Mocanu, D., Baronchelli, A., Perra, N., Gonçalves, B., Zhang, Q. C. and Vespignani, A. (2013). The Twitter of Babel: Mapping World Languages through Microblogging Platforms. PLoS ONE, 8(4), doi: 10.1371/journal.pone.0061981
Moghaddam, E.R., Coren, D., Long, C. and Sayma, A. I. (2011). A numerical investigation of moment coefficient and flow structure in a rotor-stator cavity with rotor mounted bolts. In: UNSPECIFIED . UNSPECIFIED. ISBN 9780791854679
Moretti, P., Baronchelli, A., Barrat, A. and Pastor-Satorras, R. (2011). Complex networks and glassy dynamics: Walks in the energy landscape. Journal of Statistical Mechanics: Theory and Experiment new, 2011(3), doi: 10.1088/1742-5468/2011/03/P03032
Moretti, P., Baronchelli, A., Starnini, M. and Pastor-Satorras, R. (2013). Generalized voter-like models on heterogeneous networks. In: Ganguly, N. (Ed.), Dynamics On and Of Complex Networks. (pp. 285-300). Springer Science & Business. ISBN 1461467292
Moretti, P., Liu, S. Y., Baronchelli, A. and Pastor-Satorras, R. (2012). Heterogenous mean-field analysis of a generalized voter-like model on networks. European Physical Journal B (The), 85(3), doi: 10.1140/epjb/e2012-20501-1
Moriconi, M. and De Martino, A. (1999). Quantum Integrability of Certain Boundary Conditions. Physics Letters B, 447(3-4), pp. 292-297. doi: 10.1016/S0370-2693(98)01596-2
Mukherjee, A., Baronchelli, A., Loreto, V., Puglisi, A. and Tria, F. (2010). Aging in the Emergence of Linguistic Categories. In: Fellermann, H., Dörr, M., Hanczyc, M. M., Laursen, L. L., Maurer, S. E., Merkle, D., Monnard, P-A., Støy, K. and Rasmussen, S. (Eds.), Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems. (pp. 589-590). USA: MIT Press. ISBN 9780262290753
Mukherjee, A., Tria, F., Baronchelli, A., Puglisi, A. and Loreto, V. (2011). Aging in language dynamics. PLoS ONE, 6(2), doi: 10.1371/journal.pone.0016677
O'Sullivan, D., Fraccaro, P., Carson, E. and Weller, P. (2014). Decision time for clinical decision support systems. Clinical Medicine, 14(4), pp. 338-341. doi: 10.7861/clinmedicine.14-4-338
O'Sullivan, D., Wilk, S., Michalowski, W., Slowinski, R., Thomas, R. and Farion, K. (2012). Discovering the Preferences of Physicians with Regards to Rank-ordered Medical Documents. Paper presented at the Discovering the Preferences of Physicians with Regards to Rank-ordered Medical Documents.
Ohlsson Sax, O., Torrielli, A. and Stefanski, B. (2013). On the massless modes of the AdS3/CFT2 integrable systems. Journal of High Energy Physics, 13(03), 109 - ?. doi: 10.1007/JHEP03(2013)109
Omidyeganeh, M., Piomelli, U., Christensen, K.T. and Best, J.L. (2013). Large eddy simulation of interacting barchan dunes in a steady, unidirectional flow. JOURNAL OF GEOPHYSICAL RESEARCH-EARTH SURFACE, 118(4), doi: 10.1002/jgrf.20149
Overton, C. E., Broom, M. ORCID: 0000-0002-1698-5495, Hadjichrysanthou, C. and Sharkey, K. (2019). Methods for approximating stochastic evolutionary dynamics on graphs. Journal of Theoretical Biology, 468, pp. 45-59. doi: 10.1016/j.jtbi.2019.02.009
Pankiewicz, A. and Stefanski, B. (2003). On the Uniqueness of Plane-wave String Field Theory. .,
Pankiewicz, A. and Stefanski, B. (2003). PP-Wave Light-Cone Superstring Field Theory. Nuclear Physics B, 657(5), pp. 79-106. doi: 10.1016/S0550-3213(03)00141-X
Pattni, K. (2017). Evolution in finite structured populations with group interactions. (Unpublished Doctoral thesis, City, University of London)
Pattni, K., Broom, M. and Rychtar, J. (2017). Evolutionary dynamics and the evolution of multiplayer cooperation in a subdivided population. Journal of Theoretical Biology, 429, pp. 105-115. doi: 10.1016/j.jtbi.2017.06.034
Pattni, K., Broom, M. ORCID: 0000-0002-1698-5495 and Rychtar, J. (2018). Evolving multiplayer networks: Modelling the evolution of cooperation in a mobile population. Discrete & Continuous Dynamical Systems - B, 23(5), pp. 1975-2004. doi: 10.3934/dcdsb.2018191
Pattni, K., Broom, M., Rychtar, J. and Silvers, L. J. (2015). Evolutionary graph theory revisited: when is an evolutionary process equivalent to the Moran process?. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 471, e2182. doi: 10.1098/rspa.2015.0334
Perra, N., Baronchelli, A., Mocanu, D., Gonçalves, B., Pastor-Satorras, R. and Vespignani, A. (2012). Random walks and search in time-varying networks. Physical Review Letters (PRL), 109(23), doi: 10.1103/PhysRevLett.109.238701
Petroulakis, G. (2015). The approximate determinantal assignment problem. (Unpublished Doctoral thesis, City University London)
Phan, Minh Son (1989). Dynamic Response Function and the Theory of Spin Waves in Metallic Overlayers. (Unpublished Doctoral thesis, City, University of London)
Psaroudakis, C. and Vitória, J. (2017). Realisation functors in tilting theory. Mathematische Zeitschrift, doi: 10.1007/s00209-017-1923-y
Puglisi, A., Baronchelli, A. and Loreto, V. (2008). Cultural route to the emergence of linguistic categories. Proceedings of the National Academy of Sciences of the United States of America (PNAS), 105(23), pp. 7936-7940. doi: 10.1073/pnas.0802485105
Pullen, K. R., Grigorchenkov, I. and Johnson, D. (2012). Evaluation of mobile and stationary applications of energy storage for DC railways and rapid transit. Paper presented at the RRUKA Annual Conference, 07-11-2012, Royal Society, London, UK.
Quiroz, N. and Stefanski, B. (2002). Dirichlet Branes on Orientifolds. Physical Review D (PRD), 66(026002), doi: 10.1103/PhysRevD.66.026002
Radicchi, F. and Baronchelli, A. (2012). Evolution of optimal Lévy-flight strategies in human mental searches. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 85(6), doi: 10.1103/PhysRevE.85.061121
Radicchi, F., Baronchelli, A. and Amaral, L. A. N. (2012). Rationality, irrationality and escalating behavior in lowest unique bid auctions. PLoS ONE, 7(1), doi: 10.1371/journal.pone.0029910
Raza, Mahdi (2016). Using survival analysis to investigate breast cancer in the Kurdistan region of Iraq. (Unpublished Doctoral thesis, City, University of London)
Reid-Edwards, R. A. and Stefanski, B. (2011). On Type IIA geometries dual to N=2 SCFTs. Nuclear Physics B, 849(3), pp. 549-572. doi: 10.1016/j.nuclphysb.2011.04.002
Revestido Herrero, E., Tomas-Rodriguez, M. and Velasco, F. J. (2014). Iterative lead compensation control of nonlinear marine vessels manoeuvring models. Applied Ocean Research, 48, pp. 266-276. doi: 10.1016/j.apor.2014.08.010
Ribeiro, P., Perra, N. and Baronchelli, A. (2013). Quantifying the effect of temporal resolution on time-varying networks. Scientific Reports, 3(3006), doi: 10.1038/srep03006
Riley, P. H. ORCID: 0000-0002-1580-7689 (2015). The Myth of the High-Efficiency External-Combustion Stirling Engine. Engineering, 7(12), pp. 789-795. doi: 10.4236/eng.2015.712068.
Rubio y Degrassi, L. (2016). On hochschild cohomology and modular representation theory. (Unpublished Doctoral thesis, City, University of London)
Ruxton, G. D., Fraser, C. and Broom, M. (2005). An evolutionarily stable joining policy for group foragers. Behavioral Ecology, 16(5), pp. 856-864. doi: 10.1093/beheco/ari063
Saber Raza, M. and Broom, M. (2016). Survival analysis modeling with hidden censoring. Journal of Statistical Theory and Practice, pp. 375-388. doi: 10.1080/15598608.2016.1152205
Sajjad, Ali (2015). A secure and scalable communication framework for inter-cloud services. (Unpublished Post-Doctoral thesis, City University London)
Sax, O. O., Sfondrini, A. and Stefanski, B. (2015). Integrability and the conformal field theory of the Higgs branch. Journal of High Energy Physics, 2015(6), p. 103. doi: 10.1007/JHEP06(2015)103
Sax, O. O. and Stefanski, B. (2011). Integrability, spin-chains and the AdS(3)/CFT2 correspondence. Journal of High Energy Physics, 2011(8), doi: 10.1007/JHEP08(2011)029
Schimit, P., Pattni, K. and Broom, M. ORCID: 0000-0002-1698-5495 (2019). Dynamics of multiplayer games on complex networks using territorial interactions. Physical Review E, 99(3), 032306. doi: 10.1103/physreve.99.032306
Schulz, A., De Martino, A. and Egger, R. (2010). Spin-orbit coupling and spectral function of interacting electrons in carbon nanotubes. Physical Review B (PRB), 82(3), doi: 10.1103/PhysRevB.82.033407
Schulz, A., De Martino, A., Ingenhoven, P. and Egger, R. (2009). Low-energy theory and RKKY interaction for interacting quantum wires with Rashba spin-orbit coupling. Physical Review B (PRB), 79(20), doi: 10.1103/PhysRevB.79.205432
Silvers, L. J. (2008). Long-term Nonlinear Behaviour of the Magnetorotational Instability in a Localised Model of an Accretion Disc. Monthly Notices of the Royal Astronomical Society, 385(2), pp. 1036-1044. doi: 10.1111/j.1365-2966.2008.12906.x
Silvers, L. J. (2008). Magnetic Fields In Astrophysical Objects. Philosophical Transactions of the Royal Society A, 366(1884), pp. 4453-4464. doi: 10.1098/rsta.2008.0173
Silvers, L. J., Bushby, P. J. and Proctor, M. R. E. (2009). Interactions between magnetohydrodynamic shear instabilities and convective flows in the solar interior. Monthly Notices Of The Royal Astronomical Society, 400(1), pp. 337-345. doi: 10.1111/j.1365-2966.2009.15455.x
Silvers, L. J., Favier, B. and Proctor, M. R. E. (2014). Inverse cascade and symmetry breaking in rapidly-rotating Boussinesq convection. Physics of Fluids, 26, 096605. doi: 10.1063/1.489513
Silvers, L. J., Vasil, G., Brummell, N. H. and Proctor, M. (2010). The Evolution of a Double Diffusive Magnetic Buoyancy Instability. Proceedings of the International Astronomical Union, 6(S271), pp. 218-226. doi: 10.1017/S1743921311017649
Silvers, L. J., Vasil, G. M., Brummell, N. H. and Proctor, M. R. E. (2009). Double-diffusive instabilities of a shear-generated magnetic layer. The Astrophysical Journal Letters, 702(1), doi: 10.1088/0004-637X/702/1/L14
Silvers, L. J. and Witzke, V. Mean flow evolution of saturated forced shear flows in polytropic atmospheres. EAS Publications Series,
Skinner, D. M. and Silvers, L. J. (2013). Double-diffusive magnetic buoyancy instability in a quasi-two-dimensional Cartesian geometry. Monthly Notices of the Royal Astronomical Society, 436(1), pp. 531-539. doi: 10.1093/mnras/stt1590
Soliman, A.S. (1989). Studies in female labour supply : Egypt. (Unpublished Doctoral thesis, The City University, London)
Solis-Lemus, J. A., Huang, Y., Wlodkovic, D. and Reyes-Aldasoro, C. C. (2015). Microfluidic environment and tracking analysis for the observation of Artemia Franciscana. Paper presented at the 26th British Machine Vision Conference, 07-09-2015 - 10-09-2015, Swansea, UK.
Spencer, R. and Broom, M. (2018). A game-theoretical model of kleptoparasitic behavior in an urban gull (Laridae) population. Behavioral Ecology, 29(1), pp. 60-78. doi: 10.1093/beheco/arx125
Starnini, M., Baronchelli, A., Barrat, A. and Pastor-Satorras, R. (2012). Random walks on temporal networks. Physical Review E (PRE), 85(5), 056115. doi: 10.1103/PhysRevE.85.056115
Starnini, M., Baronchelli, A. and Pastor-Satorras, R. (2017). Effects of temporal correlations in social multiplex networks. Scientific Reports, 7, 8597.. doi: 10.1038/s41598-017-07591-0
Starnini, M., Baronchelli, A. and Pastor-Satorras, R. (2013). Modeling human dynamics of face-to-face interaction networks. Physical Review Letters (PRL), 110(16), doi: 10.1103/PhysRevLett.110.168701
Starnini, M., Baronchelli, A. and Pastor-Satorras, R. (2012). Ordering dynamics of the multi-state voter model. Journal of Statistical Mechanics: Theory and Experiment new, 2012(10), doi: 10.1088/1742-5468/2012/10/P10027
Starnini, M., Baronchelli, A. and Pastor-Satorras, R. (2016). Temporal correlations in social multiplex networks. .,
Starnini, M., Frasca, M. and Baronchelli, A. (2016). Emergence of metapopulations and echo chambers in mobile agents. Scientific Reports, 6, 31834.. doi: 10.1038/srep31834
Starnini, M., Lepri, B., Baronchelli, A., Barrat, A., Cattuto, C. and Pastor-Satorras, R. (2017). Robust modeling of human contact networks across different scales and proximity-sensing techniques. .,
Stefanski, B. (2002). D-branes, Orientifolds and K-theory. Fortschritte der Physik, 50(8/9), pp. 986-991. doi: 10.1002/1521-3978(200209)50:8/9<986::AID-PROP986>3.0.CO;2-#
Stefanski, B. (2002). D-branes, orientifolds and K theory. Paper presented at the Workshop on the Quantum Structure of Spacetime and the Geometric Nature of Fundamental Interactions, 13-09-2001 - 20-09-2001, Corfu, Greece.
Stefanski, B. (2000). Dirichlet Branes on a Calabi-Yau Three-fold Orbifold. Nuclear Physics B, 589(1-2), pp. 292-314. doi: 10.1016/S0550-3213(00)00410-7
Stefanski, B. (1999). Gravitational Couplings of D-branes and O-planes. Nuclear Physics B, 548(1-3), pp. 275-290. doi: 10.1016/S0550-3213(99)00147-9
Stefanski, B. (2009). Green-Schwarz action for Type IIA strings on $AdS_4\times CP^3$. Nuclear Physics B, 808(1-2), pp. 80-87. doi: 10.1016/j.nuclphysb.2008.09.015
Stefanski, B. (2007). Landau-Lifshitz sigma-models, fermions and the AdS/CFT correspondence. Journal of High Energy Physics, 2007(7), doi: 10.1088/1126-6708/2007/07/009
Stefanski, B. (2004). Open Spinning Strings. Journal of High Energy Physics, 2004(03), doi: 10.1088/1126-6708/2004/03/057
Stefanski, B. (2003). Open String Plane-Wave Light-Cone Superstring Field Theory. Nuclear Physics B, 666(1-2), pp. 71-87. doi: 10.1016/S0550-3213(03)00499-1
Stefanski, B. (2004). Plane-wave lightcone superstring field theory. Classical and Quantum Gravity, 21(10), S1305-S1311. doi: 10.1088/0264-9381/21/10/003
Stefanski, B. (2014). Supermembrane actions for Gaiotto-Maldacena backgrounds. Nuclear Physics B, 883, pp. 581-597. doi: 10.1016/j.nuclphysb.2014.03.028
Stefanski, B. (1999). WZ Couplings of D-branes and O-planes. Paper presented at the NATO Advanced Study Institute on Progress in String Theory and M-Theory, 24-05-1999 - 05-06-1999, Cargese, France.
Stefanski, B. and Ohlsson Sax, O. (2018). Closed Strings and Moduli in AdS3/CFT2. Journal of High Energy Physics, 101.. doi: 10.1007/JHEP05(2018)101
Stefanski, B. and Tseytlin, A. A. (2004). Large spin limits of AdS/CFT and generalized Landau-Lifshitz equations. Journal of High Energy Physics, 2004(5), doi: 10.1088/1126-6708/2004/05/042
Stefanski, B. and Tseytlin, A. A. (2005). Super spin chain coherent state actions and $AdS_5 \times S^5$ superstring. Nuclear Physics B, 718(1-2), pp. 83-112. doi: 10.1016/j.nuclphysb.2005.04.026
Sun, K., Baronchelli, A. and Perra, N. (2014). Contrasting Effects of Strong Ties on SIR and SIS Processes in Temporal Networks. The European Physical Journal B, 88, p. 326. doi: 10.1140/epjb/e2015-60568-4
Teichmann, J. (2015). Models of aposematism and the role of aversive learning. (Unpublished Doctoral thesis, City University London)
Teichmann, J., Broom, M. and Alonso, E. (2014). The application of temporal difference learning in optimal diet models. Journal of Theoretical Biology, 340, pp. 11-16. doi: 10.1016/j.jtbi.2013.08.036
Tria, F., Mukherjee, A., Baronchelli, A., Puglisi, A. and Loreto, V. (2011). A fast no-rejection algorithm for the Category Game. Journal of Computational Science, 2(4), pp. 316-323. doi: 10.1016/j.jocs.2011.10.002
Trianni, V., Simone, D. D., Reina, A. and Baronchelli, A. (2016). Emergence of Consensus in a Multi-Robot Network: from Abstract Models to Empirical Validation. Robotics and Automation Letters, IEEE, 1(1), pp. 348-353. doi: 10.1109/LRA.2016.2519537
Vafiadis, K.G. (2003). Systems and control problems in early systems design. (Unpublished Doctoral thesis, City University London)
Valani, Y.P. (2011). On the partition function for the three-dimensional Ising model. (Unpublished Doctoral thesis, City University London)
Vidunas, R. and He, Y. ORCID: 0000-0002-0787-8380 (2018). Composite genus one Belyi maps. Indagationes Mathematicae, 29(3), pp. 916-947. doi: 10.1016/j.indag.2018.02.001
Von Landesberger, T., Brodkorb, F., Roskosch, P., Andrienko, N., Andrienko, G. and Kerren, A. (2016). MobilityGraphs: Visual Analysis of Mass Mobility Dynamics via Spatio-Temporal Graphs and Clustering. IEEE Transactions on Visualization and Computer Graphics, 22(1), pp. 11-20. doi: 10.1109/TVCG.2015.2468111
Wang, L., Ruxton, G. D., Cornell, S. J., Speed, M. P. and Broom, M. ORCID: 0000-0002-1698-5495 (2019). A theory for investment across defences triggered at different stages of a predator-prey encounter. Journal of Theoretical Biology, 473, pp. 9-19. doi: 10.1016/j.jtbi.2019.04.016
Wang, P. (1992). Thermal convection in slender laterally-heated cavities. (Unpublished Doctoral thesis, City University London)
Ward, A. J. W., Hoare, D. J., Couzin, I. D., Broom, M. and Krause, J. (2002). The effects of parasitism and body length on positioning within wild fish shoals. Journal Of Animal Ecology, 71(1), pp. 10-14. doi: 10.1046/j.0021-8790.2001.00571.x
Westenberger, R. (1983). Some statistical investigations in general insurance. (Unpublished Doctoral thesis, City University London)
Witzke, V. (2017). Shear instabilities in stellar objects: linear stability and non-linear evolution. (Unpublished Doctoral thesis, City, University of London)
Witzke, V., Silvers, L. J. ORCID: 0000-0003-0619-6756 and Favier, B. (2019). Evolution and characteristics of forced shear flows in polytropic atmospheres: Large and small Péclet number regimes. Monthly Notices of the Royal Astronomical Society, 482(1), pp. 1338-1351. doi: 10.1093/mnras/sty2698
Witzke, V., Silvers, L. J. and Favier, B. (2016). Evolution of forced shear flows in polytropic atmospheres: A comparison of forcing methods and energetics. Monthly Notices of the Royal Astronomical Society, 463(1), pp. 282-295. doi: 10.1093/mnras/stw1925
Witzke, V., Silvers, L. J. and Favier, B. (2015). Shear instabilities in a fully compressible polytropic atmosphere. Astronomy and Astrophysics, 577, A76. doi: 10.1051/0004-6361/201425285
Xiao, Y. (2018). Quivers, tilings and branes. (Unpublished Doctoral thesis, City, University of London)
Yates, G. E. and Broom, M. (2007). Stochastic models of kleptoparasitism. Journal of Theoretical Biology, 248(3), pp. 480-489. doi: 10.1016/j.jtbi.2007.05.007
Zhang, Cheng (2013). Continuous and quad-graph integrable models with a boundary: Reflection maps and 3D-boundary consistency. (Unpublished Doctoral thesis, City University London)
Zhang, Haotian (2014). Smart Grid Technologies and Implementations. (Unpublished Doctoral thesis, City University London)
Zhou, D., Xiao, Y. and He, Y. (2015). Seiberg duality, quiver gauge theories, and Ihara's zeta function. International Journal of Modern Physics A, 30(18-19), doi: 10.1142/S0217751X15501183
|
CommonCrawl
|
Collection of cerebrospinal fluid in 50 adult healthy donkeys (Equus asinus): clinical complications, and cytological and biochemical constituents
Mohammed A. H. Abdelhakiem1 &
Hussein Awad Hussein ORCID: orcid.org/0000-0003-0449-82832
Diseases of the central nervous system are a well-recognized cause of morbidity and mortality in equine. Collection and analysis of cerebrospinal fluid (CSF) give information about the type and stage of degenerative and inflammatory diseases in central nervous system (CNS). The present research aimed to assess the clinical complications of CSF collections and to establish range values of cytological and biochemical parameters of CSF in adult healthy donkeys (Equus asinus). The CSF samples were collected from fifty healthy donkeys at the lumbosacral (LS) and atlanto-occipital (AO) sites.
Hypothermia, tachycardia, ataxia and recumbency may develop post-puncture. Erythrocytes were noticed in 35 of 50 CSF samples. Total nucleated cell counts ranged from 0 to 6 cells/μL, and lymphocytes predominated the cells (61%). The concentration of glucose (1.2 to 5.3 mmol/L) was lower than that of serum (P < 0.05). The CSF sodium concentration (123 to 160 mmol/L) was approximately like that of serum, but potassium (1.5–3 mmol/L) was lower than that of serum (P < 0.01). Urea concentrations (1.1–2.9 mmol/L) were markedly lower than serum (P < 0.001). Concentrations of CSF total proteins, and albumin ranged from 0.1 to 0.6 g/dL, and from 0.002 to 0.013 g/dL, respectively. The albumin quotient ranged from 0.06 to 0.56.
Transient hypothermia, tachycardia, ataxia and recumbency may develop as clinical complications of CSF puncture procedures. The collection site has no impact on the constituents in CSF. Furthermore, this study presented the range values for normal cytological and biochemical constituents of CSF in donkeys (Equus asinus) that can provide a basis in comparison when evaluating CSF from donkeys with neurologic diseases.
Diseases of the central nervous system are a well-recognized cause of morbidity and mortality in equine [1]. Many diseases can affect equine central nervous system (CNS), including equine protozoal myeloencephalopathy, equine degenerative myeloencephalopathy, and equine herpesvirus-1 myeloencephalopathy [2]. Analysis of CSF gives information about the type and stage of inflammatory or degenerative processes occurring in CNS [3]. CSF is the fluid that flows in and around the hollow spaces of the brain and spinal cord, and between the meninges [4]. CSF originates from the choroid plexus and ependymal lining of the ventricles [5], and it flows from the ventricular system up over the cerebral hemispheres and through the subarachnoid space surrounding the spinal cord [6]. The pulse waves of the blood in the choroid plexuses push the CSF in a caudal direction [7]. CSF has many functions, including physical support of the brain and spinal cord, excretion action, intracerebral transport, and maintenance of chemical environment of the CNS [7], including proper ionic and acid-base balance [8]. Some body barriers as the blood - brain barrier, the blood-CSF barrier, and the CSF-brain barrier control the composition, production and absorption of the CSF [9].
In equine, CSF can be collected via the atlanto-occipital (AO) or lumbosacral (LS) spaces [2]. Many factors affect the practitioner's choice as LS puncture is more likely to give information about diseases posterior to the foramen magnum [10], because CSF has caudal flow [11]. Analysis of cerebrospinal fluid (CSF) is a useful method for the diagnosis of equine with suspected CNS disease [2]. In addition, CSF analysis has reasonable sensitivity but low specificity, as well as the CSF abnormalities are usually dependent on the CSF collection site with respect to the lesion location [12].
At present, according to the authors' knowledge, there are no literatures describing the clinical complications of CSF puncture procedures in donkeys. Therefore, the objectives of the current research were as follows: [1] describe the clinical observations and possible complications that may develop after CSF collection in clinically healthy adult donkeys (Equus asinus); [2] determine the cytological and biochemical constituents of CSF analysis.
Animals and study design
The present research was ethically approved by the Animal Care and Welfare Committee of Faculty of Veterinary Medicine, Assiut University, Assiut, Egypt. All national and institutional guidelines for the care and use of animals were followed during study procedures. All animals were housed and cared according to the Egyptian animal welfare act (No. 53, 1966). Moreover, informed consent was granted by the owners of the donkeys. The current study was carried out on 50 adult clinically healthy donkeys (Equus asinus), of both sexes (28 males, and 22 non-lactating and non-pregnant females). The average age was 8 ± 1.5 years, and weight was 110 ± 2.8 kg (mean ± standard error). All animals were clinically healthy and showed no signs of CNS or other neurological diseases during physical examination. The body score of animals ranged from 2 to 3 [13]. The clinical examination including rectal temperature, heart rate, and respiratory rate were conducted for all animals [14], as well as the mental status, behavior, posture, gait, involuntary movement, and lameness were recorded pre- and post-collection of CSFs. Animals showed any clinical and/or neurological abnormalities were removed from the study. All animals were kept under clinical observation for 72 h post-collection for recording any abnormalities that may develop post-punctures. All donkeys were housed in a free stable yard with feed and water ad libitum.
Anesthesia and CSF collection
For induction of anesthesia, intravenous (IV) 0.03 mg/kg acepromazine (Calmivet 5 mg/ml, Vetoquinol, Grovet Health Company, Utrecht, Netherlands), and 15 min thereafter, each donkey was injected IV with 1 mg/kg xylazine 2% (Xyla-Ject, ADWIA Co., Egypt). The anesthesia was maintained by intramuscular injection of 2 mg/kg ketamine (Ketamine Rotexmedica 50 mg/ml, Arzeneimittelwerk GmbH Rotexmedica, Germany). From each donkey, two CSF samples were collected, the first was obtained from lumbosacral site, and the second sample was collected from atlanto-occipital place [15, 16]. Both collections were taken approximately on the same time. The puncture sites were aseptically prepared as usual through clipping and shaving of the hair, then scrubbing of the field using ethyl alcohol 70 and 10% povidone-iodine solution. For lumbosacral puncture, the site was detected at the midline in the space between the cranial ends of sacral tuberosities, about 2–3 cm caudal to the spinous process of the last lumbar vertebra. Samples of CSF were collected in a right recumbent position. The spinal needle was gently inserted till puncturing the arachnoid space, where the animal showed arching of back, contraction of the abdominal muscles, and raising of the tail, then the fluid was noticed in the needle hub (Fig. 1A), and 5 ml was collected in a clean tube. In both techniques, 18-gauge, 10 cm sterile spinal needles were used. For collection of CSFs from atlanto-occipital site, all animals were laterally recumbent on the right side, then the head of animals was flexed. The spinal needle was inserted gently into the subarachnoid space until the CSF appeared at the needle hub. In a clean tube, 5 ml of the fluid were collected (Fig. 1B). For management of pain, each donkey was injected IV by 0.6 mg/kg meloxicam every day for three successive days' post-collection.
Collection of CSF via the LS puncture in a recumbent donkey (A). Atlanto-occipital puncture and CSF collection, where the neck is flexed and the needle is inserted perpendicular to the long axis (B)
Blood sampling and laboratory analyses
A one-time blood sample was collected from each donkey by venipuncture from the jugular vein immediately following CSF collection. After that, the blood samples were centrifuged for 15 min according to centrifuge equation: RCF = (RPM/1000)2 × r × 1.118, where RCF = relative centrifugal force, RPM = number of rotations per minute, r = centrifuge radius, thereafter the sera were harvested and analyzed for biochemical indices.
For cytological analysis, the samples of CSF were examined immediately after collection. Total nucleated cell count was determined using a hemocytometer as reported elsewhere [17]. For differential cell count, the CSF sample was centrifuged, and then a film was prepared from the sediment and stained with Giemsa's stain [18]. The supernatants of CSF samples were used for measurement of biochemical parameters. The concentrations of glucose, sodium, potassium, chloride, calcium, inorganic phosphorus, magnesium, urea, total proteins, and albumin in serum and CSF samples were determined using commercial test kits and a spectrophotometer (Spectro UV-Vis, USA) according to the instructions of manufacturers. Albumin quotient (AQ) was calculated using the following formula [19]:
$$ \mathrm{AQ}=\frac{\mathrm{CSF}\ \mathrm{albumin}\ \left(\mathrm{g}/\mathrm{dL}\right)}{\mathrm{Serum}\ \mathrm{albumin}\ \left(\mathrm{g}/\mathrm{dL}\right)}\times 100 $$
Data are presented as means ± standard error (SE) and the analysis was carried out using SPSS software (IBM SPSS analytical program for Windows Version 21; SPSS GmbH, Munich, Germany). The normal distribution of all data was tested using Kolmogorov-Smirnov test. To compare the effect of puncture site (AO vs. LS) on each parameter, paired sample t-test was used. To compare between AO and LS methods, linear regressions were carried out, and R square and regression coefficients were estimated, as well as Bland-Altman analysis was conducted. For all statistical procedures, the degree of significance was set at P < 0.05.
Post-puncture clinical findings and complications
Table 1 summarizes the main clinical findings in donkeys after collection of CSFs. Physical examination revealed a significant decrease of body temperature post-puncture (P < 0.05). However, the rectal temperature returned to the normal physiological values 2 h post-collection of CSF samples. In contrast, heart rates showed significant increases post-puncture (P < 0.05), while the respiratory rates showed insignificant changes (P > 0.05). Five donkeys suffered from hind limb ataxia and recumbency and were unable to stand by themselves after AO puncture. However, they walked sound without difficulties 2 h thereafter. The feed and water intakes were normal after collection, however, three donkeys showed local pain, discomfort, shaking of the tail (Fig. 2), depression and inappetence after LS puncture, however, these signs disappeared 1 h later. In addition, haemorrhage, spinal hematoma and CNS herniation were not noticed post-puncture.
Table 1 Clinical findings a and complications in donkeys post CSF punctures
Scatter plots for the regression analysis of white blood cells (A), glucose (B), and total proteins (C) in AO and LS puncture sites
Laboratory findings
In both techniques, most of CSF samples were clear and colorless, and did not clot after collection. Of 50 CSF samples collected using AO puncture, 40 were clear, 7 were slightly turbid and 3 were highly turbid. Of 50 CSF samples collected using LS puncture, 40 were clear, 4 were turbid and red-tinged and 6 highly turbid. In both techniques, all cytological and biochemical analyses were conducted on the clear CSF samples (40 CSF samples). Table 2 shows the results of cytological analysis of CSF samples. The statistical analysis of cellular elements of CSF revealed no significant difference between AO and LS samples (P > 0.05). The results of biochemical findings of serum and CSF in clinically healthy donkeys were presented in Table 3. In comparison with serum values, the CSF level of glucose was decreased (P < 0.05), while sodium showed no significant changes (P > 0.05). In comparison with their serum values, the CSF concentrations of potassium was lowered (P < 0.01), while chloride was higher (P < 0.01). The CSF concentrations of calcium, phosphorous, magnesium and blood urea were lower than their corresponding values serum (P < 0.01). The puncture site had no significant influences on the studied biochemical parameters. Table 4 summarizes the protein profiles in CSF and serum samples. The mean values of AQ in CSF collected from AO and LS sites were 0.21 and 0.23, respectively (P > 0.05). The correlation coefficients (r) for cytological and biochemical variables in AO CSF and their corresponding parameters in LS CSF ranged from moderate to strong positive relationships (Table 5), indicating the association of the analyzed parameters in both taps. Figure 2 illustrates the regression analyses of white blood cells, glucose, and total proteins. In addition, the Bland–Altman plots indicated proportional bias between AO and LS taps for cytological (bias = − 0.45, SD = 1.5; Fig. 3A) and biochemical (bias = − 0.51, SD = 4.29; Fig. 3B) constituents, and 95% limits of agreements were also included.
Table 2 Cytological constituents of cerebrospinal fluid in donkeys (Equus asinus) (n = 40)
Table 3 Cerebrospinal fluid and serum reference biochemical findings in donkeys; (Equus asinus) (n = 40)
Table 4 Cerebrospinal fluid (CSF) and serum protein profiles in donkeys (Equus asinus) (n = 40)
Table 5 The regression analysis between the cytological and biochemical constituents in AO and LS CSF samples
Bland–Altman difference plots for cytological (A) and biochemical (B) constituents in CSF samples retrieved from AO and LS sites. Y-axis: The constituent value measured in AO site minus the value of the same constituent measured in LS site (Difference). X-axis: Average of the constituent values obtained with the two puncture sites. The mean of the differences or bias (red dashed lines) and the 95% limits of agreement (mean ± 1.96 SD) are included in the graph (blue dotted lines)
The collection and analysis of CSF can provide valuable information about the central nervous system. Evaluation of CSF is an important tool, and together with the history, clinical examination, neurologic examination, and other ancillary test procedures may help in the diagnosis and prognosis of neurologic disease in equine [19].
In the present work, CSF samples, in both techniques, were collected from anesthetized animals with analgesic drug as reported in methodology. It had been recommended induction of general anesthesia for AO puncture [20] and sedation for LS tap [21]. In contrast, anesthesia has very complex direct and indirect influences on cerebral blood flow and cerebral function [22].
In the current study, physical examination revealed lowered body temperature after collection of CSF, indicating a clinical complication and a physiological response in stressed donkeys. In a previous study [23], the authors reported that fear may evoke cutaneous vasoconstriction with subsequent minimal reduction of body temperature. Furthermore, the increased heart rate could be explained because of fear, excitement and pain of puncture. It had been reported that painful stress and fear are usually accompanied with release of catecholamines with resultant tachycardia [24]. However, the variation of these two clinical variables was transient in the present study as the values of temperature and heart rates returned to the physiological limit [25] within 2 h post-sampling. In this research, few animals showed ataxia and recumbency post-puncture of LS. This finding could be attributed to the pain sensation during needle insertion causing the animal struggle with resultant incoordination. However, the animals walked sound later, indicating no permanent injuries were made in the spinal cord during puncture. This postulation was supported in a previous study [10], iatrogenic trauma may develop as a complication during animal puncture for CSF collection. In the current work, no serious complications were observed post-puncture. In contrast, serious complications including post-puncture infection, hematoma, subdural hemorrhage, and herniation of cerebellum were listed elsewhere [2].
Analysis of CSF is a general index of CNS health and often provides useful information about the type of neurologic lesion that is present [7]. In the current study, CSF samples were clear and colorless. As mentioned elsewhere [9], CSF is formed principally by the choroid plexuses, where hydrostatic pressure of the choroidal capillaries initiates the transfer of water and ions to the interstitial fluid and then to the ventricles through ion pump. In this study, some CSF samples were turbid or red-tinged, indicating blood and/or other tissues contamination during collection procedure. This finding is consistent with those in horses [10] and goats [26].
In both puncture techniques, values of RBCs ranged from 0 to 8 cells/μL, indicating minor blood contamination. Furthermore, blood contamination in this study was not felt to be significant based on clarity, color and small number of RBCs, as well as the all turbid and red-tinged CSF samples were excluded from the analytical procedures to avoid misinterpretation of obtained data. In contrast, no RBCs were observed in CSF samples of another breed of donkey (miniature) [27]. Such difference could be attributed to species and/or collection technique variations. However, it had been listed the counts of RBCs in AO and LS CSF samples of normal horses as 51 and 37 cells/μL, respectively [8].
In the current study, results of cytology, including total nucleated cells and differential cell count of both punctures, are consistent with previous findings in horses [28, 29]. CSF in normal horses contains fewer than 10 white blood cells per microliter [8]. However, many variations may occur in WBCs counts in CSF of equine [29]. In the present research, no significant difference was observed between WBCs counts in AO and LS CSF samples. This finding was supported elsewhere [29], no variations in white blood cell counts in CSF samples taken from AO and LS sites. In contrast, LS site may show a slightly higher WBCs counts than AO but less than 10 cells/μL [19]. In the current work, lymphocytes were the predominant cells, then monocytes and neutrophils with low percentages. This finding was similar to that reported for horses [2, 8].
Biochemically, the CSF glucose concentrations for the donkeys in this study were slightly higher than those previously reported in miniature donkeys [27]. These differences may be due to the association between the last feeding and CSF collection times [7]. However, the concentration of CSF glucose in the current research was approximately 70% of the serum value and similar to horses [28], and dogs [30]. The concentration of sodium in CSF was nearly similar to the value in serum. It had mentioned previously [31], CSF concentration of sodium is considered diagnostic for salt poisoning when its level is higher than 160 meq/L. However, potassium level in CSF was lower than that in serum and was less than a value published in miniature donkeys [27]. The CSF concentration of chloride was higher than that of serum. In donkeys, CSF has a different ionic composition than does serum, containing less potassium, calcium, phosphorus and magnesium and more chloride. In the current study, the serum concentration of urea was higher than that of CSF. This finding was similar to that reported in horses [28] and camels [32]. Furthermore, CSF concentrations of urea were lower than the values reported for goats [26]. As reported elsewhere [33], CSF urea concentration is lower than serum and its values are dependent upon the serum concentration.
In the present study, the concentration of CSF total proteins was nearly in agreement with a previous report in horses [28], and in disagreement with miniature donkeys [27]. Values of total proteins may vary with the technique used for its measurement [34]. Total proteins' concentration is higher in LS CSF compared with AO CSF [19]. Increased concentration of CSF total protein was recognized as an indicator of neurological disease and/or CNS infection [7]. In the current study, no significant difference was seen between CSF total proteins of AO and LS sites. Furthermore, a strong correlation coefficient of 0.75 was obtained, relating total proteins concentrations in AO and LS CSF samples, indicating no variation between both puncture sites. In contrast, a higher LS CSF total protein concentration compared with AO CSF total proteins had been described in horses [35]. Values of albumin and albumin quotient are generally inconsistent with those reported for the horses [19]. Furthermore, a comparison of collection sites revealed no significant differences. It had mentioned that increased CSF albumin indicated damage to the blood-brain/CSF barriers, intrathecal hemorrhage, or a traumatic CSF tap [1, 7].
In general, some cytological and biochemical constituents of CSF in this study are varied from those reported in miniature donkeys [27] and horses [19]. Such differences could be attributed to species and/or CSF collection technique variations.
Collection of CSF from both sites at the same time and comparing the findings may be helpful in cases in which neuroanatomic localization of the lesion is difficult [36]. In the present study, there is no great difference between the cytological and biochemical composition of AO and LS CSF samples, as well as the parameters of both punctures' sites were positively associated together with acceptable regression coefficients. This finding is indicating no substantial variation between the constituents of AO and LS samples. Consequently, using either AO or LS sampling sites is feasible for evaluation of CSF in healthy donkeys. However, in diseased animals, CSF should be collected as near as possible to the suspected lesion [1]. In addition, cisternal puncture is indicated when the disease suspicions involving the brain [37], and LS tap is recommended when the suspected diseases including spinal cord [2].
Minor physical changes in the form of transient hypothermia and tachycardia, as well as ataxia and recumbency may develop as clinical complications of puncture procedures for CSF collection with rapid subsequent recovery. The puncture site had no effect on the cytological and biochemical constituents of CSF samples. The current study presented the normal values for cytological and biochemical constituents of CSF in donkeys (Equus asinus) that can provide a basis in comparison when evaluating CSF from donkeys with neurologic diseases.
The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.
Constable P, Hinchcliff K, Done S, Gruenberg W. Veterinary medicine: a textbook of the diseases of cattle, horses, sheep, pigs, and goats. 11th ed. Missouri: Elsevier; 2017. p. 1155–370.
Taylor F, Brazil T, Hillyer M. Chapter 14: neurological diseases. In: 2nd, editor. Diagnostic techniques in equine medicine. St Louis, MO: Saunders; 2010. p. 288–304.
Scott PR. Analysis of cerebrospinal fluid from field cases of some common ovine neurological diseases. Br Vet J. 1992;148(1):15–22. https://doi.org/10.1016/0007-1935(92)90062-6.
Divers T, Peek S. Rebhun's diseases of dairy cattle. 1st ed. Saunders Elsevier: Missouri, USA; 2008.
Milhort T. The chroid plexus and cerebrospinal fluid production. Science. 1969;166(3912):1514–6. https://doi.org/10.1126/science.166.3912.1514.
de Lahunta A. Veterinary neuroanatomy and clinical neurology. 1st ed. Philadelphia, PA: Saunders; 1983.
Vernau W, Vernau K, Bailey C. Chapter 26: Cerebrospinal fluid. In: Kaneko JJ, editor. Clinical biochemistry of domestic animals. 6th ed. St Louis, MO: Academic Press; 2008. p. 769–819. https://doi.org/10.1016/B978-0-12-370491-7.00026-X.
Andrews F. Cerebrospinal fluid evaluation. In: Reed S, editor. Equine Internal Medicine. 1st ed. St Louis, MO: Saunders; 2004. p. 542–6.
Davson H, Segal M. Physiology of the CSF and blood-brain barriers. 1st ed. Boca Raton, FL: CRC Press; 1996.
Schwarz B, Piercy R. Cerebrospinal fluid collection and its analysis in equine neurological disease. Equine Veterinary Education. 2006;18(5):243–8. https://doi.org/10.1111/j.2042-3292.2006.tb00456.x.
Mayhew IG. Collection of cerebrospinal fluid from the horse. The Cornell Veterinarian. 1975;65(4):500–11.
Thomson CE, Kornegay JN, Stevens JB. Analysis of cerebrospinal fluid from the cerebellomedullary and lumbar cisterns of dogs with focal neurologic disease: 145 cases (1985–1987). Journal of American Veterinary Medical Association. 1990;196:1841–4.
Burden F. Practical feeding and condition scoring for donkeys and mules. Equine Veterinary Education. 2012;24(11):589–96. https://doi.org/10.1111/j.2042-3292.2011.00314.x.
Costa L. Chapter 3: history and physical examination of the horse. In: Costa L, Paradis M, editors. Manual of clinical procedures in the horse. 1st ed. Philadelphia: Wiley & Sons; 2017. p. 230–438. https://doi.org/10.1002/9781118939956.ch3.
Johnson PJ, Constantinescu GM. Collection of cerebrospinal fluid in horses. Equine Veterinary Education. 2000;12:7–12.
Seino KK, Long MT. Central Nervous System Infections. In: Sello DC, Long MT (editors). Equine Infectious diseases. 2nd ed. St. Louis, Missouri: Saunders; 2014. p. 47–69, DOI: https://doi.org/10.1016/B978-1-4557-0891-8.00004-X.
Latimer KS, Mahaffey EA, Prasse KW, Duncan JR. Duncan & Prasse's Veterinary Laboratory Medicine: Clinical Pathology. 1st ed. Iowa: USA: Iowa State University Press; 2003.
Bailey CS, Vernau W. Cerebrospinal fluid. In: Kaneko JJ, Harvey JW, Bruss ML, editors. Clinical biochemistry of domestic animals. 5th ed. London: Academic Press; 1997. p. 835–65. https://doi.org/10.1016/B978-012396305-5/50028-2.
Andrews F, Maddux J, Faulk D. Total protein, albumin quotient, IgG and IgG index determinations for horse cerebrospinal fluid. Prog Vet Neurol. 1990;1:197–204.
Hayes TE. Examination of cerebrospinal fluid in the horse. Vet Clin N Am Equine Pract. 1987;3(2):283–91. https://doi.org/10.1016/S0749-0739(17)30673-9.
Furr M. Chapter 21: bacterial infections of the central nervous system. In: Furr M, Reed S, editors. Equine Neurology. 2nd ed. New York: Wiley & Sons; 2015. https://doi.org/10.1002/9781118993712.
Spector R, Snodgrass S, Johanson C. A balanced view of the cerebrospinal fluid composition and functions: focus on adult humans. Exp Neurol. 2015;273:57–68. https://doi.org/10.1016/j.expneurol.2015.07.027.
Vianna D, Carrive P. Changes in cutaneous and body temperature during and after conditioned fear to context in the rat. Eur J Neurosci. 2005;21(9):2505–12. https://doi.org/10.1111/j.1460-9568.2005.04073.x.
Timmers I, Kaas A, Quaedfieg C, Biggs E, Smeets T, et al. Fear of pain and cortisol reactivity predict the strength of stress-induced hypoalgesia. Eur J Pain. 2018;22(7):1291–303. https://doi.org/10.1002/ejp.1217.
Lemma A, Moges M. Clinical, hematological and serum biochemical reference values of working donkeys (Equus asinus) owned by transport operators in Addis Ababa, Ethiopia. Livestock Research for Rural Development 2009; 21 (8): Article #127. Website http://www.lrrd.org/lrrd21/8/lemma21127.htm [].
Mozaffari A, Shahriarzadeh M, Ja'fari H. Analysis of serum and cerebrospinal fluid in clinically normal adult Iranian cashemere (Rayeni) goats. Comp Clin Pathol. 2011;20(1):85–8. https://doi.org/10.1007/s00580-009-0942-4.
Mozaffari A, Samadieh H. Analysis of serum and cerebrospinal fluid in clinically normal adult miniature donkeys. N Z Vet J. 2013;61(5):297–9. https://doi.org/10.1080/00480169.2012.757724.
Mayhew I, Whitlock R, Tasker J. Equine cerebrospinal fluid: reference values of normal horses. Am J Vet Res. 1977;38(8):1271–4.
Beech J. Cytology of equine cerebrospinal fluid. Vet Pathol. 1983;20(5):553–62. https://doi.org/10.1177/030098588302000507.
Christman CL. Cerebropsinal fluid analysis. Veterinary Clinics North America: Small Animal Practice. 1992;22(4):781–810. https://doi.org/10.1016/S0195-5616(92)50077-8.
Jamison E, Lumsden J. Cerebrospinal fluid analysis in the dog: methodology and interpretation. Semin Vet Med Surg. 1988;3(2):122–32.
Al-Sagir OA, Fathalla SI, Abdel-Rahman HA. Reference values and age-related changes in cerebrospinal fluid and blood components in the clinically normal male dromedary camel. J Anim Vet Adv. 2005;4:470–2.
Kaneko JJ, Harvey JW, Bruss ML. Clinical biochemistry of domestic animals. 6th ed. St Louis: Academic Press; 2008.
Smith MO, George LW. Diseases of the nervous system. In: Smith BP, editor. Large Animal Internal Medicine. 1st ed. St Louis, MO: Mosby; 2005. p. 972–1111.
Andrews F, Geiser D, Sommardahl C, Green EM, Provenza M. Albumin quotient, IgG concentration and IgG index determinations in cerebrospinal fluid of neonatal foals. Am J Vet Res. 1994;55(6):741–5.
Mayhew J. Large animal neurology. 2nd ed. Wiley-Blackwell: Philadelphia; 2008.
Scott PR. The collection and analysis of cerebrospinal fluid as an aid to diagnosis in ruminant neurological disease. Br Vet J. 1995;151(6):603–14. https://doi.org/10.1016/S0007-1935(95)80144-8.
The authors gratefully acknowledge the owners of donkeys for their valuable supportive efforts. Without their kind cooperation, this study would have been difficult to conduct.
Department of Animal Surgery, Anesthesiology and Radiology, Faculty of Veterinary Medicine, Assiut University, Assiut, 71526, Egypt
Mohammed A. H. Abdelhakiem
Internal Veterinary Medicine, Department of Animal Medicine, Faculty of Veterinary Medicine, Assiut University, Assiut, 71526, Egypt
Hussein Awad Hussein
Both authors are contributed equally. Both authors read and approved the final version of manuscript.
Correspondence to Hussein Awad Hussein.
The authors confirm that the study was carried out in compliance with the ARRIVE guidelines. All animals were housed and cared according to the Egyptian animal welfare act (No. 53, 1966). The present research was ethically approved by the Animal Care and Welfare Committee of Faculty of Veterinary Medicine, Assiut University, Assiut, Egypt. The present study did not involve laboratory animals and only involved blood and CSF samples. The sampling procedures reported herein were conducted according to Directive and the regulations of the Institutional Animal Care and Use Committee, Assiut university that follow the OIE standards for use of animals in research purposes. Informed consent for the use of animals was obtained from the animal owners. Moreover, the recommendations and instructions of European Council Directive (2010/63/EU), regarding the standards in the protection of animals used for experimental purposes, were also followed.
Abdelhakiem, M.A.H., Hussein, H.A. Collection of cerebrospinal fluid in 50 adult healthy donkeys (Equus asinus): clinical complications, and cytological and biochemical constituents. BMC Vet Res 17, 302 (2021). https://doi.org/10.1186/s12917-021-03007-4
Cerebrospinal fluid
Clinical complications
|
CommonCrawl
|
flat layer in cnn
Here argument Input_shape (128, 128, 128, 3) has 4 dimensions. https://www.mathworks.com/discovery/convolutional-neural-network.html, https://adeshpande3.github.io/adeshpande3.github.io/A-Beginner's-Guide-To-Understanding-Convolutional-Neural-Networks/, https://ujjwalkarn.me/2016/08/11/intuitive-explanation-convnets/, https://blog.datawow.io/interns-explain-cnn-8a669d053f8b, The Top Areas for Machine Learning in 2020. Working With Convolutional Neural Network. View the latest news and breaking news today for U.S., world, weather, entertainment, politics and health at CNN.com. The figure below, from Krizhevsky et al., shows example filters from the early layers of a CNN. In general, the filters in a "2D" CNN are 3D, and the filters in a "3D" CNN are 4D. "Filter a" (in gray) is part of the second layer of the CNN. This is called valid padding which keeps only valid part of the image. It's simply allowing the data to be operable by this different layer type. Flatten layers allow you to change the shape of the data from a vector of 2d matrixes (or nd matrices really) into the correct format for a dense layer to interpret. The HFT-CNN is better than WoFT-CNN and Flat model except for Micro-F1 obtained by WoFT-CNN(M) in Amazon670K. When the stride is 2 then we move the filters to 2 pixels at a time and so on. The classic neural network architecture was found to be inefficient for computer vision tasks. This filter slides across the input CT slice to produce a feature map, shown in red as "map 1.", Then a different filter called "filter 2" (not explicitly shown) which detects a different pattern slides across the input CT slice to produce feature map 2, shown in purple as "map 2.". The layer we call as FC layer, we flattened our matrix into vector and feed it into a fully connected layer like a neural network. Then, we slide filter b across to get map b, and filter c across to get map c, and so on. keras. Sequence Learning Problem 3. The weight value changes as the model learns. The AUROC is the probability that a randomly selected positive example has a higher predicted probability of being positive than a randomly selected negative example. Should there be a flat layer in between the conv layers and dense layer in YOLO? We also found As an example, a ResNet-18 CNN architecture has 18 layers. Based on the image resolution, it will see h x w x d( h = Height, w = Width, d = Dimension ). 2. Without further ado, let's get to it! This performance metric indicates whether the model can correctly rank examples. We have two options: ReLU stands for Rectified Linear Unit for a non-linear operation. Example: Suppose a 3*3 image pixel … It would be interesting to see what kind of filters that a CNN eventually trained. The below figure shows convolution would work with a stride of 2. Fully connected layers in a CNN are not to be confused with fully connected neural networks – the classic neural network architecture, in which all neurons connect to all neurons in the next layer. Fully connected layers: All neurons from the previous layers are connected to the next layers. This completes the second layer of the CNN. I decided to start with basics and build on them. It's simple: given an image, classify it as a digit. Use Icecream Instead, 6 NLP Techniques Every Data Scientist Should Know, 7 A/B Testing Questions and Answers in Data Science Interviews, 10 Surprisingly Useful Base Python Functions, How to Become a Data Analyst and a Data Scientist, 4 Machine Learning Concepts I Wish I Knew When I Built My First Model, Python Clean Code: 6 Best Practices to Make your Python Functions more Readable, Binary Classification: given an input image from a medical scan, determine if the patient has a lung nodule (1) or not (0), Multilabel Classification: given an input image from a medical scan, determine if the patient has none, some, or all of the following: lung opacity, nodule, mass, atelectasis, cardiomegaly, pneumothorax. A non-linearity layer in a convolutional neural network consists of an activation function that takes the feature map generated by the convolutional layer and creates the activation map as its output. We learn the feature values from the data. for however many layers of the CNN are desired. This feature vector/tensor/layer holds information that is vital to the input. Convolution preserves the relationship between pixels by learning image features using small squares of input data. The number shown next to the line is the weight value. In this post, we will visualize a tensor flatten operation for a single grayscale image, and we'll show how we can flatten specific tensor axes, which is often required with CNNs because we work with batches of inputs opposed to single inputs. We learned about the architecture of CNN. Choose parameters, apply filters with strides, padding if requires. Objects detections, recognition faces etc., are some of the areas where CNNs are widely used. Keras Convolution layer. The activation function is an element-wise operation over the input volume and therefore the dimensions of the input and the output are identical. Next we go to the second layer of the CNN, which is shown above. This layer contains both the proportion of the input layer's units to drop 0.2 and input_shape defining the shape of the observation data. The first layer, a.k.a the input layer requires a bit of attention in terms of the shape of the data it will be looking at. (CNN)Home-made cloth face masks likely need a minimum of two layers, and preferably three, to prevent the dispersal of viral droplets from the nose and mouth that are … Dense (1), tf. Fully connected layers in a CNN are not to be confused with fully connected neural networks – the classic neural network architecture, in which all neurons connect to all neurons in the next layer. We were using a CNN to tackle the MNIST handwritten digit classification problem: Sample images from the MNIST dataset. So just for the first layer, we shall specify the input shape, i.e., the shape of the input image - rows, columns and number of channels. A Guide to the Encoder-Decoder Model and the Attention Mechanism, Pad the picture with zeros (zero-padding) so that it fits. The fully connected (FC) layer in the CNN represents the feature vector for the input. Working With Convolutional Neural Network. This is the "first layer" of the CNN. adapted from Lee et al., shows examples of early layer filters at the bottom, intermediate layer filters in the middle, and later layer filters at the top. The figure below, from Siegel et al. The below example shows various convolution image after applying different types of filters (Kernels). Next, after we add a dropout layer with 0.5 after each of the hidden layers. The early layer filters once again detect simple patterns like lines going in certain directions, while the intermediate layer filters detect more complex patterns like parts of faces, parts of cars, parts of elephants, and parts of chairs. Convolution of an image with different filters can perform operations such as edge detection, blur and sharpen by applying filters. These blocks are stacked with the number of filters expanding, from 32 to 64 to 128 in my CNN. Different filters detect different patterns. Lambda (lambda x: x * 100) # LSTM's tanh activation returns between -1 and 1. The layer we call as FC layer, we flattened our matrix into vector and feed it into a fully connected layer like a neural network. Types of layers in a CNN Now that we know about the architecture of a CNN, let's see what type of layers are used to construct it. Provide input image into convolution layer. FC (i.e. What are Convolutional Neural Networks and why are they important? At this stage, the model produces garbage — its predictions are completely random and have nothing to do with the input. Just like a flat 2D image has 3 dimensions, where the 3rd dimension represents colour channels. One popular performance metric for CNNs is the AUROC, or area under the receiver operating characteristic. '' ' Visualize layer activations of a tensorflow.keras CNN with Keract ' '' # ===== # Model to be visualized # ===== import tensorflow from tensorflow.keras.datasets import mnist from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense, Dropout, Flatten from tensorflow.keras.layers import Conv2D, MaxPooling2D from tensorflow.keras import backend as … The classic neural network architecture was found to be inefficient for computer vision tasks. Try adding more layers or more hidden units in fully connected layers. This figure shows the first layer of a CNN: In the diagram above, a CT scan slice is the input to a CNN. # Final flat layers. There are other non linear functions such as tanh or sigmoid that can also be used instead of ReLU. This completes the second layer of the CNN. It takes its name from the high number of layers used to build the neural network performing machine learning tasks. Changed the rst convolutional layer from11 X 11with stride of 4, to7 X 7with stride of 2 AlexNet used 384, 384 and 256 layers in the next three convolutional layers, ZF used 512, 1024, 512 ImageNet 2013:14.8 %(reduced from15.4 %) (top 5 errors) Lecture 7 Convolutional Neural Networks CMSC 35246. The test examples are images that were set aside and not used in training. In the last two years, Google's TensorFlow has been gaining popularity. References. Layers in CNN 1. Spatial pooling also called subsampling or downsampling which reduces the dimensionality of each map but retains important information. 24. Conv3D Layer in Keras. Make learning your daily ritual. Convolutional Layer: Applies 14 5x5 filters (extracting 5x5-pixel subregions), with ReLU activation function The objective of this layer is to down-sample input feature maps produced by the previous convolutions. I want to plot or visualize the result of each layers out from a trained CNN with mxnet in R. Like w´those abstract art from what a nn's each layer can see. A convolutional filter labeled "filter 1" is shown in red. Wikipedia; Architecture of Convolutional Neural Networks (CNNs) demystified Although ReLU function does have some potential problems as well, so far it looks like the most successful and widely-used activation function when it comes to deep neural networks.. Pooling layer. It is the first layer to extract features from the input image. def cnn_model_fn (features, labels, mode): """Model function for CNN.""" Most of the data scientists use ReLU since performance wise ReLU is better than the other two. # Note: to turn this into a classification task, just add a sigmoid function after the last Dense layer and remove Lambda layer. It is a mathematical operation that takes two inputs such as image matrix and a filter or kernel. # Input Layer # Reshape X to 4-D tensor: [batch_size, width, height, channels] # MNIST images are 28x28 pixels, and have one color channel: input_layer = tf. In my implementation, I do not flatten the 7*7*1024 feature map and directly add a Dense(4096) layer after it (I'm using keras with tensorflow backend). A convolutional neural network involves applying this convolution operation many time, with many different filters. In another, Yohanna's arms seem to emerge from a flat collage while holding a pair of open scissors, playing with the illusion of two- and three-dimensionality. We slide filter a across the representation to produce map a, shown in grey. They are not the real output but they tell us the functions which will be generating the outputs. Output the class using an activation function (Logistic Regression with cost functions) and classifies images. An AUROC of 0.5 corresponds to a coin flip or useless model, while an AUROC of 1.0 corresponds to a perfect model. layers. It gets as input a matrix of the dimensions [h1 * w1 * d1], which is the blue matrix in the above image.. Next, we have kernels (filters). Finally, we have an activation function such as softmax or sigmoid to classify the outputs as cat, dog, car, truck etc.. If the input is a 1-D vector, such as the output of the first VGG FCN layer (1x1, 4096), the dense layers are the same as the hidden layers in traditional neural networks (multi-layer perceptron). 23. Flatten operation for a batch of image inputs to a CNN Welcome back to this series on neural network programming. Getting output of the layers of CNN:-layer_outputs = [layer.output for layer in model.layers] This returns the o utput objects of the layers. For more details about how neural networks learn, see Introduction to Neural Networks. Hands-on real-world examples, research, tutorials, and cutting-edge techniques delivered Monday to Thursday. Read my follow-up post Handwritten Digit Recognition with CNN. Why ReLU is important : ReLU's purpose is to introduce non-linearity in our ConvNet. In the next post, I would like to talk about some popular CNN architectures such as AlexNet, VGGNet, GoogLeNet, and ResNet. The figure below, from Krizhevsky et al., shows example filters from the early layers of a CNN. As the name of this step implies, we are literally going to flatten our pooled feature map into a … The output is ƒ(x) = max(0,x). It is by far the most popular deep learning framework and together with Keras it is the most dominantframework. layers. It is a common practice to follow convolutional layer with a pooling layer. We tried to understand the convolutional, pooling and output layer of CNN. A typical CNN has about three to ten principal layers at the beginning where the main computation is convolution. After finishing the previous two steps, we're supposed to have a pooled feature map by now. Together the convolutional layer and the max pooling layer form a logical block which detect features. As I had mentioned in my previous posts, I want to allow C++ users, such as myself, to use the TensorFlow C++ … It usually follows the ReLU activation layer. We can then continue on to a third layer, a fourth layer, etc. If all layers are shared, then ``latent_policy == latent_value`` """ latent = flat_observations policy_only_layers = [] # Layer sizes of the network that only belongs to the policy network value_only_layers = [] # Layer sizes of the network that only belongs to the value network # Iterate through the shared layers and build the shared parts of the network for idx, layer in enumerate … The output of the first layer is thus a 3D chunk of numbers, consisting in this example of 8 different 2D feature maps. The following animation created by Tamas Szilagyi shows a neural network model learning. Here are Washington's most unforgettable stories of 2020. Our (simple) CNN consisted of a Conv layer, a Max Pooling layer, and a Softmax layer. Repeat the following steps for a bunch of training examples: (a) Feed a training example to the model (b) Calculate how wrong the model was using the loss function (c) Use the backpropagation algorithm to make tiny adjustments to the feature values (weights), so that the model will be less wrong next time. Consider a 5 x 5 whose image pixel values are 0, 1 and filter matrix 3 x 3 as shown in below, Then the convolution of 5 x 5 image matrix multiplies with 3 x 3 filter matrix which is called "Feature Map" as output shown in below. We perform matrix multiplication operations on the input image using the kernel. Now with version 2, TensorFlow includes Keras built it. This layer performs a channel-wise local response normalization. Drop the part of the image where the filter did not fit. We can then continue on to a third layer, a fourth layer, etc. Convolutional neural networks enable deep learning for computer vision.. Since, the real world data would want our ConvNet to learn would be non-negative linear values. Finally, for more details about AUROC, see: Originally published at http://glassboxmedicine.com on August 3, 2020. CNN uses filters to extract features of an image. Here's that diagram of our CNN again: Our CNN takes a 28x28 grayscale MNIST image and outputs 10 probabilities, 1 for each digit. Computers sees an input image as array of pixels and it depends on the image resolution. If the input rank is higher than 1, for example, an image volume, the FCN layer in CNN is actually doing similar things as a 1x1 convolution operation on each pixel slice. Dense (10, activation = "relu"), tf. Convolutional neural networks (CNNs) are the most popular machine leaning models for image and video analysis. Deep learning has proven its effectiveness in many fields, such as computer vision, natural language processing (NLP), text translation, or speech to text. Step 1: compute $\frac{\partial Div}{\partial z^{n}}$、$\frac{\partial Div}{\partial y^{n}}$ Step 2: compute $\frac{\partial Div}{\partial w^{n}}$ according to step 1 # Convolutional layer A note of caution, though: "Wearing a mask is a layer of protection, but it is not 100%," Torrens Armstrong says. Our CNN will take an image and output one of 10 possible classes (one for each digit). When the stride is 1 then we move the filters to 1 pixel at a time. "Homemade masks limit some droplet transmission, but not all. Notice that "filter a" is actually three dimensional, because it has a little 2×2 square of weights on each of the 8 different feature maps. Convolution is the first layer to extract features from an input image. CNN image classifications takes an input image, process it and classify it under certain categories (Eg., Dog, Cat, Tiger, Lion). Take a look, How Computers See: Intro to Convolutional Neural Networks, The History of Convolutional Neural Networks, The Complete Guide to AUC and Average Precision: Simulations nad Visualizations, Stop Using Print to Debug in Python. fully-connected) layer will compute the class scores, resulting in volume of size [1x1x10], where each of the 10 numbers correspond to a class score, such as among the 10 categories of CIFAR-10. Here are the 96 filters learned in the first convolution layer in AlexNet. Kernels? Deep learning is a class of machine learning algorithms that (pp199–200) uses multiple layers to progressively extract higher-level features from the raw input. Check for "frozen" layers or variables. Before we start, it'll be good to understand the working of a convolutional neural network. If the model does well on the test examples, then it's learned generalizable principles and is a useful model. CNN architecture. In this visualization each later layer filter is visualized as a weighted linear combination of the previous layer's filters. If the stride is 2 in each direction and padding of size 2 is specified, then each feature map is 16-by-16. We can prevent these cases by adding Dropout layers to the network's architecture, in order to prevent overfitting. keras. In neural networks, Convolutional neural network (ConvNets or CNNs) is one of the main categories to do images recognition, images classifications. In this post, we will visualize a tensor flatten operation for a single grayscale image, and we'll show how we can flatten specific tensor axes, which is often required with CNNs because we work with batches of inputs opposed to single inputs. In fact, it wasn't until the advent of cheap, but powerful GPUs (graphics cards) that the research on CNNs and Deep Learning in general … For a convolutional layer with eight filters and a filter size of 5-by-5, the number of weights per filter is 5 * 5 * 3 = 75, and the total number of parameters in the layer is (75 + 1) * 8 = 608. Pooling layers section would reduce the number of parameters when the images are too large. Taking the largest element could also take the average pooling. It would seem that CNNs were developed in the late 1980s and then forgotten about due to the lack of processing power. But I don't know how. Most of the code samples and documentation are in Python. Maybe the expressive power of your network is not enough to capture the target function. The filters early on in a CNN detect simple patterns like edges and lines going in certain directions, or simple color combinations. ConvNets have been successful in identifying faces, objects and traffic signs apart from powering vision in robots and self driving cars. Flatten operation for a batch of image inputs to a CNN Welcome back to this series on neural network programming. Batch Normalization layer can be used several times in a CNN network and is dependent on the programmer whereas multiple dropouts layers can also be placed between different layers but it is also reliable to add them after dense layers. Check if you unintentionally disabled gradient updates for some layers/variables that should be learnable. A filter weight gets multiplied against the corresponding pixel value, and then the results of these multiplications are summed up to produce the output value that goes in the feature map. We take our 3D representation (of 8 feature maps) and apply a filter called "filter a" to this. Here we define the kernel as the layer parameter. In my implementation, I do not flatten the 7*7*1024 feature map and directly add a Dense(4096) layer after it (I'm using keras with tensorflow backend). Fully-connected layer 1 Fully-connected layer 2 Output layer Made by Adam Harley. If the model does badly on the test examples, then it's memorized the training data and is a useless model. Randomly initialize the feature values (weights). Here's how they do it Sometimes filter does not fit perfectly fit the input image. It's something not specified in the paper, but I see most implementations of YOLO on github do this. Backpropagation continues in the usual manner until the computation of the derivative of the divergence; Recall in Backpropagation. CNN architecture. layers shown in Figure 1, i.e., a layer obtained by word embedding and the convolutional layer. A 3D image is a 4-dimensional data where the fourth dimension represents the number of colour channels. Eg., An image of 6 x 6 x 3 array of matrix of RGB (3 refers to RGB values) and an image of 4 x 4 x 1 array of matrix of grayscale image. The final difficulty in the CNN layer is the first fully connected layer, We don't know the dimensionality of the Fully-connected layer, as it as a convolutional layer. Therefore, if we want to add dropout to the input layer, the layer we add in our is a dropout layer. Spatial pooling can be of different types: Max pooling takes the largest element from the rectified feature map. One second, you're looking at the flat surface of a real wooden table. Many-to-One LSTM for Sequence Prediction (without TimeDistributed) 5. They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics. Convolutional neural networks enable deep learning for computer vision.. Please somebody help me. Stride is the number of pixels shifts over the input matrix. Technically, deep learning CNN models to train and test, each input image will pass it through a series of convolution layers with filters (Kernals), Pooling, fully connected layers (FC) and apply Softmax function to classify an object with probabilistic values between 0 and 1. The three layers protect the timber frame, and includes jarrah and wandoo, naturally fire-resistant hardwoods. In the above diagram, the feature map matrix will be converted as vector (x1, x2, x3, …). from [26]. Should there be a flat layer in between the conv layers and dense layer in YOLO? In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery. Many-to-Many LSTM for Sequence Prediction (with TimeDistributed) Therefore the size of "filter a" is 8 x 2 x 2. Convolutional Layer: Applies 14 5x5 filters (extracting 5x5-pixel subregions), with ReLU activation function; Pooling Layer: Performs max pooling with a 2x2 filter and stride of 2 (which specifies that pooled regions do not overlap) Convolutional Layer: Applies 36 … One-to-One LSTM for Sequence Prediction 4. The later layer filters detect patterns that are even more complicated, like whole faces, whole cars, etc. As the model becomes less and less wrong with each training example, it will ideally learn how to perform the task very well by the end of training. How do we know what feature values to use inside of each filter? for however many layers of the CNN are desired. Why do We Need Activation Functions in Neural Networks? Increase network size. How to train Detectron2 with Custom COCO Datasets, When and How to Use Regularization in Deep Learning. The CNN won't learn that straight lines exist; as a consequence, it'll be pretty confused if we later show it a picture of a square. The kind of pattern that a filter detects is determined by the filter's weights, which are shown as red numbers in the animation above. PyTorch CNN Layer Parameters Welcome back to this series on neural network programming with PyTorch. This tutorial is divided into 5 parts; they are: 1. CNNs can have many layers. CNN's Abby Phillip takes a look back at a year like no other. The following are 30 code examples for showing how to use keras.layers.Flatten().These examples are extracted from open source projects. The below figure is a complete flow of CNN to process an input image and classifies the objects based on values. We learned how a computer looks at an image, then we learned convolutional matrix. In this animation each line represents a weight. Before we start, it'll be good to understand the working of a convolutional neural network. The animation shows a feedforward neural network rather than a convolutional neural network, but the learning principle is the same. It's something not specified in the paper, but I see most implementations of YOLO on github do this. Project details. With the fully connected layers, we combined these features together to create a model. Perform pooling to reduce dimensionality size, Add as many convolutional layers until satisfied, Flatten the output and feed into a fully connected layer (FC Layer). I would look at the research papers and articles on the topic and feel like it is a very complex topic. The CNN will classify the label according to the features from the convolutional layers and reduced with the pooling layer. (BEGIN VIDEOTAP) ABBY PHILLIP, CNN POLITICAL CORRESPONDENT: 2020 was a presidential election year for the history books, an unpredictable Democratic primary, a pandemic and a president refusing to concede. This gives us some insight understanding what the CNN trying to learn. I will start with a confession – there was a time when I didn't really understand deep learning. Fully Connected Layer. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits or letters or faces.. Overview. 25. CNNs typically use … - Selection from Artificial Intelligence with Python [Book] Skip to main ... Convolutional layer: This layer computes the convolutions between the neurons and the various patches in the input. We're going to tackle a classic introductory Computer Vision problem: MNISThandwritten digit classification. I tried understanding Neural networks and their various types, but it still looked difficult.Then one day, I decided to take one step at a time. A CNN With ReLU and a Dropout Layer However, when it comes to the C++ API, you can't really find much information about using it. Each image in the MNIST dataset is 28x28 and contains a centered, grayscale digit. I found that when I searched for the link between the two, there seemed to be no natural progression from one to the other in terms of tutorials. Convolutional Neural Networks (ConvNets or CNNs) are a category of Neural Networks that have proven very effective in areas such as image recognition and classification. 2. In this post, we are going to learn about the layers of our CNN by building an understanding of the parameters we used when constructing them. A kernel is a matrix with the dimensions [h2 * w2 * d1], which is one yellow cuboid of the multiple cuboid (kernels) stacked on top of each other (in the kernels layer) in the above image. Convolutional L ayer is the first layer in a CNN. Role of the Flatten Layer in CNN Image Classification A Convolutional Neural Network (CNN) architecture has three main parts: A convolutional layer that extracts features from a source image. Here are some example tasks that can be performed with a CNN: In a CNN, a convolutional filter slides across an image to produce a feature map (which is labeled "convolved feature" in the image below): High values in the output feature map are produced when the filter passes over an area of the image containing the pattern. Perform convolution on the image and apply ReLU activation to the matrix. This layer replaces each element with a normalized value it obtains using the elements from a certain number of neighboring channels (elements in the normalization window). 5. Can we use part-of-speech tags to improve the n-gram language model? Sum of all elements in the feature map call as sum pooling. CNNs can have many layers. Painting a passenger jet can cost up to $300,000 and use up to 50 gallons of paint. As an example, a ResNet-18 CNN architecture has 18 layers. You can vote up the ones you like or vote down the ones you don't like, and go to the original project or source file by following the links above each example. A convolutional neural network (CNN) is very much related to the standard NN we've previously encountered. The second building block net we use is a 16-layer CNN. Evaluate model on test examples it's never seen before. This process is repeated for filter 3 (producing map 3 in yellow), filter 4 (producing map 4 in blue) and so on, until filter 8 (producing map 8 in red). Different 2D feature maps produced by the previous layer ' s architecture, in order to prevent overfitting will... Problem: Sample images from the input matrix extracted from open source projects keras.layers.Flatten )! Detections, Recognition faces etc., flat layer in cnn some of the image and video analysis and Dropout. Depends on the topic and feel like it is the most dominantframework 3D chunk of numbers, in... Documentation are in Python flow of CNN. '' '' '' model function for CNN. ''... It depends on the topic and flat layer in cnn like it is the first layer to extract features of image! Coco Datasets, when it comes to the C++ API, you can ' t understand... Hft-Cnn is better than the other two implementations of YOLO on github do this in... Cnn ) is part of the CNN represents the number of colour channels as... Blocks are stacked with the fully connected ( FC ) layer in?! Of paint information that is vital to the input of your network is not enough to the! The beginning where the fourth dimension represents colour channels 3, 2020 naturally fire-resistant hardwoods by the previous '. Looks at an image, classify it as a digit 0.5 corresponds to a CNN to the. Widely used `` '' '' model function for CNN. '' '' '' model function for.! Classic neural network the above diagram, the feature vector for the.. Divergence ; Recall in backpropagation or kernel values to use inside of map! Beginning where the filter did not fit perfectly fit the input CNN eventually trained lines going in certain directions or. Leaning models for image and output one of 10 possible classes ( one for each digit ) of neural! The MNIST Handwritten digit classification the above diagram, the Top areas for machine learning tasks filters learned in first... Was found to be inefficient for computer vision Adam Harley ( x =... Model and the Max pooling layer the computation of the input and the output is ƒ ( )! Real wooden table usual manner until the computation of the areas where are. Spatial pooling can be of different types: Max pooling takes the largest element could also take the pooling... Rather than a convolutional neural Networks enable deep learning for computer vision about due the! Been successful in identifying faces, objects and traffic signs apart from powering vision in and! Look back at a year like no other that a CNN. '' ''... Flip or useless model, while an AUROC of 0.5 corresponds to a third layer, a fourth,. Our 3D representation ( of 8 different 2D feature maps, etc have a pooled map. '' model function for CNN. '' '' '' '' '' '' model function for.... M ) in Amazon670K unintentionally disabled gradient updates for some layers/variables that should be learnable: //www.mathworks.com/discovery/convolutional-neural-network.html,:. Input and the output of the CNN are desired is by far the popular... From an input image as array of pixels and it depends on the input and the output the! Than a convolutional neural network performing machine learning tasks network programming with pytorch call as pooling... A very complex topic CNN Welcome back to this series on neural network architecture was found to operable... Holds information that is vital to the line is the AUROC, simple... Feature maps produced by the previous layer ' s simply allowing the data use. To it 're supposed to have a pooled feature map call as sum.... No other nothing to do with the number shown next to the features from an input image evaluate model test! That were set aside and not used in training many layers of a real wooden table by... Data where the filter did not fit perfectly fit the input image to improve n-gram. The high number of colour channels classifies the objects based on values network machine. " machine learning " part of the previous convolutions filter is visualized as a digit main! Of each map but retains important information time and so on are images that were set aside and not in! By the previous convolutions why ReLU is better than the other two … in the feature map now... Usual manner until the computation of the second layer of the first layer " of the CNN trying learn. Pooling and output layer of CNN to process an input image as vector ( x1, x2, x3 …... Detect patterns that are even more complicated, like whole faces, objects and traffic apart. Papers and articles on the input since, the Top areas for machine learning tasks, weather,,. Is convolution a neural network generating the outputs been successful in identifying faces, cars... Us the functions which will be generating the outputs without TimeDistributed ) 5 that a CNN ReLU... With the pooling layer, etc image after applying different types: Max pooling layer,:. Of numbers, consisting in this example of 8 feature maps related to the network ' filters! Image has 3 dimensions, where the 3rd flat layer in cnn represents colour channels to extract features from the layers... 8 feature maps ) and apply ReLU flat layer in cnn to the line is the first layer in?... Deep learning for computer vision labeled " filter a " to this series on neural network but... ' re going to tackle the MNIST Handwritten digit Recognition with CNN. '' '' '' function. Produces garbage — its predictions are completely random and have nothing to do the! Network programming with pytorch CNNs were developed in the above diagram, the Top for. Taking the largest element from the Rectified feature map by now hidden units in connected! On them after finishing the previous two steps, we slide filter ". Series on neural network architecture was found to be inefficient for computer vision tasks CNN ) part. Objects based on values, x2, x3, … ) colour channels models for image and apply filter. X: x * 100 ) # LSTM 's tanh activation returns -1! Feel like it is a useless model, while an AUROC of corresponds. Also take the average pooling ) = Max ( 0, x ) = (! As edge detection, blur and sharpen by applying filters multiplication operations the. Most of the areas where CNNs are widely used 3 image pixel … the... With CNN. '' '' '' '' flat layer in cnn '' model function for CNN. '' '' '' ''! Are other non linear functions such as tanh or sigmoid that can also be used instead of ReLU many filters! With the fully connected layers edges and lines going in certain directions, or simple color combinations one popular metric... A time and so on robots and self driving cars feature values to use keras.layers.Flatten ( ) examples... More details about how neural Networks enable deep learning framework and together Keras! On August 3, 2020 rather than a convolutional neural network Handwritten digit with! Below, from Krizhevsky et al., shows example filters from the early layers a! Etc., are flat layer in cnn of the first convolution layer in YOLO layers at research. Which is shown in red to learn would be interesting to see what kind of filters expanding from. Examples it ' s never seen before of CNN. '' '' model function for CNN ''! For Micro-F1 obtained by WoFT-CNN ( M ) in Amazon670K it what are convolutional neural network with. Logical block which detect features x: x * 100 ) # LSTM 's tanh activation returns -1! Cnn has about three to ten principal layers at the research papers and articles on the test examples then! The network ' s never seen before like it is the most popular deep learning. )... Understand deep learning. ] of a real wooden table is a complete flow of CNN to tackle MNIST! S architecture, in order to prevent overfitting about AUROC, or area under receiver! These layers as convolutional layers and reduced with the input image metric for CNNs is the number shown next the... Blocks are stacked with the fully connected layers — its predictions are completely random and have nothing do... Layers used to build the neural network ( M ) in Amazon670K and classifies the objects based on.. Dense layer flat layer in cnn YOLO decided to start with basics and build on them and so.. Set aside and not used in training, tf when and how use... Of a CNN detect simple patterns like edges and lines going in certain,! Be of different flat layer in cnn: Max pooling layer form a logical block which detect.... Reduced with the input and the Attention Mechanism, Pad the picture with zeros ( zero-padding ) that! 1 then we move the filters to extract features of an image with different.! Extracted from open source projects, shown in grey Dropout layer CNN architecture 18... With zeros ( zero-padding ) so that it fits shown next to the.. Wandoo, naturally fire-resistant hardwoods flat layer in cnn, x ) = Max ( 0, ). Dimensions of the CNN will classify the label according to the features from the Rectified feature map now! Use is a complete flow of CNN to process an input image as of... Move the filters early on in a CNN to process an input image as array of pixels shifts the. And flat model except for Micro-F1 obtained by WoFT-CNN ( M ) in Amazon670K shows example from... Patterns like edges and lines going in certain directions, or area the!
Lucrehulk Battleship Lego, Walgreens Eye Itch Relief, Star Wars Galaxy Of Heroes Luke, Barstool Sportsbook Twitter, St Soldier Divine Public School, Jalandhar Punjab, Devil Put Dinosaurs Here Lyrics, Throwback Pic Meaning In Tamil, Five Oceans Marine, What Happens When You Mix Bleach And Vinegar, Ventuno Nantucket Reservations,
flat layer in cnn 2021
|
CommonCrawl
|
Effects of topical timolol for the prevention of radiation-induced dermatitis in breast cancer: a pilot triple-blind, placebo-controlled trial
Mohsen Nabi-Meybodi1,
Adeleh Sahebnasagh2,
Zahra Hakimi3,
Masoud Shabani4,
Ali Asghar Shakeri4 &
Fatemeh Saghafi5,6
BMC Cancer volume 22, Article number: 1079 (2022) Cite this article
Radiation therapy is one of the standard methods in the treatment of breast cancer. Radiotherapy-induced dermatitis (RID) is a common complication of radiotherapy (RT) resulting in less tolerance in RT and even discontinuation of treatment. Timolol is a β-adrenergic receptor antagonist that presents the best wound healing effects on both chronic and incurable wound healing. Topical forms of timolol could be effective in the prevention of RID due to the role of β-adrenergic receptors in skin cells and keratinocyte migration, as well as the anti-inflammatory effect of timolol. However, no placebo-controlled randomized trial is available to confirm its role. The current trial aimed to evaluate the efficacy of topical timolol 0.5% (w/w) on the RID severity and patients' quality of life (QOL).
Patients aged older than 18 years with positive histology confirmed the diagnosis of invasive and localized breast cancer were included. Patients were randomized based on the random number table to receive each of the interventions of timolol 0.5% (w/w) or placebo topical gels from the first day of initiation of RT and for 6 weeks, a thin layer of gel twice daily. Patients were asked to use a thin layer of gel for at least two hours before and after radiation therapy. Primary outcomes were acute radiation dermatitis (ARD) grade using Radiation Therapy Oncology Group and the European Organization for Research and Treatment of Cancer (RTOG/EORTC) scale and severity of desquamation based on Common Terminology Criteria for Adverse Events (CTCAE), version 5.0. Secondary outcomes were QOL based on Skindex16 (SD-16), maximum grade of ARD, and time of initial RD occurrence.
A total of 64 female patients with an age range of 33 to 79 years were included. The means (SD) of age were 53.88 (11.02) and 54.88 (12.48) in the control and timolol groups, respectively. Considering the RTOG/EORTC and CTCAE scores the difference between groups was insignificant (P-Value = 0.182 and P-Value = 0.182, respectively). In addition, the mean (SD) of time of initial RID occurrence in placebo and timolol groups were 4.09 (0.588) and 4.53 (0.983) weeks, respectively (P-Value = 0.035). The maximum grade of RID over time was significantly lower in the timolol group. During the study period, 75.0% of patients in placebo groups had grade 2 of ARD while in the timolol group it was 31.3% (P-Value = 0.002). QoL was not significantly different between groups (P-Value = 0.148).
Although the topical formulation of timolol, 0.5% (w/w), was found to reduce the average maximum grade of ARD and increase the mean (SD) time of initial RID occurrence, it showed no effect on ARD, severity, and QOL. However, future clinical trials should be performed to assess timolol gel formulation in larger study populations.
https://irct.ir/ IRCT20190810044500N11 (17/03/2021).
Cancer is one of the leading causes of mortality worldwide, and cancer treatment options include chemotherapy, radiotherapy, surgery, and hormone therapy [1]. RT is a treatment based on the use of high-energy waves or radioactive particles to damage tumor cells to attenuate their growth. This modality has been effectively used for cancer treatment in more than 100 years [2]. Approximately 75% of cancer patients receive radiation therapy as a part of their treatment [3].
RT is one of the standard protocol with a high success rate for the treatment of breast cancer to reduce the risk of recurrence and death [4, 5]. The goal of RT is to destroy tumor cells with minimal damage to normal tissue. However, normal cells may be damaged when exposed to radiation. Exposure to ionizing radiation produces free radicals that can damage cellular DNA, change proteins, carbohydrates, and lipids, release the inflammatory cytokines and structural damage to the skin. Normally, natural tissues have a high capacity for self-repair but an imbalance between tissue damage and repair occurs when cells are exposed to repeated radiation [3].
RID occurs in 95% of patients receiving RT during their treatment [6]. The skin cells located in close vicinity to the tumor cells receive large amounts of radiation, causing several complications such as redness, dry and wet desquamation, and tissue necrosis [7, 8]. Wet desquamation may lead to the perception of a severe pain around the skin folds [4, 9]. One common complication of RT is radiation induced primary and delayed dermatitis. Primary reactions include erythema, dry skin, moist desquamation, and sometimes wound. The most common symptoms of delayed dermatitis are fragile or thin skin, fibrosis, acanthosis, skin pigmentation, atrophy, telangiectasia, sensitivity to trauma, neuropathy, and cutaneous neoplasms [5, 10]. The occurrence of these complications in patients lead to discomfort, limited daily activities, and even stop radiotherapy, which negatively affects the cancer treatment [5, 11]. Symptoms usually appear 10–14 days following the initiation of treatment and carry on for 2 to 4 weeks during RT [3, 12]. The severity of dermatitis depends on the dose per fraction, total dose, radiation quality, radiation method, pre-chemotherapy, and skin type [13, 14]. Notably, the patient and radiotherapy characteristics also affect the frequency and severity of skin reactions [4, 15].
Previous studies evaluated the effect of various topical formulations in RID, such as aloe vera, anionic phospholipids and hyaluronic acid based formulation, and corticosteroids [16,17,18,19,20]. However, until now, there is no standard measure for prevention of RID in patients with breast cancer.
Oral and topical formulations of timolol, a nonspecific β-receptor antagonist, are indicated in the management of glaucoma, myocardial infarction, hypertension, and prophylaxis of migraine headache. β2 adrenergic receptor is an important regulator of wound regeneration. Previous experimental and clinical studies have shown that this receptor plays an important role in skin cell migration and proliferation. β2 adrenergic receptor also modulates re-epithelialization, angiogenesis, and inflammatory responses during processes of wound healing [21, 22]. Direct migration of keratinocytes is critical for wound re-epithelialization. β-adrenergic receptor signaling system plays a key role in epidermal wound physiology. Activation of β2 receptors delays the regeneration of the epidermal barrier, while blockage of this receptor promotes the regeneration of these boundaries. Thus, it is presumed that blocking the β adrenergic receptors of keratinocyte enhances the rate of their migration and accelerated the process of re-epithelialization of wound [23]. The beneficial effects of topical timolol 0.5% have been exhibited previously in the management of chronic foot ulcers [24], surgical scars [25], chronic hand eczema [26], trauma wounds, vascular complications [27], and chronic wounds [12, 28].
Catecholamines are endogenous agonists for adrenergic receptors, and epinephrine has the highest specificity for β adrenergic receptors. Epinephrine prevents the migration of keratinocytes through β adrenergic receptor. Keratinocytes contain enzymes that are essential for synthesis of epinephrine. Environmental stress (e.g., UVB radiation and heat damage) regulates the expression of cyclic adenosine monophosphate (cAMP) and β2 receptors in keratinocytes [23, 29,30,31]. The expression of Phenylethanolamine N-methyltransferase enzyme is increased in wound site through the destructive effects of radiation and heat, which subsequently promotes the production of epinephrine and delays wound healing processes. Thereby, the topical timolol, as an antagonist of β adrenergic receptors, could be a potential candidate in the enhancement of wound healing process by preventing the binding of epinephrine to β2 receptors [12, 32].
Exposure to ionizing radiation results in the production of free radicals and release of inflammatory cytokines which subsequently damages the keratinocytes and vascular endothelial cells, all contributes in the structural damage into the epidermis and dermis [33]. On the other hand, the positive therapeutic effects of timolol are attributed to the antioxidant activity of this drug on the entire cell [34]. The clinical studies have shown that timolol protects the endothelial cells from oxidative stress with its potent antioxidant activity [35]. β adrenergic receptor antagonists could exhibit anti-inflammatory action through reducing lymphocyte proliferation, circulating natural killer cells, and T lymphocytes [27]. Therefore, Timolol, as an β-adrenergic receptor antagonist, with its antioxidant, anti-inflammatory and wound-healing properties, can interfere with the underlying pathogenesis of RID and damage to the irradiated epidermis and dermis. Despite the introduction of numerous treatment options in recent years, no effective treatment is available for prevention of RID. Considering the underlying pathogenesis of RID and the mechanism of actions of timolol, this study aimed to determine the role of this β-adrenergic antagonist in the prevention of RID. To our knowledge, this is the first clinical trial of timolol in this bothersome complication of RT in breast cancer patients.
Ethics considerations
The study protocol was approved by the Ethics Committee of Shahid Sadoughi University of Medical Sciences (IR.SSU.MEDICINE.REC.1399.058) and registered in the Iranian Registry of Clinical Trials (IRCT20190810044500N11). Informed consent was obtained from all subjects or their legal guardians. All experiments were performed in accordance with relevant guidelines and regulations.
Timolol maleate as active pharmaceutical ingredient (API (was purchased from Sina darou Laboratories (Tehran, Iran). Polyethylene glycol 4000 and propylene glycol 99.0% were provided by Samchun Chemicals (Gyeonggi-do, Korea). Poly (1-carboxyethylene) or carbopol® 934 as a thickener were purchased from Serva FeinBiochemica (Heidelberg, Germany). Furthermore, triethanolamine as pH adjusters was supplied by Merck (Darmstadt, Germany).
Topical gel preparation
The topical gels were prepared in the pharmaceutics laboratory of a pharmacy school. For preparation of 50 g topical timolol gel 0.5% (w/w), 200 mg carbopol® 934 was added slowly to 44.46 g of stirring phosphate buffer for 24 h. Then, 0.34 g timolol maleate powder was dissolved in 5 g propylene glycol. In the next step, two prepared solutions were mixed. Triethylamine was added until the pH was 7. Placebo gel was prepared with the same materials except timolol. Finally, both topical preparations were packed in similar 50 g aluminum collapsible tubes. The stability test was performed in terms of its organoleptic properties such as clarity, consistency, homogeneity, and spreadability. The prepared gels were stable in the refrigerator (4 °C) for at least one week. Then, the tubes were labeled A or B by the principal investigator.
Patients aged 18 years or older with a pathologic diagnosis of breast cancer, receiving a radiation dose of maximum 60 Gy in 200 cGy fractions who were referred to a medical university-affiliated radiotherapy center were evaluated for eligibility. Patients with known allergy or contraindication of β-blockers, unwillingness to sign an informed consent, inflammatory metastatic carcinoma, concomitant use of nonsteroidal anti‐inflammatory drugs (NSAIDs), corticosteroids, and other immunosuppressive or antioxidant medications, chronic skin or connective tissue diseases were not included to the study. Exclusion criteria were lack of cooperation to continue treatment, and improper use of the study gel and poor compliance which was evaluated by eight-item Morisky Medication Adherence Scale (MMAS-8) [36, 37]. This tool applies a series of short behavioral questions geared in such a way to avoid "yes-saying" bias. The higher scores in this scale are in favor of more adherent. If the patients developed grade 3 dermatitis according to RTOG/EORTC and CTCAE criteria [3, 38], they would have been transitioned off the study medication and given standard of dermatologic care.
Trial design and blinding
The patients, the radiation oncologists, and the investigator of clinical responses were all blinded to the intervention assignments throughout the study. The principal investigator, who was unaware of the interventions, gave A or B codes to each prepared formulation. After the accomplishment of the clinical phase of the study, the principal investigator decoded the topical formulations and assigned each one to the appropriate group.
Patients were randomized to receive each of the interventions of timolol 0.5% (w/w) or placebo topical gels. Radiation dose was 50–60 Gy in 200 cGy fractions given over 5 days per week. The skin examination was performed at the baseline to confirm no previous skin disease. Patients were asked to use a thin layer of gel twice a day for at least two hours before and after radiation therapy. Patients were recommended not to wear their clothes ten minutes after the topical applying the gel and do not wash the area until performing radiotherapy. Patient were also prohibited to use other topical and/or systemic agents for prophylaxis of dermatitis. During radiotherapy, all patients were given the necessary skin care recommendations according to the Multinational Association for Supportive Care in Cancer (MASCC) Skin Toxicity Study Group guideline to prevent acute skin reactions caused by radiotherapy (1). After the accomplishment of the clinical phase of the study, the principal investigator decoded the topical formulations and assigned each one to the appropriate group.
Randomization in a 1:1 ratio was used to ensure a balanced allocation of 64 eligible patients in the control and timolol groups. The random allocation sequence was generated using random allocation software (version 1). Thereby, the first eligible person was referred to as number 1, the second person as number 2, and so on until the 64th patient. Next, using the software generated list, the patients received one of the interventions. To access allocation concealment, an examiner (who was not involved in the study) performed randomization.
Demographic characteristics of the participants were recorded at baseline. Primary and secondary outcomes were evaluated at baseline, then weekly during RT, and finally 2 weeks after the termination of radiotherapy course. Primary outcome was the grade of ARD using each of RTOG/EORTC and CTCAE version 5.0. The severity of ARD was undertaken every week in accordance with the criteria of the RTOG/EORTC and the size and severity of skin ulceration was scored using the CTCAE (Table 1). Secondary outcomes were QOL based on Skindex16 (SD-16), maximum recorded grade of ARD during the study follow-up, and the time of initial RID occurrence.
Table 1 Acute radiation dermatitis corresponding to the RTOG/EORTC and CTCAE scoring criteria
The current pilot study was developed to calculate the sample size for a larger trial. Therefore, considering the rule of thumb for the pilot studies, at least 12 participants in each group would be an appropriate justification for sample size [39]. Considering low participation of the patients during COVID-19 pandemic, with the allowance of possible lost to follow-up during the study period, we allocated 32 patients to the control group and 32 patients into timolol 0.5% group.
Data from a previous randomized prospective trial was used for sample size calculation [3]. A total study size of at least 54 patients (2 × 27 patients per each group) using the following equation allowed for a power (1-β) of 85% at a significance level of 0.05 and ARD grade by RTOG/EORTC score ≥ 2 at weeks 1 to 6 for detecting a difference between two proportions (reduction in total clinical score) of at least 40% (30% vs 75%). The estimated sample size was increased to 32 per group to take account of potential attrition of 12%.
$$n=\frac{{\left(z\frac{\alpha }{2}+z\beta \right)}^{2}\left[{p}_{1}\left(1-{p}_{1}\right)+{p}_{2}\left(1-{p}_{2}\right)\right]}{{\left({p}_{1}-{p}_{2}\right)}^{2}}$$
The Kolmogorov–Smirnov (KS) test was used for checking normality of the data. The quantitative and qualitative variables were reported as each of mean (SD)/median (IQR) and frequency (%), respectively. The distributed quantitative variables were compared between groups by using the Mann–Whitney U test. Moreover, repeated measurement was used to compare changes of variables at groups over time. Spearman's rank correlation coefficient was used to evaluate the association between body mass index (BMI) and ARD. Data were analyzed using statistical package for social science (SPSS) software version 23.0 and P-values < 0.05 were considered statistically significant.
In this study, 130 new cases of breast cancer who were referred to a medical university affiliated radiotherapy center were screened. Fifty-five patients were excluded from the study, because of consuming other topical interventions, decline to participate, history of asthma or cardiovascular diseases, and previously known sensitivity to β-blockers. Eventually, 75 subjects were randomized to receive each of topical timolol 0.5% (w/w) (N = 37) or placebo gels (N = 38). In the placebo group, two patients were excluded, because of not using the topical preparations properly and four experienced grade 3 ARD. In the timolol group, three patients were excluded, because of not using the topical preparations properly and two contracted coronavirus disease 2019 (COVID-19). Sixty-four patients completed the study and have yielded data for analysis (Fig. 1). Demographic and baseline clinical characteristics of enrolled patients are given in Table 2.
CONSORT flow diagram of Timolol 0.5% (w/w) vs placebo during study follow up. NSAIDs: Non-steroidal anti-inflammatory drugs; COVID-19: Coronavirus Disease of 2019; ARD: Acute Radiation Dermatitis
Table 2 Patient demographic profile and baseline disease characteristics
Primary outcomes
The intention to treat analysis of the ARD grade by RTOG/EORTC and CTCAE scores showed a significant difference between timolol and placebo groups at weeks 4 to 6 (P-Value < 0.05), but not at the end of the first two weeks (Table 3). Moreover, the median of RTOG/EORTC and CTCAE scores were zero for all patients in both groups at baseline while the median increased to score 1 and 2 in the timolol and placebo groups, respectively, at the end of week 6 (Tables 3 and 4). There was also a statistically significant time effect (P-Value < 0.001), but the difference between the two groups in time × group interaction effect was not statistically significant (P-Value = 0.182).
Table 3 Primary and Secondary Outcomes over time during weeks 1 to 7 for Timolol and 4 Placebo groups
Table 4 Maximum severity of ARD corresponding to the RTOG/EORTC score in included patients during the study follow-up visits1
Secondary outcomes
The maximum severity of ARD was lower with the timolol group compared to the placebo when treated prophylactically (P-Value = 0.002). Only 31.3% of patients receiving timolol experienced RTOG/EORTC grade II compared to 75.0% of patients receiving placebo. Furthermore, despite the fact that 31 (96.9%) patients in the timolol group experienced ARD at the end of the study, none of them suffered ARD more severe than Grade 2. While on the contrary, in placebo group, 40% of patients experienced Grade 2 and three patient experienced Grade 3 of ARD, which were excluded from the study. Furthermore, one participant in timolol group remained asymptomatic at the end of the study. The details of our findings are given in Table 4.
In terms of skin-related QOL, evaluated by the Skindex-16 (SD16) questionnaire, there were no differences between the two groups at weeks 1 to 3 (P-Value > 0.05). This value increased dramatically during weeks 4 to 6 and then started to fall gradually. However, the values of these changes at week 6 of RT were much higher for the placebo group compared with the intervention group (Table 2).
Furthermore, the mean (SD) time of incidence of ARD in placebo and timolol groups were 4.09 (0.588) and 4.53 (0.983) in weeks, respectively, which was statistically significant (P-Value = 0.035). In order to evaluate the association between BMI and ARD, spearman's rank correlation coefficient was used. The results showed no significant association between BMI and ARD (spearman's rank correlation coefficient = 0.017 and P-value = 0.895).
Mild adverse effects, sensed as the feeling of irritation was reported in all the 64 sites treated with each of timolol or placebo topical formulations. However, none of the patients discontinued the therapy because of the adverse effects. No reports of bradycardia or wheezing were reported in any of the patients who completed the treatment period.
Although the anti-inflammatory properties of timolol, the data on its' radioprotective effects is limited. The present study was the first randomized, controlled clinical trial evaluating the efficacy and safety of timolol 0.5% (w/w) topical gels twice a day at least two hours before and after receiving RT in prevention of RID. The results of the present study demonstrated that timolol 0.5% (w/w) topical gel can significantly delay and decrease the incidence of ARD and its severity in breast cancer patients receiving RT compared with those receiving the placebo. Moreover, the maximum grade of RID over time was significantly diminished in timolol groups.
RID is the most common adverse effect of breast-cancer RT. During RT, around 95% of patients develop some degree of local inflammatory symptoms, such as erythema, dry or moist desquamation, edema, and ulcers. The severe presentations of radiodermatitis, e.g., moist desquamation, ulcers, and skin fibrosis, may necessitate discontinuation of the RT. This subsequently impair patients' QOL and negatively influence the outcomes of the patients. The pathogenesis of radiodermatitis is rather complex and comprises of radiation tissue injury followed by an inflammatory reaction. An erythematous skin reaction develops by an increased vascular permeability and vasodilation. This is followed by inflammatory responses [40].
Wound-healing is a well-organized and a complex process achieved through four distinct phases of hemostasis, inflammation, proliferation, and remodeling [41]. During the past two decades of research, the efficacy of various biological and chemical compounds such as antioxidants, cytoprotective factors, and vitamins have been investigated [42,43,44]. Yet, no proven modality is available for prevention of RID. Topical steroids such as mometasone 0.1% and hydrocortisone have been evaluated for their anti‐inflammatory properties [16, 45]. The results of the previous studies suggested that low dose of corticosteroids may be beneficial in reducing itching and irritation in patients with radiodermatitis. Moreover, steroids are contraindicated in the presence of infection as they could mask the signs and symptoms of infection and worsen it [16, 18, 45].
The first clues to the biological effect of β -adrenergic receptor in wound-healing process came from Donaldson study, revealing that β -adrenergic receptor agonists delay wound repairing in newt limbs [46]. Later studies confirmed that β adrenergic receptor antagonists promote wound re-epithelialization through blocking the β2 receptors within the skin layers [23, 47, 48]. The efficacy of β adrenergic receptor antagonists in promoting wound healing process was initially demonstrated by their systemic administration [49]. Despite limited clinical evidence to support the efficacy of topical timolol, Thomas et al. in a case–control study reported that topical application of 0.5% timolol solution along with antibiotics and dressings produced clinically significant reduction in ulcer area within 4 weeks [24]. Mohammadi et al., in a randomized double-blind clinical trial showed oral propranolol decreased healing time of superficial wounds and hospital stay period in hospitalized burn patients [47]. Furthermore, several case reports of illustration of topical timolol effects on acute and refractory chronic wounds healing have been published [12, 25, 27]. In addition, β adrenergic receptor antagonists could exhibit anti-inflammatory action through reducing lymphocyte proliferation, circulating natural killer cells, and T lymphocytes [27]. Although the main mechanisms for β adrenergic receptor antagonists is not known, the proposed mechanisms are as follows: accelerate re-epithelialization, reduce inflammatory response, increase fibroblast migration and angiogenesis, and enhance extracellular signal-related kinase phosphorylation [23].
The application of topical silver sulfadiazine in breast cancer patients referred for RT indicated that women in silver sulfadiazine encountered less sever ARD compared with patients in the control group [50]. The results of another trial revealed that topical administration of atorvastatin 1% significantly reduced severity of ARD compared with placebo [51]. The results of the current study have overall showed that topical administration of timolol 0.5% gel was superior to the placebo gel in the prevention of the ARD incidence and related symptoms.
Previously, compounds with similar anti-inflammatory and antioxidant properties have been used successfully for this complication. For instance, the anti-inflammatory and antioxidant activity of herbal products has been demonstrated in different experimental and clinical evidences [3, 20, 40]. Rafati et al. demonstrated that the topical administration of Nigella sativa 5% gel with anti-inflammatory and antioxidant properties delayed and decreased the severity of ARD and its related symptoms compared to the placebo (3). In this study, we observed that the topical application of timolol 0.5% to the radiation-exposed breast area can effectively prevent the occurrence of ARD.
Karbasforooshan et al. performed a randomized, double‐blind, clinical trial on 40 breast cancer women who were referred to receive RT. The eligible patients were randomly allocated to receive silymarin 1% gel or placebo once daily from the first day of radiotherapy for 5 weeks. The acute skin reactions were assessed according to RTOG/EORTC and CTCAE criteria. However, after 5 weeks of RT, only 9.8% of patients in silymarin group experienced Grade 2 radiodermatitis in comparison with 52% in placebo group. At the end of the RT, proportion of patients without RID was significantly higher in silymarin group (23.5% vs. 2%, p < 0.02). The current study found that 31.3% of participants in timolol group experienced Grade 2 radiodermatitis in comparison with 75.0% in placebo group at study termination [40].
Although the results of the present clinical trial were promising and target the underlying pathology of RID, care must be taken in interpreting it, because of numerous limitations that we faced throughout the study. The first limitation was the small size of the studied subjects. Although we screened 130 patients for eligibility, patients' cooperation was poor due to the COVID-19 pandemic. Second, we only examined the effects of one single concentration of this topical product, timolol 0.5% gel. It remains an area of research for future studies whether increasing the dose of the drug will be associated with higher efficacy without causing side effects or not. Third, regarding the stability of the formulation, for longer consumption time period, physicochemical as well as microbial quality control should be done. Finally, the study was not adjusted for other possible confounding factors including nutritional status, genetic, body mass index (BMI), and chemotherapy regimen, which could have potentially affected the occurrence and the intensity of dermatitis.
This randomized controlled clinical trial showed that the preventive use of the timolol gel significantly delays and diminishes the maximum grade of ARD in breast cancer patients undergoing RT. Nevertheless, large multicenter randomized clinical trials (RCTs) are required to certify this novel concept for the prevention of ARD in breast cancer patients.
Radiotherapy induced dermatitis
QOL:
ARD:
Acute radiation dermatitis
SD-16:
Skindex16
RTOG/EORTC:
Radiation therapy oncology group and the european organization for research and treatment of cancer
CTCAE:
Common terminology criteria for adverse events
RT:
cAMP:
Cyclic adenosine monophosphate
UVB:
Ultraviolet b
API:
MASCC:
Multinational association of supportive care in cancer
Kolmogorov–smirnov
IQR:
Interquartile range
SPSS:
Statistical package for social science
HER2:
Human epidermal growth factor Receptor 2
Estrogen receptor
Progesterone receptor
Randomized clinical trial
Wong RK, Bensadoun R-J, Boers-Doets CB, Bryce J, Chan A, Epstein JB, et al. Clinical practice guidelines for the prevention and treatment of acute and late radiation reactions from the MASCC Skin Toxicity Study Group. Support Care Cancer. 2013;21(10):2933–48.
Aulton ME, Taylor K. Aulton's pharmaceutics: the design and manufacture of medicines. 4 ed: Elsevier Health Sciences; 2013. p. 933.
Rafati M, Ghasemi A, Saeedi M, Habibi E, Salehifar E, Mosazadeh M, et al. Nigella sativa L. for prevention of acute radiation dermatitis in breast cancer: A randomized, double blind, placebo-controlled, clinical trial. Complementary therapies in medicine. 2019;47:102-205.
Thanthong S, Nanthong R, Kongwattanakul S, Laebua K, Trirussapanich P, Pitiporn S, et al. Prophylaxis of radiation-induced dermatitis in patients with breast cancer using herbal creams: a prospective randomized controlled trial. Integr Cancer Ther. 2020;19:1534735420920714.
Salvo N, Barnes E, Van Draanen J, Stacey E, Mitera G, Breen D, et al. Prophylaxis and management of acute radiation-induced skin reactions: a systematic review of the literature. Curr Oncol. 2010;17(4):94–112.
Ryan JL. Ionizing radiation: the good, the bad, and the ugly. J Investig Dermatol. 2012;132(3):985–93.
Goodarzi A, Mozafarpoor S, Dodangeh M, Seirafianpour F, Shahverdi MH. The role of topical timolol in wound healing and the treatment of vascular lesions: A narrative review. Dermatol Ther. 2021;34(2):e14847.
Glover D, Harmer V. Radiotherapy-induced skin reactions: assessment and management. Br J Nurs. 2014;23(Sup2):S28–35.
Chen AP, Setser A, Anadkat MJ, Cotliar J, Olsen EA, Garden BC, et al. Grading dermatologic adverse events of cancer treatments: the Common Terminology Criteria for Adverse Events Version 4.0. J Am Acad Dermatol. 2012;67(5):1025–39.
Singh M, Alavi A, Wong R, Akita S. Radiodermatitis: a review of our current understanding. Am J Clin Dermatol. 2016;17(3):277–92.
Pullar CE, Grahn JC, Liu W, Isseroff RR. β2-Adrenergic receptor activation delays wound healing. FASEB J. 2006;20(1):76–86.
Tang JC, Dosal J, Kirsner RS. Topical timolol for a refractory wound. Dermatol Surg. 2012;38(1):135–8.
Wolf JR, Gewandter JS, Bautista J, Heckler CE, Strasser J, Dyk P, et al. Utility of topical agents for radiation dermatitis and pain: a randomized clinical trial. Support Care Cancer. 2020;28(7):3303–11.
Bernier J, Bonner J, Vermorken J, Bensadoun R-J, Dummer R, Giralt J, et al. Consensus guidelines for the management of radiation dermatitis and coexisting acne-like rash in patients receiving radiotherapy plus EGFR inhibitors for the treatment of squamous cell carcinoma of the head and neck. Ann Oncol. 2008;19(1):142–9.
Sharp L, Johansson H, Hatschek T, Bergenmar M. Smoking as an independent risk factor for severe skin reactions due to adjuvant radiotherapy for breast cancer. The Breast. 2013;22(5):634–8.
Hindley A, Zain Z, Wood L, Whitehead A, Sanneh A, Barber D, et al. Mometasone furoate cream reduces acute radiation dermatitis in patients receiving breast radiation therapy: results of a randomized trial. Int J Radiat Oncol* Biol* Phys. 2014;90(4):748–55.
Schmeel LC, Koch D, Stumpf S, Leitzen C, Simon B, Schüller H, et al. Prophylactically applied Hydrofilm polyurethane film dressings reduce radiation dermatitis in adjuvant radiation therapy of breast cancer patients. Acta Oncol. 2018;57(7):908–15.
Menon A, Prem SS, Kumari R. Topical betamethasone valerate as a prophylactic agent to prevent acute radiation dermatitis in head and neck malignancies: a randomized, open-label, phase 3 trial. Int J Radiat Oncol* Biol* Phys. 2021;109(1):151–60.
Merchant TE, Bosley C, Smith J, Baratti P, Pritchard D, Davis T, et al. A phase III trial comparing an anionic phospholipid-based cream and aloe vera-based gel in the prevention of radiation dermatitis in pediatric patients. Radiat Oncol. 2007;2(1):1–8.
Sahebnasagh A, Saghafi F, Ghasemi A, Akbari J, Alipour A, Habtemariam S, et al. Aloe vera for Prevention of Acute Radiation Proctitis in Colorectal Cancer a Preliminary Randomized, Placebo-Controlled Clinical Trial. J Gastrointest Cancer. 2022;53(2):318–25.
Schmidt ME, Scherer S, Wiskemann J, Steindorf K. Return to work after breast cancer: The role of treatment-related side effects and potential impact on quality of life. Eur J Cancer Care. 2019;28(4):e13051.
Pullar CE, Manabat-Hidalgo CG, Bolaji RS, Isseroff RR. β-Adrenergic receptor modulation of wound repair. Pharmacol Res. 2008;58(2):158–64.
Pullar CE, Rizzo A, Isseroff RR. β-Adrenergic receptor antagonists accelerate skin wound healing: evidence for a catecholamine synthesis network in the epidermis. J Biol Chem. 2006;281(30):21225–35.
Thomas B, Kurien JS, Jose T, Ulahannan SE, Varghese SA. Topical timolol promotes healing of chronic leg ulcer. J Vasc Surg Venous Lymphat Disord. 2017;5(6):844–50.
Dabiri G, Tiger J, Goreshi R, Fischer A, Iwamoto S. Topical timolol may improve overall scar cosmesis in acute surgical wounds. Cutis. 2017;100(1):E27–8.
Pawar M. Topical timolol in chronic, recalcitrant fissures and erosions of hand eczema. J Am Acad Dermatol. 2021;84(3):e125–6.
Alsaad AM, Alsaad SM, Fathaddin A, Al-Khenaizan S. Topical timolol for vasculitis ulcer: a potential healing approach. JAAD Case Reports. 2019;5(9):812–4.
Beroukhim K, Rotunda AM. Topical 0.5% timolol heals a recalcitrant irradiated surgical scalp wound. Dermatol Surg. 2014;40(8):924–6.
Sivamani RK, Lam ST, Isseroff RR. Beta adrenergic receptors in keratinocytes. Dermatol Clin. 2007;25(4):643–53.
Pullar CE, Zhao M, Song B, Pu J, Reid B, Ghoghawala S, et al. ß-adrenergic receptor agonists delay while antagonists accelerate epithelial wound healing: Evidence of an endogenous adrenergic network within the corneal epithelium. J Cell Physiol. 2007;211(1):261–72.
Grando SA, Pittelkow MR, Schallreuter KU. Adrenergic and cholinergic control in the biology of epidermis: physiological and clinical significance. J Investig Dermatol. 2006;126(9):1948–65.
Dunn JH, Koo J. Psychological Stress and skin aging: a review of possible mechanisms and potential therapies. Dermatol Online J. 2013;19(6):1-18.
Liao W, Hei TK, Cheng SK. Radiation-induced dermatitis is mediated by IL17-expressing γδ T cells. Radiat Res. 2017;187(4):464–74.
Saccà SC, La Maestra S, Micale RT, Larghero P, Travaini G, Baluce B, et al. Ability of dorzolamid hydrochloride and timolol maleate to target mitochondria in glaucoma therapy. Arch Ophthalmol. 2011;129(1):48–55.
Izzotti A, Saccà S, Di Marco B, Penco S, Bassi A. Antioxidant activity of timolol on endothelial cells and its relevance for glaucoma course. Eye. 2008;22(3):445–53.
Wu CH, Wong MC, Wang HH, Kwan MW, Chan WM, Li HW, et al. The eight-item Morisky Medication Adherence Scale (MMAS-8) score was associated with glycaemic control in diabetes patients. Hypertension. 2014;64(suppl_1):558.
Tan X, Patel I, Chang J. Review of the four item Morisky medication adherence scale (MMAS-4) and eight item Morisky medication adherence scale (MMAS-8). INNOVATIONS in pharmacy. 2014;5(3):5.
Schmeel LC, Koch D, Schmeel FC, Bücheler B, Leitzen C, Mahlmann B, et al. Hydrofilm polyurethane films reduce radiation dermatitis severity in hypofractionated whole-breast irradiation: an objective, intra-patient randomized dual-center assessment. Polymers. 2019;11(12):2112.
Julious SA. Sample size of 12 per group rule of thumb for a pilot study. Pharm Stat J Appl Stat Pharm Ind. 2005;4(4):287–91.
Karbasforooshan H, Hosseini S, Elyasi S, FaniPakdel A, Karimi G. Topical silymarin administration for prevention of acute radiodermatitis in breast cancer patients: A randomized, double-blind, placebo-controlled clinical trial. Phytother Res. 2019;33(2):379–86.
Enoch S, Leaper DJ. Basic science of wound healing. Surg Infect (Larchmt). 2008;26(2):31–7.
Nasser NJ, Fenig S, Ravid A, Nouriel A, Ozery N, Gardyn S, et al. Vitamin D ointment for prevention of radiation dermatitis in breast cancer patients. NPJ breast cancer. 2017;3(1):1–5.
Halperin EC, Gaspar L, George S, Darr D, Pinnell S. A double-blind, randomized, prospective trial to evaluate topical vitamin C solution for the prevention of radiation dermatitis. Int J Radiat Oncol* Biol* Phys. 1993;26(3):413–6.
Schmidlin CJ, de la Vega MR, Perer J, Zhang DD, Wondrak GT. Activation of NRF2 by topical apocarotenoid treatment mitigates radiation-induced dermatitis. Redox Biol. 2020;37:101714.
Ho AY, Olm-Shipman M, Zhang Z, Siu CT, Wilgucki M, Phung A, et al. A randomized trial of mometasone furoate 0.1% to reduce high-grade acute radiation dermatitis in breast cancer patients receiving postmastectomy radiation. Int J Radiat Oncol Biol Phys. 2018;101(2):325–33.
Donaldson DJ, Mahan JT. Influence of catecholamines on epidermal cell migration during wound closure in adult newts. Comparative Biochemistry and Physiology Part C: Comparative Pharmacology. 1984;78(2):267–70.
Mohammadi AA, Bakhshaeekia A, Alibeigi P, Hasheminasab MJ, Tolide-ei HR, Tavakkolian AR, et al. Efficacy of propranolol in wound healing for hospitalized burn patients. J Burn Care Res. 2009;30(6):1013–7.
Braun LR, Lamel SA, Richmond NA, Kirsner RS. Topical timolol for recalcitrant wounds. JAMA Dermatol. 2013;149(12):1400–2.
Arbabi S, Campion EM, Hemmila MR, Barker M, Dimo M, Ahrns KS, et al. Beta-blocker use is associated with improved outcomes in adult trauma patients. J Trauma Acute Care Surg. 2007;62(1):56–62.
Hemati S, Asnaashari O, Sarvizadeh M, Motlagh BN, Akbari M, Tajvidi M, et al. Topical silver sulfadiazine for the prevention of acute dermatitis during irradiation for breast cancer. Support Care Cancer. 2012;20(8):1613–8.
Ghasemi A, Ghashghai Z, Akbari J, Yazdani-Charati J, Salehifar E, Hosseinimehr SJ. Topical atorvastatin 1% for prevention of skin toxicity in patients receiving radiation therapy for breast cancer: a randomized, double-blind, placebo-controlled trial. Eur J Clin Pharmacol. 2019;75(2):171–8.
This article is derived from the thesis "Topical Timolol Effectiveness for prophylaxy of radiation induced dermatitis in breast cancer patients refer to Ramazan zadeh radiotherapy center of Yazd" supervised by Assistant Professor Dr. Fatemeh Saghafi and submitted by Dr. Fatemeh Saghafi to the Faculty of Pharmacy of Shahid Sadoughi University of Medical Sciences, Yazd, Iran, in partial fulfillment of the requirements for the Degree of Pharm-D of Zahra Hakimi.
This study was supported by the shahid sadooghi University of Medical Sciences (grant number: 7145).
Department of Pharmaceutics, School of Pharmacy, Shahid Sadoughi University of Medical Sciences and Health Services, Yazd, Iran
Mohsen Nabi-Meybodi
Clinical Research Center, Department of Internal Medicine, School of Medicine, Faculty of Medicine, North Khorasan University of Medical Sciences, Bojnurd, Iran
Adeleh Sahebnasagh
Pharmaceutical Sciences Research Center, School of Pharmacy, Shahid Sadoughi University of Medical Sciences and Health Services, Yazd, Iran
Zahra Hakimi
Department of Radiooncology, School of Medicine, Shahid Sadoughi University of Medical Sciences and Health Services, Yazd, Iran
Masoud Shabani & Ali Asghar Shakeri
Department of Clinical Pharmacy, School of Pharmacy, Shahid Sadoughi University of Medical Sciences and Health Services, Yazd, Iran
Fatemeh Saghafi
Shahid Sadoughi University of Medical Sciences, Department of Clinical Pharmacy, Faculty of Pharmacy, Shohadaye gomnam Blvd, Yazd Province, Yazd, Iran
Masoud Shabani
Ali Asghar Shakeri
F.S. and A.S. were involved in the conception and design of the study. Z.H. and M.N.M. prepared the timolol and placebo gels. Z.H., M.S. and A.A.S. evaluated the patients and collected the data. F.S. and Z.H. analyzed the data and drafted the first manuscript. M.N.M. and F.S. modified manuscript, and answered most queries raised by reviewers together with other authors in major revision. All authors read and approved the final manuscript.
Correspondence to Fatemeh Saghafi.
The study protocol was approved by the Ethics Committee of Shahid Sadoughi University of Medical Sciences (IR.SSU.MEDICINE.REC.1399.058). Informed consent was obtained from all subjects or their legal guardians. All experiments were performed in accordance with relevant guidelines and regulations.
The authors declare that they have no competing interests regarding the publication of this paper.
Nabi-Meybodi, M., Sahebnasagh, A., Hakimi, Z. et al. Effects of topical timolol for the prevention of radiation-induced dermatitis in breast cancer: a pilot triple-blind, placebo-controlled trial. BMC Cancer 22, 1079 (2022). https://doi.org/10.1186/s12885-022-10064-x
DOI: https://doi.org/10.1186/s12885-022-10064-x
Keywsords
Timolol
Radiodermatitis
Submission enquiries: [email protected]
|
CommonCrawl
|
Weak KAM theory for HAMILTON-JACOBI equations depending on unknown functions
Mixing invariant extremal distributional chaos
November 2016, 36(11): 6523-6532. doi: 10.3934/dcds.2016081
On a constant rank theorem for nonlinear elliptic PDEs
Gábor Székelyhidi 1, and Ben Weinkove 2,
Department of Mathematics, University of Notre Dame, 255 Hurley, Notre Dame, IN 46556, United States
Department of Mathematics, Northwestern University, 2033 Sheridan Road, Evanston, IL 60208, United States
Received October 2015 Revised June 2016 Published August 2016
We give a new proof of Bian-Guan's constant rank theorem for nonlinear elliptic equations. Our approach is to use a linear expression of the eigenvalues of the Hessian instead of quotients of elementary symmetric functions.
Keywords: Nonlinear elliptic, eigenvalues., constant rank Hessian.
Mathematics Subject Classification: Primary: 35J6.
Citation: Gábor Székelyhidi, Ben Weinkove. On a constant rank theorem for nonlinear elliptic PDEs. Discrete & Continuous Dynamical Systems - A, 2016, 36 (11) : 6523-6532. doi: 10.3934/dcds.2016081
O. Alvarez, J.-M. Lasry and P.-L. Lions, Convex viscosity solutions and state constraints,, J. Math. Pures Appl. (9), 76 (1997), 265. doi: 10.1016/S0021-7824(97)89952-7. Google Scholar
B. Bian and P. Guan, A microscopic convexity principle for nonlinear partial differential equations,, Invent. Math., 177 (2009), 307. doi: 10.1007/s00222-009-0179-5. Google Scholar
B. Bian and P. Guan, A structural condition for microscopic convexity principle,, Discrete Contin. Dyn. Syst., 28 (2010), 789. doi: 10.3934/dcds.2010.28.789. Google Scholar
H. J. Brascamp and E. H. Lieb, On extensions of the Brunn-Minkowski and Prékopa-Leindler theorems, including inequalities for log concave functions, and with an application to the diffusion equation,, J. Functional Analysis, 22 (1976), 366. doi: 10.1016/0022-1236(76)90004-5. Google Scholar
L. Caffarelli and A. Friedman, Convexity of solutions of some semilinear elliptic equations,, Duke Math. J., 52 (1985), 431. doi: 10.1215/S0012-7094-85-05221-4. Google Scholar
L. Caffarelli, P. Guan and X.-N. Ma, A constant rank theorem for solutions of fully nonlinear elliptic equations,, Comm. Pure Appl. Math., 60 (2007), 1769. doi: 10.1002/cpa.20197. Google Scholar
L. Caffarelli and J. Spruck, Convexity properties of solutions to some classical variational problems,, Comm. Partial Differential Equations, 7 (1982), 1337. doi: 10.1080/03605308208820254. Google Scholar
P. Cannarsa and C. Sinestrari, Semiconcave functions, Hamilton-Jacobi equations, and optimal control,, in Progress in Nonlinear Differential Equations and their Applications 58, 58 (2004). Google Scholar
D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order,, Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, (1998). Google Scholar
P. Guan, Q. Li and X. Zhang, A uniqueness theorem in Kähler geometry,, Math. Ann., 345 (2009), 377. doi: 10.1007/s00208-009-0358-0. Google Scholar
P. Guan, C. S. Lin and X.-N. Ma, The Christoffel-Minkowski problem II: Weingarten curvature equations,, Chin. Ann. Math., 27 (2006), 595. doi: 10.1007/s11401-005-0575-0. Google Scholar
P. Guan and X.-N. Ma, The Christoffel-Minkowski problem I: Convexity of solutions of a Hessian equations,, Invent. Math., 151 (2003), 553. doi: 10.1007/s00222-002-0259-2. Google Scholar
P. Guan, X.-N. Ma and F. Zhou, The Christoffel-Minkowski problem III: existence and convexity of admissible solutions,, Comm. Pure Appl. Math., 59 (2006), 1352. doi: 10.1002/cpa.20118. Google Scholar
P. Guan and D. H. Phong, A maximum rank problem for degenerate elliptic fully nonlinear equations,, Math. Ann., 354 (2012), 147. doi: 10.1007/s00208-011-0729-1. Google Scholar
F. Han, X.-N. Ma and D. Wu, A constant rank theorem for Hermitian $k$-convex solutions of complex Laplace equations,, Methods Appl. Anal., 16 (2009), 263. doi: 10.4310/MAA.2009.v16.n2.a5. Google Scholar
B. Kawohl, A remark on N. Korevaar's concavity maximum principle and on the asymptotic uniqueness of solutions to the plasma problem,, Math. Methods Appl. Sci., 8 (1986), 93. doi: 10.1002/mma.1670080107. Google Scholar
A. U. Kennington, Power concavity and boundary value problems,, Indiana Univ. Math. J., 34 (1985), 687. doi: 10.1512/iumj.1985.34.34036. Google Scholar
N. J. Korevaar, Capillary surface convexity above convex domains,, Indiana Univ. Math. J., 32 (1983), 73. doi: 10.1512/iumj.1983.32.32007. Google Scholar
N. J. Korevaar and J. L. Lewis, Convex solutions of certain elliptic equations have constant rank Hessians,, Arch. Rational Mech. Anal., 97 (1987), 19. doi: 10.1007/BF00279844. Google Scholar
Q. Li, Constant rank theorem in complex variables,, Indiana Univ. Math. J., 58 (2009), 1235. doi: 10.1512/iumj.2009.58.3574. Google Scholar
X.-N. Ma and L. Xu, The convexity of solutions of a class of Hessian equation in bounded convex domain in $\mathbbR^3$,, J. Funct. Anal., 255 (2008), 1713. doi: 10.1016/j.jfa.2008.06.008. Google Scholar
I. Singer, I. B. Wong, S.-T. Yau and S.S.T. Yau, An estimate of gap of the first two eigenvalues in the Schrodinger operator,, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 12 (1985), 319. Google Scholar
J. Spruck, Geometric aspects of the theory of fully nonlinear elliptic equations, in Global Theory of Minimal Surfaces, (2005), 283. Google Scholar
G. Székelyhidi, Fully non-linear elliptic equations on compact Hermitian manifolds,, preprint, (). Google Scholar
G. Székelyhidi, V. Tosatti and B. Weinkove, Gauduchon metrics with prescribed volume form,, preprint, (). Google Scholar
N. S. Trudinger, Comparison principles and pointwise estimates for viscosity solutions of nonlinear elliptic equations,, Revista Mat. Iber., 4 (1988), 453. doi: 10.4171/RMI/80. Google Scholar
X. J. Wang, Counterexample to the convexity of level sets of solutions to the mean curvature equation,, J. Eur. Math. Soc., 16 (2014), 1173. doi: 10.4171/JEMS/457. Google Scholar
M. Warren and Y. Yuan, Hessian estimates for the sigma-2 equation in dimension 3,, Comm. Pure Appl. Math., 62 (2009), 305. doi: 10.1002/cpa.20251. Google Scholar
Adam M. Oberman. Wide stencil finite difference schemes for the elliptic Monge-Ampère equation and functions of the eigenvalues of the Hessian. Discrete & Continuous Dynamical Systems - B, 2008, 10 (1) : 221-238. doi: 10.3934/dcdsb.2008.10.221
Ha Pham, Plamen Stefanov. Weyl asymptotics of the transmission eigenvalues for a constant index of refraction. Inverse Problems & Imaging, 2014, 8 (3) : 795-810. doi: 10.3934/ipi.2014.8.795
Bo Guan, Heming Jiao. The Dirichlet problem for Hessian type elliptic equations on Riemannian manifolds. Discrete & Continuous Dynamical Systems - A, 2016, 36 (2) : 701-714. doi: 10.3934/dcds.2016.36.701
François Hamel, Emmanuel Russ, Nikolai Nadirashvili. Comparisons of eigenvalues of second order elliptic operators. Conference Publications, 2007, 2007 (Special) : 477-486. doi: 10.3934/proc.2007.2007.477
Tomas Godoy, Jean-Pierre Gossez, Sofia Paczka. On the principal eigenvalues of some elliptic problems with large drift. Discrete & Continuous Dynamical Systems - A, 2013, 33 (1) : 225-237. doi: 10.3934/dcds.2013.33.225
Hongwei Lou, Xueyuan Yin. Minimization of the elliptic higher eigenvalues for multiphase anisotropic conductors. Mathematical Control & Related Fields, 2018, 8 (3&4) : 855-877. doi: 10.3934/mcrf.2018038
Leszek Gasiński, Nikolaos S. Papageorgiou. Nonlinear hemivariational inequalities with eigenvalues near zero. Conference Publications, 2005, 2005 (Special) : 317-326. doi: 10.3934/proc.2005.2005.317
Feng Du, Adriano Cavalcante Bezerra. Estimates for eigenvalues of a system of elliptic equations with drift and of bi-drifting laplacian. Communications on Pure & Applied Analysis, 2017, 6 (2) : 475-491. doi: 10.3934/cpaa.2017024
Ming Huang, Cong Cheng, Yang Li, Zun Quan Xia. The space decomposition method for the sum of nonlinear convex maximum eigenvalues and its applications. Journal of Industrial & Management Optimization, 2017, 13 (5) : 1-21. doi: 10.3934/jimo.2019034
Delia Ionescu-Kruse. Elliptic and hyperelliptic functions describing the particle motion beneath small-amplitude water waves with constant vorticity. Communications on Pure & Applied Analysis, 2012, 11 (4) : 1475-1496. doi: 10.3934/cpaa.2012.11.1475
David L. Finn. Noncompact manifolds with constant negative scalar curvature and singular solutions to semihnear elliptic equations. Conference Publications, 1998, 1998 (Special) : 262-275. doi: 10.3934/proc.1998.1998.262
Li Yin, Jinghua Yao, Qihu Zhang, Chunshan Zhao. Multiple solutions with constant sign of a Dirichlet problem for a class of elliptic systems with variable exponent growth. Discrete & Continuous Dynamical Systems - A, 2017, 37 (4) : 2207-2226. doi: 10.3934/dcds.2017095
Igor E. Verbitsky. The Hessian Sobolev inequality and its extensions. Discrete & Continuous Dynamical Systems - A, 2015, 35 (12) : 6165-6179. doi: 10.3934/dcds.2015.35.6165
Magdalena Nockowska-Rosiak, Piotr Hachuła, Ewa Schmeidel. Existence of uncountably many asymptotically constant solutions to discrete nonlinear three-dimensional system with $p$-Laplacian. Discrete & Continuous Dynamical Systems - B, 2018, 23 (1) : 369-375. doi: 10.3934/dcdsb.2018025
Nina Ivochkina, Nadezda Filimonenkova. On the backgrounds of the theory of m-Hessian equations. Communications on Pure & Applied Analysis, 2013, 12 (4) : 1687-1703. doi: 10.3934/cpaa.2013.12.1687
N. V. Chemetov. Nonlinear hyperbolic-elliptic systems in the bounded domain. Communications on Pure & Applied Analysis, 2011, 10 (4) : 1079-1096. doi: 10.3934/cpaa.2011.10.1079
Olesya V. Solonukha. On nonlinear and quasiliniear elliptic functional differential equations. Discrete & Continuous Dynamical Systems - S, 2016, 9 (3) : 869-893. doi: 10.3934/dcdss.2016033
Shuhong Chen, Zhong Tan. Optimal interior partial regularity for nonlinear elliptic systems. Discrete & Continuous Dynamical Systems - A, 2010, 27 (3) : 981-993. doi: 10.3934/dcds.2010.27.981
Xia Huang. Stable weak solutions of weighted nonlinear elliptic equations. Communications on Pure & Applied Analysis, 2014, 13 (1) : 293-305. doi: 10.3934/cpaa.2014.13.293
Annamaria Canino, Elisa De Giorgio, Berardino Sciunzi. Second order regularity for degenerate nonlinear elliptic equations. Discrete & Continuous Dynamical Systems - A, 2018, 38 (8) : 4231-4242. doi: 10.3934/dcds.2018184
Gábor Székelyhidi Ben Weinkove
|
CommonCrawl
|
Assessment of malnutrition in patients with liver cirrhosis using protein calorie malnutrition (PCM) score verses bio-electrical impedance analysis (BIA)
Om Parkash ORCID: orcid.org/0000-0003-3704-64861,
Wasim Jafri1,
S. M. Munir1 &
Romaina Iqbal2
Malnutrition is a common problem in patients with liver cirrhosis and tools for nutritional assessment are under debate. We conducted this study to assess prevalence of malnutrition in cirrhotic patients using PCM score and BIA. Additionally we compared BIA to PCM score for detecting malnutrition in this patient population.
This was a cross sectional study conducted in two tertiary care hospitals of Karachi Pakistan on adults with liver cirrhosis. Malnutrition was assessed by PCM score using anthropometric measurements and biological specimens and (ii) Body cell mass was assessed using BIA. Malnutrition as estimated by the PCM score was present in 122 (73%) of patients in which most patients had mild malnutrition (n = 72 (45%)), followed by 34 (21%) with moderate malnutrition and 3 (1.9%) with severe malnutrition. Malnutrition according to BIA estimated through body cell mass could detect it in 98 (61%) of patients. There was optimal correlation of PCM score with body call mass (Pearson correlation coefficient = 0.3 (p value 0.001)). We conclude that majority of the patients with liver cirrhosis had malnutrition as determined by PCM score. BIA underscored the malnutrition in this patient population.
Cirrhosis is the late and irreversible stage of hepatic fibrosis, which is characterized by destruction of hepatic architecture and the development of nodules [1]. Almost 65–90% patients with advanced cirrhosis have malnutrition [2], which itself is an independent predictor of mortality in patients with end stage liver disease [3]. In Pakistan, the most common causes of cirrhosis are hepatitis C virus (HCV) and hepatitis B virus (HBV) [4, 5]. Studies focusing on malnutrition in cirrhosis due to viral etiology are limited. Therefore it is essential to assess malnutrition in this patient population of cirrhosis due to Hepatitis B and C, whose disease course is more complicated due to infective etiology, and different types of treatment.
BIA is a simple, noninvasive, inexpensive and quick method to estimate BCM [6], and it has also been used in patients with cirrhosis [7,8,9]. Decreased body cell mass is an indicator of malnutrition, cachexia, dehydration [10]. Anthropometric measures include Triceps skin fold thickness (TCF), Mid Arm Muscle Circumference (MAMC), Mid Arm Circumference [11] and height. There is still a lot of debate on which is the better tool for assessment of malnutrition, as reliable nutritional assessment of the cirrhotic patients is difficult due to ascites and edema [12]. Therefore we conducted this study to assess prevalence of malnutrition in cirrhotic patients using PCM and BIA (using BCM). Additionally we compared BIA to PCM score for detecting malnutrition in this patient population.
This was a cross sectional study conducted in the outpatient medicine clinic of Aga Khan University hospital Karachi (AKUH) and Jinnah postgraduate medical center (JPMC) Karachi. All patients aged ≥ 14 years, with either history of viral CLD (HBsAg or HCV) or non-viral CLD who had established diagnosis of liver cirrhosis of any etiology were recruited in the study after obtaining the informed consent. Consent from parent or guardian was taken for those who were minors. Liver cirrhosis was diagnosed based on ultra-sonographic evidence of chronic liver disease including shrunken liver, dilated portal vein, splenomegaly. We used nonrandom purposive technique for recruiting participants in this study [13, 14].
Study measurements included demographic information, measurement of anthropometric measures, history of decompensation (including upper gastrointestinal bleed, ascites, and portosystemic encephalopathy) biological specimens (urine and blood) and assessment of total body water and fat free mass using BIA.
Height was measured with portable stadiometer to the nearest of 0.1 cm and mean of three readings were documented. Triceps skin fold thickness was measured with a Lange caliper [15]. Mid arm circumference was calculated from the right arm at mid-point equidistant from the acromion and olecranon, with the patient in the upright position and arm flexed at 90°. The arm muscle circumference was calculated by the following formula using MAMC = MAC − (TSF × 0.3142). Reference for MAMC was obtained from Indian study [16, 17]. Weight was measured on Tanita weighing scale to the nearest of 0.1 kg. Biological specimen including albumin, creatinine, lymphocyte and 24 h urinary creatinine was measured using ADVIA 1800 in the lab. Malnutrition was assessed by using the following formula for PCM score [18]
$${\text{PCM}} = \frac{{\% \;{\text{TCF}} + \% \;{\text{MAC}} + \% \;{\text{MAMC}} + \% \;{\text{lymphocyte}} + \% \;{\text{albumin}} + \% \;{\text{CHI}}}}{6}$$
where TCF is triceps skin fold, MAC is mid arm circumference, MAMC is arm muscle circumference. Percent TCF % MAC, % MAMC, % lymphocyte, % albumin and % CHI were calculated as percent of the normal values. These normal values from a healthy Indian population were; mean TCF = 12 cm, mean MAMC = 26 cm) were used for the above percent calculation [17]. Malnutrition was classified based on this score as, mild (99.9–80%), moderate (60–79.9%) and severe (< 60%) according to the recommendation by Blackburn et al. [19, 20]. PCM score for this study was considered as gold standard.
Bioelectrical impedance measurement
BIA was performed using a BIA 2000M (Data Input GmbH, Darmstadt, Germany) [21] applying alternating current of 800 micro amperes at 50 kHz in the clinic. BIA was measured in the supine position with arm and legs abducted from the body in the morning after an overnight fast. The two electrodes (one for sensor and other for source) were placed on the dorsum of both hand foot of the dominant side of the body. Resistance (R), reactance (Xc) and the phase angle (alpha) was measured at each frequency. All impedance measurements were taken with the patient supine, arms relaxed at the sides but not touching the body. Total body water (TBW) and fat free mass [22] was calculated by using formula by Kushner and Schoeller [23]. Body cell mass was calculated by the formula; BCM = F.F.M * 0.29 * LN(5.28). This was used for assessment in cirrhotic young adults [7]. Body cell mass should be at least 40% of the body weight for a person to be designated as not malnourished [24].
Approximately 200 patients were invited to participate, out of which 161 (response rate = 80%) patients with liver cirrhosis were enrolled. The reasons for not participating was inability to come back for an outpatient visit due to a distant residence outside the city. There were 76 (47.2%) males and mean age was 49.1 (11) years. Hepatitis B or C were the cause of cirrhosis in 138 (87.8%) patients while in 23 (14%) patients these markers were negative. There were 61 (37.9%) patients in child class A, 60 (37.3%) in child class B and 17 (10.6%) in child class C.
Overall and comparison of malnutrition by PCM score and BIA measurements in cirrhotic patients with and without Malnutrition is shown in Table 1. Malnutrition as estimated by the PCM score was present in 122 (73%) of patients in which most patients had Mild Malnutrition (n = 72 (45%)), followed by 34 (21%) moderate malnutrition and only 3 (1.9%) severe malnutrition (Table 2).
Table 1 Comparison of PCM measurements in cirrhotic patients overall, with and without malnutrition
Table 2 Comparison of BIA measurements in cirrhotic patients overall, with and without malnutrition
Comparison of nutritional assessment by PCM and BIA
There was moderate correlation of PCM score with Body call mass (Pearson correlation coefficient = 0.3 (p value 0.001)). Specificity of BCM for detecting Malnutrition in patients with cirrhosis (presumed gold standard PCM score) is 28% and sensitivity is 60% with a positive predictive value of 60% and a negative predictive value of 39% (Table 3).
Table 3 Correlation of PCM score with BIA parameters in patients with cirrhosis
We report in this data from 2 tertiary care centers from Karachi, Pakistan that almost two thirds of cirrhotic patients suffer from malnutrition as assessed through PCM score. However when the same population is assessed by BIA, malnutrition is rather underestimated in this cirrhotic patient population. MAC, TSF and 24 h urine creatinine were the main discriminators in differentiating patients with malnutrition and those without it. PCM score and BIA have moderate correlation for assessing malnutrition in this study.
The prevalence of malnutrition in a study on 300 consecutive patients attending outpatient clinics for liver diseases was 75.3%. Out of them 38.3% had moderate or severe malnutrition. We report prevalence of 67% in patients with liver cirrhosis, with 37% falling under moderate to severe category of malnutrition. The reason for this difference might be that the patient population in the former study from Brazil is those with alcoholic cirrhosis and nonalcoholic fatty liver disease, while our patients were largely those suffering from hepatitis B and C. More recently in another study from Brazil on 230 patients with hepatitis B (n = 80) or C (n = 150), 199 (86.5%) patients were well nourished, and 31 (13.5%) were malnourished. This is a much lower prevalence of malnutrition in contrast to our figure of 67%. In fact a large number of participants in this study from Brazil were overweight. We on the other hand did not see such trends in our patient population. In another study on 315 patients from china prevalence of malnutrition (73%) was higher in the cirrhotic group due to viral hepatitis. In a study from India the prevalence of malnutrition was 68%. Our study shows similar pattern of malnutrition in our patients with liver cirrhosis. The seventy percent prevalence of malnutrition is however higher than the reported figure of 52% from the study by Naqvi et al. which might be underestimated due to use of a partially subjective tool of assessment in the later study [25]. These high figures of malnutrition due to viral hepatitis might be due to genotypes of these viruses specific to Asian or South Asian population that could be more virulent. Other factors contributing to malnutrition in patients with liver cirrhosis include inadequate oral intake, metabolic disturbances, malabsorption, and decreased capacity of the liver to store nutrients and dietary restrictions imposed by the family.
International literature shows conflicting results about BIA in cirrhotic patients where one author concluded that it is a reliable bedside tool for the determination of body cell mass in cirrhotic patients with and without ascites [7]. While another author concluded it as a less reliable tool for nutrition assessment in cirrhotic patients with ascites and suggested to use anthropometric measures [8]. We found in this study that BIA under reported malnutrition (61%) compared to PCM score (73%).The reason for this could be the difference in water distribution in cirrhotic due to edema and ascites. Pirlich et al. reports in his study (n = 41) that BIA is a reliable bedside tool for the determination of body cell mass in cirrhotic patients with and without ascites [7]. Our findings are in contrast with this study. The reasons for this might be that the patients in the former study seemed to belong to child class A mainly (Child–Pugh score of 8.1) while at least 40% of our patients belonged to child class B or C, Secondly the patients in that study had cirrhosis due to non-viral etiology while we had patients mainly with viral etiology, which are more sick compared to those with alcoholic liver disease. Thirdly a sample size of 41 indicate an under power study. Similar to our study, a study from Brazil also concluded that Single-frequency electrical bio impedance for body composition analysis in cirrhotic patients must be cautiously used [26].
Although our study suggests that PCM is more sensitive in detecting malnutrition, it however has its limitations like collection of cumbersome biological specimens which patients might not consent for and also this strategy might not be cost effective. The nutritional state assessment in these patients is complicated, and besides anthropometry is based on several other tools in order to be more accurate [12]. We suggest that BIA in combination with mid arm circumferences as a complimentary tool for assessment of malnutrition might be an area for future research. We saw MAC as a discriminator in detecting malnutrition and can be used as a bedside tool for this purpose.
The strength of this study is that two different methods were used for assessment of malnutrition both measures were objective. The use of these objective measures decreases the chances of misclassification bias. A response rate of 80% in the study is considered as optimal.
We conclude that majority of the patients with liver cirrhosis had malnutrition as determined by PCM score. BIA underscored the malnutrition in this patient population. The correlation of PCM score and BIA was moderate.
There are several limitations in this study, (1) it has limited external validity because it included only patients visiting outpatient clinics in 2 hospitals, hence the results cannot be generalized to the entire population. (2) This is certainly a limitation and future studies ideally should be from population based samples. Being in outpatients setting only patients with well compensated cirrhosis could be recruited while those with advanced disease were not recruited leading to selection bias. (3) Presence of ascites and edema is a limitation for measuring BCM from BIA and also for measuring BMI for PCM score. (4) We did not include information on food intake, type of treatment for cirrhosis which could have been a major determinant in correlating it with malnutrition. (5) While objective measures were used for assessment of malnutrition some degree of observation bias might be involved while measuring MAC and TSF.
PCM:
protein calorie malnutrition
BIA:
bio-electrical impedance analysis
TCF:
triceps skin fold thickness
MAMC:
mid arm muscle circumference
mid arm circumference
Garcia-Tsao G. Cirrhosis and its sequelae. In: Goldman L, Ausiello D, editors. Goldman: cecil medicine. Philadelphia: Saunders elsevier; 2007. p. 1140–4.
Henkel AS, Buchman AL. Nutritional support in patients with chronic liver disease. Nat Clin Pract Gastroenterol Hepatol. 2006;3(4):202–9.
Alberino F, et al. Nutrition and survival in patients with liver cirrhosis. Nutrition. 2001;17(6):445–50.
Parkash O, et al. Frequency of poor quality of life and predictors of health related quality of life in cirrhosis at a tertiary care hospital Pakistan. BMC Res Notes. 2012;5(1):446.
Bukhtiari N, et al. Hepatitis B and C single and co-infection in chronic liver disease and their effect on the disease pattern. J Pak Med Assoc. 2003;53(4):136–40.
PubMed CAS Google Scholar
Shizgal HM. Validation of the measurement of body composition from whole body bioelectric impedance. Infusionstherapie. 1990;17(Suppl 3):67–74.
Pirlich M, et al. Bioelectrical impedance analysis is a useful bedside technique to assess malnutrition in cirrhotic patients with and without ascites. Hepatology. 2000;32(6):1208–15.
Cabre E, et al. Reliability of bioelectric impedance analysis as a method of nutritional monitoring in cirrhosis with ascites. Gastroenterol Hepatol. 1995;18(7):359–65.
Schloerb PR, et al. Bioelectrical impedance in the clinical evaluation of liver disease. Am J Clin Nutr. 1996;64(3 Suppl):510S–4S.
Walter-Kroker A, et al. A practical guide to bioelectrical impedance analysis using the example of chronic obstructive pulmonary disease. Nutr J. 2011;10:35.
Sala P, et al. Gastrointestinal transcriptomic response of metabolic vitamin B12 pathways in Roux-en-Y gastric bypass. Clin Transl Gastroenterol. 2017;8(1):e212.
Moctezuma-Velazquez C, et al. Nutritional assessment and treatment of patients with liver cirrhosis. Nutrition. 2013;29(11–12):1279–85.
Friedman S, Schiano T. Cirrhosis and its sequelae. Cecil textbook of medicine. 22nd ed. Philadelphia: Saunders; 2004. p. 936–44.
American College of Radiology. Expert Panel on Gastrointestinal Imaging. Liver lesion characterization. Reston: American College of Radiology; 2002.
Maud PJ, Foster C. Physiological assessment of human fitness. Human Kinetics: Champaign; 2006.
Dudeja V, et al. BMI does not accurately predict overweight in Asian Indians in northern India. Br J Nutr. 2001;86(1):105–12.
Ghoshal UC, Shukla A. Malnutrition in inflammatory bowel disease patients in northern India: frequency and factors influencing its development. Trop Gastroenterol. 2008;29(2):95–7.
Mendenhall CL, et al. VA cooperative study on alcoholic hepatitis. II: prognostic significance of protein-calorie malnutrition. Am J Clin Nutr. 1986;43(2):213–8.
Blackburn GL, Bistrian BR, Maini BS, Schlamm HT, Smith MF. Nutritional and metabolic assessment of the hospitalized patient. JPEN. 1977;1:11–22.
Carvalho L, Parise ER. Evaluation of nutritional status of nonhospitalized patients with liver cirrhosis. Arq Gastroenterol. 2006;43(4):269–74.
Marchesini G, et al. Factors associated with poor health-related quality of life of patients with cirrhosis. Gastroenterology. 2001;120(1):170–8.
Gupta D, et al. Bioelectrical impedance phase angle in clinical practice: implications for prognosis in advanced colorectal cancer. Am J Clin Nutr. 2004;80(6):1634–8.
Kushner RF, Schoeller DA. Estimation of total body water by bioelectrical impedance analysis. Am J Clin Nutr. 1986;44(3):417–24.
Talluri A, et al. The application of body cell mass index for studying muscle mass changes in health and disease conditions. Acta Diabetol. 2003;40(Suppl 1):S286–9.
Naqvi IH, et al. Determining the frequency and severity of malnutrition and correlating it with the severity of liver cirrhosis. Turk J Gastroenterol. 2013;24(5):415–22.
Erdogan E, et al. Reliability of bioelectrical impedance analysis in the evaluation of the nutritional status of hemodialysis patients: a comparison with Mini Nutritional Assessment. Transplant Proc. 2013;45(10):3485–8.
El-Dika S, et al. The impact of illness in patients with moderate to severe gastro-esophageal reflux disease. BMC Gastroenterol. 2005;5:23.
OP developed the proposal, obtained ethical approvals, applied for funding, supervised data collection and prepared the first draft. RI conceived the idea, provided expertise in designing and analysis of the study. SMWJ served as expert in cirrhosis and contributed to the concept development and in the final manuscript. SMM was involved in study implementation and manuscript writing. All authors read and approved the final manuscript.
We acknowledge contribution of Dr. Aariz for his hard work in collection of data and samples of patients.
Ethical clearance was taken from the institutional ethics committee of Aga Khan University (1020-CHS-ERC-08). Informed consent was taken from all participants. For those who fell under minor category consent was taken from parents or guardian.
Conducted under PMRC Grant No: 4-22-17/08/RDC/AKU.
Section of Gastroenterology, Department of Medicine, Aga Khan University Karachi, Stadium Road, Karachi, Pakistan
Om Parkash, Wasim Jafri & S. M. Munir
Department of Medicine and Community Health Sciences, Aga Khan University Karachi, Karachi, Pakistan
Romaina Iqbal
Om Parkash
Wasim Jafri
S. M. Munir
Correspondence to Om Parkash.
Parkash, O., Jafri, W., Munir, S.M. et al. Assessment of malnutrition in patients with liver cirrhosis using protein calorie malnutrition (PCM) score verses bio-electrical impedance analysis (BIA). BMC Res Notes 11, 545 (2018). https://doi.org/10.1186/s13104-018-3640-y
Bioelectrical impedance analysis
|
CommonCrawl
|
One-Step Mask-Based Diffraction Lithography for the Fabrication of 3D Suspended Structures
Xianhua Tan1,
Tielin Shi1,
Jianbin Lin1,
Bo Sun1,
Zirong Tang1 &
Guanglan Liao1
We propose a novel one-step exposure method for fabricating three-dimensional (3D) suspended structures, utilizing the diffraction of mask patterns with small line width. An optical model of the exposure process is built, and the 3D light intensity distribution in the photoresist is calculated based on Fresnel-Kirchhoff diffraction formulation. Several 3D suspended photoresist structures have been achieved, such as beams, meshes, word patterns, and multilayer structures. After the pyrolysis of SU-8 structures, suspended and free-standing 3D carbon structures are further obtained, which show great potential in the application of transparent electrode, semitransparent solar cells, and energy storage devices.
3D carbon microelectromechanical system (C-MEMS) structures have drawn more and more attentions owing to their excellent chemical stability, electrochemical activity, and biocompatibility [1,2,3,4,5]. Suspended carbon structures are the typical 3D C-MEMS structures free of any intermolecularity [2], presenting significant advantages in sensors [6, 7], microelectrodes [8, 9], and energy storage applications [9]. Various C-MEMS microstructures have been achieved through pyrolysis of polymer, in which SU-8 is the most widely used precursor for pyrolytic carbon structures [10, 11]. With respect to its low light absorption, it is easy to fabricate high aspect ratio microstructures with SU-8 [12]. However, it is still a great challenge to obtain suspended polymer template.
Diverse approaches have been developed to fabricate suspended microstructures, such as E-beam writer [13,14,15], X-ray [10, 16], and two-photon lithography [17,18,19]. Two-photon lithography is a feasible way for achieving complex suspended structures, such as suspended hollow microtubes, with great accuracy but low efficiency [17]. Taking the efficiency and cost into account, UV lithography could be a better choice for fabricating photoresist precursor. Multi-step lithography process with controlled exposure dose for fabricating suspended structures has been demonstrated [3, 6, 7, 20]. Lim et al. [21] fabricated suspended nanowires and nanomeshes using a two-step UV lithography process and obtained glassy carbon nanostructures through a pyrolysis process. Some one-step lithography methods have also been proposed. No et al. [22] achieved suspended microstructures by a single exposure process, during which an optical diffuser film was put on the Cr-masks. The diffuser film had a significant impact on the exposure process, leading to the deformation of photoresist patterns. Long et al. [2] successfully fabricated 3D suspended structures by controlling the exposure dose and air gap between the photoresist and photomask during the proximity exposure process, whereas the proximity exposure mode limited the fabricating resolution. Grayscale photolithography has also been applied in fabricating suspended structures with grayscale masks or maskless lithography systems [11, 23]. Since SU-8 is almost transparent when the light wavelength is above 350 nm [12], it is very difficult to control the accuracy of the thickness of the suspended layer by adjusting the exposure dose [8, 10]. Hemanth et al. [10] optimized the UV wavelength in the exposure process according to the properties of SU-8. They chose the UV wavelength of 405 nm for the high ratio microstructures and 313 nm for the suspended layer. However, the combination of exposure with different UV light wavelengths increases the costs and difficulties of the whole fabrication process.
In this study, we demonstrate a novel one-step mask-based diffraction lithography process that is compatible with most kinds of photoresist to fabricate 3D suspended structures. A 3D light intensity distribution is simulated in the photoresist according to Kirchhoff's diffraction theory and further verified by experiments. The thickness of the suspended structures is controlled by the width of the patterns, and the suspended beams are broadened by stacking several line patterns side by side with proper spacing. Complex 3D suspended structures, such as beams with gradient thickness and full suspended meshes with word patterns, can be achieved by the one-step lithography process. Finally, the suspended carbon beams, meshes, and free-standing carbon meshes have also been obtained via a pyrolysis process.
Methods and Experiments
Optical Model of Diffraction Lithography
During the UV lithography process, the diffraction phenomenon will be very obvious when the pattern size is too small. Here, we utilize the diffraction of narrow patterns with several wavelength widths to fabricate suspended beams. In order to analyze the spatial light intensity distribution in the photoresist, we build an optical model (Fig. 1) for diffraction lithography based on Fresnel diffraction. The air gap between the photoresist and photo mask can be ignored since the exposure is carried out in a hard contact mode. The mask is illuminated with a plane wave at a typical wavelength of 365 nm, and the photoresist is treated as a transparent material with refractive index of 1.659 (the refractive index of SU-8 at 365 nm, measured by an ellipsometer). P0 is a point on the mask with a coordinate of (x0, y0, 0), and P1 is an arbitrary point in the photoresist with a coordinate of (x1, y1, z1).
The optical model of the diffraction lithography
According to the Fresnel-Kirchhoff diffraction formulation [24], the amplitude at point P1 in the photoresist is
$$ E\left({P}_1\right)=\frac{1}{2 j\lambda}\underset{\sum }{\iint }E\left({P}_0\right)\frac{\exp (jkr)}{r}\left(1+\cos \theta \right) ds $$
where k = 2π/λ, λ represents the wavelength of UV light in the photoresist, E(P0) is the light wave amplitude at point P0, θ is the angle between P0P1 and the z axis, r is the distance between P1 and P0, and Σ represents the integral domain of the mask pattern. According to the geometric relationship in Fig. 1, we can get
$$ r=\sqrt{{\left({x}_1-{x}_0\right)}^2+{\left({y}_1-{y}_0\right)}^2+{z_1}^2} $$
$$ \cos \theta ={z}_1/r $$
E(P0) is a constant in the model. Thus, the calculating formula becomes:
$$ E\left({P}_1\right)=\frac{E\left({P}_0\right)}{2 j\lambda}\underset{\sum }{\iint}\frac{\exp \Big( jk\sqrt{{\left({x}_1-{x}_0\right)}^2+{\left({y}_1-{y}_0\right)}^2+{z_1}^2\Big)}}{\sqrt{{\left({x}_1-{x}_0\right)}^2+{\left({y}_1-{y}_0\right)}^2+{z_1}^2}}\left(1+\frac{z_1}{\sqrt{{\left({x}_1-{x}_0\right)}^2+{\left({y}_1-{y}_0\right)}^2+{z_1}^2}}\right){dx}_0{dy}_0 $$
Then, the integrals are calculated using Matlab software, and the light intensity distribution in the photoresist can be expressed as:
$$ I\left(x,y,z\right)={\left|E\left({P}_1\right)\right|}^2 $$
where (x, y, z) equals the coordinate of P1.
In order to further investigate the absorption of the photoresist, we modified the calculations of the light intensity when considering the absorption coefficient. When a light beam passes through the photoresist from P0 to P1, the light intensity can be calculated by the following formula [25].
$$ \frac{I_{\alpha }}{I_0}=\exp \left(-\alpha r\right) $$
where I0 is the initial light intensity at point P0, Iα is the light intensity at point P1, α is the absorption coefficient of the photoresist, and r is the distance between P0 and P1. We define Iα = 0 as the light intensity at point P1 when α = 0 μm−1. It is easy to obtain that Iα = 0 = I0 according to formula (6). The relations between E(Pα = 0) (the amplitude corresponding to Iα = 0) and E(Pα) (the amplitude corresponding to Iα) can be expressed by:
$$ \frac{E\left({P}_{\alpha}\right)}{E\left({P}_{\alpha =0}\right)}=\exp \left(-\alpha r/2\right) $$
Thus, when considering the absorption of the photoresist in the diffraction lithography, the amplitude at point P1 (defined as E(P1α)) can be calculated by:
$$ E\left({P}_{1\alpha}\right)=\frac{1}{2 j\lambda}\underset{\sum }{\iint}\exp \left(-\alpha r/2\right)E\left({P}_0\right)\frac{\exp (jkr)}{r}\left(1+\cos \theta \right) ds $$
And the light intensity can be obtained by formulas (2), (3), (5), and (8).
Masks with line patterns were used to fabricate suspended structures, while circles or squares were designed for fabricating pillars to support the suspended layer. Two kinds of thick negative photoresist were employed in the experiments, including SU-8 2100 (Microchem Co., Ltd.) with thickness of ~ 50 μm and NR26-25000P (Futurrex Co., Ltd.) with thickness of ~ 30 μm. The exposure process was performed with a MJB4 mask aligner, where the wavelength of the illuminating UV light was 365 nm. The suspended structures can be obtained after the samples were immersed into the developer for a certain time. Here, propylene glycol methyl ether acetate (PGMEA, Aladdin Co., Ltd.) was used as the developer for the SU-8 2100 samples and RD6 developer (Futurrex Co., Ltd.) was chosen for the NR26-25000P samples. Finally, a pyrolysis process [16, 26, 27] containing a hard baking step and a carbonization baking step was carried out in a quartz furnace (MTI GAL 1400X) to obtain 3D carbon microstructures. The whole process is illustrated in Fig. 2a, and the temperature variations during the pyrolysis process are illustrated in Fig. 2b. The samples were hard baked at 300 °C for 30 min and then pyrolyzed at 900 °C for 60 min. During the pyrolysis process, the samples were kept in the H2(5%)/Ar(95%) atmosphere with a heating rate of 10 °C/min. The obtained microstructures were characterized by a scanning electron microscope (SEM, Helios NanoLab G3, FEI).
a The process for fabricating 3D carbon suspended structures. b The temperature curve of the pyrolysis
Light Intensity Distribution
Figure 3a shows the cross section of the 3D light intensity distribution under a line-shaped mask with the line width d = 1 μm, 1.5 μm, 2 μm, 2.5 μm, 3 μm, 3.5 μm, and 4 μm, respectively. Here, relative intensity is adopted, and the incident light intensity is defined as 1. The light at the bottom of the photoresist will gradually scatter owing to the light diffraction effect. Once the light intensity reaches a threshold value, the photoresist will get enough energy to release the reaction and turn solid; otherwise, it will be removed in the development process. The thickness of the region above the threshold (0.75 in this study) is defined as the exposure depth, which is very sensitive to the pattern width. The exposure depth is 5.3 μm under d = 1 μm and 18.2 μm under d = 2 μm. It will further increase to 33.5 μm under d = 3 μm and 47.5 μm under d = 4 μm. If the line width is narrower than 1 μm, the exposure depth will be too small for the fabrication, because the air gap between the mask and photoresist caused by the unevenness of the thick photoresist will make the exposure fail. Figure 3b, c shows the mask patterns for fabricating suspended structures and the corresponding light intensity distribution at z = 5, 10, 15, and 20 μm, where the line width is set as 2 μm. The exposure depth of the line and mesh patterns is between 15 and 20 μm, while that of the large squares and circles is big enough to form pillars during lithography. Thus, suspended beams and meshes can be fabricated, supported by the pillars. Since it is hard to fabricate suspended structures when the line width is greater than 5 μm, line patterns are stacked side by side to fabricate wide suspended beams or meshes, as shown in Fig. 3d.
The mask patterns and simulation results in the photoresist. a The light intensity distributions below the photo mask under d = 1 μm, 1.5 μm, 2 μm, 2.5 μm, 3 μm, 3.5 μm, and 4 μm, where d is the width of the line pattern. The mask pattern for b suspended beams, c meshes, and d meshes with stacked line patterns and the corresponding light intensity distributions under z = 5 μm, 10 μm, 15 μm, and 20 μm in the photoresist. Here, z is the distance between the section plane and photo mask
Suspended Photoresist Structures
Experiments were carried out to fabricate suspended structures. We tested the minimum exposure time to obtain photoresist pillars and defined it as the exposure threshold. Then, four or three times of the threshold value was adopted as the exposure dose and the threshold of the relative light intensity was evaluated at 0.75, in accordance with the simulation. Figure 4 shows the suspended photoresist beams under different d value. It is found that the thickness of the suspended layer h is positively related to d. For NR26-25000P photoresist, h is 10.9 μm under d = 2 μm (Fig. 4a) and increases to 25.5 μm under d = 4 μm (Fig. 4e). As d comes to 5 μm, the exposure depth is big enough to reach the substrate, and no suspended structures is obtained (Fig. 4f). Figure 4g–k depicts the suspended structures of SU-8. The function of h vs. d for both experiments and simulations is illustrated in Fig. 4l, where the straight lines are fitted by the least square method. The linear correlation coefficient R of the fitted lines are R2 = 0.963, 0.988, and 0.858 for simulations without counting the absorption, NR26-25000P, and SU-8, respectively. It can be seen that the results of the SU-8 experiments are very close to the simulation results. By contrast, the suspended layer of NR26-25000P is much thinner than that of the simulation without absorption. This can be mainly attributed to the transparent property of SU-8 and the high absorption ability of NR26-25000P. This is also why gray exposure can be used to fabricate suspended structures for some photoresist, but not suitable for SU-8.
The suspended photoresist beams resulted from one-step diffraction lithography with different line width d using the mask pattern in Fig. 3b. NR26-25000P photoresist beams under a d = 2 μm, b 2.5 μm, c 3 μm, d 3.5 μm, e 4 μm, and f 5 μm; SU-8 2100 photoresist beams under g d = 2 μm, h 2.5 μm, i 3 μm, j 3.5 μm, and k 4 μm; l the functions of exposure thickness vs. line width in simulation without absorption, NR26-25000P, and SU-8 2100 and simulations with absorption coefficient α = 0.0374 μm−1, where the inset shows the tilted view of SU-8 suspended beams. The thickness of the beams increases with the line width of the mask pattern. The scale bars are 50 μm
Then, we introduce absorption coefficient α in optical model and perform the calculations with formula (8). The results under α = 0.0374 μm−1 (the absorption coefficient of NR21-25000P at 365 nm, tested by a UV-visible spectrophotometer, UV 2600, Shimadzu Co., Ltd.) are shown in Fig. 4l, where the fitted line with R2 = 0.986 agrees well with the experimental results of NR26-25000P. Thus, our method is available for almost all kinds of thick negative photoresist to fabricate suspended structures with one-step exposure, in which the exposure depth can be guided through simulations.
Figure 5a–c displays the varied cross connection patterns and the corresponding simulation results at z = 15 μm. Three lines are stacked side by side to fabricate a broad suspended beam, where the line width and interval width are both 2 μm. The cross connection pattern with a 20-μm circle is used to fabricate a pillar to support the suspended beams (Fig. 5a). Hollow cross connection patterns are designed to fabricate suspended meshes, as exhibited in Fig. 5b, c. The obtained NR26-25000P photoresist connections are shown in Fig. 5d–f, where the surface textures on the cross connections together with the beams can be clearly observed, in good agreement with the simulations (Fig. 5a–c). Suspended meshes with the three types of cross connections are shown in Fig. 5g–i, and the supporting pillars are also obtained as expected (Fig. 5g). Figure 5h illustrates the thin pillars under the cross connections, owing to the dense patterns with high ratio. The cross connection pattern in Fig. 5c possesses lower duty ratio, where the light intensity is weak, resulting in a full suspended mesh (Fig. 5f). Thus, the ratio of the cross connection patterns can be reduced to fabricate full suspended structures, while the supporting pillars can be easily formed with a solid connection. Meanwhile, the width of the beam can also be controlled by adjusting the number of the stacked line patterns.
Different cross connection patterns with NR26-25000P. a–c Three cross connection patterns on the mask and the corresponding simulation results at z = 15 μm, where the line width is 2 μm with spacing of 2 μm and z is the distance between the section plane and photo mask. d–f The textures on the obtained photoresist cross connection and the broad beams, where the scale bars are 20 μm. g The suspended meshes with supporting pillars. h The suspended meshes with thin supporting pillars, where the pillars result from the dense cross connection patterns with high ratio. i The full suspended mesh patterns. The scale bars in g–i are 100 μm
Some complex 3D microstructures have also been fabricated via a single exposure (Fig. 6a–c, e, f) or a two-step exposure (Fig. 6d) method. Suspended beams with gradient thickness are shown in Fig. 6a, where the width of the line patterns varies from 2 to 4 μm and 4 to 6 μm in the two regions. The thickness of the suspended layer increases with the increase of the line width, in line with the results displayed in Fig. 4. Suspended concentric rings and suspended word patterns can also be easily prepared (Fig. 6b, c). By combining the two exposure processes, two suspended layers have been integrated with NR26-25000P, as shown in Fig. 6d. After the first exposure is completed, the second layer is then spin-coated on the first layer and exposed. The stacked meshes are achieved after the two exposure processes followed by a developing process. Since the second exposure may cause damage to the first layer, the structures need to be carefully optimized to fabricate more excellent multi-layer suspended structures. SU-8 photoresist suspended meshes with word patterns have also been successfully achieved (Fig. 6d–f), though it is more difficult than NR26-25000P to control the exposure parameters due to the high transparency.
3D suspended photoresist structures. a Suspended beams with gradient thickness, b suspended concentric rings, c suspended word structures, and d multilayer suspended meshes, where the photoresist is NR26-25000P. e Suspended SU-8 mesh. f Suspended SU-8 meshes with word patterns. The scale bars are 100 μm. The suspended structures in d is achieved by a two-step exposure, and the others are fabricated with a one-step exposure
Compared with previous works [2, 11, 22, 23], we form a 3D light intensity distribution model in the photoresist by utilizing the diffraction of the small mask patterns. The 3D suspended structures can be well controlled and forecasted by simulations. The absorption coefficient of the photoresist is also taken into account here. Suspended structures with various thicknesses, such as gradient beams, are formed easily through the one-step exposure. Moreover, the exposure process is performed with an ordinary mask in a typical contact exposure mode, and no special masks or equipment is needed, exhibiting excellent compatibility with high fabrication resolution.
Pyrolytic Carbon Structures
SU-8 is a typical precursor for the fabrication of carbon microstructures, while other photoresists like NR26-25000P are not able to sustain the structures under high temperature. Figure 7a–c shows the suspended SU-8 structures while the corresponding pyrolytic carbon structures are presented in Fig. 7d–f. Large shrinkage occurs during the pyrolysis process owing to the multiple concurrent reactions, including dehydrogenation, cyclization, condensation, hydrogen transfer, and isomerization [8, 28]. Thus, a considerable residual stress will exist in the pyrolytic structures, especially in the asymmetric structures. The pyrolytic carbon beams will shrink and pull the pillars at both ends, causing cracks at the bottom (Fig. 7d). As for the large-scale meshes, the stress maintains a relative balance in each direction and no obvious cracks are found in the pyrolytic carbon structures (Fig. 7e, f). Free-standing carbon meshes with the size of 12 mm × 20 mm are achieved, as shown in Fig. 7g–i. The sheet resistance of the carbon meshes is about 182 Ω/sq, and the light transmittance reaches ~ 67% in the whole wavelength. The as-prepared carbon meshes with superior conductivity and transparency can be applied into perovskite solar cells as electrode [29,30,31], offering an available method for fabricating semitransparent solar cells. Moreover, the as-prepared carbon meshes possess excellent flexibility, demonstrating great potential in the applications of flexible transparent electrodes.
Suspended SU-8 meshes and pyrolytic carbon meshes. a Suspended SU-8 beams. b, c Suspended SU-8 meshes with supporting pillars. d Suspended carbon beams, where great strains remained in the carbon structures and cracks occurred at the bottom of the pillar. e, f Suspended carbon meshes. g Free-standing carbon mesh after pyrolysis. h Magnification of the free-standing carbon mesh. i A 12 mm × 20 mm free-standing carbon mesh, which presents well flexibility and transparency. The scale bars are 100 μm
In summary, we demonstrated the fabrication of suspended structures via a novel one-step mask-based diffraction lithography method. The 3D light intensity distribution in the photoresist was simulated, showing that the exposure depth increased with the increase of the width of the line patterns under d < 5 μm. This phenomenon could be utilized to fabricate suspended structures with defined thickness of SU-8 photoresist, which was almost transparent and hard to form suspended structures with grayscale lithography. The corresponding experiments were also conducted here. We found that the thickness of the suspended SU-8 beams was very close to the simulation results, while that of the NR26-25000P was much thinner than the exposure depth in the simulations. This was caused by the high light absorption property of NR26-25000P. Then, the absorption coefficient of photoresist was introduced in the optical model, and the simulation results agreed well with the experiments. Three different cross connection patterns were designed for fabricating suspended 3D meshes with or without supporting pillars, and the surface textures were well replicated. Meshes with pillars and full suspended meshes were also successfully achieved. Other complex 3D suspended photoresist structures, including suspended beams with gradient thickness, suspended concentric rings, and suspended word structures, were obtained through the one-step mask-based diffraction lithography.
Carbon suspended structures and free-standing carbon meshes were further fabricated with a typical two-step pyrolysis process. The suspended 3D carbon structures could be applied in electrochemical electrode, supercapacitor, and sensors owing to their large surface area. The free-standing meshes exhibited excellent conductivity, flexibility, and high transparency. Thus, we developed a simplified and promising method for the fabrications of 3D suspended structures and carbon meshes, which showed great potential in the applications of transparent electrode, semitransparent solar cells, and energy storage devices.
C-MEMS:
Carbon microelectromechanical systems
Wang CL, Jia GY, Taherabadi LH, Madou MJ (2005) A novel method for the fabrication of high-aspect ratio C-MEMS structures. J Microelectromech S 14(2):348–358
Long H, Xi S, Liu D, Shi T, Xia Q, Liu S, Tang Z (2012) Tailoring diffraction-induced light distribution toward controllable fabrication of suspended C-MEMS. Opt Express 20(15):17126
Lee JA, Lee S, Lee K, Il Park S, Lee SS (2008) Fabrication and characterization of freestanding 3D carbon microstructures using multi-exposures and resist pyrolysis. J Micromech Microeng 18:0350123
Xi S, Shi T, Long H, Xu L, Tang Z (2015) Suspended integration of pyrolytic carbon membrane on C-MEMS. Microsyst Technol 21(9):1835–1841
Shilpa DSK, Afzal MAF, Srivastava S, Patil S, Sharma A (2016) Enhanced electrical conductivity of suspended carbon nanofibers: effect of hollow structure and improved graphitization. Carbon 108:135–145
Lim Y, Heo JI, Madou M, Shin H (2013) Monolithic carbon structures including suspended single nanowires and nanomeshes as a sensor platform. Nanoscale Res Lett 8(1):492
Lim Y, Heo J, Shin H (2014) Fabrication and application of a stacked carbon electrode set including a suspended mesh made of nanowires and a substrate-bound planar electrode toward for an electrochemical/biosensor platform. Sensors Actuators B Chem 192:796–803
Hemanth S, Caviglia C, Keller SS (2017) Suspended 3D pyrolytic carbon microelectrodes for electrochemistry. Carbon 121:226–234
Ho V, Zhou C, Kulinsky L, Madou M (2013) Fabrication of 3D polypyrrole microstructures and their utilization as electrodes in supercapacitors. J Micromech Microeng 23:12502912
Hemanth S, Anhøj TA, Caviglia C, Keller SS (2017) Suspended microstructures of epoxy based photoresists fabricated with UV photolithography. Microelectron Eng 176:40–44
Martinez DR (2014) SU-8 photolithography as a toolbox for carbon MEMS. Micromachines-Basel 5(3):766–782
Parida OP, Bhat N (2009) Characterization of optical properties of SU-8 and fabrication of optical components. In: International Conference on Optics and Photonics
Leong ESP, Deng J, Khoo EH, Wu S, Phua WK, Liu YJ (2015) Fabrication of suspended, three-dimensional chiral plasmonic nanostructures with single-step electron-beam lithography. RSC Adv 5(117):96366–96371
Malladi K, Wang C, Madou M (2006) Fabrication of suspended carbon microstructures by e-beam writer and pyrolysis. CARBON 44(13):2602–2607
Lutwyche MI, Moore DF (1991) Suspended structures made by electron beam lithography. J Micromech Microeng 1(4):237
Peele AG, Shew BY, Vora KD, Li HC (2005) Overcoming SU-8 stiction in high aspect ratio structures. Microsyst Technol 11(2–3):221–224
Accoto C, Qualtieri A, Pisanello F, Ricciardi C, Pirri CF, Vittorio MD, Rizzi F (2015) Two-photon polymerization lithography and laser doppler vibrometry of a SU-8-based suspended microchannel resonator. J Microelectromech S 24(4):1038–1042
Yang L, Qian D, Xin C, Hu Z, Ji S, Wu D, Hu Y, Li J, Huang W, Chu J (2017) Direct laser writing of complex microtubes using femtosecond vortex beams. Appl Phys Lett 110(22):221103
Yang L, Ji S, Xie K, Du W, Liu B, Hu Y, Li J, Zhao G, Wu D, Huang W, Liu S, Jiang H, Chu J (2017) High efficiency fabrication of complex microtube arrays by scanning focused femtosecond laser Bessel beam for trapping/releasing biological cells. Opt Express 25(7):8144
Kuo JC, Li CS, Cheng HC, Yang YJ (2013) Suspended magnetic polymer structures fabricated using dose-controlled ultraviolet exposure. Micro Nano Lett 8(10):676–680
Lim Y, Heo J, Madou MJ, Shin H (2013) Development of suspended 2D carbon nanostructures: nanowires to nanomeshes. In: Transducers & Eurosensors Xxvii: the International Conference on Solid-State Sensors, Actuators and Microsystems, pp 1935–1937
No KY, Kim GD, Kim GM (2008) Fabrication of suspended micro-structures using diffsuser lithography on negative photoresist. J Mech Sci Technol 22(9):1765–1771
Rammohan A, Dwivedi PK, Martinez-Duarte R, Katepalli H, Madou MJ, Sharma A (2011) One-step maskless grayscale lithography for the fabrication of 3-dimensional structures in SU-8. Sensors Actuators B Chem 153(1):125–134
Born M, Wolf E (2003) Elements of the theory of diffraction. In: Principles of Optics. Cambridge University Press, London, pp 412–430
Wei JS (2015) Resolving improvement by combination of pupil filters and nonlinear thin films. In: Nonlinear super-resolution nano-optics and applications. Springer, Berlin, Heidelberg, pp 165–171
Jiang S, Shi T, Gao Y, Long H, Xi S, Tang Z (2014) Fabrication of a 3D micro/nano dual-scale carbon array and its demonstration as the microelectrodes for supercapacitors. J Micromech Microeng 24:0450014
Jiang S, Shi T, Liu D, Long H, Xi S, Wu F, Li X, Xia Q, Tang Z (2014) Integration of MnO2 thin film and carbon nanotubes to three-dimensional carbon microelectrodes for electrochemical microcapacitors. J Power Sources 262:494–500
Ma CCM, Chen CY, Kuan HC, Chang WC (2004) Processability, thermal, mechanical, and morphological properties of novolac type-epoxy resin-based carbon-carbon composite. J Compos Mater 38(4):311–320
Han JH, Tu YX, Liu ZY, Liu XY, Ye HB, Tang ZR, Shi TL, Liao GL (2018) Efficient and stable inverted planar perovskite solar cells using dopant-free CuPc as hole transport layer. Electrochim Acta 273:273e281
Liu XY, Liu ZY, Ye HB, Tu YX, Sun B, Tan XH, Shi TL, Tang ZR, Liao GL (2018) Novel efficient C60-based inverted perovskite solar cells with negligible hysteresis. Electrochim Acta 288:115–125
Liu XY, Liu ZY, Sun B, Tan XH, Ye HB, Tu YX, Shi TL, Tang ZR, Liao GL (2018) All low-temperature processed carbon-based planar heterojunction perovskite solar cells employing Mg-doped rutile TiO2 as electron transport layer. Electrochim Acta 283:1115–1124
The authors acknowledge the Micro and Nano Fabrication and Measurement Laboratory of Collaborative Innovation Center for Digital Intelligent Manufacturing Technology and Application for the support in SEM test. Thanks to Mr. Huang Guang in the Center of Micro-Fabrication and Characterization (CMFC) of WNLO for the support of MJB4 operation.
This work is supported by the National Natural Science Foundation of China (Grant Nos. 51805195, 51675209, and 51675210) and the China Postdoctoral Science Foundation (Grant Nos. 2017M612448 and 2016M602283).
The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.
State Key Lab of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan, 430074, People's Republic of China
Xianhua Tan
, Tielin Shi
, Jianbin Lin
, Bo Sun
, Zirong Tang
& Guanglan Liao
Search for Xianhua Tan in:
Search for Tielin Shi in:
Search for Jianbin Lin in:
Search for Bo Sun in:
Search for Zirong Tang in:
Search for Guanglan Liao in:
XHT and GLL designed the experiments. XHT and JBL performed the experiments and the calculation. XHT and GLL drafted the manuscript. Other authors contributed to the data analysis and the manuscript modification. All authors read and approved the final manuscript.
Correspondence to Guanglan Liao.
Tan, X., Shi, T., Lin, J. et al. One-Step Mask-Based Diffraction Lithography for the Fabrication of 3D Suspended Structures. Nanoscale Res Lett 13, 394 (2018) doi:10.1186/s11671-018-2817-6
Three-dimensional suspended structure
|
CommonCrawl
|
Journal of Bioenergetics and Biomembranes
Kinetics simulation of transmembrane transport of ions and molecules through a semipermeable membrane
S. O. Karakhim
P. F. Zhuk
S. O. Kosterin
We have developed a model to study the kinetics of the redistribution of ions and molecules through a semipermeable membrane in complex mixtures of substances penetrating and nonpenetrating through a membrane. It takes into account the degree of dissociation of these substances, their initial concentrations in solutions separated by a membrane, and volumes of these solutions. The model is based on the assumption that only uncharged particles (molecules or ion pairs) diffuse through a membrane (and not ions as in the Donnan model). The developed model makes it possible to calculate the temporal dependencies of concentrations for all processing ions and molecules at system transition from the initial state to equilibrium. Under equilibrium conditions, the ratio of ion concentrations in solutions separated by a membrane obeys the Donnan distribution. The Donnan effect is the result of three factors: equality of equilibrium concentrations of penetrating molecules on each side of a membrane, dissociation of molecules into ions, and Le Chatelier's principle. It is shown that the Donnan distribution (irregularity of ion distribution) and accordingly absolute value of the Donnan membrane potential increases if: (i) the nonpenetrating salt concentration (in one of the solutions) and its dissociation constant increases, (ii) the total penetrating salt concentration and its dissociation constant decreases, and (iii) the volumes ratio increases (between solutions with and without a nonpenetrating substance). It is shown also that only a slight difference between the degrees of dissociation of two substances can be used for their membrane separation.
Membrane transport Kinetic model Membrane permeability Donnan distribution Donnan potential
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
We will show that our kinetic model based on the assumption that only molecules (uncharged particles) penetrate through a membrane, and the Donnan model, based on the assumption that only ions can penetrate through a membrane, are equivalent. For this purpose, we use the "classical" thermodynamic approach and consider two states of the membrane system: the initial and equilibrium states.
In the initial state, the substance KB at concentration C1 and infinitely large dissociation constant K1, is located in the i-cell (i.e., only K+ and B− ions are present) while salt KA with concentration C2 and relatively small dissociation constant K2 is located in the e-cell. It should be taken into account that K+ and A− ions are in the i-cell simultaneously with the undissociated KA molecules. The cell volumes are equal to Vi = Ve.
To calculate the concentrations of ions and molecules in the e-cell, we designate z = [K+] = [A−] and write down an equation for the dissociation constant K2 (taking into account Eq. (8)):
$$ {K}_2=\frac{z^2}{C_2-z} $$
Solving the quadratic equation in z, we get:
$$ z=\frac{-{K}_2+\sqrt{K_2^2+4{K}_2{C}_2}}{2} $$
Thereby the initial state of the system under investigation may be presented as:
[K+]i = C1 | [K+]e = \( \frac{-{K}_2+\sqrt{K_2^2+4{K}_2{C}_2}}{2} \).
[B−]i = C1 | [A−]e = \( \frac{-{K}_2+\sqrt{K_2^2+4{K}_2{C}_2}}{2} \)
| [KA]e = \( \frac{2{C}_2+{K}_2-\sqrt{K_2^2+4{K}_2{C}_2}}{2} \)
Firstly, we will calculate the equilibrium state using the Donnan approach, i.e. we assume that only K+ and A− ions penetrate through the membrane.
As a result of permeable K+ and A− ions passing through the membrane, some amount of salt moves from the e-cell into the i-cell, taking into account that the electroneutrality condition requires transferring equal amounts of K+ and A− ions. As a result, the total salt concentration in the e-cell decreases by x and is then C2 – x. In accordance with Eq. (A1) in which C2 – x has to be written in the denominator instead of C2, it is possible to calculate the concentrations of ions and molecules in the e-cell at equilibrium.
Some of the ions that penetrate into the i-cell form molecules whose concentration is denoted by y. Then the concentrations of K+ cations and A− anions may be written as C1 + x – y and x – y, respectively. The concentrations of ions and undissociated molecules are related by the following equation:
$$ {K}_2=\frac{\left({C}_1+x-y\right)\left(x-y\right)}{y} $$
with which y can be calculated:
$$ y=\frac{\left({C}_1+{K}_2\right)+2x-\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}}{2} $$
Thereby the equilibrium state of the system under investigation may be presented as:
$$ {\displaystyle \begin{array}{ll}{\left[{K}^{+}\right]}_i=\frac{C_1-{K}_2+\sqrt{{\left({C}_1+{K}_2\right)}^2+4x{K}_2}}{2}& \mid {\left[{K}^{+}\right]}_e=\frac{-{K}_2+\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}}{2}\\ {}{\left[{A}^{-}\right]}_i=\frac{-{C}_1-{K}_2+\sqrt{{\left({C}_1+{K}_2\right)}^2+4x{K}_2}}{2}& \mid {\left[{A}^{-}\right]}_e=\frac{-{K}_2+\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}}{2}\\ {}{\left[ KA\right]}_i=\frac{\left({C}_1+{K}_2\right)+2x-\sqrt{{\left({C}_1+{K}_2\right)}^2+4x{K}_2}}{2}& \mid {\left[ KA\right]}_e=\frac{2\left({C}_2-x\right)+{K}_2-\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}}{2}\\ {}{\left[{B}^{-}\right]}_i={C}_1& \mid \end{array}} $$
Now, let us assume that only undissociated KA molecules penetrate through the membrane like in our kinetic approach. In the process of transition of the system from the initial state to the equilibrium state, the total salt concentration in the e-cell will decrease by x, and these x moles of salt KA will move to the solution located in the i-cell. In this case, the concentrations of ions and molecules in the e-cell at equilibrium are as follows:
$$ {\left[{K}^{+}\right]}_e={\left[{A}^{-}\right]}_e=\frac{-{K}_2+\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}}{2} $$
$$ {\left[\mathrm{KA}\right]}_e=\frac{2\left({C}_2-x\right)+{K}_2-\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}}{2} $$
Some KA molecules that have moved to the i-cell dissociate into ions. Let us denote the concentration of resulting ions as y. Then the concentrations of K+ cations and A− anions may be written as C1 + y and y, respectively The concentrations of ions and undissociated molecules are related by the following equation:
$$ {K}_2=\frac{y\left({C}_1+y\right)}{x-y} $$
$$ y=\frac{-\left({C}_1+{K}_2\right)+\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}}{2} $$
The resulting expression is the concentration of the A− anion in i-cell ([A−]i). Now, it is possible to calculate [K+]i and [KA]i:
$$ {\left[{K}^{+}\right]}_i={C}_1+y=\frac{C_1-{K}_2+\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}}{2} $$
$$ {\left[\mathrm{KA}\right]}_i=x-y=\frac{\left({C}_1+{K}_2\right)+2x-\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}}{2} $$
(A10)
As you can see, both approaches lead to identical results, suggesting their equivalence. Thus, regardless of whether undissociated molecules or individual ions are transported through the membrane (in quantities guaranteeing the maintenance of electrical neutrality), the system reaches the same equilibrium state (from an identical initial state).
We also show that both approaches lead to the equality of equilibrium concentrations of undissociated salt molecules in solutions separated by a semipermeable membrane.
At equilibrium, the penetrating ions obey the Donnan distribution, so the following equations may be written:
$$ \lambda =\frac{{\left[{K}^{+}\right]}_i}{{\left[{K}^{+}\right]}_e}=\frac{C_1-{K}_2+\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}}{-{K}_2+\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}} $$
$$ \lambda =\frac{{\left[{A}^{-}\right]}_e}{{\left[{A}^{-}\right]}_i}=\frac{-{K}_2+\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}}{-{C}_1-{K}_2+\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}} $$
Let us compare the concentrations of undissociated KA molecules in the i- and e-cells:
$$ \frac{{\left[ KA\right]}_i}{{\left[ KA\right]}_e}=\frac{\left({C}_1+{K}_2\right)+2x-\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}}{2\left({C}_2-x\right)+{K}_2-\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}} $$
It is difficult to determine from Eq. (A13) if the above ratio equals one. So let us focus on Eqs. (A11) and (A12): their ratio equals one.
$$ \frac{{\left[{K}^{+}\right]}_i{\left[{A}^{-}\right]}_i}{{\left[{K}^{+}\right]}_e{\left[{A}^{-}\right]}_e}=\frac{\left({C}_1-{K}_2+\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}\right)\left(-{C}_1-{K}_2+\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}\right)}{\left(-{K}_2+\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}\right)\left(-{K}_2+\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}\right)}=1 $$
After transformations we get:
$$ \frac{{\left[{K}^{+}\right]}_i{\left[{A}^{-}\right]}_i}{{\left[{K}^{+}\right]}_e{\left[{A}^{-}\right]}_e}=\frac{2{K}_2\left[\left({C}_1+{K}_2\right)+2x-\sqrt{{\left({C}_1+{K}_2\right)}^2+4{xK}_2}\right]}{2{K}_2\left[2\left({C}_2-x\right)+{K}_2-\sqrt{K_2^2+4{K}_2\left({C}_2-x\right)}\right]}=1 $$
The comparison between Eqs. (A13) and (A15) clearly shows that at equilibrium the concentrations of undissociated molecules in the cells separated by a membrane are equal:
$$ \frac{{\left[ KA\right]}_i}{{\left[ KA\right]}_e}=\frac{{\left[{K}^{+}\right]}_i{\left[{A}^{-}\right]}_i}{{\left[{K}^{+}\right]}_e{\left[{A}^{-}\right]}_e}=1 $$
Al-Obaidi MA, Kara-Zaitri C, Mujtaba IM (2017) Scope and limitations of the irreversible thermodynamics and the solution diffusion models for the separation of binary and multi-component systems in reverse osmosis process. Comput Chem Eng 100:48–79. https://doi.org/10.1016/j.compchemeng.2017.02.001 CrossRefGoogle Scholar
Cohen H, Cooley JW (1965) The numerical solution of the time-dependent Nernst-Planck equations. Biophys J 5:145–162. https://doi.org/10.1016/S0006-3495(65)86707-8 CrossRefPubMedPubMedCentralGoogle Scholar
Davis TA (2000) Donnan dialysis. In: Wilson ID, Adlard ER, Cooke M, Poole CF (eds) Encyclopedia of separation science. Academic Press, London, pp 1701–1707CrossRefGoogle Scholar
Déon S, Escoda A, Fievet P, Salut R (2013) Prediction of single salt rejection by NF membranes: an experimental methodology to assess physical parameters from membrane and streaming potentials. Desalination 315:37–45. https://doi.org/10.1016/j.desal.2012.09.005 CrossRefGoogle Scholar
Donnan FG (1924) The theory of membrane equilibria. Chem Rev 1:73–90. https://doi.org/10.1021/cr60001a003 CrossRefGoogle Scholar
Donnan FG (1995) Theory of membrane equilibria and membrane potentials in the presence of non-dialysing electrolytes. A contribution to physical-chemical physiology. J Membr Sci 100:45–55. https://doi.org/10.1016/0376-7388(94)00297-C CrossRefGoogle Scholar
Duffey ME, Fennell Evans D, Cussler EL (1978) Simultaneous diffusion of ions and ion pairs across liquid membranes. J Membr Sci 3:1–14. https://doi.org/10.1016/S0376-7388(00)80407-X CrossRefGoogle Scholar
Fridman-Bishop N, Tankus KA, Freger V (2018) Permeation mechanism and interplay between ions in nanofiltration. J Membr Sci 548:449–458. https://doi.org/10.1016/j.memsci.2017.11.050 CrossRefGoogle Scholar
Galach M, Waniewski J (2012) Membrane transport of several ions during peritoneal dialysis: mathematical modeling. Artif Organs 36:E163–E178. https://doi.org/10.1111/j.1525-1594.2012.01484.x CrossRefPubMedGoogle Scholar
Galama AH, Post JW, Hamelers HVM, Nikonenko VV, Biesheuvel PM (2016) On the origin of the membrane potential arising across densely charged ion exchange membranes: how well does the Teorell-Meyer-Sievers theory work? J Membr Sci Res 2:128–140. https://doi.org/10.22079/jmsr.2016.20311 CrossRefGoogle Scholar
Gimmi T, Alt-Epping P (2018) Simulating Donnan equilibria based on the Nernst-Planck equation. Geochim Cosmochim Acta 232:1–13. https://doi.org/10.1016/j.gca.2018.04.003 CrossRefGoogle Scholar
Grzegorczyn S, Ślęzak A (2006) Time characteristics of electromotive force in single-membrane cell for stable and unstable conditions of reconstructing of concentration boundary layers. J Membr Sci 280:485–493. https://doi.org/10.1016/j.memsci.2006.02.004 CrossRefGoogle Scholar
Higa M, Kira A (1992) Theory and simulation of ion transport in nonstationary states against concentration gradients across ion-exchange membranes. J Phys Chem 96:9518–9523. https://doi.org/10.1021/j100202a081 CrossRefGoogle Scholar
Johnson KS, Pytkowicz RM (1978) Ion association of Cl− with H+, Na+, K+, Ca2+, and Mg2+ in aqueous solutions at 25° C. Am J Sci 278:1428–1447. https://doi.org/10.2475/ajs.278.10.1428 CrossRefGoogle Scholar
Kim DY, Lee MH, Boram G, Kim JH, Lee S, Yang DR (2010) Modeling of solute transport in multi-component solution for reverse osmosis membranes. Desalination Water Treat 15:20–28. https://doi.org/10.5004/dwt.2010.1662 CrossRefGoogle Scholar
Kondepudi D, Prigogine I (1998) Modern thermodynamics. From heat engines to dissipative structures. John Wiley & Sons, New YorkGoogle Scholar
Kosterin SA, Cherny AP (1991) Gibbs-Donnan equilibrium in the system membrane vesicules – incubation medium. Biofizika 36:826–829. (In Russian)Google Scholar
Kozmai A, Chérif M, Dammak L, Bdiri M, Larchet C, Nikonenko V (2017) Modelling non-stationary ion transfer in neutralization dialysis. J Membr Sci 540:60–70. https://doi.org/10.1016/j.memsci.2017.06.039 CrossRefGoogle Scholar
Kumaran M, Bajpai S (2015) Application of extended Nernst Planck model in nano filtration process –a critical review. Int J Eng Res Rev 3:40–49Google Scholar
Kurbel S (2011) Donnan effect on chloride ion distribution as a determinant of body fluid composition that allows action potentials to spread via fast sodium channels. Theor Biol Med Model 8:16. https://doi.org/10.1186/1742-4682-8-16 CrossRefPubMedPubMedCentralGoogle Scholar
Lang GE, Stewart PS, Vella D, Waters SL, Goriely A (2014) Is the Donnan effect sufficient to explain swelling in brain tissue slices? J Roy Soc Interface 11:20140123. https://doi.org/10.1098/rsif.2014.0123 CrossRefGoogle Scholar
Luo J, Wu C, Wu Y, Xu T (2013) Diffusion dialysis of hydrochloric acid with their salts: effect of co-existence metal ions. Sep Purif Technol 118:716–722. https://doi.org/10.1016/j.seppur.2013.08.014 CrossRefGoogle Scholar
Marcus Y, Hefter G (2006) Ion pairing. Chem Rev 106:4585–4621. https://doi.org/10.1021/cr040087x CrossRefPubMedGoogle Scholar
Mazur I, Kosterin S, Veklich T, Shkrabak O (2014) Gibbs-Donnan potential as a tool for membrane vesicles polarization. J Biophys Chem 5:78–89. https://doi.org/10.4236/jbpc.2014.52009 CrossRefGoogle Scholar
Moshtarikhah S, Oppers NAW, de Groot MT, Keurentjes JTF, Schouten JC, van der Schaaf J (2017) Nernst-Planck modeling of multicomponent ion transport in a Naflon membrane at high current density. J Appl Electrochem 47:51–62. https://doi.org/10.1007/s10800-016-1017-2 CrossRefGoogle Scholar
Neihof R, Sollner K (1957) The transitory overshooting of final equilibrium concentrations in membrane systems which drift toward the Gibbs-Donnan membrane equilibrium. J Phys Chem 61:159–163. https://doi.org/10.1021/j150548a008 CrossRefGoogle Scholar
Nguyen MK, Kurtz I (2006) Quantitative interrelationship between Gibbs-Donnan equilibrium, osmolality of body fluid compartments, and plasma water sodium concentration. J Appl Physiol 100:1293–1300. https://doi.org/10.1152/japplphysiol.01274.2005 CrossRefPubMedGoogle Scholar
Nouri S, Dammak L, Bulvestre G, Auclair B (2002) Studies of the crossed ionic fluxes through a cation-exchange membrane in the case of Donnan dialysis. Desalination 148:383–388. https://doi.org/10.1016/S0011-9164(02)00734-8 CrossRefGoogle Scholar
Osterhout WJV (1925) Is living protoplasm permeable to ions? J Gen Physiol 8:131–146. https://doi.org/10.1085/jgp.8.2.131 CrossRefPubMedPubMedCentralGoogle Scholar
Osterhout WJV (1929) The kinetics of penetration. J Gen Physiol 13:261–294. https://doi.org/10.1085/jgp.13.2.261 CrossRefGoogle Scholar
Palmeri J, Lefebvre X (2006) Computer simulation of Nanofiltration, membranes and processes. In: Rieth M, Schommers W (eds) Handbook of theoretical and computational nanotechnology, Transport Phenomena and Nanoscale Processes, vol 5, 1st edn. American Scientific Publishers, Stevenson Ranch, pp 93–214Google Scholar
Philipse A, Vrij A (2011) The Donnan equilibrium: I. on the thermodynamic foundation of the Donnan equation of state. J Phys Condens Matter 23:194106. https://doi.org/10.1088/0953-8984/23/19/194106 CrossRefPubMedGoogle Scholar
Prado-Rubio OA, Møllerhøj M, Jørgensen SB, Jonsson G (2010) Modeling Donnan dialysis separation for carboxylic anion recovery. Comput Chem Eng 34:1567–1579. https://doi.org/10.1016/j.compchemeng.2010.03.003 CrossRefGoogle Scholar
Pyrzynska K (2006) Preconcentration and recovery of metal ions by Donnan dialysis. Microchim Acta 153:117–126. https://doi.org/10.1007/s00604-005-0434-4 CrossRefGoogle Scholar
Ramirez P, Alcaraz A, Mafe S, Pellicer J (2002) Donnan equilibrium of ionic drugs in pH-dependent fixed charge membranes: theoretical modelling. J Colloid Interface Sci 253:171–179. https://doi.org/10.1006/jcis.2002.8508 CrossRefPubMedGoogle Scholar
Rohman FS, Aziz N (2008) Mathematical model of ion transport in electrodialysis process. Bull Chem React Eng Catal 3:3–8. https://doi.org/10.9767/bcrec.3.1-3.7122.3-8 CrossRefGoogle Scholar
Sarkar S, Sengupta A, Prakash P (2010) The Donnan membrane principle: opportunities for sustainable engineered processes and materials. Environ Sci Technol 44:1161–1166. https://doi.org/10.1021/es9024029 CrossRefPubMedGoogle Scholar
Shu L, Liu X, Li Y, Yang B, Huang S, Lin Y, Jin S (2016) Modified Kedem-Katchalsky equations for osmosis through nano-pore. Desalination 399:47–52. https://doi.org/10.1016/j.desal.2016.08.011 CrossRefGoogle Scholar
Sobana S, Panda RC (2011) Review on modelling and control of desalination system using reverse osmosis. Rev Environ Sci Biotechnol 10:139–150. https://doi.org/10.1007/s11157-011-9233-z CrossRefGoogle Scholar
Steele A, Arias J (2014) Accounting for the Donnan effect in diafiltration optimization for high concentration UFDF applications. BioProcess Int 12:50–54Google Scholar
Szczepański P, Szczepańska G (2017) Donnan dialysis − a new predictive model for non−steady state transport. J Membr Sci 525:277–289. https://doi.org/10.1016/j.memsci.2016.11.017 CrossRefGoogle Scholar
Tanaka Y (2012) Measurement of membrane characteristics using the phenomenological equation and the overall mass transport equation in ion-exchange membrane electrodialysis of saline water. Int J Chem Eng 2012:Article ID 148147, 12. https://doi.org/10.1155/2012/148147 CrossRefGoogle Scholar
Tian H, Zhang L, Wang M (2015) Applicability of Donnan equilibrium theory at nanochannel-reservoir interfaces. J Colloid Interface Sci 452:78–88. https://doi.org/10.1016/j.jcis.2015.03.064 CrossRefPubMedGoogle Scholar
Vega FA, Weng L, Temminghoff EJM, Van Riemsdijk WH (2010) Donnan membrane technique (DMT) for anion measurement. Anal Chem 82:2932–2939. https://doi.org/10.1021/ac9029339 CrossRefPubMedGoogle Scholar
Volpert AI, Hudyaev SI (1975) Analyses in classes of discontinuous functions and equations of mathematical physics. Nauka, Moscow. (In Russian)Google Scholar
Wang J, Dlamini DS, Mishra AK, Pendergast MTM, Wong MCY, Mamba BB, Freger V, Verliefde ARD, Hoek EMV (2014) A critical review of transport through osmotic membranes. J Membr Sci 454:516–537. https://doi.org/10.1016/j.memsci.2013.12.034 CrossRefGoogle Scholar
Yaroshchuk A, Martínez-Lladó X, Llenas L, Rovira M, de Pablo J (2011) Solution-diffusion-film model for the description of pressure-driven trans-membrane transfer of electrolyte mixtures: one dominant salt and trace ions. J Membr Sci 368:192–201. https://doi.org/10.1016/j.memsci.2010.11.037 CrossRefGoogle Scholar
Zhao R, Van Soestbergen M, Rijnaarts HHM, Van der Wal A, Bazant MZ, Biesheuvel PM (2012) Time-dependent ion selectivity in capacitive charging of porous electrodes. J Colloid Interface Sci 384:38–44. https://doi.org/10.1016/j.jcis.2012.06.022 CrossRefPubMedGoogle Scholar
© Springer Science+Business Media, LLC, part of Springer Nature 2020
1.Palladin Institute of Biochemistry of the National Academy of Sciences of UkraineKyivUkraine
2.National Aviation UniversityKyivUkraine
Karakhim, S.O., Zhuk, P.F. & Kosterin, S.O. J Bioenerg Biomembr (2020). https://doi.org/10.1007/s10863-019-09821-8
Received 31 July 2019
Accepted 16 December 2019
Print ISSN 0145-479X
|
CommonCrawl
|
Codeforces Global Round 20
→ Virtual participation
Virtual contest is a way to take part in past contest, as close as possible to participation on time. It is supported only ICPC mode for virtual contests. If you've seen these problems, a virtual contest is not for you - solve these problems in the archive. If you just want to solve some problem from a contest, a virtual contest is not for you - solve this problem in the archive. Never use someone else's code, read the tutorials or communicate with other person during a virtual contest.
→ Problem tags
constructive algorithms
No tag edit access
→ Contest materials
Announcement (en)
Tutorial (en)
Custom test
The problem statement has recently been changed. View the changes.
E. notepad.exe
time limit per test
memory limit per test
256 megabytes
standard input
standard output
This is an interactive problem.
There are $$$n$$$ words in a text editor. The $$$i$$$-th word has length $$$l_i$$$ ($$$1 \leq l_i \leq 2000$$$). The array $$$l$$$ is hidden and only known by the grader.
The text editor displays words in lines, splitting each two words in a line with at least one space. Note that a line does not have to end with a space. Let the height of the text editor refer to the number of lines used. For the given width, the text editor will display words in such a way that the height is minimized.
More formally, suppose that the text editor has width $$$w$$$. Let $$$a$$$ be an array of length $$$k+1$$$ where $$$1=a_1 < a_2 < \ldots < a_{k+1}=n+1$$$. $$$a$$$ is a valid array if for all $$$1 \leq i \leq k$$$, $$$l_{a_i}+1+l_{a_i+1}+1+\ldots+1+l_{a_{i+1}-1} \leq w$$$. Then the height of the text editor is the minimum $$$k$$$ over all valid arrays.
Note that if $$$w < \max(l_i)$$$, the text editor cannot display all the words properly and will crash, and the height of the text editor will be $$$0$$$ instead.
You can ask $$$n+30$$$ queries. In one query, you provide a width $$$w$$$. Then, the grader will return the height $$$h_w$$$ of the text editor when its width is $$$w$$$.
Find the minimum area of the text editor, which is the minimum value of $$$w \cdot h_w$$$ over all $$$w$$$ for which $$$h_w \neq 0$$$.
The lengths are fixed in advance. In other words, the interactor is not adaptive.
The first and only line of input contains a single integer $$$n$$$ ($$$1 \leq n \leq 2000$$$) — the number of words on the text editor.
It is guaranteed that the hidden lengths $$$l_i$$$ satisfy $$$1 \leq l_i \leq 2000$$$.
Begin the interaction by reading $$$n$$$.
To make a query, print "? $$$w$$$" (without quotes, $$$1 \leq w \leq 10^9$$$). Then you should read our response from standard input, that is, $$$h_w$$$.
If your program has made an invalid query or has run out of tries, the interactor will terminate immediately and your program will get a verdict Wrong answer.
To give the final answer, print "! $$$area$$$" (without the quotes). Note that giving this answer is not counted towards the limit of $$$n+30$$$ queries.
After printing a query do not forget to output end of line and flush the output. Otherwise, you will get Idleness limit exceeded. To do this, use:
fflush(stdout) or cout.flush() in C++;
System.out.flush() in Java;
flush(output) in Pascal;
stdout.flush() in Python;
see documentation for other languages.
The first line of input must contain a single integer $$$n$$$ ($$$1 \leq n \leq 2000$$$) — the number of words in the text editor.
The second line of input must contain exactly $$$n$$$ space-separated integers $$$l_1,l_2,\ldots,l_n$$$ ($$$1 \leq l_i \leq 2000$$$).
! 32
In the first test case, the words are $$$\{\texttt{glory},\texttt{to},\texttt{ukraine},\texttt{and},\texttt{anton},\texttt{trygub}\}$$$, so $$$l=\{5,2,7,3,5,6\}$$$.
If $$$w=1$$$, then the text editor is not able to display all words properly and will crash. The height of the text editor is $$$h_1=0$$$, so the grader will return $$$0$$$.
If $$$w=9$$$, then a possible way that the words will be displayed on the text editor is:
$$$\texttt{glory__to}$$$
$$$\texttt{ukraine__}$$$
$$$\texttt{and_anton}$$$
$$$\texttt{__trygub_}$$$
The height of the text editor is $$$h_{9}=4$$$, so the grader will return $$$4$$$.
If $$$w=16$$$, then a possible way that the words will be displayed on the text editor is:
$$$\texttt{glory_to_ukraine}$$$
$$$\texttt{and_anton_trygub}$$$
The height of the text editor is $$$h_{16}=2$$$, so the grader will return $$$2$$$.
We have somehow figured out that the minimum area of the text editor is $$$32$$$, so we answer it.
|
CommonCrawl
|
Letters in Mathematical Physics
FRT presentation of classical Askey–Wilson algebras
Pascal Baseilhac
Nicolas Crampé
Automorphisms of the infinite-dimensional Onsager algebra are introduced. Certain quotients of the Onsager algebra are formulated using a polynomial in these automorphisms. In the simplest case, the quotient coincides with the classical analog of the Askey–Wilson algebra. In the general case, generalizations of the classical Askey–Wilson algebra are obtained. The corresponding class of solutions of the non-standard classical Yang–Baxter algebra is constructed, from which a generating function of elements in the commutative subalgebra is derived. We provide also another presentation of the Onsager algebra and of the classical Askey–Wilson algebras.
Onsager algebra Non-standard Yang–Baxter algebra Askey–Wilson algebras Integrable systems
Mathematics Subject Classification
81R50 81R10 81U15
We thank S. Belliard for discussions, and P. Terwilliger and A. Zhedanov for comments and suggestions. P.B. and N.C. are supported by C.N.R.S. N.C. thanks the IDP for hospitality, where part of this work has been done.
From (4.9) to (4.11), for \(k=0,1,2\) one has:
$$\begin{aligned} A_0= & {} {\mathcal {W}}_0{,}\quad A_1={\mathcal {W}}_1{,}\quad G_1=-\frac{1}{4}\tilde{{\mathcal {G}}}_{1}{,}\\ A_{-1}= & {} 2{\mathcal {W}}_{-1}-{\mathcal {W}}_1{,}\quad A_{2}=2{\mathcal {W}}_{2}-{\mathcal {W}}_0{,}\quad G_2=-\frac{1}{2}\tilde{{\mathcal {G}}}_{2}{,}\\ A_{-2}= & {} 4{\mathcal {W}}_{-2}-{\mathcal {W}}_0-2{\mathcal {W}}_{2}{,}\quad A_{3}=4{\mathcal {W}}_{3}-{\mathcal {W}}_1-2{\mathcal {W}}_{-1}{,}\quad G_3=-\tilde{{\mathcal {G}}}_{3} + \frac{1}{4}\tilde{{\mathcal {G}}}_{1} {.} \end{aligned}$$
Conversely, from (4.12)–(4.13) for \(k=1,2\) one has:
$$\begin{aligned} {\mathcal {W}}_{-1}= & {} \frac{A_1 + A_{-1}}{2}{,} \quad {\mathcal {W}}_{2}= \frac{A_0 + A_{2}}{2}{,}\quad \tilde{{\mathcal {G}}}_{2}=-2G_2{,} \\ {\mathcal {W}}_{-2}= & {} \frac{A_2 + 2A_{0} + A_{-2}}{4}{,} \quad {\mathcal {W}}_{2}= \frac{A_3 + 2A_{1} + A_{-1}}{4},\quad \tilde{{\mathcal {G}}}_{3}=-G_3 -2G_1\ . \end{aligned}$$
Baseilhac, P.: An integrable structure related with tridiagonal algebras. Nucl. Phys. B 705, 605–619 (2005). arXiv:math-ph/0408025 ADSMathSciNetCrossRefzbMATHGoogle Scholar
Baseilhac, P., Belliard, S.: An attractive basis for the \(q\)-Onsager algebra. arXiv:1704.02950
Baseilhac, P., Belliard, S., Crampe, N.: FRT presentation of the Onsager algebras. Lett. Math. Phys., 1–24 (2018). arXiv:1709.08555 [math-ph]
Baseilhac, P., Koizumi, K.: A deformed analogue of Onsager's symmetry in the XXZ open spin chain. J. Stat. Mech. 0510, P005 (2005). arXiv:hep-th/0507053 Google Scholar
Baseilhac, P., Koizumi, K.: Exact spectrum of the XXZ open spin chain from the q-Onsager algebra representation theory. J. Stat. Mech., P09006 (2007). arXiv:hep-th/0703106
Baseilhac, P., Kolb, S.: Braid group action and root vectors for the \(q-\)Onsager algebra. arXiv:1706.08747
Baseilhac, P., Shigechi, K.: A new current algebra and the reflection equation. Lett. Math. Phys. 92, 47–65 (2010). arXiv:0906.1482 ADSMathSciNetCrossRefzbMATHGoogle Scholar
Davies, B.: Onsager's algebra and superintegrability. J. Phys. A 23, 2245–2261 (1990)ADSMathSciNetCrossRefzbMATHGoogle Scholar
Davies, B.: Onsager's algebra and the Dolan–Grady condition in the non-self-dual case. J. Math. Phys. 32, 2945–2950 (1991)ADSMathSciNetCrossRefzbMATHGoogle Scholar
Dolan, L., Grady, M.: Conserved charges from self-duality. Phys. Rev. D 25, 1587–1604 (1982)ADSMathSciNetCrossRefGoogle Scholar
De Bie, H., Genest, V.X., van de Vijver, W., Vinet, L.: A higher rank Racah algebra and the \(Z_n\) Laplace–Dunkl operator. arXiv:1610.02638
Faddeev, L.D., Reshetikhin, N.Y., Takhtajan, L.A.: Quantization of Lie groups and Lie algebras. LOMI preprint, Leningrad (1987)Google Scholar
Faddeev, L.D., Reshetikhin, N.Y., Takhtajan, L.A.: Quantization of Lie groups and Lie algebras. Leningr. Math. J. 1, 193 (1990)MathSciNetzbMATHGoogle Scholar
Genest, V.X., Vinet, L., Zhedanov, A.: The equitable Racah algebra from three \(su(1, 1)\) algebras. J. Phys. A Math. Theor. 47, 025203 (2013). arXiv:1309.3540 ADSMathSciNetCrossRefzbMATHGoogle Scholar
Granovskii, Y., Zhedanov, A.S.: Nature of the symmetry group of the 6j-symbol. Zh. Eksper. Teoret. Fiz. 94, 49–54 (1988). (English transl.: Soviet Phys. JETP 67 (1988), 1982–1985)MathSciNetGoogle Scholar
Granovskii, Y., Lutzenko, I., Zhedanov, A.: Linear covariance algebra for \(sl_q(2)\). J. Phys. A Math. Gen. 26, L357–L359 (1993)CrossRefGoogle Scholar
Koornwinder, T.: The relationship between Zhedanov's algebra \(AW(3)\) and the double affine Hecke algebra in the rank one case. SIGMA 3, 063 (2007). arXiv:math.QA/0612730 MathSciNetzbMATHGoogle Scholar
Koornwinder, T.: Zhedanov's algebra \(AW(3)\) and the double affine Hecke algebra in the rank one case. II. The spherical subalgebra. SIGMA 4, 052 (2008). arXiv:0711.2320 MathSciNetzbMATHGoogle Scholar
Koornwinder, T., Mazzocco, M.: Dualities in the \(q\)-Askey scheme and degenerated DAHA. arXiv:1803.02775
Mazzocco, M.: Confluences of the Painlevé equations, Cherednik algebras and q-Askey scheme. Nonlinearity 29, 2565 (2016). arXiv:1307.6140 ADSMathSciNetCrossRefzbMATHGoogle Scholar
Nomura, K., Terwilliger, P.: Linear transformations that are tridiagonal with respect to both eigenbases of a Leonard pair. Linear Alg. Appl. 420, 198–207 (2007). arXiv:math.RA/0605316 MathSciNetCrossRefzbMATHGoogle Scholar
Onsager, L.: Crystal statistics. I. A two-dimensional model with an order-disorder transition. Phys. Rev. 65, 117–149 (1944)ADSMathSciNetCrossRefzbMATHGoogle Scholar
Post, S.: Racah polynomials and recoupling schemes of \(su(1, 1)\). SIGMA 11, 057 (2015)MathSciNetzbMATHGoogle Scholar
Post, S., Walter, A.: A higher rank extension of the Askey–Wilson algebra. arXiv:1705.01860
Roan, S.S.: Onsager Algebra, Loop Algebra and Chiral Potts Model, MPI 91–70. Max-Planck-Institut für Mathematik, Bonn (1991)Google Scholar
Skrypnyk, T.: Generalized quantum Gaudin spin chains, involutive automorphisms and "twisted" classical r-matrices. J. Math. Phys. 47, 033511 (2006)ADSMathSciNetCrossRefzbMATHGoogle Scholar
Skrypnyk, T.: Infinite-dimensional Lie algebras, classical r-matrices, and Lax operators: two approaches. J. Math. Phys. 54, 103507 (2013)ADSMathSciNetCrossRefzbMATHGoogle Scholar
Terwilliger, P.: Leonard pairs and dual polynomial sequences. Preprint available at: https://www.math.wisc.edu/~terwilli/lphistory.html
Terwilliger, P., Vidunas, R.: Leonard pairs and the Askey–Wilson relations. J. Algebra Appl. 3, 411–426 (2004). arXiv:math.QA/0305356 MathSciNetCrossRefzbMATHGoogle Scholar
Terwilliger, P.: The subconstituent algebra of an association scheme. III. J. Algebraic Combin. 2(2), 177–210 (1993)MathSciNetCrossRefzbMATHGoogle Scholar
Terwilliger, P.: Two relations that generalize the q-Serre relations and the Dolan–Grady relations. In: Kirillov, A.N., Tsuchiya, A., Umemura, H. (eds.) Proceedings of the Nagoya 1999 International Workshop on Physics and Combinatorics, pp. 377–398. arXiv:math.QA/0307016
Terwilliger, P.: The universal Askey–Wilson algebra. SIGMA 7, 069 (2011). arXiv:1104.2813v2 MathSciNetzbMATHGoogle Scholar
Terwilliger, P.: The universal Askey–Wilson Algebra and DAHA of type \((C_1^\vee, C_1)\). SIGMA 9, 047 (2013). arXiv:1202.4673 zbMATHGoogle Scholar
Terwilliger, P.: The Lusztig automorphism of the q-Onsager algebra. J Algebra 506 56-75. arXiv:1706.05546
Wiegmann, P., Zabrodin, A.: Algebraization of difference eigenvalue equations related to \(U_q(sl_2)\). Nucl. Phys. B 451, 699–724 (1995). arXiv:cond-mat/9501129 ADSCrossRefzbMATHGoogle Scholar
Zhedanov, A.S.: "Hidden symmetry" of the Askey–Wilson polynomials. Theor. Math. Phys. 89, 1146–1157 (1991)MathSciNetCrossRefzbMATHGoogle Scholar
Zhedanov, A.S.: Quantum \(SU_q(2)\) algebra: "Cartesian" Version and Overlaps. Mod. Phys. Lett. A 7, 1589 (1992)ADSCrossRefzbMATHGoogle Scholar
© Springer Nature B.V. 2019
1.Institut Denis-Poisson CNRS/UMR 7013Université de Tours - Université d'Orléans Parc de GrammontToursFrance
2.Laboratoire Charles Coulomb (L2C)Univ Montpellier, CNRSMontpellierFrance
Baseilhac, P. & Crampé, N. Lett Math Phys (2019). https://doi.org/10.1007/s11005-019-01182-y
Received 27 June 2018
Revised 27 February 2019
DOI https://doi.org/10.1007/s11005-019-01182-y
|
CommonCrawl
|
Anatomical, physical, and mechanical properties of four pioneer species in Malaysia
H. Hamdan1,
A. S. Nordahlia1,
U. M. K. Anwar1,
M. Mohd Iskandar1,
M. K. Mohamad Omar1 &
Tumirah K1
The purpose of this study is to evaluate the anatomical, physical, and mechanical properties of four pioneer species, i.e., batai (Paraserianthes moluccana), ludai (Sapium baccatum), mahang (Macaranga gigantea), and sesendok (Endospermum malaccense). Correlation of factors influencing density, shrinkage, and mechanical properties were also discussed. Samples were obtained from the Forest Research Institute Malaysia (FRIM) campus. From the result obtained, these four pioneer species characterised by medium-to-large vessel with absent of tyloses and gum deposit, fine ray, thin walled fibre, runkel ratio less than 1.0, low in density, and mechanical properties. Sesendok has significantly higher value in fibre length, fibre diameter, fibre lumen diameter, fibre wall thickness, vessel diameter, density, MOR, MOE, compression parallel to grain, and shear parallel to grain compared to the other three pioneer species which were 2001 µm, 45 µm, 35 µm, 5.1 µm, 300 µm, 514 kg/m3, 79.5 N/mm2, 9209 N/mm2, 38.7 N/mm2, and 10.1 N/mm2, respectively. Between these four pioneer species, ludai has significantly higher in runkel ratio which was 0.57, whereas mahang shows significantly higher in slenderness ratio and number of vessels/mm2 which were 50.2 and 5 vessel/mm2, respectively. On the other hand, batai has higher tangential, radial and longitudinal shrinkage compared to ludai, mahang, and sesendok which were 3.0%, 2.4%, and 0.8%, respectively. Based on the basic property study, batai, ludai, mahang, and sesendok could be suitable for pulp and paper, plywood, light construction, furniture, interior finishing, and general utility. Fibre length, fibre wall thickness, and vessel diameter correlated significantly with density and mechanical properties. Shrinkage and mechanical properties were significantly influenced by density.
Pioneer species is seen as an alternative material to the depleting resources of commercial timber from natural forest. It grows on previously disturbed land, such as areas of clear cutting, damage by the elements of nature, or in former agricultural land. These species adapted well to nutrient-depleted soils and colonize them more easily than other species. They are also known as successional species and make the soil more livable for species that are not good colonizers by putting nutrients back into the soil and providing shade for other plants [1]. Information on the availability of pioneer species was obtained from the National Forest Inventory 4 Report for Peninsular Malaysia conducted by the Forest Department Peninsular Malaysia (JPSM) in 2000–2002 [2]. According to [1], pioneer species such as batai, ludai, mahang, and sesendok have potential for the cellulosic industry due to its fast growth, relatively free from common or major known pests and diseases, and yet produce acceptable wood.
Study on the anatomical, physical, and mechanical properties need to be done on the pioneer species to explore the suitability of these species for various applications in wood-based industry such as in pulp and paper and also in plywood industry where the demands on this product are increased. Anatomical properties study such as the cell structure and fibre morphology are very important to determine the different areas of application. As an example, fibre morphology is an indicator on the suitability of timber for pulp and paper products [3]. Besides that, fibre length and fibre wall thickness are also a determinant to predict the density and mechanical properties [4]. On the other hand, vessel size is related to the treatment ability, where large vessel indicates the easy treatment compared to small vessels [5].
Physical properties such as density and shrinkage are related to wood quality. Density is correlated with shrinkage, drying, machining, and mechanical properties [6, 7]. Shrinkage of wood is another important physical property noted by Kiaei [8]. It is necessary to have good understanding on the shrinkage behavior of wood, since this property is associated with effects such as warping, cupping, checking, and splitting that will contribute to the most troublesome physical properties of the wood [9]. Mechanical properties would affect the wood quality, characterised the suitability of wood for structural applications, and also can be used as an indicator to the quality of sawn lumber [10, 11].
The purpose of this study is to evaluate the anatomical, physical, and mechanical properties of four pioneer species, i.e., batai (Paraserianthes moluccana), ludai (Sapium baccatum), mahang (Macaranga gigantea), and sesendok (Endospermum malaccense). These four pioneer species were selected in this study to meet the needs of the wood industry that requires continuous supply and short-term raw materials. Therefore, batai, ludai, mahang, and sesendok were selected for the study due to their fast growth in which in 10 years which they are able to harvest. Correlation factors influencing density, shrinkage, and mechanical properties were also presented. It is hoped that these basic properties will be useful to the wood-based industry to explore the suitable products from the pioneer timbers species.
Preparation of materials
Samples of batai (Paraserianthes moluccana), ludai (Sapium baccatum), mahang (Macaranga gigantea), and sesendok (Endospermum malaccense) were obtained from the Forest Research Institute Malaysia (FRIM) campus. The trees were planted at a spacing of 3 × 3 m. Three trees from each species which age of 14 years were felled at 15 cm above the ground. Two discs approximately 3 cm in thickness and billets of 2 m length were cut. Discs were assigned for two different studies, viz., for anatomical and physical properties study, and billets of 2 m long were used for mechanical property study [12].
Determination of anatomical properties
The anatomical feature study was conducted according to the method by Schweingruber and Schulze [13]. A wood block 10 × 10 × 10 mm was each taken from the wood disc. The blocks were boiled in distilled water until they were well soaked and sank. The sledge microtome was used to cut thin sections from the transverse, tangential, and radial surfaces of each block. The thickness of wood sections must be in the range of between 25 µm. The transverse, tangential, and radial sections were kept in separate petri dishes for the staining process. Staining was carried out using 1% safranin-0. These sections were washed with 50% ethanol and dehydrated using a series of ethanol solutions with concentrations of 70%, 80%, 90%, and 95%. Then, one drop of Canada Balsam was placed on top of the section and covered with a cover slip. The slides were oven-dried at 60 °C for a few days.
The maceration technique was used to determine the fibre morphology [14]. A wood block split into matchstick size pieces before being macerated using a mixture of 30% hydrogen peroxide:glacial acetic acid at a ratio of 1:1 at 45 °C for 2 to 3 h until all of the lignin had dissolved and the cellulose fibres appeared whitish. Microscopic observations and measurement of the wood anatomical features were carried out using the light microscope. The descriptive terminology follows the International Association of Wood Anatomists (IAWA) List of Microscopic Features for Hardwood Identification [14]. For all the anatomical property measurements, there were 25 readings which were taken randomly for each species of batai, ludai, mahang, and sesendok. The Slenderness ratio (fibre length/fibre diameter) and Runkel ratio (2 × wall thickness/lumen diameter) [15, 16] were also calculated.
Determination of physical and mechanical properties
Physical properties were tested using British Standard 373:1957 Methods of Testing Small Clear Specimens of Timber [17]. Samples of size 20 mm in radial × 20 mm in longitudinal × 40 mm in tangential directions were cut from the woods for the analyses of density and shrinkage. Density was determined on the basis of oven dry weight and green volume. The shrinkage test was conducted in green to air-dry conditions. The tangential, radial, and longitudinal sections of each sample were marked and measured with a pair of digital vernier callipers (Mitutoyo) to the nearest 0.01 mm. A total of 90 specimens were used for each species of batai, ludai, mahang, and sesendok. Shrinkage was calculated using the following equations:
$$ S_{a} \left( \% \right) = \frac{{D_{i} {-}D_{a} }}{{D_{i} }} \times 100, $$
where Sa: shrinkage from green to air-dry conditions, Di: initial dimension (mm), and Da: air-dry dimension (mm).
Samples for mechanical properties were tested in accordance with British Standard 373:1957 Methods of Testing Small Clear Specimens of Timber [17]. Types of tests that were conducted: static bending of modulus of rupture (MOR) and modulus of elasticity (MOE), compression, and shear parallel to the grain. The standard dimensions for static bending test were 300 × 20 × 20 mm. Dimensions of 20 × 20 × 60 mm specimens were used for the test of compression parallel to the grain. Each specimen was placed in a vertical position. The dimensions of specimens for shear parallel to the grain were 20 × 20 × 20 mm. The direction of shearing was parallel to the longitudinal direction of the grain. The test was made on the tangential and radial planes of the sample. The total number of specimens was 90 for each species of batai, ludai, mahang, and sesendok. All tests were conducted using the 100 KN Shimadzu testing machine.
Statistical analysis was performed using Statistical Analysis System (SAS) version 9.1.3 software. Analysis of variance (ANOVA) was used to determine whether or not the differences in means were significant. If the differences were significant, Least Significant Difference (LSD) test was used to determine which of the means were significantly different from one another. The relationship between the properties was analysed using simple correlation analysis.
Anatomical properties
Anatomical features of batai, ludai, mahang, and sesendok are shown in Figs. 1, 2, 3, and 4. The anatomical features of these four pioneer species are described for their identification and an important indication on the suitability of the timber for its potential usage. Figure 1 shows the anatomical features of batai. It shows that the vessels are predominantly solitary and in radial multiples of 2–4 with simple perforation. The tangential diameter ranges from 282 to 299 µm and the frequency is 1–3/mm2. Tyloses and deposit are absent. The axial parenchyma is vasicentric and diffuse but visible as white dots in cross section when observe with a hand lens than in the microscope. Its rays are usually uniseriate although sometimes present as biseriates with cell height at 310–550 µm and homocellular with procumbent cells. Fibres are non-septate, while crystal is present in chambered axial parenchyma but silica grains absent.
Batai: (a) tranverse section, (b) tangential section, and (c) radial section
Ludai: (a) tranverse section, (b) tangential section, and (c) radial section
Mahang: (a) tranverse section, (b) tangential section, and (c) radial section
Sesendok: (a) tranverse section, (b) tangential section, and (c) radial section
Anatomical features of ludai (Fig. 2) show that the vessels are predominantly solitary and in radial multiples of 2–6 with simple perforation. The tangential diameter ranges from 243 to 257 µm and frequency is at 3–5/mm2. Tyloses and deposit are absent. Axial parenchyma is irregularly wavy, narrow bands, more distinct with hand lens than in the microscope due to lack of contrast with fibres. Rays are exclusively uniseriate, height ranging from 2500 to 8000 µm, homocellular cells. Fibres are non-septate, while silica grains are present in rays and axial parenchyma.
Anatomical features of mahang (Fig. 3) show that the vessels are solitary and in radial multiples of 2–3 with simple perforations. Tangential diameter ranges from 155 to 167 µm and frequency 4–7/mm2. Tyloses and deposit are absent. Axial parenchyma is in narrow bands. Rays 1–3 seriate, height ranging from 1700 to 3100 µm, heterocellular with procumbent and upright cells. Fibres are non-septate. Crystal is often present in rays or axial parenchyma. Silica grain is absent.
Anatomical features of sesendok (Fig. 4) show that the vessels are predominantly in radial pairs and multiples of 2–7 in a series and occasional clusters with simple perforation. Tangential diameter ranges from 291 to 309 µm and the frequency is 1–3/mm2. Tyloses and deposit are distinctly absent. Axial parenchyma is regularly spaced apotracheal bands, more distinct with hand lens than observing under the microscope. Rays are 1–2 seriate, height of 500–1500 µm, heterocellular with procumbent and upright cells. The fibres are non-septate, while crystal and silica grains are absent.
Table 1 summarizes the result of anatomical properties of batai, ludai, mahang, and sesendok with comparison to other well-known plantation timbers. Results showed that the fibre lengths are significantly different at (p ≤ 0.05), with sesendok having the longest fibre as compared to the other three pioneer species. This present result is similar with the finding by [18] who also found in his study that sesendok has the longest fibre which is very long for hardwood and could be suitable for the pulp and paper. In comparison to other well-known plantation timbers, i.e., rubberwood (Hevea brasiliensis) and Eucalyptus grandis, these four pioneer species show comparable value in terms of fibre length. The fibre wall of sesendok is the thickest at 5.1 µm followed by mahang, ludai, and batai. Fibre wall thickness of these four pioneer species categorised as very thin fibre walled which is the fibre lumen is three times wider than the double wall thickness. Runkel ratio of batai, ludai, mahang, and sesendok is less than 1.0 which were 0.27, 0.57, 0.38, and 0.28, respectively, whilst the slenderness ratio for batai, ludai, mahang, and sesendok were 36.4, 43.2, 50.2, and 45.6, respectively. On the other hand, vessel diameter of batai, ludai, and sesendok is categorised as large with sesendok having the significantly largest vessel diameter. Vessel diameter for mahang is the smallest as compared to the other three pioneer species and categorised as medium-sized vessel. A number of vessels present for these four pioneer species are categorised as very few.
Table 1 Anatomical properties of batai, ludai, mahang, and sesendok in comparison with other well-known plantation timbers
The suitability of the timber for papermaking is based on the runkel ratio. Fibres with a runkel ratio of less than 1.0 are suitable for use as pulp with good strength properties [3]. High runkel ratio indicates inferior raw material for papermaking where the fibre is stiff, less flexible and forms bulkier paper with lower bonded area [19]. Based on the result obtained (Table 1), the mean runkel ratios of all four pioneer species studied were less than 1.0, indicating that the fibres from the timber would produce good quality paper. Besides that, the tearing strength and folding endurance of paper are indicated by the slenderness ratio [20]. Larger slenderness ratio results are better for paper making where it indicates a better formed and well-bonded paper [19, 21]. Present results showed that the slenderness ratio of the four pioneer species is in the range of 36.4–50.2. This result is comparable to the study for Eucalyptus grandis, as shown in Table 1, which ranges from 42.6 to 59.8 [22]. Batai, ludai, mahang, and sesendok also show thin fibre wall and large fibre lumen diameter which, according to [5] this features, contributes to the good adhesive penetration.
Observation on the anatomical features of the four pioneer species shows that all the timbers can be categorised as having medium-to-large vessel according to the vessel category by [14]. Karl [23] stated that wood species with medium-to-large vessel may not be good for printing papers, while [24] reported that generally species with medium-to-large pores are light with course texture which is suitable for general usage. These four pioneer species have larger vessel, absent of tylosis, and gum deposit, which have uniseriate and fine rays which according to [5, 25, 26], and these characteristics make them easy to be impregnated to enhance the wood properties. The absence of gum deposits in batai, ludai, mahang, and sesendok would also make these timbers suitable for veneering into plywood. Adeniyi et al. [5] further reported that timber for plywood should be free from gum deposits as it would interfere with wood gluability. The anatomical features show that these four pioneer species mostly have uniseriate rays which could contribute to the excellent nailing property. As reported by [27], wood with multiseriate rays is poor in nailing property as it has a tendency to split when nailed. However, the presence of silica in ludai would cause a blunting effect on sawteeth. This is also reported by [28] where silica that present in Coelostegia griffithii and Durio griffithii cause a blunting effect on sawteeth.
Physical and mechanical properties
Results of physical and mechanical properties are tabulated in Table 2. Based on the density, batai, ludai, mahang, and sesendok are classified as light timber, which are comparable to rubberwood and Eucalyptus grandis. From the result obtained, sesendok has the highest density, followed by mahang, ludai, and batai. From this result, the trend of density in four pioneer species could be related with the fibre length, fibre wall thickness, and vessel diameter. Longest fibre and thickest fibre wall found in sesendok (Table 1) are directly related to the highest density among the four species studied. On the other hand, batai has the shortest, thinnest fibre, and large vessel diameter (Table 1) which contribute to the lower density. Similar result was also reported by [29] and [30] where density is correlated to the fibre length, fibre wall thickness, and vessel diameter.
Table 2 Physical and mechanical properties of batai, ludai, mahang, and sesendok in comparison with other well-known plantation timbers
In terms of shrinkage (Table 2), batai and sesendok have the highest shrinkage for tangential, radial, and longitudinal. Ludai and mahang shows no significant difference in tangential and longitudinal shrinkage between them. Percentage of shrinkage for sesendok and batai are rated as high, whilst ludai and mahang rated as average. The shrinkage rating is based on the percentage shrinkage of tangential from green to air dry as reported by [31]. The Sesendok shows significantly higher value in MOR, MOE, compression, and shear parallel to grain, followed by mahang, ludai, and the lowest mechanical properties of batai. Van Gelder [32] reported that pioneer species had a significantly lower wood density, MOR, and compression strength.
Correlation factors influencing density, shrinkage, and mechanical properties
Table 3 presented the correlation factors influencing density, shrinkage, and mechanical properties of batai, ludai, mahang, and sesendok. Based on the results, density was positively correlated with fibre length except in batai, where the correlation is moderate-to-strong. Fibre diameter is weakly correlated with density in batai (r = 0.229) and mahang (r = 0.325). Density is positively correlated with fibre wall thickness in all species study with weak-to-moderate correlation. Vessel diameter also significantly correlates with the density with negative and weak-to-moderate correlation in all species. In terms of shrinkage, it shows significantly correlation with fibre length and fibre wall thickness in batai, ludai, mahang, and sesendok. Shrinkage is highly affected by density compared to other anatomical properties with positive and very weak-to-very strong correlation in all species. On the other hand, mechanical properties are significantly correlated with fibre length, fibre wall thickness, and vessel diameter with positive correlation in batai, ludai, mahang, and sesenduk. Among the properties, density is the best factor to be correlated with MOR, MOE, compression parallel to grain, and shear parallel to grain which shows positive and weak-to-strong correlation. Fibre dimeter and fibre lumen diameter are also significantly correlated with some properties, as shown in Table 3.
Table 3 Correlation factors influencing density, shrinkage and mechanical properties of batai, ludai, mahang, and sesendok
This study found the anatomical properties that significantly affect the density and mechanical properties are fibre length, fibre wall thickness, and vessel diameter. Similar findings were also observed by [4, 33] in Pseudolachnostylis maprounaefolia and Azadirachta excelsa, respectively. [5] further stated that strong wood has smaller size of vessel diameter and thick fibre wall. On the other hand, shrinkage was affected significantly by the anatomical properties, namely, fibre length and fibre wall thickness. Significant correlation of fibre with shrinkage was also reported by [34] in Gmelina arborea. Based on the result obtained (Table 3), the number of vessels per mm2 does not significantly influenced the density, shrinkage, and mechanical properties which was also confirmed by [4].
Thus, it can be inferred from the results of this study that shrinkage and mechanical properties are highly dependence on density. Wood with higher density has higher shrinkage and mechanical properties. This is in good agreement with [35,36,37] who also reported significant relationship between density and shrinkage in Melia azedarach, Azadiractha indica, and Pinus pinaster, respectively. Whilst, correlation between density and mechanical properties was also observed by [38,39,40] in Acacia mangium, Acacia melanoxylon, and Tectona grandis, respectively.
Based on the result obtained, sesendok have the largest vessel, longest and thickest fibre, highest in density, and mechanical properties compared to batai, ludai, and mahang. These four pioneer species could be suitable for pulp and paper, since they have longer fibre and a runkel ratio less than 1.0. The absence of gum deposit made the timbers suitable for plywood. Besides that, these four pioneer species have low density and mechanical properties which makes them suitable for light construction, furniture, interior finishing, and general utility. Batai, ludai, mahang, and sesendok have excellent nailing property and could be easily treated. In terms of correlation, fibre length, fibre wall thickness, and vessel diameter are significantly correlated to density and mechanical properties. In this present study, density is a good indicator for predicting shrinkage and mechanical properties. Generally, batai, ludai, mahang, and sesendok could be a promising timber species as an alternative material to the depleting resources of commercial timber.
All data analysed during this study are included in this published article.
modulus of rupture
MOE:
Cheah LC (1995) Pioneer species for fast growing tree plantations in Malaysia-an evaluation. FRIM Technical Information No53. Forest Research Institute Malaysia, Kepong, Selangor
Forest Department Peninsular Malaysia (JPSM) (2004) National forest inventory 4 report for Peninsular Malaysia conducted by the Forest Department Peninsular Malaysia (JPSM) in 2000–2002
Takeuchi R, Wahyudi I, Aiso H, Ishiguri F, Istikowati TW, Ohkubo T, Ohshima J, Iizuka K, Yokota S (2016) Wood properties related to pulp and paper quality in two Macaranga species naturally regenerated in secondary forests, Central Kalimantan, Indonesia. TROPICS 25(3):107–115
Uetimane EJ, Ali AC (2011) Relationship between mechanical properties and selected anatomical features of Ntholo (Pseudolachnostylis maprounaefolia). J Trop For Sci 23(2):166–176
Adeniyi IM, Adebagbo CA, Oladapo FM, Ayetan G (2013) Utilisation of some selected wood species in relation to their anatomical features. Glob J Sci Front Res Agric Veter 13(9):2249–4626
Igartua DV, Monteoliva SE, Monterubbianesi MG, Villegas MS (2003) Basic density and fibre length at breast height of Eucalyptus globulus for parameter prediction of the whole tree. IAWA J 24(2):173–184
Miyoshi Y, Kojiro K, Furuta Y (2018) Effects of density and anatomical feature on mechanical properties of various wood species in lateral tension. J Wood Sci 64:509–514
Kiaei M (2011) Anatomical, physical and mechanical properties of Eldas Pine (Pinus eldarica Medw.) grown in the Kelardasht region. Turkish J Agric For 35:3–42
Shukla SR, Sharma SK, Rao RV (2003) Specific gravity and shrinkage behaviour of eight-year-old plantation grown Tecomella undulata. J Trop For Products 9:35–44
Hsu CYL, Chauhan SS, King N, Lindstrom H (2003) Modulus of elasticity of stemwood vs branchwood in 7-year-old Pinus radiata families. NZ J Forest Sci 33(1):35–46
Karlinasari L, Wahyuna ME, Nugroho M (2008) Non-destructive ultrasonic testing method for determining bending strength properties of Gmelina wood (Gmelina arborea). J Trop For Sci 20(2):99–104
Tan YE, Lim NPT, Gan KS, Wong TC, Lim SC, Thilagawaty M (2010) Testing methods for plantation grown tropical timbers. ITTO project on improving utilization and value adding of plantation timbers from sustainable sources in Malaysia project NO. PD 306/04(1). Forest Research Institute Malaysia, Kepong, Selangor
Schweingruber FH, Borner A, Schulze ED (2006) Atlas of woody plant stems: evolution, structure and environmental modifications. Springer, New York
Wheeler EA, Baas P, Gasson PE (1989) IAWA List of microscopic features for hardwood identification. IAWA Bull 10(3):219–332
Gülsoy SK, Hafizoglu H, Pekgozlu AK, Tumen İ, Donmez İE, Sivrikaya H (2017) Fiber properties of axis and scale of eleven different coniferous cones. Ind Crops Prod 109:45–52
Singh S, Mohanty AK (2007) Wood fiber reinforced bacterial bioplastic composites: fabrication and performance evaluation. Compos Sci Technol 67(9):1753–1763
BS (British Standard) 373 (1957) Methods of testing small clear specimens of timber. British Standard Institution
Ogata K, Fujii T, Abe H, Baas P (2008) Identification of the timbers of Southeast Asia and the Western Pacific. Forestry and Forest Product Research Institute, Tsukuba
Ashori A, Nourbakhsh A (2009) Studies on Iranian cultivated paulownia: a potential source of fibrous raw material for paper industry. Eur J Wood Wood Prod 67:323–327
Yahya R, Sugiyama J, Silsia D, Grill J (2010) Some anatomical features of an Acacia hybrid, A. mangium and A. auriculiformis grown in Indonesia with regard to pulp yield and paper strength. J Trop For Sci 22:343–351
Rodriguez HG, Maiti R, Kumari A, Sarkar NC (2016) Variability in wood density and wood fibre characterization of woody species and their possible utility in Northeastern Mexico. Am J Plant Sci 7:1139–1150
Palermo GPM, Latorraca JVF, Carvalho AM, Calonego FW, Severo ETD (2015) Anatomical properties of Eucalyptus grandis wood and transition age between the juvenile and mature woods. Eur J Wood Product 73:775–780. https://doi.org/10.1007/s00107-015-0947-4
Karl FW (1984) Forestry handbook, Society of American Foresters. pp 616–623
Jayeola AA, David OA, Abayomi EF (2009) Use of wood characters in the identification of selected timber species in Nigeria. Not Bot Hort Agrobot 37(2):2832
Sint KM, Militz H, Hapla F (2011) Treatability and penetration indices of four lesser-used Myanmar hardwood. J Wood Res 56(1):13–22
Sint KM, Stergios A, Gerald K, Frantisek H, Holger M (2013) Wood anatomy and topochemistry of Bombax ceiba L. and Bombax insigne Wall. Bioresource 8(1):530544
Lim SC, Nordahlia AS, Abd Latif M, Gan KS, Rahim S (2016) Identification and properties of Malaysian Timbers. Malaysian Forest Records No. 53. Forest Research Institute Malaysia, Kepong, Selangor
Wong WC, Lim SC (1990) Malaysian Timbers-Durian. Timber Trade Leaflet No 113. Forest Research Institute Malaysia, Kepong, Selangor
Nordahlia AS, Ani S, Zaidon A, Mohd Hamami S (2011) Fibre morphology and physical properties of 10-year-old sentang (Azadirachta excelsa) planted from rooted cuttings and seedlings. J Trop For Sci 23(2):222–227
Alia-Syahirah Y, Paridah MT, Hamdan H, Anwar UMK, Nordahlia AS, Lee SH (2019) Effects of anatomical characteristics and wood density on surface roughness and their relation to surface wettability of hardwood. J Trop For Sci 31(3):269–277
Lim SC, Gan KS, Chung RCK (2019) A dictionary of Malaysian Timbers. Malayan Forest Records No 30. Forest Research Institute Malaysia, Kepong, Selangor
Van Gelder HA, Poorter L, Sterck FJ (2006) In wood mechanics, allometry, and life history variation in a tropical rain forest tree community. New Phytol 171(2):367–378
Nordahlia AS, Anwar UMK, Hamdan H, Zaidon A, Mohd Omar MK (2014) Mechanical properties of 10-year-old sentang (Azadirachta excelsa) grown from vegetative propagation. J Trop For Sci 26(2):240–248
Okon KE (2014) Relationships between fibre dimensional characteristics and shrinkage behavior in a 25-year-old Gmelina arborea in Oluwa forest reserve, South West Nigeria. Appl Sci Res 6(5):50–57
Van Duong D, Matsumura J (2018) Transverse shrinkage variations within tree stems of Melia azedarach planted in northern Vietnam. J Wood Sci 64:720–729
Sotannde OA, Oluyege AO, Adeogun PF, Maina SB (2010) Variation in wood density, grain and anisotropic shrinkage of plantation grown Azadiractha indica. J Appl Sci Res 6(11):1855–1861
Muñoz GR, Anta MB (2010) Physical properties of tinning wood in maritime pine (Pinus pinaster Ait): Case study. Eur J For Res 129(103):71045
Fanny H, Ramadhani AP, Harry P, Sri S (2018) Pengaruh kecepatan pertumbuhan terhadap sifat fisika dan mekanika kayu Acacia mangium Umur 4 Tahun asal Wonogiri, Jawa Tengah (Effect of growth rate on physical and mechanical properties of 4 year old Acacia mangium wood from Wonogiri, Central Java). J For Sci 12:248–254
Machado JS, Louzada JL, Santos AJA, Nunes L, Anjos O, Rodrigues J, Simões RMS, Pereira H (2014) Variation of wood density and mechanical properties of Blackwood (Acacia melanoxylon R.Br.). Mater Des 56:975–980
Izekor DN, Fuwape JA, Oluyege AO (2010) Effects of density on variations in the mechanical properties of plantation grown Tectona grandis wood. Appl Sci Res 2(6):113–120
Naji HR, Suhaimi MH, Nobuchi T, Bakar ES (2013) Intra and interclonal variation in anatomical properties of Hevea Brasiliensis Muell. Argic Wood Fiber Sci 45(3):268–278
Bal BC, Bektaş İ (2012) The physical properties of heartwood and sapwood of Eucalyptus grandis. Proligno 8(4):35–43
Bal BC, Bektaş İ (2013) The mechanical properties of heartwood and sapwood of Flooded gum (Eucalyptus grandis) Grown in Karabucak, Turkey. Ormancilik Dergisi (Forestry Magazine) 9(1):71–76
Forest Products Division, Forest Research Institute Malaysia, 52109, Kepong, Selangor Darul Ehsan, Malaysia
H. Hamdan, A. S. Nordahlia, U. M. K. Anwar, M. Mohd Iskandar, M. K. Mohamad Omar & Tumirah K
H. Hamdan
A. S. Nordahlia
U. M. K. Anwar
M. Mohd Iskandar
M. K. Mohamad Omar
Tumirah K
All authors have participated sufficiently in the study of wood properties and are responsible for the entire contents. The author read and approved the final manuscript.
Correspondence to A. S. Nordahlia.
Hamdan, H., Nordahlia, A.S., Anwar, U.M.K. et al. Anatomical, physical, and mechanical properties of four pioneer species in Malaysia. J Wood Sci 66, 59 (2020). https://doi.org/10.1186/s10086-020-01905-z
DOI: https://doi.org/10.1186/s10086-020-01905-z
Pioneer species
|
CommonCrawl
|
Multistability and localized attractors in a dissipative discrete NLS equation
A kinetic energy reduction technique and characterizations of the ground states of spin-1 Bose-Einstein condensates
June 2014, 19(4): 1129-1136. doi: 10.3934/dcdsb.2014.19.1129
On the limit cycles of the Floquet differential equation
Jaume Llibre 1, and Ana Rodrigues 2,
Departament de Matemàtiques, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Catalonia
College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter EX4 4QF, United Kingdom
Received April 2012 Revised February 2014 Published April 2014
We provide sufficient conditions for the existence of limit cycles for the Floquet differential equations $\dot {\bf x}(t) = A{\bf x}(t)+ε(B(t){\bf x}(t)+b(t))$, where ${\bf x}(t)$ and $b(t)$ are column vectors of length $n$, $A$ and $B(t)$ are $n\times n$ matrices, the components of $b(t)$ and $B(t)$ are $T$--periodic functions, the differential equation $\dot {\bf x}(t)= A{\bf x}(t)$ has a plane filled with $T$--periodic orbits, and $ε$ is a small parameter. The proof of this result is based on averaging theory but only uses linear algebra.
Keywords: Floquet differential equation, averaging theory., periodic solution, limit cycle.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C3.
Citation: Jaume Llibre, Ana Rodrigues. On the limit cycles of the Floquet differential equation. Discrete & Continuous Dynamical Systems - B, 2014, 19 (4) : 1129-1136. doi: 10.3934/dcdsb.2014.19.1129
V. I. Arnold, V. V. Kozlov and A. I. Neishtadt, Mathematical Aspects of Classical and Celestial Mechanics,, Second Printing, (1997). Google Scholar
A. Buică, J.P. Françoise and J. Llibre, Periodic solutions of nonlinear periodic differential systems with a small parameter,, Communication on Pure and Applied Analysis, 6 (2007), 103. doi: 10.1016/j.physd.2011.11.007. Google Scholar
C. Chicone, Ordinary Differential Equations with Applications,, Springer-Verlag, (1999). Google Scholar
D. G. de Figueiredo, Análise de Fourier e Equaçoes Diferenciais Parciais,, Projeto Euclides 5, 5 (1977). Google Scholar
M. W. Hirsch and S. Smale, Differential Equations, Dynamical Systems and Linear Algebra,, Pure and Applied Mathematics 60, 60 (1974). Google Scholar
J. Llibre, M.A. Teixeira and J. Torregrosa, Limit cycles bifurcating from a $k$-dimensional isochronous set center contained in $R^n$ with $k \leq n$,, Math. Phys. Anal. Geom., 10 (2007), 237. doi: 10.1007/s11040-007-9030-7. Google Scholar
P. Lochak and C. Meunier, Multiphase averaging for classical systems,, Appl. Math. Sciences 72, 72 (1988). doi: 10.1007/978-1-4612-1044-3. Google Scholar
I. G. Malkin, Some Problems of the Theory of Nonlinear Oscillations,, (Russian) Gosudarstv. Izdat. Tehn.-Teor. Lit., (1956). Google Scholar
M. Roseau, Vibrations non Linéaires et Théorie de la Stabilité,, (French) Springer Tracts in Natural Philosophy, 8 (1966). Google Scholar
J. A. Sanders, F. Verhulst and J. Murdock, Averaging Methods in Nonlinear Dynamical Systems,, Second edition, 59 (2007). Google Scholar
W. F. Trench, On nonautonomous linear systems of differential and difference equations with R-symmetric coefficient matrices,, Linear Algebra Appl., 431 (2009), 2109. doi: 10.1016/j.laa.2009.07.004. Google Scholar
W. F. Trench, Asymptotic preconditioning of linear homogeneous systems of differential equations,, Linear Algebra Appl., 434 (2011), 1631. doi: 10.1016/j.laa.2010.03.026. Google Scholar
F. Verhulst, Nonlinear Differential Equations and Dynamical Systems,, Universitext, (1996). doi: 10.1007/978-3-642-61453-8. Google Scholar
Shanshan Liu, Maoan Han. Bifurcation of limit cycles in a family of piecewise smooth systems via averaging theory. Discrete & Continuous Dynamical Systems - S, 2018, 0 (0) : 0-0. doi: 10.3934/dcdss.2020133
Jihua Yang, Liqin Zhao. Limit cycle bifurcations for piecewise smooth integrable differential systems. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2417-2425. doi: 10.3934/dcdsb.2017123
Fatih Bayazit, Ulrich Groh, Rainer Nagel. Floquet representations and asymptotic behavior of periodic evolution families. Discrete & Continuous Dynamical Systems - A, 2013, 33 (11&12) : 4795-4810. doi: 10.3934/dcds.2013.33.4795
Lijun Wei, Xiang Zhang. Limit cycle bifurcations near generalized homoclinic loop in piecewise smooth differential systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (5) : 2803-2825. doi: 10.3934/dcds.2016.36.2803
Jaume Llibre, Clàudia Valls. Hopf bifurcation for some analytic differential systems in $\R^3$ via averaging theory. Discrete & Continuous Dynamical Systems - A, 2011, 30 (3) : 779-790. doi: 10.3934/dcds.2011.30.779
Qiongwei Huang, Jiashi Tang. Bifurcation of a limit cycle in the ac-driven complex Ginzburg-Landau equation. Discrete & Continuous Dynamical Systems - B, 2010, 14 (1) : 129-141. doi: 10.3934/dcdsb.2010.14.129
Changrong Zhu, Bin Long. The periodic solutions bifurcated from a homoclinic solution for parabolic differential equations. Discrete & Continuous Dynamical Systems - B, 2016, 21 (10) : 3793-3808. doi: 10.3934/dcdsb.2016121
S. L. Ma'u, P. Ramankutty. An averaging method for the Helmholtz equation. Conference Publications, 2003, 2003 (Special) : 604-609. doi: 10.3934/proc.2003.2003.604
Andrej V. Plotnikov, Tatyana A. Komleva, Liliya I. Plotnikova. The averaging of fuzzy hyperbolic differential inclusions. Discrete & Continuous Dynamical Systems - B, 2017, 22 (5) : 1987-1998. doi: 10.3934/dcdsb.2017117
Ben Niu, Weihua Jiang. Dynamics of a limit cycle oscillator with extended delay feedback. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1439-1458. doi: 10.3934/dcdsb.2013.18.1439
Valery A. Gaiko. The geometry of limit cycle bifurcations in polynomial dynamical systems. Conference Publications, 2011, 2011 (Special) : 447-456. doi: 10.3934/proc.2011.2011.447
Yu-Hsien Chang, Guo-Chin Jau. The behavior of the solution for a mathematical model for analysis of the cell cycle. Communications on Pure & Applied Analysis, 2006, 5 (4) : 779-792. doi: 10.3934/cpaa.2006.5.779
Anatoli F. Ivanov, Sergei Trofimchuk. Periodic solutions and their stability of a differential-difference equation. Conference Publications, 2009, 2009 (Special) : 385-393. doi: 10.3934/proc.2009.2009.385
P. Dormayer, A. F. Ivanov. Symmetric periodic solutions of a delay differential equation. Conference Publications, 1998, 1998 (Special) : 220-230. doi: 10.3934/proc.1998.1998.220
Xiao-Qian Jiang, Lun-Chuan Zhang. A pricing option approach based on backward stochastic differential equation theory. Discrete & Continuous Dynamical Systems - S, 2019, 12 (4&5) : 969-978. doi: 10.3934/dcdss.2019065
Defei Zhang, Ping He. Functional solution about stochastic differential equation driven by $G$-Brownian motion. Discrete & Continuous Dynamical Systems - B, 2015, 20 (1) : 281-293. doi: 10.3934/dcdsb.2015.20.281
Mickael Chekroun, Michael Ghil, Jean Roux, Ferenc Varadi. Averaging of time - periodic systems without a small parameter. Discrete & Continuous Dynamical Systems - A, 2006, 14 (4) : 753-782. doi: 10.3934/dcds.2006.14.753
Zvi Artstein. Averaging of ordinary differential equations with slowly varying averages. Discrete & Continuous Dynamical Systems - B, 2010, 14 (2) : 353-365. doi: 10.3934/dcdsb.2010.14.353
Magdalena Caubergh, Freddy Dumortier, Robert Roussarie. Alien limit cycles in rigid unfoldings of a Hamiltonian 2-saddle cycle. Communications on Pure & Applied Analysis, 2007, 6 (1) : 1-21. doi: 10.3934/cpaa.2007.6.1
Fangfang Jiang, Junping Shi, Qing-guo Wang, Jitao Sun. On the existence and uniqueness of a limit cycle for a Liénard system with a discontinuity line. Communications on Pure & Applied Analysis, 2016, 15 (6) : 2509-2526. doi: 10.3934/cpaa.2016047
Jaume Llibre Ana Rodrigues
|
CommonCrawl
|
Optical manipulation of Rashba-split 2-dimensional electron gas
Driving ultrafast spin and energy modulation in quantum well states via photo-induced electric fields
Samuel T. Ciocys, Nikola Maksimovic, … Alessandra Lanzara
Room-temperature coherent optical manipulation of hole spins in solution-grown perovskite quantum dots
Xuyang Lin, Yaoyao Han, … Kaifeng Wu
Angular momentum transfer from photon polarization to an electron spin in a gate-defined quantum dot
Takafumi Fujita, Kazuhiro Morimoto, … Seigo Tarucha
Room-temperature electron spin polarization exceeding 90% in an opto-spintronic semiconductor nanostructure via remote spin filtering
Yuqing Huang, Ville Polojärvi, … Weimin M. Chen
All-optical generation and ultrafast tuning of non-linear spin Hall current
Jonas Wätzel & Jamal Berakdar
Optical control of spin-polarized photocurrent in topological insulator thin films
Hiroaki Takeno, Shingo Saito & Kohji Mizoguchi
Spin Hall photoconductance in a three-dimensional topological insulator at room temperature
Paul Seifert, Kristina Vaklinova, … Alexander W. Holleitner
Manipulating long-lived topological surface photovoltage in bulk-insulating topological insulators Bi2Se3 and Bi2Te3
Samuel Ciocys, Takahiro Morimoto, … Alessandra Lanzara
Ultrafast laser-driven topological spin textures on a 2D magnet
Mara Strungaru, Mathias Augustin & Elton J. G. Santos
M. Michiardi ORCID: orcid.org/0000-0001-9640-50931,2,3,
F. Boschini ORCID: orcid.org/0000-0003-3503-93891,2,4,
H.-H. Kung1,2,
M. X. Na ORCID: orcid.org/0000-0002-0470-21441,2,
S. K. Y. Dufresne1,2,
A. Currie1,2,
G. Levy ORCID: orcid.org/0000-0003-2980-08051,2,
S. Zhdanovich ORCID: orcid.org/0000-0002-0673-50891,2,
A. K. Mills ORCID: orcid.org/0000-0002-6629-59191,2,
D. J. Jones1,2,
J. L. Mi5,
B. B. Iversen ORCID: orcid.org/0000-0002-4632-10245,
Ph. Hofmann6 &
A. Damascelli ORCID: orcid.org/0000-0001-9895-22261,2
Nature Communications volume 13, Article number: 3096 (2022) Cite this article
An Author Correction to this article was published on 03 August 2022
This article has been updated
In spintronics, the two main approaches to actively control the electrons' spin involve static magnetic or electric fields. An alternative avenue relies on the use of optical fields to generate spin currents, which can bolster spin-device performance, allowing for faster and more efficient logic. To date, research has mainly focused on the optical injection of spin currents through the photogalvanic effect, and little is known about the direct optical control of the intrinsic spin-splitting. To explore the optical manipulation of a material's spin properties, we consider the Rashba effect. Using time- and angle-resolved photoemission spectroscopy (TR-ARPES), we demonstrate that an optical excitation can tune the Rashba-induced spin splitting of a two-dimensional electron gas at the surface of Bi2Se3. We establish that light-induced photovoltage and charge carrier redistribution - which in concert modulate the Rashba spin-orbit coupling strength on a sub-picosecond timescale - can offer an unprecedented platform for achieving optically-driven spin logic devices.
Spintronics has the potential to deliver computational devices that are less volatile, faster, and more energy efficient with respect to their electronic counterparts1. However, the need to control the spin degree of freedom in a fast and efficient manner is challenging, as the field required to flip the electron's spin in magnetic materials is often prohibitively high. Spin-orbit coupling (SOC) effects, such as the Rashba effect, allow the formation of spin-polarized electron states without a magnetic moment, thereby circumventing this limitation. In particular, the Rashba effect manifests as a broken spin degeneracy at semiconductor interfaces, resulting in quasi-particle bands of opposite spin texture that are offset in momentum2,3. The Rashba effect has long been a staple in the field of spintronics owing to its superior tunability, which allows the observation of fully spin-dependent phenomena, such as the spin-Hall effect, spin-charge conversion, and spin-torque in semiconductor devices4,5. An example of a Rashba-split quasi-free electron state with effective mass m* is shown in Fig. 1a. To the first order, its dispersion relation is given by:
$$E=\frac{{\hslash }^{2}{k}^{2}}{2{m}^{* }}\pm {\alpha }_{R}k.$$
Here, the parameter αR is the strength of the Rashba SOC (RSOC) in the system, and it depends on the atomic SOC as well as the electric field perpendicular to the surface (E⊥). Experimentally, αR can be extracted from the detailed dispersion of the spin-split subbands: the energy splitting of the subbands is given by ΔER = 2αRk, and can be seen as a momentum-dependent Zeeman splitting caused by the pseudo-magnetic field – or Rashba field – BR ∝ k × E⊥; correspondingly, the momentum splitting is given by ΔkR = 2αRm*/ℏ2.
Fig. 1: Rashba spin-orbit coupling in two-dimensional electron gas.
a Rashba spin-orbit-coupling (RSOC) splits a free electron state into two subbands carrying opposite spin texture (red and blue). The splitting of the free electron state in both energy (ΔER) and momentum (ΔkR) is proportional to the RSOC strength αR, which is tunable with an electric field E⊥. This Rashba splitting locks the electron's spin to its momentum. b Fundamental design of a spin field-effect transistor (spinFET) in which spin-polarized electrons are injected from a source into a Rashba 2DEG and collected with a ferromagnetic drain. Due to the momentum-dependent splitting of Rashba 2DEGs, charges traversing from source to drain feel an effective magnetic field, BR, proportional to αR, perpendicular to their direction of motion, causing their spin to precess. The spin polarization of carriers changes by the angle ΔΘ = ΔkR L, where L is the length of the 2DEG. Modulating ΔkR—conventionally via an electric field—switches the spinFET between a state of high {0} and low {1} resistance.
We illustrate the inner workings of these parameters with the paradigmatic example of the spin field-effect transistor (spinFET), depicted in Fig. 1b. In the pioneering concept of Datta and Das6, a Rashba-split two-dimensional electron gas (2DEG) in a channel of length L is sandwiched between two spin-polarized leads. As electrons transit the 2DEG in the direction perpendicular to the Rashba field BR, their spin precesses, acquiring a phase ΔΘ = ΔkRL (assuming the chemical potential lies above the bands' degeneracy point). Switching between the 0/1 logic operation - corresponding to the low/high resistance state in the device—is achieved by tuning ΔkR such that the electron spin at L aligns to that of the drain lead. As ΔkR is proportional to αR, the operation of such a device relies primarily on the possibility to tune the RSOC, typically realized by gating the 2DEG7.
In spintronic devices such as the spinFET, the prospect to replace the gate with an optical field prompts the development of even faster and more efficient hybrid opto-spintronics. To this end, previous works have demonstrated the generation of spin-polarized currents in Rashba and topological states through the photogalvanic effect, as well as the ultrafast switching of spin orientation in antiferromagnets8,9,10,11,12,13,14, however little is known about the direct optical control of the intrinsic spin splitting15,16. Here, we show that light can change the RSOC strength, effectively manipulating the Rashba spin-transport properties on an engineered 2DEG. The proposed mechanism is as follows: in the presence of a band-bending surface potential, an above-gap optical excitation drives a charge redistribution along the axis perpendicular to the surface. This charge redistribution creates an ultrafast photovoltage, which then reliably alters the RSOC strength (αR) of the 2DEG system on a sub-picosecond timescale. We employ time- and angle-resolved photoemission spectroscopy (TR-ARPES) to track the evolution of the RSOC strength through the dispersion of the Rashba 2DEGs. By directly measuring ΔkR and ΔER as a function of pump-probe delay, we unambiguously extract the evolution of αR.
Among the materials that can host Rashba-split 2DEGs, bismuth-based topological insulators (TI) are an ideal platform: 2DEGs can be induced on the surface of TIs by applying a positive surface bias or chemical gating17,18,19,20. The combination of the strong atomic SOC in TIs with surface gating generates a substantial Rashba effect in the 2DEGs, allowing one to finely resolve the spin splitting. In an ideal TI, only the topological surface state (TSS)—recognizable by its linear dispersion across the bandgap—crosses the Fermi level (EF), and all charge carriers belong to the TSS21. As represented in Fig. 2a, the application of a sufficient positive bias at the surface induces a strong band bending, leading to the creation of 2DEGs in the form of surface confined quantum well states (QWSs). While the TSS wavefunction extends only within a few layers from the surface and does not depend on the shape of the surface potential, the wavefunction of the QWS does, and extends comparatively deeper into the bulk18. The difference in spatial extent between the TSS and QWS wavefunctions allows us to extract the behavior specific to QWSs, as opposed to the behavior of surface states in general.
Fig. 2: TR-ARPES of surface-gated topological insulators.
a Representation of the surface and bulk electronic structure in a surface-gated topological insulator as a function of momentum, energy, and distance from the surface (the side view displays the momentum integrated projection of the band structure, where CB and VB are the conduction and valence bands). Two-dimensional electron gases (2DEGs) taking the form of spatially confined quantum well states (QWSs) are created by a sufficiently large positive bias applied to the surface. The dispersion, Rashba-splitting, and spatial extent of the 2DEGs depend on the detailed shape of the band bending. Here, the band bending pushes the two lowest QWSs (blue and green) below the Fermi energy. b TR-ARPES experiment on p-type Bi2Se3; the cleaved sample is gated in situ by alkali atom deposition. A near-infrared (1.55 eV) "pump" pulse perturbs the system, and a UV (6.2 eV) pulse is used to probe the electronic structure by ARPES. The time delay (Δt) between pump and probe pulses is varied to resolve the electron dynamics. c Temporal evolution of the QWSs in p-doped Bi2Se3 plotted relative to the electron quasi-Fermi level EFn. The left panel shows the ARPES spectra of all surface states before pump arrival (–100 ps); in the center panel photoemission intensity integrated around the Brillouin zone center (black dashed lines) is shown as a function of time (pump and probe are overlapped at time zero, red dashed line). We observe that a second QWS emerges after the pump excitation; the right panel shows the dispersion at 500 ps, characterized by two partially populated QWSs.
Our experimental approach is depicted in Fig. 2b. We choose p-doped Bi2Se3 to host QWSs, as the hole doping provides a lower bulk conductivity in this material. The 2DEGs are prepared by depositing a controlled amount of alkali atoms on the surface, leading to a population of conduction band-derived states that are spin-split by the Rashba effect. Increasing the concentration of deposited atoms is analogous to raising the surface bias, which introduces a higher surface charge density and stronger band bending. The system is then optically excited with a near-infrared (1.55 eV) "pump" pulse and its response is probed by photoemission using a UV (6.2 eV) pulse at variable time delay Δt22,23. The result of such a TR-ARPES experiment is summarized in Fig. 2c over a long range of delays. The left and right panels show the ARPES spectra at negative delay (–100 ps), and 500 ps after the pump arrival, respectively. The central panel presents the evolution of the states at the Brillouin zone center (black dashed lines in the left panel). Before the pump arrival (–100 ps), the system is in equilibrium; the Fermi level is crossed by the linear topological surface state (TSS) and a single parabolic band, nominally the first quantum well state (QWS1). Here, the Rashba-splitting is just barely discernible, owing to the moderate chemical gating. At zero-delay, electrons are optically excited into unoccupied states and subsequently decay into a quasi-thermalized state24,25. Remarkably, we see that QWS1 is pushed to lower energies after the excitation, and a second band becomes populated. This second band (shown also in the spectrum at 500 ps) is in fact the second quantum well state (QWS2), which emerges following an increase in surface charge density. It is worth noting that the aforementioned photovoltage induced by the pump pulse also affect the kinetic energy of photoemitted electrons26,27,28. This manifests as a rigid shift of the ARPES spectra that can be accounted for by a simple subtraction; henceforth, we refer all energy scales to the electron quasi-Fermi level EFn, extracted by fitting a Fermi-Dirac distribution to the photoemission intensity around the TSS Fermi vector (details can be found in the supplementary information).
In pursuance of determining the impact of the optical excitation on the Rashba effect, we perform TR-ARPES on a sample with a higher concentration of deposited alkali atoms, so that energy and momentum splittings are better distinguished. The results of this experiment are shown in Fig. 3. In panel a, the dispersion is shown for three pump-probe delays (Δt = –0.5, 0, and 8 ps). We observe that both QWSs (parabolic bands) are populated before pump arrival, with energy minima at –114 and –13 meV, and QWS1 exhibits a visible and strong Rashba splitting. Differential ARPES maps [I(k, E, t) − I(k, E, − 0.5 ps)] of the 0 and 8 ps delays are also shown, highlighting the pump-induced modification of the QWSs. At time zero, the optical excitation creates an electron population (depletion) above (below) EFn, but shows no appreciable change in dispersion. However, at 8 ps, while the TSS shows no significant change, both QWSs shift downwards in response to an increase in surface charge, similarly to what was reported in Fig. 2(c). This is further emphasized in Fig. 3b where the time dependent energy shifts of the QWSs at Brillouin zone center are displayed. For both QWSs, the energy minimum shows a fluctuation at short timescales before eventually settling to lower energy. We fit the curves in Fig. 3b with a phenomenological model that includes two exponentially decaying processes, shown in purple and cyan respectively (note that the latter curve appears flat because of a long decay time). The first process acts to increase the QWSs' energy, peaking at approximately 1.5 ps after the pump excitation, and decaying within 3 ps. The dynamics of this components follows the same temporal evolution of the electronic temperature in the system (shown in Supplementary Information), and the timescales are characteristic of the optically-driven electron population above the Fermi level in TIs24; therefore, we attribute this process to an effect caused by the presence of hot carriers (HC) close to EFn. The effect of HC on the surface potential can be extremely complex, but it is likely that, the mobile hot carriers further screen the built-in electric field, causing the QWSs' to shift to higher energy. The second and more interesting process is a long-lasting shift of the QWSs to lower energy, that emerges as a consequence of an increase in the surface electron population and variation of the electrostatic environment. This process—as we will show in detail—is given by a photovoltage (PV) effect, and it is the central mechanism of this work. The effect of the PV arises within a few hundred femtoseconds and alters the energy and density of the QWSs over hundreds of picoseconds.
Fig. 3: Ultrafast response of Rashba QWSs to optical perturbation.
a ARPES dispersion at negative time delay (before pump arrival), at time zero (pump and probe fully overlapped), and at 8 ps. The latter two are accompanied by differential spectra obtained by subtracting the spectrum acquired at around –0.5 ps; the blue (red) color is indicative of a pump-induced decrease (increase) of photoemission intensity. b Temporal evolution of the energy minimum for QWS1 and QWS2 extracted from fitting the ARPES data at k∥ = 0. The fit to the experimental data (solid black line) stems from two contributions, each consisting of a finite rise-time step function and an exponential decay. The positive contribution (purple) is defined as a hot-carriers (HC) driven process, which is short-lived; the negative contribution (cyan) is the result of a photovoltage (PV) effect. c Momentum distribution curves (MDC) profiles across the right branch of QWS1 (horizontal dashed line in (a)) at EFn at equilibrium (purple) and after 8 ps (blue) relative to the Fermi wave-vector of the inner branch kF1; the solid lines are Voigt fits to the data. The momentum splitting ΔkR is the distance between two peaks of the same MDC, and it is dynamically reduced with the pump excitation from (26 ± 0.5) × 10−3 (at Δt < 0) to (22.5 ± 0.5) × 10−3 Å−1 (at Δt = 8 ps). d Energy distribution curves (EDC) profiles across the inner and outer branch of QWS1 at k∥ = 0.05 Å−1 (vertical dashed line in a) before pump arrival (purple) and after 8 ps (blue). The optical excitation also induces the reduction of the energy splitting. EDC and MDC profiles in (c and d) have been deconvolved by the energy resolution via Lucy-Richardson algorithm (Ref. 47) for better clarity. Both profiles are fitted using Voigt functions (solid lines). e Temporal evolution of the RSOC strength αR in QWS1 calculated from Eq. (1) and extracted by fitting the momentum (orange) and energy (green) splitting at several time delay; αR is reduced by about 0.1 eVÅ at 8 ps with respect to equilibrium. The values of ΔkR at EFn are explicitly plotted against the left axes. At 0 < Δt < 2 the signal is too low to convey reliable physical significance. All the values of energy and momentum splitting are obtained by fitting the raw data, and error bars are evaluated from statistical distribution within 95% confidence.
As the PV alters the electrostatic environment at the surface, we expect it will have an impact on the Rashba effect as well. For a quantitative look at the momentum splitting, we plot in Fig. 3c the momentum distribution curves (MDCs) of QWS1 at EFn, in equilibrium (before the pump arrival) and 8 ps after the excitation. The MDCs span the two spin-polarized bands on the right-hand side of QWS1 (see red dashed line in Fig. 3a) and are referenced to the Fermi momentum of the inner branch (kF1). The MDC peak locations are indicated by dashed lines; in comparing the equilibrium (purple) and post-excitation (blue) MDCs, we observe that the momentum splitting of the carriers is reduced from (26.0 ± 0.5) × 10−3 to (22.5 ± 0.5) × 10−3 Å−1. A similar result is observed for the energy splitting: in Fig. 3d, we plot the energy distribution curves (EDCs), for the same two delays, along the cut shown in Fig. 3a; we find that, whereas the outer branch of QWS1 maintains its position, the inner branch moves to lower energy, leading to a reduction of the energy splitting ΔER by approximately 13 meV. The additional shoulder observed in the EDC at 8 ps arises from QWS2, which also moves to significantly lower energy. The simultaneous reduction of both ΔkR and ΔER is a clear indication of an optically driven change of the Rashba spin-orbit coupling strength αR, and excludes modifications of the electron dispersion and effective mass as relevant contributions.
The full temporal dynamics of the Rashba effect in QWS1 under optical excitation is given in Fig. 3e, where the splitting in momentum ΔkR (in orange), as well as the RSOC strength obtained following Eq. (1), are shown on the left and right y-axis, respectively. We observe that ΔkR decreases immediately after the excitation, and—after less than 3 ps—is effectively reduced to a seemingly constant value. The change in the RSOC strength is about 15%, decreasing from a value of 0.76 ± 0.02 to 0.66 ± 0.02 eVÅ. Similar values of αR are obtained by performing an analogous analysis on ΔER(k), plotted in green; here, the MDC and EDC analysis between 0 and 2 ps could not be performed due to the presence of a highly non-thermal electronic distribution. Our data outline a scenario where an opportune optical pulse changes the RSOC strength, in a manner similar to a static electric field, on a picosecond time-scale.
The observation of an increased surface electron density in concert with a decrease in RSOC is, however, nontrivial, as the two quantities typically increase/decrease correspondingly. Thus, a satisfactory explanation for the observed effect of the pump excitation, requires one to consider the detailed variation of the surface electric field in relation to the spatial electron distribution. To this end, we build a model to capture the salient aspects of the experimental results. We begin by calculating the band bending of the system at equilibrium, which can be described by a one-dimensional model in the out of plane direction x. The potential profile V(x) is calculated by solving the Poisson equation within a modified Thomas-Fermi approximation29,30. The binding energy and wavefunction of the QWSs is computed a posteriori by solving the Schrödinger equation within the calculated V(x). For our simulation, all material-specific parameters for the calculation are taken from Refs. 31,32,33,34, and the surface potential V0 is determined empirically by the shift of the TSS Dirac point induced by chemical gating (see Supplementary Information).
We present the calculated equilibrium potential and QWSs' energy minima in Fig. 4a. The space charge region (SCR) spans more than 30 nm, and QWS1 and QWS2 are partially populated, replicating the experimental observations of Fig. 3. Following an optical excitation across the band gap, the generated electron-hole pairs within the SCR are swept apart by the electric field ESCR, which pushes the negative charges towards the surface and the positive charges towards the bulk35. The electrons and holes become spatially separated over tens of nanometers, giving rise to a long-lasting photovoltage field EPV of the opposite sign to ESCR. This effectively softens the band bending, pushing the surface potential V0 to less negative values, as shown in Fig. 4b. The new shallower surface potential drives both the QWSs and the EFn to higher energy. To accommodate the surplus of surface electric charge, the EFn shifts further upwards, resulting in the QWSs moving to more negative energies when plotted with respect to EFn, as seen in the TR-ARPES data (details in Supplementary Information). It must be noted that, as small surface state-induced band bending is a common feature in semiconductors, surface PV has previously been observed in pristine TIs36,37,38. However, while this is technologically relevant for TIs—because it leads to spin-polarized diffusion currents—the TSS only undergoes a rigid shift in energy under the surface PV. The 2DEGs, on the other hand, are much more sensitive to the shape and magnitude of the confining potential V(x), and by extension, the PV.
Fig. 4: Simulations of the band bending and quantum-well state dynamics.
a Band-bending profile for a surface biased p-doped Bi2Se3; the surface boundary condition V0 is given by the shift of the Dirac point after potassium evaporation, and the bending is calculated solving the Poisson equation within a modified Thomas-Fermi approximation. The energy minima of QWS1 and QWS2 are solutions of the Schrödinger equation within the confining potential, and well reproduce experimental observations. At zero-delay, an optical excitation creates free electron-hole pairs that are swept apart by the built-in electric field of the space-charge region, ESCR. b After 8 ps the charge separation between electrons and holes generates a photovoltage whose electric field (EPV) opposes ESCR and softens the band bending, causing the QWSs to shift upwards. Concomitantly, the increasing surface electron density shifts the EFn upwards. c The calculated energy minima of QWS1 and QWS2 with respect to EFn are obtained from a time-dependent calculation of the band bending and QWSs' energy levels; the observed decrease at time zero is given by the change of the electrostatic environment and carrier redistribution across SCR. d The simulated Rashba momentum splitting (ΔkR) at the Fermi level for QWS1; the Rashba strength αR is given on the right axes. The photovoltage-induced reduction of the surface electric field in the system is responsible for the decrease of αR and ΔkR. Inset: Simulated spectral functions of QWS1 before and 8 ps after the pump excitation, constructed from the results of the dynamical simulation. The simulated data confirms the two main observation from the TR-ARPES experiment, an increase in electron population accompanied by a decrease in the spin-splitting.
The full temporal dynamics of the QWSs has been simulated by introducing a pair of effective photo-charges in the system, which approximates the collective charge motion via a center of mass approach39. At each time step, the charge distribution, electric field and EFn are reevaluated. The time evolution of the QWSs' energy with respect to the EFn—shown in Fig. 4c—is in good qualitative agreement with experimental data. Both QWS1 and QWS2 fall to more negative values almost instantaneously at positive delays. Consistent with experimental observations, the energy shift of QWS2 is larger than that of QWS1. This is a manifestation of the shallower confining potential, which allows for a smaller energy difference between consecutive QWSs. Lastly, the RSOC strength and, by extension, Rashba-splitting for QWS1 are calculated at each step from Eq. (1). The evolution of ΔkR is also consistent with experimental findings (Fig. 4d), confirming that the reduction in the Rashba strength is due to the PV-induced softening of the surface potential.
With the exception of the fluctuation at early delays given by the presence of hot carriers, our simple model succeeds in reproducing all salient features of the experimental data, proving that a photovoltage is responsible for the observed behavior. The small quantitative deviations between the simulated and measured PV effect can be attributed to the model simplicity and approximations, such as the omission of changes in the screening and dielectric properties of the material induced by the PV. More complex simulations of the PV effect such as those recently developed in ref. 40 might further improve quantitative accuracy. Nevertheless, the model captures the fundamental observations of the experiment and provides a clear explanation of the underlying mechanisms. The simulated spectral function in the inset of Fig. 4d showcases the calculated dispersion of QWS1 at the two representative time delays, highlighting the faithful reproduction of the TR-ARPES data. Finally, the PV model can also account for the long timescale (950 ps) needed to recover equilibrium conditions (Fig. 2c), as the spatial separation of electrons and holes in the SCR drastically reduces the recombination rate. The same timescale is expected for the Rashba splitting to recover its initial value as it is modified by the same effect. Since the return to equilibrium is ultimately determined by the charge carriers' diffusion from the illuminated area, the lifetime of the PV effect could in principle be tuned by varying the size of the pump beam.
In conclusion, we demonstrated that light can be used to control the Rashba spin splitting and, by extension, the spin transport properties in semiconductor devices. Specifically, an optically driven photovoltage can be used to manipulate the surface band dispersion and electron distribution at ultrafast (picosecond) timescales. The specific application of this technique on 2DEGs to tune the Rashba-splitting on a picosecond timescale is an important benchmark for the development of optically controlled spin devices. While the implementation of this effect in a working device is by no means trivial, it is informative to contextualize our finding within the framework of the spinFET discussed in Fig. 1b: the observed variation of ΔkR in QWS1, about 3.5 × 10−3 Å−1, translates into a difference in spin precession angle of π after < 100 nm travel distance, making this effect theoretically appreciable in devices of such length, where ballistic transport can be achieved. It is important to emphasize that, while this study is performed on a TI platform, the underlying physics does not require topological non-triviality and is universal to semiconductors. The effect of the PV on the Rashba strength can be enhanced producing 2DEGs with higher effective masses, while surface gating and pump fluence can be utilized as tuning parameters (see Supplementary Information).
Samples of Ca-doped Bi2Se3 are synthesized as described in Ref. 41. Here, Ca acts as acceptor atom, positively doping Bi2Se3 which is normally found to be n-doped due to Se vacancies. The samples are cleaved in vacuum at pressures lower than 7 ⋅ 10−11 mbar, and kept at a temperature of 15 K during evaporation and measurements. 2DEGs are induced by evaporating K (Fig. 2) or Li (Fig. 3) in situ on the cleaved sample surface. The TR-ARPES experiments are performed at QMI's UBC-Moore Center for Ultrafast Quantum Matter42,43, with 1.55 and 6.2 eV photons for pump and probe, respectively. Both pump and probe have linear horizontal polarization (parallel to the analyzer slit direction). The pump (probe) beam radius is 150 μm (100 μm), and the pump fluence is 40 and 80 μJ/cm2 for experiments represented in Figs. 2 and 3, respectively. Pump and probe were collinear with an incidence angle of 45 degrees with respect to the sample normal. Energy and temporal resolution are 17 meV and 250 fs, respectively, as determined by the width of the gold Fermi edge and of the combined pump-probe dynamics of the pump induced direct population peak in Bi2Se324. For the band bending model, the Poisson equation was solved numerically employing a modified Thomas-Fermi approximation, which intrinsically accounts for modulation of the charge density due to confinement-induced quantization, without the need for numerically heavy self-consistent calculations30,44. The Schrödinger equation was solved numerically with the Numerov algorithm45. The code is available at Ref. 46.
The authors declare that the main data supporting the findings of this study are available within the paper and its Supplementary Information files. The raw ARPES data generated in this study have been deposited in the Zenodo database under the digital object identifier https://doi.org/10.5281/zenodo.6471678. Extra data are available from the corresponding authors upon request.
A Correction to this paper has been published: https://doi.org/10.1038/s41467-022-32345-6
Oestreich, M. et al. Spintronics: Spin electronics and optoelectronics in semiconductors. Adv. Solid State Phys. 41, 173–186 (2001).
Bychkov, Y. A. & Rashba, E. I. Properties of a 2D electron gas with a lifted spectrum degeneracy. J. Exp. Theor. Phys. Lett. 39, 78–81 (1984).
Rashba, E. Spin-orbit coupling and spin transport. Phys. E: Low.-Dimensional Syst. Nanostruct. 34, 31–35 (2006).
Bercioux, D. & Lucignano, P. Quantum transport in Rashba spin-orbit materials: a review. Rep. Prog. Phys. 78, 106001 (2015).
Article ADS PubMed Google Scholar
Soumyanarayanan, A., Reyren, N., Fert, A. & Panagopoulos, C. Emergent phenomena induced by spin-orbit coupling at surfaces and interfaces. Nature 539, 509–517 (2016).
Datta, S. & Das, B. Electronic analog of the electro-optic modulator. Appl. Phys. Lett. 56, 665–667 (1990).
Chuang, P. et al. All-electric all-semiconductor spin field-effect transistors. Nat. Nanotechnol. 10, 35–39 (2015).
Article ADS CAS PubMed Google Scholar
Yuan, H. et al. Generation and electric control of spin-valley-coupled circular photogalvanic current in WSe2. Nat. Nanotechnol. 9, 851–857 (2014).
Liu, X. et al. Circular photogalvanic spectroscopy of Rashba splitting in 2D hybrid organic-inorganic perovskite multiple quantum wells. Nat. Commun. 11, 323 (2020).
Article ADS CAS PubMed PubMed Central Google Scholar
Kimel, A. V., Kirilyuk, A., Tsvetkov, A., Pisarev, R. V. & Rasing, T. Laser-induced ultrafast spin reorientation in the antiferromagnet TmFeO3. Nature 429, 850–853 (2004).
Nova, T. F. et al. An effective magnetic field from optically driven phonons. Nat. Phys. 13, 132–136 (2017).
McIver, J. W., Hsieh, D., Steinberg, H., Jarillo-Herrero, P. & Gedik, N. Control over topological insulator photocurrents with light polarization. Nat. Nanotechnol. 7, 96–100 (2012).
Wang, J., Zhu, B.-F. & Liu, R.-B. Proposal for direct measurement of a pure spin current by a polarized light beam. Phys. Rev. Lett. 100, 086603 (2007).
Zhou, B. & Shen, S.-Q. Deduction of pure spin current from the linear and circular spin photogalvanic effect in semiconductor quantum wells. Phys. Rev. B 75, 045339 (2007).
Sheremet, A. S., Kibis, O. V., Kavokin, A. V. & Shelykh, I. A. Datta-and-Das spin transistor controlled by a high-frequency electromagnetic field. Phys. Rev. B 93, 165307 (2016).
Cheng, L. et al. Optical manipulation of Rashba spin-orbit coupling at SrTiO3-based oxide interfaces. Nano Lett. 17, 6534–6539 (2017).
Bianchi, M. et al. Coexistence of the topological state and a two-dimensional electron gas on the surface of Bi2Se3. Nat. Commun. 1, 128 (2010).
Bahramy, M. et al. Emergent quantum confinement at topological insulator surfaces. Nat. Commun. 3, 1159 (2012).
Zhu, Z.-H. et al. Rashba spin-splitting control at the surface of the topological insulator Bi2Se3. Phys. Rev. Lett. 107, 186405 (2011).
Michiardi, M. et al. Strongly anisotropic spin-orbit splitting in a two-dimensional electron gas. Phys. Rev. B 91, 035445 (2015).
Barreto, L. et al. Surface-dominated transport on a bulk topological insulator. Nano Lett. 14, 3755–3760 (2014).
Perfetti, L. et al. Time evolution of the electronic structure of 1T-TaS2 through the insulator-metal transition. Phys. Rev. Lett. 97, 067402 (2006).
Freericks, J. K., Krishnamurthy, H. R. & Pruschke, T. Theoretical description of time-resolved photoemission spectroscopy: Application to pump-probe experiments. Phys. Rev. Lett. 102, 136401 (2009).
Sobota, J. et al. Ultrafast electron dynamics in the topological insulator Bi2Se3 studied by time-resolved photoemission spectroscopy. J. Electron Spectrosc. Relat. Phenom. 195, 249–257 (2014).
Hajlaoui, M. et al. Ultrafast surface carrier dynamics in the topological insulator Bi2Te3. Nano Lett. 12, 3532–3536 (2012).
Ciocys, S., Morimoto, T., Moore, J. E. & Lanzara, A. Tracking surface photovoltage dipole geometry in Bi2Se3 with time-resolved photoemission. J. Stat. Mech.: Theory Exp. 2019, 104008 (2019).
Yang, S.-L., Sobota, J. A., Kirchmann, P. S. & Shen, Z.-X. Electron propagation from a photo-excited surface: implications for time-resolved photoemission. Appl. Phys. A 116, 85–90 (2014).
Tanaka, S.-i. Utility and constraint on the use of pump-probe photoelectron spectroscopy for detecting time-resolved surface photovoltage. J. Electron Spectrosc. Relat. Phenom. 185, 152–158 (2012).
Paasch, G. & Übensee, H. A modified local density approximation. electron density in inversion layers. Phys. Status Solidi (b) 113, 165–178 (1982).
King, P. D. C., Veal, T. D. & McConville, C. F. Nonparabolic coupled Poisson-Schrödinger solutions for quantized electron accumulation layers: Band bending, charge profile, and subbands at InN surfaces. Phys. Rev. B 77, 125305 (2008).
Analytis, J. G. et al. Bulk Fermi surface coexistence with Dirac surface state in Bi2Se3: A comparison of photoemission and Shubnikov-de Haas measurements. Phys. Rev. B 81, 205407 (2010).
Martinez, G. et al. Determination of the energy band gap of Bi2Se3. Sci. Rep. 7, 6891 (2017).
Nechaev, I. A. et al. Evidence for a direct band gap in the topological insulator Bi2Se3 from theory and experiment. Phys. Rev. B 87, 121111 (2013).
Gao, Y.-B., He, B., Parker, D., Androulakis, I. & Heremans, J. P. Experimental study of the valence band of Bi2Se3. Phys. Rev. B 90, 125204 (2014).
Kronik, L. & Shapira, Y. Surface photovoltage phenomena: theory, experiment, and applications. Surf. Sci. Rep. 37, 1–206 (1999).
Yoshikawa, T. et al. Bidirectional surface photovoltage on a topological insulator. Phys. Rev. B 100, 165311 (2019).
Sánchez-Barriga, J. et al. Laser-induced persistent photovoltage on the surface of a ternary topological insulator at room temperature. Appl. Phys. Lett. 110, 141605 (2017).
Ciocys, S. et al. Manipulating long-lived topological surface photovoltage in bulk-insulating topological insulators Bi2Se3 and Bi2Te3. npj Quantum Mater. 5, 16 (2020).
Mora-Seró, I., Dittrich, T., Garcia-Belmonte, G. & Bisquert, J. Determination of spatial charge separation of diffusing electrons by transient photovoltage measurements. J. Appl. Phys. 100, 103705 (2006).
Kremer, G. et al. Ultrafast dynamics of the surface photovoltage in potassium-doped black phosphorus. Phys. Rev. B 104, 035125 (2021).
Bianchi, M. et al. Robust surface doping of Bi2Se3 by rubidium intercalation. ACS Nano 6, 7009–7015 (2012).
Mills, A. K. et al. Cavity-enhanced high harmonic generation for extreme ultraviolet time- and angle-resolved photoemission spectroscopy. Rev. Sci. Instrum. 90, 083001 (2019).
Boschini, F. et al. Collapse of superconductivity in cuprates via ultrafast quenching of phase coherence. Nat. Mater. 17, 416–420 (2018).
Trott, S., Trott, M. & Nakov, V. Modified thomas-fermi approximation. a surprisingly good tool for the treatment of semiconductor layer structures including various two-dimensional systems. Phys. Status Solidi B 177, 389–395 (1993).
Vigo-Aguiar, J. & Ramos, H. A variable-step Numerov method for the numerical solution of the Schrödinger equation. J. Math. Chem. 37, 255–262 (2005).
Article MathSciNet CAS MATH Google Scholar
Michiardi, M. P-MTFA-S band bending simulation (2021). https://github.com/okio-mm/P-MTFA-S_band_bending_simulation
Yang, H.-B. et al. Emergence of preformed Cooper pairs from the doped Mott insulating state in Bi2Sr2CaCu2O8+δ. Nature 456, 77–80 (2008).
We would like to thank M. Bianchi and P. D. C. King for the fruitful discussions. This research was undertaken thanks in part to funding from the Max Planck-UBC-UTokyo Center for Quantum Materials and the Canada First Research Excellence Fund, Quantum Materials and Future Technologies Program. This project is also funded by the Gordon and Betty Moore Foundation's EPiQS Initiative, Grant GMBF4779 to A.D. and D.J.J.; the Killam, Alfred P. Sloan, and Natural Sciences and Engineering Research Council of Canada's (NSERC's) Steacie Memorial Fellowships (A.D.); the Alexander von Humboldt Foundation (A.D.); the Canada Research Chairs Program (A.D.); NSERC, Canada Foundation for Innovation (CFI); the Department of National Defence; British Columbia Knowledge Development Fund (BCKDF); the VILLUM FONDEN via the Center of Excellence for Dirac Materials (Grant No. 11744); and the CIFAR Quantum Materials Program.
Quantum Matter Institute, University of British Columbia, Vancouver, BC, V6T 1Z4, Canada
M. Michiardi, F. Boschini, H.-H. Kung, M. X. Na, S. K. Y. Dufresne, A. Currie, G. Levy, S. Zhdanovich, A. K. Mills, D. J. Jones & A. Damascelli
Department of Physics & Astronomy, University of British Columbia, Vancouver, BC, V6T 1Z1, Canada
Max Planck Institute for Chemical Physics of Solids, Dresden, Germany
M. Michiardi
Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, Varennes, QC, J3X 1S2, Canada
F. Boschini
Department of Chemistry, Aarhus University, 8000, Aarhus C, Denmark
J. L. Mi & B. B. Iversen
Department of Physics and Astronomy, Interdisciplinary Nanoscience Center, Aarhus University, 8000, Aarhus C, Denmark
Ph. Hofmann
H.-H. Kung
M. X. Na
S. K. Y. Dufresne
A. Currie
G. Levy
S. Zhdanovich
A. K. Mills
D. J. Jones
J. L. Mi
B. B. Iversen
A. Damascelli
M.M., F.B., H.-H.K., M.-X.N. performed the measurements and data analysis, S.K.Y.D. contributed to the measurements, A.C. contributed to data analysis, G.L., S.Z., A.K.M., D.J.J. provided technical support and instrumentation, J.L.M., B.B.I., and P.H. provided the samples, P.H. and A.D. were responsible for the overall direction, planning, and management of the project. All authors contributed to the manuscript.
Correspondence to M. Michiardi or A. Damascelli.
Nature Communications thanks Vladimir Strocov and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Supplemetary Information
Michiardi, M., Boschini, F., Kung, HH. et al. Optical manipulation of Rashba-split 2-dimensional electron gas. Nat Commun 13, 3096 (2022). https://doi.org/10.1038/s41467-022-30742-5
Rashba-like physics in condensed matter
Gustav Bihlmayer
Paul Noël
Aurélien Manchon
Nature Reviews Physics (2022)
|
CommonCrawl
|
Add and Subtract Polynomials
Determine the degree of polynomials
Evaluate a polynomial function for a given value
Add and subtract polynomial functions
5.1.1 Determine the Degree of Polynomials
We have learned that a term is a constant or the product of a constant and one or more variables. A monomial is an algebraic expression with one term. When it is of the form $ax^{m}$, where $a$ is a constant and $m$ is a whole number, it is called a monomial in one variable. Some examples of monomial in one variable are. Monomials can also have more than one variable such as and $−4a^{2}b^{3}c^{2}$.
A monomial is an algebraic expression with one term.
A monomial in one variable is a term of the form $ax^{m}$, where $a$ is a constant and $m$ is a whole number.
A monomial, or two or more monomials combined by addition or subtraction, is a polynomial. Some polynomials have special names, based on the number of terms. A monomial is a polynomial with exactly one term. A binomial has exactly two terms, and a trinomial has exactly three terms. There are no special names for polynomials with more than three terms.
polynomial—A monomial, or two or more algebraic terms combined by addition or subtraction is a polynomial.
monomial—A polynomial with exactly one term is called a monomial.
binomial—A polynomial with exactly two terms is called a binomial.
trinomial—A polynomial with exactly three terms is called a trinomial.
Here are some examples of polynomials.
Polynomial $y+1$ $4a^{2}-7ab+2b^{2}$ $4x^{4}+x^{3}+8x^{2}-9x+1$
Mononomial $14$ $8y^{2}$ $-9x^{3}y^{5}$ $-13a^{3}b^{2}c$
Binomial $a+7b$ $4x^{2}-y^{2}$ $y^{2}-16$ $3p^{3}q-9p^{2}q$
Trinomial $x^{2}-7x+12$ $9m^{2}+2mn-8n^{2}$ $6x^{4}-k^{3}+8k$ $z^{4}+3z^{2}-1$
Notice that every monomial, binomial, and trinomial is also a polynomial. They are just special members of the "family" of polynomials and so they have special names. We use the words monomial, binomial, and trinomialwhen referring to these special polynomials and just call all the rest polynomials.
The degree of a polynomial and the degree of its terms are determined by the exponents of the variable.
A monomial that has no variable, just a constant, is a special case. The degree of a constant is $0$.
The degree of a term is the sum of the exponents of its variables.
The degree of a constant is $0$.
The degree of a polynomial is the highest degree of all its terms.
Let's see how this works by looking at several polynomials. We'll take it step by step, starting with monomials, and then progressing to polynomials with more terms.
Let's start by looking at a monomial. The monomial $8ab^{2}$ has two variables $a$ and $b$. To find the degree we need to find the sum of the exponents. The variable a doesn't have an exponent written, but remember that means the exponent is $1$. The exponent of $b$ is $2$. The sum of the exponents, $1+2$, is $3$ so the degree is $3$.
Here are some additional examples.
Working with polynomials is easier when you list the terms in descending order of degrees. When a polynomial is written this way, it is said to be in standard form of a polynomial. Get in the habit of writing the term with the highest degree first.
Determine whether each polynomial is a monomial, binomial, trinomial, or other polynomial. Then, find the degree of each polynomial.
$7y^{2}-5y+3$
$-2a^{4}b^{2}$
$3x^{5}-4x^{3}-6x^{2}+x-8$
$2y-8xy^{3}$
$15$
Polynomial Number of terms Type Degree of terms Degree of polynomial
$7y^{2}-5y+3$ $3$ Trinomial $2, 1, 0$ $2$
$-2a^{4}b^{2}$ $1$ Monomial $6$ $6$
$3x^{5}-4x^{3}-6x^{2}+x-8$ $5$ Polynomial $5, 3, 2, 1, 0$ $5$
$2y-8xy^{3}# $2$ Binomial $1, 4$ $4$
$15$ $1$ Monomial $0$ $0$
5.1.2 Add and Subtract Polynomials
We have learned how to simplify expressions by combining like terms. Remember, like terms must have the same variables with the same exponent. Since monomials are terms, adding and subtracting monomials is the same as combining like terms. If the monomials are like terms, we just combine them by adding or subtracting the coefficients.
Add or subtract:
$25y^{2}+15y^{2}$
$16pq^{3}-(-7pq^{3})$
Combine like terms. $40y^{2}$
Combine like terms. $23pq^{3}$
Remember that like terms must have the same variables with the same exponents.
Simplify:
$a^{2}+7b^{2}-6a^{2}$
$u^{2}v+5u^{2}-3v^{2}$
$-5a^{2}+7b^{2}$
There are no like terms to combine. In this case, the polynomial is unchanged. $u^{2}v+5u^{2}-3v^{2}$
We can think of adding and subtracting polynomials as just adding and subtracting a series of monomials. Look for the like terms—those with the same variables and the same exponent. The Commutative Property allows us to rearrange the terms to put like terms together.
Find the sum: $(7y^{2}-2y+9)+(4y^{2}-8y-7)$.
Identify like terms. $\left(\underline{\underline{7y^{2}}}- \underline{2y} +9 \right)+ \left(\underline{\underline{4y^{2}}}-\underline{8y}-7 \right)$
Rewrite without the parentheses, rearranging to get the like terms together. $\underline{\underline{7y^{2}+4y^{2}}} – \underline{2y-8y}+9-7$
Combine like terms. $11y^{2}-10y+2$
Be careful with the signs as you distribute while subtracting the polynomials in the next example.
Find the difference: $(9w^{2}-7w+5)-(2w^{2}-4)$.
$(9w^{2}-7w+5)-(2w^{2}-4)$
Distribute and identify like terms. $\underline{9w^{2}}-7w+5-\underline{2w^{2}}+4$
Rearrange the terms. $\underline{9w^{2}-2w^{2}}-7w+5+4$
Combine like terms. $7w^{2}-7w+9$
To subtract $a$ from $b$, we write it as $b−a$, placing the $b$ first.
Subtract $(p^{2}+10pq-2q^{2})$ from $(p^{2}+q^{2})$.
$(p^{2}+q^{2})-(p^{2}+10pq-2q^{2})$
Distribute. $p^{2}+q^{2}-p^{2}-10pq+2q^{2}$
Rearrange the terms, to put like terms together. $p^{2}-p^{2}-10pq+q^{2}+2q^{2}$
Combine like terms. $-10pq+3q^{2}$
Find the sum: $(u^{2}-6uv+5v^{2})+(3u^{2}+2uv)$.
$(u^{2}-6uv+5v^{2})+(3u^{2}+2uv)$
Distribute. $u^{2}-6uv+5v^{2}+3u^{2}+2uv$
Rearrange the terms to put liker terms together. $u^{2}+3u^{2}-6uv+2uv+5v^{2}$
Combine like terms. $4u^{2}-4uv+5v^{2}$
When we add and subtract more than two polynomials, the process is the same.
Simplify: $(a^{3}-a^{2}b)-(ab^{2}+b^{3})+(a^{2}b+ab^{2})$.
$(a^{3}-a^{2}b)-(ab^{2}+b^{3})+(a^{2}b+ab^{2})$
Distribute. $a^{3}-a^{2}b-ab^{2}-b^{3}+a^{2}b+ab^{2}$
Rearrange to get the like terms together. $a^{3}-a^{2}b+a^{2}b-ab^{2}-+ab^{2}-b^{3}$
Combine like terms. $a^{3}-b^{3}$
5.1.3 Evaluate a Polynomial Function for a Given Value
A polynomial function is a function defined by a polynomial. For example, $f(x)=x^{2}+5x+6$ and $g(x)=3x−4$ are polynomial functions, because $x^{2}+5x+6$ and $3x−4$ are polynomials.
POLYNOMIAL FUNCTION
A polynomial function is a function whose range values are defined by a polynomial.
In Chapter 3, where we first introduced functions, we learned that evaluating a function means to find the value of $f(x)$ for a given value of $x$. To evaluate a polynomial function, we will substitute the given value for the variable and then simplify using the order of operations.
For the function $f(x)=5x^{2}-8x+4$ find:
$f(4)$
$f(-2)$
$f(x)=5x^{2}-8x+4$
To find $f(4)$, substitute $\textcolor{red}{4}$ for $x$. $f(\textcolor{red}{4})=5(\textcolor{red}{4})^{2}-8(\textcolor{red}{4})+4$
Simplify the exponents. $f(4)=5\cdot 16-8(4)+4$
Multiply. $f(4)=80-32+4$
Simplify. $f(4)=52$
To find $f(-2)$, substitute $\textcolor{red}{-2}$ for $x$. $f(\textcolor{red}{-2})=5(\textcolor{red}{-2})^{2}-8(\textcolor{red}{-2})+4$
Simplify the exponents. $f(-2)=5\cdot 4-8(-2)+4$
Multiply. $f(-2)=20+16+4$
Simplify. $f(-2)=40$
To find $f(-2)$, substitute $\textcolor{red}{0}$ for $x$. $f(\textcolor{red}{0})=5(\textcolor{red}{0})^{2}-8(\textcolor{red}{0})+4$
Simplify the exponents. $f(0)=5\cdot 0-8(0)+4$
Multiply. $f(0)=0+0+4$
Simplify. $f(0)=4$
The polynomial functions similar to the one in the next example are used in many fields to determine the height of an object at some time after it is projected into the air. The polynomial in the next function is used specifically for dropping something from $250$ ft.
The polynomial function $h(t)=−16t^{2}+250$ gives the height of a ball $t$ seconds after it is dropped from a $250$-foot tall building. Find the height after $t=2$ seconds.
$h(t)=-16t^{2}+250$
To find $h(2)$, substitute $t=2$ $h(2)=-16(2)^{2}+250$
Simplify. $h(2)=-16 \cdot 4 +250$
Simplify. $h(2)=-64+250$
Simplify. $h(2)=186$
After $2$ seconds, the height of the ball is $186$ feet.
5.1.4 Add and Subtract Polynomial Functions
Just as polynomials can be added and subtracted, polynomial functions can also be added and subtracted.
ADDITION AND SUBTRACTION OF POLYNOMIAL FUNCTIONS
For functions $f(x)$ and $g(x)$,
$(f+g)(x)=f(x)+g(x)$
$(f-g)(x)=f(x)-g(x)$
For functions $f(x)=3x^{2}-5x+7$ and $g(x)=x^{2}-4x-3$, find:
$(f+g)(x)$
$(f+g)(3)$
$(f-g)(x)$
$(f-g)(-2)$
Substitute $f(x)=\textcolor{red}{3x^{2}-5x+7}$ and $g(x)=\textcolor{blue}{x^{2}-4x-3}$. $(f+g)(x)=(\textcolor{red}{3x^{2}-5x+7})+(\textcolor{blue}{x^{2}-4x-3})$
Rewrite without the parentheses. $(f+g)(x)=3x^{2}-5x+7+x^{2}-4x-3$
Put like terms together. $(f+g)(x)=3x^{2}+x^{2}-5x-4x+7-3$
Combine like terms. $(f+g)(x)=4x^{2}-9x+4$
In part 1 we found $(f+g)(x)$ and now are asked to find $(f+g)(3)$.
$(f+g)(x)=4x^{2}-9x+4$
To find $(f+g)(3)$, substitute $x=3$. $(f+g)(3)=4(3)^{2}-9(3)+4$
$(f+g)(3)=4(9)-9(3)+4$
$(f+g)(3)=36-27+4$
$(f+g)(3)=13$
Notice that we could have found $(f+g)(3)$ by first finding the values of $f(3)$ and $g(3)$ separately and then adding the results.
Find $f(3)$. $\begin{align*} f(x)&=3x^{2}-5x+7 \\ f(\textcolor{red}{3})&=3(\textcolor{red}{3})^{2}-5(\textcolor{red}{3})+7 \\ f(3)&=3(9)-5(3)+7 \\ f(3)&=19 \end{align*}$
Find $g(3)$. $\begin{align*} g(x)&=x^{2}-4x-3 \\ g(\textcolor{red}{3})&=(\textcolor{red}{3})^{2}-4(\textcolor{red}{3})-3 \\ g(3)&=9-4(3)-3 \\ g(3)&=-6 \end{align*}$
Find $(f+g)(3)$ $(f+g)(x)=f(x)+g(x)$
$(f+g)(3)=f(3)+g(3)$
Substitute $f(3)=\textcolor{red}{19}$ and $g(3)=\textcolor{blue}{-6}$. $(f+g)(3)=\textcolor{red}{19}+(\textcolor{blue}{-6})$
Substitute $f(x)=\textcolor{red}{3x^{2}-5x+7}$ and $g(x)=\textcolor{blue}{x^{2}-4x-3}$. $(f-g)(x)=(\textcolor{red}{3x^{2}-5x+7})-(\textcolor{blue}{x^{2}-4x-3})$
Rewrite without the parentheses. $(f-g)(x)=3x^{2}-5x+7-x^{2}+4x+3$
Put like terms together. $(f-g)(x)=3x^{2}-x^{2}-5x+4x+7+3$
Combine like terms. $(f-g)(x)=2x^{2}-x+10$
$(f-g)(x)=2x^{2}-x+10$
To find $(f-g)(-2)$, substitute $x=-2$. $(f-g)(-2)=2(\textcolor{red}{-2})^{2}-(\textcolor{red}{-2})+10$
$(f-g)(-2)=2\cdot4-(-2)+10$
$(f-g)(-2)=8+2+10$
$(f-g)(-2)=20$
Marecek, L., & Mathis, A. H. (2020). Add and Subtract Polynomials. In Intermediate Algebra 2e. OpenStax. https://openstax.org/books/intermediate-algebra-2e/pages/5-1-add-and-subtract-polynomials. License: CC BY 4.0. Access for free at https://openstax.org/books/intermediate-algebra-2e/pages/1-introduction
|
CommonCrawl
|
Assessing horizontal equity in health care utilization in Iran: a decomposition analysis
Farideh Mostafavi1,
Bakhtiar Piroozi1,
Paola Mosquera2,
Reza Majdzadeh3 &
Ghobad Moradi ORCID: orcid.org/0000-0003-2612-65281,4
Despite the goal of horizontal equity in Iran, little is known about it. This study aimed i) to assess socioeconomic inequality and horizontal inequity in the healthcare utilization; and ii) to explore the contribution of need and non-need variables to the observed inequalities.
This study used national cross sectional dataset from Utilization of Health Services survey in 2015. Concentration Index (C), Concentration Curve (CC) and Horizontal Inequity index (HI) were calculated to measure inequality in inpatient and outpatient health care utilization. Decomposition analysis was used to determine the contribution of need and non-need factors to the observed inequalities.
Results showed the pro-poor inpatient services in both rural (C = − 0.079) and non-rural areas (C = − 0.096) and the pro-rich outpatient services in both rural (C = 0.038) and non-rural (C = 0.007). After controlling for need factors, HI was positive and significant for outpatient services in rural (HI = 0.039) and non-rural (HI = 0.008), indicating that for given need, the better off especially in rural make greater use of outpatient services. The HI was pro-poor for inpatient services in both rural (HI = − 0.068) and non-rural (HI = -0.090), was significant only in non-rural area. Non-need factors were the most important contributors to explain inequalities in the decomposition analysis.
Disentangle the different contribution of determinants, as well as greater HI in rural areas for outpatient and in non-rural areas for inpatient services, provide helpful information for decision makers to re-design policy and re-distribute resource allocation in order to reduce the socioeconomic gradient in health care utilization.
Equitable access and utilization of health services is one of the goals, tasks and challenges of governments [1]. Universal health coverage (UHC) is an important step toward achieving equity in the utilization of health services by all people [2, 3]. Typically, in high-income countries poorer individuals utilize more health care services due to need factors (i.e. lower health status). Conversely in low-income countries, poorer individuals are less likely to use services due to non-need factors (i.e. low income and lack of health insurance) and despite their greater need [4]. The principle of Universal health coverage (UHC) states that individuals with equal needs should utilize equal healthcare services [5, 6]. Therefore, as poorer individuals often face lower health status and greater need it is expected that they utilize more health services. but also to support the fullfilment of the UHC Monitoring horizontal equity is deemed necesary not only to provide a comprehensive picture of equity in health care [7, 8].
Iran, like many other countries, has set UHC and health equity as some of its main goals [9]. One of the first effort was the establishment of a public health care (PHC) network fully financed by the government in 1985 [10,11,12]. In 1989 The Social Security Act was enacted and the Social Security Organization was appointed as the institution responsible to implement and provide health care services for workers and persons covered by the Labor and Social Security Law. In addition, the Imam Khomeini Relief Foundation, was created by the government as a subsidy fund to cover inpatient services for poor and low-income individuals [10]. In 2005 the Family Physician Program and a Universal Health Insurance scheme were implemented with full financial support for rural areas, and partial financial support for urban residents [12, 13]. Lately, in 2014 the implementation of the Health Transformation Plan was an important step taken by the government (more focus on inpatient services in public sector) to achieve public health coverage through reducing the amount of out of pocket payment [14,15,16].
Little is known about equality in health care utilization in Iran, and equity was not studied nationally or methodologically. Previous studies in Iran have shown pro-rich inequalities in healthcare utilization, in which Sex, place of residence and health insurance coverage have been reported as the main predictors of observed inequalities [17,18,19]. On the same data of this study, a study using logistic regression models to analyze association between social variables with the self-reported need and usage of services in people who reported need; poor people reported more both of outpatient and inpatient needs than rich people, as well as usage of inpatient services was more in rich people and was not significant for outpatient services [20].
Equity in health care in Iran may therefore require being re-assessed and permanently monitored. To contribute and update the knowledge about horizontal equity in health care utilization in Iran, the present study aimed: i) to assess socioeconomic inequality and horizontal inequity in the utilization of health services; and ii) to explore the contribution of need and non-need variables to the observed inequalities.
This study used national cross sectional dataset from Utilization of Health Services (UHS) survey in 2015. UHS was conducted by National Institute of Health Research under the supervision of the Statistical Centre of Iran and in coordination with the relevant departments in the Ministry of Health and Medical Education.
The target population of this study was a set of ordinary resident households (ordinary households are made up of several people who live together in a fixed residence, have the same expenditure and usually eat together) and group households, i.e. a group of people who all or most of them, due to their special circumstances, mainly have a common feature, have chosen a joint residence for their living and jointly manage the affairs of life in that residence. These were selected according to the latest general population and housing census of Iran in 2011. Institutional households such as student dormitories, barracks, and prisons were not included in the study.
The samples were selected using three-stage stratified probability sampling method; i) each province was classified into non-rural/rural geographic segment. ii) the non-rural areas were classified into two categories of "central city/non-central city" segment. iii) 20 households were selected from each segment using simple random sampling method. Which 10 households were selected as the main sample and 10 households were selected as the alternative sample.
The total number of segments in the whole country (m) is obtained by dividing the number of ordinary and group households by 10, and sample areas obtained from following formula:
$$ {m}_{th}=\frac{\sqrt{N_{th}}}{\sum \sqrt{N_{th}}}\times m\kern2.75em t=1,2,3,\dots, 31\kern0.5em ,h=1,2,3 $$
mth is the number of sample areas in the hth class of tth province.Nth is the number of ordinary resident households living in the hth class/category of the tth provinces (from 31 provinces) based on the general census of population and housing in 2011.
A total of 22,470 households were enrolled in this study (N = 81,137 invited, N = 78,378 participated), and the response rate was 96.6%.
For the present study, resulting in a sample size of 12,944 individuals had been received health care from health care facility in the last 2 weeks, and 5404 individuals had been admitted to a hospital in the last year. Data were collected using a questionnaire via interviews. This study was conducted according to the guidelines laid down in the Declaration of Helsinki.
Variable definition
Outcome variables were measured by health care utilization in inpatient and outpatient care, derived from the questions: "Have you been admitted to a hospital in the last year?" and "Have you received health care from a health care facility in the last two weeks?" respectively. Both variable coded as yes = 1 or no = 0.
To calculate the socioeconomic status variable, we used the data on a number of assets collected as part of the UHS survey. Using principal component analysis (PCA), an asset index was calculated for each of the subjects. We divided individuals based on "rank" instead of "weight" for the quintiles included in the decomposition. The index ranks people from the poorest to the richest, by classifying them into five quintiles: very poor, poor, moderate, rich, and very rich [21, 22].
Need factors included demographic variables (age and sex) and a health variable (self-reported health), used as proxies of need [4]. Age was categorized into three groups, less than 30 years, 30–59 and 60 years and older. Sex was defined as male/female. Self-reported health status variable was dichotomized into two groups good and poor. The information for this variable was derived from the question: "you had Have any major illness or suffered from any disability for at least the past year?" (Yes/No). Having either illness or disability was considered poor health and having no illness or disability was measured as good health status.
Non-need factors included socioeconomic status variables, education, basic and supplementary insurance, marital status, and occupation. Education was categorized into three groups: uneducated & elementary, middle & high school and college and above. Basic and supplementary insurance variables were both coded as yes = 1 or no = 0. Marital status was categorized into married or single and occupation or having job was defined as yes/no.
First Concentration Index (C), Concentration Curve (CC) and Horizontal Inequity index (HI) were calculated to measure inequality in health care utilization. To form CC, individuals are sorted according to their socioeconomic status, then the cumulative percentage of population is plotted against the cumulative percentage of health variable. CC above (below) the line of equality indicate health variable is concentrated among poor (rich) individuals. C values range from + 1 to − 1. Positive (negative) value indicates the health variable is concentrated among rich (poor) individuals, and C equals zero means there is no inequality. To calculate the C, The Kakwani method was used by the following eq. (22): (Eq. 2)
$$ C=\frac{2}{\mu}\mathit{\operatorname{cov}}\left({y}_i,{R}_i\right) $$
Where C is concentration index, Cov is the Covariance, yi is the health variable, Ri is the ith individual's fractional rank in the socioeconomic distribution and μ is the health variable mean. Wagstaff correction [23] to the C was used because of binary outcome variables.
Decomposition analysis was used to determine the contribution of need and non-need factors to the observed inequalitc we used the linear approximation of a probit model to estimate partial effects [22] (Eq. 3).
$$ {y}_i={\alpha}^m+{\sum}_j{\beta}_j^m{x}_{ji}+{\sum}_k{\gamma}_k^m{Z}_{k_i}+{\varepsilon}_i $$
The decomposition of the concentration index for yi, can thus be expressed as the following formula (Eq. 4):
$$ C=\sum \left(\frac{\beta_j^m{\overline{x}}_j}{\mu}\right){C}_j+\sum \left(\frac{\gamma_k^m{\overline{Z}}_k}{\mu}\right){C}_k+\frac{GC_{\varepsilon }}{\mu } $$
where, μ is the mean yi (health care utilization), Cj and Ck are the concentration indices for Xj (need factors) and Zk (non-need factors), \( {\upbeta}_{\mathrm{j}}^{\mathrm{m}} \) and \( {\upgamma}_{\mathrm{k}}^{\mathrm{m}} \) are the partial effects (dy/dxj, dy/dzk) for x and z, \( {\overline{\mathrm{x}}}_{\mathrm{j}} \) and \( {\overline{\mathrm{Z}}}_{\mathrm{k}} \) are the mean level of Xj and Zk, \( \left(\frac{\upbeta_{\mathrm{j}}^{\mathrm{m}}{\overline{\mathrm{x}}}_{\mathrm{j}}}{\upmu}\right){\mathrm{C}}_{\mathrm{j}} \) and \( \left(\frac{\upbeta_{\mathrm{j}}^{\mathrm{m}}{\overline{\mathrm{x}}}_{\mathrm{j}}}{\upmu}\right){\mathrm{C}}_{\mathrm{j}} \) are the contributions of need variables (j) and non-need variables (k), and \( \frac{GC_{\varepsilon }}{\mu } \) is the generalized concentration index for the remaining error [22, 24].
Finally, the horizontal index was obtained from the concentration index presented by Eq. (1) minus the estimated contributions of the need variables calculated in Eq. (4). When the horizontal index (HI) is positive, the use of services by individuals with a higher socioeconomic status is more than their need, and when it is negative it indicates that the poor people of the community have received services more than their need [25]. The reference groups employed in the analysis were single, women, under the age of 30, who had college and above education, were in the highest socioeconomic quintile and had basic and complementary insurance. All analyses were performed on rural and non-rural separately.
Table 1 shows the characteristics of the participants. The C for inpatient services were negative in both rural (C = -0.079) and non-rural areas (C = -0.096) pointing toward a higher utilization among individuals belonging to lower income households. On the other hand, the C for outpatient services were positive in both rural (C = 0.038) and non-rural (C = 0.007), which indicated concentration of these services in higher socioeconomic groups. The concentration curves in Fig. 1, confirmed what was indicated by the Concentration indexes.
Table 1 Variable characteristics
Concentration curve for outpatient and inpatient health care utilization in rural and non-rural
After controlling for need factors, the HI was positive and significant for outpatient services in rural (HI = 0.039) and non-rural areas (HI = 0.008), indicating that for given need, the better off make greater use of outpatient services. The HI remained negative for inpatient services in rural (HI = -0.068) and non-rural areas (HI = -0.090), which was significant only in non-rural area indicating that the inpatient services were more utilized by the poor groups (Table 2).
Table 2 Decomposition of Concentration Index for Inpatient health care utilization in rural and non-rural
Table 2 (inpatient services) and Table 3 (outpatient services) show the results of the decomposition analysis by non-rural and rural areas. The first column, regression coefficients show the partial effect of each variable on the utilization. The second column indicates the elasticity of health care utilization for each determinant. The third column shows the concentration index of each of the determinants included in the analysis. The two last columns show the absolute and percentage contributions of each factors to the overall concentration index.
Table 3 Decomposition of Concentration Index for Outpatient health care utilization in rural and non-rural
Regarding the utilization of inpatient care, there was a significant positive association between utilization of services and older age, poor health and low SES in rural residents. In non-rural area, being married, unemployed and individuals who reported poor health, were more likely to use services. Male, people in middle age (30–59) and poor health status were all concentrated among the poor individuals. Need factors explained a smaller proportion of the inequality favoring the poor in both rural and non-rural areas (13.83 and 5.84% respectively), while non-need factor accounted for bigger proportion of the inequality (57.024 and 39.83% in rural and non-rural areas, respectively). Among the need factors, poor health was the major contributor, whereas the other factors displayed an insubstantial role. Among the non-need factors, lack of supplementary insurance and low SES made the largest contributions to explain the pro-poor inequalities in both rural and non-rural areas.
For the utilization of outpatient care, in rural areas, there was a negative association between older age, sex, marital status and lack of basic insurance and a positive association between high SES, poor health, being unemployed and lack supplementary insurance with use of this service. On the other hand, in non-rural areas, individuals with poor health, high SES and lacking supplementary insurance were more likely to use the services. The need factors were slightly offsetting (− 2.26% and − 16.30% in rural and non-rural, respectively) the contribution of non-need factors which in this case accounted for most of the inequality favoring the better-off (92.96 and 46.99% in rural and non-rural, respectively). The lack of supplementary insurance coverage was the largest non-need contributor in both rural and non-rural area, with an additional contribution coming from education. Interestingly, high SES made a very small contribution to the pro-rich inequalities in rural area while low SES was instead offsetting the inequalities in non-rural area.
The results of this study suggest firstly, that whereas inpatient services are fairly equitable and seem to meet the principle of horizontal equity, the use of outpatient services is substantially concentrated among the well-off population. Second, that rural areas displayed lower levels of inequality in the use of inpatient services while non-rural areas showed lower levels of inequalities in the use of outpatient services. Third, the decomposition analysis suggested that non-need factors were the most important contributors to explain both inpatient and outpatient inequalities, and among them, the lack of supplementary insurance and SES were the most important explanatory factors.
The overall observed pattern of inequality in outpatient and inpatient healthcare services in our study is in accordance with the findings of studies conducted in three high-income countries in East Asia (Honk Kong, South Korea, and Taiwan) [26] and Brazil [27]. On the one hand, private insurance coverage (Hong Kong and Brazil), low education, unemployment (South Korea), place of residency and income (Taiwan) were the main explanatory factors for outpatient pro-rich inequalities. On the other hand, policy interventions as for example services-on-wheel and exemption of co-payment in rural residents were driving the pro-poor inpatients inequalities [26, 27]. Contrasting patterns to those found in the present study have also been described in other settings. For example a study in China reported pro-rich inequity of inpatient utilization in rural residents [28]. Our study adds to this meagre literature by suggesting the levels of inequalities in the use of inpatient services among rural residents is lower than in the non-rural ones. Possible explanations of our findings could be the successful implementation of the family physician program, the rural insurance scheme and the referral system, which have already shown increased equal and comfortable access to health services [11, 29]. As a result of these interventions, family physicians act as a gatekeeper to the system and rural insurance holders only pay a small portion of the total costs when they are admitted to hospitals through the referral system. Conversely, in large cities, family physicians doesn't have an obvious role as gate-keepers [30].
This study also suggested that utilization of outpatient service is not equitable among Iranian population. In our study, the pro-rich inequality in outpatient services was higher in rural than in non-rural areas. This inequality was mainly explained by the higher level of utilization among people with no supplementary health insurance, which was in fact concentrated among the high SES. In Iran, there are three type of health care services including; public sectors, quasi-public sectors and private sectors. While hospitalization services are mainly provided by the public sector (more than 70% of inpatient facilities and services), more than 70% of outpatient facilities and services are provided by the private sector [31]. Private sector fee is much higher than the public sector's, which therefore could lead to pro-rich outpatient services. Despite increasing the share of government and insurance funds in total health expenditures, finding has shown the out of pocket payments of the households increased even more than the previous years [15]. After the implementation of the Health transformation Plan, private sector services fee increased by an average of over 100% [32]. In addition, part of the outpatient service including dental services and rehabilitation services are not covered by insurance and have also been reported to be pro-rich [31, 33]. Despite the high insurance coverage in Iran, service coverage and cost coverage by insurances are not sufficient when an individual need to use the private sector, which consequently has led to reduced access and utilization of services among the low-income individuals [34]. Also evidence of other studies in Iran have shown that low quality of health care delivery by public sectors and family physicians led to seeking care in private sectors and specialist services particularly in outpatient care [35, 36]. Similar to our findings, a previous study in Iran showed that poor socioeconomic status was associated with low utilization of services [37]. In contrast of our study and other studies in Iran, a local study conducted in 2012 in Shiraz (the fifth most populous city of Iran) reported a pro-poor inequality in utilization of outpatient services after standardizing for need factors. The allocation of subsidies and low cost of services in the public sector, high insurance coverage, and financial barriers related to upper level access, reported as the main factors associated to the pro-poor inequality; in addition the reason reported for under-utilization of the rich individuals is low quality of services in Shiraz [38].
This study has some limitations; the data on socioeconomic status, health status, and the utilization of services were collected via self-reported questionnaire, as a result, the collected data might have some bias. In the present study a PCA analysis was used to calculate socioeconomic variable therefore the choice of variables and the appropriateness of the weights assigned to them might be amatter of concern [4]. In addition, the self-reported health variable was binary (good and poor health), which will not indicate variation in health status and might not adequately discriminate those in need for health care services, therefore its effect may be overestimated or underestimated. Another limitation is big residuals in some of the models meaning the variables included in the model were not able to adequately explain the inequalities in the outcome. Some other variables i.e. quality of health care delivery could have been relevant to explain these inequalities, however information was not available in the dataset. Also to handle the limitation of secondary data, survey method including the survey instrument was considered. Concerning the analysis, we used the correction proposed by Wagstaff et al. to the concentration index because of the binary nature of the health outcome [4].
Our results, suggest a pro-poor income-related inequality in inpatient and pro-rich income-related inequality in outpatient services. Inequalities are mainly explained by non-need factors i.e. lack of supplementary insurance and SES. Magnitude of HI was greater in rural areas for outpatient services and greater in non-rural area for inpatient services. Disentangle the different contribution of determinants as well as variations in HI among rural and non-rural areas provide helpful information for decision makers to re-design policy and re-distribute resource allocation in order to reduce the socioeconomic gradient in health care utilization.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Concentration Index
Concentration Curve
Horizontal Inequity index
UHC:
PHC:
Public health care
UHS:
Utilization of Health Services
PCA:
Principal component analysis
Boerma T, Eozenou P, Evans D, Evans T, Kieny M-P, Wagstaff A. Monitoring progress towards universal health coverage at country and global levels. PLoS Med. 2014;11(9):e1001731.
Carrin G, James C, Organization WH. Reaching universal coverage via social health insurance: key design features in the transition period; 2004.
Atun R, Aydın S, Chakraborty S, Sümer S, Aran M, Gürol I, et al. Universal health coverage in Turkey: enhancement of equity. Lancet. 2013;382(9886):65–99.
O'Donnell OA, Wagstaff A. Analyzing health equity using household survey data: a guide to techniques and their implementation: World Bank publications; 2008.
Culyer AJ, Wagstaff A. Equity and equality in health and health care. J Health Econ. 1993;12(4):431–57.
Ev D, Koolman X, Jones AM. Explaining income†related inequalities in doctor utilisation in Europe. Health Econ. 2004;13(7):629–47.
Fendall N. Declaration of Alma-Ata. Lancet. 1978;312(8103):1308.
Bonilla-Chacín ME, Aguilera N. The Mexican social protection system in health; 2013.
Takian A, Rashidian A, Kabir MJ. Expediency and coincidence in re-engineering a health system: an interpretive approach to formation of family medicine in Iran. Health Policy Plan. 2010;26(2):163–73.
Ibrahimipour H, Maleki M-R, Brown R, Gohari M, Karimi I, Dehnavieh R. A qualitative study of the difficulties in reaching sustainable universal health insurance coverage in Iran. Health Policy Plan. 2011;26(6):485–95.
Kalhor R, Azmal M, Kiaei MZ, Eslamian M, Tabatabaee SS, Jafari M. Situational analysis of human resources in family physician program: survey from Iran. Materia socio-Med. 2014;26(3):195.
Takian A, Rashidian A, Doshmangir L. The experience of purchaser–provider split in the implementation of family physician and rural health insurance in Iran: an institutional approach. Health Policy Plan. 2015;30(10):1261–71.
Takian A, Doshmangir L, Rashidian A. Implementing family physician programme in rural Iran: exploring the role of an existing primary health care network. Fam Pract. 2013;30(5):551–9.
Doshmangir L, Rashidian A, Ravaghi H, Takian A, Jafari M. The experience of implementing the board of trustees' policy in teaching hospitals in Iran: an example of health system decentralization. Int J Health Policy Manag. 2015;4(4):207.
Yazdi-Feyzabadi V, Bahrampour M, Rashidian A, Haghdoost A-A, Javar MA, Mehrolhassani MH. Prevalence and intensity of catastrophic health care expenditures in Iran from 2008 to 2015: a study on Iranian household income and expenditure survey. Int J Equity Health. 2018;17(1):44.
Ehsani-Chimeh E, Sajadi HS, Majdzadeh R. Iran towards universal health coverage: the role of human resources for health. Med J Islamic Repub Iran (MJIRI). 2018;32(1):578–82.
Mohammadbeigi A, Hassanzadeh J, Eshrati B, Rezaianzadeh A. Decomposition of inequity determinants of healthcare utilization, Iran. Public Health. 2013;127(7):661–7.
Pourreza A, Khabiri R, Rahimi Foroushani A, Akbari Sari A, Arab M, Kavosi Z. Health care-seeking behavior in Tehran, Islamic Republic of Iran. World Appl Sci J. 2011;14(8):1190–7.
Etemad K, Yavari P, Mehrabi Y, Haghdoost A, Motlagh ME, Kabir MJ, et al. Inequality in utilization of in-patients health services in Iran. Int J Prev Med. 2015;6:45.
Abouie A, Majdzadeh R, Khabiri R, Hamedi-Shahraki S, Emami Razavi SH, Yekaninejad MS. Socioeconomic inequities in health services' utilization following the health transformation plan initiative in Iran. Health Policy Plan. 2018;33(10):1065.
Moradi G, Mostafavi F, Azadi N, Esmaeilnasab N, Ghaderi E. Socioeconomic inequality in childhood obesity. J Res Health Sci. 2017;17:3.
Wagstaff A, O'Donnell O, Van Doorslaer E, Lindelow M. Analyzing health equity using household survey data: a guide to techniques and their implementation: World Bank publications; 2007.
Wagstaff A. The bounds of the concentration index when the variable of interest is binary, with an application to immunization inequality. Health Econ. 2005;14(4):429–32.
Mosquera PA, Waenerlund A-K, Goicolea I, Gustafsson PE. Equitable health services for the young? A decomposition of income-related inequalities in young adults' utilization of health care in northern Sweden. Int J Equity Health. 2017;16(1):20.
Van Doorslaer E, Wagstaff A, Van der Burg H, Christiansen T, De Graeve D, Duchesne I, et al. Equity in the delivery of health care in Europe and the US. J Health Econ. 2000;19(5):553–83.
Jui-fen RL, Leung GM, Kwon S, Tin KY, Van Doorslaer E, O'Donnell O. Horizontal equity in health care utilization evidence from three high-income Asian economies. Soc Sci Med. 2007;64(1):199–212.
Macinko J, Lima-Costa MF. Horizontal equity in health care utilization in Brazil, 1998–2008. Int J Equity Health. 2012;11(1):33.
Zhou Z, Gao J, Fox A, Rao K, Xu K, Xu L, et al. Measuring the equity of inpatient utilization in Chinese rural areas. BMC Health Serv Res. 2011;11(1):201.
Khayyati F, Motlagh ME, Kabir M, Kazemeini H. The role of family physician in case finding, referral, and insurance coverage in the rural areas. Iran J Public Health. 2011;40(3):136.
Takian A. Implementing family medicine in Iran identification of facilitators and obstacles: London School of Hygiene & tropical medicine; 2009.
Rezaei S, Hajizadeh M, Bazyar M, Kazemi Karyani A, Jahani B, Karami MB. The impact of health sector evolution plan on the performance of hospitals in Iran: evidence from the Pabon lasso model. Int J Health Governance. 2018;23(2):111–9.
Moghadam TZ, Raeissi P, Jafari-Sirizi M. Analysis of the health sector evolution plan from the perspective of equity in healthcare financing: a multiple streams model. Int J Human Rights Healthc. 2018. https://doi.org/10.1108/IJHRH-07-2018-0044.
Karami Matin B, Hajizadeh M, Najafi F, Homaie Rad E, Piroozi B, Rezaei S. The impact of health sector evolution plan on hospitalization and cesarean section rates in Iran: an interrupted time series analysis. Int J Qual Health Care. 2017;30(1):75–9.
Rashidian A, Joudaki H, Khodayari-Moez E, Omranikhoo H, Geraili B, Arab M. The impact of rural health system reform on hospitalization rates in the Islamic Republic of Iran: an interrupted time series. Bull World Health Organ. 2013;91:942–9.
Dehnavieh R, Kalantari AR, Sirizi MJ. Urban family physician plan in Iran: challenges of implementation in Kerman. Med J Islam Repub Iran. 2015;29:303.
LeBaron SW, Schultz SH. Family medicine in Iran: the birth of a new specialty. Fam Med-Kansas City. 2005;37(7):502.
Piroozi B, Moradi G, Nouri B, Bolbanabad AM, Safari H. Catastrophic health expenditure after the implementation of health sector evolution plan: a case study in the west of Iran. Int J Health Policy Manag. 2016;5(7):417.
Kavosi Z, Mohammadbeigi A, Ramezani-Doroh V, Hatam N, Jafari A, Firoozjahantighi A. Horizontal inequity in access to outpatient services among Shiraz City residents, Iran. J Res Health Sci. 2015;15(1):37–41.
This project was supported and registered by Kurdistan University Medical Science in Iran.
The authors received no specific funding for publishing, or preparation of the manuscript of this work.
Social Determinants of Health Research Center, Kurdistan University of Medical Sciences, Sanandaj, Iran
Farideh Mostafavi, Bakhtiar Piroozi & Ghobad Moradi
Department of Public Health and Clinical Medicine, Epidemiology and Global Health, Umeå University, Umeå, Sweden
Paola Mosquera
School of Public Health and Institute of Public Health Research, Epidemiology and Biostatistics, Tehran University of Medical Sciences, Tehran, Iran
Reza Majdzadeh
Department of Epidemiology and Biostatistics, Kurdistan University of Medical Sciences, Pasdaran Ave, Sanandaj, Iran
Ghobad Moradi
Farideh Mostafavi
Bakhtiar Piroozi
FM and GM contributed to the conceptualizing, planning and writing the article, statistical analysis, result interpretation, literature review. PM, contributed to the result interpretation, literature review. BP and RM contributed to literature review. All authors read and approved the final manuscript.
Correspondence to Ghobad Moradi.
This study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the Kurdistan University of Medical Sciences Ethics Committee (with IR.MUK.REC.1395/396 the committee's reference number).
Mostafavi, F., Piroozi, B., Mosquera, P. et al. Assessing horizontal equity in health care utilization in Iran: a decomposition analysis. BMC Public Health 20, 914 (2020). https://doi.org/10.1186/s12889-020-09071-z
DOI: https://doi.org/10.1186/s12889-020-09071-z
|
CommonCrawl
|
Mosc. Math. J.:
Mosc. Math. J., 2017, Volume 17, Number 1, Pages 59–77 (Mi mmj626)
A calculation of Blanchfield pairings of $3$-manifolds and knots
Stefan Friedla, Mark Powellb
a Fakultät für Mathematik, Universität Regensburg, Germany
b Département de Mathématiques, Université du Québec à Montréal, QC, Canada
Abstract: We calculate Blanchfield pairings of $3$-manifolds. In particular, we give a formula for the Blanchfield pairing of a fibred $3$-manifold and we give a new proof that the Blanchfield pairing of a knot can be expressed in terms of a Seifert matrix.
Key words and phrases: Blanchfield pairing, $3$-manifold, Seifert form.
Deutsche Forschungsgemeinschaft SFB 1085
Natural Sciences and Engineering Research Council of Canada (NSERC)
The first named author was supported by the SFB 1085 "Higher invariants" funded by the DFG. The second named author is supported by an NSERC Discovery Grant.
DOI: https://doi.org/10.17323/1609-4514-2017-17-1-59-77
Full text: http://www.mathjournals.org/.../2017-017-001-005.html
MSC: 57M25, 57M27
Received: March 21, 2016
Citation: Stefan Friedl, Mark Powell, "A calculation of Blanchfield pairings of $3$-manifolds and knots", Mosc. Math. J., 17:1 (2017), 59–77
\Bibitem{FriPow17}
\by Stefan~Friedl, Mark~Powell
\paper A calculation of Blanchfield pairings of $3$-manifolds and knots
\jour Mosc. Math.~J.
\mathnet{http://mi.mathnet.ru/mmj626}
\crossref{https://doi.org/10.17323/1609-4514-2017-17-1-59-77}
http://mi.mathnet.ru/eng/mmj626
http://mi.mathnet.ru/eng/mmj/v17/i1/p59
A. Conway, "An explicit computation of the Blanchfield pairing for arbitrary links", Can. J. Math.-J. Can. Math., 70:5 (2018), 983–1007
A. Conway, S. Friedl, E. Toffoli, "The Blanchfield pairing of colored links", Indiana Univ. Math. J., 67:6 (2018), 2151–2180
|
CommonCrawl
|
The dynamic dimer structure of the chaperone Trigger Factor
The multiple facets of the Hsp90 machine
Laura J. Blair, Olivier Genest & Mehdi Mollapour
Tunable microsecond dynamics of an allosteric switch regulate the activity of a AAA+ disaggregation machine
Hisham Mazal, Marija Iljina, … Gilad Haran
Native mass spectrometry analyses of chaperonin complex TRiC/CCT reveal subunit N-terminal processing and re-association patterns
Miranda P. Collier, Karen Betancourt Moreira, … Judith Frydman
Chaperones convert the energy from ATP into the nonequilibrium stabilization of native proteins
Pierre Goloubinoff, Alberto S. Sassi, … Paolo De Los Rios
Bacterial Hsp70 resolves misfolded states and accelerates productive folding of a multi-domain protein
Rahmi Imamoglu, David Balchin, … F. Ulrich Hartl
Protein folding while chaperone bound is dependent on weak interactions
Kevin Wu, Frederick Stull, … James C. A. Bardwell
How do our cells build their protein interactome?
Benoit Coulombe, Philippe Cloutier & Marie-Soleil Gauthier
The ribosome modulates folding inside the ribosomal exit tunnel
Florian Wruck, Pengfei Tian, … Alexandros Katranidis
CryoEM reveals the stochastic nature of individual ATP binding events in a group II chaperonin
Yanyan Zhao, Michael F. Schmid, … Wah Chiu
Leonor Morgado1 na1,
Björn M. Burmann ORCID: orcid.org/0000-0002-3135-79641 na1 nAff2,
Timothy Sharpe1,
Adam Mazur1 &
Sebastian Hiller ORCID: orcid.org/0000-0002-6709-46841
Nature Communications volume 8, Article number: 1992 (2017) Cite this article
The chaperone Trigger Factor (TF) from Escherichia coli forms a dimer at cellular concentrations. While the monomer structure of TF is well known, the spatial arrangement of this dimeric chaperone storage form has remained unclear. Here, we determine its structure by a combination of high-resolution NMR spectroscopy and biophysical methods. TF forms a symmetric head-to-tail dimer, where the ribosome binding domain is in contact with the substrate binding domain, while the peptidyl-prolyl isomerase domain contributes only slightly to the dimer affinity. The dimer structure is highly dynamic, with the two ribosome binding domains populating a conformational ensemble in the center. These dynamics result from intermolecular in trans interactions of the TF client-binding site with the ribosome binding domain, which is conformationally frustrated in the absence of the ribosome. The avidity in the dimer structure explains how the dimeric state of TF can be monomerized also by weakly interacting clients.
The functionality of most proteins requires their correct folding subsequent to synthesis by the ribosome. Accumulation of structural intermediates or misfolded proteins into insoluble aggregates can lead to substantial impairment of cellular processes. Molecular chaperones have evolved in all kingdoms of life to play fundamental roles in protein biogenesis by preventing misfolding and aggregation of proteins, including transport of clients from their point of synthesis to their final cellular destination, where proper folding occurs1,2,3,4. Trigger Factor (TF) is a chaperone found in Gram-negative and Gram-positive bacteria as well as in chloroplasts5. TF binds to the translating ribosome and is thus the first chaperone to interact with newly synthesized polypeptides. TF is highly abundant in Escherichia coli cells but it is not essential for cell viability since its depletion is compensated by up-regulation of the functionally alternative chaperone DnaK6. However, the deletion of both chaperones is lethal at temperatures above 30 °C6. TF interacts with a multitude of substrates, among which outer membrane proteins are the most abundant ones, as revealed by ribosome profiling experiments7. TF consists of 432 amino acid residues and is organized in three domains adopting an overall elongated shape (Fig. 1a, b)8. The N-terminal domain (residues 1–113) is the ribosome-binding domain (RBD), that contains the TF signature motif GFRxGxxP (residues 43–50), via which TF binds to the ribosomal protein L239. The peptidyl-prolyl isomerase domain (PPD) on the opposite side of TF can catalyze the isomerization of peptidyl-prolyl bonds and is structurally homologous to FK506-binding proteins10. The C-terminal domain of TF is the substrate-binding domain (SBD), stabilized by a linker between the RBD and PPD domains (residues 114–149)11. The SBD forms the central body of the protein and has two helical arms that create a cavity (Arm1: residues 302–360, Arm2: residues 361–412).
Domain organization of full-length TF and secondary structure elements in solution. a On the ribbon representation of a published TF crystal structure (PDB 1W268), the three domains ribosome-binding domain (RBD), substrate-binding domain (SBD), and peptidyl-prolyl-cis/trans isomerase domain (PPD) are colored in red, blue, and yellow, respectively. b Domain constructs of E. coli TF used in this work. Six constructs of TF domains are shown with amino acid numbering corresponding to full-length TF. The names define a color code used throughout this work. c Secondary 13C chemical shifts plotted against the amino acid residue number of TF, as determined from triple-resonance experiments in the domain constructs SBD–PPD (green), RBD (red), and SBD (blue). A 1–2–1 weighting function for residues (i−1)–i–(i + 1) has been applied to the raw data to reduce noise and highlight regular secondary structure elements. Secondary structure elements were calculated for the crystal structure (PDB 1W26, gray) with DSSP12 and for the NMR data with CSI 3.013 and are indicated on top. The red arrows and boxes highlight structural elements detected only in solution
In the absence of ribosomes, TF is known to exist in a two-state equilibrium between a monomeric and a dimeric form. The dissociation constant (K D) of the dimer under physiological conditions is in the range 1–18 μM, as determined by multiple techniques14, 15. Since the cellular concentration of TF is 50 μM5, in the absence of excess concentrations of clients and ribosomes, the dimer is the dominant apo form of TF under physiological conditions, representing a pool of "resting state" molecules. However, despite numerous studies, the spatial arrangement of TF in its dimeric form has remained unclear. Several atomic resolution structures of TF from different organisms (E. coli, Thermotoga maritima, Vibrio cholerae, Deinococcus radiodurans, and Mycoplasma genitalium) are available for the full-length protein8, 16, 17, as well as for its individual domains10, 11, 18, 19, in complex with the ribosome8, 20 or in complex with substrates17, 21. While the fold of the individual domains is essentially maintained in these available crystal structures, their relative orientation varies substantially. Different dimeric arrangements are observed, some or most of which may have arisen from crystal contacts, and no consistently recurring dimeric arrangement is observed. For example, the TF E. coli crystal structure (PDB 1W26) shows a network of crystal contacts formed by the same residues that form the substrate-binding cradle but does not show an apparent dimer interface8. Or, the T. maritima structure was determined both in apo form (PDB 3GU0) and in complex with the ribosomal protein S7 (PDB 3GTY), resulting in different arrangements17. Furthermore, the structure of V. cholerae TF was determined on a construct with a C-terminal truncation that was later shown to diminish the dimerization and also the chaperone activity22, 23. Additionally, the dimerization interface was characterized by fluorescence resonance energy transfer and cross-linking experiments, and a nearly perpendicular orientation between the monomers was proposed14, 24. Notably, all observed TF dimer arrangements except in the T. maritima holo structure are asymmetric. An independent possibility for dimer formation could be extrapolated from the crystal structure of the isolated RBD (PDB 1OMS), which forms symmetric dimers18, 25, and modeling the full-length structure on this template would result in a symmetric arrangement. Finally, published small-angle X-ray scattering (SAXS) measurements show a compact globular particle with a length of about 37 Å26. Overall, different models and suggestions are thus available for the structure of the TF dimer.
This work aims to determine the structural arrangement of the E. coli TF dimer in aqueous solution at the atomic level. A combination of solution Nuclear Magnetic Resonance (NMR) experiments together with biophysical methods revealed that the arrangement of the TF dimer is caused by a dynamic interaction between two monomers. Specific NMR distance constraints established the structure of the TF dimer. Two conformers are presented and the experimental strategy is described, which will be generally useful to describe dynamic protein complexes.
The domain folds are preserved in solution
To obtain a structural description of the TF dimer, we characterized full-length TF and individual domain constructs by high-resolution NMR spectroscopy. The individual domains, as well as the bi-domain construct SBD–PPD, gave rise to well-dispersed and high-quality spectra, enabling backbone resonance assignment in a sequence-specific manner using triple-resonance experiments together with information from published assignments (Supplementary Figs. 1 and 2)21, 25. Since the domain folds are preserved in all available TF crystal structures, we decided to probe the integrity of individual folds in aqueous solution by calculating the secondary chemical shifts of the RBD, SBD, and SBD–PPD constructs and comparing them to the available TF crystal structure (PDB 1W26; Fig. 1c). Compared to the crystal structure, most of the secondary structure elements are maintained in solution, with two notable differences. As previously shown by Hsu and Dobson25, the third β-strand of the RBD is not detected in solution. This strand is at the edge of the β-sheet element of the RBD and is possibly disordered in solution. On the other hand, two short β-strand pairs are formed by residues 115–117 with 298–301, and by residues 181–183 with 192–195, respectively. The former segments face each other in the linkers connecting the RBD to the SBD and the linker between the SBD body and one of the arms. The latter are located within the PPD. Overall, the data show that the structures of the individual domains are maintained in solution.
The topology of the TF dimer
As a next step, we quantified the pairwise homotypic and heterotypic interactions of individual TF domain constructs. Size exclusion chromatography coupled with multi-angle light scattering (SEC–MALS) experiments were used to determine the homo-oligomerization of the constructs (Table 1 and Supplementary Fig. 3). The accessible range of protein concentrations was limited by the individual solubilities of the domain constructs and by the dilution factor of the SEC–MALS experiments. However, since MALS data for dimerization can be fitted with constrained values for the titration end point masses, K D values or lower limits thereof can be reliably obtained also from solubility-limited data sets. The data reveal that two of the six tested constructs, full-length TF and RBD–SBD, undergo a monomer–dimer transition in the micro-molar concentration range in solution, while the other four domain constructs, RBD, SBD, PPD, and SBD–PPD, do not homo-oligomerize in the concentration range analyzed. For the two constructs that form a dimer, TF and RBD–SBD, the respective dissociation constants K D were additionally measured by sedimentation equilibrium analytical centrifugation (AUC) and the values agree with those obtained by SEC–MALS. Under our experimental conditions, full-length TF formed a dimer with K D = 2.5 ± 1.1 μM, and the RBD–SBD construct with K D = 63 ± 13 μM, as averaged from the independent methods (Table 1). We also determined the impact of the buffer ionic strength on the dimerization (Supplementary Fig. 3 and Table 1). For both full-length TF and the RBD–SBD construct, the dimer affinity decreased with increasing potassium chloride concentration, indicating that the interaction includes substantial electrostatic components. Importantly, all these experiments were performed on protein constructs lacking any His6-purification tag. Preliminary SEC–MALS data collected for His6-tagged full-length TF showed that the presence of such tags artificially increases the dimerization affinity. In contrast to the wild-type, the His6-tagged construct was still completely dimeric at an elution concentration of 10 µM, which means, with a conservative estimate, its K D must be below 100 nM, at least one order of magnitude lower than for untagged full-length TF (Supplementary Fig. 4).
Table 1 Dimer dissociation constants K D (μM) of TF domain constructs
The heterotypic domain interactions were probed by solution NMR chemical shift titrations, in which one isotope-labeled domain was titrated with the second unlabeled domain (Fig. 2, Supplementary Table 1, and Supplementary Fig. 5). Out of six possible pairwise combinations of the domain constructs RBD, SBD, PPD, and SBD–PPD, two gave rise to detectable interactions in the micro-molar concentration range. Overall, the biophysical characterization of the domains thus yielded an interaction matrix, representing the dimer topology of TF (Supplementary Table 1). These interactions of the TF domains are consistent with a single possible arrangement wherein the dimer of TF is formed by intermolecular contacts between the RBD of one protomer and the SBD of the other protomer, and that the resulting core interaction is mildly stabilized by the presence of the PPD.
Localization of pairwise interaction sites on individual TF domains. a NMR titration of unlabeled SBD–PPD to 100 μM [U-15N] RBD in sample buffer (20 mM K-phosphate pH 6.5, 100 mM KCl, 0.5 mM EDTA) at 25 °C and 700 MHz. b NMR titration of unlabeled RBD to 250 μM [U-2H,15N] SBD–PPD in sample buffer at 25 °C and 700 MHz. c Chemical shift perturbation of the amide moieties observed in the titrations: [U-15N] RBD + SBD–PPD (top left), [U-15N] RBD + SBD (bottom left), [U-2H,15N] SBD–PPD + RBD (top right), and [U-2H,15N] SBD + RBD (bottom right). Light-shaded bars represent peaks undergoing line-broadening. Dashed lines are plotted at defined thresholds (mean value of the chemical shift perturbations plus one time and plus two times the standard deviation corrected to zero). d Significant chemical shift perturbations plotted in TF crystal structure (PDB 1W26). The affected residues are plotted with color gradient from light to dark for peaks with chemical shift changes above the threshold and that broaden beyond detection, respectively
The RBD is dynamic in the TF dimer
The NMR spectra of full-length TF share the notable feature that the resonances of the RBD are mostly absent, i.e., the 2D [15N,1H]-TROSY spectrum of full-length TF is essentially a superposition of the two isolated domains SBD and PPD (Supplementary Fig. 6). In the spectrum, a single set of resonances was observed, indicating that the two protomers in the dimer are structurally equivalent at least for the SBD and PPD. A closer inspection of the full-length TF NMR spectrum, together with the sequence-specific resonance assignments, revealed that only 16 out of 108 expected resonances of the RBD are observed. Ten of these could be unambiguously assigned (residues 7, 10, 11, 12, 16, 17, 105, 107, 108, 111), and these are all located in the β-sheet region in the proximity of the SBD. Furthermore, some resonances of the SBD are line-broadened in full-length TF (Supplementary Fig. 6). A similar effect was observed for the construct RBD–SBD, which features an NMR spectrum highly similar to the spectrum of SBD (Supplementary Fig. 6). Thus, the NMR resonance lines of most of the RBD and some of the SBD are line-broadened as soon as the RBD is in contact with the SBD, both in the full-length TF and the RBD–SBD construct. The line-broadening directly indicates the presence of dynamic processes on the NMR intermediate timescale, i.e., with kinetic rates between 1000 and 10 s−1 27,28.
In the monomer–dimer equilibrium of TF, these intermediate timescale dynamics could a priori be caused by three different kinetic processes, or combinations thereof: (i) conformational exchange within the monomeric species, (ii) the transition of a protomer between its monomeric and its dimeric form, or (iii) conformational exchange within the dimeric species without going through a monomeric species. Since the individual domains of TF in their monomeric forms, as well as monomerized mutants of TF (see below), do not feature the observed line-broadening, the process (i) does not have strong contributions to the observed line-broadening in the TF dimer. We then used a spin-label exchange experiment in real time, to measure the exchange rate of the dimer to monomer equilibrium as a function of temperature, and thus the possible impact of process (ii). The data show that the lifetimes of the TF dimer is 15.8 min at 15 °C and 2.6 min at 35 °C, corresponding to dimer dissociation rate constants of k off = 0.0011 s−1 and k off = 0.006 s−1, respectively (Fig. 3). Lifetimes determined at further intermediate temperatures are within this range. These monomer–dimer exchange kinetics are not close to the intermediate exchange regime, thus leaving the process (iii) as a main mechanistic cause for the observed intermediate exchange of the TF dimer. Overall, the RBD thus populates a conformational ensemble in the TF dimer, with individual conformers connected by exchange rate constants on the intermediate timescale.
Determination of the lifetime of the TF dimer. a Experimental scheme. At the onset of the experiment (t = 0), separately produced samples of [U-15N,2H]-labeled (green) and randomly spin-labeled (brown) TF are mixed in equal ratio in sample buffer (left). The NMR signal intensity is then monitored in real time t during the equilibration to the end point (right). NMR signals of protomers bound to a spin-labeled protomer are reduced in intensity by the intermolecular PRE, symbolized by light green color. b Intensity of 15N-filtered NMR signals, I Δ, rel, following the experimental scheme in a. The lifetime of the TF dimer is obtained by non-linear least-square fits (lines) to the data (dots). See Supplementary Note 1 for mathematical details. The experiment was performed at five temperatures in the range 15–35 °C, as indicated
Spatial domain positioning in the dimeric arrangement
To establish a structural model of the dynamic TF dimer, we obtained experimental constraints from two sources: chemical shift perturbation (CSP) experiments and paramagnetic relaxation enhancements (PREs). To this end, the contact interfaces of the interacting domains RBD and SBD were mapped by NMR CSP titrations (Fig. 2). Significant CSPs on the RBD were observed upon titrating with either SBD or with the bi-domain construct SBD–PPD. Significantly shifting resonances are, for example, residues E101, L336, and S389 (Fig. 2). Thereby, multiple peaks of the RBD disappeared beyond detection, with the effect being more pronounced in the titration with SBD–PPD. This result agrees with the spectroscopic observations on full-length TF, where only few peaks for the RBD are observable. On the SBD, large chemical shift changes were observed upon titrating RBD to either SBD alone or SBD–PPD, with the strongest changes located in the tips of the arms of the SBD (residues 323/331/335/336 and 376/380/386). The resonances of some residues in the SBD also broadened beyond detection, revealing the changes in local protein dynamics upon interaction. No CSP effect was observed in any of the titrations with the PPD only (Supplementary Table 1). These titration experiments establish the location of the main interaction surfaces in the individual domains for molecular docking.
To obtain atomic distance information on such a dynamic interaction with line-broadened peaks, the method of choice is the use of PREs. For these measurements, a spin label with a paramagnetic unpaired electron is introduced at selected sites into full-length TF, causing enhanced relaxation of the nuclear spins in its vicinity up to ∼25 Å, and thus providing long-distance information (Fig. 4). In general, one possibility to detect purely intermolecular PRE distance information is the use of mixed samples, such as a 1:1 mixture of isotope-labeled (e.g., 15N) and spin-labeled protein (SL). However, the spectral analysis with these type of preparations is not straightforward because three types of dimers are present in solution (15N/15N, 15N/SL, and SL/SL) with overall decreased experimental sensitivity and a reduced averaged effect29. Our initial tests showed that the complicated NMR spectrum, the large molecular weight of TF with 432 residues, as well as its limited solubility make this approach not feasible for TF. Therefore, we decided to work with a sample containing a single species of uniformly isotope- and spin-labeled TF.
Domain contacts in the full-length TF dimer. Result of PRE experiments with a paramagnetic spin label attached to one of the positions S30, V49, S61, S72, A223, and E326 in full-length TF measured in sample buffer (20 mM K-phosphate pH 6.5, 100 mM KCl, 0.5 mM EDTA) at 25 °C and 700 MHz. The peak volume ratio between oxidized and reduced samples from 2D [15N,1H]-TROSY is plotted against the residue number. For visualization purposes, a value of 0.15 is shown for the peaks that were broadened beyond detection in the paramagnetic sample. Data are shown only for non-overlapping resonances. The black line outlines the PRE effect observed for each mutant. The orange-shaded regions correspond to intermolecular PRE and the green shaded to intramolecular effects, as expected from the monomeric crystal structure (PDB 1W26). The colored bars on top show the sequence domain organization as in Fig. 1
The positioning of the spin label was chosen as to maximize the available structural information. PREs were measured in TF samples with the spin label in the following positions: S30C, V49C, S61C, and S72C in the RBD, A223C in the PPD, and E326C in the SBD. Importantly, placing multiple probes at the line-broadened RBD was essential to obtain distance information between this domain and the other residues in the molecule, as the reverse experiment with detection on the RBD would not be meaningful. The PRE data were quantitatively analyzed for the residues in full-length TF with unambiguous assignment and non-overlapping peaks (Fig. 4). For consecutive polypeptide segments, the observed PRE effects were then classified by comparing the measured interspin distances with the known domain structures. Those PREs that could be explained based on the monomeric structure were classified as intramolecular (highlighted in green on Fig. 4) and all remaining PREs as intermolecular effects (highlighted in orange on Fig. 4). Among the multiple long-range distance contacts observed, the most striking one was verified between the spin-labeled residues V49C or S61C in the RBD and the inner surface of the PPD (regions around residues 165, 190, and 220). These regions are far apart in the TF monomer structure but close in the dimer, independently confirming the previous conclusion that the PPD of one protomer is in close spatial contact with the RBD of the other protomer.
Structure of the TF dimer
The experimental data from the CSP and PRE measurements were then used to calculate structural conformers of the TF dimer. Based on the domain folds of the crystal structure 1W26, which we had validated by secondary chemical shift analysis, we employed a two-step procedure consisting of a CSP-based docking followed by PRE-driven annealing (Fig. 5a). For the first calculation step, the docking algorithm HADDOCK30,31,32 was employed, using the CSP data between the RBD and the SBD as input, together with C2 symmetry restraints for the two protomers (Supplementary Table 2). The output of the HADDOCK calculation contained two structural clusters with similar target energy function and overall identical domain topology (Fig. 5b). In conformer 1, the RBD locates inside the cavity formed by the SBD arms. In this structure, the ribosome-binding site is completely occluded inside the SBD cavity. In conformer 2, the RBD of one protomer lies on top of the tips of the arms of the SBD of the other protomer, with the ribosome-binding motif in close contact with one of the arms.
Structure of the TF dimer in solution. a Flowchart for structural model determination of TF dimer. Structural models are indicated in orange boxes. Experimental data contributions are indicated in green boxes. Software packages are identified in purple boxes. b Lowest energy structures from the two clusters obtained with HADDOCK docking based on chemical shift perturbation data. Both monomers are represented in surface view, and one of them is semi-transparent to show the backbone. c XPLOR-NIH results represented as (I) ensemble of the 10 lowest energy structures with the flexible residue segments in gray, (II) lowest energy structures with both monomers represented in surface view, one of them depicted semi-transparent to show the backbone, and (III) lowest energy structures in surface representation with experimental PRE distances represented, intermolecular (A–B, top) and intramolecular (A–A, bottom)
In the second calculation step, the results from HADDOCK together with the PRE data were used as input for the structure calculation software XPLOR-NIH33 (Supplementary Table 3). Based on the experimental confirmation that the individual domains feature the secondary structure elements known from the crystal structure, a set of distance restraints were introduced to maintain the geometry of these elements, referred to as elastic fold network (EFN) constraints. The medium-range PRE restraints were used as intra- or intermolecular, and the long-range PRE restraints were used as intra- and intermolecular. Thereby, approximately 75% of the 521 PRE restraints were intermolecular, providing sufficient information for the calculation of the arrangement of the protomers. Both calculated structural conformers share the same overall topology and each fits the experimental data similarly (Fig. 5c and Supplementary Table 4). In both models, the SBD preserves the relative position of its architectural elements, thus maintaining the central cavity. Comparing with the crystal structure 1W26, the PPD rotates toward its own SBD by 47° and 24 Å in model 1, and by 17° and 25 Å in model 2, getting close to the RBD of the other monomer. In model 2, the RBD sitting on top of the SBD arms after the docking step has the loop that contains the ribosome-binding site (residues 42–49) located between the SBD arms after the XPLOR annealing. Notably, the conformers each fulfill a large majority, but not exactly all experimental constraints (Supplementary Table 4). We thus propose that these two structures describe two representative conformers in the conformational equilibrium of dimeric TF.
Experimental validation of the models
To validate the structures and their interaction mode, we created variants of TF with amino acid residue mutations at selected positions in the dimer domain interfaces. On the basis of a TF variant with reduced ribosome binding9, we introduced three sets of mutations, either isolated or in combination. The mutation sets mutB and mutC, previously described by Saio et al.21, are located in one arm and the neck region of TF, respectively, while a newly designed set of mutations, mutD, was chosen to be located at the other arm (Supplementary Fig. 7a). The mutant mutB has four hydrophobic amino acids substituted by alanines in Arm2 and mutC has a single hydrophobic amino acid substituted in the SBD cavity. These mutants were previously shown to have reduced chaperone activity in the form of lower anti-aggregation activity21. The mutant mutD has two charged residues mutated to alanines in Arm1. All mutants were expressed, purified, and their monomer–dimer equilibrium analyzed by SEC–MALS (Supplementary Fig. 7 and Supplementary Table 5). All three mutant sets weakened the dimer affinity, with mutC and mutD having a moderate effect, while mutB completely abrogated the dimerization, resulting in monomeric protein even at the highest examined concentration. As expected from their distant location in the structure, the mutation sets mutC and mutD showed additive effects in the dimerization as their joint incorporation lead to a further weakened affinity (Supplementary Table 5). Finally, an inspection of the 2D [15N,1H]-TROSY NMR spectra of the mutant TF proteins showed that the monomerization directly lead to the appearance of the resonance peaks of the RBD (Supplementary Fig. 8), in full agreement with the finding that the line-broadening of the RBD is caused by the exchange dynamics of the RBD inside the SBD cavity. In the spectra of mutB and mutB + mutC, all resonances of the RBD, except G95 and A27, could be identified; the latter resonances are however already quite weak in isolated RBD. This appearance of RDB signals is highlighted for residue Ile19 (Supplementary Fig. 8). Overall, on the one hand, the structural location of the mutation sets validates the contact sites of the dimer and thus our structural conformers. On the other hand, the observation that the same mutations which are known to decrease the chaperoning activity of TF also lead to monomerization, confirms that the dimerization between two TF molecules arises from client-like in trans self-interactions.
The experiments presented in this work have resolved the long-standing question about the spatial arrangement of the dimeric form of E. coli TF in solution. The dimer arrangement is dynamic, with the two RBD domains populating a conformational ensemble in the center of the complex. The arrangement results from intermolecular in trans interactions of the TF client-binding site with the RBD. In the absence of clients, TF is in three-state equilibrium between a ribosome-bound, a monomeric, and a dimeric form (Fig. 6). The structure of the dimer is a dynamic conformational equilibrium. Importantly, the dimeric structure provides an explanation why the TF dimer, which has an apparent dimeric affinity of 2.5 μM, can be monomerized by clients with weaker affinities, such as different PhoA-derived model substrates with affinities in the range ∼2–14 μM21. Since the dimerization affinity results from two weaker interactions between the SBDs of each protomer and the RBDs of the respective other by avidity, monomerization can be readily achieved by a client with weaker global, but higher local affinity. Clients with weaker affinity than the local interaction will however not bind. The dimeric state thus also provides a selectivity filter for very weakly interacting clients, protecting the TF client sites from promiscuous binding.
Equilibrium and frustration of the TF dimer in solution. a TF dimer is highly dynamic and is in equilibrium in solution with its monomeric form and with its ribosome-bound form. The ribosome is represented in gray. b Frustration analysis of TF. Local frustration for TF crystal structure was calculated with the online tool Protein Frustratometer 234 (PDB 1W26) and is plotted on TF crystal structure (left) and on the dimer structural models (right). Minimally frustrated interactions are depicted as green lines, highly frustrated interactions as red lines
Our methodological approach to describe the structure of the TF dimer is adapted to the dynamicity of the complex, where the RBD adopts a dynamic ensemble state, rather than a single conformation. Intermolecular nuclear Overhauser effects (NOEs) of dynamic ensembles can be difficult or impossible to interpret35, and in such situations, PREs are thus the method of choice to obtain intermolecular spatial correlations. Importantly, despite the unobservability of the RBD by NMR, positioning paramagnetic probes in this domain allowed measuring intermolecular distances between this otherwise invisible domain and the others. Together, the CSP and PRE data then allowed the determination of two structural models of the TF dimer in solution using the software packages HADDOCK and XPLOR-NIH. Refinement of the docking models with restraints that maintain the fold of each domain led to two models that have overall similarity and that each fulfill a large majority, but not exactly all experimental constraints. The two conformers are thus a first-order approximation to describe the conformational ensemble of TF dimer. With further experimental data that may potentially be available from additional experiments, it may become interesting to calculate a refined ensemble of the TF dimer with more representative conformers in the future.
When comparing the dimer structure to the available crystal structures, a similarity to the arrangement of the holo structure of T. maritima TF is directly observed17. In that structure, two molecules of TF associate and bind two natively folded molecules of the substrate S7, one residing inside each SBD cavity. It was also observed that in both apo structures from T. maritima and E. coli, the RBD of a symmetry-related molecule is bound within the SBD cavity. The observed flexibility, together with the dynamic behavior at the interface, may provide a rationale for why it has so far not been possible to crystallize the dimeric form in the biologically relevant conformation. Furthermore, the high flexibility of the linker between SBD and PPD observed by the conformers is in full agreement with a recent cryo-EM study of TF bound to ribosomes and nascent chains, where the position of the PPD had to be adjusted by 24° rotation toward the SBD to fit on the density map20, as well as with recent molecular dynamics studies showing that the domains maintain their secondary structure during simulations, but that the linkers between the domains are quite flexible36,37,38.
A recent study on the mechanism of recognition between multiple chaperones and the client protein Im7 indicated that chaperones identify locally frustrated regions on client proteins39. We analyzed the frustration of the TF crystal structure34 to rationalize why TF forms dimers, and why the RBD binds in the substrate binding cavity (Fig. 6b). Several distinct regions of the protein are shown to be highly frustrated, namely the RBD loop containing the signature motif, the tips of the SBD arms, and the linker between the SBD and the PPD. Intriguingly, in the structural models of dimer, these regions all interact with each other, suggesting that the release of frustration energy is a driving force behind the TF dimer formation (Fig. 6b). This force contains both hydrophobic and electrostatic components, in agreement with our salt-concentration-dependent SEC–MALS experiments. TF recognizes the frustrated RBD in the absence of the ribosome as if it was a client protein. The in trans self-interaction of TF thus follows general laws of chaperone–client interactions.
Protein preparation
TF (full length, residues 1–432) was cloned from E. coli genomic DNA with NdeI and NotI restriction sites and ligated into a pET28b expression vector containing a thrombin-cleavable N-terminal His tag. All primer sequences used in this work are shown in Supplementary Table 6. RBD (residues 1–117) was constructed by introducing a stop codon at position 118 by site-directed mutagenesis. The remaining constructs, SBD (114–148/247–432), PPD (149–249), RBD–SBD (1–148/247–432), and SBD–PPD (114–432), were prepared by restriction-free cloning40. The thrombin cleavage site was mutated for a TEV cleavage site by site-directed mutagenesis. BL21 (λ DE3) pLysS (Novagen) cells were transformed with the plasmids and grown at 37 °C in medium containing 30 μg ml−1 kanamycin to OD600 = 0.5–0.6 and then for 30 min at 25 °C. Expression was induced by 0.4 mM IPTG (Apollo Scientific) and cells were harvested 15–18 h after induction by centrifugation for 15 min at 6000×g. For RBD, BL21 (λ DE3) Lemo (Novagen) cells were used, and the protein was expressed as inclusion bodies. Cells were resuspended in purification buffer (25 mM HEPES pH 7.5, 150 mM NaCl) supplemented with 10 mM imidazole and Complete EDTA-free protease inhibitor (Roche), lysed by French press, and centrifuged for 45 min at 38,000×g at 4 °C. The supernatant was applied to a Ni-HisTrap column (GE Healthcare) and eluted with an imidazole gradient in purification buffer with 500 mM NaCl. The proteins eluted at 80–150 mM imidazole. The fractions containing the protein were dialyzed against purification buffer and subsequently denatured by the addition of 6 M guanidine hydrochloride (Gdm-HCl). For the RBD, the pellet containing the inclusion bodies after cells lysis was resuspended in purification buffer supplemented with 6 M Gdm-HCl. The denatured proteins were applied to Ni2+-beads (Genscript) and incubated for 1 h under continuous shaking. The resin was extensively washed with purification buffer with 6 M Gdm-HCl and the proteins eluted with the same buffer containing 200 mM imidazole. Eluted proteins were refolded by dialysis against 50 mM TRIS pH 8. Then, 1 mM DTT and 0.5 mM EDTA were added to the samples for TEV-protease cleavage overnight at 4 °C (ratio TEV:protein 1:30 mg). After cleavage, proteins were dialyzed against purification buffer, denatured by the addition of 6 M Gdm-HCl, and further applied to Ni-charged beads to separate from the TEV protease and the cleaved tag. After incubation for 1 h, the flow-through and wash containing the cleaved protein were refolded by dialysis in purification buffer. Proteins were concentrated by ultracentrifugation (Vivaspin concentrators, Sartorius) before being applied to a gel filtration column (HiLoad 16/600 Superdex 200 pg or Superdex 75 pg, GE Healthcare) equilibrated with 20 mM K-phosphate pH 6.5, 100 mM KCl, 0.5 mM EDTA (sample buffer). For NMR experiments, the proteins were expressed in M9 minimal media containing the desired isotopes (H2O or D2O supplemented with 15NH4Cl and D-glucose for double labeling or (2H,13C)-D-glucose for triple labeling).
Random spin labeling of lysine residues
For spin labeling of the ε-amino groups with OXYL-1-NHS (1-oxyl-2,2,5,5-tetramethylpyrroline-3-carboxylate-N-hydroximide ester; Toronto Research Chemicals), TF was exchanged to spin-labeling buffer (10 mM sodium carbonate, pH 9.3) by using a PD-10 desalting column (GE Healthcare) according to the manufacturer's instructions. A 10-fold molar excess of OXYL-1-NHS dissolved in DMSO was directly added to the protein solution, followed by incubation in the dark for 1 h at room temperature and additionally 2 h at 4 °C. To remove excess spin label and to exchange the samples into sample buffer, samples were washed with 20 volumes of sample buffer using Vivaspin concentrators (Sartorius, MWCO 30 kDa).
Preparation of TF mutants
The single-point mutations S30C, V49C, S61C, S72C, A223C, and E326C for PRE measurements were introduced in the TF wild-type sequence by site-directed mutagenesis. The expression and purification of the mutant cysteine-containing proteins was performed as described for the wild-type protein, except that 1 mM DTT was added for the refolding and to the sample buffer for gel filtration. For validation of the structural models, the TF(3A) variant TF(F44A, R45A, K46A), which is deficient of the ribosome interaction9, was chosen as background. On this basis, three sets of mutations—mutB, mutC, mutD—were introduced, either as single sets or in combination. The mutation sets mutB (M374A, Y378A, V384A, F387A) and mutC (M140E) had previously been designed and characterized21, and the mutation set mutD (R316A, R321A) was newly designed.
Spin labeling of cysteine mutants
The cysteine mutants of TF were spin labeled with MTSL ((1-oxyl-2,2,5,5-tetramethyl-Δ3-pyrroline-3-methyl)-methanethiosulfonate; Toronto Research Chemicals). Protein solution in sample buffer with DTT was exchanged to sample buffer with 500 mM KCl and without DTT using PD-10 desalting columns (GE Healthcare). A 10-fold molar excess of MTSL dissolved in acetonitrile was added to the protein and incubated for 1 h on ice and then overnight at room temperature, always in the dark. To remove unreacted MTSL, the buffer was exchanged to sample buffer. Sodium ascorbate was used for the reduction of the spin label in the NMR tube to a final concentration of 5 mM from a 500 mM stock solution.
SEC–MALS
For SEC–MALS measurements, samples were separated at 6 °C for the individual domains, and at 26 °C for the full-length mutants, in sample buffer using a GE Healthcare Superdex 200 5/150 GL column or a Wyatt silica SEC column (4.6 × 300 mm, 5 μm bead, 150 Å pore) run on an Agilent 1260 HPLC. Elution was monitored by three detectors in series: (1) an Agilent multi-wavelength absorbance detector (absorbance at 280 and 254 nm), (2) a Wyatt Heleos II 8+ multi-angle light scattering (MALS) detector, and (3) a Wyatt Optilab rEX differential refractive index (dRI) detector. The columns were equilibrated overnight in the running buffer to obtain stable baseline signals from the detectors before data collection. Molar mass, elution concentration, and mass distributions of the samples were calculated using the ASTRA 6 software (Wyatt Technology). Inter-detector delay volumes, band broadening corrections, and light-scattering detector normalization were calibrated regularly using a 2 mg ml−1 BSA solution (ThermoPierce) and standard protocols in ASTRA 6.
For all constructs, essentially identical mass values were calculated using either dRI or absorbance signals to determine protein concentration, at concentrations where the comparison could be made. However, for the full-length TF, full-length mutants, RBD–SBD, SBD–PPD, and SBD concentration series, absorbance data were used in preference for molar mass calculations due to greater signal-to-noise at low concentration. In these cases, band-broadening corrections for the absorbance signal were obtained from the BSA monomer peak using the MALS detector as the reference. For the RBD and PPD series that used higher loading concentrations, dRI data were used to measure concentration as the absorbance signal was outside the linear range of the detector. In these cases, band-broadening corrections for the UV and MALS detector signals were obtained from the BSA monomer peak using the dRI detector as the reference.
For all constructs, the elution profiles exhibited a single major peak. For full-length TF; the full-length mutants 3A, mutC, and mutD; RBD–SBD and RBD sample series, the elution volume of the peak and the SEC–MALS-derived weight-averaged molar mass (M w) changed in a concentration-dependent manner. In all cases, M w varied within the limits of the molar mass expected for the monomer and dimer, consistent with the presence of a fast-exchanging monomer–dimer equilibrium. In these cases, values of M w and elution concentration at the top of the elution peak were used to fit the dissociation constant (K D) for each construct, assuming a fast monomer–dimer equilibrium (Eq. 1).
$$M_{\rm W} = 2M - M\frac{{ - K_{\rm D} + \sqrt {K_{\rm D}^2 + 8\left[ M \right]K_{\rm D}} }}{{4\left[ M \right]}},$$
where M is the molar mass of the monomer and [M] the molar concentration of the sample (in terms of monomer) as it passes through the MALS detector after elution from the column41.
For the series where concentration was measured by absorbance, the concentration at the MALS detector could be obtained directly from the absorbance signal after band-broadening correction (with the MALS detector as the reference instrument). However, for the RBD concentration series, the concentration measured at the dRI detector was lower than that at the MALS detector, due to significant band-broadening at the dRI detector. To obtain the correct concentration at the MALS detector for fitting the RBD data to Eq. (1), the elution concentrations calculated from the dRI data were multiplied by an experimentally determined factor of 1.26. 95% confidence intervals were determined from the fitting error in GraphPad Prism.
Sedimentation equilibrium runs were conducted at 6 °C using An-60Ti rotor in a Beckman Coulter XL-I analytical ultracentrifuge, for full-length TF and RBD–SBD, which exhibited dimer formation in SEC–MALS experiments. Samples were prepared at 12 μM for TF as well as at 48 and 135 μM for RBD–SBD, and the sample volume was 170 µl. Data were acquired at three speeds, 10,000 rpm (7800 g), 16,000 rpm (20,000 g), and 20,000 rpm (31,000 g), with detection by radial absorbance scanning at 250 and 280 nm. At each, speed centrifugation was allowed to proceed until sedimentation equilibrium was attained, as judged by pairwise comparison of scans taken at 6 h intervals, using the approach to equilibrium function in the program Sedfit42. Equilibrium absorbance scans at three speeds (for both constructs) and two concentrations (for RBD–SBD) were globally fitted to a monomer–dimer model to obtain the dimer dissociation constant K D using the program Sedphat43. Monomer masses and molar extinction coefficients at 280 nm for each construct were constrained to values calculated from their amino acid sequence, and the molar extinction coefficient for RBD–SBD at 250 nm was calculated from an absorbance spectrum and the extinction coefficient at 280 nm. Buffer density was calculated using Sednterp. The total concentration, bottom positions, RI noise, and baseline were globally fitted for each sample at multiple speeds, with mass conservation constraints employed. The globally fitted concentrations were in good agreement with the loading concentrations. 95% confidence intervals were determined using the automatic confidence interval search with projection method.
All NMR samples were prepared in sample buffer. NMR experiments were recorded at 25 °C on Bruker Ascend 700 MHz and Bruker Avance III 900 MHz spectrometers, equipped with cryogenic triple-resonance probes. NMR data were processed with PROSA44 and analyzed with CARA and XEASY45. For sequence-specific backbone resonance assignment 2D [15N,1H]-TROSY46 and 3D TROSY-HNCACB47 experiments were acquired for [U-2H,15N,13C] TF, [U-2H,15N,13C] SBD–PPD, [U-2H,15N,13C] SBD, and [U-2H,15N,13C] RBD. For [U-2H,15N,13C] SBD–PPD, 3D TROSY-HNCA was also acquired, and for [U-2H,15N,13C] SBD, 3D TROSY-HNCA, 3D TROSY-HNCO, and 3D TROSY-HNCACO47. Assignment of [U-15N] PPD was obtained from the experiments for [U-2H,15N,13C] SBD–PPD. Secondary chemical shifts were calculated relative to the random coil values48. Titrations followed by 2D [15N,1H]-TROSY were performed between the monomeric constructs RBD, SBD, PPD, and SBD–PPD. The initial concentrations were 250 μM for SBD–PPD and 100 μM for the other constructs. The chemical shift changes of the amide resonances in the 2D [15N,1H]-TROSY spectra were calculated according to Eq. 2:
$${\mathrm{\Delta }}\delta \,{\rm HN} = \sqrt {\left( {{\mathrm{\Delta }}\delta \left( {\,^1{\rm H}}\right)} \right)^2 + \left( {0.2 \cdot {\mathrm{\Delta }}\delta \left( {\,^{15}{\rm N}} \right)} \right)^2}.$$
PRE experiments were performed on [U-2H,15N] TF at 300 μM. A 2D [15N,1H]-TROSY spectrum was first measured in the paramagnetic state, and after addition of ascorbate to the sample, the diamagnetic reference was measured (Supplementary Fig. 9). The volume of well-resolved peaks was measured with NEASY and used to calculate PRE rates (Eq. 3)49 that were further converted into distances (Eq. 4)50:
$$\frac{{V_{\rm ox}}}{{V_{\rm red}}} = e^{( - {\rm PRE}{\cdot}2\cdot\tau _{{\rm INEPT}})},$$
$$r = \root {6} \of {{\frac{K}{{{\rm PRE}}}\left( {4\tau _{\rm c} + \frac{{3\tau _{\rm c}}}{{1 + \omega _{\rm h}^2\tau _{\rm c}^2}}} \right)}},$$
where r is the distance between the electron and nuclear spins, τ c is the correlation time for the electron–nuclear interaction (the approximation was made that τ c is equal to the global correlation time of the protein determined by [15N,1H]-TRACT51, 42 ns), ω h is the Larmor frequency of the nuclear spin (proton), and K is 1.23 · 10−32 cm6 s−2 50. The secondary structure elements in solution were determined with the CSI 3.0 web server using Cα and Cβ chemical shifts13.
Measuring of the lifetime of the TF dimer
About 100 μl of 100 µM [U-2H,15N] TF were mixed at t = 0 with 20 μl of D2O and 100 μl OXYL-1-NHS-labeled TF (100 μM). In real time after the mixing, single δ1[15N]-1D cross sections of 2D [15N,1H]-TROSY spectra with an experimental time of 60 s were acquired. The measurements were performed in 3 mm NMR tubes on an 800 MHz Bruker AVANCE III HD spectrometer equipped with 3 mm CP-TCI probe. The values for t 0 (experimental dead time between mixing and acquisition of first data point) were 140 s (20 °C), 150 s (25 °C, 30 °C), and 160 s (15 °C, 35 °C), respectively, for the temperature-dependent measurements. For analysis of the data, the 1D proton signal intensity was integrated over the region 7.0–9.5 ppm using Topspin 3.5 (Bruker BioSpin). The data were fitted by least-square minimization to the equation \(I_\Delta \left( t \right) = I\left( t \right) - I_\infty = \left( {I_0 - I_\infty } \right) \cdot \exp \left( { - t/\tau } \right)\), where I 0 and I ∞ are the NMR signal intensities at t = 0 and t = ∞, respectively, and τ is the global lifetime of the dimer (τ = k off −1). See Supplementary Note 1 and Supplementary Figure 10 for a derivation of this equation. Reference experiments, in which non-spin-labeled TF (instead of spin-labeled TF) was added to [U-2H,15N] TF, showed a constant signal intensity over the entire timescale, validating that the observed signal loss in the experiment with the spin label is due to the intermolecular PRE in the mixed dimer, caused by disassembly of the [U-2H,15N] TF dimer and the reassembly of the mixed dimer.
The calculation of the structural model was performed in two phases: docking of the two monomers of TF using the CSP data and rearrangement of the individual domains using PRE distance restraints. In the first phase, the HADDOCK web server was used (Supplementary Table 2)30,31,32. The chain A of the E. coli crystal structure (PDB 1W26) was used for both monomers8. The active residues were defined as the residues having resonances that disappear or that had chemical shift changes above the mean plus one standard deviation corrected to zero52 in the titrations between RBD and SBD (Fig. 3). Passive residues were automatically defined by HADDOCK. Non-crystallographic symmetry (NCS) restraints and C2 symmetry restraints were imposed for all residues. Standard docking parameters were used. HADDOCK clustered 162 out of 200 calculated structures into 12 clusters (Supplementary Table 2). The top 2 HADDOCK models were used as input for XPLOR-NIH33 in the second phase.
EFN restraints were created with crystallography and NMR system (CNS) for each monomer in both HADDOCK models, following the selection rules of the DEN (deformable elastic network) method53, 54. EFN restraints differ from DEN in that they are not re-adjusted in the course of the structure calculation trajectory. The EFN restraints maintain the domains folded by constraining distances between 3 to 15 Å between atoms with a maximum sequence separation of 10 residues, and leave the linkers between the domains flexible. Based on published molecular dynamics simulations, the flexible linkers were defined for the residues 112–115, 149–155, and 241–26138. PRE restraints were introduced as distances between the Cβ atom of the mutated residue and the amide proton detected in the spectra. For each spectrum, an error was determined from the noise (as the standard deviation of the integrals of 12 peaks in the noise), and error propagation was applied to calculate errors for the final distances. The PRE restraints were divided into three classes, according to the ratio between the volume of the peaks in the paramagnetic and diamagnetic samples. For resonances with a volume ratio between 0.15 and 0.85, the calculated distance was restrained with ±4 Å margins. Only distances for which the propagated error was less than 1 Å were considered. For resonances with a volume ratio <0.15 or for which the resonances are broadened beyond detection, only an upper limit was defined. This limit was calculated for a ratio of 0.15 (16.12 Å) and given an upper limit of 4 Å. This selection resulted in 171 PRE distance restraints (Supplementary Table 2). Since these restraints can be intra- or intermolecular, these were submitted to XPLOR-NIH as ambiguous restraints. Resonances with ratio >0.85 were restrained only with a lower limit as the distance corresponding to ratio 0.85 (24.28 Å) with a lower limit of 4 Å. As no effect was observed for these resonances, these were restrained both as intra- and intermolecular (long-range restraints). The ±4 Å in distance constraints were previously shown to be sufficient to account for the flexibility of the MTSL tag and possible errors from the use of a global correlation time and the approximation of the intrinsic relaxation rates50. The structure calculation protocol was derived from the XPLOR-NIH gb1/anneal.py template script. The simulation was repeated 100 times and the 10 lowest energy structures were further refined in explicit solvent using gb1/wrefine.py script (Supplementary Table 3). The calculations were performed in torsion angle space except for initial and final minimization. The potential energy of the TF dimer was modeled by standard XPLOR-NIH bonded (bond, angle, dihedral and improper terms) and non-bonded (van der Waals term) potentials. The symmetry of the dimer was imposed by using distance symmetry and NCS restrains. The local geometry of TF domains was maintained by the EFN distance restraints. Water refinement was done using OPLSX parameters and the XPLOR-NIH Ramachandran potential (backbone dihedral angle database).
The two TF dimer conformers have been deposited in the PDB as entries 5OWI and 5OWJ and the NMR resonance assignments to the BMRB with accession codes 27239 and 27242. All other relevant data are available from the corresponding author upon reasonable request.
Kim, Y. E., Hipp, M. S., Bracher, A., Hayer-Hartl, M. & Hartl, F. U. Molecular chaperone functions in protein folding and proteostasis. Annu. Rev. Biochem. 82, 323–355 (2013).
Balchin, D., Hayer-Hartl, M. & Hartl, F. U. In vivo aspects of protein folding and quality control. Science 353, aac4354 (2016).
Hartl, F. U., Bracher, A. & Hayer-Hartl, M. Molecular chaperones in protein folding and proteostasis. Nature 475, 324–332 (2011).
Preissler, S. & Deuerling, E. Ribosome-associated chaperones as key players in proteostasis. Trends Biochem. Sci. 37, 274–283 (2012).
Hoffmann, A., Bukau, B. & Kramer, G. Structure and function of the molecular chaperone Trigger Factor. Biochim. Biophys. Acta 1803, 650–661 (2010).
Deuerling, E., Schulze-Specking, A., Tomoyasu, T., Mogk, A. & Bukau, B. Trigger Factor and DnaK cooperate in folding of newly synthesized proteins. Nature 400, 693–696 (1999).
Oh, E. et al. Selective ribosome profiling reveals the cotranslational chaperone action of Trigger Factor in vivo. Cell 147, 1295–1308 (2011).
Ferbitz, L. et al. Trigger Factor in complex with the ribosome forms a molecular cradle for nascent proteins. Nature 431, 590–596 (2004).
Kramer, G. et al. L23 protein functions as a chaperone docking site on the ribosome. Nature 419, 171–174 (2002).
Vogtherr, M. et al. NMR solution structure and dynamics of the peptidyl-prolyl cis-trans isomerase domain of the Trigger Factor from Mycoplasma genitalium compared to FK506-binding protein. J. Mol. Biol. 318, 1097–1115 (2002).
Yao, Y., Bhabha, G., Kroon, G., Landes, M. & Dyson, H. J. Structure discrimination for the C-terminal domain of Escherichia coli Trigger Factor in solution. J. Biomol. NMR 40, 23–30 (2008).
Touw, W. G. et al. A series of PDB-related databanks for everyday needs. Nucleic Acids Res. 43, D364–D368 (2015).
Hafsa, N. E., Arndt, D. & Wishart, D. S. CSI 3.0: a web server for identifying secondary and super-secondary structure in proteins using NMR chemical shifts. Nucleic Acids Res. 43, W370–W377 (2015).
Kaiser, C. M. et al. Real-time observation of Trigger Factor function on translating ribosomes. Nature 444, 455–460 (2006).
Patzelt, H. et al. Three-state equilibrium of Escherichia coli Trigger Factor. Biol. Chem. 383, 1611–1619 (2002).
Ludlam, A. V., Moore, B. A. & Xu, Z. The crystal structure of ribosomal chaperone Trigger Factor from Vibrio cholerae. Proc. Natl Acad. Sci. USA 101, 13436–13441 (2004).
Martinez-Hackert, E. & Hendrickson, W. A. Promiscuous substrate recognition in folding and assembly activities of the Trigger Factor chaperone. Cell 138, 923–934 (2009).
Kristensen, O. & Gajhede, M. Chaperone binding at the ribosomal exit tunnel. Structure 11, 1547–1556 (2003).
Martinez-Hackert, E. & Hendrickson, W. A. Structures of and interactions between domains of Trigger Factor from Thermotoga maritima. Acta Crystallogr. D 63, 536–547 (2007).
Merz, F. et al. Molecular mechanism and structure of Trigger Factor bound to the translating ribosome. EMBO J. 27, 1622–1632 (2008).
Saio, T., Guan, X., Rossi, P., Economou, A. & Kalodimos, C. G. Structural basis for protein antiaggregation activity of the Trigger Factor chaperone. Science 344, 1250494 (2014).
Shi, Y., Yu, L., Kihara, H. & Zhou, J. M. C-terminal 13-residue truncation induces compact trigger factor conformation and severely impairs its dimerization ability. Protein Pept. Lett. 21, 476–482 (2014).
Zeng, L. L., Yu, L., Li, Z. Y., Perrett, S. & Zhou, J. M. Effect of C-terminal truncation on the molecular chaperone function and dimerization of Escherichia coli Trigger Factor. Biochimie 88, 613–619 (2006).
Lakshmipathy, S. K. et al. Identification of nascent chain interaction sites on Trigger Factor. J. Biol. Chem. 282, 12186–12193 (2007).
Hsu, S. T. & Dobson, C. M. 1H, 15N and 13C assignments of the dimeric ribosome binding domain of Trigger Factor from Escherichia coli. Biomol. NMR Assign. 3, 17–20 (2009).
Rathore, Y. S., Dhoke, R. R., Badmalia, M., Sagar, A. & Ashish SAXS data based global shape analysis of Trigger Factor (TF) proteins from E. coli, V. cholerae, and P. frigidicola: resolving the debate on the nature of monomeric and dimeric forms. J. Phys. Chem. B 119, 6101–6112 (2015).
McConnell, H. M. Reaction rates by nuclear magnetic resonance. J. Chem. Phys. 28, 430–431 (1958).
Wüthrich, K. NMR assignments as a basis for structural characterization of denatured states of globular proteins. Curr. Opin. Struct. Biol. 4, 93–99 (1994).
Rumpel, S., Becker, S. & Zweckstetter, M. High-resolution structure determination of the CylR2 homodimer using paramagnetic relaxation enhancement and structure-based prediction of molecular alignment. J. Biomol. NMR 40, 1–13 (2008).
de Vries, S. J., van Dijk, M. & Bonvin, A. M. The HADDOCK web server for data-driven biomolecular docking. Nat. Protoc. 5, 883–897 (2010).
van Zundert, G. C. et al. The Haddock2.2 web server: user-friendly integrative modeling of biomolecular complexes. J. Mol. Biol. 428, 720–725 (2016).
Wassenaar, T. A. et al. WeNMR: structural biology on the grid. J. Grid Comput. 10, 743–767 (2012).
Schwieters, C. D., Kuszewski, J. J., Tjandra, N. & Clore, G. M. The Xplor-NIH NMR molecular structure determination package. J. Magn. Reson. 160, 65–73 (2003).
Parra, R. G. et al. Protein Frustratometer 2: a tool to localize energetic frustration in protein molecules, now with electrostatics. Nucleic Acids Res. 44, W356–W360 (2016).
Callon, M., Burmann, B. M. & Hiller, S. Structural mapping of a chaperone-substrate interaction surface. Angew. Chem. Int. Ed. Engl. 53, 5069–5072 (2014).
Deeng, J. et al. Dynamic behavior of Trigger Factor on the ribosome. J. Mol. Biol. 428, 3588–3602 (2016).
Singhal, K., Vreede, J., Mashaghi, A., Tans, S. J. & Bolhuis, P. G. Hydrophobic collapse of Trigger Factor monomer in solution. PLoS. ONE 8, e59683 (2013).
Thomas, A. S., Mao, S. & Elcock, A. H. Flexibility of the bacterial chaperone Trigger Factor in microsecond-timescale molecular dynamics simulations. Biophys. J. 105, 732–744 (2013).
He, L., Sharpe, T., Mazur, A. & Hiller, S. A molecular mechanism of chaperone-client recognition. Sci. Adv. 2, e1601625 (2016).
Article ADS PubMed PubMed Central Google Scholar
Bond, S. R. & Naus, C. C. RF-Cloning.org: an online tool for the design of restriction-free cloning projects. Nucleic Acids Res. 40, W209–W213 (2012).
Benfield, C. T. et al. Mapping the IκB kinase β (IKKβ)-binding interface of the B14 protein, a vaccinia virus inhibitor of IKKβ-mediated activation of nuclear factor κB. J. Biol. Chem. 286, 20727–20735 (2011).
Schuck, P. Size-distribution analysis of macromolecules by sedimentation velocity ultracentrifugation and lamm equation modeling. Biophys. J. 78, 1606–1619 (2000).
Vistica, J. et al. Sedimentation equilibrium analysis of protein interactions with global implicit mass conservation constraints and systematic noise decomposition. Anal. Biochem. 326, 234–256 (2004).
Güntert, P., Dötsch, V., Wider, G. & Wüthrich, K. Processing of multi-dimensional NMR data with the new software PROSA. J. Biomol. NMR 2, 619–629 (1992).
Bartels, C., Xia, T. H., Billeter, M., Güntert, P. & Wüthrich, K. The program XEASY for computer-supported NMR spectral analysis of biological macromolecules. J. Biomol. NMR 6, 1–10 (1995).
Pervushin, K., Riek, R., Wider, G. & Wüthrich, K. Attenuated T2 relaxation by mutual cancellation of dipole-dipole coupling and chemical shift anisotropy indicates an avenue to NMR structures of very large biological macromolecules in solution. Proc. Natl Acad. Sci. USA 94, 12366–12371 (1997).
Salzmann, M., Pervushin, K., Wider, G., Senn, H. & Wüthrich, K. TROSY in triple-resonance experiments: new perspectives for sequential NMR assignment of large proteins. Proc. Natl Acad. Sci. USA 95, 13585–13590 (1998).
Kjaergaard, M. & Poulsen, F. M. Sequence correction of random coil chemical shifts: correlation between neighbor correction factors and changes in the Ramachandran distribution. J. Biomol. NMR 50, 157–165 (2011).
Xue, Y. et al. Paramagnetic relaxation enhancements in unfolded proteins: theory and application to drkN SH3 domain. Protein Sci. 18, 1401–1424 (2009).
Battiste, J. L. & Wagner, G. Utilization of site-directed spin labeling and high-resolution heteronuclear nuclear magnetic resonance for global fold determination of large proteins with limited nuclear overhauser effect data. Biochemistry 39, 5355–5365 (2000).
Lee, D., Hilty, C., Wider, G. & Wüthrich, K. Effective rotational correlation times of proteins from NMR relaxation interference. J. Magn. Reson. 178, 72–76 (2006).
Schumann, F. H. et al. Combined chemical shift changes and amino acid specific chemical shift mapping of protein-protein interactions. J. Biomol. NMR 39, 275–289 (2007).
Brunger, A. T. Version 1.2 of the Crystallography and NMR system. Nat. Protoc. 2, 2728–2733 (2007).
Brunger, A. T. et al. Crystallography & NMR system: a new software suite for macromolecular structure determination. Acta Crystallogr. D 54, 905–921 (1998).
Funding is acknowledged from the University of Basel "Research Fund Junior Researchers", the Wallenberg Centre of Molecular and Translational Medicine in Gothenburg, and the Swiss National Science Foundation. The calculations were performed at sciCORE scientific computing core facility at the University of Basel. The Swedish NMR Centre of the University of Gothenburg is acknowledged for spectrometer time.
Björn M. Burmann
Present address: Department of Chemistry and Molecular Biology, Wallenberg Centre of Molecular and Translational Medicine, University of Gothenburg, 405 30, Göteborg, Sweden
Leonor Morgado and Björn M. Burmann contributed equally to this work.
Biozentrum, University of Basel, Klingelbergstrasse 70, 4056, Basel, Switzerland
Leonor Morgado, Björn M. Burmann, Timothy Sharpe, Adam Mazur & Sebastian Hiller
Leonor Morgado
Timothy Sharpe
Adam Mazur
Sebastian Hiller
L.M. and T.S. performed biophysical experiments. L.M. and B.M.B. conducted all other experimental work. L.M. and A.M. performed structure calculations. All authors analyzed and discussed the data. L.M., B.M.B., and S.H. designed the study and wrote the paper.
Correspondence to Sebastian Hiller.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Morgado, L., Burmann, B.M., Sharpe, T. et al. The dynamic dimer structure of the chaperone Trigger Factor. Nat Commun 8, 1992 (2017). https://doi.org/10.1038/s41467-017-02196-7
Trigger factor both holds and folds its client proteins
Kevin Wu
Thomas C. Minshull
James C. A. Bardwell
Structure determination of high-energy states in a dynamic protein ensemble
John B. Stiller
Renee Otten
Dorothee Kern
Altered ceramide metabolism is a feature in the extracellular vesicle-mediated spread of alpha-synuclein in Lewy body disorders
Marzena Kurzawa-Akanbi
Seshu Tammireddy
Christopher M. Morris
Acta Neuropathologica (2021)
Inter-domain dynamics in the chaperone SurA and multi-site binding to its outer membrane protein clients
Antonio N. Calabrese
Bob Schiffrin
Sheena E. Radford
UvrD helicase–RNA polymerase interactions are governed by UvrD's carboxy-terminal Tudor domain
Ashish A. Kawale
|
CommonCrawl
|
symplectic topology and floer homology
Download symplectic topology and floer homology or read online books in PDF, EPUB, Tuebl, and Mobi Format. Click Download or Read Online button to get symplectic topology and floer homology book now. This site is like a library, Use search box in the widget to get ebook that you want.
Symplectic Topology And Floer Homology Volume 1 Symplectic Geometry And Pseudoholomorphic Curves
Author by : Yong-Geun Oh
Description : Published in two volumes, this is the first book to provide a thorough and systematic explanation of symplectic topology, and the analytical details and techniques used in applying the machinery arising from Floer theory as a whole. Volume 1 covers the basic materials of Hamiltonian dynamics and symplectic geometry and the analytic foundations of Gromov's pseudoholomorphic curve theory. One novel aspect of this treatment is the uniform treatment of both closed and open cases and a complete proof of the boundary regularity theorem of weak solutions of pseudo-holomorphic curves with totally real boundary conditions. Volume 2 provides a comprehensive introduction to both Hamiltonian Floer theory and Lagrangian Floer theory. Symplectic Topology and Floer Homology is a comprehensive resource suitable for experts and newcomers alike.
Symplectic Topology And Floer Homology Volume 2 Floer Homology And Its Applications
Description : Published in two volumes, this is the first book to provide a thorough and systematic explanation of symplectic topology, and the analytical details and techniques used in applying the machinery arising from Floer theory as a whole. Volume 2 provides a comprehensive introduction to both Hamiltonian Floer theory and Lagrangian Floer theory, including many examples of their applications to various problems in symplectic topology. The first volume covered the basic materials of Hamiltonian dynamics and symplectic geometry and the analytic foundations of Gromov's pseudoholomorphic curve theory. Symplectic Topology and Floer Homology is a comprehensive resource suitable for experts and newcomers alike.
Description : The second part of a two-volume set offering a systematic explanation of symplectic topology. This volume provides a comprehensive introduction to Hamiltonian and Lagrangian Floer theory.
Description : The first part of a two-volume set offering a systematic explanation of symplectic topology. This volume covers the basic materials of Hamiltonian dynamics and symplectic geometry.
Contact And Symplectic Topology
Author by : Frédéric Bourgeois
Description : Symplectic and contact geometry naturally emerged from the mathematical description of classical physics. The discovery of new rigidity phenomena and properties satisfied by these geometric structures launched a new research field worldwide. The intense activity of many European research groups in this field is reflected by the ESF Research Networking Programme "Contact And Symplectic Topology" (CAST). The lectures of the Summer School in Nantes (June 2011) and of the CAST Summer School in Budapest (July 2012) provide a nice panorama of many aspects of the present status of contact and symplectic topology. The notes of the minicourses offer a gentle introduction to topics which have developed in an amazing speed in the recent past. These topics include 3-dimensional and higher dimensional contact topology, Fukaya categories, asymptotically holomorphic methods in contact topology, bordered Floer homology, embedded contact homology, and flexibility results for Stein manifolds.
Symplectic Geometry And Topology
Author by : Yakov Eliashberg
Publisher by : American Mathematical Soc.
Description : Symplectic geometry has its origins as a geometric language for classical mechanics. But it has recently exploded into an independent field interconnected with many other areas of mathematics and physics. The goal of the IAS/Park City Mathematics Institute Graduate Summer School on Symplectic Geometry and Topology was to give an intensive introduction to these exciting areas of current research. Included in this proceedings are lecture notes from the following courses: Introductionto Symplectic Topology by D. McDuff; Holomorphic Curves and Dynamics in Dimension Three by H. Hofer; An Introduction to the Seiberg-Witten Equations on Symplectic Manifolds by C. Taubes; Lectures on Floer Homology by D. Salamon; A Tutorial on Quantum Cohomology by A. Givental; Euler Characteristicsand Lagrangian Intersections by R. MacPherson; Hamiltonian Group Actions and Symplectic Reduction by L. Jeffrey; and Mechanics: Symmetry and Dynamics by J. Marsden. Information for our distributors: Titles in this series are copublished with the Institute for Advanced Study/Park City Mathematics Institute. Members of the Mathematical Association of America (MAA) and the National Council of Teachers of Mathematics (NCTM) receive a 20% discount from list price.
J Holomorphic Curves And Symplectic Topology
Author by : Dusa McDuff
Description : This second edition continues to serve as the definitive source of information about some areas of differential topology ($J$-holomorphic curves) and applications to quantum cohomology. The main goal of the book is to establish the fundamental theorems of the subject in full and rigorous detail. It may also serve as an introduction to current work in symplectic topology. The second edition clarifies various arguments, includes some additional results, and updates the references to recent developments.
Morse Theory And Floer Homology
Author by : Michèle Audin
Description : This book is an introduction to modern methods of symplectic topology. It is devoted to explaining the solution of an important problem originating from classical mechanics: the 'Arnold conjecture', which asserts that the number of 1-periodic trajectories of a non-degenerate Hamiltonian system is bounded below by the dimension of the homology of the underlying manifold. The first part is a thorough introduction to Morse theory, a fundamental tool of differential topology. It defines the Morse complex and the Morse homology, and develops some of their applications. Morse homology also serves a simple model for Floer homology, which is covered in the second part. Floer homology is an infinite-dimensional analogue of Morse homology. Its involvement has been crucial in the recent achievements in symplectic geometry and in particular in the proof of the Arnold conjecture. The building blocks of Floer homology are more intricate and imply the use of more sophisticated analytical methods, all of which are explained in this second part. The three appendices present a few prerequisites in differential geometry, algebraic topology and analysis. The book originated in a graduate course given at Strasbourg University, and contains a large range of figures and exercises. Morse Theory and Floer Homology will be particularly helpful for graduate and postgraduate students.
Low Dimensional And Symplectic Topology
Author by : Georgia) Georgia International Topology Conference (2009 : Athens
Description : Every eight years since 1961, the University of Georgia has hosted a major international topology conference aimed at disseminating important recent results and bringing together researchers at different stages of their careers. This volume contains the proceedings of the 2009 conference, which includes survey and research articles concerning such areas as knot theory, contact and symplectic topology, 3-manifold theory, geometric group theory, and equivariant topology. Among other highlights of the volume, a survey article by Stefan Friedl and Stefano Vidussi provides an accessible treatment of their important proof of Taubes' conjecture on symplectic structures on the product of a 3-manifold and a circle, and an intriguing short article by Dennis Sullivan opens the door to the use of modern algebraic-topological techniques in the study of finite-dimensional models of famously difficult problems in fluid dynamics. Continuing what has become a tradition, this volume contains a report on a problem session held at the conference, discussing a variety of open problems in geometric topology.
Floer Homology Gauge Theory And Low Dimensional Topology
Author by : Clay Mathematics Institute. Summer School
Description : Mathematical gauge theory studies connections on principal bundles, or, more precisely, the solution spaces of certain partial differential equations for such connections. Historically, these equations have come from mathematical physics, and play an important role in the description of the electro-weak and strong nuclear forces. The use of gauge theory as a tool for studying topological properties of four-manifolds was pioneered by the fundamental work of Simon Donaldson in the early 1980s, and was revolutionized by the introduction of the Seiberg-Witten equations in the mid-1990s. Since the birth of the subject, it has retained its close connection with symplectic topology. The analogy between these two fields of study was further underscored by Andreas Floer's construction of an infinite-dimensional variant of Morse theory that applies in two a priori different contexts: either to define symplectic invariants for pairs of Lagrangian submanifolds of a symplectic manifold, or to define topological invariants for three-manifolds, which fit into a framework for calculating invariants for smooth four-manifolds. ``Heegaard Floer homology'', the recently-discovered invariant for three- and four-manifolds, comes from an application of Lagrangian Floer homology to spaces associated to Heegaard diagrams. Although this theory is conjecturally isomorphic to Seiberg-Witten theory, it is more topological and combinatorial in flavor and thus easier to work with in certain contexts. The interaction between gauge theory, low-dimensional topology, and symplectic geometry has led to a number of striking new developments in these fields. The aim of this volume is to introduce graduate students and researchers in other fields to some of these exciting developments, with a special emphasis on the very fruitful interplay between disciplines. This volume is based on lecture courses and advanced seminars given at the 2004 Clay Mathematics Institute Summer School at the Alfred Renyi Institute of Mathematics in Budapest, Hungary. Several of the authors have added a considerable amount of additional material to that presented at the school, and the resulting volume provides a state-of-the-art introduction to current research, covering material from Heegaard Floer homology, contact geometry, smooth four-manifold topology, and symplectic four-manifolds.
Introduction To Symplectic Topology
Description : Over the last number of years powerful new methods in analysis and topology have led to the development of the modern global theory of symplectic topology, including several striking and important results. The first edition of Introduction to Symplectic Topology was published in 1995. The book was the first comprehensive introduction to the subject and became a key text in the area. A significantly revised second edition was published in 1998 introducing new sections and updates on the fast-developing area. This new third edition includes updates and new material to bring the book right up-to-date.
Morse Theoretic Methods In Nonlinear Analysis And In Symplectic Topology
Author by : Paul Biran
Description : The papers collected in this volume are contributions to the 43rd session of the Seminaire ́ de mathematiques ́ superieures ́ (SMS) on "Morse Theoretic Methods in Nonlinear Analysis and Symplectic Topology." This session took place at the Universite ́ de Montreal ́ in July 2004 and was a NATO Advanced Study Institute (ASI). The aim of the ASI was to bring together young researchers from various parts of the world and to present to them some of the most signi cant recent advances in these areas. More than 77 mathematicians from 17 countries followed the 12 series of lectures and participated in the lively exchange of ideas. The lectures covered an ample spectrum of subjects which are re ected in the present volume: Morse theory and related techniques in in nite dim- sional spaces, Floer theory and its recent extensions and generalizations, Morse and Floer theory in relation to string topology, generating functions, structure of the group of Hamiltonian di?eomorphisms and related dynamical problems, applications to robotics and many others. We thank all our main speakers for their stimulating lectures and all p- ticipants for creating a friendly atmosphere during the meeting. We also thank Ms. Diane Belanger ́ , our administrative assistant, for her help with the organi- tion and Mr. Andre ́ Montpetit, our technical editor, for his help in the preparation of the volume.
Symplectic Invariants And Hamiltonian Dynamics
Author by : Helmut Hofer
Publisher by : Birkhäuser
Description : Analysis of an old variational principal in classical mechanics has established global periodic phenomena in Hamiltonian systems. One of the links is a class of sympletic invariants, called sympletic capacities, and these invariants are the main theme of this book. Topics covered include basic sympletic geometry, sympletic capacities and rigidity, sympletic fixed point theory, and a survey on Floer homology and sympletic homology.
Contact And Symplectic Geometry
Author by : Charles Benedict Thomas
Description : This volume presents some of the lectures and research during the special programme held at the Newton Institute in 1994. The two parts each contain a mix of substantial expository articles and research papers that outline important and topical ideas. Many of the results have not been presented before, and the lectures on Floer homology is the first avaliable in book form.Symplectic methods are one of the most active areas of research in mathematics currently, and this volume will attract much attention.
Spectral Invariants With Bulk Quasi Morphisms And Lagrangian Floer Theory
Author by : Kenji Fukaya
Description : In this paper the authors first develop various enhancements of the theory of spectral invariants of Hamiltonian Floer homology and of Entov-Polterovich theory of spectral symplectic quasi-states and quasi-morphisms by incorporating bulk deformations, i.e., deformations by ambient cycles of symplectic manifolds, of the Floer homology and quantum cohomology. Essentially the same kind of construction is independently carried out by Usher in a slightly less general context. Then the authors explore various applications of these enhancements to the symplectic topology, especially new construction of symplectic quasi-states, quasi-morphisms and new Lagrangian intersection results on toric and non-toric manifolds. The most novel part of this paper is its use of open-closed Gromov-Witten-Floer theory and its variant involving closed orbits of periodic Hamiltonian system to connect spectral invariants (with bulk deformation), symplectic quasi-states, quasi-morphism to the Lagrangian Floer theory (with bulk deformation). The authors use this open-closed Gromov-Witten-Floer theory to produce new examples. Using the calculation of Lagrangian Floer cohomology with bulk, they produce examples of compact symplectic manifolds which admits uncountably many independent quasi-morphisms . They also obtain a new intersection result for the Lagrangian submanifold in .
Geometry And Topology Of Manifolds
Author by : Hans U. Boden
Description : This book contains expository papers that give an up-to-date account of recent developments and open problems in the geometry and topology of manifolds, along with several research articles that present new results appearing in published form for the first time. The unifying theme is the problem of understanding manifolds in low dimensions, notably in dimensions three and four, and the techniques include algebraic topology, surgery theory, Donaldson and Seiberg-Witten gauge theory, Heegaard Floer homology, contact and symplectic geometry, and Gromov-Witten invariants. The articles collected for this volume were contributed by participants of the Conference "Geometry and Topology of Manifolds" held at McMaster University on May 14-18, 2004 and are representative of the many excellent talks delivered at the conference.
Symplectic 4 Manifolds And Algebraic Surfaces
Author by : Fabrizio Catanese
Description : Modern approaches to the study of symplectic 4-manifolds and algebraic surfaces combine a wide range of techniques and sources of inspiration. Gauge theory, symplectic geometry, pseudoholomorphic curves, singularity theory, moduli spaces, braid groups, monodromy, in addition to classical topology and algebraic geometry, combine to make this one of the most vibrant and active areas of research in mathematics. It is our hope that the five lectures of the present volume given at the C.I.M.E. Summer School held in Cetraro, Italy, September 2-10, 2003 will be useful to people working in related areas of mathematics and will become standard references on these topics. The volume is a coherent exposition of an active field of current research focusing on the introduction of new methods for the study of moduli spaces of complex structures on algebraic surfaces, and for the investigation of symplectic topology in dimension 4 and higher.
The Floer Memorial Volume
Description : Andreas Floer died on May 15, 1991 an untimely and tragic death. His visions and far-reaching contributions have significantly influenced the developments of mathematics. His main interests centered on the fields of dynamical systems, symplectic geometry, Yang-Mills theory and low dimensional topology. Motivated by the global existence problem of periodic solutions for Hamiltonian systems and starting from ideas of Conley, Gromov and Witten, he developed his Floer homology, providing new, powerful methods which can be applied to problems inaccessible only a few years ago. This volume opens with a short biography and three hitherto unpublished papers of Andreas Floer. It then presents a collection of invited contributions, and survey articles as well as research papers on his fields of interest, bearing testimony of the high esteem and appreciation this brilliant mathematician enjoyed among his colleagues. Authors include: A. Floer, V.I. Arnold, M. Atiyah, M. Audin, D.M. Austin, S.M. Bates, P.J. Braam, M. Chaperon, R.L. Cohen, G. Dell' Antonio, S.K. Donaldson, B. D'Onofrio, I. Ekeland, Y. Eliashberg, K.D. Ernst, R. Finthushel, A.B. Givental, H. Hofer, J.D.S. Jones, I. McAllister, D. McDuff, Y.-G. Oh, L. Polterovich, D.A. Salamon, G.B. Segal, R. Stern, C.H. Taubes, C. Viterbo, A. Weinstein, E. Witten, E. Zehnder.
Floer Homology Groups In Yang Mills Theory
Author by : S. K. Donaldson
Description : The concept of Floer homology was one of the most striking developments in differential geometry. It yields rigorously defined invariants which can be viewed as homology groups of infinite-dimensional cycles. The ideas led to great advances in the areas of low-dimensional topology and symplectic geometry and are intimately related to developments in Quantum Field Theory. The first half of this book gives a thorough account of Floer's construction in the context of gauge theory over 3 and 4-dimensional manifolds. The second half works out some further technical developments of the theory, and the final chapter outlines some research developments for the future - including a discussion of the appearance of modular forms in the theory. The scope of the material in this book means that it will appeal to graduate students as well as those on the frontiers of the subject.
The Geometry Of The Group Of Symplectic Diffeomorphism
Author by : Leonid Polterovich
Description : The group of Hamiltonian diffeomorphisms Ham(M, 0) of a symplectic mani fold (M, 0) plays a fundamental role both in geometry and classical mechanics. For a geometer, at least under some assumptions on the manifold M, this is just the connected component of the identity in the group of all symplectic diffeomorphisms. From the viewpoint of mechanics, Ham(M,O) is the group of all admissible motions. What is the minimal amount of energy required in order to generate a given Hamiltonian diffeomorphism I? An attempt to formalize and answer this natural question has led H. Hofer [HI] (1990) to a remarkable discovery. It turns out that the solution of this variational problem can be interpreted as a geometric quantity, namely as the distance between I and the identity transformation. Moreover this distance is associated to a canonical biinvariant metric on Ham(M, 0). Since Hofer's work this new ge ometry has been intensively studied in the framework of modern symplectic topology. In the present book I will describe some of these developments. Hofer's geometry enables us to study various notions and problems which come from the familiar finite dimensional geometry in the context of the group of Hamiltonian diffeomorphisms. They turn out to be very different from the usual circle of problems considered in symplectic topology and thus extend significantly our vision of the symplectic world.
A New Construction Of Virtual Fundamental Cycles In Symplectic Geometry
Author by : John Vincent Pardon
Description : We develop techniques for defining and working with virtual fundamental cycles on moduli spaces of pseudo-holomorphic curves which are not necessarily cut out transversally. Such techniques have the potential for applications as foundations for invariants in symplectic topology arising from "counting" pseudo-holomorphic curves. We introduce the notion of an implicit atlas on a moduli space, which is (roughly) a convenient system of local finite-dimensional reductions. We present a general intrinsic strategy for constructing a canonical implicit atlas on any moduli space of pseudo-holomorphic curves. The main technical step in applying this strategy in any particular setting is to prove appropriate gluing theorems. We require only topological gluing theorems, that is, smoothness of the transition maps between gluing charts need not be addressed. Our approach to virtual fundamental cycles is algebraic rather than geometric (in particular, we do not use perturbation). Sheaf-theoretic tools play an important role in setting up our functorial algebraic "VFC package". We illustrate the methods we introduce by giving definitions of Gromov--Witten invariants and Hamiltonian Floer homology over $\QQ$ for general symplectic manifolds. Our framework generalizes to the $S^1$-equivariant setting, and we use $S^1$-localization to calculate Hamiltonian Floer homology. The Arnold conjecture (as treated by Floer, Hofer--Salamon, Ono, Liu--Tian, Ruan, and Fukaya--Ono) is a well-known corollary of this calculation. We give a construction of contact homology in the sense of Eliashberg--Givental--Hofer. Specifically, we use implicit atlases to construct coherent virtual fundamental cycles on the relevant compactified moduli spaces of holomorphic curves.
Singular Intersection Homology
Author by : Greg Friedman
Description : The first expository book-length introduction to intersection homology from the viewpoint of singular and piecewise linear chains.
Lagrangian Intersection Floer Theory
Description : This is a two-volume series research monograph on the general Lagrangian Floer theory and on the accompanying homological algebra of filtered $A_\infty$-algebras. This book provides the most important step towards a rigorous foundation of the Fukaya category in general context. In Volume I, general deformation theory of the Floer cohomology is developed in both algebraic and geometric contexts. An essentially self-contained homotopy theory of filtered $A_\infty$ algebras and $A_\infty$ bimodules and applications of their obstruction-deformation theory to the Lagrangian Floer theory are presented. Volume II contains detailed studies of two of the main points of the foundation of the theory: transversality and orientation. The study of transversality is based on the virtual fundamental chain techniques (the theory of Kuranishi structures and their multisections) and chain level intersection theories. A detailed analysis comparing the orientations of the moduli spaces and their fiber products is carried out. A self-contained account of the general theory of Kuranishi structures is also included in the appendix of this volume.
Symmetrization In Analysis
Author by : Albert Baernstein II
Description : Symmetrization is a rich area of mathematical analysis whose history reaches back to antiquity. This book presents many aspects of the theory, including symmetric decreasing rearrangement and circular and Steiner symmetrization in Euclidean spaces, spheres and hyperbolic spaces. Many energies, frequencies, capacities, eigenvalues, perimeters and function norms are shown to either decrease or increase under symmetrization. The book begins by focusing on Euclidean space, building up from two-point polarization with respect to hyperplanes. Background material in geometric measure theory and analysis is carefully developed, yielding self-contained proofs of all the major theorems. This leads to the analysis of functions defined on spheres and hyperbolic spaces, and then to convolutions, multiple integrals and hypercontractivity of the Poisson semigroup. The author's 'star function' method, which preserves subharmonicity, is developed with applications to semilinear PDEs. The book concludes with a thorough self-contained account of the star function's role in complex analysis, covering value distribution theory, conformal mapping and the hyperbolic metric.
Equivariant Stable Homotopy Theory And The Kervaire Invariant Problem
Author by : Michael A. Hill
Description : A complete and definitive account of the authors' resolution of the Kervaire invariant problem in stable homotopy theory.
Dirichlet Series And Holomorphic Functions In High Dimensions
Author by : Andreas Defant
Description : Over 100 years ago Harald Bohr identified a deep problem about the convergence of Dirichlet series, and introduced an ingenious idea relating Dirichlet series and holomorphic functions in high dimensions. Elaborating on this work, almost twnety years later Bohnenblust and Hille solved the problem posed by Bohr. In recent years there has been a substantial revival of interest in the research area opened up by these early contributions. This involves the intertwining of the classical work with modern functional analysis, harmonic analysis, infinite dimensional holomorphy and probability theory as well as analytic number theory. New challenging research problems have crystallized and been solved in recent decades. The goal of this book is to describe in detail some of the key elements of this new research area to a wide audience. The approach is based on three pillars: Dirichlet series, infinite dimensional holomorphy and harmonic analysis.
|
CommonCrawl
|
ICML 2020 Roundup
Authors: M. Brubaker, P. Xu, I. Kobyzev, P. Hernandez-Leal, M. O. Ahmed
In July the Thirty-seventh International Conference on Machine Learning (ICML) was held featuring over 1,000 papers and an array of tutorials, workshops, invited talks and more. Borealis researchers were (virtually) there presenting their work
Evaluating Lossy Compression Rates of Deep Generative Models by Sicong Huang, Alireza Makhzani, Yanshuai Cao and Roger Grosse
On Variational Learning of Controllable Representations for Text without Supervision by Peng Xu, Jackie CK Cheung, Yanshuai Cao
Tails of Lipschitz Triangular Flows by Priyank Jaini, Ivan Kobyzev, Yaoliang Yu and Marcus A. Brubaker
and many members of the research team took the time to virtually attend ICML 2020. Now that the conference content is freely available online, it's a great time to look back and check out some of the highlights. In this post, four Borealis AI researchers describe the papers that they found most interesting or significant from the conference.
Improving Generalization by Controlling Label-Noise Information in Neural Network Weights
Hrayr Harutyunyan, Kyle Reing, Greg Ver Steeg, Aram Galstyan
by Peng Xu
Related Papers:
Emergence of Invariance and Disentanglement in Deep Representations
Information-Theoretic Analysis of Generalization Capability of Learning Algorithms.
$\mathcal{L}_{\text{DMI}}$: A Novel Information-theoretic Loss Function for Training Deep Nets Robust to Label Noise
Figure 1. Neural networks tend to memorize labels when trained with noisy labels (80% noise in this case), even when dropout or weight decay are applied. The proposed training approach limits label-noise information in neural network weights, avoiding memorization of labels and improving generalization.
What problem does it solve? Neural networks have the undesirable tendency to memorize information about the noisy labels. This paper shows that, for any algorithm, low values of mutual information between weights and training labels given inputs $I(w : \pmb{y}|\pmb{x})$ correspond to a reduction in memorization of label-noise and better generalization bounds. Novel training algorithms are proposed to optimize for this and achieve impressive empirical performances on noisy data.
Why is this important? Even in the presence of noisy labels, deep neural networks tend to memorize the training labels. This hurts the generalization performance generally and is particularly undesirable with noisy labels. Poor generalization due to label memorization is a significant problem because many large, real-world datasets are imperfectly labeled. From a information-theoretic perspective, this paper reveals the root of the memorization problem and proposes an approach that directly addresses it.
The approach taken and how it relates to previous work: Given a labeled dataset $S=(\pmb{x}, \pmb{y})$ for data $\pmb{x}=\{x^{(i)}\}_{i=1}^n$ and categorical labels $\pmb{y}=\{y^{(i)}\}_{i=1}^n$ and learning weights $w$, Achille & Soatto present a decomposition of the expected cross-entropy $H(\pmb{y}|\pmb{x}, w)$:
\[ H(\pmb{y} | \pmb{x}, w) = \underbrace{H(\pmb{y} | \pmb{x})}_{\text{intrinsic error}} + \underbrace{\mathbb{E}_{\pmb{x}, w}D_{\text{KL}}[p(\pmb{y}|\pmb{x})||f(\pmb{y}|\pmb{x}, w)]}_{\text{how good is the classifier}} - \underbrace{I(w : \pmb{y}|\pmb{x})}_{\text{memorization}}. \]
If the labels contain information beyond what can be inferred from inputs, the model may do well by memorizing the labels through the third term of the above equation. To demonstrate that $I(w:\pmb{y}|\pmb{x})$ is directly linked to memorization, this paper proves that any algorithm with small $I(w:\pmb{y}|\pmb{x})$ overfits less to label-noise in the training set. This theoretical result is also verified empirically, as shown in Figure 1. In addition, the information that weights contain about a training dataset $S$ has previously been linked to generalization (Xu & Raginsky), which can be tightened with small values of $I(w:\pmb{y}|\pmb{x})$.
To limit $I(w:\pmb{y}|\pmb{x})$, this paper first shows that the information in weights can be replaced by information in the gradients, and then introduces a variational bound on the information in gradients. The bound employs an auxiliary network that predicts gradients of the original loss without label information. Two ways of incorporating predicted gradients are explored: (a) using them in a regularization term for gradients of the original loss, and (b) using them to train the classifier.
Results: The authors set up experiments with noisy datasets to see how well the proposed methods perform for different types and amounts of label noise. The simplest baselines are standard cross-entropy (CE) and mean absolute error (MAE) loss functions. The next baseline is the forward correction approach (FW) proposed by Patrini et al., where the label-noise transition matrix is estimated and used to correct the loss function. Finally, they include the recently proposed determinant mutual information (DMI) loss proposed by Xu et al., which is the log-determinant of the confusion matrix between predicted and given labels. The proposed algorithm illustrates the effectiveness on versions of MNIST, CIFAR-10 and CIFAR-100 corrupted with various noise models, and on a large-scale dataset Clothing1M that has noisy labels, as shown in Fig 2.
Figure 2. Test accuracy comparison on CIFAR-10, corrupted with various noise types, on CIFAR-100 with 40% uniform label noise and on Clothing1M dataset.
Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?
Kei Ota, Tomoaki Oiki, Devesh K. Jha, Toshisada Mariyama and Daniel Nikovski
by Pablo Hernandez-Leal
Learning state representation for deep actor-critic control.
State representation learning for control: An overview.
Densely Connected Convolutional Networks.
What problem does it solve? This paper starts from the question of whether learning good representations for states and using larger networks can help in learning better policies in deep reinforcement learning.
The paper mentions that many dynamical systems can be described succinctly by sufficient statistics which can be used to accurately to predict their future. However, there is still the question whether RL problems with intrinsically low-dimensional state (i.e., with simple sufficient statistics) can benefit by intentionally increasing its dimensionality using a neural network with a good feature representation.
Why is this important? One of the major successes of neural networks in supervised learning is their ability to automatically acquire representations from raw data. However, in reinforcement learning the task is more complicated since policy learning and representation learning happen at the same time. For this reason, deep RL usually requires a large amount of data, potentially millions of samples or more. This limits the applicability of RL algorithms to real-world problems, for example, continuous control and robotics where that amount of data may not be practical to collect.
It can be assumed that increasing the dimensionality of the input might further complicate the learning process of RL agents. This paper argues this is not the case and that agents can learn more efficiently with the high-dimensional representations than with the lower-dimensional state observations. The authors hypothesize that larger networks (with a larger search space) is one of the reasons that allows agents to learn more complex functions of states, ultimately improving sample efficiency.
Figure 3. On the left the diagram of how OFENet outputs, $z_{o_t}$, are the inputs to the RL agent and on the right the standard RL learning process. In both cases the agent learns a policy that outputs an action $a_t$.
The approach taken and how it relates to previous work: The area of state representation learning focuses on representation learning where learned features are in low dimension, evolve through time, and are influenced by actions of an agent. In this context, the authors highlight a previous work by Munk et al. where the output of a neural network is used as input for a deep RL algorithm. The main difference is that the goal of Munk et al. is to learn a compact representation, in contrast to the idea of this paper which is learning good higher-dimensional representations of state observations.
The paper proposes an Online Feature Extractor Network (OFENet) that uses neural networks to produce good representations that are used as inputs to a deep RL algorithm, see Figure 3.
OFENet is trained with the goal of preserving a sufficient statistic via an auxiliary task to predict future observations of the system. Formally, OFENet trains a feature extractor network for the states, $z_{o_t}=\phi_o(o_t)$, a feature extractor for the state-action, $z_{o_t,a_t}=\phi_{o,a}(o_t,a_t)$, and a prediction network $f_{pred}$ parameterized by $\theta_{pred}$. The parameters $\{\theta_o, \theta_{o,a}, \theta_{pred}\}$ are optimized to minimize the loss:
$$L=\mathbb{E}_{(o_t,a_t)\sim p,\pi} [||f_{pred}(z_{o_t},a_t) - o_{t+1}||^2]$$
which is interpreted as minimizing the prediction error of the next state.
Figure 4. OFENet architecture. Observation and action are represented by $o_t$, $a_t$; FC represents a fully connected layer with an activation function, and concat represents a concatenation of the inputs.
The authors highlight the need for a network that can be optimized easily and produce meaningful high-dimensional representations. Their proposal is a variation of DenseNet, a densely connected convolutional network whose output is the concatenation of previous layer's outputs. OFENet uses a DenseNet architecture and is learned in on online fashion, at the same time as the agents policy, receiving as input observation and action as depicted in Figure 4.
Results: The paper evaluates 60 different architectures with varying connectivity, sizes and activation functions. The results showed that an architecture similar to DenseNet consistently achieved higher scores than the rest.
OFENet was evaluated with both on-policy (SAC and PPO) and off-policy reinforcement learning algorithms (TD3) on continuous control tasks. With these three algorithms the addition of OFENet obtained better results than without it.
Ablation experiments were performed to verify that just increasing the dimensionality of the state representation is not sufficient to improve performance. The key point is that generating effective higher dimensional representations, for example with OFENet, is required to obtain better performance.
Relaxing Bijectivity Constraints with Continuously Indexed Normalising Flows
Rob Cornish, Anthony L. Caterini, George Deligiannidis, and Arnaud Doucet
by Ivan Kobyzev
SurVAE Flows: Surjections to Bridge the Gap between VAEs and Flows
A RAD approach to deep mixture models
Augmented Neural ODEs
What problem does it solve? The key ingredient of a Normalizing Flow is a diffeomorphic function (i.e., invertible function which is differentiable and its inverse is also differentiable). To model a complex target distribution a normalizing flow transforms a simple base measure via multiple diffeomorphisms stacked together. However, diffeomorphisms preserve topology; hence, the topologies of the supports of the base distribution and target distribution must be the same. This is problematic for the real-world data distributions which can have complicated topology (e.g., they can be disconnected, have holes, etc). The paper proposes a way to replace a diffeomorphic map with a continuous family of diffeomorphisms to solve this problem.
Why is this important? It is generally believed that many distributions exhibit complex topology. Generative methods which are unable to learn different topologies will, at the very least, be less sample efficient in learning and potentially fail to learn important characteristics of the target distribution.
Figure 5. Generative process for forward generation for Continuously Indexed Flow. Here $U$ is a continuous latent variable which controls the diffeomorphic flow transformation of $Z$ to produce the target distribution $X$.
The approach taken and how it relates to previous work: Given a latent space $\mathcal{Z}$ and a target space $\mathcal{X}$, the paper considers a continuous family of diffeomorphisms $\{ F(\cdot, u): \mathcal{Z} \to \mathcal{X} \}_{u \in \mathcal{U}}$. The generative process of this model is given by
$$z \sim P_Z, \quad u \sim P_{U|Z}(\cdot|Z), \quad x = F(z,u),$$
which is illustrated in Figure 5. There is no closed form expression on the likelihood $p_X(x)$, hence to train the model one needs to use variational inference. This introduces an approximate posterior $q_{U|X} \approx p_{U|X}$, and constructs an variational lower bound on $p_X(x)$ which can be used for training. To increase expressiveness one can then stack several layers of this generative process.
The authors proved that under some conditions on the family $F_u$, the model can well represent a target distribution, even if its topology is irregular. The downside, compared to other normalizing flows, is that model doesn't allow for exact density computation. However estimates can be computed through the use of importance sampling.
Results: The performance of the method is demonstrated quantitatively and compared against Residual Flows, on which it's architecture is based. On MNIST and CIFAR-10 in particular it performs better than Residual Flow (Figure 6), improving the bits per dimension on the test set by a small but notable margin. On other standard datasets the improvements are even larger and, in some cases, state-of-the-art.
Figure 6. Test set bits per dimension (lower is better).
How Good is the Bayes Posterior in Deep Neural Networks Really?
Florian Wenzel, Kevin Roth, Bastiaan S. Veeling, Jakub Świątkowski, Linh Tran, Stephan Mandt, Jasper Snoek, Tim Salimans, Rodolphe Jenatton, Sebastian Nowozin
by Mohamed Osama Ahmed
Simple and scalable predictive uncertainty estimation using deep ensembles.
Inconsistency of Bayesian Inference for Misspecified Linear Models, and a Proposal for Repairing It
Bayesian deep learning and a probabilistic perspective of generalization.
What problem does it solve? The paper studies the performance of Bayesian neural network (BNN) models and why they have not been adopted in industry. BNNs promise better generalization, better uncertainty estimates of predictions, and should enable new deep learning applications such as continual learning. But despite these potentially promising benefits, they remain widely unused in practice. Most recent work in BNNs has focused on better approximations of the posterior. However this paper asks whether the actual posterior itself is the problem, i.e., is it even worth approximating?
Why is this important? If the actual posterior learned by BNN is poor then efforts to construct better approximations are unlikely to produce better results and could actually hurt performance. Instead this would suggest that more efforts should be directed towards fixing the posterior itself before attempting to construct better approximations.
The approach taken and how it relates to previous work: Many recent BNN papers use the "cold posterior" trick. Instead of using the posterior $p(\theta|D) \propto \exp( -U(\theta) )$, where $U(\theta)= -\sum_{i=1}^{n} \log(y_i|x_i,\theta)-\log p(\theta)$, they use $p(\theta|D) \propto \exp(-U(\theta)/T)$ where $T$ is a temperature parameter. If $T=1$, then we recover the original posterior distribution. However, recent papers report good performance with a "cold posterior" where $T<1$. This causes the posterior to become sharper around the modes and the limiting case $T=0$ corresponds to maximum a posteriori (MAP) point estimate.
This paper studies why the cold posterior trick is needed. That is, why is the original posterior learned from BNN is not good enough on its own. The paper investigates three factors:
Inference: Monte Carlo methods are needed for posterior inference. Could the errors and approximations induced by the Monte Carlo methods cause problems? In particular, the paper studies different problems such as inaccurate SDE simulations, and minibatch noise.
Likelihood: Since the likelihood function used for training BNNs is the same as the one used for SGD models, then this should not be a problem.
However, the paper raises the point that "Dirty Likelihoods" are used in recent deep learning models. For example batch normalization, dropout, and data augmentation may be causing problems.
Prior: Most BNN work uses a Normal prior over the weights. The paper raises the question of whether this is a good prior which they call the "Bad Prior Hypothesis". Specifically, the hypothesis is that the current priors used for the parameters of BNNs may be inadequate, unintentionally introducing an incorrect bias into the posterior and potentially being too strong and overruling the data as model complexity increases. To study this the authors draw samples of the BNN parameters $\theta$ from the prior distribution and examine the predictive distribution that results with these randomly generated parameters.
Results: The experiments find that, consistent with previous work, the best posteriors are achieved with cold posteriors, i.e., at temperatures $T<1$. This can be seen in Figure 7. While it's still not fully understood why, cold posteriors are needed to get good performance with BNNs.
Further, results suggest that neither inference nor the likelihood are the problem. Rather, the prior seems likely to be, at best, unintentionally and misleadingly informative. Indeed, current priors generally map all images to a single class. This is clearly unrealistic and undesirable behaviour of prior. This effect can be seen in Figure 8 which shows the class distribution over the training set for two different samples from the prior.
Figure 7. The "cold posterior;" effect: test accuracy with a ResNet-20 architecture on CIFAR10 with a range of cold ($T<1$) temperatures with the original posterior ($T=1$) on the right. Performance clearly improves with lower temperatures.
Figure 8. The distribution of predicted classes over the training set of CIFAR-10 with two different samples from the prior using a ResNet-20 architecture. In both cases the predicted class distribution is almost exclusively concentrated on a single class which is clearly implausible and undesirable behaviour for the prior.
To date there has been a significant amount of work on better approximations for the posterior in BNNs. While this is an important research direction for a number of reasons, this paper suggests that there are other directions that we should be pursuing. This is highlighted clearly by the fact that the performance of BNNs are worse than single point estimates trained by SGD and to improve the performance, cold posteriors are currently required. While this paper hasn't given a definitive answer to the question of why cold posteriors are needed or why BNNs are not more widely used, it has clearly indicated some important directions for future research.
M. Brubaker
P. Xu
I. Kobyzev
P. Hernandez-Leal
M. O. Ahmed
|
CommonCrawl
|
A mobile NMR lab for leaf phenotyping in the field
Maja Musse ORCID: orcid.org/0000-0002-1681-55921,2,
Laurent Leport2,3,
Mireille Cambert1,2,
William Debrandt1,2,3,
Clément Sorin1,2,3,
Alain Bouchereau2,3 &
François Mariette1,2
Low field NMR has been used to investigate water status in various plant tissues. In plants grown in controlled conditions, the method was shown to be able to monitor leaf development as it could detect slight variations in senescence associated with structural modifications in leaf tissues. The aim of the present study was to demonstrate the potential of NMR to provide robust indicators of the leaf development stage in plants grown in the field, where leaves may develop less evenly due to environmental fluctuations. The study was largely motivated by the need to extend phenotyping investigations from laboratory experiments to plants in their natural environment.
The mobile NMR laboratory was developed, enabling characterization of oilseed rape leaves throughout the canopy without uprooting the plant. The measurements made on the leaves of plants grown and analyzed in the field were compared to the measurements on plants grown in controlled conditions and analyzed in the laboratory.
The approach demonstrated the potential of the method to assess the physiological status of leaves of plants in their natural environment. Comparing changes in the patterns of NMR signal evolution in plants grown under well-controlled laboratory conditions and in plants grown in the field shows that NMR is an appropriate method to detect structural modifications in leaf tissues during senescence progress despite plant heterogeneity in natural conditions. Moreover, the specific effects of the environmental factors on the structural modifications were revealed.
The present study is an important step toward the selection of genotypes with high tolerance to water or nitrogen depletion that will be enabled by further field applications of the method.
In the context of the increasing world population and the move towards more sustainable development, there is a need to increase agricultural productivity and to reduce the ecological footprint of plant production. To achieve these aims, genotypes need to be selected that can better adapt to environmental stresses using water and nutrients applied to the soil more efficiently, so plants can be grown with limited inputs. Large-scale phenotyping has been developed to assist such selection, as it allows characterization of plant adaptative traits in different agricultural systems. Studies have been conducted on a large number of plants with, for example, bulk methods of canopy spectral reflectance and absorbance [1]. On the other hand, to better understand plant functioning and adaptation to environmental changes, fine analyses have been conducted at organ and individual plant scales in a strictly controlled environment [2].
Plant response to the environment may differ in controlled and field conditions because of soil-climate and canopy architecture variability [3]. There is therefore a need to establish a link between measurements made in controlled conditions and field data. Currently, the trend is the development of new tools for fine phenotyping in the natural environment of the plant combined with large scale bulk methods. For example, non-destructive assessment of leaf chlorophyll by Multiplex [4] and Dualex [5] have been used for outdoor characterization of whole plant N status and leaf development. On the other hand, the use of leaf ranking or leaf ageing as indicators of developmental status is not relevant for genotype comparison when environmental conditions (light, temperature, wind, canopy structure, etc.) are responsible for marked heterogeneity among individual plants. The main problem with the methods cited above, and with more classical approaches like gas exchange measurements [6], is that they do not detect small differences in physiological traits between successive leaf ranks throughout the canopy.
The potential of nuclear magnetic resonance (NMR) relaxometry to finely evaluate the cell and tissue structure of oilseed rape leaves was recently demonstrated [7,8,9]. The NMR transverse relaxation time (T2), which is particularly sensitive to variations in water properties in plant tissues, was used to study changes in cell water status and distribution. As demonstrated in different plant tissues including leaves, differences in the physical and chemical properties of water in different compartments and the relatively slow diffusion exchange of water molecules between compartments are reflected by the multi-exponential relaxation times [9, 10]. Applied to a wide panel of leaves collected from oilseed rape plants of different genotypes grown in controlled conditions, NMR relaxometry was shown to be able to detect slight variations in senescence associated structural modifications in leaf tissues [7,8,9]. This characterization of the internal structure of the leaf allows accurate determination of leaf development stage independently of its position along the plant. In the context of fine phenotyping at individual plant scale in field conditions, the ability of NMR to identify leaves at the equivalent developmental stage and hence to allow plant traits to be compared is of particular interest. Moreover, NMR makes it possible to monitor changes in water exchanges and structural changes associated with remobilization efficiency [7,8,9].
Until now, the great majority of NMR and magnetic resonance imaging (MRI) studies on plants have been performed under controlled conditions in the laboratory. Current trends in the further development of the NMR/MRI method, largely motivated by phenotyping needs, are to extend investigations to in situ experiments (climate chambers, greenhouses or the natural environment) rather than to transport plants to the laboratory where the equipment is located. Relatively recent important technological advancements have been reported in mobile NMR devices. A single-sided open NMR sensor equipped with a permanent magnet for near surface studies, which allows free access to large objects, known as NMR MOUSE® [11], was developed for a range of applications [12, 13], such as the characterization of soils, mortars and painting, as a logging tool for the petroleum industry, etc. It has also been used to study leaf water status in situ [14] and to determine the moisture fraction in wood [15]. Another approach has been to design specific NMR and/or MRI devices that can easily been placed or transported into climate chamber, greenhouse or in the field. For example, a small device known as NMR-CUFF [16] with a modified Halbach-type magnet that can be opened for sample positioning was developed by Windt et al. [16] and used to measure sap flow and the amount of water in plants in a climate chamber [17]. Another example of such designed instruments is the NMR system developed by Van As et al. [18] equipped with a permanent U-shaped magnet, with an access space of 2 cm used to study water content and transport in plants in greenhouses and climate chambers [19, 20]. Recently, a size-adjustable radiofrequency coil allowing investigation of plant samples of different diameters in a Halbach magnet has been proposed [21]. Finally, a few portable MRI systems have recently been developed mostly for the imaging of relatively small living trees for use in greenhouses [17, 22,23,24].
Only a limited number of NMR/MRI studies have been performed on plants in their natural environment. Capitani et al. used NMR relaxometry to investigate the water status of rockrose and holm oak leaves growing in sand dunes [14]. Okada et al. [25] reported the first outdoor MRI imaging of a living tree using a 0.3 T permanent magnet. A few years later, a permanent magnet equipped with flexible rotation and translation mechanism and combined with a mobile lift was used for outdoor imaging of pear tree branches up to 2 cm in diameter and up to 160 cm above ground level [26]. Jones et al. [27] designed a transportable MRI system offering an access space of 21 cm diameter for the imaging of living trees in the forest. Geya et al. [28] built a mobile MRI system with a 16 cm gap 0.2 T permanent magnet for measurements of the relaxation times and apparent diffusion coefficients of pear fruits in an orchard. This MRI system was recently shown to be able to measure water transport in trees outdoors [29].
The development of outdoor NMR and MRI measurements has faced two major challenges. The first concerns the NMR/MRI device itself and the effects of the environmental conditions on it. The system needs to be portable and easy to handle in different conditions. Further, the temperature drift of the magnet, the lack of homogeneity of the magnetic field and the variations in sample temperature can make it difficult to distinguish variations in the signal due to the biological changes under study from variations due to the system or measurement conditions. Some applications require accurate measurements and exploitation of the complete NMR signal. For example, in the specific case of characterization of the progress of senescence in oilseed rape leaves based on slight variations in multi-exponential relaxation parameters [8, 9], small variations in sample or magnet temperature can alter the results. Given the amplitude of possible variations in temperature due to weather conditions in the field, it is clear that the temperature of both the magnet and the sample have to be controlled. Furthermore, the inhomogeneous field, like that of the unilateral portable NMR system, is an additional source of relaxation, which shortens the T2 values measured [14]. The second challenge facing the study of plants grown in field conditions is related to the biological variability of the plant material. There are two potential sources of random variations in plant tissue characteristics that can alter the NMR signal. The main source is associated with the abiotic and biotic factors experienced by the plant throughout its development that lead to hardening of the leaf tissues. Further, compared to control conditions, plants grown in the field present higher variability in their canopy architecture, which is responsible for additional micro-environmental differences immediately prior to sampling. These aspects can cause erroneous results if the NMR measurements are simultaneously polluted by instabilities of the NMR device. The solution is to perform NMR measurements in the field using an NMR device that has been carefully checked in controlled laboratory conditions and to ensure the same measurement accuracy. Moreover, if field measurements are compared with those obtained on plants grown under well-controlled conditions, it is possible to identify the specific effects of variabilities caused by environmental factors.
The objective of this study was to demonstrate the potential of NMR to access information about the status and sub-cellular distribution of water in leaves from plants grown in natural conditions, thereby providing robust indicators of the stage of development directly in the field. To this end, a mobile NMR lab was designed for in situ measurements of the relaxation times of leaves from plants in their natural environment. A commercially available NMR spectrometer similar to that previously used for investigations of oilseed rape leaves [7,8,9] was used to create a mobile laboratory with the same performance as in the well-controlled laboratory experiments. The device was designed to be positioned at the edge of individual parcels in a field trial. The measurements made on the leaves of plants grown and analyzed in the field were compared to the measurements on plants grown in controlled conditions and analyzed in the laboratory. Results of the comparison showed that NMR can detect structural modifications in leaf tissues associated with senescence progress despite plant heterogeneity found under natural conditions.
NMR relaxometry
Instrumental setup
Transverse relaxation measurements were performed using a mobile NMR lab specifically set up for this purpose (Fig. 1). A commercially available 20 MHz spectrometer (Minispec PC-120, Bruker, Karlsruhe, Germany) equipped with a temperature control device connected to an optical fiber (Neoptix Inc, Canada) allowing ±0.1 °C temperature regulation was placed inside a van. The experimental device was powered by a battery. Such equipped van was positioned in the field close to the plants under investigation. No special care was taken to control the temperature inside the van. The leaf under investigation was cut from the plant (Fig. 2a) without uprooting the plant and, if the leaf was wet, wiped gently. Eight discs 8 mm in diameter were cut from each leaf of the plant studied (Figs. 2b, c). To obtain homogeneous tissues, the discs were cut in the middle of the limb on each side of the central vein and avoiding lateral second order veins. The discs were then placed in NMR tubes which were closed with a 2-cm long Teflon cap to avoid water loss during measurements (Fig. 2d). The temperature of the samples inside NMR probe was set at 18 °C.
Mobile NMR laboratory with the NMR spectrometer including the magnet system and the probe assembly (1), the electronic control NMR unit (2), the battery (3) and a standard laptop computer for measurement control (4)
Oilseed rape plants at the end of stem elongation stage (a), leaf (LR −9) after disc sampling (b), eight discs cut from the leaf for NMR measurement (c) and NMR tube containing 8 discs, closed with a 2-cm long Teflon cap (d)
Transverse relaxation signal acquisition and analysis
The transverse relaxation times was measured using the Carr–Purcell–Meiboom–Gill (CPMG) sequence. The signal was acquired with the 90°–180° pulse spacing of 0.2 ms. Data were averaged over 64 acquisitions. The number of successive echoes recorded was adjusted for each sample according to its T2. The recycle delay for each sample was adjusted after measurement of the longitudinal relaxation time (T1) with a fast-saturation-recovery sequence. The measurement time for T2 (including spectrometer adjustments and T1 measurement) was about 10 min per sample.
The CPMG signal was fitted using Scilab software according to the maximum entropy method (MEM) [30], which provides a continuous distribution of relaxation time components with no assumption concerning their number. In this representation, the peaks of the distribution are centered at the corresponding most probable T2 values, while peak areas correspond to the intensity of the T2 components. Signal intensity was expressed through the specific leaf water weight of the ith signal component (LWW) expressed in g m−2. LWW of each CPMG component was calculated according to the equation:
$$ LWW_{i} = \frac{{I_{0Ri} \, \times \,m_{w} }}{A} $$
where I0Ri is the relative intensity of the ith signal component expressed as a percentage of the total CPMG signal intensity, mw is the water mass of the leaf discs used for NMR (in g) and A is the leaf disc area (in m2).
Oilseed rape (Brassica napus L., genotype Aviso) plants were grown in a field trial in Le Rheu, France (La Gruche, 48°8′17″N–1°48′11″O) during the 2014–2015 cropping season. The seeds were sown on the 10th of September, 2014 in plots measuring 6.75 m2 (4.5 × 1.5 m, 4 rows) at a density of 45 seeds m−2 and were grown under an optimal N regime. Four plots were used for this experiment corresponding to four repetitions. Measurements were made between the 23rd and the 26th of March 2015, at the end of the stem elongation stage (Fig. 2a), while floral buds were still closed (BBCH 55), on all fully expanded leaves (about 17 leaf ranks) of one individual plant (in four different plots, corresponding to four replicates).
Indicators of leaf physiological status
Chlorophyll content
Before sampling the leaf discs for the NMR experiment, relative chlorophyll content per unit leaf area was estimated using a non-destructive chlorophyll meter (SPAD, Soil Plant Analysis Development; Minolta, model SPAD-502). The chlorophyll content of each leaf was estimated as the average of six independent measurements.
Leaf discs were kept in the closed NMR tubes until the end of the NMR experiment each day. The samples were transferred in the laboratory and water content was then determined by weighing before (fresh weight) and after drying (dry weight) in an oven at 70 °C for 48 h. Water content is expressed as a percentage of fresh weight.
Comparison of the outdoor and laboratory NMR measurements
The outdoor NMR measurements made in the present study were compared with the NMR measurements performed on plants grown under controlled conditions and analyzed in the laboratory (data from [7, 8]). The objective of this comparison was to evaluate the NMR parameters as indicators of the leaf development stage in plants grown and analyzed in outdoor conditions. Data obtained on leaves from two different sets of plant were used for the comparison:
32 non-vernalized oilseed rape plants of Tenor genotype, details are reported in [8]
20 vernalized oilseed rape plants of Aviso genotype, details are reported in [7]
Like in the outdoor experiment, the NMR device used for transverse relaxation times in controlled conditions was a 20 MHz spectrometer (Minispec PC-120, Bruker, Karlsruhe, Germany). The CPMG measurements were performed at 18 °C with the 90°–180° pulse spacing of 0.1 ms and 64 signal averages.
Both NMR results and parameters describing physiological status measured were represented according to the NMR split scale. It was previously shown [8] that it is possible to use the split of the longest T2 signal component measured in the mature leaves to target leaves at the same developmental stage. In this representation, leaf rank zero is assigned to the leaf of the last leaf rank in which the split occurred, the subsequent leaf rank is numbered 1, etc. According to this scale, the older the leaf, the higher its rank, while negative ranks represent young leaves in which split has not yet occurred. This NMR split scale makes it possible to average values from data obtained in leaves located at different positions in the canopy. The NMR split scale is used in all the following figures.
NMR field measurements
Two typical examples (plants 1 and 2) of the transverse relaxation time distribution for two leaf development stages (leaf ranks −2 and 0) are shown in Fig. 3. For each stage, the curves are compared with the representative transverse relaxation spectrum obtained on leaves from plants grown under controlled conditions [7] at the same developmental stage according to the split scale. For the youngest leaves (leaf rank −2) from the field experiment, the longest T2 component corresponding to the vacuolar water was centered at 150–200 ms, which correspond to slightly higher values than those measured in the controlled conditions (Fig. 3a). This component corresponded to the largest amount of leaf water as it was observed in the plants grown under controlled conditions. For the older leaves (leaf rank 0), the vacuolar signal split into two components. The T2 values of the longest T2 components of the two spectra depicted in Fig. 3c were very different, illustrating the high variability of this parameter in senescing leaves, as already reported in [8, 9] and attributed to the high rate of structural changes responsible for the variations in the signal.
Distribution of transverse relaxation time (T2) calculated from the CPMG data for different leaves from plants grown in the field compared with the results obtained under controlled conditions from [7] Sorin et al., Botanical Studies 2016, 57; acknowledgment to Springer. a corresponds to the leaf rank −2 with b the zoom of the T2 distribution up to 30 ms. c corresponds to the leaf rank 0 with d the zoom of the T2 distribution up to 30 ms
In addition to the vacuolar component described above, under controlled conditions, two other CPMG components were systematically observed. Figure 3b, d are zooms up to 30 ms on the T2 spectra shown in (a) and (c), respectively. The first one was centered at a few ms and represented a small percent of the water amount. It was always observed in the youngest leaves (leaf rank −2), disappeared with leaf age and was consequently observed only in some leaves of leaf rank 0. The second component was centered at about 15 ms and represented less than 20% of the water amount for all leaf ranks. Under field conditions, the signal differed from what was expected according to these results. Actually, the number of components detected in the T2 range 0–30 ms varied between one and three. In the case of two components (more than 50% of the leaves analyzed), the same general frame was found as that observed under controlled conditions (Fig. 3b leaf rank −2, plant 1 and d leaf rank 0 plant 2). In the case of three components detected (about 10% of the leaves), the additional component was detected with an intermediate T2 value of 5–8 ms (Fig. 3b leaf rank 0, plant 1). Finally, in the case of one component detected, the T2 value of this component was centered at 5–8 ms. Note that in all cases, the sum of the intensities of all these components represented approximately the same percentage of the total signal for a given leaf rank.
Figure 4 shows the T2 (a) and LWW (b) values of the NMR signal components (4 and 5) associated with the vacuole during leaf ageing in plants grown in the field. Data from plants grown in controlled environment conditions extracted from [7, 8] are shown on the same graphs for the purpose of comparison. All the data are plotted according to the NMR split scale leaf rank (see "Methods"). In the case of the component 4, T2 values were reproducible over the whole leaf rank scale in all three growing conditions (Fig. 4a). After leaf rank 0, the maximum rate of the increase in T2(5) was the same in the different experiments (about 200 ms per leaf rank). However, while this maximum rate followed the appearance of the fifth component in plants grown in field conditions, it was delated for leaves from [8] where T2(5) increased at the maximum rate between leaf ranks +5 and +7.
Transverse relaxation time (a) and specific leaf water weight (b) corresponding to vacuolar water of leaves during leaf development according to the NMR split scale leaf rank. The results of the field experiment are compared with the results obtained under controlled conditions [7]—Sorin et al., Botanical Studies 2016, 57; acknowledgment to Springer; [8]—Sorin et al., Planta. 2015, 24; copyright Springer. As the leaves were rearranged according to the NMR split scale after sampling, in the case of the field experiment the values corresponding to the leaf ranks between 0 and −10 are the averages ± standard deviations of data collected from leaves of four plants, for leaf rank −11 the data is the average ± standard deviations of data collected from leaves of three plants. For leaf ranks between −15 and −12 and between 3 and 1 less than tree measurements were available for analysis
Note that in the case of the field measurements, it was possible to analyze leaves characterized by leaf ranks up to −15, corresponding to very young leaves. This made it possible to confirm the increasing trend in the value of T2(4) with ageing of the leaf. Considering differences in genotypes, vernalization and environmental factors between plant growing conditions, Fig. 4a shows that it was possible to establish a master curve describing the structural changes in young leaves (negative leaf ranks). After LR = 0, T2 values describing structural leaf changes appeared to be more affected by environmental conditions.
According to [7,8,9], the NMR intensities of each signal component (Fig. 4b) are expressed in leaf water weight (Eq. 1). Like in the plants grown under controlled conditions, LWW (4) increased steadily until leaf rank 0, reflecting the progressive increase in the amount of vacuolar water with aging. An unexpected result was that for the positive leaf ranks associated with the four oldest leaves, LWW (4 + 5) corresponding to vacuolar water decreased in the field experiment in contrast to the data reported in [7, 8].
Physiological characterization of leaf development
Changes in physiological traits during leaf development were described through general parameters (Fig. 5) i.e. chlorophyll content, dry weight and water content. These data, like those presented in Fig. 4 are presented according to the NMR split scale and compared with data obtained on plants grown in controlled environment conditions extracted from [7, 8]. The chlorophyll content measured in the field was at its maximum value from leaf rank −13 to −4 and decreased markedly from leaf rank −2 (Fig. 5a) to a very low value for the oldest leaves. These results show that our study was performed on a large panel representing a relatively wide range of young, mature and senescent leaves. In controlled conditions, the same general trend was observed. However, the curve representing chlorophyll content measured on non-vernalized plants [8] started to decrease for LR 1, which corresponded to older leaves, compared to the curves obtained on vernalized plants independently of the trial. In field conditions, the dry weight (Fig. 5b) increased slightly from leaf rank −15 to −2, reflecting the production of biomass associated with leaf growth. It then dropped from leaf rank −1, reflecting the loss of about 75% of leaf biomass explained by major remobilization at leaf senescence. This was consistent with the data obtained in controlled conditions [7, 8]. Note that in non-vernalized plants, specific dry weight had lower values than in vernalized plants. In field conditions, leaf water content increased slightly from the youngest leaves to leaf rank −3, indicating that leaf tissue was able to maintain cell homeostasis. From leaf rank −4, the loss of dry matter was more marked than that of water, resulting in an increase in water content. The same general trend was observed in controlled conditions, although the water content was systematically lower in the vernalized plants grown under controlled conditions.
Changes in chlorophyll content (a), dry weight (b) and water content (c) during leaf development according to the NMR split scale. The results of the field experiment are compared with the results from obtained under controlled conditions [7]—Sorin et al., Botanical Studies 2016, 57; acknowledgment to Springer; [8]—Sorin et al., Planta. 2015, 24; copyright Springer. As the leaves were rearranged according to the NMR split scale after sampling, in the case of the field experiment the values corresponding to the leaf ranks between 0 and −10 are the averages ± standard deviations of data collected from leaves of four plants, for leaf rank −11 the data is the average ± standard deviations of data collected from leaves of three plants. For leaf ranks between −15 and −12 and between 3 and 1 less than tree measurements were available for analysis
NMR as a phenotyping tool in field conditions
The results of the present study show that a mobile NMR lab makes it possible to successfully perform outdoor NMR measurements under controlled conditions usually obtained in the laboratory that ensure optimum measurement accuracy. Using this approach, it was possible to compare the NMR signal from plants grown under well-controlled conditions (growth cabinet) with the signal from the plants grown in the field.
The trend in the transverse relaxation times associated with vacuolar water revealing changes in leaf structure during senescence was very similar to that previously measured on plants grown under controlled conditions. This shows that, despite the great variability of the environmental factors during plant growth and throughout the canopy (marked variations in temperature and humidity, light exposure, wind, etc.), the NMR method can provide robust indicators of the leaf development stage of plants grown in the field. The results of the present outdoor study, confirmed previous data [7,8,9] showing that enlargement of palisade cells reflected by the signal split correspond to a rather late leaf senescence event in oilseed rape leaves. However, the present study using data collected from a wider range of leaf ranks also demonstrated that structural changes were initiated very early in mature leaves. Indeed, the physiological process at the origin of this late event (split of the NMR component corresponding to the vacuolar water) highlighted by a continuous increase in the T2 value of the longest component corresponding to vacuolar water appears to have been initiated while the chlorophyll and dry matter content were still high.
Within the common framework of changes in relaxation times, some differences in the NMR parameters measured on plants grown under different conditions were observed and are discussed in the following paragraph.
Impact of the environmental conditions on leaf development
As mentioned above, some differences were observed in the NMR parameters measured on plants grown under different conditions (Fig. 4a, b). These differences revealed modifications in leaf development pattern caused by heterogeneous climate conditions with marked and abrupt changes in temperature and/or humidity. Although the T2 of the signal component associated with vacuolar water before the split (leaf rank 0) was very similar, a difference was observed in the leaf ranks at which the maximum rate of the increase in the T2 value occurred. This indicated that changes in the vacuolar volume associated with the T2(5) increase [8] were precocious in field plants. This is in accordance with the measurements of chlorophyll content (Fig. 5a) and is probably explained by more marked changes in environmental conditions during leaf development.
The environment perceived by plants has an effect on the progress of leaf senescence but not necessarily on the onset of senescence [31]. It is well known that several factors, including an unbalanced sink-source ratio, may initiate leaf senescence. For instance, in the case of a high source-sink ratio and photoassimilates, inhibition of feedback by photosynthesis may trigger senescence, whereas in opposite high sink activity may trigger leaf senescence through the remobilization of nutrients (mainly nitrogen) [32]. The effect of N status on the induction of senescence may also be driven by plant archichecture and the quantity and quality of light that reaches the leaf [33]. In sunflower, it has been reported that nitrogen export is promoted by a low red/infra-red ratio rather than by the amount of light [34]. Nevertheless, in the present study, it seems that the main discriminating factor at the origin of the differences observed in the senescence patterns described by NMR signal components corresponding to the vacuolar water and other parameters was the vernalization process and the physiological status of plants and not to the heterogeneous conditions prevailing in the field experiment (in contrast to the homogenous conditions in the growth cabinet). The role of vernalization in the regulation of leaf senescence has been reviewed in [35]. Cold temperatures can affect both plant development and leaf senescence [36]. Vernalization has an impact through different mechanisms. While infertility increases the life span of the whole plant through the production of additional young leaves [37], as was the case in the non-vernalized plants analyzed in [8], the vernalization process initiates the development of the new reproductive organ with high sink activity. It has also been reported that leaf senescence may speed up with flowering [38]. In the present study, measurements were made at the stem elongation stage during which the process of nutrient remobilization from the old leaves was emphasized. Finally, it should be noted that, in the experiment corresponding to the non-vernalized plants [8], the nutritive solution was supplied twice a week, while in the case of the vernalized plants ([7] and the present study) N was supplied according to the plant development. In both cases, the plants were grown under optimum nutrient conditions but in the case of the vernalized plants, the level of the available nutrients in the soil was irregular.
The young leaves from the vernalized plants (present experiment and [7]) were characterized by higher dry weight than those of the non-vernalized plants [8]. This is probably due to a more active cell division induced by cold stress [39] associated with a higher number of cell layers. However, the dry mass decreased markedly in senescing leaves sampled on vernalized plants, whereas the decrease was minor in leaves taken from non-vernalized plants. This phenomenon has been explained by the appearance of large intercellular grass-filled spaces and a resulting decrease in the number of cells [7]. This means that although senescence is characterized by an increase in vacuole volume [7,8,9] reflected by an increase in water content and in T2, the total amount of the vacuolar water expressed by LWW (4 + 5) decreased in senescing leaves.
Concerning the NMR signal components corresponding to water in cell compartments other than vacuoles (T2 range up to 30 ms), very similar results were obtained in the field experiment and in the experiments conducted under controlled conditions in about 50% of the leaves analyzed. In the remaining leaves, the number of components detected in this T2 range was one or three, instead of two components as expected from [7, 8]. Considering that in all cases, the sum of the intensities of all these components (one, two or three) represented appromimately the same percentage of the total signal for a given leaf rank, it can be assumed that these T2 peaks corresponded to the same water pools (apoplastic water and water inside starch granules and chlorophyll) as proposed in [8]. However, due to the heterogeneous environmental factors (for example, the amount of light received by the leaves), it was not possible to distinguish the corresponding water pools according to their T2 values in all cases. It will be interesting to clarify this point in further experiments, although this is not crucial for the purpose of phenotyping.
Considering a negative shift of five leaf ranks for the chlorophyll content curve (Fig. 5a) corresponding to the non-vernalized plants, a perfect match was observed in the three sets of data. It would be therefore possible to argue that structural changes associated with senescence occur earlier in non-vernalized plants. This can be explained by the fact that, as mentioned above, leaf tissues of non-vernalized plants had fewer cell layers and a thinner cuticle. For this reason, it is possible to conclude that (1) structural changes occurring in the leaf throughout the canopy are a constant feature in the development of the leaf in oilseed rape independently of environmental and growing conditions, (2) it is possible to monitor these structural changes with NMR relaxometry in the field as in a controlled environment, but (3) the NMR split scale should only be used for comparison within a set of measurements made in the same set of growing conditions.
NMR experiment in outdoor conditions
Until now, only a few mobile NMR [14] and MRI [27,28,29] systems have been used to investigate plants in their natural (outdoor) environment. Although the constraints of NMR and MRI approaches are not exactly the same, both techniques have to deal with the effects of the environmental conditions on the NMR/MRI devices. All outdoor measurements were performed with permanent magnets that are the best suited for mobile devices but have the disadvantage of being very sensitive to variations in temperature. In the first instance, the temperature drift of the magnetic field is limited by thermal insulation of the magnet [27, 28] and is generally further compensated by using the field frequency lock approach. However, because changes in air temperature in outdoor environments are usually much larger than in standard laboratories, shifts in temperature can be a major problem in outdoor measurements [26]. Another problem with outdoor measurements is the role of the temperature variations in the NMR signal. In plant tissues, the dependence of the distribution of water proton transverse relaxation times on temperature is complex, as several contributions may overlap [40]. Indeed, the temperature affects molecular mobility and the diffusive exchange of water between the subcellular compartments (diffusion coefficients and membrane permeability). As a result, both T2 and peak amplitudes change with temperaure, and are thus a possible source of interpretation errors.
Malone et al. [24] isolated the variation in the signal due to biological factors from that due to changes in the temperature of the detector. These authors characterized the sensitivity of their system to temperature and built a model that accounted for the particular linear dependence of the detector circuitry on temperature for the prediction of the variation in the NMR signal. The method was shown to be effective in a greenhouse experiment in which the average daily temperature variation was 10 K, with an average daily high near 305 K. However it should be tested for higher temperature variations before the method is used in outdoor conditions. The temperature stability of the samples is also critical for measurements of relaxation times as both signal ampliture and relaxation times depend to a great extent on temperature. This issue has been addressed in only a limited number of studies. Windt et al. [17] addressed a problem of variations in the temperatures of the sample and of the spectrometer by continuously monitoring the temperature of the tree stem in the NMR device and the temperature of the spectrometer. They observed slight differences in the amplitude of the NMR signal, which they attributed to variations in the temperature of the object (due to a shift in the Boltzmann equilibrium) and temperature-induced differences in the signal amplification factor of the spectrometer. A temperature correction factor was applied to compensate for these differences. Another possible approach is to equip the NMR device with a temperature control device for a sample. Anferova et al. [41] developed two different types of mobile temperature control units compatible with the NMR-MOUSE for analysis of rubber by transverse NMR relaxation. For this application, stable temperatures of the samples were critical, and temperature stability of better than 0.5 °C was needed for good reproducibility of measurements. The use of the device developed and tested here extends the range of possible investigations with the NMR-MOUSE to samples in environments with marked variations in temperature.
In the present study, the effects of the environmental conditions on both the device and the sample were controlled. The operating temperature of the benchtop NMR device magnet was maintained through a magnet heating system. The stable temperature of the sample was ensured by a variable temperature control units within 0.1 °C. The NMR mobile laboratory therefore made it possible to perform measurements in exactly the same controlled conditions throughout the experiment, despite the fact the measurements were performed in an environment with no temperature regulation.
As described in "Methods", after the NMR experiment, the samples were transferred to the laboratory for the determination of water content and dry weight of leaves. These measurements were not possible in the field, as weighing the samples requires a precision balance, which was shown to be sensitive to transport. However, by measuring the free induction decay (FID) signal in addition to the CPMG curve, it is possible to descriminate between water and dry matter and to use the relationship established in [9] to estimate the water content and dry mass of the leaf samples. This would make it possible to overcome the difficulty of using a balance in the field in further experiments.
The mobile NMR laboratory developed in this study was shown to be able to perform accurate outdoor characterization of oilseed rape leaves throughout the canopy. The approach used enabled in situ assessment of physiological status of leaves from plants grown in their natural environment without disturbing the plants themselves. The study enabled the comparison between the patterns of NMR signal evolution from plants grown under well-controlled conditions and plants grown in the field. The method described here provides new opportunities for fine phenotyping and monitoring of plant development in the natural environment. It can be used for the selection of oilseed rape genotypes with high tolerance to water or nitrogen depletion. Future investigations should extend the method to other crops.
Gitelson AA, Gritz Y, Merzlyak MN. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J Plant Physiol. 2003;160:271–82.
Gironde A, Poret M, Etienne P, Trouverie J, Bouchereau A, Caherec F, Leport L, Orsel M, Niogret M-F, Deleu C, Avice JC. A profiling approach of the natural variability of foliar N remobilization at the rosette stage gives clues to understand the limiting processes involved in the low N use efficiency of winter oilseed rape. J Exp Bot. 2015;66:2461–73.
Jullien A, Mathieu A, Allirand JM, Pinet A, de Reffye P, Cournede PH, Ney B. Characterization of the interactions between architecture and source–sink relationships in winter oilseed rape (Brassica napus) using the GreenLab model. Ann Bot. 2011;107:765–79.
Agati G, Foschi L, Grossi N, Volterrani M. In field non-invasive sensing of the nitrogen status in hybrid bermudagrass (Cynodon dactylon × C. transvaalensis Burtt Davy) by a fluorescence-based method. Eur J Agron. 2015;63:89–96.
Pfuendel EE, Ben Ghozlen N, Meyer S, Cerovic ZG. Investigating UV screening in leaves by two different types of portable UV fluorimeters reveals in vivo screening by anthocyanins and carotenoids. Photosynth Res. 2007;93:205–21.
Evans T, Song J, Jameson PE. Micro-scale chlorophyll analysis and developmental expression of a cytokinin oxidase/dehydrogenase gene during leaf development and senescence. Plant Growth Regul. 2012;66:95–9.
Sorin C, Leport L, Cambert M, Bouchereau A, Mariette F, Musse M. Nitrogen deficiency impacts on leaf cell and tissue structure with consequences for senescence associated processes in Brassica napus. Bot Stud. 2016;57:1–14.
Sorin C, Musse M, Mariette F, Bouchereau A, Leport L. Assessment of nutrient remobilization through structural changes of palisade and spongy parenchyma in oilseed rape leaves during senescence. Planta. 2015;241:333–46.
Musse M, De Fransceschi L, Cambert M, Sorin C, Lecaherec F, Burel A, Bouchereau A, Mariette F, Leport L. Structural changes in senescing oilseed rape leaves at tissue and subcellular levels monitored by nuclear magnetic resonance relaxometry through water status. Plant Physiol. 2013;163:392–426.
Van As H. Intact plant MRI for the study of cell water relations, membrane permeability, cell-to-cell and long distance water transport. J Exp Bot. 2007;58:743–56.
Eidmann G, Savelsberg R, Blümler P, Blümich B. The NMR MOUSE, a mobile universal surface explorer. J Magn Reson Ser A. 1996;122:104–9.
Blumich B, Casanova F, Dabrowski M, Danieli E, Evertz L, Haber A, Van Landeghem M, Haber-Pohlmeier S, Olaru A, Perlo J, Sucre O. Small-scale instrumentation for nuclear magnetic resonance of porous media. New J Phys. 2011;13:015003.
Di Tullio V, Capitani D, Atrei A, Benetti F, Perra G, Presciutti F, Proietti N, Marchettini N. Advanced NMR methodologies and micro-analytical techniques to investigate the stratigraphy and materials of 14th century Sienese wooden paintings. Microchem J. 2016;125:208–18.
Capitani D, Brilli F, Mannina L, Proietti N, Loreto F. In situ investigation of leaf water status by portable unilateral nuclear magnetic resonance. Plant Physiol. 2009;149:1638–47.
Casieri C, Senni L, Romagnoli M, Santamaria U, De Luca F. Determination of moisture fraction in wood by mobile NMR device. J Magn Reson. 2004;171:364–72.
Windt CW, Soltner H, Dusschoten DV, Blümler P. A portable Halbach magnet that can be opened and closed without force: the NMR-CUFF. J Magn Reson. 2011;208:27–33.
Windt CW, Blumler P. A portable NMR sensor to measure dynamic changes in the amount of water in living stems or fruit and its potential to measure sap flow. Tree Physiol. 2015;35:366–75.
Van As H, Reinders JEA, De Jager PA, Van de Sanden PACM, Schaafsma TJ. In situ plant water balance studies using a portable NMR spectrometer. J Exp Bot. 1994;45:61–7.
Scheenen TWJ, Vergeldt FJ, Heemskerk AM, Van As H. Intact plant magnetic resonance imaging to study dynamics in long-distance sap flow and flow-conducting surface area. Plant Physiol. 2007;144:1157–65.
Windt CW, Vergeldt FJ, De Jager PA, Van As H. MRI of long-distance water transport: a comparison of the phloem and xylem flow characteristics and dynamics in poplar, castor bean, tomato and tobacco. Plant Cell Environ. 2006;29:1715–29.
Oligschläger D, Rehorn C, Lehmkuhl S, Adams M, Adams A, Blümich B. A size-adjustable radiofrequency coil for investigating plants in a Halbach magnet. J Magn Reson. 2017;278:80–7.
Yoder J, Malone MW, Espy MA, Sevanto S. Low-field nuclear magnetic resonance for the in vivo study of water content in trees. Rev Sci Instrum. 2014;85:095110.
Rokitta M, Rommel E, Zimmermann U, Haase A. Portable nuclear magnetic resonance imaging system. Rev Sci Instrum. 2000;71:4257–62.
Malone MW, Yoder J, Hunter JF, Espy MA, Dickman LT, Nelson RO, Vogel SC, Sandin HJ, Sevanto S. In vivo observation of tree drought response with low-field NMR and neutron imaging. Front Plant Sci. 2016;7:564.
Okada F, Handa S, Tomiha S, Ohya K, Kose K, Haishi T, Utsuzawa S, Togashi K. Development of a portable MRI for outdoor measurements of plants. In 6th Colloquium on mobile magnetic resonance, Aachen, Germany; 2006.
Kimura T, Geya Y, Terada Y, Kose K, Haishi T, Gemma H, Sekozawa Y. Development of a mobile magnetic resonance imaging system for outdoor tree measurements. Rev. Sci. Instrum. 2011;82:053704.
Jones M, Aptaker PS, Cox J, Gardiner BA, McDonald PJ. A transportable magnetic resonance imaging system for in situ measurements of living trees: the tree hugger. J Magn Reson. 2012;218:133–40.
Geya Y, Kimura T, Fujisaki H, Terada Y, Kose K, Haishi T, Gemma H, Sekozawa Y. Longitudinal NMR parameter measurements of Japanese pear fruit during the growing process using a mobile magnetic resonance imaging system. J Magn Reson. 2013;226:45–51.
Nagata A, Kose K, Terada Y. Development of an outdoor MRI system for measuring flow in a living tree. J Magn Reson. 2016;265:129–38.
Mariette F, Guillement J, Tellier C, Marchal P. Continuous relaxation time distribution decomposition by MEM. Data Handl Sci Technol. 1996;18:218–34.
Borras L, Maddonni GA, Otegui ME. Leaf senescence in maize hybrids: plant population, row spacing and kernel set effects. Field Crops Res. 2003;82:13–26.
Rajcan I, Tollenaar M. Source: sink ratio and leaf senescence in maize: I. Dry matter accumulation and partitioning during grain filling. Field Crops Res. 1999;60:245–53.
Drouet JL, Bonhomme R. Do variations in local leaf irradiance explain changes to leaf nitrogen within row maize canopies? Ann Bot. 1999;84:61–9.
Rousseaux MC, Hall AJ, Sanchez RA. Light environment, nitrogen content, and carbon balance of basal leaves of sunflower canopies. Crop Sci. 1999;39:1093–100.
Wingler A. Interactions between flowering and senescence regulation and the influence of low temperature in Arabidopsis and crop plants. Ann Appl Biol. 2011;159:320–38.
Masclaux-Daubresse C, Purdy S, Lemaitre T, Pourtau N, Taconnat L, Renou JP, Wingler A. Genetic variation suggests interaction between cold acclimation and metabolic regulation of leaf senescence. Plant Physiol. 2007;143:434–46.
Nooden LD, Penney JP. Correlative controls of senescence and plant death in Arabidopsis thaliana (Brassicaceae). J Exp Bot. 2001;52:2151–9.
Lacerenza JA, Parrott DL, Fischer AM. A major grain protein content locus on barley (Hordeum vulgare L.) chromosome 6 influences flowering time and sequential leaf senescence. J Exp Bot. 2010;61:3137–49.
Manupeerapan T, Davidson J, Pearson C, Christian K. Differences in flowering responses of wheat to temperature and photoperiod. Crop Pasture Sci. 1992;43:575–84.
Hills BP, Lefloch G. NMR-studies of non-freezing water in cellular plant-tissue. Food Chem. 1994;51:331–6.
Anferova S, Anferov V, Adams M, Fechete R, Schroeder G, Blumich B. Thermo-oxidative aging of elastomers: a temperature control unit for operation with the NMR-MOUSE. Appl Magn Reson. 2004;27:361–70.
All the authors cooperated on all the experiments reported in this article and participated in the data analyses. All authors read and approved the final manuscript..
This work was supported by the program "Investments for the Future" (Project ANR-11-BTBR-0004 "RAPSODYN"). We are most grateful to the PRISM core facility (Rennes, France) for access to the facilities, the Genetic Resources Center (Bracy Sol, BRC, UMR IGEPP, INRA Ploudaniel, France) for providing the seeds of the Aviso variety and the Experimental Unit of "La Motte" (INRA Le Rheu, France) for the trial setup and management. We thank Françoise LE CAHEREC, Marie-Françoise NIOGRET, Carole DELEU, Nathalie NESI, Anne LAPERCHE and Christine BISSUEL (IGEPP) for setting up the field experiment and for cooperation on this study. We also thank Michel Loubat and Yves Diascorn (IRSTEA) for the technical assistance in setting up the mobile NMR lab and Patrick Leconte, Elise Alix and Bernard Moulin (IGEPP) for technical support in plant management.
The authors declare that they have no competing interests
All data generated or analyzed during this study are included in this published article [and its supplementary information files].
IRSTEA, OPAALE, 17, avenue de Cucillé, 35044, Rennes Cedex, France
Maja Musse, Mireille Cambert, William Debrandt, Clément Sorin & François Mariette
Université Bretagne Loire, Rennes, France
Maja Musse, Laurent Leport, Mireille Cambert, William Debrandt, Clément Sorin, Alain Bouchereau & François Mariette
INRA, UMR 1349 IGEPP-Institut de Génétique, Environnement et Protection des Plantes, UMR INRA – Agrocampus Ouest-Université de Rennes 1, Domaine de la Motte, 35653, Le Rheu Cedex, France
Laurent Leport, William Debrandt, Clément Sorin & Alain Bouchereau
Maja Musse
Laurent Leport
Mireille Cambert
William Debrandt
Clément Sorin
Alain Bouchereau
François Mariette
Correspondence to Maja Musse.
Musse, M., Leport, L., Cambert, M. et al. A mobile NMR lab for leaf phenotyping in the field. Plant Methods 13, 53 (2017). https://doi.org/10.1186/s13007-017-0203-5
Transverse relaxation (T2)
Leaf senescence
|
CommonCrawl
|
Artificial intelligence for channel estimation in multicarrier systems for B5G/6G communications: a survey
Evandro C. Vilas Boas ORCID: orcid.org/0000-0002-7225-77831,
Jefferson D. S. e Silva1,
Felipe A. P. de Figueiredo ORCID: orcid.org/0000-0002-2167-72861,
Luciano L. Mendes ORCID: orcid.org/0000-0002-1996-72921 &
Rausley A. A. de Souza ORCID: orcid.org/0000-0002-6179-98941
EURASIP Journal on Wireless Communications and Networking volume 2022, Article number: 116 (2022) Cite this article
Multicarrier modulation allows for deploying wideband systems resilient to multipath fading channels, impulsive noise, and intersymbol interference compared to single-carrier systems. Despite this, multicarrier signals suffer from different types of distortion, including channel noise sources and long- and short-term fading. Consequently, the receiver must estimate the channel features and compensate it for data recovery based on channel estimation techniques, such as non-blind, blind, and semi-blind approaches. These techniques are model-based and designed with accurate mathematical channel models encompassing their features. Nevertheless, complex environments challenge accurate mathematical channel estimation modeling, which might neither be accurate nor correspond to reality. This impairment decreases the system performance due to the channel estimation accuracy loss. Fortunately, (AI) algorithms can learn the relationship among different system variables using a model-driven or model-free approach. Thereby, AI algorithms are used for channel estimation by exploiting its complexity without unrealistic assumptions, following a better performance than conventional techniques under the same channel. Hence, this paper comprehensively surveys AI-based channel estimation for multicarrier systems. First, we provide essential background on conventional channel estimation techniques in the context of multicarrier systems. Second, the AI-aided channel estimation strategies are investigated using the following approaches: classical learning, neural networks, and reinforcement learning. Lastly, we discuss current challenges and point out future research directions based on recent findings.
Multicarrier systems rely on transmitting data over several subcarrier signals, offering significant advantages compared to single-carrier systems [1, 2]. For example, multicarrier modulation (MCM) splits a wideband channel into overlapping narrowband subcarriers, yielding high spectral efficiency and throughput. In addition, these systems are resilient to multipath fading channels, impulsive noise interference, and intersymbol interference (ISI) [2, 3]. Due to the development of digital signal processing, the MCM has been implemented in different wireless communication systems. For instance, the orthogonal frequency division multiplexing (OFDM) modulation has been applied to the long-term evolution (LTE) system air interface [4]. Likewise, the 3GPP fifth-generation (5G) network technical specifications adopted the OFDM modulation in the new radio (NR) air interface for early deployment [1]. Concurrently, other multicarrier systems are also proposed for the beyond 5G (B5G) and sixth-generation (6G) mobile networks, such as filter bank multicarrier (FBMC), generalized frequency division multiplexing (GFDM), and universal filtered multicarrier (UFMC) [2, 5, 6].
The OFDM applies inverse fast Fourier transform (IFFT) and Fourier transform (FFT) to, respectively, modulate and demodulate a given signal with low complexity [1]. The conventional OFDM also adds a cyclic prefix (CP) to its symbol to mitigate ISI. Some OFDM waveform disadvantages comprise high peak-to-average power ratio (PAPR), frequency offset sensibility, and out-of-band leakage characteristics [1,2,3]. However, some techniques are introduced to OFDM systems to mitigate those drawbacks giving rise to some OFDM waveform variations, for example, wavelet OFDM, discrete Fourier transform spread OFDM, windowed OFDM, and resource block filtered [3, 7, 8].
The FBMC waveform uses non-orthogonal subcarriers generated based on distinct filtered pulses [9,10,11]. According to the filter design, a given subcarrier suffers intercarrier interference (ICI) only related to its adjacent subcarriers. Therefore, the FBMC improves spectral efficiency by removing the frequency guard band and drastically reducing the out-of-band leakage [9]. However, FBMC still has some disadvantages, like suffering from high PAPR. Cosine modulated multitone, filtered multitone, and discrete Fourier transform spread approaches are some techniques that reduce the PAPR, introducing FBMC waveform variations [3, 10].
GFDM and UFMC are seen as a variation of OFDM. The GFDM is a generalized conventional OFDM that maps different services into flexible subcarriers and CP by deploying different filters [3, 12]. GFDM is also robust to frequency offset and has low PAPR with a high ICI sensibility. On the other hand, the UFMC has been proposed to mitigate the ICI in OFDM systems by filtering a group of subcarriers to reduce the out-of-band leakage [13, 14]. It allows for relaxing the CP and carrier synchronization constraints.
The multicarrier signals are sensitive to carrier frequency offsets introduced by a mismatch between the local transmitter and receiver oscillators or due to high-mobility receivers in wireless communication systems [15]. The high-mobility receivers boost the Doppler effect phenomenon, which leads to ICI and system performance degradation. Multicarrier signals also suffer from other types of distortion, including channel noise sources and long- and short-term fading. Hence, the channel-imposed impairments must be evaluated and compensated at the receiver for data recovery. This process is accomplished through channel estimation and equalization techniques, involving a mathematical model that includes a channel matrix reflecting the relationship between the transmitted and the received signal [2, 16,17,18,19].
Traditional channel estimation techniques for multicarrier systems are classified into two main categories based on the sent signal knowledge at the receiver: blind- and non-blind-based approaches [2, 17,18,19]. Blind-based channel estimation extracts statistical properties from the received signals to avoid transmitting data training sequences during communication. Regardless, it requires a large amount of received data, resulting in performance degradation over fast-fading channels. Non-blind strategies rely on transmitting data known at the receiver for channel estimation, called pilot symbols. They outperform blind techniques at the cost of reducing spectral efficiency due to the pilots' symbols transmission [19, 20]. A hybrid method between blind and non-blind procedures is called semi-blind channel estimation. It comprises sending training data to initialize the estimator, followed by blind detection techniques.
Radio channel estimations are challenging due to the rise of time-varying and frequency selectivity introduced by the high randomness and environment-dependent statistical features driven by multipath propagation, transmitter and receiver mobility, and local scattering [2, 19, 20]. Consequently, the conventional model-based channel estimation techniques have performance limitations under complex channel conditions, such as fast time-varying, multipath fading, and nonlinear deep fading conditions [21,22,23]. These environments challenge accurate mathematical channel estimation modeling, which might not fully encompass the channel features. This impairment lowers a multicarrier system's performance due to the loss of channel estimation accuracy. However, AI-based learning algorithms can overcome those conditions by cramming the relationship among different system variables using either a model-driven or model-free approach.
AI enables devices to make decisions on their own based on past learning experiences. Instead of requiring hand-tuning, devices adapt their parameters to fluctuating environments to achieve the best operational state. Furthermore, the learning algorithms exploit the channel complexity without making unrealistic assumptions to outperform the conventional techniques under similar channels. Consequently, AI algorithms discard the need for accurate mathematical models for channel estimation, allowing for tracking parameter fluctuations over complex environments, undoubtedly encompassing those well-modeled channels. Thereby, AI-based channel estimation renews the channel estimation techniques and creates new ones. As a result, AI-based channel estimation approaches surpass the limitations of conventional methods, providing a high degree of estimation accuracy and improving communication systems performance [24].
AI-aided channel estimation studies are relevant to B5G/6G communications since AI itself is considered one of the foundations of future 6G networks [25, 26]. Moreover, B5G/6G networks are expected to operate in millimeter and terahertz frequencies to overcome bandwidth limitations and provide higher throughput. Hence, future radio communication systems will meet channels with grown complexity due to the increased attenuation (including the rain attenuation) and the high atmospheric absorption rates [25,26,27,28,29]. Beyond that, other required key technologies for B5G/6G, such as massive MIMO (mMIMO) and channel bandwidth improvement, will enlarge the transceiver's complex architecture and introduce new challenges to channel estimation [24, 29,30,31].
Wideband channels can be frequency-selective compared to narrowband ones since the frequency components will face distinctive fading [29]. While multicarrier systems mitigate this effect, channel estimation techniques must be able to acquire the channel state information (CSI) under different system architectures and environments. For instance, the mMIMO architecture requires a large number of antennas while demanding a great number of pilot symbols [29, 30]. On the other hand, the worldwide spectrum availability in millimeter and terahertz frequencies can boost the adoption of frequency division duplex, dropping out the reciprocity between the downlink and uplink channel and raising the need for periodic CSI feedback [24, 30, 31].
Channel estimation techniques will undoubtedly face a renewed set of complex channel conditions in these new frequency bands, including some early mentioned ones. Therefore, the studies circumventing the extended application of AI capabilities to boost well-known channel estimation approaches and introduce new techniques are crucial to the physical layer of future communication systems. Moreover, it contributes directly to reshaping and building an intelligent physical layer to optimize the system decision through virtualized tools [22, 24, 30,31,32,33]. In this regard, this work is devoted to comprehensively and thoroughly discussing how AI algorithms play a critical role in the field of channel estimation techniques.
Several surveys and reviews are in the multicarrier systems channel estimation field [2, 5, 16,17,18,19,20, 34,35,36,37,38]. They mainly discuss the conventional channel estimation techniques without mentioning AI integration. Also, dedicated works about channel estimation for OFDM systems provide a comprehensive review of the state of the art by the time it was published [16,17,18,19,20, 35,36,37,38]. Other authors addressed the channel estimation techniques within a comprehensive review of OFDM systems [37]. An extensive review of channel estimation for waveforms of next-generation networks, including OFDM, FBMC, GFDM, and UFMC schemes, is found in [2]. Recently, channel estimation techniques have been discussed for 5G and millimeter-wave communication systems, including but not limited to OFDM systems [5, 29].
The AI-based channel estimation approach was considered for intelligent wireless communication systems for 5G/6G future networks [21, 24, 30, 32, 33, 39,40,41,42,43,44,45,46]. The channel estimation process was presented as a physical layer application employing AI algorithms to improve the CSI acquisition accuracy [21, 39, 45, 46]. There are already machine learning (ML) techniques overviews for solving different challenges in a wireless network, with a discussion concerning the ML categories and pointing out several applications [32, 33]. For instance, a regression-aided technique was indicated for channel estimation in high-mobility and nonlinear deep fading scenarios. A comprehensive survey about ML in the vehicular network context is found in [43], reviewing and discussing the AI-based channel estimation techniques in the context of high-mobility OFDM systems.
Massive MIMO channel techniques for modeling and estimation were the focus of [30, 40], which briefly exploited channel estimation in OFDM systems. In [30], the channel characteristics were handled as an image processing problem in the context of deep learning (DL) networks. Meanwhile, DL application in mobile and wireless networks was conducted concerning the channel estimation techniques in the role of signal-driven processing [41], similar to the discussion in [44]. However, the channel estimation subject was not comprehensively reviewed, nor were the multicarrier systems included. The authors in [42] presented the channel estimation subject as a general approach without focusing on multicarrier systems, investigating DL in terms of the model-based block architecture and algorithm design. A comparison between DL-based and conventional channel estimation methods has been provided in [24]. Moreover, performance analysis of ML-based channel estimation was carried out in [47], while recurrent neural network (RNN) channel estimation was studied in [48]. Finally, a short review of DL for channel estimation was provided in [49] without focusing on multicarrier systems.
Several works have been carried out DL for physical layer applications [22, 23, 31, 50, 51]. A comprehensive overview of model-driven DL for physical layer communication was provided in [50]. It briefly concerned the model-driven advantages over the data-driven to leverage low complexity algorithms for channel estimation in OFDM and MIMO-OFDM systems. DL-based block-structured functions for the physical layer were approached in [31], which investigated joining channel estimation and signal detection in the context of data-driven. In [23], the authors summarized the DL-based physical layer applications in 5G wireless, demonstrating how DL could assist the channel estimation process. DL use-cases for physical layer applications for 6G communication systems are found in [22]. It discussed, in a general manner, the essential requirements and challenges on the physical layer in 6G future communication systems, highlighting the deployment strategies and key enabling technologies to employ DL. Some works in the channel estimation field were also cited, discussing their findings.
Motivation and contributions
The analysis of the surveys, magazines, and review papers regarding AI for channel estimation has shown a direct approach to demonstrate the ML and DL application in the 5G and 6G physical layer communication systems, as summarized in Table 1. However, a few papers have covered AI for channel estimation research, and they are limited to specific scenarios, like high-mobility systems [43] or partially enfolding the subject [51]. Other papers supplied a tutorial introduction to AI-aided channel estimation, addressing the performance of a specific technique or comparing the most recent ones.
Table 1 Summary of existing surveys, magazines, and review papers related to artificial intelligence for channel estimation in multicarrier systems
Motivated by the research growth of AI-based channel estimation in multicarrier systems, this work offers a comprehensive survey that covers the recent discoveries in the field, discusses them, and addresses future research directions. Therefore, this paper's main contributions are as follows:
An overview of channel estimation techniques for multicarrier systems comprising non-blind-, blind-, semi-blind-, and AI-based approaches, with the latter as a new group.
A tutorial discussion on different approaches to implement channel estimation based on AI. It covers the concepts and implementations impairments of the AI-aided model-based block-type, AI-aided block-type, and AI-aided block-type channel estimation joining function methods.
A comprehensive survey and discussion about the recent findings in AI-based channel estimation and its complexity, considering the classical learning techniques such as regression, evolutionary algorithm, dimensionality reduction, and Bayesian learning.
A comprehensive survey and discussion about the relevant neural network algorithms for channel estimation and their complexity, including feed-forward neural network, extreme learning machine, recurrent neural network, deep neural network, and autoencoder.
A discussion about the recent applications of reinforcement learning in channel estimation and their complexity.
A collection of open issues and future research opportunities to unwind the channel estimation for MCM communications systems, with an extension to single-carrier systems.
Organization of the paper
The research on the paper subject has shown extensive interest by the academic community in devoting ML algorithms to channel estimation techniques for OFDM and mMIMO-OFDM. This statement is mainly driven by the natural adoption of MCM for 4G and 5G mobile networks and other wireless systems. Hence, the authors have carried out OFDM principles to provide the fundamentals for guiding research after first contact with the technology. Despite the lack of research field extension, a few works were uncovered carrying out the AI-based channel estimation for FBMC, GFDM, and UFMC modulation techniques. These findings were also included in the paper discussion, presenting a complete state-of-the-art review.
Therefore, this survey is organized as shown in Fig. 1. A brief review of the OFDM principles is found in Sect. 2. Conventional, non-using AI channel estimation techniques for multicarrier systems are reviewed in Sect. 3, providing a background for further understanding of AI-aided techniques. The AI-aided channel estimation approach is discussed as a new set of techniques identifying their main aspects. Henceforth, classical learning-aided channel estimation techniques are reviewed in Sect. 4. Regression, evolutionary algorithm, dimensionality reduction, and Bayesian learning are covered in the context of supporting conventional channel estimation techniques. The neural network (NN)-aided channel estimation techniques are discussed in Sect. 5, mainly including feed-forward neural network (FFNN), extreme learning machine (ELM), RNN, and deep neural network (DNN). The relevant networks are compared concerning the AI-aided channel estimation characteristic presented in Sect. 3. End-to-end communication is also included and discussed in Sect. 5 since channel estimation is an intrinsic process learned by the autoencoder network, with the channel as a hidden layer. Finally, reinforcement learning techniques are addressed in Sect. 6. This emerging branch of AI has been recently investigated in the channel estimation context. Practical issues and open research topics are discussed in Sect. 7, and a conclusion is provided in Sect. 8.
OFDM principles
Due to the prevailing OFDM channel estimation techniques during the research, this section presents the OFDM fundamentals to provide a look inside this modulation technique and insights into the following sections. Regardless, we recommend looking inside the content in [2, 3, 9, 11, 12, 52] and the references therein for those also interested in reviewing the fundamentals of the FBMC, GFDM, and UFMC modulations.
General OFDM system structure
Figure 2 depicts a general OFDM system structure, where the first and last blocks are similar, being only arranged inversely [20, 35, 37]. The first block comprises the serial-to-parallel converter (S/P) and the mapping functions, while the last one includes the demapping and parallel-to-serial converter (P/S) blocks. The S/P and P/S blocks are responsible for converting the bits into parallel groups or serial streams, respectively. The mapping and demapping blocks convert the bits into quadrature and in-phase components and the opposite, respectively, according to the modulation scheme adopted by each subcarrier.
OFDM divides the available channel bandwidth into N different overlapping narrowband sub-channels. Instead of having one modulated single-carrier, N subcarriers are modulated to be the data bearers. In the time domain, single-carrier symbols of duration \(T_\text {s}\) are converted into symbols of duration \(T = N T_\text {s}\). In the frequency domain, each sub-channel is utilized by a different subcarrier so that all subcarriers are orthogonal. The following subcarrier frequency spacing equation achieves the orthogonality,
$$\begin{aligned} |f_i - f_k| = \frac{n}{T}, \end{aligned}$$
in which \(f_i\) and \(f_k\) are the ith and kth subcarrier frequencies, \(1 \le i, k \le N\), with \(i\ne k\), respectively, and n a positive integer number. The OFDM symbol is formed by summing up all of the modulated subcarriers.
In practice, the OFDM symbol to be transmitted is obtained using an inverse discrete Fourier transform (IDFT). The transmitter applies the IDFT to the in-phase and quadrature components of all subcarrier modulating symbols. Thus, the transmitted signal is represented by
$$\begin{aligned} s(t) = \mathfrak {F}^{-1}\{c_i\}, \end{aligned}$$
in which \(\{c_i\} = \{I_i + j Q_i\}\) with \(I_i\) and \(Q_i\) being the in-phase and quadrature components of the modulating symbols, respectively, \(1 \le i \le N\). At the receiver, discrete Fourier transform (DFT) is applied to the received OFDM symbol to separate each subcarrier signal.
Although the OFDM modulation does not present ICI, the symbols can interfere with one another, leading to interblock interference (IBI) [3, 7, 9]. This issue is handled using the CP, a copy of the OFDM's symbol end inserted at its beginning [19, 37]. Concerning the time-domain, the CP adds a guard time between OFDM symbols that avoids IBI as long as the time guard introduced by the CP is longer than the channel-imposed delay. Although CP combats IBI in OFDM systems, it affects the orthogonality among the OFDM modulated subcarriers. Hence, the receiver extracts the CP from the incoming signal before applying the DFT to separate each subcarrier signal at the receiver.
After using the DFT to obtain each subcarrier signal at the receiver (\(y_i\)), there are expected differences compared to each sent symbol (\(c_i\)). The channel influence and receptor noise mainly own this contrast. Nevertheless, these phenomena have been extensively studied and found to be stochastic, meaning that their impact cannot be precisely calculated but assessed in terms of probability. Hence, the channel influence and the receptor noise over the sent signal will likely change as the communication system operates.
The channel affects the modulated transmitted symbol in a multiplicative manner. In other words, it introduces a complex gain over the symbol that can increase or decrease its magnitude and phase. The additive white Gaussian noise (AWGN), intrinsic to every communication system, is added to the received symbol. Therefore, each received subcarrier symbol sample is represented by
$$\begin{aligned} y_i = h_i \times c_i + n_i, \end{aligned}$$
in which, \(h_i\) and \(n_i\) represent the channel frequency response (CFR) and the AWGN at each subcarrier, respectively.
The channel estimation block employs different techniques to estimate each \(h_i\) value and feed the equalize block. If those techniques are robust enough, the output of the equalization block assumes the form
$$\begin{aligned} y_{i_\text {Equalized}} = c_i + \frac{n_i}{h_i}, \end{aligned}$$
meaning that perfect estimation was achieved. When the channel estimation is imperfect, many issues arise, such as high bit error rate (BER), spectral inefficiency, an increase in the outage probability, and so forth [3, 7, 9, 19, 37].
The main focus of this paper is on the techniques employed to estimate the channel in a multicarrier system. Hence, the following sections address traditional and recently proposed techniques. The former is a collection of immutable methods of system functioning that are well known in the literature. The latter comprehends self-adjustable algorithms that have the potential to surpass traditional methods.
Channel estimation techniques for multicarrier systems
Classification of channel estimation techniques
This section overviews different conventional channel estimation techniques for multicarrier systems. It classifies them into blind, non-blind, and semi-blind approaches, as shown in Fig. 3. The blind methods are divided into two main subgroups: statistical and deterministic. The non-blind techniques are subclassified as data-aided and decision-directed channel estimation (DDCE). The former only uses the training sequence or pilot symbols for channel estimation, while the latter also employs the detected data symbol. Combining non-blind and blind methods results in a set of techniques called semi-blind. Applying AI-based techniques to the channel estimation field gives rise to a fourth group that uses ML, including DL algorithms. Since this work is dedicated to exploring the application of AI-based methods in the channel estimation area for multicarrier systems, we discuss its general characteristics and definitions herein.
Blind-based channel estimation techniques
Blind-based channel estimation techniques are classified as statistical and deterministic. Statistical techniques explore the cyclic statistical properties of the received signal in the channel estimation process. As a result, it underperforms beneath shorter data sample sequences due to the statistical dependence of data. On the other hand, deterministic methods rely on quantities of both received signal and channel coefficients. Still, the computational complexity for deterministic methods is higher than the statistic ones and increases as the constellation order grows at the transmitter side. However, deterministic methods converge faster than statistical methods.
Statistical blind-based channel estimation methods are based on either the second-order statistics (SOS) or higher-order statistics (HOS) of the received signal [53, 54]. The SOS approach requires signals with cyclostationary characteristics or channel diversity with single-input single-output (SISO) [55, 56]. Also, it demands less amount of data to obtain reliable statistical estimates related to the HOS approach. Indeed, the HOS has the advantage of providing system phase information without the need for channel diversity at the cost of a large amount of data sampling and computational capacity [53, 54].
HOS applications mainly rely on single-carrier and MIMO systems [53, 54, 57, 58]. They leverage the functional properties of the impulse response channel matrix through third- or fourth-order cumulants. Meanwhile, earlier SOS algorithms have been applied to multicarrier systems, such as OFDM [53, 59]. The transmitter-induced cyclostationarity inserted by adding the CP evaluation of the received signal autocorrelation matrix using SOS [53, 59,60,61,62]. Transmitter-induced cyclostationarity techniques rely on filterbanks and non-redundant linear precoding [53, 63,64,65,66]. These techniques are inserted before the MCM systems, enabling blind channel estimation at the system output through cross-correlation operations.
Blind channel estimation without the use of any statistics has also been proposed. The authors in [67] have shown that the channel matrix null space defines the channel parameters, forming the basis for the subspace blind channel estimation algorithm. These algorithms handle the orthogonality of the noise and the correlation matrix subspaces of the received signal to estimate the channel coefficient. The correlation matrix is also estimated through time-averaging over received samples. This technique outperforms several statistic-based methods, especially under a limited number of data. The concept of subspace-based techniques has been investigated in multicarrier direct-sequence code-division multiple-access (DS-CDMA) and multicarrier code-division multiple-access (MC-CDMA) systems to obtain timing and channel coefficient to deploy linear minimum mean squared error (MMSE) receivers [68,69,70].
Concerning OFDM systems, the subspace-based channel estimation method is proposed to save bandwidth by removing or utilizing inherited redundancy or reducing or eliminating the CP by taking advantage of virtual subcarriers [71,72,73,74,75]. The subspace-based channel estimation method for SISO-OFDM systems is generalized for MIMO-OFDM systems in [76]. Furthermore, a subspace combined SOS approach has been proposed for CP-MIMO-OFDM systems [77]. Since the channel must remain static during the estimation process, system performance is improved by reducing the time-averaging and exploiting the frequency correlation among adjacent OFDM subcarriers [78]. Reduced received blocks in CP- and zero padding (ZP-OFDM) is achieved by obtaining the correlation matrix from the cyclostationary properties of the received signal [79]. Meanwhile, a second approach is discussed by frequency-domain calculating the covariance matrix from a selected group of subcarriers [60].
Blind channel estimations are also implemented based on the finite-alphabet property of the information-bearing transmitted symbols [80,81,82,83,84]. The finite-alphabet approach overcomes the loss of channel identifiability in a subspace-based algorithm when the channel has nulls in subcarriers. These algorithms have been proposed for multicarrier and MIMO multicarrier systems [80,81,82,83,84]. A blind shortening channel estimation algorithm might also mitigate the ICI based on an adaptive time-domain equalizer (TEQ) [85, 86]. Other blind-based channel estimations make the most of the concept of expectation maximization (EM) algorithm, maximum-likelihood principle, minimum variance principle, and orthogonal space-time block codes (OS-TBCs) [87,88,89,90,91]. Blind adaptive algorithms are implemented based on normalized least mean square (NLMS), recursive least square (RLS), and variable step size approaches [92, 93]. These algorithms adapt their filter parameters to minimize the mean squared error (MSE) between the filter output and the signal.
Data-aided channel estimation techniques
Data-aided channel estimation techniques are common in multicarrier communication systems. First, the known information is multiplexed within the data symbols at specific positions at the transmitter. Next, the receiver uses this information to estimate the related channel impulse response (CIR). Finally, it implements an interpolation process among these isolated CIRs to estimate the channel for those unknown data symbols.
Data-aided channel estimation techniques are implemented using two conventional strategies. The first is a training-based channel estimation technique that relies on periodically knowing the transmitted information over one or more symbol periods. The second method considers sending general information within the data, giving rise to pilot-assisted channel estimation. Despite the approach, knowing the information requires a fraction of the signal bandwidth to be wasted, reducing the spectral efficiency compared with other channel estimation techniques. In addition to that, the interpolation process introduces errors in channel estimation.
Regarding multicarrier systems, conventional data-aided channel estimation uses least square (LS), MMSE, or least mean square (LMS) methods to estimate the CFR in training or pilot mode. Most researches deal with OFDM and MIMO-OFDM systems due to the extensive adoption of this MCM in a wireless network. Still, recent works are dedicated to generalizing the devoted OFDM data-aided channel estimation to other MCM, such as FBMC and GFDM [94,95,96]. The training sequence, also called block-type pilots, allows for tracking only channel frequency variations (slow fading channel) due to the one-dimensional (1D) periodicity, estimating the channel response at each subcarrier. The conventional method assumes the channel is the same within the training sequence periodicity [97]. In this case, the estimated channel is used for the consecutive received symbols until another training sequence arrives. Time-domain linear interpolation or higher-order polynomials are considered under fast-fading channels, with the cost of increasing the system latency [98, 99].
The pilot-assisted or comb-type pilot methods utilize scattered pilot patterns and, therefore, tracks time–frequency variation. The channel estimation accuracy depends on the pilot pattern and the interpolating algorithm. The two-dimensional time–frequency pilot space defines the former. The frequency-domain pilot spacing must ensure the estimation of channel frequency variation, which depends on the delay spread. On the other hand, the time-domain pilot placement is related to the Doppler spread. Several works have studied the optimum time–frequency pilot pattern to reduce the number of pilots while preserving the time–frequency variation sampling capabilities. Some OFDM and MIMO-OFDM approaches rely on optimally designing the pilot pattern to minimize the MSE during the channel estimation [100,101,102,103,104,105,106,107,108,109,110,111,112,113]. For instance, that has been demonstrated to be accomplished through equipowered and equispaced pilots [100, 101], optimum power and pilot space related to the lower bound of the average channel capacity [102], heuristic algorithm [103, 104], general interpolator [105], convex optimization algorithm [106], nonuniform placement [107,108,109], optimum pilot power and phase selection [110, 111], iterative algorithm [112], and hopping pilots scheme [113]. In addition, grouping pilot tones into some equispaced clusters can also improve the channel estimation under the MMSE criterion [114].
Analyzing the different pilot placement approaches provides a comprehensive conclusion on the need for adaptive pilot allocation schemes. Pilots transmitted with a power higher than the data symbols improve the channel estimation accuracy. However, it gives rise to the power allocation issue [115]. Furthermore, the power of pilots at different subcarriers must remain equal to meet the MMSE [100]. Superimposed training consists of transmitting data and pilot symbols within the same available resources with different power values, and avoids data rate loss [116,117,118,119,120]. Nonetheless, the channel estimation performance is decreased due to the interference introduced by the superimposed data symbols. Partial superimposed data is an alternative to improve the data rate, while channel estimation takes advantage of the aforementioned pilot-assisted methods [117, 119]. Other pilot design criteria remains on bit error minimization [121], MIMO preamble pilot design [122], channel tracking performance [123], channel capacity maximization [124, 125], multiuser pilot design [126, 127], and PAPR reduction [128,129,130].
The OFDM pilot-assisted methods can be extended to GFDM schemes due to their block-based modulation, as found in [131, 132] and the references therein. Regarding FBMC systems, the channel estimation techniques for FBMC offset quadrature amplitude modulation (OQAM) have been addressed. This FBMC scheme counts on real-valued OQAM symbols, relaxing the real-domain orthogonality of the ambiguity function in the FBMC-OQAM system [133, 134]. However, due to the inherent interference problem, this characteristic is insufficient for channel estimation purposes in FBMC-OQAM. Thus, the estimation techniques are established in two main categories: scattered pilots and preamble-based approaches [94, 135,136,137,138]. The former consists of an auxiliary pilot (AP) or a pair of pilots (POP). The AP method allows for canceling the interference at the transmitter while the POP combines the two adjacent pilots to estimate the channels real and imaginary parts. The latter encompasses training symbols periodically transmitted over three symbols for interference control.
Decision-directed channel estimation techniques
The DDCE techniques use the data-aided strategy with the detected data symbols in the channel estimation process [19, 20]. First, it employs the detected symbols to estimate the channel, which, in turn, is applied to estimate the incoming data. Later, a channel estimation update uses this data, extending the process until all the symbols are counted. Finally, the decision is based on a bitwise approach or forced constellation points, defining the soft [139] or hard techniques, respectively [140, 141]. Using detected symbols introduces some disadvantages to DDCE techniques under fast-fading channel estimation. First, the estimation process is based on outdated data, decreasing the system performance. Once the current channel might not correspond to the one in which the incoming symbols have propagated, symbol error detection is introduced. The new symbols are fed back into the process to update the channel, leading to the propagation of the error estimation. In this case, the training symbols transmission periodicity must be adjusted according to the channel characteristic [19, 141].
The DDCE methods are addressed for OFDM systems with different approaches. They include joint estimation of carrier frequency offset (CFO) and sampling clock frequency offset [140], sample-spaced and fractionally spaced CIR [142], generalized M estimators for mitigating error propagation [141], LS and least MMSE estimators [143], maximum a posteriori channel estimation [144], hard decision signal-to-noise ratio (SNR)-assisted residual CFO estimation [145], joint CIR and noise variance estimation [146], subspace algorithm [147], time-domain channel equalizer [144], and EM algorithm [81]. Performing soft DDCE based on selecting reliable data tones purified by inter stream interference cancelation is proposed in [139]. In [148], it considers a DDCE channel estimator based on OFDM packets consisting of a preamble followed by data symbols. The technique leverages the temporal correlation in channel responses over adjacent OFDM symbols. Further, pilot symbols extract correlation in the CFR across nearby subcarriers to decrease the effect of decision errors in the time domain through frequency-domain averaging. Other works depend on reducing DDCE technique complexity for OFDM systems using transmit diversity [149,150,151].
Semi-blind channel estimation techniques
Semi-blind channel estimation techniques combine both non-blind and blind methods [152, 153]. The hybrid solution allows for better tracking of channel variations by sending training data at the beginning of the transmission interval to initialize the estimator. Similar to the previous techniques, most works have been dedicated to exploiting semi-blind channel estimation for OFDM and MIMO-OFDM systems. For instance, blind subspace algorithms combined with training sequences explore the signal SOS [74, 154,155,156]. Furthermore, first-order statistics of the received signal have been used for semi-blind channel estimation in pseudo-random postfix OFDM systems using weighted pseudo-random postfix sequences [157].
Semi-blind algorithms are accomplished using HOS or SOS within linear prediction like training-based LS [158]. SOS has been proven helpful in ICI and ISI suppression in a time-domain equalizer [159]. In [160], superimposed training is applied with the Gaussian maximum-likelihood criterion. Then, semi-blind estimation methods are used with sparse channels [161,162,163]. Some approaches employ the SOS signal to express the received signal's correlation matrix utilizing the most significant taps (MST) [161, 164, 165]. Thereafter, the MST estimation is performed based on a training-based LS criterion. Concerning the GFDM and FBMC systems, there are a few semi-blind approaches for channel estimation that are similar to those discussed for OFDM [166,167,168].
AI-aided channel estimation techniques
AI-aided channel estimation aspects
The conventional channel estimation techniques use a model-based design, requiring accurate mathematical models according to the channel attributes. In addition, complex environments are challenging for designing channel estimating mathematical models, which might not correspond to reality. This impairment decreases the system performance due to the loss of accuracy in the channel estimation process. AI-based algorithms have the property of learning the relationship among different system variables without the knowledge of a mathematical model [30, 32]. This model-free approach allows leveraging the channel complexity without unrealistic assumptions, following a better performance than conventional techniques under the same channel [24]. Figure 4 shows the system implementation, training process, strategy, and supervision learning-level aspects related to the AI-aided channel estimation techniques.
The traditional wireless communication systems design depends on block-type transmitter and receiver structures. The different system functions are deployed as independent blocks and can be described as mathematical models. This approach supports block-by-block optimization to enhance the system overall performance. The channel estimation process is also deployed as an independent block functionality. Consequently, the AI algorithms can be added to the model-based design to strengthen the channel estimation through channel parameters prediction, defining the AI-aided model-based block-type channel estimation (AMBCE) approach [31]. Further, the AI algorithm can replace the model-based channel estimation, resulting in the AI-aided block-type channel estimation (ABCE) methods.
The AI learning capacity yields different joint functions at the transmitter or the receiver. For instance, combining the channel estimation process with signal detection [31]. This approach is defined as AI-aided block-type channel estimation joining function (ABCEx). An extension of this concept comprises modeling both the transmitter and receiver as a unique AI network resembling autoencoder models [30, 45, 46, 50]. From the implementation point of view, the system is seen as an end-to-end solution with a single block, whereas the channel is a hidden layer. This strategy is also considered in the review process since channel estimation is an intrinsic function.
Regarding the supervision level, AI-based channel estimation algorithms are grouped into supervised, unsupervised, and reinforcement learning, a conventional classification of ML algorithms [21, 169, 170]. The former is efficient but requires a labeled dataset for training purposes. On the other hand, unsupervised learning observes a random dataset to extract patterns to model the process and predict its behavior. This AI algorithm is quite useful when the system data are vast. Finally, reinforcement learning introduces interactions between the system and its experience performance using feedback rewards and penalties.
There are aspects related to the AI-aided channel estimation techniques concerning the training process. The standard AI networks are data-driven, with the network structure trained using a large amount of data. This approach is extended to the AI-aided channel estimation technique, giving rise to some impairments. Moreover, the standard algorithm requires a long training time, which may not be affordable for some wireless applications. Hence, the model-driven approach is an alternative to solve these drawbacks, comprising a model, an approach, and a network [22, 50, 171]. The model is based on physical mechanisms and domain knowledge to provide general solution guidance to design an algorithm as a solution. As a result, the AI network is deployed based on the unfolding algorithm process, which demands less training time and data.
Online and offline training strategies are considered for AI-aided channel estimation techniques [22, 50]. The former trains with a large amount of data as they come from different communication systems or reliable simulators, whereas the latter means training with a static dataset. Offline training is not affordable for complex environments due to the static training model, eliminating the capability of tracking channel variation effects. Also, the static characteristic reduces the AI network training for communication systems related to the training dataset. Furthermore, online training introduces real-time updating of the AI network parameters by tracking the channel effects variation and extending the application range to different practical environments.
Classification of AI-aided channel estimation techniques
The AI-aided channel estimation strategies survey has revealed three practical approaches: classical ML, NNs, and reinforcement learning (RL). The former consists of applying regression, dimensionality reduction, and Bayesian learning to improve conventional channel estimation methods performance, as shown in Fig. 5. The estimator preserves the block-type structure based on the AMBCE approach with supervised learning. The NN-based estimator comprises AMBCE, ABCE, and ABCEx structures depending on the proposed approach. The data-driven or model-driven procedures are also recurrent. Different schemes are surveyed among the NN structures, as presented in Fig. 5. The autoencoder network is classified as ABCEx, with data-driven nature and online training. The reinforcement learning branch is also covered with pioneer works studying the Q-learning technique. The following sections look at the relevant works in the AI-aided channel estimation for the multicarrier systems field.
Classical learning-aided channel estimation techniques
The classical learning techniques are discussed in this section, focusing on their applications to conventional model-based techniques. The linear, polynomial, and nonlinear regression algorithms are early basic applications of ML concepts for channel estimation. Support vector regression (SVR) has recently been raised as a potential regression strategy in AI-aided channel estimation techniques [172, 173]. The evolutionary algorithm has also been applied to channel estimation, whereas the genetic algorithm is more widely used than other evolutionary techniques. In parallel, dimensionality reduction has been investigated as an iterative algorithm estimator technique. It has been revisited in the AI era as an interesting technique to reduce voluminous datasets while preserving their information and refining DNN strategies [174]. Finally, the Bayesian learning techniques are also applied to iterative algorithms and are seen as primary stages of ML.
Regression algorithms consist of a statistical process defining a relationship between two dataset-related variables. The main concept is to find a function that best fits the training data behavior to perform predictions. For example, linear [175,176,177], polynomial [178], 2D nonlinear [99, 179, 180], and support vector [172, 173, 181,182,183,184,185,186,187] regressions have been employed in channel estimation for multicarrier systems. Regression algorithms go under the supervised learning paradigm. The following works considered the regression strategy applied in an AMBCE manner.
Linear and polynomial regression
Linear regression involves finding a linear equation to predict the value of a dependent variable (y) according to a given data value, called an independent variable (x). The linear equation is given as \(y = ax+b\), where a is the slope of the linear function and b is the intersection with the y axis [188]. The method considers simple linear regression with a single-input variable or a multiple linear regression comprising multiple inputs. The line best fitting the dataset values is obtained using an approximation based on an error criterion such as the MSE. However, there are datasets for which a linear curve does not represent the relationship between the independent and dependent variables. Therefore, the linear regression evolved into a polynomial regression by adding a polynomial of order higher or equal to two [189]. The polynomial degree is a hyperparameter that must be determined to avoid dataset over- or underfitting.
In connection with channel estimation, the linear regression algorithm enhances the interpolation process in data-aided methods, where the channel is first estimated through comb-type schemes at the pilot subcarriers. For instance, the linear regression is combined with a pilot-assisted iterative channel estimation [175], an LS estimator [176], and normalized MSE estimator [177]. An LS fitting (LSF) polynomial regression is derived from a linear MMSE to approximate the eigenvectors of the channel correlation matrix by orthogonal polynomials [178]. The MSE performance of the LSF is close to the linear MMSE when the polynomial degree is high or equals to two. The LSF advantage is the non-statistical strategy over the linear MMSE. Channel estimation and data detection are combined in a blind or semi-blind regression model approach [190]. The regression algorithm is applied to find the data sequence associated with the LS channel estimator within the set of possibilities.
Nonlinear regression
Nonlinear regression is a variation where the model function combines nonlinear parameters related to one or more independent variables. This regression is similar to the above-mentioned variations since they all call to find a curve or surface best fitting a dataset [191]. A look inside its application to the channel estimation for multicarrier systems has revealed that the nonlinear regression model is based on a time–frequency space (2D regression model) and combined with an initial channel estimation through an LS estimator [99, 179, 180]. First, the pilot carriers are applied to the LS estimator to calculate the channel at those taps. Next, the time–frequency plane is divided into the same block structure. Then, the 2D nonlinear regression is applied to each block to find a 2D surface function to minimize the Euclidean distance to the initial LS channel estimation at the pilot subcarriers. Finally, the regression function estimates the channel at the data symbol taps in the time–frequency domain grid. The results of BER have revealed an excellent approximation to the perfect channel estimation.
Support vector regression
The support vector regression is an extension of the support vector machine algorithm for regression estimation problems [181, 192, 193]. This algorithm introduces the error acceptance flexibility into the regression field [192]. By taking the linear regression as an example, the goal is to minimize the squared error, while the SVR aims at minimizing the coefficient errors. Hence, the model absolute error is managed to be lower or equal to a maximum error (\(\epsilon\)). As a consequence, the model accuracy is handled by constrained specifications [193].
The SVR has been used to estimate nonlinear channels in OFDM and MIMO-OFDM systems. The SVR was combined with a data-aided channel estimation method like the previous regression techniques. A multi-regressor SVR was proposed to track the relationship between transmitted and received data through channel estimation, with BER performance similar to the MMSE [181]. Using the same training dataset, a BER comparison between the proposal and the radial basis function network (RBFN) has shown that the multi-regressor SVR exhibits lower values.
Moreover, a complex LS-SVR channel estimator for pilot-assisted OFDM systems was formulated by observing the signals time–frequency relationship, surpassing the LS estimator [182]. Next, the nonlinear SVR-based algorithm was extended to stand highly selective channels for OFDM systems [184,185,186]. Notably, a method was proposed based on a learning and estimation phase process to get the frequency response of a MIMO-OFDM system. This approach comprises mapping trained data into a high-dimensional space and using the structural risk minimization principle to leverage the regression estimation for the CFR function [186].
By combining the MMSE with the nonlinear SVR, the authors in [187] accomplished better channel estimation than the LS-SVR. The proposal was to map the input data into a finite-dimensional space to enable a higher-dimensional Hilbert space, similar to the approaches in [184, 185]. A nonlinear SVR-based algorithm implemented with a radial basis function kernel for LTE systems leveraged the information in the pilot subcarriers to estimate the CFR [183]. The algorithm leads to lower BER under the same SNR compared with the LS and feedback estimators, from a good approximation to a perfect estimation. The wavelet transform was used to obtain weights to improve twin SVR (TSVR) channel estimation in pilot-assisted OFDM systems operating in fast selective fading channels [172, 173]. The training samples are weighted according to their distance from the mean values filtered by the wavelet transform. The TSVR algorithm was evaluated in terms of BER and compared with other approaches, resulting in the TSVR being the closest to the perfect estimation curve.
Complexity discussion
Complexity-wise, linear and nonlinear regression estimation shows low complexity and can be adapted to MIMO systems [99, 175, 176, 179, 180]. On the other hand, SVR algorithms demand higher computational complexity but still reserve room for improvement and outperform the other algorithms [181, 182, 184, 185]. Models combining MMSE and SVR also require high computational complexity, but the authors in [187] claim it can be reduced.
Evolutionary algorithm
Evolutionary algorithms are convectional ML methods based on biological evolution mechanisms, aiming at the global minimum while not sticking to local minima. Some evolutionary algorithms are the genetic algorithm (GA), repeated weighted boosting search (RWBS), particle swarm optimization (PSO), differential evolution algorithms (DEA), and colony optimization [194, 195]. Among those approaches, GA [196,197,198,199,200,201,202], RWBS [203,204,205], and PSO [206,207,208,209] have been applied to channel estimation in multicarrier systems. Evolutionary algorithms are also exploited mainly in pilot pattern placement optimization, which is out of the scope of this work [210, 211]. This approach indirectly improves conventional estimators performance, defining the optimal pilot pattern without supporting the channel estimation process.
Genetic algorithm working principle
The GA solves a given optimization problem based on biological evolution, as shown in Fig. 6 [212]. First, the algorithm generates an initial population and evaluates each individual with a fitness value. After that, it selects the fittest individuals, discarding the others. Then, the remaining individuals are crossed-over to generate new ones, employing a mutation scheme to insert randomness. Finally, the new population is evaluated to rank the individuals for future replacement and selection. After reaching a given criterion or a predefined number of generations, the algorithm terminates.
The GA has been used for OFDM channel estimation based on AMBCE systems implementation. For instance, the GA was applied to yield weight optimization for NN, decreasing the iteration number, training time, and overall computational complexity [196]. The joint solution has overcome the MMSE estimator. Beyond that, the interpolation process has been replaced by a GA to assist a pilot-aided OFDM system in estimating the CFR of non-pilot subcarriers [197]. However, the results have proved that the approach has not outperformed the conventional techniques, claimed by the authors as a novel method. A blind channel estimator based on GA has been proposed using cyclostationarity and spectral factorization [201]. The solution has been shown to improve blind estimators by combining spectral factorization and GA compared to a subspace-based estimator from the literature.
Combining the LS and MMSE estimators has been accomplished using a GA [198]. The linear estimators generate the initial population to feed the GA and optimize the channel estimation. The GA allowed selecting the best channel estimation matrix among three candidates using a fitness function. Then, mutation operation is applied to the LS and MMSE, followed by a crossover process and a second mutation. The method was evaluated by comparing its normalized MSE with the standalone LS and MMSE implementation. In conclusion, the approach exhibited better results than the conventional estimators for binary phase-shift keying (BPSK) with a few iterations, which was overcome by the quadrature phase-shift keying (QPSK) modulation as the iteration numbers grew. Joint GA-based channel estimation and multiuser detection have also been carried out in rank-deficient scenarios [194, 199, 200, 202].
Repeated weighted boosting search
The RWBS is a guided stochastic global search optimization algorithm to solve complex problems [213]. It requires an initial random population related to the potential solutions. Then, the population is updated by replacing the worst individuals with a convex combination of the potential solutions [214]. Based on this concept, some channel estimation techniques were obtained for OFDM systems [203,204,205]. For example, this algorithm was used and modified to generate a candidate CIR vector approximating to the global optimum solution instead of summing the weighted candidate vectors [203]. This approach improves the convergence rate of the proposed estimator when compared with the conventional version (using the RWBS without modifying the generation process) at the cost of the worst performance. Still, under scenarios with a limited number of subcarriers, an assessment revealed the algorithms equivalent performance with faster convergence and low complexity.
Furthermore, joining channel estimation and multiuser detection for OFDM systems was accomplished by applying the RWBS algorithm to provide soft outputs to feed a forward error correction (FEC) decoder [204]. The joint solution iteratively estimates the CIR while trading information between the detector and estimator through the FEC capability. The results have shown the solution potential to equal the performance of the LS estimator and approach the maximum-likelihood multiuser detection with those using the perfect CIR. Despite the lack of comparison, the work under discussion might be a variation solution based on a GA [199]. Lately, Hanzo's research group has proposed a quantum-assisted RWBS algorithm for channel estimation with joint data detection [199]. They have claimed their quantum RWBS-based estimator differs from their previous work by adopting a different methodology for creating the individual population and maintaining the algorithms complexity. However, an evaluation comparison of their solutions has shown superior performance of the quantum RWBS algorithm [205].
Particle swarm optimization
Particle swarm optimization principle
The PSO concept relies on the social behavior of insects and sociable animals. Such an approach defines group behavior while also considering individual intelligence. Since a particle finds an optimal solution, the others attempt to pursue it considering its position, as shown in Fig. 7.
Inspired by this behavioral algorithm, some works have inserted it in the channel estimation context for OFDM and MIMO-OFDM systems. The channel parameters are estimated using iterative linear estimators and delivered to the PSO algorithm that works with it to improve the BER [206]. The BER comparison against the LS and least minimum mean squared error (LMMSE) estimator has shown that the proposal equals performance, mainly with the latter. Furthermore, using superimposed training symbols, a multi-objective PSO has been designed to join channel estimation and decoding in MIMO-OFDM systems [207]. The estimator analysis showed promising results under rank-deficient scenarios. Moreover, other approaches were proposed for joint channel estimation and data detection [208], or partial parallel interference cancelation through auxiliary PSO [209], with this last one outperforming the MMSE estimator.
GAs present more computational complexity than regression algorithms [196,197,198,199,200]. Mainly, GA-artificial neural network (ANN) exhibits \(10\%\) less number of iterations contrary to the conventional Levenberg–Marquardt (LM) multilayer perceptron (MLP) channel estimator [202]. RWBS proved less complex and able to achieve perfect channel estimation without requiring a large dataset [203,204,205]. PSO had its computational complexity varying according to the channel coefficients demanding a few iterations (100 at maximum) to overcome LMMSE estimate [206,207,208,209]. A comparison among some of the discussed evolutionary algorithms is available in [194].
Dimensionality reduction
The dimensionality reduction ML algorithm includes techniques aiming to reduce dataset dimensions, yielding better predictions [174]. Among those strategies, principal component analysis (PCA) and independent component analysis (ICA) are applied to multicarrier systems channel estimation. The former relies on orthogonal transformation to convert correlated variables into uncorrelated variables [174]. It reduces the dataset dimension to principal components while maximizing the variance. The latter focuses on separating diverse independent sources while keeping the dataset dimension.
Channel estimation using the I-MSPCA proposed in [217]
The PCA has been applied to OFDM and MIMO-OFDM systems [174, 215,216,217]. The approaches use PCA to find the principal components of the dataset for channel estimation purposes. The data is arranged in a matrix to calculate the eigenvectors of the covariance [215], with the greatest one defining the principal component eigenvector of the dataset used for channel estimation. An improved multi-scale PCA (I-MSPCA) is accomplished by combining a wavelet transform, as shown in Fig. 8 [217]. Wavelet decomposition is performed upon the received OFDM symbols to compute the covariance matrix of the wavelet coefficients, filtered by a threshold, and applied to the PCA algorithm. The computed principal components are passed to a cross-correlation block to correlate them with the received OFDM symbols. The most outstanding value of the maximum cross-correlation values is selected to define the principal component representing the CIR. The I-MSPCA was evaluated in a frequency-selective channel and performed better than the proposal in [215] and the traditional LS estimator.
A semi-blind method used the LS criteria within pilots to estimate the initial CIR and applied the PCA to track channel variations through a two-layer NN using an RLS variable step size [216]. A recent approach applied PCA as a dimensionality reduction transformation to create an ML synthesizer for CSI [174]. The PCA is used to assist in generating of artificial samples from a real voluminous dataset while preserving the information. This approach aims to support DL models that require a large amount of training data.
Similarly to the PCA, the ICA has been studied in the context of OFDM and MIMO-OFDM systems channel estimation [218,219,220,221]. The ICA application enables blind signal separation, supporting blind equalizers design based on iterative layered space-time equalization [218] or MMSE and layered space-frequency equalization to enhance the system performance [219]. The proposal combining wavelet transformation and ICA is presented in [220], which is similar to the approach discussed in [217] for PCA. In [221], a semi-blind channel estimation strategy integrates ICA with pilot carriers. The pilots allow obtaining initial channel estimation that serves as the input data to the ICA algorithm. Notably, ICA usage has leveraged blind and semi-blind estimators outperforming the MMSE estimator even when using perfect CSI [219, 220].
Complexity-wise, dimensionality reduction ML algorithms represent a simple and fast approach to assist the multicarrier systems decision block. Moreover, they accelerate the convergence of the decision block after enriching the training dataset. However, the decision block technique mainly impacts the overall systems computational complexity.
Bayesian learning algorithms rely on the Bayes theorem, where the a posteriori probability of a variable is conditional on the observed a priori probability of a known input variable [222]. The model is initialized based on the belief that the data is updated after the learning algorithm extracts information from it. Regarding the channel estimation area, Bayesian learning estimates the channel parameters upon the received signal observation [222, 223]. This method generates a model-based design approach, which is the concept of several works, including multicarrier systems. Furthermore, some works have recently addressed the Bayesian learning theory to enable iterative channel estimation in multicarrier systems [223,224,225,226]. These strategies count on joining the Bayes theorem to an iterative technique, which can be seen as a prior stage to ML Bayesian algorithm for channel estimation. Thus, a brief discussion of their findings is addressed.
The Bayesian clustered-sparse channel estimation (BCS-CE) method is applied to frequency-selective fading channels to exploit the cluster correlation in the training matrix, improving the estimation performance [223]. The BCS-CE was compared with traditional sparse channel estimators, with an LS estimator knowing the channel using the lower bound. The average MSE results show that the proposed estimator exceeds the traditional methods and converges to the lower bound for higher SNR. Despite that, its complexity is higher than that of traditional estimators. Besides, Bayesian learning has been joined to binary particle swarm optimization to afford pilots optimal design and channel estimation through the mutual incoherence property (MIP) criterion [224]. The proposed estimator was evaluated assuming 16 and 24 pilots optimally positioned according to the MIP criterion and compared with the 124 equidistant pilots applied to the LS estimator. The estimator outputs a better performance than the LS with 16 pilots, while the 24 pilot cases showed a better performance for higher SNR.
EM and Bayesian learning algorithms have also enhanced channel estimation on OFDM systems [225]. Bayesian learning allows the construction of a prior sparse signal model in which the EM algorithm updates the parameters. In [226], a joint model- and data-driven strategy is proposed to derive a training, theoretical interpretive, and flexible model. It is accomplished using a Gaussian mixture model adapted to evolve based on the stochastic behavior of the received signal [222, 223]. In addition, Bayesian learning allows for estimating the posterior distribution of the channel parameters.
Regarding Bayesian learning algorithms, optimal Bayesian estimation involves a heavy computational load [222, 223]. Hence, the proposal used supporting algorithms to enable the Bayes theorem-based channel estimator to infer the channel parameters [222]. However, those algorithms demanded a higher computational load than LS estimator, and an optimum pilot design is required to reduce the computational complexity [224, 225]. Indeed, Bayesian learning algorithms for channel estimation can outperform traditional methods at the cost of higher computational complexity [223, 226].
Neural network-aided channel estimation techniques
This section addresses NN applications in the channel estimation process. The NN schemes have been grouped into different sections considering their standard features or training methods. Surveys about this subject showed an effort to design DNN and optimize the volume of the training dataset. At the same time, recent approaches are rising to circumvent the training issue by building different networks and ML dimensionality reduction strategies.
Neural network concepts
Before going into the survey on the paper subject, it is essential to define some concepts related to NN to yield better comprehension of the discussion in the following sections. First, regarding artificial NN, the direct computation units are called neurons, arranged in a layer fashion to form the network [227]. The neurons are connected through structures defined as weights that scale the neuron input and alter the function computed at the neuron. Hence, the functions employ those weights as parameters to propagate the inputs to the outputs [227, 228]. Second, NN learning comes from the weight changing at each interaction based on external stimuli referred to as training sequences or datasets. Here, the learning process is classified as supervised, unsupervised, and reinforcement learning, with the definitions presented in Sect. 3 [227,228,229]. During the training process, the output provides feedback prediction errors that allow to adjust the weight in the NN according to the learning process to pursue a better prediction in the incoming iteration [227, 230].
The weighted input sum at each neuron is applied to an activation function or transfer function responsible for introducing nonlinear operations to the prediction process based on mathematical operations [229]. This function is essential to leverage NN learning through complex tasks. Despite the layer number, it breaks through simple linear mathematical iterations and avoids getting a linear regression model. The activation function might be linear or nonlinear. There are a set of types among nonlinear activation functions, such as the sigmoid or logistic, hyperbolic tangent, rectified linear unit (ReLU), Gaussian error linear unit (GELU), softmax, and so forth. These nonlinear activation functions present advantages and limitations, which are not in this work scope and are appropriately found in [227,228,229].
Perceptron basic structure
The architecture of a NN is related to the layer design fashion. Based on this principle, a NN primary architecture definition is classified as the single layer and multilayer. The single-layer NN comprises a set of weighted (\(w_1, w_2, \dots , w_n\)) inputs (\(x_1, x_2, \dots , x_n\)) directly mapped to the output through an activation function, as shown in Fig. 9. This structure is commonly referred to as perceptron [227]. In addition, the perceptron might have an input invariant to the prediction part, defined as a bias, which defines the activation threshold. The multilayer NN architecture integrates neurons layer—arranged in an input and output layer connected by single or multiple intermediate layers defined as a hidden layer. For instance, Fig. 10 shows a multilayer structure. Once again, the neurons in the hidden layer might also have a bias weight.
Finally, another essential aspect to discuss is the algorithm used for NN training. The training algorithms are related to the function applied to update the weights among the network layers, searching to boost the learning process at each iteration [227, 229]. There are two possible concepts defined: incremental or batch training. The former updates the weights immediately after each iteration, while batch training leverages the updating process after all the inputs are inserted in the NN [230]. While the error is computed considering the network output prediction and the output expectation, the training algorithms rely on the error back-propagation mechanism. In other words, the algorithm implements a set of steps to update the weight starting from the output layer in the direction of the input layer.
Notably, the NN families are classified according to their structural aspects, such as the number of hidden layers and neuron connections. Thus, this survey considered grouping the proposed approaches regarding the NN classification while introducing each type appropriately.
Back-propagation neural network
When back-propagation algorithms are used in NN training, back-propagation neural networks (BPNNs) are created [231]. The basic algorithm concept is backward propagating the network error from the output to the input layer and adjusting the weights to reduce the network error through the steepest descent approach. This BPNN is deployed to work with real-domain data. Since the CIR is a complex-type signal, the channel estimation is also a complex-valued process.
Multilayer neural network basic structure elements
General complex-valued BPNN for channel estimation [231, 232]
Complex-valued BPNNs have been designed based on three layers (input, hidden, and output) of NN for channel estimation purposes, as shown in Fig. 11 [231, 232]. The complex signal is decomposed into real and imaginary parts to feed-forward the network. At the end, the output is summed to compose the channel estimated sample.
The BPNN in Fig. 11 has been used for channel estimation and equalization in FBMC [233] and OFDM systems [231, 232, 234,235,236,237,238], employing supervised learning through training sequences. The number of used perceptrons is specific, according to the proposal. The BPNN performance has been assessed in terms of BER and MSE compared to other conventional channel estimation approaches. Concerning the MMSE, LS, and LMS methods, the BPNN has underperformed the former while it has outperformed the others [231, 232, 236].
Complexity-wise, BPNN shows less complexity than the MMSE algorithm, although underperforms it [232]. BPNN exhibited a loss of about 2 dB compared with MMSE for the 0 dB SNR scenario. BPNN was also tested against semi-blind channel estimation and presented \(96\%\)–\(97\%\) BER enhancement at the cost of an \(86\%\)–\(87\%\) increase in complexity [236]. In general, BPNN estimation approaches do not require complicated matrix computations, and the optimum result happens when the size of hidden neurons is almost equal to the channel length [231, 233,234,235, 237, 238].
Feed-forward neural network
The FFNN is characterized by presenting connections among the neurons, not forming a cycle, and depending on the same layer. The data flow between the input and output layers includes single or multiple hidden layers. When there is one single hidden layer, the FFNN is known as MLP. Linear operations are realized in each perceptron, and the result is applied to an activation function before perpetuating it to the adjacent layer. The use of a radial basis activation function defines the RBFN subgroup.
Complex-valued MLP network for channel estimation proposed in [239]
FFNNs have been applied to FBMC, OFDM, and MIMO-OFDM systems. The networks are data-driven and use an ABCE and ABCEx approach. The training process is supervised by issuing pilot sequences online or offline. Concerning the MLP, a channel estimation has been implemented for a preamble-based FBMC system using a complex-valued two-hidden layer NN, which is offline trained with simulated datasets [239]. Figure 12 shows the proposed network, where the ReLU and tanh nonlinear activation functions are used in the hidden and output layers, respectively.
Furthermore, the initial MLP network was modified by inserting an MSE loss function to update the network. The two proposals were evaluated in terms of BER, with lower rates than the traditional LS. The Levenberg–Marquardt training algorithm allowed the designing of a two-layer complex-valued MLP for OFDM systems [240]. Therefore, it was extended to a MIMO-OFDM system, where the training also considers the one-step secant strategy [241]. The performance analysis showed that MLP results in more diversity gain than the conventional channel estimation approaches.
Initially, RBFN estimators were implemented with a single hidden layer and analyzed against the LMS, MMSE, and zero-forcing (ZF), outperforming the referred estimators [242]. The network structure resembles the one shown in Fig. 11, considering the discussion context. RBFNs were tested for OFDM systems by exploring the channel correlation in the time and time–frequency domains [243, 244]. The former considers estimating the channel for each subcarrier independently through the network. The latter cooperatively estimates the channels at different subcarriers. The strategies performance has shown to be similar in terms of BER. Meanwhile, the one-dimensional RBFN has been compared with an interpolation RBFN using fewer pilot subcarriers as training inputs. This second approach offered lower BER than the first one.
Tracking channel fluctuations in pilot-aided OFDM systems operating in a boisterous environment using RBFN have been shown to work well compared to traditional interpolation approaches [245]. In parallel, a Gaussian radial basis function interpolation was applied for fast-fading channel estimation. The LS method treats the initial estimation, and the channel response estimation is assisted by the Gaussian one hidden layer RBFN [246, 247]. The proposed scheme was applied to comb-type OFDM systems for analysis purposes, generating lower MSE than the LS and other RBFNs estimators. Lately, RBFN has been applied to a coherent optical OFDM system to implement an RBFN-based nonlinear equalizer [248]. The network weights are updated based on a two-step process. First, a K-means clustering algorithm is used to adjust the hidden layer weights. Further, the least mean square algorithm updates the output layer weight. Finally, a Q-factor assessment has been performed to highlight the proposal results against other works, resulting in a 4-dB performance improvement.
The MIMO-OFDM channel estimation based on RBFN was evaluated in [245, 247,248,249,250,251,252,253,254]. The RBFN structure is replicated to each antenna branch connected to the input layer. Thus, N inputs are forward connected to the next layer to demodulate the signals. A semi-blind technique has been improved by updating the function iteration based on an RBFN [249]. Further, evolutionary algorithms (PSO and GA) were employed to enhance the network parameters. Despite the mixture of techniques, there was no comparison to the conventional estimator for assessment purposes. In [250], the RBFN estimates the initial values of the MIMO channel supporting the particle filter method, which drops out the need for more training pilots since it tracks the channel variation.
Furthermore, joining the channel estimation and signal detection was done using an RBFN optimized by a genetic algorithm [251]. The approach was close to the MMSE estimator in terms of BER. Cyclic delay diversity OFDM systems were also targeted to the RBFN, which was introduced to solve interpolation problems in an uneven-pilot-based system [252]. Meanwhile, the Gaussian radial basis function has been extended to the MIMO-OFDM scenarios to leverage RBFN solutions [253, 254]. The solutions have returned better performance than the LS and LMS estimator, close to the MLP network BER.
Regarding the complexity, FFNN adds computational latency while improving the BER. For example, the proposal in [240] contributes to a gain of 1.2 dB at \(10^{-3}\). Also, [241] concludes that a training data length of 16 symbols or more produces remarkable results and better performance than the conventional LS, meaning that a compromise between performance and computational complexity must be reached [242]. Interpolation RBFN-based techniques exhibit complexity and performance trade-offs [244, 246]. The ultimate complex estimation methods are proposed in [250, 251], which achieve optimal performance in terms of BER and spectral efficiency at the cost of higher computational complexity.
Extreme learning machine
ELM NN for channel estimation in MIMO-OFDM systems
An ELM is an FFNN based on fast learning and one-shot training, reducing the training time with low computational complexity. The weights are set through the Moore–Penrose generalized inverse matrix. This learning technique has been applied in the channel estimation field for OFDM and MIMO-OFDM systems. The evaluated ELM networks are a single-hidden layer with an implementation based on the AMBCE [255], ABCE [256,257,258,259,260,261,262,263,264], and ABCEx [265, 266] approaches. The referred works employ a network comprising p input and m output neurons, as shown in Fig. 13. These network variables have different meanings according to the system design. For instance, p is equal to the number of receiving antennas, while m is related to the number of transmitting antennas for MIMO-OFDM systems. The number of hidden layer neurons (l) defines the Moore–Penrose generalized inverse matrix dimension.
Applying real-valued ELM networks has exploited joint channel equalization and symbol detection [265, 266]. This scheme has two input layer neurons corresponding to the real and imaginary parts of the received symbol. In [265], the training process uses an LS solution, while the ELM algorithm in [266] employs pilot blocks. Complex-valued ELM schemes were also investigated for channel estimation with p equal to the training sequence length [256]. The online trained network has been evaluated in a nonlinear channel condition, overcoming the LS and MMSE estimator BER results. Furthermore, the network performs similarly to the scheme without nonlinearities. The nonlinear distortion has been carried out in [258] to enhance the performance of OFDM systems with insufficient CP. The offline trained network was deployed online using an initial LS estimator to obtain the features of the CFR.
A technique to reduce the number of training pilots was developed based on the ensemble learning theory [257]. This method generates and combines different models to find an optimal predictive model. The ensemble approach comprised weighted averaging and median of the ELM model predictions based on the training error and pruning generated models, including combinations thereof. The BER results demonstrated the proposal effectiveness with a lower rate than ELM schemes and a similar performance compared to the MMSE.
A semi-supervised ELM has been proposed to channel estimation and equalization for vehicle to vehicle communications [264]. The training phase considered taking the label data training length equal to the unlabeled dataset. Afterward, the system implementation applies an LS pre-equalization after the FFT is conducted, with the output delivered to the semi-supervised ELM. The evaluation has demonstrated BER performance close to the LS and other ELM-based estimators. However, the algorithm execution time has been the longest among the compared methods. On the other hand, an ELM-based equalizer for OFDM-based radio-over-fiber systems was evaluated in [263]. The authors proposed a multilayer generalized complex-valued ELM build circumventing the ELM algorithm expansion to achieve an ELM-autoencoder. The network evaluation has outperformed other ELM from the literature, while the authors claimed that the proposal increased the computational cost.
Regarding MIMO systems, a semi-blind channel estimation process based on ELM networks has outperformed the BPNN, MLP, and RBFN. The scheme encompasses estimating the CFR at the pilot subcarriers and applying it to the training of the real-valued network. In addition, an ELM scheme with training based on symbol construction is proposed in [259]. The approach reduced the training sequence length and kept the performance, providing a better estimation than the MMSE. Another attempt to reduce the training time has combined manifold learning with ELM. Manifold learning is a nonlinear dimensionality reduction technique grouped with the PCA and ICA schemes presented in Sect. 4. This approach has also outperformed the MMSE estimator.
Recently, an ELM-based detector has been founded on online training for pilot-assisted mMIMO-OFDM systems at the millimeter-wave [262]. The network resembles that shown in Fig. 13, with the pilots being applied to the online training to leverage post-symbol detection. The BER assessment highlighted the ELM network performance over the MMSE estimator. Despite that, a lack of evaluation among the ELM network solutions has been identified.
Complexity appraisal shows that complex-valued ELM can involve only one hidden layer, outperform offline DNN in terms of complexity and performance, and reduce the training time [256,257,258, 260, 266]. Furthermore, ELM complexity was investigated to require the same number of neurons in the hidden layers as the number of antennas at the base station (BS) to achieve higher spectral efficiency than linear mMIMO receivers [261]. An attempt to leverage unsupervised learning to an ELM has been shown to increase the computational time cost with no performance improvement [264]. Besides, an ELM-autoencoder solution has significantly improved performance with a high computational cost. In contrast to complex-valued ELM, real-valued ELM demands less computation than FFNN and complex-valued ELM due to real-domain values instead of complex domain ones [255].
Recurrent neural network
RNN architecture and work principle
The RNN consists of a network structure with one-step temporal dependence among the input data [267, 268]. The hidden layers receive the incoming information from the previous ones, and its output results through a feedback loop, as shown in Fig. 14. Consequently, it can learn over time in a cumulative process. Taking the unfolded example, the output at \(t-1\) feedback the input at time t, and the output at this current instant is provided as input at time \(t+1\). Thus, this NN learns not only from the incoming input but also by considering the influence of past information.
The RNN features are suitable for tackling time variations in channel estimation. This NN has been used to estimate channel response in OFDM, FBMC, and MIMO-OFDM systems [48, 267,268,269,270,271,272,273]. It has been deployed in an ABCE approach with supervised learning. The RNN was designed as a mapping function to assist pilot-aided OFDM systems [268]. The RNN was trained with the pilot subcarriers and then used to find the channel estimation at the data position. Lately, a bidirectional RNN has been proposed to enhance the system performance. A similar approach has been considered in training an RNN to provide signal recovery in an OFDM system operating under an interference environment. For instance, the network in [269] could predict 50 lost subcarriers based on channel estimation under severe interference with a root-mean-square error (RMSE) of 0.37065 and 0.24596 after 100 iterations and training epochs.
Moreover, the RNN was applied to track channel variations in MIMO-OFDM systems [267]. The proposal attempted to design an RNN for estimating channel response using signals with tightly coupled real and imaginary parts. Thus, a split-complex activation RNN was accomplished by allowing the network to learn to estimate the real and imaginary parts separately and combining them through the time average of the input information over a time window. The work has been improved by adding a self-organized map-based optimization to obtain a complex time delay fully RNN block for MIMO-OFDM systems [270]. The BER assessment has shown that the performance of the proposed network is close to the perfect CSI, superposing the MMSE estimator.
Besides, a SoftMax RNN using frequency index modulation was proposed to perform channel estimation on MIMO-OFDM systems [271]. The network provided lower BER values than the LS estimator and the ELM algorithm found in [256]. However, the comparison lacks an evaluation of the involved complexity. Reducing the ISI in MIMO-OFDM systems has been carried out by an Elman RNN for channel estimation [272]. The network evaluation has proved its application to channel estimation providing low PAPR and BER, with high capacity and throughput. The comparison included a convolutional neural network (CNN) and DNN, with the Elman RNN outperforming those networks. The RNN has also been used to design DNNs, such as the ChanEstNet DNN, which is later discussed [273]. However, the RNN performance has been recently evaluated in MIMO-OFDM systems [48].
LSTM unit cell detail
The channel estimation field has also investigated a derivation of the RNN called long short-term memory (LSTM). The LSTM is designed to yield good performance in long sequence approaches and solve the vanishing and exploding gradient issue in conventional RNNs [274, 275]. This network can obtain long-term dependencies calling for learning based on past extended sequence information. Figure 15 shows an LSTM unit cell composed of a forget, output, and input gate responsible for the data flow regulation inside the cell. The forget gate decides what kind of information is thrown away or included in the cell state based on observing the past state and the actual data. Therefore, the \(\sigma _f\) assumes values equal to 0 (throw away) or 1 (accept the information). The candidate cell allows storing certain information in the current cell state, scaling it by the \(\sigma _c\) value. According to the decided value, the input from the gate is added to the current state. Finally, the output gate imposes management on what is computed as the output value, considering that the cell state is scaled into the range -1 to 1.
The LSTM network has been combined with conventional RNN, CNN, and MLP networks [274,275,276,277]. The inherent imaginary interference channel estimation problem in FBMC systems was approached by combining a bidirectional LSTM and an RNN [274]. The network has worked well under fast time-varying scenarios and outperformed a DNN algorithm. Meanwhile, the LSTM was joined with a CNN to support channel estimation in time-varying scenarios for OFDM systems [275]. The CBR-Net (CNN batch normalization RNN) provided lower BER than the convectional estimator and other DNN architectures. A similar hybrid solution, the CNN-LSTM algorithm, achieved lower BER than other NN [276]. An MLP-LSTM network is found in [277], with the joint solution working well under high-mobility scenarios with a velocity of up to 150km/h. Recently, bidirectional LSTM network architectures have been raised to prove their performance on MIMO-OFDM systems [278,279,280]. The evaluation has confirmed the superposition of conventional estimators. In addition, the researchers have claimed low complexity due to using a DNN architecture to combine massive LSTM units, adding a bidirectional arrangement.
Furthermore, an extension of the LSTM concept is named gated recurrent network (GRU). It comprises a cell unit in which the input and output gates are replaced by an updating gate that controls the amount of information to be retained or updated. This network type has been used to design a data-driven model for channel estimation in an OFDM system applied to a fog radio scenario [281]. The performance comparison was addressed with the orthogonal matching pursuit channel estimation strategy, showing promising results. The GRU network performance was also investigated under the FBMC system [282] to deal with the inherent imaginary interference channel estimation problem. Resembling the bidirectional LSTM architecture, a GRU network called BiGRU has been proposed for a MIMO FBMC-OQAM system [283]. The training process is based on an offline stage followed by an online prediction. The BER assessment uses different time-varying channel models to face the BiGRU performance against the interference approximation channel estimation method, with an improvement in the FBMC system employing the former.
ESN architecture for [284] and [286]
An RNN with random connections among the neurons of the hidden layer is defined as an echo state network (ESN), with a network architecture as shown in Fig. 16. This network is typically designed in a single hidden layer called a reservoir. It stands for a NN that drops out of the training process through the back-propagation mechanism. The ESN has been recently investigated to leverage the channel estimation process in OFDM and MIMO-OFDM systems [284,285,286,287,288,289,290,291].
The ESN was used for channel estimation purposes [284]. First, the real and imaginary parts of the OFDM symbol are separated and delivered to two ESNs. After that, the network outputs were combined. Then, the ESN was supervised, trained, and analyzed based on comparing the desired results and those estimated, which leak from a performance analysis regarding system implementation. Moreover, an adaptive elastic ESN has been designed for channel estimation on IEEE 802.11ah systems employing the OFDM modulation [285]. The hybrid network architecture comprises an ESN and an adaptive elastic network. The latter has been added to handle ill-conditioned solutions of the LS and applied to obtain the frequency-domain CSI. The ill-conditioned solution rises from the collinearity problem in the input of the basic ESN model [285]. Therefore, the adaptive elastic network replaces the LS method to calculate the frequency-domain CSI. The results regarded the RMSE evaluation of adaptive elastic networks against auto-regression and support vector machine algorithms, highlighting the networks superior performance.
A three-layer estimator for the MIMO-OFDM system was designed considering a feature, enhancement, and output layer [286]. The feature layer comprised a pool of parallel ESNs connected with the enhancement layer by weights and biases. These layers extract feature information to feed the output layer, leveraging the channel estimation process. Besides, a supervised learning ESN has been proposed for nonlinear MIMO-OFDM systems for joint channel estimation and symbol detection, with BER results close but inferior to the LMMSE estimator [288, 289]. Thereafter, the symbol detection was based on a deep ESN, superposing the LMMSE estimator performance and showing results close to a shallow ESN [290]. Meanwhile, an ESN was designed to detect symbols using comb and scattered patterns in a standard LTE system with MIMO. The network evaluation has demonstrated superior performance over fewer pilots [291].
Complexity-wise, RNN leverages the training dataset to overcome other NNs trade-offs between accuracy and complexity. For example, they have been shown to require 218 epochs to achieve an average precision of \(96\%\), while MLP requires 326 epochs to achieve an average precision of \(94\%\) [267]. They also demand less computation due to low overhead using layers of simple matrix–vector multiplications and nonlinear activation functions [268]. However, DL-based RNN still has a challenging complexity, although its robustness can even estimate fast time-varying channels [274, 275, 277]. As a solution to reduce RNN intricacy, reservoir computing (RC) has been used to generate random synaptic weights [284,285,286,287,288,289,290,291].
Deep neural network
DNN architecture proposed in [292]
DNNs consist of multiple layers between the input and the output layers, as shown in Fig. 17 [23, 292, 293]. The multiple layers are hidden and can contain the same number of neurons or decrease towards the output layer. The layers are fully connected because each neuron is connected to all the neurons of the subsequent layer. The input value reaching a given neuron is the summation of the weighted output and bias values from the primary layer neurons. A given neuron output is a nonlinear activation function value such as the ReLU or the Sigmoid functions. Hence, the output sequences of the DNN are a cascaded nonlinear transformation of its input sequences.
The general DNN has been used for channel estimation for multicarrier systems [292, 294, 295]. For instance, a general DNN has been proposed to estimate CSI, allowing for joint channel estimation and symbol detection in an OFDM system with performance close to the MMSE estimator [292]. In [294], DNN is applied to the received signal to yield a less noisy signal and estimate the channel based on the generated signal. It has been shown that the proposed DNN channel estimator approaches MMSE estimation to within 1 dB. The authors in [295] have combined the conventional channel estimation technique for an OFDM receiver with a DNN to surpass MMSE estimation in terms of normalized MSE.
Researchers have proposed variations of the DNN for estimating the channel in multicarrier systems [293, 296,297,298]. A deep learning residual framework (ResNet) consisting of two short-connected layers and two fully connected hidden layers was used for channel estimation and equalization in FBMC/OQAM systems [293]. The ResNet uses a long real-valued sequence of a filtered frequency-domain complex sequence of the received signal as the training dataset. Accordingly, the channel estimation performance is better than the general DNN. Meanwhile, a DNN cascading with a zero-forcing preprocessor called Cascade-Net was proposed for detecting OFDM symbols, outperforming the zero-forcing method [296]. Model-driven DNN subnets, ComNet, replaced the usual OFDM channel estimation and symbol detection receiver blocks, surpassing general DNN by offering to refine inputs [297]. A variation of the ComNet receiver includes a compensating network called SwitchNet that outperforms the ComNet [298].
CNN architecture
DNN hidden layer with only a tiny portion of its neurons connected to the previous layer neurons is called the convolutional layer [299]. In addition, the convolutional layer neurons share the same parameters. General CNNs significantly reduce the total amount of training parameters, comprising an architecture with an input and convolution layer followed by a pooling set and fully connected layers until the output layer is reached, as shown in Fig. 18 [227, 228]. The convolution layer enables the gathering of local patterns upon the input data. Meanwhile, the pooling layers summarize the given information. This network region reduces the data dimensional space while retaining the original information. Thus, the classification stage is conducted by fully connected layers.
A CNN has been exploited to recover information from OFDM signals without relying on explicit DFT or IDFT computations and performed better than channel estimators based on linear MMSE [300]. In [299], the authors added a CNN between preprocessing modules to develop a CNN-based detector that adapts to large systems or wide bands. The authors in [301] have joined CNN and image super-resolution to create a channel estimation method that, after offline training, outperforms the MMSE estimator and can potentially save spectrum.
Joining CNN and DNN can boost channel estimation. The authors of [302] have proposed intelligent signal detection comprising DNN and CNN for OFDM with index modulation. The signal detector uses pilots to achieve semi-blind channel estimation and reconstructs the transmitted symbols based on CSI. In [303], a hybrid NN-based fading channel prediction has been designed by connecting CNN and DNN layers. The hybrid channel predictor aggregates robustness to systems operating over frequency-selective channels such as MIMO-OFDM. The authors in [273] have developed a channel estimation method for high-speed scenarios using a combination of CNN and RNN. The new network, ChanEstNet, extracts the channel response feature vectors for channel estimation, exhibiting low computational complexity compared to traditional channel estimation methods.
Regarding the complexity issue, DNNs depend on extensive training datasets and apply matrix multiplication between sequential layers. For example, the adaptive DNN complexity investigated in [295] is equivalent to the accurate LMMSE channel estimation scheme, but its performance is much better. To reduce DNN complexity, the authors in [294] have combined the deep image prior (DIP) model, diminishing the training overhead and only needing pilot symbols during channel estimation. Also, a sliding structure based on the signal-to-interference power has been designed for computational complexity reduction compared to a single deep detection network [296]. Furthermore, by splitting the receiver into different subnets, DNNs demand less memory and computation than LMMSE-MMSE methods [297,298,299]. Instead of reducing the DNN-aided detector complexity, some researchers have traded it for better capabilities. For instance, the complexity has been swapped for the ability to replace DFT with a linear transformation [300]. Finally, merging LSTM and CNN creates a hybrid network that was shown to be able to predict channel characteristics [273].
Autoencoder-aided end-to-end systems
Autoencoder architecture
Autoencoders apply unsupervised learning to replace an end-to-end communication system. Hence, from the block-structure communication system point of view, autoencoders substitute the whole structure composed of the serial-to-parallel converter, lookup table, modulator, detector, symbol estimation, parallel-to-serial converter, and so forth. Autoencoders take advantage of the input data statistics to communicate them through the channel so that the fewest possible data is sent. Still, it allows the receiver to understand the input data completely [304]. Autoencoders reconstruct the input data through a series of latent representations, typically using an MMSE objective and a stochastic gradient descent (SGD) solver to find the network weights, achieving a practical regression [305]. Figure 19 depicts a general autoencoder architecture, which is taken as the basis for autoencoder systems implementation in the following discussion.
DNN and CNN are used to construct autoencoders. On the transmitter side, they learn the mapping from bits to waveforms. At the receiver side, they learn the synchronization, parameter estimation, and demapping from waveforms to bits. Some channel impairments are considered to train the autoencoder: noise, time and rate of the signal arrival, carrier frequency, phase offset, and the received signal delay spread [305]. Although it may seem that an extensive dataset is required for training autoencoders, they usually require a tiny portion of the code space, the ratio being even \(2.9387359 \times 10^{-34}\). Thus, autoencoders contribute to the used resources [306]. The trained autoencoder results in a transmit and receive signal that resembles those of MCM communication systems.
The end-to-end autoencoder-based communication system can compete with mature systems such as OFDM, FBMC, GFDM, and UFMC without any prior mathematical modeling or analysis [307, 308]. In [307], the DNN and CNN-based autoencoder of [305] has been enhanced to deal with synchronization and ISI. For synchronization, an introduced NN is responsible for separating the infinite sequence of the received samples into different probable block groups and estimating each group probability. For ISI, during the training phase, the autoencoder assumes the received messages present ISI interference in learning to solve this impairment. The enhanced autoencoder has been tested against real channels and demonstrated a performance 2 dB worse than that of the MMSE method. In [308], the proposed DNN-based autoencoder exhibited fast convergence when operating over an aggressive Rayleigh fading channel. The autoencoder transmitter and receiver parts were alternatively trained until the loss stopped decreasing. The authors claimed that the autoencoder could be applied to any channel without analysis.
Instead of competing with well-established MCM systems, autoencoders can be combined with them, bringing more reliability [309, 310]. DNN-based autoencoders have been proposed to mitigate synchronization errors and simplify equalization over multipath channels [309]. The proposed model has also shown flexibility regarding imprecise knowledge about the channel and reduced complexity compared to conventional OFDM systems. The authors in [310] have combined autoencoders to an OFDM under single-bit quantization. The OFDM data detection loss under that constraint was reduced using an unsupervised autoencoder, competing with unquantized OFDM at SNR values smaller than 6 dB.
Autoencoders have also been compared with MIMO systems [311, 312]. The authors in [311] have obtained an autoencoder that outperforms Alamouti space-time block code (STBC) [313] operating over the Rayleigh fading channel for SNR values greater than 15 dB. It is considered perfectly known, quantized, and none CSI information scenarios. The optimum autoencoder was achieved using NN-based regression, considering channel estimation on both the transmitter and receiver sides. In [312], the authors combined autoencoders and ELM and proposed a novel detection scheme for MIMO-OFDM. In this approach, the autoencoders refine the input data before transmitting it and ELM is employed to classify the received signal based on regular features. The BER performance of the novel MIMO-OFDM detector is similar to the maximum-likelihood detection (MLD).
The extension of MIMO, mMIMO, has also been targeted to use autoencoders. The proposed network in [314] employs CNN to learn the channel structure effectively from training samples to recover CSI even in low compression regions. This autoencoder is mainly investigated for multicarrier systems where the BS receives the CSI from the users. The autoencoder can transform the channel matrix into a shorter-dimensional vector and vice versa. Even though executing new sensing and recovery mechanism beats existing compressive sensing-based methods, the authors claimed it could be enhanced by applying advanced DL strategies.
In terms of complexity, autoencoders require a large dataset for training and to reach the optimum solution, thus resulting in a trade-off between performance and computation. Some works have addressed power demand reduction as the attractiveness of their proposed method. For example, tensor-based processing can reduce power requirements by lowering clock rates, increasing algorithm concurrency, and adapting, as pointed out in [305]. The PAPR could also be reduced using a network based on an autoencoder architecture of DL [306, 309]. Other works implement different training strategies to reduce the intrinsic trade-off between the performance and computation of autoencoders [308, 310]. For example, in [307], the authors have used a two-phase training: the architecture is trained with simulated channels in the first phase, and the receiver is fine-tuned over realistic channels in the second phase.
Other neural networks
Generative adversarial network (GAN) [315,316,317,318], general regression neural network (GRNN) [319, 320], and fuzzy neural network (FNN) [321, 322] have also been investigated in the channel estimation subject. Likewise, the least mean error [323], meta-learning [324], k-means clustering [325], and LS [326] techniques were applied to leverage NN training. Regarding these training techniques, the survey has shown that ML might also be an interesting approach to overcoming the voluminous training dataset problems in DNN.
Generative adversarial network
GAN working principles
A GAN comprises two networks: generative and adversarial networks. These networks operate competitively, as shown in Fig. 20. The generative network aims to retrieve the original information utilizing training. On the other hand, the adversarial network discriminates the incoming labeled fake samples of the first network by comparing them with accurate data. In other words, the adversarial must learn to recognize false and true patterns and the generative to deceive the former. In this way, the generative network is later trained to fool the adversarial network by passing through samples as true [316].
This concept was applied to reshape the ResEsNet [315, 327] by considering the channel response with known pilot positions as a low-resolution image. Thus, the GAN was applied to estimate the CSI in a super-resolution approach. First, the generator comprises convolution layers and residual blocks with pre-residual activation units. Then, batch normalization is applied to the beginning and the end to map/remap the data to the scale model. Finally, the fake samples feed the discriminator, also formed by convolution, batch normalization, and Leaky ReLU layers [315]. The super-resolution GAN has outperformed the ResEsNet estimation while presenting better performance than the LMMSE estimator. Furthermore, a GAN-based channel estimation approach was proposed for high-speed mobile scenarios [317]. The method goal was to reduce the complexity of the channel estimation process by training a discriminator to learn and extract channel time-varying features. After, the generator acts upon the samples to generate and restore the channel information.
The GAN approach has also been modeled to reduce the number of pilots in MIMO-OFDM and OFDM systems [316, 318]. The first network proposal exploited the generative network to learn how to produce channel samples based on training on real data [316]. After that, the trained model was used to get current channel samples according to the received signal. The results have been compared with a supervised learning ResNet mode, exhibiting better performance. However, it could not overcome the LMMSE estimator. Meanwhile, the GAN has been devoted to mapping low-dimensional channel space into a high-dimensional one, reducing the pilots number in an OFDM system [318]. As a result, the designed network could track the CIR at different channels after training, outperforming the LMMSE and ChannelNet estimators.
General regression neural network
GRNN architecture
The GRNN has been proposed as an enhanced version of the RBFN founded on nonparametric regression [319, 320, 328]. The network falls into the probabilistic NN category. The GRNN architecture comprises four layers known as the input, pattern, summation, and output layers, as shown in Fig. 21. The former and the latter are classical structures of NN architecture. The pattern layer is the single learning layer of the network and it is fully connected with the neurons of the input layer [328]. The pattern output is fully connected to the s-summation and the d-summation neurons of the summation layer. In contrast, the former computes the weighted sum from the previous layer and the latter the unweighted values. Thereafter, the output layer divides the s-summation results by the d-summation.
This neural network approach has been applied in channel estimation using partial CSI information obtained from data-aided decision feedback channel estimation, showing more accurate interpolation results [319, 320, 329]. The network structure has four layers: input, pattern, summation, and output layer. The pattern layer includes the radius of the radial basis function that can control the smoothness level of the regression results. The summation layer sums the neuron pattern outputs by multiplying them by the desired results and, after, by their own, which are further combined in the output layer. This network was first applied to time-domain [319] smoothness and extended to a frequency-domain strategy [320]. The latter has outperformed the former and conventional pilot-aided channel estimation.
Fuzzy neural network
The fuzzy logic was applied to leverage a fuzzy controller to periodically adjust the step size in an LMS algorithm for OFDM systems [321]. The results showed a faster convergence and robust tracking of channel variations when compared with the LMS under different channel conditions. Furthermore, a functional link FNN estimator was developed [322]. The network comprises a functional link NN integrated with fuzzy rules, whereas each one is a sub-functional link NN with a function expansion of input variables. The network performance was close to the MMSE estimator.
Reduction training techniques for neural networks
Regarding the training approach, the least mean error algorithm was applied to a NN with two sub-networks to identify amplitude gain and phase variation [323]. Moreover, the LS algorithm was integrated into a black box NN [326]. The process uses the LS to estimate the channel at the pilot subcarriers, then apply it to the network to predict the channel response at the data subcarriers. This approach might be seen as an ML interpolation strategy with results similar to the MMSE, and some other discussed NN. A proposed similar channel estimation method using a multiple variable regression approach to design an ML algorithm that does not require any initial information or statistics about the channel is found in [330]. It uses the SGD algorithm for parameter optimization purposes. This proposal has been compared with the LS and MMSE estimators, outperforming the conventional estimator while providing performance similar to the perfect estimation.
The K-means clustering algorithm was proposed to support a semi-blind channel estimator for cell-free mMIMO [325]. The algorithm allows clustering of the received signal to optimize the channel estimation process. In the meantime, the meta-learning has been exploited in a two-stage method named robust channel estimation with meta-learning neural networks (RoemNet) for OFDM symbols [324]. The proposed network can learn general characteristics from multiple channels, gathering meta-knowledge for training purposes. Furthermore, this approach allows applying the RoemNet to different unknown channels and fast refinement of its weights by using a few pilot symbols through the meta-update process. The RoemNet performance has proved its ability to learn and better estimate the channel with a few pilots, outperforming the MMSE estimator. However, the increase in the pilots quantity leads to similar results. Also, it was shown that with 8 pilot-long sequences, training the RoemNet yields lower BER than the LS estimator with 128 pilot-long sequences.
Regarding the complexity, GANs can reduce it during training while improving the performance compared with residual NN [315, 316]. Additionally, the GAN-based estimation proposed in [316] does not require retraining, even if the number of clusters and rays changes considerably, and lowers the number of necessary pilot tones. Complexity-wise, the network approaches in [316, 318] have the lowest value compared to the LS and LMMSE. Meanwhile, the network algorithm complexity of [318] was compared with the MMSE estimator, resulting in a linear and cubic relationship with the number of pilots, respectively. FNN could not reduce the complexity of well-known estimators while improving performance [322]. In [321], the used FNN showed a steeper learning curve than MSE but increased the computation load slightly. GRNN demanded only 0.0534ms of processing time for channel estimation at SNR, such as 30 dB to achieve a BER of \(1.2 \times 10^{-4}\), as an example of its computational complexity [319]. However, it kept the trade-off between performance and complexity, requiring 0.4206ms to reach a BER of \(1.8 \times 10^{-5}\). GRNN could reduce this trade-off for other NN-based estimation methods. For example, GRNN replaces ANN in [320] to eliminate the iterative training process and diminish the computational complexity as the BER decreases. Other techniques, such as least mean error, meta-learning, k-means clustering, and LS, focus on reducing the training overhead to demand less computation.
Reinforcement learning working problem
Reinforcement learning is a training approach, as mentioned in Sect. 3, that defines an emerging branch of ML. The algorithms under this classification learn from the reward maximization hypothesis principle [331]. An agent executes actions in an environment that has its states modified over time [332], as shown in Fig. 22. The state-changing according to taken actions results in a reward or a penalty for the agent. The algorithm must establish a strategy, also known as policy, to define actions to achieve a specific goal and maximize the expected cumulative reward. The environment is commonly molded as a Markov decision process (MDP) that describes the agent sequence of actions, the present reward, and the future state and reward [331, 333]. The Q-learning approach releases quality rather than optimal learning [332, 333]. Unlike other ML algorithms, reinforcement learning succeeds without an explicit training process, learning through a mix of exploration and exploitation of the environment in a trial-and-error manner.
The investigation of channel estimation from the multicarrier systems perspective has been addressed for OFDM schemes. A model-free Q-learning technique is applied to select the best CIR predictor [332]. First, the CIR prediction is constructed using an adaptive RLS estimator without pilot signals. The RLS estimator predicts one or more future CIR block coefficients using previously estimated ones. Then, the agent interacts with the algorithm (environment) to enable dynamic reinforcement learning in this context. The results have shown the dominance of the Q-learning-based estimator over the conventional RLS. Besides, a denoising method for channel estimation in MIMO-OFDM systems has been modeled as an MDP based on channel curvature computation [333]. The channel curvature allows for identifying the unreliable estimation for the future MDP. The reward function is defined to reduce the MSE. Finally, Q-learning is used for the channel estimation process. This estimator has shown better results related to the LS estimator and poor BER values when faced with the MMSE estimator performance.
Combining DNN to approximate the strategy (i.e., the policy), and MDP, deep reinforcement learning (DRL) algorithms arise [334,335,336]. Compared with reinforcement learning, weights of the DNN are used as extra input parameters, and the SGD optimizer is employed to update the weights. Although DRL might yield instability and divergence, their recent upgrades, deep Q-network [337] and AlphaGo [337], have been able to represent the environment even with high-dimensional sensory inputs, e.g., pixels of an image. Those two developments were based on games, the former achieving a level comparable to that of a professional human gamer across a set of 49 games of Atari 2600 [338] and the latter defeating a human professional player in the full-sized game of Go for the first time.
A few attempts have been addressed regarding the channel estimation for multicarrier systems [339, 340]. Double deep Q learning (DDQL) has been proposed for channel estimation in industrial wireless networks as an alternative to DNN and Q-learning approaches [339]. It aims to circumvent the DNN long data sequence training while eliminating the overestimation of action values of the Q-learning. Therefore, the DDQL has been exploited for channel estimation to adapt to the Rician channel model for the dynamic industrial wireless network. The DDQL comprised five hidden layers of fully connected neurons with tanh activation functions, and a linear activation layer on the output. Lately, the DDQL performance has been compared against some MMSE-based estimators. The authors' proposal estimates the channel better than the other estimators, except for ideal MMSE [339].
The pilot contamination problem in mMIMO systems was addressed using DRL to leverage a pilot assignment strategy to adapt to the channel variations and keep a modest pilot contamination effect [340]. First, the system model was considered an OFDM system; consequently, the sub-channel was assessed as a flat-fading model. Next, the reward was modeled as a cost function based on the user's angle of arrival information. The channel characteristic and the maximum cost function allowed to define the states, actions, and rewards. Thereafter, the agent learned the pilot assignment policy, adapting them to the channel variations to minimize the cost function. Finally, the DRL was leveraged based on a six-hidden layer deep residual network (ResNet) as a Q-neural network (QNN). The proposed DRL results have demonstrated lower system overhead against other approaches, such as soft pilot reuse (SPR) [340].
Concerning complexity, the training dataset does not need to be labeled, making reinforcement learning practical and adaptive to time-varying channels. However, those advantages and other issues may increase the complexity when combining reinforcement learning and channel estimation, as shown in [332]. For example, dominant CIR tap index identification adds to the overall load computation. Regardless, other solutions succeeded in reducing the complexity, as the strategy proposed in [333] is based on the frequency domain instead of the time domain and reduces the requirement to perform DFT. Meanwhile, DRL has been raised as an approach to reduce the complexity of DNN in channel estimation, comprising a field of opportunities in the context of multicarrier systems and their extension to MIMO schemes.
Discussions and research directions
This section aims to nourish a discussion about how AI-aided channel estimation strategies have been proposed for multicarrier systems, highlighting some learned lessons. Afterward, future research directions based on recent findings are pointed out. Therefore, classical ML, NNs, and RL are carried out in the sense of how the plethora of works has modeled them to leverage channel estimation in multicarrier system scenarios. At the same time, how those works have striven to improve the results against standalone conventional channel estimation techniques, such as blind, data-aided, decision-directed, and semi-blind estimators.
Regarding classical ML, regression algorithms have been joined with conventional estimators, such as pilot-assisted iterative channel estimation, an LS estimator, and normalized MSE estimator. Meanwhile, the estimator block-type structures were preserved based on the AMBCE approach with supervised learning. The regression algorithms mainly enhanced the interpolation process in data-aided methods. Therefore, the channel was first estimated through data-aided schemes and then delivered to the trained algorithms to estimate the channel at the data subcarriers. Under this assumption, OFDM and MIMO-OFDM were the system models to apply the estimators based on regression techniques. Some target channels were the fast time-varying, highly selective, and doubly selective fading environment.
Recently, the research on estimators based on regression algorithms has been scarce, which is understood due to the growth of NNs and RL solutions. However, some research has accomplished promising results regarding the SVR for OFDM and MIMO-OFDM systems [172, 173]. Therefore, it is a research direction to apply these estimators for OFDM variations or other multicarrier systems, extending them to MIMO schemes. In addition, joint regression algorithms with blind and semi-blind estimators is an open and remotely exploited field, which may pursue methods that do not require the information of the channel statistics [190].
Evolutionary algorithms are mainly exploited for channel estimation in OFDM and MIMO-OFDM systems in the sense of the GA, RWBS, and PSO. The GA has conceived means to design estimators based on AMBCE systems implementation. Some approaches replace the interpolation process to aid a pilot-aided channel estimation scheme, while others leverage a blind channel estimator based on GA. Beyond that, combining the LS and MMSE estimators has also been accomplished by using GA. At the same time, the RWBS-based algorithms were devoted to aid a combination between channel estimation block with multiuser and data detection functions, comprising an ABCEx approach. Likewise, PSO allowed for joint channel estimation and decoding MIMO-OFDM systems while also being used to enhance iterative estimator performance. These evolutionary algorithms may also be extended to other multicarrier systems and their variants to unveil their potential and validate them against other classical ML techniques, NNs, and RL. Therein, computational complexity and processing time may be assessed along with an MMSE performance comparison.
Bayesian learning has been recently considered for channel estimation purposes in OFDM and MIMO-OFDM systems addressing sparse channels [222,223,224,225,226]. However, the approaches stand mostly for model-based design, with some works including a joint model- and data-driven strategy. On the other hand, combining Bayesian learning with PSO has also been accomplished to join pilots optimal design and channel estimation [224]. Alike, the Bayesian learning-based channel estimator performance has only faced the conventional estimators, outperforming them at the cost of higher computational complexity. Hence, comparing the Bayesian learning performance with those of NN and RL algorithms under the same channel assumptions is necessary to validate its computational complexity since some works claimed their proposal was close to the MMSE estimator [226].
Taking NNs into account, they have been mainly employed using AMBCE and ABCE approaches. In other words, they have been used to aid the channel estimation block or to replace it. Furthermore, they have proved able to assist with semi-blind and data-aided channel estimation techniques, even in scenarios involving only a few pilots [244]. Also, the inputs used for training and estimation can be complex- or real-valued symbols. Overall, these capabilities rendered NN adaptive to fast-fading, high mobility, and vehicular-to-vehicular communication cases.
This survey has provided several different NN models utilized for channel estimation in multicarrier systems. They were discriminated between the hidden layer structures and training methods. Extensive work was found using the following NN models: BPNN, FFNN, ELM, RNN, DNN, autoencoders, GAN, GRNN, and FNN. In general, they all exhibited more complexity than classical learning AI algorithms and accompanied a trade-off with the performance. Hence, strategies to overcome this impairment are welcomed and represent a research direction. Specifically, some NNs lack complexity analysis, such as the joint FFNN and GA and ELM for mMIMO-OFDM. Also, a complexity comparison between RNN and ELM would enrich this topic. These issues are all considered an open field to be investigated. In addition, NNs have been implemented to estimate the channel in the following multicarrier systems: OFDM, FBMC, and MIMO-OFDM. However, only ELM and autoencoders have been used with other communication systems, such as vehicular-to-vehicular, OFDM-based radio-over-fiber, mMIMO, GFDM, UFMC, and MIMO. This last observation signifies another open area for investigation, which would help consolidate NN applicability for channel estimation in multicarrier systems.
Regarding the RL, this survey points out its usage only to enable channel estimation in OFDM and MIMO-OFDM systems. A few approaches have been considered using an AMBCE or an ABCE design. RL estimators relied on model-free Q-learning by exploiting a highly mobile and dynamic propagation environment. In the meantime, DRL was proposed based on DDQL, aiming at avoiding the DNN long data sequence training and the overestimation action values of the Q-learning. In addition, a deep residual network (ResNet) was also used as a QNN to accomplish channel estimation in the mMIMO-OFDM system.
Although RL channel estimation handled OFDM and MIMO systems, this survey concludes that the RL application for channel estimation in multicarrier systems still indicates an open research field, barely exploited. Hence, there are opportunities to address its variation along with other multicarrier systems, including a performance comparison with the MMSE and other NN-based estimators. Note that there are recent surveys devoted to investigating the RL usage with MIMO systems that have also confirmed the lack of work in the channel estimation field [341].
Besides estimating, iteration and human brain-inspired networks are capable of predicting and equalizing the channel in multicarrier systems. They all approximate the MMSE estimation. Indeed, most works compared the respective AI-aided channel estimation technique performance with the MMSE estimation [178, 231, 241, 294]. Although comparisons might consider other estimators, the MMSE estimator is the most popular due to its performance in minimizing the mean error. Some iterative methods also depend on the channel model probabilistic knowledge to exhibit a suitable performance. Neither NN-based channel estimation techniques nor the RL approach relied on those probabilistic models. Therefore, those learning algorithms can be better candidates for channel estimation in complex and fluctuating environments.
Regarding RL, there is a wide-open field on applying this strategy to MCM to evaluate its performance and complexity over the NN. The latter estimators are the best option when the multicarrier system operates in a hard-to-model channel or when the goal is to provide a less human-dependent channel estimation method. Also, they are more capable of imitating real-world data. On the other hand, RL techniques leverage training by exploiting the environment in a trial-and-error manner, eliminating the need for explicit training processes and labeled datasets.
Combining different ML algorithms can outperform the strategies that use only one of them. They can be employed to work so that one algorithm treats the incoming data and provides the new input to another or one controls the overall multicarrier system working instead. Configuring the algorithms so that one feeds the another can reduce the processing time required by only one algorithm [303]. On the other hand, controlling the system means allocating power, pilots, and other resources [299]. Combination of different ML algorithms for channel estimation in multicarrier systems remains a wide-open field that could unravel new strategies and models to solve such an issue.
A common trade-off among ML algorithms is that the estimation accuracy increases at the cost of the training dataset expansion, which increases the computational complexity. These learning strategies call attention to the necessity of a large number of training samples to achieve an approximate MMSE estimator's performance. However, note that after training, the AI model can be less complex than the MMSE, which requires regular estimates of channel parameters (e.g., noise variance). However, dimensionality reduction [174], Bayesian learning [223], ELM networks [257], k-mean clustering [325], meta-learning [324], LS-based ML algorithms [326], and DNN [296] have been investigated as solution candidates for reducing training sequences. Further research can consider reducing the computational work required by each AI-aided channel estimation. Moreover, combining the ELM network with distinct dimensionality reduction techniques is missing investigation.
Although conventional OFDM presents some drawbacks, it is still largely used as the primary multicarrier system to assess AI-aided channel estimation techniques' performance. OFDM variations remain multicarrier systems that haven't been densely investigated yet for AI-aided channel estimation techniques. Therefore, applying the methods presented throughout this survey for the OFDM variations can lead to discoveries that can result in mature versions of the aforementioned AI-aided channel estimation strategies. The performance of the AI-aided channel estimation approaches employed by conventional OFDM can be compared with one of the OFDM variations. The performance comparisons might include the MMSE estimation, but other metrics, such as computational complexity, processing time, and manufacturing cost, can be analyzed.
Multicarrier systems employing variations of OFDM, FBMC, GFDM, and UFMC can also be used for testing more AI channel estimation methods. New performance results can be obtained, and even better multicarrier systems can be designed. Simpler models can arise by investigating the combination between OFDM variations and AI mechanisms created to solve the same drawbacks as the OFDM variations. Different AI models can join FBMC to improve spectral efficiency or reduce its intrinsic high PAPR. AI can control the allocation of pilots or reduce the ICI sensibility of the GFDM and UFMC multicarrier systems.
Finally, channels can be better explored in multicarrier systems with AI-aided channel estimation. Impulsive noise, flexible short-term fading, arbitrarily correlated short-term fading, shadowed fading, arbitrarily correlated shadowed fading, and cascaded fading channels can be used as different fluctuating environments. They can bring more robustness to the AI-aided channel estimation methods or help address limitations. For example, the needed number of input parameters can indicate a drawback or impact the processing time when the multicarrier system operates in a more aggressive environment.
This paper extensively investigates AI applications for estimating the channel for MCM systems. Previous surveys on the same subject have been reviewed, but only a few have addressed AI usage in estimating the channel. In addition, most of them have been devoted to analyze OFDM and mMIMO-OFDM systems. Therefore, the present survey first contribution was detailing AI techniques used for channel estimation in MCM systems. Generally, the following families of AI methods have been presented: classical learning, neural networks, and reinforcement learning. Specifically, the following AI models have been described: regression, evolutionary algorithm, dimensionality reduction, Bayesian learning, FFNN, ELM, RNN, DNN, CNN, RBFN, autoencoders, GAN, FNN, GRNN, and Q-learning. The survey second contribution was to carry use-case examples of AI for channel estimation in MCM systems that do not include OFDM but others, such as FBMC, GFDM, UFMC, STBC, MIMO-OFDM, FBMC-OQAM, and mMIMO-OFDM. A third contribution encompassed collecting conventional channel estimation techniques for MCM systems, such as non-blind, semi-blind, and blind techniques. Lastly, this survey points out open issues and highlights future research topics that can help evolve the channel estimation for not only MCM communication systems but also single-carrier communication systems. Due to the immense number of references herein, the paper main contribution is to serve as a basis for guiding researchers about the current development and opening for new and enhancement works of AI-aided channel estimators for MCM communication systems.
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
5G NR: Architecture, Technology, Implementation, and Operation of 3GPP New Radio Standards
O.E. Ijiga, O.O. Ogundile, A.D. Familua, D.J.J. Versfeld, Review of channel estimation for candidate waveforms of next generation networks. Electronics (2019). https://doi.org/10.3390/electronics8090956
L. Jiang, H. Zhang, S. Cheng, H. Lv, P. Li, An overview of FIR filter design in future multicarrier communication systems. Electronics (2020). https://doi.org/10.3390/electronics9040599
A. Racz, A. Temesvary, N. Reider, Handover Performance in 3GPP Long Term Evolution (LTE) Systems, in 2007 16th IST Mobile and Wireless Communications Summit (2007), pp. 1–5. https://doi.org/10.1109/ISTMWC.2007.4299068
N. Shaik, P.K. Malik, A comprehensive survey 5G wireless communication systems: open issues, research challenges, channel estimation, multi carrier modulation and 5G applications. Multimed. Tools Appl. 80, 28789–28827 (2021). https://doi.org/10.1007/s11042-021-11128-z
S. Research, 6G: The Next Hyper-connected Experience for All, Technical report (2020)
A. Sahin, R. Yang, E. Bala, M.C. Beluri, R.L. Olesen, Flexible DFT-S-OFDM: solutions and challenges. IEEE Commun. Mag. 54(11), 106–112 (2016). https://doi.org/10.1109/MCOM.2016.1600330CM
G. Berardinelli, K.I. Pedersen, T.B. Sorensen, P. Mogensen, Generalized DFT-spread-OFDM as 5G waveform. IEEE Commun. Mag. 54(11), 99–105 (2016). https://doi.org/10.1109/MCOM.2016.1600313CM
B. Farhang-Boroujeny, OFDM versus filter bank multicarrier. IEEE Signal Process. Mag. 28(3), 92–112 (2011). https://doi.org/10.1109/MSP.2011.940267
K. Choi, Alamouti coding for DFT spreading-based low PAPR FBMC. IEEE Trans. Wirel. Commun. 18(2), 926–941 (2019). https://doi.org/10.1109/TWC.2018.2886347
B. Farhang-Boroujeny, Filter bank multicarrier modulation: a waveform candidate for 5G and beyond. IEEE Signal Process. Mag. 2014, 1–26 (2014). https://doi.org/10.1155/2014/482805
C.-L. Tai, T.-H. Wang, Y.-H. Huang, An overview of generalized frequency division multiplexing (GFDM). ArXiv abs/2008.08947 (2020)
Z. Guo, Q. Liu, W. Zhang, S. Wang, Low complexity implementation of universal filtered multi-carrier transmitter. IEEE Access 8, 24799–24807 (2020). https://doi.org/10.1109/ACCESS.2020.2970727
L. Zhang, A. Ijaz, P. Xiao, K. Wang, D. Qiao, M.A. Imran, Optimal filter length and zero padding length design for universal filtered multi-carrier (UFMC) system. IEEE Access 7, 21687–21701 (2019). https://doi.org/10.1109/ACCESS.2019.2898322
Y.-Y. Wang, C.-A. Lai, On the cfo estimation of the ofdm: a frequency domain approach. J. Franklin Inst. 351(5), 2489–2503 (2014)
Article MATH Google Scholar
V. Savaux, Y. Louet, LMMSE channel estimation in OFDM context: a review. IET Signal Proc. 11(2), 123–134 (2017). https://doi.org/10.1049/iet-spr.2016.0185
Y. Liu, Z. Tan, H. Hu, L.J. Cimini, G.Y. Li, Channel estimation for OFDM. IEEE Commun. Surv. Tutor. 16(4), 1891–1908 (2014). https://doi.org/10.1109/COMST.2014.2320074
F.A. Dietrich, W. Utschick, Pilot-assisted channel estimation based on second-order statistics. IEEE Trans. Signal Process. 53(3), 1178–1193 (2005). https://doi.org/10.1109/TSP.2004.842176
Article MathSciNet MATH Google Scholar
M.K. Ozdemir, H. Arslan, Channel estimation for wireless OFDM systems. IEEE Commun. Surv. Tutor. 9(2), 18–48 (2007). https://doi.org/10.1109/COMST.2007.382406
O.O. Oyerinde, S.H. Mneney, Review of channel estimation for wireless communication systems. J. Theor. Appl. Inf. Technol. 29(4), 282–298 (2012)
R. Shafin, L. Liu, V. Chandrasekhar, H. Chen, J. Reed, J.C. Zhang, Artificial intelligence-enabled cellular networks: a critical path to beyond-5G and 6G. IEEE Wirel. Commun. 27(2), 212–217 (2020). https://doi.org/10.1109/MWC.001.1900323
S. Zhang, J. Liu, T.K. Rodrigues, N. Kato, Deep learning techniques for advancing 6G communications in the physical layer. IEEE Wirel. Commun. (2021). https://doi.org/10.1109/MWC.001.2000516
H. Huang, S. Guo, G. Gui, Z. Yang, J. Zhang, H. Sari, F. Adachi, Deep learning for physical-layer 5G wireless techniques: opportunities, challenges and solutions. IEEE Wirel. Commun. 27(1), 214–222 (2020). https://doi.org/10.1109/MWC.2019.1900027
Q. Hu, F. Gao, H. Zhang, S. Jin, G.Y. Li, Deep learning for channel estimation: interpretation, performance, and comparison. IEEE Trans. Wirel. Commun. 20(4), 2398–2412 (2021). https://doi.org/10.1109/TWC.2020.3042074
V.P. Rekkas, S. Sotiroudis, P. Sarigiannidis, S. Wan, G.K. Karagiannidis, S.K. Goudos, Machine learning in beyond 5g/6g networks-state-of-the-art and future trends. Electronics 10(22), 2786 (2021)
A.I. Salameh, M. El Tarhuni, From 5g to 6g-challenges, technologies, and applications. Future Internet 14(4), 117 (2022)
M.Z. Chowdhury, M. Shahjalal, S. Ahmed, Y.M. Jang, 6g wireless communication systems: applications, requirements, technologies, challenges, and research directions. IEEE Open J. Commun. Soc. 1, 957–975 (2020)
A. Dogra, R.K. Jha, S. Jain, A survey on beyond 5g network with the advent of 6g: architecture and emerging technologies. IEEE Access 9, 67512–67547 (2020)
K. Hassan, M. Masarra, M. Zwingelstein, I. Dayoub, Channel estimation techniques for millimeter-wave communication systems: achievements and challenges. IEEE Open J. Commun. Soc. 1, 1336–1363 (2020). https://doi.org/10.1109/OJCOMS.2020.3015394
Z. Liu, L. Zhang, Z. Ding, Overcoming the channel estimation barrier in massive MIMO communication via deep learning. IEEE Wirel. Commun. 27(5), 104–111 (2020). https://doi.org/10.1109/MWC.001.1900413
Z. Qin, H. Ye, G.Y. Li, B.-H.F. Juang, Deep learning in physical layer communications. IEEE Wirel. Commun. 26(2), 93–99 (2019). https://doi.org/10.1109/MWC.2019.1800601
H. Yang, X. Xie, M. Kadoch, Machine learning techniques and a case study for intelligent wireless networks. IEEE Netw. 34(3), 208–215 (2020). https://doi.org/10.1109/MNET.001.1900351
C. Jiang, H. Zhang, Y. Ren, Z. Han, K.-C. Chen, L. Hanzo, Machine learning paradigms for next-generation wireless networks. IEEE Wirel. Commun. 24(2), 98–105 (2017). https://doi.org/10.1109/MWC.2016.1500356WC
B. Hassan, S. Baig, H.M. Asif, S. Mumtaz, S. Muhaidat, A survey of FDD-based channel estimation schemes with coordinated multipoint. IEEE Syst. J. (2021). https://doi.org/10.1109/JSYST.2021.3111284
P. Sure, C.M. Bhuma, A survey on OFDM channel estimation techniques based on denoising strategies. Int. J. Eng. Sci. Technol. 20(2), 629–636 (2017). https://doi.org/10.1016/j.jestch.2016.09.011
A. Angelo Missiaggia Picorone, T. Rodrigues Oliveira, M. Vidal Ribeiro, PLC channel estimation based on pilots signal for OFDM modulation: a review. IEEE Lat. Am. Trans. 12(4), 580–589 (2014). https://doi.org/10.1109/TLA.2014.6868858
T. Hwang, C. Yang, G. Wu, S. Li, G. Ye Li, OFDM and its wireless applications: a survey. IEEE Trans. Veh. Technol. 58(4), 1673–1694 (2009). https://doi.org/10.1109/TVT.2008.2004555
S.G. Kang, Y.M. Ha, E.K. Joo, A comparative investigation on channel estimation algorithms for OFDM in mobile communications. IEEE Trans. Broadcast. 49(2), 142–149 (2003). https://doi.org/10.1109/TBC.2003.810263
Q. Mao, F. Hu, Q. Hao, Deep learning for intelligent wireless networks: a comprehensive survey. IEEE Commun. Surv. Tutor. 20(4), 2595–2621 (2018). https://doi.org/10.1109/COMST.2018.2846401
M. Zamanipour, A survey on deep-learning based techniques for modeling and estimation of massive MIMO channels 1910, 03390 (2020)
C. Zhang, P. Patras, H. Haddadi, Deep learning in mobile and wireless networking: a survey. IEEE Commun. Surv. Tutor. 21(3), 2224–2287 (2019). https://doi.org/10.1109/COMST.2019.2904897
L. Dai, R. Jiao, F. Adachi, H.V. Poor, L. Hanzo, Deep learning for wireless communications: an emerging interdisciplinary paradigm. IEEE Wirel. Commun. 27(4), 133–139 (2020). https://doi.org/10.1109/MWC.001.1900491
F. Tang, B. Mao, N. Kato, G. Gui, Comprehensive survey on machine learning in vehicular network: technology, applications and challenges. IEEE Commun. Surv. Tutor. 23(3), 2027–2057 (2021). https://doi.org/10.1109/COMST.2021.3089688
Q.-V. Pham, N.T. Nguyen, T. Huynh-The, L. Le Bao, K. Lee, W.-J. Hwang, Intelligent radio signal processing: a survey. IEEE Access 9, 83818–83850 (2021). https://doi.org/10.1109/ACCESS.2021.3087136
T. O'Shea, J. Hoydis, An introduction to deep learning for the physical layer. IEEE Trans. Cognit. Commun. Netw. 3(4), 563–575 (2017). https://doi.org/10.1109/TCCN.2017.2758370
D. Gunduz, P. de Kerret, N.D. Sidiropoulos, D. Gesbert, C.R. Murthy, M. van der Schaar, Machine learning in the air. IEEE J. Sel. Areas Commun. 37(10), 2184–2199 (2019). https://doi.org/10.1109/JSAC.2019.2933969
K. Mei, J. Liu, X. Zhang, N. Rajatheva, J. Wei, Performance analysis on machine learning-based channel estimation. IEEE Trans. Commun. 69(8), 5183–5193 (2021). https://doi.org/10.1109/TCOMM.2021.3083597
W. Jiang, H.D. Schotten, Neural network-based fading channel prediction: a comprehensive overview. IEEE Access 7, 118112–118124 (2019). https://doi.org/10.1109/ACCESS.2019.2937588
Y. Fan, D. Dan, Y. Li, Z. Wang, Z. Liu, Intelligent communication: application of deep learning at the physical layer of communication, in 2021 IEEE 4th Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC), vol. 4 (2021), pp. 1339–1345. https://doi.org/10.1109/IMCEC51613.2021.9482326
H. He, S. Jin, C.-K. Wen, F. Gao, G.Y. Li, Z. Xu, Model-driven deep learning for physical layer communications. IEEE Wirel. Commun. 26(5), 77–83 (2019). https://doi.org/10.1109/MWC.2019.1800447
T. Wang, C.-K. Wen, H. Wang, F. Gao, T. Jiang, S. Jin, Deep learning for wireless physical layer: opportunities and challenges. China Commun. 14(11), 92–111 (2017). https://doi.org/10.1109/CC.2017.8233654
L. Sakkas, E. Stergiou, G. Tsoumanis, C.T. Angelis, 5g ufmc scheme performance with different numerologies. Electronics 10(16), 1915 (2021)
G.B. Giannakis, Filterbanks for blind channel identification and equalization. IEEE Signal Process. Lett. 4(6), 184–187 (1997). https://doi.org/10.1109/97.586044
J. Liang, Z. Ding, Blind MIMO system identification based on cumulant subspace decomposition. IEEE Trans. Signal Process. 51(6), 1457–1468 (2003). https://doi.org/10.1109/TSP.2003.811232
L. Tong, G. Xu, T. Kailath, Blind identification and equalization based on second-order statistics: a time domain approach. IEEE Trans. Inf. Theory 40(2), 340–349 (1994). https://doi.org/10.1109/18.312157
H.H. Zeng, L. Tong, Blind channel estimation using the second-order statistics: asymptotic performance and limitations. IEEE Trans. Signal Process. 45(8), 2060–2071 (1997). https://doi.org/10.1109/78.611205
S. Chen, Y. Wu, S. McLaughlin, Genetic algorithm optimization for blind channel identification with higher order cumulant fitting. IEEE Trans. Evol. Comput. 1(4), 259–265 (1997). https://doi.org/10.1109/4235.687886
J.K. Tugnait, Identification and deconvolution of multichannel linear non-Gaussian processes using higher order statistics and inverse filter criteria. IEEE Trans. Signal Process. 45(3), 658–672 (1997). https://doi.org/10.1109/78.558482
B. Muquet, M. de Courville, Blind and semi-blind channel identification methods using second order statistics for OFDM systems, in 1999 IEEE International Conference on Acoustics, Speech, and Signal Processing Proceedings. ICASSP99 (Cat. No.99CH36258), vol. 5 (1999), pp. 2745–27485
H. Bolcskei, R.W. Heath, A.J. Paulraj, Blind channel identification and equalization in OFDM-based multiantenna systems. IEEE Trans. Signal Process. 50(1), 96–109 (2002). https://doi.org/10.1109/78.972486
R.W. Heath, G.B. Giannakis, Exploiting input cyclostationarity for blind channel identification in OFDM systems. IEEE Trans. Signal Process. 47(3), 848–856 (1999). https://doi.org/10.1109/78.747790
M. de Courville, P. Duhamel, P. Madec, J. Palicot, Blind equalization of OFDM systems based on the minimization of a quadratic criterion, in Proceedings of ICC/SUPERCOMM '96 - International Conference on Communications, vol. 3 (1996), pp. 1318–13223. https://doi.org/10.1109/ICC.1996.533623
A. Petropulu, R. Zhang, R. Lin, Blind OFDM channel estimation through simple linear precoding. IEEE Trans. Wirel. Commun. 3(2), 647–655 (2004). https://doi.org/10.1109/TWC.2003.821140
S. Yatawatta, A.P. Petropulu, Blind channel estimation in MIMO OFDM systems with multiuser interference. IEEE Trans. Signal Process. 54(3), 1054–1068 (2006). https://doi.org/10.1109/TSP.2005.862944
F. Gao, A. Nallanathan, Blind channel estimation for MIMO OFDM systems via nonredundant linear precoding. IEEE Trans. Signal Process. 55(2), 784–789 (2007). https://doi.org/10.1109/TSP.2006.885764
J. Gao, X. Zhu, A.K. Nandi, Non-redundant precoding and PAPR reduction in MIMO OFDM systems with ICA based blind equalization. IEEE Trans. Wirel. Commun. 8(6), 3038–3049 (2009). https://doi.org/10.1109/TWC.2009.080541
E. Moulines, P. Duhamel, J.-F. Cardoso, S. Mayrargue, Subspace methods for the blind identification of multichannel FIR filters. IEEE Trans. Signal Process. 43(2), 516–525 (1995). https://doi.org/10.1109/78.348133
J. Namgoong, T.F. Wong, J.S. Lehnert, Subspace multiuser detection for multicarrier DS-CDMA. IEEE Trans. Commun. 48(11), 1897–1908 (2000). https://doi.org/10.1109/26.886487
F. Verde, Subspace-based blind multiuser detection for quasi-synchronous MC-CDMA systems. IEEE Signal Process. Lett. 11(7), 621–624 (2004). https://doi.org/10.1109/LSP.2004.830111
H. Cheng, S.C. Chan, Blind linear MMSE receivers for MC-CDMA systems. IEEE Trans. Circuits Syst. I Regul. Pap. 54(2), 367–376 (2007). https://doi.org/10.1109/TCSI.2006.887595
S. Roy, C. Li, A subspace blind channel estimation method for OFDM systems without cyclic prefix. IEEE Trans. Wirel. Commun. 1(4), 572–579 (2002). https://doi.org/10.1109/TWC.2002.804160
S. Wang, J.H. Manton, Blind channel estimation for non-CP OFDM systems using multiple receive antennas. IEEE Signal Process. Lett. 16(4), 299–302 (2009). https://doi.org/10.1109/LSP.2009.2014284
S. Wang, J.H. Manton, A cross-relation-based frequency-domain method for blind SIMO-OFDM channel estimation. IEEE Signal Process. Lett. 16(10), 865–868 (2009). https://doi.org/10.1109/LSP.2009.2025926
B. Muquet, M. de Courville, P. Duhamel, Subspace-based blind and semi-blind channel estimation for OFDM systems. IEEE Trans. Signal Process. 50(7), 1699–1712 (2002). https://doi.org/10.1109/TSP.2002.1011210
C. Li, S. Roy, Subspace-based blind channel estimation for OFDM by exploiting virtual carriers. IEEE Trans. Wirel. Commun. 2(1), 141–150 (2003). https://doi.org/10.1109/TWC.2002.806383
C. Shin, R.W. Heath, E.J. Powers, Blind channel estimation for MIMO-OFDM systems. IEEE Trans. Veh. Technol. 56(2), 670–685 (2007). https://doi.org/10.1109/TVT.2007.891429
F. Gao, Y. Zeng, A. Nallanathan, T.-S. Ng, Robust subspace blind channel estimation for cyclic prefixed MIMO ODFM systems: algorithm, identifiability and performance analysis. IEEE J. Sel. Areas Commun. 26(2), 378–388 (2008). https://doi.org/10.1109/JSAC.2008.080214
C.-C. Tu, B. Champagne, Subspace-based blind channel estimation for MIMO-OFDM systems with reduced time averaging. IEEE Trans. Veh. Technol. 59(3), 1539–1544 (2010). https://doi.org/10.1109/TVT.2009.2039008
J.-G. Kim, J.-H. Oh, J.-T. Lim, Subspace-based channel estimation for MIMO-OFDM systems with few received blocks. IEEE Signal Process. Lett. 19(7), 435–438 (2012). https://doi.org/10.1109/LSP.2012.2197201
S. Zhou, G.B. Giannakis, Finite-alphabet based channel estimation for OFDM and related multicarrier systems. IEEE Trans. Commun. 49(8), 1402–1414 (2001). https://doi.org/10.1109/26.939873
C.H. Aldana, E. de Carvalho, J.M. Cioffi, Channel estimation for multicarrier multiple input single output systems using the EM algorithm. IEEE Trans. Signal Process. 51(12), 3280–3292 (2003). https://doi.org/10.1109/TSP.2003.819082
I. Ghaleb, O.A. Alim, K. Seddik, A new finite alphabet based blind channel estimation for OFDM systems, in IEEE 5th Workshop on Signal Processing Advances in Wireless Communications, vol. 2004 (2004), pp. 102–105. https://doi.org/10.1109/SPAWC.2004.1439212
Z. Hou, V.K. Dubey, Improved finite-alphabet based channel estimation for OFDM systems, in The Ninth International Conference on Communications Systems, 2004. ICCS 2004 (2004). pp. 155–159. https://doi.org/10.1109/ICCS.2004.1359358
Z. Chen, T. Zhang, Z. Gong, Finite-alphabet and decision-feedback based channel estimation for space-time coded OFDM systems, in Joint IST Workshop on Mobile Future, 2006 and the Symposium on Trends in Communications. SympoTIC '06 (2006). pp. 64-67. https://doi.org/10.1109/TIC.2006.1708023
R.K. Martin, J. Balakrishnan, W.A. Sethares, C.R. Johnson, A blind adaptive TEQ for multicarrier systems. IEEE Signal Process. Lett. 9(11), 341–343 (2002). https://doi.org/10.1109/LSP.2002.804423
J. Balakrishnan, R.K. Martin, C.R. Johnson, Blind, adaptive channel shortening by sum-squared auto-correlation minimization (SAM). IEEE Trans. Signal Process. 51(12), 3086–3093 (2003). https://doi.org/10.1109/TSP.2003.818892
G.A. Al-Rawi, T.Y. Al-Naffouri, A. Bahai, J. Cioffi, Exploiting error-control coding and cyclic-prefix in channel estimation for coded OFDM systems. IEEE Commun. Lett. 7(8), 388–390 (2003). https://doi.org/10.1109/LCOMM.2003.814712
M.C. Necker, G.L. Stuber, Totally blind channel estimation for OFDM on fast varying mobile radio channels. IEEE Trans. Wirel. Commun. 3(5), 1514–1525 (2004). https://doi.org/10.1109/TWC.2004.833508
T.-H. Chang, W.-K. Ma, C.-Y. Chi, Maximum-likelihood detection of orthogonal space-time block coded OFDM in unknown block fading channels. IEEE Trans. Signal Process. 56(4), 1637–1649 (2008). https://doi.org/10.1109/TSP.2007.909229
H. Li, Blind channel estimation for multicarrier systems with narrowband interference suppression. IEEE Commun. Lett. 7(7), 326–328 (2003). https://doi.org/10.1109/LCOMM.2003.814030
N. Sarmadi, S. Shahbazpanahi, A.B. Gershman, Blind channel estimation in orthogonally coded MIMO-OFDM systems: a semidefinite relaxation approach. IEEE Trans. Signal Process. 57(6), 2354–2364 (2009). https://doi.org/10.1109/TSP.2009.2016887
X.G. Doukopoulos, G.V. Moustakides, Blind adaptive channel estimation in ofdm systems. IEEE Trans. Wirel. Commun. 5(7), 1716–1725 (2006). https://doi.org/10.1109/TWC.2006.1673083
L. Deng, Y.M. Huang, Q. Chen, Y. He, X. Sui, Collaborative blind equalization for time-varying OFDM applications enabled by normalized least mean and recursive square methodologies. IEEE Access 8, 103073–103087 (2020). https://doi.org/10.1109/ACCESS.2020.2999387
W. Li, D. Qu, T. Jiang, An efficient preamble design based on comb-type pilots for channel estimation in FBMC/OQAM systems. IEEE Access 6, 64698–64707 (2018). https://doi.org/10.1109/ACCESS.2018.2877957
V.K. Singh, M.F. Flanagan, B. Cardiff, Generalized least squares based channel estimation for FBMC-OQAM. IEEE Access 7, 129411–129420 (2019). https://doi.org/10.1109/ACCESS.2019.2939674
D. Ren, J. Li, G. Lu, J. Ge, Per-subcarrier RLS adaptive channel estimation combined with channel equalization for FBMC/OQAM systems. IEEE Wirel. Commun. Lett. 9(7), 1036–1040 (2020). https://doi.org/10.1109/LWC.2020.2979851
C.-S. Yeh, Y. Lin, Channel estimation using pilot tones in OFDM systems. IEEE Trans. Broadcast. 45(4), 400–409 (1999). https://doi.org/10.1109/11.825535
S. Coleri, M. Ergen, A. Puri, A. Bahai, Channel estimation techniques based on pilot arrangement in OFDM systems. IEEE Trans. Broadcast. 48(3), 223–229 (2002). https://doi.org/10.1109/TBC.2002.804034
M.-X. Chang, Y.T. Su, Model-based channel estimation for OFDM signals in Rayleigh fading. IEEE Trans. Commun. 50(4), 540–544 (2002). https://doi.org/10.1109/26.996066
R. Negi, J. Cioffi, Pilot tone selection for channel estimation in a mobile OFDM system. IEEE Trans. Consum. Electron. 44(3), 1122–1128 (1998). https://doi.org/10.1109/30.713244
I. Barhumi, G. Leus, M. Moonen, Optimal training design for MIMO OFDM systems in mobile wireless channels. IEEE Trans. Signal Process. 51(6), 1615–1624 (2003). https://doi.org/10.1109/TSP.2003.811243
S. Ohno, G.B. Giannakis, Average-rate optimal PSAM transmissions over time-selective fading channels. IEEE Trans. Wirel. Commun. 1(4), 712–720 (2002). https://doi.org/10.1109/TWC.2002.804183
J.K. Moon, S.I. Choi, Performance of channel estimation methods for OFDM systems in a multipath fading channels. IEEE Trans. Consum. Electron. 46(1), 161–170 (2000). https://doi.org/10.1109/30.826394
H. Steendam, On the pilot carrier placement in multicarrier-based systems. IEEE Trans. Signal Process. 62(7), 1812–1821 (2014). https://doi.org/10.1109/TSP.2014.2306179
J.-W. Choi, Y.-H. Lee, Optimum pilot pattern for channel estimation in OFDM systems. IEEE Trans. Wirel. Commun. 4(5), 2083–2088 (2005). https://doi.org/10.1109/TWC.2005.853891
R.J. Baxley, J.E. Kleider, G.T. Zhou, Pilot design for OFDM with null edge subcarriers. IEEE Trans. Wirel. Commun. 8(1), 396–405 (2009). https://doi.org/10.1109/T-WC.2009.080065
D. Hu, L. Yang, Y. Shi, L. He, Optimal pilot sequence design for channel estimation in MIMO OFDM systems. IEEE Commun. Lett. 10(1), 1–3 (2006). https://doi.org/10.1109/LCOMM.2006.1576550
P. Fertl, G. Matz, Channel estimation in wireless OFDM systems with irregular pilot distribution. IEEE Trans. Signal Process. 58(6), 3180–3194 (2010). https://doi.org/10.1109/TSP.2010.2044254
Q. Li, M. Wen, Y. Zhang, J. Li, F. Chen, F. Ji, Information-guided pilot insertion for OFDM-based vehicular communications systems. IEEE Internet Things J. 6(1), 26–37 (2019). https://doi.org/10.1109/JIOT.2018.2872438
J.-H. Oh, J.-G. Kim, J.-T. Lim, On the design of pilot symbols for OFDM systems over doubly-selective channels. IEEE Commun. Lett. 15(12), 1335–1337 (2011). https://doi.org/10.1109/LCOMM.2011.100511.111594
Y. Chen, L. You, A.-A. Lu, X. Gao, X.-G. Xia, Channel estimation and robust detection for IQ imbalanced uplink massive MIMO-OFDM with adjustable phase shift pilots. IEEE Access 9, 35864–35878 (2021). https://doi.org/10.1109/ACCESS.2021.3060184
Z. Sheng, H.D. Tuan, H.H. Nguyen, Y. Fang, Pilot optimization for estimation of high-mobility OFDM channels. IEEE Trans. Veh. Technol. 66(10), 8795–8806 (2017). https://doi.org/10.1109/TVT.2017.2694821
M.R. Raghavendra, S. Bhashyam, K. Giridhar, Exploiting hopping pilots for parametric channel estimation in OFDM systems. IEEE Signal Process. Lett. 12(11), 737–740 (2005). https://doi.org/10.1109/LSP.2005.856889
K. Kim, H. Park, H.M. Kwon, Optimum clustered pilot sequence for OFDM systems under rapidly time-varying channel. IEEE Trans. Commun. 60(5), 1357–1370 (2012). https://doi.org/10.1109/TCOMM.2012.032012.100508
J. Wang, H. Yu, Y. Wu, F. Shu, J. Wang, R. Chen, J. Li, Pilot optimization and power allocation for OFDM-based full-duplex relay networks with IQ-imbalances. IEEE Access 5, 24344–24352 (2017). https://doi.org/10.1109/ACCESS.2017.2766703
K. Chen-Hu, M.J.F.-G. Garcia, A.M. Tonello, A.G. Armada, Pilot pouring in superimposed training for channel estimation in CB-FMT. IEEE Trans. Wirel. Commun. 20(6), 3366–3380 (2021). https://doi.org/10.1109/TWC.2021.3049530
H. Zhang, B. Sheng, An enhanced partial-data superimposed training scheme for OFDM systems. IEEE Commun. Lett. 24(8), 1804–1807 (2020). https://doi.org/10.1109/LCOMM.2020.2992042
J.C. Estrada-Jimenez, B.G. Guzman, M.J. Fernandez-Getino Garcıa, V.P.G. Jimenez, Superimposed training-based channel estimation for MISO optical-OFDM VLC. IEEE Trans. Veh. Technol. 68(6), 6161–6166 (2019). https://doi.org/10.1109/TVT.2019.2909428
J.C. Estrada-Jimenez, M.J. Fernandez-Getino Garcıa, Partial-data superimposed training with data precoding for OFDM systems. IEEE Trans. Broadcast. 65(2), 234–244 (2019)
Q. Wang, G. Dou, X. He, R. Deng, J. Gao, Novel OFDM system using data-nulling superimposed pilots with subcarrier index modulation. IEEE Commun. Lett. 22(10), 2164–2167 (2018). https://doi.org/10.1109/LCOMM.2018.2859989
X. Cai, G.B. Giannakis, Error probability minimizing pilots for OFDM with M-PSK modulation over Rayleigh-fading channels. IEEE Trans. Veh. Technol. 53(1), 146–155 (2004). https://doi.org/10.1109/TVT.2003.819624
E.G. Larsson, J. Li, Preamble design for multiple-antenna OFDM-based WLANs with null subcarriers. IEEE Signal Process. Lett. 8(11), 285–288 (2001). https://doi.org/10.1109/97.969445
M. Dong, L. Tong, B.M. Sadler, Optimal pilot placement for channel tracking in OFDM. Proc. MILCOM 1, 602–6061 (2002). https://doi.org/10.1109/MILCOM.2002.1180512
S. Adireddy, L. Tong, H. Viswanathan, Optimal placement of training for frequency-selective block-fading channels. IEEE Trans. Inf. Theory 48(8), 2338–2353 (2002). https://doi.org/10.1109/TIT.2002.800466
X. Ma, L. Yang, G.B. Giannakis, Optimal training for MIMO frequency-selective fading channels. IEEE Trans. Wirel. Commun. 4(2), 453–466 (2005). https://doi.org/10.1109/TWC.2004.842998
M. Dong, L. Tong, Optimal design and placement of pilot symbols for channel estimation. IEEE Trans. Signal Process. 50(12), 3055–3069 (2002). https://doi.org/10.1109/TSP.2002.805504
C. Budianu, L. Tong, Channel estimation for space-time orthogonal block codes, in ICC 2001. IEEE International Conference on Communications. Conference Record (Cat. No.01CH37240), vol. 4 (2001), pp. 1127–11314. https://doi.org/10.1109/ICC.2001.936836
A. Aggarwal, T.H. Meng, Minimizing the peak-to-average power ratio of OFDM signals using convex optimization. IEEE Trans. Signal Process. 54(8), 3099–3110 (2006). https://doi.org/10.1109/TSP.2006.875390
X. Guo, J. Zhang, S. Chen, C. Zhu, J. Yang, Optimal uplink pilot-data power allocation for large-scale antenna array-aided OFDM systems. IEEE Trans. Veh. Technol. 69(1), 428–442 (2020). https://doi.org/10.1109/TVT.2019.2949874
N. Chen, G.T. Zhou, Peak-to-average power ratio reduction in OFDM with blind selected pilot tone modulation. IEEE Trans. Wirel. Commun. 5(8), 2210–2216 (2006). https://doi.org/10.1109/TWC.2006.1687737
S. Ehsanfar, M. Matthe, M. Chafii, G.P. Fettweis, Pilot- and CP-aided channel estimation in MIMO non-orthogonal multi-carriers. IEEE Trans. Wirel. Commun. 18(1), 650–664 (2019). https://doi.org/10.1109/TWC.2018.2883940
Z. Na, Z. Pan, M. Xiong, X. Liu, W. Lu, Y. Wang, L. Fan, Turbo receiver channel estimation for GFDM-based cognitive radio networks. IEEE Access 6, 9926–9935 (2018). https://doi.org/10.1109/ACCESS.2018.2803742
M.D. Nisar, W. Anjum, F. Junaid, Preamble design for improved noise suppression in FBMC-OQAM channel estimation. IEEE Wirel. Commun. Lett. 9(9), 1471–1475 (2020). https://doi.org/10.1109/LWC.2020.2994203
A.I. Perez-Neira, M. Caus, R. Zakaria, D. Le Ruyet, E. Kofidis, M. Haardt, X. Mestre, Y. Cheng, MIMO signal processing in offset-QAM based filter bank multicarrier systems. IEEE Trans. Signal Process. 64(21), 5733–5762 (2016). https://doi.org/10.1109/TSP.2016.2580535
M. Fuhrwerk, S. Moghaddamnia, J. Peissig, Scattered pilot-based channel estimation for channel adaptive FBMC-OQAM systems. IEEE Trans. Wirel. Commun. 16(3), 1687–1702 (2017). https://doi.org/10.1109/TWC.2017.2651806
W. Liu, S. Schwarz, M. Rupp, T. Jiang, Pairs of pilots design for preamble-based channel estimation in OQAM/FBMC systems. IEEE Wirel. Commun. Lett. 10(3), 488–492 (2021). https://doi.org/10.1109/LWC.2020.3035388
D. Kong, P. Liu, Q. Wang, J. Li, X. Li, X. Cheng, Preamble-based MMSE channel estimation with low pilot overhead in MIMO-FBMC systems. IEEE Access 8, 148926–148934 (2020). https://doi.org/10.1109/ACCESS.2020.3015809
W. Cui, D. Qu, T. Jiang, B. Farhang-Boroujeny, Coded auxiliary pilots for channel estimation in FBMC-OQAM systems. IEEE Trans. Veh. Technol. 65(5), 2936–2946 (2016). https://doi.org/10.1109/TVT.2015.2448659
S. Park, B. Shim, J.W. Choi, Iterative channel estimation using virtual pilot signals for MIMO-OFDM systems. IEEE Trans. Signal Process. 63(12), 3032–3045 (2015). https://doi.org/10.1109/TSP.2015.2416684
K. Shi, E. Serpedin, P. Ciblat, Decision-directed fine synchronization in OFDM systems. IEEE Trans. Commun. 53(3), 408–412 (2005). https://doi.org/10.1109/TCOMM.2005.843463
S. Kalyani, K. Giridhar, Mitigation of error propagation in decision directed OFDM channel tracking using generalized M estimators. IEEE Trans. Signal Process. 55(5), 1659–1672 (2007). https://doi.org/10.1109/TSP.2006.889399
J. Akhtman, L. Hanzo, Decision directed channel estimation aided OFDM employing sample-spaced and fractionally-spaced CIR estimators. IEEE Trans. Wirel. Commun. 6(4), 1171–1175 (2007). https://doi.org/10.1109/TWC.2007.348308
I. Dagres, A. Polydoros, Decision-directed least-squares phase perturbation compensation in OFDM systems. IEEE Trans. Wirel. Commun. 8(9), 4784–4796 (2009). https://doi.org/10.1109/TWC.2009.081420
X. Li, W.-D. Zhong, A. Alphones, C. Yu, Time-domain adaptive decision-directed channel equalizer for RGI-DP-CO-OFDM. IEEE Photon. Technol. Lett. 26(3), 285–288 (2014). https://doi.org/10.1109/LPT.2013.2292694
G. Ren, J. Xing, H. Zhang, An SNR-assisted decision-directed RCFO estimation algorithm for wireless OFDM systems. IEEE Trans. Veh. Technol. 58(4), 2099–2103 (2009). https://doi.org/10.1109/TVT.2008.2005835
J. Zhang, L. Hanzo, X. Mu, Joint decision-directed channel and noise-variance estimation for MIMO OFDM/SDMA systems based on expectation-conditional maximization. IEEE Trans. Veh. Technol. 60(5), 2139–2151 (2011). https://doi.org/10.1109/TVT.2011.2148184
O.O. Oyerinde, S.H. Mneney, Subspace tracking-based decision directed CIR estimator and adaptive CIR prediction. IEEE Trans. Veh. Technol. 61(5), 2097–2107 (2012). https://doi.org/10.1109/TVT.2012.2192460
C. Wei, D.W. Lin, A decision-directed channel estimator for OFDM-based Bursty vehicular communication. IEEE Trans. Veh. Technol. 66(6), 4938–4953 (2017). https://doi.org/10.1109/TVT.2016.2619490
K.-G. Wu, J.-A. Wu, Efficient decision-directed channel estimation for OFDM systems with transmit diversity. IEEE Commun. Lett. 15(7), 740–742 (2011). https://doi.org/10.1109/LCOMM.2011.060111.110200
S.D. Muruganathan, A.B. Sesay, A low-complexity decision-directed channel-estimation scheme for OFDM systems with space-frequency diversity in doubly selective fading channels. IEEE Trans. Veh. Technol. 58(8), 4277–4291 (2009). https://doi.org/10.1109/TVT.2009.2021600
K.-G. Wu, M.-K.C. Chang, Adaptively regularized least-squares estimator for decision-directed channel estimation in transmit-diversity OFDM systems. IEEE Wirel. Commun. Lett. 3(3), 309–312 (2014). https://doi.org/10.1109/WCL.2014.030714.140013
A. Ladaycia, A. Mokraoui, K. Abed-Meraim, A. Belouchrani, Performance bounds analysis for semi-blind channel estimation in MIMO-OFDM communications systems. IEEE Trans. Wirel. Commun. 16(9), 5925–5938 (2017). https://doi.org/10.1109/TWC.2017.2717406
M.-S. Baek, M.-J. Kim, Y.-H. You, H.-K. Song, Semi-blind channel estimation and PAR reduction for MIMO-OFDM system with multiple antennas. IEEE Trans. Broadcast. 50(4), 414–424 (2004). https://doi.org/10.1109/TBC.2004.837885
S. Zhou, B. Muquet, G.B. Giannakis, Subspace-based (semi-) blind channel estimation for block precoded space-time OFDM. IEEE Trans. Signal Process. 50(5), 1215–1228 (2002). https://doi.org/10.1109/78.995088
Y. Zeng, T.-S. Ng, A semi-blind channel estimation method for multiuser multiantenna OFDM systems. IEEE Trans. Signal Process. 52(5), 1419–1429 (2004). https://doi.org/10.1109/TSP.2004.826183
Article MathSciNet MATH
|
CommonCrawl
|
Animal Systematics, Evolution and Diversity
The Korean Society of Systematic Zoology
2234-6953(pISSN)
Agriculture, Fishery and Food > Science of Animal Resources
http://www.e-ased.org KSCI KCI
Issue spc9
Issue nspc8
Volume 1 Issue 1_2
Mitochondrial DNA Sequence Variations and Genetic Relationships among Korean Thais Species (Muricidae: Gastropoda)
Lee, Sang-Hwa;Kim, Tae-Ho;Lee, Jun-Hee;Lee, Jong-Rak;Park, Joong-Ki 1
https://doi.org/10.5635/KJSZ.2011.27.1.001 PDF KSCI
Thais Roding, 1798, commonly known as rock-shell, is among the most frequently found gastropod genera worldwide on intertidal rocky shores including those of Japan, China, Taiwan and Korea. This group contains important species in many marine environmental studies but species-level taxonomy of the group is quite complicated due to the morphological variations in shell characters. This study examined the genetic variations and relationships among three Korean Thais species based on the partial nucleotide sequences of mitochondrial cox1 gene fragments. Phylogenetic trees from different analytic methods (maximum parsimony, neighbor-joining, and maximum likelihood) showed that T. bronni and T. luteostoma are closely related, indicating the most recent common ancestry. The low sequence divergence found between T. luteostoma and T. bronni, ranging from 1.53% to 3.19%, also corroborates this idea. Further molecular survey using different molecular marker is required to fully understand a detailed picture of the origin for their low level of interspecific sequence divergence. Sequence comparisons among conspecific individuals revealed extensive sequence variations within the three species with maximum values of 2.43% in T. clavigera and 1.37% in both T. bronni and T. luteostoma. In addition, there is an unexpectedly high level of mitochondrial genotypic diversity within each of the three Korean Thais species. The high genetic diversity revealed in Korean Thais species is likely to reflect genetic diversity introduced from potential source populations with diverse geographic origins, such as Taiwan, Hong Kong, and a variety of different coastal regions in South China and Japan. Additional sequence analysis with comprehensive taxon sampling from unstudied potential source populations will be also needed to address the origin and key factors for the high level of genetic diversity discovered within the three Korean Thais species studied.
Two New Marine Sponges of the Genus Halichondria (Halichondrida: Halichondriidae) from Uljin, Korea
Kang, Dong-Won;Sim, Chung-Ja 19
Two new marine sponges, Halichondria jangseungenesis n. sp. and H. nagokenesis n. sp., of the family Halichondriidae, were collected from Uljin-gun, Gyeongsangbuk-do, Korea by SCUBA diving during the period from Apr 2007 to Aug 2007. Based on their spicule composition and skeletal structure, H. jangseungenesis n. sp. appears to have a close similarity with H. panicea (Pallas, 1766); however, they differ in length of spicule. The spicule length of oxea of H. jangseungenesis n. sp. is shorter than that of H. panicea. Based on their spicule composition and growth form, H. nagokenesis n. sp. is quite similar to H. cylindrata Tanita and Hoshino, 1989; however, but they differ in length of spicule. The spicule length of oxea of H. nagokenesis n. sp. is longer than that of H. cylindrata.
First Record of Three Uronychia Species (Ciliophora: Spirotrichea: Euplotida) from Korea
Kim, Se-Joo;Min, Gi-Sik 25
Three morphospecies of the genus Uronychia, i.e. U. setigera Calkins, 1902, U. binucleata Young, 1922, and U. multicirrus Song, 1997, were collected from the coastal waters of Gumjin-ri on the East Sea and the public waterfront of Incheon on the Yellow Sea in Korea, respectively. These species are described based on live observation, protargol impregnation, silver nitrate impregnation, and their morphometrics. Diagnostic keys for these species are also provided. In addition, their small subunit ribosomal DNA sequences were compared with previously known sequences of Uronychia species. Diagnostics of three Uronychia species are as follows: U. setigera: $50-80\;{\mu}m$ long in vivo, oval-shaped, 2 macronuclear nodules (Ma), 1 spur on the left margin, 11 adoral membranelles (AM) 1, 4 AM2, 1 buccal cirrus (BC), 4 frontal cirri (FC), 3 left marginal cirri (LMC), 2 ventral cirri (VC), 5 transverse cirri (TC), 3 caudal cirri (CC), 6 dorsal kineties (DK), and approximately 23 cilia in the leftmost kinety. U. binucleata: $70-110\;{\mu}m$ long in vivo, oval to slightly rectangular shaped, 2 Ma, 1 micronucleus (Mi), 2 spurs on the posterior region, 11 AM1, 4 AM2, 1 BC, 4 FC, 3 LMC, 2 VC, 5 TC, 3 CC, 6 DK, and approximately 37 cilia in the leftmost kinety. U. multicirrus: $140-200\;{\mu}m$ long in vivo, oval to slightly rectangular shaped, ca. 7 Ma, 1 Mi, 11 AM1, 4 AM2, 1 BC, 4 FC, 3 LMC, approximately 8 VC, 5 TC, 3 CC, and 6 DK. This study presents the first record of this genus in Korea.
Three Aetideid Species of Copepods (Copepoda: Calanoida: Aetideidae) from East Sea of Korea
Lim, Byung-Jin;Song, Sung-Joon;Min, Gi-Sik 35
Three aetideid copepods collected from the East Sea of Korea are described: Bradyidius angustus (Tanaka, 1957), Gaetanus minutus (Sars, 1907), and Aetideus acutus Farran, 1929. The former two species are new to the Korean copepod fauna. The sequences of cytochrome c oxidase subunit 1 were determined to be the molecular characteristics of these three species.
New Records of Creeping Ctenophores, Genus Coeloplana (Tentaculata: Platyctenida: Coeloplanidae), from Korea
Song, Jun-Im;Hwang, Sung-Jin;Lee, Sang-Hwa;Park, Joong-Ki 47
Creeping ctenophores, Coeloplana species, were collected by SCUBA divers throughout the year (November 2006 to June 2010) from the branches and polyp masses of encrusting dendronephthyas at a depth of 20-32m off Munseom Island (Seogwipo-si, Jeju-do, Korea). A single individual of a newly recorded species in Korea, Coeloplana bocki Komai, 1920, was collected together with C. anthostella from the same location on 16 August 2009. A large number of individuals of each species were subsequently collected from the host Dendronephthya aff. dendritica on 20 June 2010. C. bocki can be distinguished from C. anthostella Song and Hwang, 2010 and C. komaii Utinomi, 1963 by its unique blue and orange colored stripes, and/or the branching and anastomosing milky-white stripes encircling the aboral sense organ towards the margin. The detailed morphology and molecular sequence information (nuclear 18S rDNA, internal transcribed spacer 1, and mitochondrial cox1 gene sequences) for C. bocki is provided, and C. bocki and C. anthostella are compared.
Zoeal Stages of Pisidia serratifrons (Crustacea: Decapoda: Porcellanidae) under Laboratory Conditions
Kim, Han-Ju;Ko, Hyun-Sook 53
The zoeal stages of Pisidia serratifrons are described and illustrated for the first time and its morphological characteristics are compared with those of three known Pisidia species of the family Porcellanidae. The zoea of P. serratifrons differs from those of other Pisidia (P. brasiliensis, P. dispar, and P. dehaanii), by having 11 spinules on the exopod of the antenna. In order to facilitate the study of plankton-collected material, a provisional key is provided for identification of the Korean porcellanid zoeae.
First Records of Two Aquatic Oxytrichid Ciliates (Ciliophora: Sporadotrichida: Oxytrichidae) from Korea
Kwon, Choon-Bong;Shin, Mann-Kyoon 59
For investigation of the Korean ciliate fauna, two oxytrichid ciliates, Histriculus histrio (Muller, 1773) and Sterkiella thompsoni Foissner, 1996, were collected from freshwater and brackish waters in South Korea, respectively. These two ciliates are reported for the first time in Korea. Descriptions were based on observations of live and silver stained specimens. Diagnoses of these species are as follows. Histriculus histrio: body is approximately $190{\times}85\;{\mu}m$ in size, almost ellipsoidal in shape, posterior part rather thin and very translucent. Cortical granules are absent. Both marginal rows are almost confluent at the posterior end. Six dorsal kineties are present. Sterkiella thompsoni: body is approximately $140{\times}50\;{\mu}m$ in size. Body margins are usually in parallel. Cortical granules are absent. Invariably, four dorsal kineties are present. Two caudal cirri are located on the dorsal surface. Three ellipsoidal macronuclear nodules are present.
New Record of Philonthus liopterus (Coleoptera, Staphylinidae) in Korea
Cho, Young-Bok 67
Philonthus liopterus Sharp is reported for the first time in Korea. A habitus photo and illustration of the male genitalia of this species are provided.
Two Marine Littoral Species of the Genus Medon (Coleoptera: Staphylinidae: Paederinae) New to Korea
Kim, Tae-Kyu;Cho, Young-Bok;Ahn, Kee-Jeong 69
The taxonomy of marine littoral species of the genus Medon Stephens in Korea is presented. The genus and two species-Medon prolixus (Sharp) and M. rubeculus Sharp-are identified for the first time in the Korean Peninsula. Redescriptions of M. prolixus and M. rubeculus with an illustration of its habitus and line drawings are provided.
New Record of Two Species of Ampelisca (Crustacea: Amphipoda: Ampeliscidae) from Korea
Kim, Young-Hyo;Eun, Ye;Lee, Kyung-Sook 75
Two species of gammaridean amphipod are newly recorded from the shallow Korean waters; Ampelisca alatopedunculata Ren and A. miharaensis Nagata. These are described and figured in detail. A key to the Korean Ampelisca species is provided.
A New Record of Tamba igniflua (Lepidoptera: Noctuidae) from Korea
Choi, Sei-Woong;Lee, Jin 85
A noctuid species, Tamba igniflua (Wileman and South), was reported for the first time in Korea. One female of T. igniflua was successfully reared with leaves of Eurya japonica from a caterpillar collected in the southwestern part of the Korean Peninsula. Diagnostic characteristics of the species are provided with a brief description of caterpillar and adult, including genitalic structure.
New Records of Two Hydroids(Cnidaria: Hydrozoa) from Korean Waters
Park, Jung-Hee 89
Hydroid specimens were collected from the coasts of Isl. Jeongjokdo (Taean) and Gampo Harbour, Korea, on 10 May and 19 October 2010. Two of the species identified as Sertularia tenera G.O. Sars 1874 and Plumularia halecioides Alder 1859 are new to Korean fauna. They are described with illustrations.
A New Species of the Genus Hippospongia (Demospongiae: Dictyoceratida) from Korea
Lee, Kyung-Jin;Sim, Chung-Ja 93
Sponges of the family Spongiidae are poorly known in Korean waters. This paper describes Hippospongia bergquistia n. sp. of the family Spongiidae (Demospongiae: Dictyoceratida) collected from Moselpo, Jejudo Island, Korea in 2007. This new species has a cavernous construction, rare pseudo-tertiary fibre, and rare primary fibres.
A New Record of Eupithecia praepupillata (Lepidoptera: Geometridae) from Korea
Choi, Sei-Woong;Mironov, Vladimir;Kim, Sung-Soo 97
In this paper, we report for the first time on a species of Eupithecia from Korea. Three females of Eupithecia praepupillata Wehrli, 1927, were collected from the northeastern part of South Korea. With this addition, a total of 53 species of Eupithecia from Korea have been recorded. Diagnosis and description of the species are provided with figures of the genitalia.
A New Species of the Genus Tetilla (Spirophorida: Tetillidae) from Korea
Shim, Eun-Jung;Sim, Chung-Ja 101
A new species in the genus Tetilla, Tetilla hwasunensis n. sp. was collected from Hwasun Harbor, Jejudo Island in 2009. This species differs from T. serica its lack of spherules and from T. radiate by having sigmaspires. Description and figures of the new species are provided.
Discovery of Vespa binghami (Vespidae: Hymenoptera) in Korea
Kim, Jeong-Kyu;Kim, Il-Kwon 105
Vespa binghami Vecht, a poorly known vespine species, was discovered in Korea. This is the second treatment of this species on the Korean peninsula since the first report on Korean occurrence by Virula in 1925; however, the first one was based on practical Korean distributional information. Diagnostic description and digital images are provided.
New Record of Gynodiastylis platycarpus (Cumacea: Gynodiastylidae) from Korea
Lee, Chang-Mok;Hong, Soon-Sang;Lee, Kyung-Sook 109
Gynodiastylis platycarpus Gam$\^{o}$ is redescribed as a new record of Korean fauna and a key to the Korean species of the genus is provided. This species is characterized by a slender body, a small telson, and a 3-articulated uropodal endopod.
|
CommonCrawl
|
Kinetic models for epidemic dynamics with social heterogeneity
G. Dimarco1,
B. Perthame2,
G. Toscani3 &
M. Zanella ORCID: orcid.org/0000-0001-8456-58663
Journal of Mathematical Biology volume 83, Article number: 4 (2021) Cite this article
We introduce a mathematical description of the impact of the number of daily contacts in the spread of infectious diseases by integrating an epidemiological dynamics with a kinetic modeling of population-based contacts. The kinetic description leads to study the evolution over time of Boltzmann-type equations describing the number densities of social contacts of susceptible, infected and recovered individuals, whose proportions are driven by a classical SIR-type compartmental model in epidemiology. Explicit calculations show that the spread of the disease is closely related to moments of the contact distribution. Furthermore, the kinetic model allows to clarify how a selective control can be assumed to achieve a minimal lockdown strategy by only reducing individuals undergoing a very large number of daily contacts. We conduct numerical simulations which confirm the ability of the model to describe different phenomena characteristic of the rapid spread of an epidemic. Motivated by the COVID-19 pandemic, a last part is dedicated to fit numerical solutions of the proposed model with infection data coming from different European countries.
The SARS-CoV-2 pandemic led in many countries to heavy lockdown measures assumed by the governments with the aim to control and limit its spreading. In this context, an essential role is played by the mathematical modeling of infectious diseases since it allows direct validation with real data, unlike other classical phenomenological approaches. This consequently permits evaluation of control and prevention strategies by comparing their cost with effectiveness and to give support to public health decisions (Ferguson 2006; Riley 2003). On this subject, most of the models present in literature make assumptions on transmission parameters (Brauer et al. 2019; Diekmann and Heesterbeek 2000) which are considered the only responsible of the spread of the infection. However, special attention was recently paid by the scientific community to the role and the estimate of the distribution of contacts between individuals as also a relevant cause of the potential pathogen transmission (cf. (Béraud 2015; Dolbeault and Turinici 2020; Fumanelli 2012; Mossong 2008) and the references therein).
In this direction, the results reported in (Béraud 2015) can be of great help when designing partial lockdown strategies. In fact, an optimal control of the pathogen transmission of the epidemic could be achieved through a direct limitation of the number of daily contacts among people. On this subject, the detailed analysis performed in (Béraud 2015) put into evidence that the number of social contacts in the population is in general well-fitted by a Gamma distribution, even if this distribution is not uniform with respect to age, sex and wealth. Gamma distributions belong to the wide class of generalized Gamma distributions (Lienhard and Meyer 1967; Stacy 1962), which have been recently connected to the statistical study of social phenomena (Kehoe 2012; Rehm 2010), and fruitfully described as steady states of kinetic models aiming to describe the formation of these profiles in consequence of repeated elementary interactions (Dimarco and Toscani 2019; Toscani 2020).
Starting from the above consideration and inspired by the recent development concerning kinetic models describing human behavior (Dimarco and Toscani 2019; Toscani 2020), in this paper we develop a mathematical framework to connect the distribution of social contacts with the spreading of a disease in a multiagent system. The result is obtained by integrating an epidemiological dynamics given by a classical compartmental model (Brauer et al. 2019; Diekmann and Heesterbeek 2000; Hethcote 2000) with a statistical part based on kinetic equations determining the formation of social contacts.
The study of the kinetic compartmental system allows to compare the results with that obtained by studying the spread of the epidemic by means of social networks, where contacts between individuals in a given population can be captured by assuming that nodes represent individuals and the edges represent the connections between them (Hernandez-Vargas et al. 2019). In connection with this subject, by following a similar approach it was possible to understand how diseases spread in highly heterogeneous social networks (Barthélemy et al. 2005). Also, it was recently shown that a strategic social network-based reduction of contact strongly enhances the effectiveness of social distancing measures while keeping risks lower (Block 2020), and that overdispersed diseases such as COVID-19 are very sensitive to social network size and clustering (Nielsen et al. 2021).
In this paper we concentrate on the classical SIR dynamics. However, we stress that the ideas here described are clearly not reduced to this model which can be intended as an example. Instead, the methodology here discussed can be extended to incorporate more realistic epidemiological dynamics, like for instance the classical endemic models discussed in (Brauer et al. 2019; Diekmann and Heesterbeek 2000; Hethcote 2000). In particular, the extension of the present approach to age-dependent compartmental models could be of great interest to produce realistic scenarios.
Other aspects, which certainly have a stronger impact on how a virus spreads, are related to the presence of asymptomatic individuals (Gaeta 2021) as well as to a time delay between contacts and outbreak of the disease (Cooke et al. 1999) which helps in the diffusion of the illness. These possible modeling improvements are the subject of future investigations and they will not be treated in this work. In fact, we stress that the principal scope of this work is to introduce a new class of models which is capable to incorporate information on the social heterogeneity of a population which we believe to be a crucial aspect in the spread of contagious diseases. Besides these simplifying assumptions, we will show that the basic features considered and detailed in the rest of the article are sufficient in many cases to construct a new class of models which well fits with the experimental data. Precise quantitative estimates are postponed to future investigations.
An easy way to understand epidemiology models is that, given a population composed of agents, they prescribe movements of individuals between different states based on some matching functions or laws of motion. According to the classical SIR models (Hethcote 2000), agents in the system are split into three classes, the susceptible, who can contract the disease, the infected, who have already contracted the disease and can transmit it, and the recovered who are healed, immune or isolated. Inspired by the model considered in (Dimarco and Toscani 2019) for describing a social attitude and making use of classical epidemiological dynamics, we present here a model composed by a system of three kinetic equations, each one describing the time evolution of the distribution of the number of contacts for the subpopulation belonging to a given epidemiological class. These three equations are further coupled by taking into account the movements of agents from one class to the other as a consequence of the infection, with an intensity proportional to the product of the average contact frequencies, rather than the product of population fractions.
The interactions which describe the social contacts of individuals are based on few rules and can be easily adapted to take into account the behavior of agents in the different classes in presence of the infectious disease. Our joint framework is consequently based on two models which can be considered classical in the respective fields. From the side of multi-agent systems in statistical mechanics, the linear kinetic model introduced in (Dimarco and Toscani 2019; Toscani 2020) has been shown to be flexible and able to describe, with suitable modifications, different problems in which human behavior plays an essential role, like the formation of social contacts. Once the statistical distribution of social contacts has been properly identified as equilibrium density of the underlying kinetic model, this information is used to close the hierarchy of equations describing the evolution of moments (Bobylev 1988; Dimarco et al. 2020; Cercignani 1988). In this way, we obtain a coupled system of equations, identifying a new epidemiological model which takes into account at best the statistical details about the contact distribution of a population. The model connects the measure of the heterogeneity of the population, i.e. the variance of the contact distribution, with the epidemic trajectory. This is in agreement with a well-know finding in the epidemiological literature, see e.g. (Anderson and May 1985; Barthélemy et al. 2005; Bonaccorsi et al. 2020; Diekmann et al. 1990; Flaxman et al. 2020; Novozhilov 2008; Van den Driessche and Watmough 2002). A recent research showing the influence of population heterogeneity on herd immunity to COVID-19 infection is due to Britton et al. (Britton et al. 2020).
Starting from the general macroscopic model, one can fruitfully obtain from it various sub-classes of SIR-type epidemiological models characterized by non-linear incidence rates, as for instance recently considered in (Korobeinikov and Maini 2005). It is also interesting to remark that the presence of non-linearity in the incidence rate function, and in particular, the concavity condition with respect to the number of infected has been considered in (Korobeinikov and Maini 2005) as a consequence of psychological effects. Namely, the authors observed that in the presence of a very large number of infected, the probability for an infected to transmit the virus may further decrease because the population tend to naturally reduce the number of contacts. The importance of reducing at best the social contacts to countering the advance of a pandemic is a well-known and studied phenomenon (Ferguson 2006). While in normal life activity, it is commonly assumed that a large part of agents behave in a similar way, in presence of an extraordinary situation like the one due to a pandemic, it is highly reasonable to conjecture that the social behavior of individuals is strictly affected by their personal feeling in terms of safeness. Thus, in this work, we focus on the assumption that it is the degree of diffusion of the disease that changes people's behavior in terms of social contacts, in view both of the personal perception and/or of the external government intervention. More generally, the model can be clearly extended to consider more realistic dependencies between an epidemic disease and the social contacts of individuals. However, this does not change the essential conclusions of our analysis, namely that there is a close interplay between the spread of the disease and the distribution of contacts, that the kinetic description is able to quantify. In particular, we stress the fact that we consider our approach as methodological, thus the encouraging results described in the rest of the article suggest that a similar analysis can be carried out, at the price of an increasing difficulty in computations, in more complex epidemiological models like the SIDARTHE model (Giordano 2020; Gatto 2020), to validate and improve the eventual partial lockdown strategies of the government and to suggest future measures.
The rest of the paper is organized as follows. Section 2 introduces the system of three SIR-type kinetic equations combining the dynamics of social contacts with the spread of infectious disease in a system of interacting individuals. Then, in Sect. 2.2 we show that through a suitable asymptotic procedure, the solution to the kinetic system tends towards the solution of a system of three SIR-type Fokker-Planck type equations with local equilibria of Gamma-type (Béraud 2015). Once the system of Fokker-Planck type equations has been derived, in Sect. 3, we close the system of kinetic equations around the Gamma-type equilibria to obtain a new epidemiological model in which the incidence rate depends on the number of social contacts between individuals. Last, in Sect. 4, we investigate at a numerical level the relationships between the solutions of the kinetic system of Boltzmann type, its Fokker-Planck asymptotics and the macroscopic model. These simulations confirm the ability of our approach to describe different phenomena characteristic of the trend of social contacts in situations compromised by the rapid spread of an epidemic and the consequences of various lockdown action in its evolution. A last part is dedicated to a fitting of the model with the experimental observations: first we estimate the parameters of the epidemic through the data at disposal and successively we use them in the macroscopic model showing that our approach is able to reproduce the pandemic trend.
A kinetic approach combining social contacts and epidemic dynamics
Our goal is to build a kinetic system which suitably describes the spreading of an infectious disease under the dependence of the contagiousness parameters on the individual number of social contacts of the agents. Accordingly to classical SIR models (Hethcote 2000), the entire population is divided into three classes: susceptible, infected and recovered individuals. As already claimed the ideas here developed can be extended to more complex compartmental epidemic models.
Aiming to understand social contacts effects on the dynamics, we will not consider in the sequel the role of other sources of possible heterogeneity in the disease parameters (such as the personal susceptibility to a given disease), which could be derived from the classical epidemiological models, suitably adjusted to account for new information (Diekmann et al. 1990; Novozhilov 2008; Van den Driessche and Watmough 2002). Consequently, agents in the system are considered indistinguishable (Pareschi and Toscani 2014). This means that the state of an individual in each class at any instant of time \(t\ge 0\) is completely characterized by the sole number of contacts \(x \ge 0\), measured in some unit.
While x is a natural positive number at the individual level, without loss of generality we will consider x in the rest of the paper to be a nonnegative real number, \(x\in {{\mathbb {R}}^+}\), at the population level. We denote then by \(f_S(x,t)\), \(f_I(x,t)\) and \(f_R(x,t)\), the distributions at time \(t > 0\) of the number of social contacts of the population of susceptible, infected and recovered individuals, respectively. The distribution of contacts of the whole population is then recovered as the sum of the three distributions
$$\begin{aligned} f(x,t)=f_S(x,t)+f_I(x,t)+f_R(x,t). \end{aligned}$$
We do not consider for simplicity of presentation disease related mortality as well as the presence of asymptomatic individuals which we aim to insert in future investigations. Therefore, we can fix the total distribution of social contacts to be a probability density for all times \(t \ge 0\)
$$\begin{aligned} \int _{{\mathbb {R}}_+} f(x,t)\,dx = 1. \end{aligned}$$
As a consequence, the quantities
$$\begin{aligned} J(t)=\int _{{\mathbb {R}}^+}f_J(x,t)\,dx,\quad J \in \{S,I,R\} \end{aligned}$$
denote the fractions, at time \(t \ge 0\), of susceptible, infected and recovered respectively. For a given constant \(\alpha >0\), and time \(t \ge 0\), we also denote with \(x_\alpha (t)\) the moment of the distribution of the number of contacts f(x, t) of order \(\alpha \)
$$\begin{aligned} x_\alpha (t) =\int _{{\mathbb {R}}^+}x^\alpha \, f(x,t)\,dx. \end{aligned}$$
In the same way, we denote with \(x_{J,\alpha }(t)\) the local moments of order \(\alpha \) for the distributions of the number of contacts in each class conveniently divided by the mass of the class
$$\begin{aligned} x_{J,\alpha }(t)= \frac{1}{{J(t)}}\int _{{\mathbb {R}}^+}x^\alpha f_J(x,t)\,dx, \quad J \in \{S,I,R\}. \end{aligned}$$
Unambiguously, we will indicate the mean and the local mean values, corresponding to \(\alpha =1\), by x(t) and, respectively, \(x_J(t)\), \(J \in \{ S,I,R\}\).
In what follows, we assume that the various classes in the model could act differently in the social process constituting the contact dynamics. The kinetic model then follows combining the epidemic process with the contact dynamics. This gives the system
$$\begin{aligned} \frac{\partial f_S(x,t)}{\partial t}= & {} -K_\epsilon (f_S,f_I)(x,t) + Q_S(f_S)(x,t) \nonumber \\ \frac{\partial f_I(x,t)}{\partial t}= & {} K_\epsilon (f_S,f_I)(x,t) - \gamma _\epsilon f_I(x,t) + Q_I(f_I)(x,t) \nonumber \\ \frac{\partial f_R(x,t)}{\partial t}= & {} \gamma _\epsilon f_I(x,t) + Q_R(f_R)(x,t) \end{aligned}$$
where \(\gamma _\epsilon \) is the constant recovery rate while the transmission of the infection is governed by the function \(K_\epsilon (f_S,f_I)\), the local incidence rate, expressed by
$$\begin{aligned} {K_\epsilon (f_S,f_I)(x, t) = f_S(x,t) \int _{{\mathbb {R}}^+} \kappa _\epsilon (x,y)f_I(y,t) \,dy.} \end{aligned}$$
In full generality, we will assume that both the recovery rate \(\gamma \) and the contact function \(\kappa \) depend on a small positive parameter \(\epsilon \ll 1\) which measures their intensity. In (4) the contact function \(\kappa _\epsilon (x,y)\) is a nonnegative function growing with respect to the number of contacts x and y of the populations of susceptible and infected, and such that \(\kappa _\epsilon (x, 0) = 0\). A leading example for \(\kappa _\epsilon (x,y)\) is obtained by choosing
$$\begin{aligned} \kappa _\epsilon (x,y) = \beta _\epsilon \, x^\alpha y^\alpha , \end{aligned}$$
where \(\alpha , \beta _\epsilon \) are positive constants, that is by taking the incidence rate dependent on the product of the number of contacts of susceptible and infected people. When \(\alpha =1\), for example, the incidence rate takes the simpler form
$$\begin{aligned} {K_\epsilon (f_S,f_I)(x,t) = \beta _\epsilon \, x f_S(x,t) x_I(t)\,I(t).} \end{aligned}$$
Let us observe that with the choices done, the spreading of the epidemic depends heavily on the function \(\kappa _\epsilon (\cdot ,\cdot )\) used to quantify the rate of possible contagion in terms of the number of social contacts between the classes of susceptible and infected.
In our combined epidemic contact model (3), the operators \(Q_J\), \(J\in \{S,I,R\}\) characterize the thermalization of the distribution of social contacts in the various classes. To that aim, observe that the evolution of the mass fractions J(t), \(J\in \{S,I,R\}\) obeys to a classical SIR model by choosing \(Q_S \equiv 0\) and \(\kappa _\epsilon (x,y) \equiv \beta >0\), thus considering the spreading of the disease independent of the intensity of social contacts.
The \(Q_J\), \(J\in \{S,I,R\}\) are integral operators that modify the distribution of contacts \(f_J(x,t)\), \(J\in \{S,I,R\}\) through repeated interactions among individuals (Dimarco and Toscani 2019; Toscani 2020). Their action on observable quantities \(\varphi (x)\) is given by
$$\begin{aligned} \int _{{\mathbb {R}}_+}\varphi (x)\,Q_J(f_J)(x,t)\,dx = \,\, \Big \langle \int _{{\mathbb {R}}_+}B(x) \bigl ( \varphi (x_J^*)-\varphi (x) \bigr ) f_J(x,t) \,dx \Big \rangle . \end{aligned}$$
where B(x) measures the interaction frequency, \(\langle \cdot \rangle \) denotes mathematical expectation with respect to a random quantity, and \(x_J^*\) denotes the updated value of the number x of social contacts of the J-th population as a result of an interaction. We discuss in the sequel the construction of the social contact model.
On the distribution of social contacts
The process of formation of the distribution of social contacts is obtained by taking into account the typical aspects of human behaviour, in particular the search, in absence of epidemics, of opportunities for socialization. In addition to that, social contacts are due to the common use of public transportations to reach schools, offices and, in general, places of work as well as to basic needs of interactions due to work duties. As shown in Béraud (2015), this leads individuals to stabilize on a characteristic number of daily contacts depending on the social habits of a country. This quantity is represented in the following by a suitable value \(\bar{x}_M\), which can be considered as the mean number of contacts relative to the population under investigation. This kind of dynamics and the relative distribution of average daily contacts observed in (Béraud 2015) is the one we aim to explain and reproduce in our model.
As a final result of our investigation, we look to a characterization of the distribution of social contacts in a multi-agent system, the so-called macroscopic behavior. This can be obtained starting from some universal assumption about the personal behavior of the single agents, i.e. from the microscopic behavior. Indeed, as in many other human activities, the daily amount of social contacts is the result of a repeated upgrading based on well-established rules. To this extent, it is enough to recall that the daily life of each person is based on a certain number of activities, and each activity carries a certain number of contacts. Moreover, for any given activity, the number of social contacts varies in consequence of the personal choice. The typical example is the use or not of public transportations to reach the place of work or the social attitudes which scales with the age. Clearly, independently of the personal choices or needs, the number of daily social contacts contains a certain amount of randomness, that has to be taken into account. Also, while it is very easy to reach a high number of social contacts attending crowded places for need or will, going below a given threshold is very difficult, since various contacts are forced by working or school activities, among others causes. This asymmetry between growth and decrease, as exhaustively discussed in (Dimarco and Toscani 2019; Gualandi and Toscani 2019), can be suitably modeled by resorting to a so-called value function (Kahneman and Tversky 1979) description of the elementary variation of the x variable measuring the average number of daily contacts. We will come back to the definition of the value function later in the section.
It is important to outline that, in presence of an epidemic, the characteristic mean number of daily contacts \({\bar{x}}_M\) reasonably changes in time, even in absence of an external lockdown intervention, in reason of the perception of danger linked to social contacts. Consequently, even if not always explicitly indicated, we will assume \({\bar{x}}_M= {\bar{x}}_M(t)\).
Furthermore, an important aspect of the formation of the number of social contacts is that their frequency is not uniform with respect to the values of x. Indeed, it is reasonable to assume that the frequency of interactions is inversely proportional to the number of contacts x. This relationship takes into account that is highly probable to have at least some contacts, and also the rare situation in which one reaches a very high values of contacts \(x\gg \bar{x}_M\). On this subject, we mention a related approach discussed in Furioli et al. (2020). The introduction of a variable kernel B(x) into the kinetic equation does not modify the shape of the equilibrium density as shown later, but it allows a better physical description of the phenomenon under study, including an exponential rate of relaxation to equilibrium for the underlying Fokker-Planck type equation derived from the kinetic equation that it we will introduced next in 2.2.
Following (Dimarco and Toscani 2019; Gualandi and Toscani 2019; Toscani 2020), we will now illustrate the mathematical formulation of the previously discussed behavior. In full generality, we assume that individuals in different classes can have a different mean number of contacts. Then, the microscopic updates of social contacts of individuals in the class \(J\in \{S,I,R\}\) will be taken of the form
$$\begin{aligned} x_J^* = x - \Phi ^\epsilon (x/{\bar{x}}_J) x + \eta _\epsilon x. \end{aligned}$$
In a single update (interaction), the number x of contacts can be modified for two reasons, expressed by two terms, both proportional to the value x. In the first one, the coefficient \(\Phi ^\epsilon (\cdot )\), which takes both positive and negative values, characterizes the typical and predictable variation of the social contacts of agents, namely the personal social behavior of agents. The second term takes into account a certain amount of unpredictability in the process. A frequent choice in this setting consists in assuming that the random variable \(\eta _\epsilon \) is of zero mean and bounded variance of order \(\epsilon >0\), expressed by \(\langle \eta _\epsilon \rangle =0\), \(\langle \eta _\epsilon ^2 \rangle = \epsilon \lambda \), with \(\lambda >0\). Furthermore, we assume that \(\eta _\epsilon \) has finite moments up to order three.
The function \(\Phi ^\epsilon \) plays the role of the value function in the prospect theory of Kahneman and Tversky (Kahneman and Tversky 1979, 2000), and contains the mathematical details of the expected human behavior in the phenomenon under consideration. In particular, the main hypothesis on which this function is built is that, in relationship with the mean value \({\bar{x}}_J\), \(J\in \{S,I,R\}\), it is considered normally easier to increase the value of x (individuals look for larger networks) than to decrease it (people maintain as much connections as possible). In terms of the variable \( s = x/{\bar{x}}_J\) we consider then as in (Dimarco and Toscani 2019) the class of value functions obeying to the above general rule given by
$$\begin{aligned} \Phi _\delta ^\epsilon (s) = \mu \frac{e^{\epsilon (s^\delta -1)/\delta }-1}{e^{\epsilon (s^\delta -1)/\delta }+1 } , \quad s \ge 0, \end{aligned}$$
where the value \(\mu \) denotes the maximal amount of variation of x that agents will be able to obtain in a single interaction, \(0 < \delta \le 1\) is a suitable constant characterizing the intensity of the individual behavior, while \(\epsilon >0\) is related to the intensity of the interaction. Hence, the choice \(\epsilon \ll 1\) corresponds to small variations of the mean difference \(\langle x_J^* -x\rangle \). Thus, if both effects, randomness and adaptation are scaled with this interaction intensity \(\epsilon \), it is possible to equilibrate their effects as we will show in Section 2.2 and obtain a stationary distribution of contacts. Note also that the value function \(\Phi _\delta ^\epsilon (s)\) is such that
$$\begin{aligned} -\mu \le \Phi _\delta ^\epsilon (s) \le \mu \end{aligned}$$
and clearly, the choice \(\mu <1\) implies that, in absence of randomness, the value of \(x_J^*\) remains positive if x is positive.
Once the elementary interaction (8) is given, for any choice of the value function, the study of the time-evolution of the distribution of the number x of social contacts follows by resorting to kinetic collision-like approaches (Cercignani 1988; Pareschi and Toscani 2014), that quantify at any given time the variation of the density of the contact variable x in terms of the interaction operators.
Thus, for a given density \(f_J(x,t)\), \(J\in \{S,I,R\}\), we can measure the action on the density of the interaction operators \(Q_J(f)(x,t)\) in equations (3) fruitfully written in weak form. This form corresponds to say that for all smooth functions \(\varphi (x)\) (the observable quantities) we have
$$\begin{aligned} \dfrac{d}{dt} \int _{{\mathbb {R}}_+}\varphi (x)f_J(x,t)\,dx = \Big \langle \int _{{\mathbb {R}}_+}B(x) \bigl ( \varphi (x_J^*)-\varphi (x) \bigr ) f_J(x,t) \,dx \Big \rangle . \end{aligned}$$
Here, the expectation \(\langle \cdot \rangle \) takes into account the presence of the random parameter \(\eta _\epsilon \) in the microscopic interaction (8) while the function B(x), as already discussed, measures the interaction frequency. The right-hand side of equation (10) quantifies the variation in density, at a given time \(t>0\), of individuals in the class \(J\in \{S,I,R\}\) that modify their value from x to \(x_J^* \) (loss term with negative sign) and agents that change their value from \(x_J^*\) to x (gain term with positive sign). In many situations, the interaction kernel B(x) can been assumed constant (Dimarco and Toscani 2019). This simplifying hypothesis is not always well justified from a modeling point of view and thus in this work, we consider instead a non constant collision kernel B(x) (see (Furioli et al. 2020) for a discussion on this aspect). Thus, following the approach in (Furioli et al. 2020; Toscani 2020), we express the mathematical form of the kernel B(x) by assuming that the frequency of changes in the number of social contacts depends on x itself through the following law
$$\begin{aligned} B(x) = x^{-b}, \end{aligned}$$
for some constant \(b >0\). This kernel assigns a low probability to interactions in which individuals have already a large number of contacts and assigns a high probability to interactions when the value of the variable x is small. The constant b can be suitably chosen by resorting to the following argument (Toscani 2020). For small values of the x variable, the rate of variation of the value function (9) is given by
$$\begin{aligned} \frac{d}{dx} \Phi _\delta ^\epsilon \left( \frac{x}{{\bar{x}}_J}\right) \approx {\mu \epsilon }\, {{\bar{x}}_J^{-\delta }}\, x^{\delta -1}. \end{aligned}$$
Hence, for small values of x, the mean individual rate predicted by the value function is proportional to \(x^{\delta -1}\). Then, the choice \(b = \delta \) would correspond to a collective rate of variation of the system independent of the parameter \(\delta \) which instead characterizes the individual rate of variation of the value function.
In the next section, we investigate the steady states of the interaction operators \(Q_J(f)(x,t), \ J\in \{S,I,R\}\) which permit to derive the macroscopic epidemic model containing the effects of social interactions among individuals described in Section 3.
Asymptotic scaling and steady states
Let us focus on the sole social contact dynamic and introduce a time scaling
$$\begin{aligned} \tau = \epsilon t, \qquad f_{J,\epsilon }(x,\tau ) = {f_J(x,t)}, \qquad J \in \{S,I,R\}. \end{aligned}$$
which, in the following, will permit to separate the scale of the epidemic from the time scale at which, by hypothesis, the social contacts acts. Then, as a result of the scaling, the distribution \(f_{J,\epsilon }\) is solution to
$$\begin{aligned} \dfrac{d}{d\tau } \int _{{\mathbb {R}}_+} \varphi (x) f_{J,\epsilon }(x,\tau )dx = \dfrac{1}{\epsilon } \Big \langle \int _{{\mathbb {R}}_+}B(x) \bigl ( \varphi (x_J^*)-\varphi (x) \bigr ) f_{J,\epsilon }(x,\tau ) \,dx \Big \rangle . \end{aligned}$$
We concentrate now on the analysis of the asymptotic states of the social contact dynamics in the case in which elementary interactions (8) produce extremely small modification of the number of social contacts. To that aim, note that, from the definition of \(\Phi _\delta ^\epsilon \) in (9) and the assumptions on the noise term \(\eta _\epsilon \) we have
$$\begin{aligned} \lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon }{ \Phi _\delta ^\epsilon \left( \frac{x}{\bar{x}_J}\right) } = \frac{\mu }{2\delta } \left[ \left( \frac{x}{{\bar{x}}_J} \right) ^\delta -1\right] , \quad \lim _{\epsilon \rightarrow 0} \frac{1}{\epsilon }\langle \eta _\epsilon ^2\rangle = \lambda . \end{aligned}$$
Consequently, the actions of both the value function and the random part of the elementary interaction in (8) survive in the limit \(\epsilon \rightarrow 0\). Observe that the limit procedure induced by (13) corresponds precisely to the situation of small interactions while, at the same time, the time scale of the dynamics is suitably scaled to see their effects. In kinetic theory, this is a well-known procedure with the name of grazing limit, we point the interested reader to Cordier et al. (2005); Furioli et al. (2017); Pareschi and Toscani (2014) for further details. Since if \(\epsilon \ll 1 \) the difference \(x^*_J - x\) is small and, assuming \(\varphi \in {\mathcal {C}}_0\), we can perform the following Taylor expansion
$$\begin{aligned} \varphi (x^*_J)-\varphi (x) = (x^*_J - x) \varphi '(x) + \dfrac{1}{2} (x^*_J-x)^2 \varphi ''(x) + \dfrac{1}{6}(x^*_J - x)^3\varphi '''({\hat{x}}_J), \end{aligned}$$
being \({\hat{x}}_J \in (\min \{x,x^*_J\},\max \{x,x^*_J\})\). Writing \(x^*_J-x = - \Phi ^\epsilon _\delta (x/{{\bar{x}}_J})x+x\eta _\epsilon \) from (8) and plugging the above expansion in the kinetic model (12) we have for \(J\in \{S,I,R\}\)
$$\begin{aligned} \begin{aligned}&\dfrac{d}{d\tau } \int _{{\mathbb {R}}_+}\varphi (x) f_{J,\epsilon }(x,\tau )dx = \\&\qquad \dfrac{1}{\epsilon } \left[ \int _{{\mathbb {R}}_+} -\Phi _\delta ^\epsilon (x/{\bar{x}_J})x^{1-\delta } \varphi '(x)f_{J,\epsilon }(x,\tau )dx + \dfrac{\lambda \epsilon }{2} \int _{\mathbb R_+} \varphi ''(x)x^{2-\delta }f_{J,\epsilon }(x,\tau )dx \right] \\&\qquad + R_\varphi (f_{J,\epsilon }), \end{aligned} \end{aligned}$$
where \(R_\varphi (f_{J,\epsilon })\) is the remainder
$$\begin{aligned} \begin{aligned} R_\varphi (f_{J,\epsilon })(x,\tau ) =&\dfrac{1}{2\epsilon }\int _{{\mathbb {R}}_+}\varphi ''(x) x^{-\delta }(\Phi _\delta ^\epsilon (x/{\bar{x}_J})x)^2f_{J,\epsilon }(x,t)dx \\&\dfrac{1}{6\epsilon } \left\langle \int _{{\mathbb {R}}_+} \varphi '''({\hat{x}}_J) x^{-\delta }(-\Phi ^\epsilon _\delta (x/{\bar{x}_J})x + x\eta _\epsilon )^3 f_{J,\epsilon }(x,t)dx \right\rangle . \end{aligned} \end{aligned}$$
Since, by assumption, \(\varphi \) and its derivatives are bounded in \({\mathbb {R}}_+\) and decreasing at infinity and since \(\eta _\epsilon \) has bounded moment of order three, namely \(\langle |\eta _\epsilon |^3\rangle <+\infty \), using the bound (13) we can easily argue that in the limit \(\epsilon \rightarrow 0^+\) we have
$$\begin{aligned} |R_\varphi (f_J)| \rightarrow 0. \end{aligned}$$
Hence, it can be shown that \(f_{J,\epsilon }\) converges, up to subsequences, to a distribution function \(f_J\) solution to
$$\begin{aligned} \dfrac{d}{d\tau } \int _{{\mathbb {R}}_+}\varphi (x)f_{J}(x,\tau )dx= & {} \int _{{\mathbb {R}}_+} \left\{ -\varphi '(x) \, \frac{\mu \,x^{1-\delta }}{2\delta }\left[ \left( \frac{x}{{\bar{x}}_J} \right) ^\delta -1\right] + \frac{\lambda }{2}\varphi ''(x)\,x^{2-\delta } \right\} \\&f_{J}(x,\tau )\,dx \end{aligned}$$
If we also impose at \(x=0\) the following no-flux boundary conditions
$$\begin{aligned} \frac{\partial }{\partial x} (x^{2-\delta } f_J(x,\tau ))\Big |_{x=0} = 0 \quad J\in \{S,I,R\}, \end{aligned}$$
the limit equation coincides with the Fokker-Planck type equation
$$\begin{aligned} \dfrac{\partial }{\partial \tau }f_J(x,\tau ) = Q^\delta _J(f_J)(x,\tau ), \end{aligned}$$
$$\begin{aligned} Q_J^\delta (f_J)(x,\tau )= & {} \frac{\mu }{2\delta }\frac{\partial }{\partial x}\left\{ \,x^{1-\delta }\left[ \left( \frac{x}{{\bar{x}}_J} \right) ^\delta -1\right] f_{J}(x,\tau )\right\} +\frac{\lambda }{2} \frac{\partial ^2}{\partial x^2} (x^{2-\delta } f_J(x,\tau )),\nonumber \\&J \in \{S,I,R\} \end{aligned}$$
is characterized by a variable diffusion coefficient.
Remarkably enough, we can compute explicitly the equilibrium distribution of the surrogate Fokker-Planck model. Indeed, assuming that the mass of the initial distribution is one and the mean values \({\bar{x}}_J\), \(J \in \{S,I,R\}\) are constant, and by setting \(\nu = \mu /\lambda \), the equilibria are given by the functions
$$\begin{aligned} f_J^\infty (x) = C_J({\bar{x}}_J,\delta ,\nu ) x^{\nu /\delta +\delta -2} \exp \left\{ - \frac{\nu }{\delta ^2} \left( \frac{x}{{\bar{x}}_J} \right) ^\delta \right\} ,\qquad J \in \{S,I,R\}, \end{aligned}$$
where \(C_J> 0\) is a normalization constant. We may rewrite the obtained steady state (16) as a generalized Gamma probability density \(f_\infty (x;\theta , \chi ,\delta )\) defined by Lienhard and Meyer (1967); Stacy (1962)
$$\begin{aligned} f_{J,\infty }(x;\theta , \chi ,\delta ) = \frac{\delta }{\theta ^\chi } \frac{1}{\Gamma \left( \chi /\delta \right) } x^{\chi -1} \exp \left\{ - \left( x/\theta \right) ^\delta \right\} , \end{aligned}$$
characterized in terms of the shape \(\chi >0\), the scale parameter \(\theta >0\), and the exponent \(\delta >0\) that in the present situation are given by
$$\begin{aligned} \chi = \frac{\nu }{\delta }+\delta -1, \quad \theta = {\bar{x}}_J \left( \frac{\delta ^2}{\nu }\right) ^{1/\delta }. \end{aligned}$$
It has to be remarked that the shape \(\chi \) is positive, only if the constant \(\nu = \mu /\lambda \) satisfies the bound
$$\begin{aligned} \nu >\delta (1-\delta ). \end{aligned}$$
Note that condition (19) holds, independently of \(\delta \), when \(\mu \ge \frac{\lambda }{4}\), namely when the variance of the random variation in (8) is small with respect to the maximal variation of the value function. Note moreover that for all values \(\delta >0\) the moments are expressed in terms of the parameters denoting respectively the mean \({\bar{x}}_J\), \(J \in \{S,I,R\}\), the variance \(\lambda \) of the random effects and the values \( \delta \) and \(\mu \) characterizing the value function \(\phi _\delta ^\epsilon \) defined in (9). Finally, the standard Gamma and Weibull distributions are included in (16), and are obtained by choosing \(\delta =1\) and, respectively \(\delta = \chi \). In both cases, the shape \(\chi =\nu \), and no conditions are required for its positivity. It is important to notice that the function (17) expressing the equilibrium distribution of the daily social contacts in a society is in agreement with the one observed in (Béraud 2015). This was one of the goals of our investigation.
The macroscopic social-SIR dynamics
Once we have obtained the characterization of the equilibrium distribution of the transition operators \(Q_J\), \(J \in \{S,I,R\}\), we are ready to study the complete system (3). To that aim, the scope of the rest of this section is the determination of the observable macroscopic equations of the introduced kinetic model.
Derivation of moment based systems
The assumption that the dynamics leading to the contact formation is much faster than the epidemic dynamics corresponds to consider \( \beta _\epsilon = \epsilon \beta \) in the formula below (4) and \(\gamma _\epsilon = \epsilon \gamma \) with \(\beta ,\gamma >0\), being \(\epsilon \ll 1\) the scaling parameter introduced in the previous section. After introducing the scaled distributions \({f_{J}(x,\tau )}\), noticing that \(\frac{\partial }{\partial \tau } f_J = \frac{1}{\epsilon }\frac{\partial }{\partial t }f_J\), we can rewrite system (3) as follows
$$\begin{aligned}&\frac{\partial f_S(x,\tau )}{\partial \tau } = -K(f_S,f_I)(x,\tau ) + \frac{1}{\epsilon }\, Q_S^\delta (f_S)(x,\tau ), \nonumber \\&\frac{\partial f_I(x,\tau )}{\partial \tau } = K(f_S,f_I)(x,\tau ) - \gamma f_I(x,\tau ) + \frac{1}{\epsilon }\, Q_I^\delta (f_I)(x,\tau ), \nonumber \\&\frac{\partial f_R(x,\tau )}{\partial \tau } = \gamma f_I(x,\tau ) + \frac{1}{\epsilon }\, Q_R^\delta (f_R)(x,\tau ). \end{aligned}$$
The system (20) is complemented with the boundary conditions (14) at \(x=0\) and it contains all the information on the spreading of the epidemic in terms of the distribution of social contacts. Indeed, the knowledge of the densities \(f_J(x,t)\), \(J\in \{S,I,R\}\), allows to evaluate by integrations all moments of interest. Due to the incidence rate \(K(f_S,f_I)\), as given in (4), the time evolution of the moments of the distribution functions is not explicitly computable, since the evolution of a moment of a certain order depends on the knowledge of higher order moments, thus producing a hierarchy of equations, like in classical kinetic theory of rarefied gases (Cercignani 1988).
Before discussing the closure, i.e. how to obtain a closed set of macroscopic equations, we highlight that is not restrictive to assume \(\delta = 1\) since the obtained Gamma densities depend on 2 shape parameters. This choice gives for \(J\in \{S,I,R\}\)
$$\begin{aligned} Q_J^1(f_J)(x,\tau )= \frac{\mu }{2}\frac{\partial }{\partial x}\left[ \,\left( \frac{x}{{\bar{x}}_J} -1\right) f_J(x,\tau )\right] +\frac{\lambda }{2} \frac{\partial ^2}{\partial x^2} (x f_J(x,\tau )). \end{aligned}$$
In this case (18) implies \(\chi = \nu \) and \(\theta = \bar{x}_J/\nu \), and the steady states of unit mass, for \(J\in \{S,I,R\}\), are the Gamma densities
$$\begin{aligned} f_J^\infty (x;\theta , \nu ) = \left( \frac{\nu }{{\bar{x}}_J}\right) ^\nu \frac{1}{\Gamma \left( \nu \right) } x^{\nu -1} \exp \left\{ -\frac{\nu }{{\bar{x}}_J}\, x\right\} . \end{aligned}$$
With this particular choice, the mean values and the energies of the densities (22), \(J\in \{S,I,R\}\), are given by
$$\begin{aligned} \int _{{\mathbb {R}}^+} x\, f_J^\infty (x;\theta , \nu ) \, dx = {\bar{x}}_J; \quad \int _{{\mathbb {R}}^+} x^2 \, f_J^\infty (x;\theta , \nu ) \, dx =\frac{\nu +1}{\nu }{\bar{x}}_J^2. \end{aligned}$$
Following the observations of Remark 1 we can now assume \({\bar{x}}_J= x_J(t)\) where this time dependent value can be different depending on the class to which agents belong. In order to obtain the time evolution of the macroscopic observable quantities like densities and local means from the kinetic model (20), we now consider the Fokker–Planck operator (21) This operator vanishes in correspondence to a time-dependent Gamma density equilibrium with mean \(x_J(t)\). With these notations, system (20) with \(\delta =1\) reads then
$$\begin{aligned}&\frac{\partial f_S(x,\tau )}{\partial \tau } = - \beta x \,f_S(x,\tau ) x_I(t) \,I(\tau ) + \frac{1}{\epsilon }\, Q_S^1(f_S)(x,\tau ), \nonumber \\&\frac{\partial f_I(x,\tau )}{\partial \tau } = \beta x\, f_S(x,\tau ) x_I(t) \,I(\tau ) - \gamma f_I(x,\tau ) + \frac{1}{\epsilon }\, Q_I^1(f_I)(x,\tau ), \nonumber \\&\frac{\partial f_R(x,t)}{\partial \tau } = \gamma f_I(x,\tau ) + \frac{1}{\epsilon }\, Q^1_R(f_R)(x,\tau ). \end{aligned}$$
Integrating both sides of equations in (24) with respect to x, and recalling that, in presence of boundary conditions of type (14) the Fokker-Planck type operators are mass and momentum preserving, we obtain the system for the evolution of the fractions J defined in (1), \(J\in \{S,I,R\}\)
$$\begin{aligned}&\frac{d S(t)}{d t} = -\beta \, x_S(t) x_I(t) S(t)I(t), \nonumber \\&\frac{d I(t)}{d t} = \beta \, x_S(t) x_I(t) S(t)I(t) - \gamma I(t), \nonumber \\&\frac{d R(t)}{d t} = \gamma I(t), \end{aligned}$$
where we have restored the macroscopic time variable \( t \ge 0\). As anticipated, unlike the classical SIR model, system (25) is not closed, since the evolution of the mass fractions J(t), \(J \in \{S,I,R\}\), depends on the evolution of the local mean values \(x_J(t)\). The closure of system (25) can be obtained by resorting, at least formally, to a limit procedure. In fact, as outlined in the Introduction, the typical time scale involved in the social contact dynamics is \(\epsilon \ll 1\) which identifies a faster adaptation of individuals to social contacts with respect to the evolution time of the epidemic disease. Consequently, the choice of the value \(\epsilon \ll 1\) pushes the distribution function \(f_J(x,t)\), \(J\in \{S,I,R\}\) towards the Gamma equilibrium density with a mass fraction S(t), respectively I(t) and R(t), and local mean value \(x_S(t)\), respectively \(x_I(t)\) and \(x_R(t)\), as it can be easily verified from the differential expression of the interaction operators \(Q_J^1\), \(J \in \{S,I,R\}\).
Indeed, if \(\epsilon \ll 1\) is sufficiently small, one can easily argue from the exponential convergence of the solution of the Fokker-Planck equation towards the equilibrium \(f_S^\infty (x;\theta ,\nu )\) (see (Toscani 2020) for details), that the solution \(f_S(x, t)\) remains sufficiently close to the Gamma density with mass S(t) and local mean density given by \(x_S(t)\) for all times. This equilibrium distribution \(f_S^\infty (x;\theta ,\nu )\) can then be plugged into the first equation of (24). Successively, by multiplying by x both sides of this equation (24) and integrating it with respect to the variable x, since the Fokker–Planck operator on the right-hand side is momentum-preserving, one obtains that the mean \(x_S(t)S(t)\) satisfies the differential equation
$$\begin{aligned} \frac{d }{d t} (x_S(t)S(t)) = - \beta \, x_{S,2}(t) x_I(t) S(t)I(t), \end{aligned}$$
which depends now on the second order moment. However, it is now possible to close this expression by using the energy of the local equilibrium distribution, which can be expressed in terms of the mean value as in (23) as follows
$$\begin{aligned} x_{S,2}(t) = \frac{\nu +1}{\nu } x_S^2(t). \end{aligned}$$
Therefore, we have
$$\begin{aligned} S(t) \frac{d x_S(t)}{ d t} = - \beta \, x_{S,2}(t) x_I(t) S(t)I(t) - x_S(t)\frac{dS(t)}{d t}, \end{aligned}$$
where the time evolution of the fraction S(t) can be recovered by the first equation of system (25). Hence, the evolution of the local mean value \(x_S(t)\) satisfies the equation
$$\begin{aligned} \frac{d x_S(t)}{d t} = - \frac{\beta }{\nu }x_{S}(t)^2 x_I(t) I(t). \end{aligned}$$
An analogous procedure can be done with the second equation in system (24), which leads to relaxation towards a Gamma density with mass fraction I(t) and local mean value given by \(x_I(t)\), and with the third equation in system (24). We easily obtain in this way the system that governs the evolution of the local mean values of the social contacts of the classes of susceptible, infected and recovered individuals
$$\begin{aligned}&\frac{d x_S(t)}{d t} = - \frac{\beta }{\nu }x_{S}(t)^2 x_I(t) I(t), \nonumber \\&\frac{d x_I(t)}{d t} = \beta x_{S}(t) x_I(t) \left( \frac{1+\nu }{\nu } x_S(t) - x_I(t) \right) S(t), \nonumber \\&\frac{d x_R(t)}{d t} = \gamma \frac{I(t)}{R(t)}\left( x_I(t) - x_R(t)\right) . \end{aligned}$$
The closure of the kinetic system (3) around a Gamma-type equilibrium of social contacts leads then to a system of six equations for the pairs of mass fractions J(t) and local mean values \(x_J(t)\), \(J\in \{S,I,R\}\). In the following, we refer to the coupled systems (25) and (27) as the social SIR model (S-SIR). With respect to the classical epidemiological model from which we took inspiration, the main novelty is represented by the presence of system (27), that describes the evolution of the social contacts. It is immediate to conclude from the first equation of (27) that the local mean number of contacts of the population of susceptible individuals decreases, thus showing that the social answer to the presence of the pandemic is to reduce the number of social contacts. A maybe unexpected behavior is present in the second equation of (27), which indicates that, at least in the initial part of the time evolution of the S-SIR model, the class of infected individuals increases the local mean number of social contacts. This effect must be read as a consequence of the more probable transition from susceptible to infected of individuals with a high number of social contacts. A similar conclusion has been derived by resorting to a network-based model (Barthélemy et al. 2005), where it was observed that the epidemic spreads hierarchically (from highly to less highly connected nodes).
It is interesting to remark that system (27) is explicitly dependent on the positive parameter \(\nu =\mu / \lambda \), which measures the heterogeneity of the population in terms of the variance of the statistical distribution of social contacts. More precisely, small values of the constant \(\nu \) correspond to high values of the variance, and thus to a larger heterogeneity of the individuals with respect to social contacts. This is an important point which is widely present and studied in the epidemiological literature (Anderson and May 1991; Diekmann et al. 1990; Diekmann and Heesterbeek 2000). Concerning the COVID-19 pandemic, the influence of population heterogeneity on herd immunity has been recently quantified in (Britton et al. 2020) by testing a SEIR model on different types of populations categorized by different ages and/or different activity levels, thus expressing different levels of heterogeneity.
A limiting case of system (27) is obtained by letting the parameter \(\nu \rightarrow +\infty \), which corresponds to push the variance to zero (absence of heterogeneity). In this case, if the whole population starts with a common number of daily contacts, say \({\bar{x}}\), it is immediate to show that the number of contacts remains fixed in time, thus reducing system (25) to a classical SIR model with contact rate \(\beta {\bar{x}}^2\). Hence this classical epidemiological model is contained in (25)–(27) and corresponds to consider the non realistic case of a population that, regardless of the presence of the epidemic, maintains the same fixed number of daily contacts. The described behaviors are exemplified in Fig. 1, where we considered \(S(0) = 0.98\), \(I(0) = R(0) = 0.01\) and mean number of contacts \(x_S(0) = x_I(0) = x_R(0) = 15\) for two choices \(\nu = 0.5\) and \(\nu = 1\). We can easily observe how the number of recovered is affected by contact heterogeneity: the smaller the heterogeneity is, the larger the population recovers from the pandemic, we point the interested reader to (Britton et al. 2020) for an in-depth discussion on this matter.
Evolution of system (25)–(27) for \(\nu = 0.5\) and \(\nu = 1\)
The derivation leading to systems (25) and (27) can be easily generalized to local incidence rates (4) with a contact function of the form \(\kappa (x,y) = \beta x^\alpha y^\alpha \) with \(\alpha \not = 1\). Also, the procedure can be applied to equilibria which are different from the Gamma density considered here, provided this density has enough moments bounded.
The approach just described can be easily adapted to other compartmental models in epidemiology like SEIR, MSEIR (Brauer et al. 2019; Diekmann and Heesterbeek 2000; Hethcote 2000) and/or SIDHARTE (Gatto 2020; Giordano 2020). For all these cited models, the fundamental aspects of the interaction between social contacts and the spread of the infectious disease, we expect not to change in a substantial way.
A social-SIR model with saturated incidence rate
The system (25)–(27) is a model for describing the time evolution of an epidemic in terms of the statistical distribution of social contacts, without taking into account any external intervention. However, protection measures such as lockdown strategies inevitably cause reduction in the average number of social contacts of the population which can be taken explicitly into account in our model. In the epidemiological literature, a natural way to introduce this mechanism, which dates from the work of Capasso and Serio (Capasso and Serio 1978), consists in considering a non linear incidence rate whose main feature is to be bounded with respect to the number of infected. Interestingly, on this subject, we aim to highlight that a similar behavior of the incidence rate can be directly derived starting from our social-SIR model (25)–(27). The additional hypothesis that is sufficient to introduce is that the average number \(\tilde{x}_I\) of social contacts of infected is frozen as an effect, for instance, of external interventions aimed in controlling the pandemic spread. If this is the case, one can explicitly solve the first equation of system (27), which now reads
$$\begin{aligned} \frac{d x_S(t)}{d t} = - \frac{\beta }{\nu }x^2_{S}(t) {\tilde{x}}_I I(t), \end{aligned}$$
due to the fact that \(x_I(t) = x_I(t=0)= {\tilde{x}}_I\). The exact solution of the equation (28) can then be computed and it gives
$$\begin{aligned} {x_S(t) = \frac{x_S(0)}{1 + \displaystyle \frac{\beta }{\nu }x_S(0) {\tilde{x}}_I \int _0^t I(s)\, ds}.} \end{aligned}$$
The above expression is a generalization of the so-called saturated incidence rate (Capasso and Serio 1978; Korobeinikov and Maini 2005) whose classical form is
$$\begin{aligned} g(I)=\frac{1}{1+\phi I} \end{aligned}$$
with \(\phi \) a suitable positive constant. The same expression can be found from (25) by plugging (29) into the system. This gives
$$\begin{aligned}&\frac{d S(t)}{d t} = -{\bar{\beta }} \, S(t)I(t) H(I(t),t), \nonumber \\&\frac{d I(t)}{d t} = {\bar{\beta }}\, H(I(t),t) S(t)I(t) - \gamma I(t), \nonumber \\&\frac{d R(t)}{d t} = \gamma I(t), \end{aligned}$$
where \({\bar{\beta }} =\beta x_S(0){\tilde{x}}_I\) and
$$\begin{aligned} {H(r(t),t) = \frac{1}{1 + \displaystyle \frac{{\bar{\beta }}}{\nu } \int _0^t r(s)\, ds}, \quad 0<r \le 1.} \end{aligned}$$
We refer in the following to this function H(r(t), t) to as the macroscopic incidence rate. Finally by approximating the integral \(\int _0^t r(s)\, ds\approx t r(t)\) one obtains
$$\begin{aligned} H(r(t),t) = \frac{1}{1 + \displaystyle \phi (t) r(t)}, \quad 0<r \le 1. \end{aligned}$$
with \(\phi (t)=({\bar{\beta }} t)/\nu \), i.e. the classical incidence rate described for the first time in (Capasso and Serio 1978).
It is possible to consider a higher influence of the number of contact in the transmission dynamics by taking \(\alpha >1\) in (5). Proceeding then as described in Sect. 3.1, since
$$\begin{aligned} \int _0^{+\infty } x^\alpha f_S^\infty (x)dx = c_\alpha x_S^\alpha , \qquad c_\alpha >0, \end{aligned}$$
in the specific case \(f^\infty _S\) a Gamma distribution, we would obtain
$$\begin{aligned} \dfrac{d}{dt}x_S(t) = -\dfrac{\beta }{\nu }x_S^{\alpha +1} \tilde{x}_I^\alpha I(t), \end{aligned}$$
with \(x_I^\alpha (t) = {\tilde{x}}_I^\alpha >0\) whose solution is
$$\begin{aligned} x_S(t) = \dfrac{x_S(0)}{\left( 1 + \dfrac{c_\alpha \beta \alpha x_S^\alpha (0)}{\nu } {\tilde{x}}_I^\alpha \displaystyle \int _0^t I(s)ds \right) ^{1/\alpha }}. \end{aligned}$$
Therefore we obtain the closed system for the evolution of mass fractions of the type (31) which incorporates the generalized macroscopic incidence function
$$\begin{aligned} H(r(t),t) = \dfrac{1}{\left( 1+\phi (t)r(t)\right) ^{1/\alpha }}, \qquad 0< r(t)\le 1, \end{aligned}$$
with \(\phi (t) = c_\alpha \alpha \beta x_S^\alpha (0)t/\nu >0\).
In the next Sect. 4 we will perform some numerical computations in which the social SIR model (31) is used in the case in which the incidence rate takes the form (32) as well as in the classical case (33) with \(\phi (t)=\phi \). In a last part, we will also show that a more accurate fit of the model with the experimental data is obtained using a generalized incidence rate of the form (34).
To conclude this part, let now derive the basic reproduction number of the model introduced and discussed in the previous part. To that aim, let us note that by defining
$$\begin{aligned} D(S(t),I(t)) = \int _{{\mathbb {R}}^+} K(f_S,f_I)(x, t) \, dx = {\bar{\beta }} H(I(t),t)S(t)I(t). \end{aligned}$$
we have that in both cases D(S, I) fulfills all the properties required by the class of non-linear incidence rates considered in Korobeinikov and Maini (2005). Indeed, \(D(S,0) = 0\), and the function D(S, I) satisfies
$$\begin{aligned} \frac{\partial D(S,I)}{\partial I}>0, \quad \frac{\partial D(S,I)}{\partial S} >0 \end{aligned}$$
for all \(S,I >0\). Moreover D(S, I) is concave with respect to the variable I, i.e.
$$\begin{aligned} \frac{\partial ^2D(S,I)}{\partial I^2} \le 0, \end{aligned}$$
for all \(S,I >0\).
Furthermore, in this case we may define classically the basic reproduction number \(R_0\) of the model which is given by
$$\begin{aligned} R_0= \frac{1}{\gamma }\, \lim _{I\rightarrow 0, S\rightarrow 1} \frac{\partial D(S,I)}{\partial I} = \frac{{\bar{\beta }}}{\gamma }= \dfrac{\beta x_S(0){\tilde{x}}_I}{\gamma } \end{aligned}$$
Test 1. Large time distribution of the Boltzmann dynamics compared with the equilibrium state of the corresponding Fokker-Planck equation. The initial distribution has been chosen of the form in (39)
Numerical experiments
Test 1. Top: distribution of the daily social contacts for the two choices of the function H. Middle: SIR dynamics corresponding to the different choices of the different mean number of daily contact (left constant case, right as a function of the epidemic). Bottom left: final distribution of the number of contacts. Bottom right: time evolution of the contact function H
In addition to analytic expressions, numerical experiments allow us to visualize and quantify the effects of social contacts on the SIR dynamics used to describe the time evolution of the epidemic. More precisely, starting from a given equilibrium distribution detailing in a probability setting the daily number of contacts of the population, we show how the coupling between social behaviors and number of infected people may modify the epidemic by slowing down the number of encounters leading to infection. In a second part, we discuss how some external forcing, mimicking political choices and acting on restrictions on the mobility, may additionally improve the reduction of the epidemic trend avoiding concentration in time of people affected by the virus and consequently decreasing the probability of hospitalization peaks. In a third part, we focus on experimental data for the specific case of COVID-19 in different European countries and we extrapolate from them the main features characterizing the incidence rate H(I(t), t).
Test 1: On the effects of the social contacts on the epidemic dynamics
We solve the social-SIR kinetic system (24). The starting point is represented by a population composed of \(99.99\%\) of susceptible and \(0.01\%\) of infected. The distribution of the number of contact is described by (16) with \(\nu =1.65\), \(\delta =1\) and \(x_J=10.25\) in agreement with (Béraud 2015) while the epidemic parameters are \(\beta =0.25/x_J^2\) and \(\gamma =0.1\). The kinetic model (20) is solved by a splitting strategy together with a Monte Carlo approach (Pareschi and Russo 2001) where the number of samples used to described the population is fixed to \(M=10^6\). The time step is fixed to \(\Delta t=10^{-2}\) and the scaling parameter is \(\epsilon =10^{-2}\). These choices are enough to observe the convergence of the Boltzmann dynamics to the Fokker-Planck one as shown in Fig. 2 where the analytical equilibrium distribution is plotted together with the results of the Boltzmann dynamics. For this problem, we considered a uniform initial distribution
$$\begin{aligned} f_0(x) = \dfrac{1}{c} \chi (x \in [0,c]), \quad c=20, \end{aligned}$$
being \(\chi (\cdot )\) the indicator function. In the introduced setting, we then compare two distinct cases: in the first one we suppose that nonlinear effects in the contacts dynamics do not modify the contact rate, meaning \(H(I(t),t)=1\), while the second includes the effects of the function H(I(t), t) given in (33) with \(\phi (t)=\phi =10\), i.e. the classical saturated incidence rate case (Capasso and Serio 1978). The results are depicted in Fig. 3. The top right images show the time evolution of the distribution of the number of contacts for the two distinct cases, while the middle images report the corresponding evolution of the epidemic. For this second case, the function H(I(t), t) as well as the distribution of contacts for respectively the susceptible and the recovered are shown at the bottom of the same figure. We clearly observe a reduction of the peak of infected in the case in which the dynamics depends on the number of contacts with H(I(t), t) given by (33) and \(\phi \) constant. We also observe a spread of the number of infected over time when sociality reduction is taken into account.
Test 2: Forcing a change in the social attitudes
Next, we compare the effects on the spread of the disease when the population adapts its habits with a time delay with respect to the onset of the epidemic. This kind of dynamics corresponds to a modeling of a possible lockdown strategy whose effects are to reduce the mobility of the population and, correspondingly, to reduce the number of daily contacts in the population.
The setting is similar to the one introduced in Sect. 4.1 where we first consider a switch between \(H=1\) to \(H(I(t))=1/(1+\phi I(t))\), with \(\phi =20\), when the number of infected increases. The social parameters are \(\nu =1.65\), \(\delta =1\) and \(x_J=10.25\), as before, while the epidemic parameters are \(\beta =0.25/x_J^2\) and \(\gamma =0.1\), the final time is fixed to \(T=70\). The initial distribution of contacts is also assumed to be of the form (39).
We consider three different settings, in the first one \(H=1\) up to \(t<35\) (days), in the second one up to \(t<17\) (days) while in the third case we prescribe a lockdown for a limited amount of time (\(5<t<15\)) and then we relax back to \(H=1\). The results are shown in Fig. 4 for both the distribution of daily contacts over time and the SIR evolution. We can consequently identify three scenarios. In the first case, on the top, we observe a slight reduction of the speed of the infection after \(t>T/2\). The second case, middle images, causes a clear change to the epidemic dynamic, an inversion around \(t=20\) happens. Finally, for the third case we first observe inversion and then the resurgence of the number of infected when the lockdown measures are relaxed. We consider now an alternative scenario whose results are depicted in Fig. 5. In this situation, we compare the case
$$\begin{aligned} H_1(I(t),t)=\dfrac{1}{1+\phi I(t)} \end{aligned}$$
with the case
$$\begin{aligned} H_2(I(t),t) = \dfrac{1}{1 + \phi \int _0^t I(s)\, ds}. \end{aligned}$$
The value of \(\phi \) is increased to \(\phi =50\) to enhance the different behaviors of the two models in two of the three possible lockdown scenarios described previously. In the case of the early lockdown (lockdown after 17 days), the difference between the two models is small. In particular, the case of the incidence rate depending on the instantaneous number of infected gives, as expected, a slightly larger number of total infected in time. The case of early lockdown followed by a relaxation exhibits much stronger differences. In this latter, a time shift between the second wave is clearly present while the incidence rate depending on the history of the pandemic gives a higher pick of infected. For this problem, the simulations are run for \(T=100\) days.
Test 2. Comparisons of different lockdown behaviors. Top: late lockdown. Middle: early lockdown. Bottom: early lockdown and successive relaxation
Test 2. Comparisons of different lockdown behaviors and different form of the incidence rate: \(H_1\), \(H_2\) defined in (40)–(41). Left: early lockdown. Right: early lockdown and successive relaxation
Test 3: Extrapolation of the incidence rate shape from data
In this part, we consider experimental data about the dynamics of COVID-19 in three European countries: France, Italy and Spain. For these three countries, the evolution of the disease, in terms of reported cases, evolved in rather different ways. The estimation of epidemiological parameters for compartmental-like models is an inverse problem of, in general, difficult solution for which different approaches can be considered. We mention in this direction a very recent comparison study (Liu et al. 2020). It is also worth to mention that often the data are partial and heterogeneous with respect to their assimilation, see for instance the discussions in Albi et al. (2021); Capaldi (2012); Chowell (2017); Roberts (2013). This makes the fitting problem challenging and the results naturally affected by uncertainty.
The data we employ, concerning the actual number of infected, recovered and deaths of COVID-19 are publicly available from the John Hopkins University GitHub repository (Dong et al. 2020). For the specific case of Italy, we considered instead the GitHub repository of the Italian Civil Protection DepartmentFootnote 1. In the following, we present the adopted fitting approach which is based on a strategy with two optimization horizons (pre-lockdown and lockdown time spans) depending on the different strategies enacted by the governments of the considered European countries.
In details, we considered first the time interval \(t \in [t_0,t_\ell ]\), being \(t_\ell \) the day in which lockdown started in each country (Spain, Italy and France) and \(t_0\) the day in which the reported cases hit 200 units. The lower bound \(t_0\) has been imposed to reduce the effects of fluctuations caused by the way in which data are measured which have a stronger impact when the number of infected is low. Once the time span has been fixed, we then considered a least square problem based on the minimization of a cost functional \({\mathcal {J}}\) which takes into account the relative \(L^2\) norm of the difference between the reported number of infected and the reported total cases \({\hat{I}}(t)\), \({\hat{I}}(t)+{\hat{R}}(t)\) and the evolution of I(t) and \(I(t)+R(t)\) prescribed by system (31) with \(H \equiv 1\). In practice, we solved the following constrained optimisation problem
$$\begin{aligned} \text {min}_{\beta ,\gamma } {\mathcal {J}}({\hat{I}}, {\hat{R}}, I, R) \end{aligned}$$
where the cost functional \({\mathcal {J}}\) is a convex combination of the mentioned norms and assumes the following form
$$\begin{aligned} J({\hat{I}}, {\hat{R}}, I, R) = p \frac{ \Vert {\hat{I}}(t)-I(t)\Vert _2}{\Vert {\hat{I}}(t) \Vert _2}+ (1-p) \frac{ \Vert {\hat{I}}(t)+{\hat{R}}(t)-I(t)-R(t) \Vert _2}{\Vert {\hat{I}}(t) + {\hat{R}}(t) \Vert _2} \end{aligned}$$
We then choose \(p = 0.1\) and we look for a minimum under the constraints
$$\begin{aligned} \begin{aligned} 0\le \beta \le 0.6, \qquad 0.04 \le \gamma \le 0.06. \end{aligned} \end{aligned}$$
In Table 1 we report the results of the performed parameter estimation together with the resulting reproduction number \(R_0\) defined in (38).
Table 1 Test 3. Model fitting parameters in estimating the reproduction number for the COVID-19 outbreak before lockdown in various European countries
Once that the contagion parameters have been estimated in the pre-lockdown time span, we successively proceeded with the estimation of the shape of the function H from the data. To estimate this latter quantity, we solved a second optimization problem which reads
$$\begin{aligned} \text {min}_H {\mathcal {J}} \end{aligned}$$
in terms of H where \({\mathcal {J}}\) is the same functional of the previous step and where in the evolution of the macroscopic model the values \(\beta ,\gamma \) have been fixed as a result of the first optimization in the pre-lockdown period. The parameters chosen for (43) are \(p = 0.1\) while the constraint is
$$\begin{aligned} \begin{aligned} 0\le H \le 1. \end{aligned} \end{aligned}$$
The second optimisation problem has been solved up to last available data for each country with daily time stepping \(\Delta t = 1\) and over a time window of three days. This has been done with the scope of regularizing possible errors due to late reported infected and smoothing the shape of H. Both optimisation problems (42)–(43) have been tackled using the Matlab functions fmincon in combination with a RK4 integration method of the system of ODEs. In Fig. 6, we present the result of such fitting procedure between the model (31) and the experimental data.
Next, we seek to understand numerically the dependencies of the function H from the number of infected. In particular, we first define the candidate incidence functions \(H_1\), \(H_2\) and \(H_3\) as
$$\begin{aligned} \begin{aligned} H_1(I(t),t)&= \dfrac{c}{1 + \displaystyle \phi I(t)}, \\ H_2(I(t),t)&= \dfrac{c}{1 + \phi \int _0^t I(s)\, ds},\\ H_3(I(t),t)&= \dfrac{c}{\left( 1 + \phi \int _0^t I(s)\, ds\right) ^{1/\alpha }}, \end{aligned} \end{aligned}$$
with \(c>0\), accordingly with (32)–(33) and (34) where \(\phi \) and \(\alpha \) are free parameters which are determined through a least square minimization approach that best fit the estimated curve with conditions \(\phi >0, \alpha \ge 1\). The results of this procedure are presented in Table 2 and in Fig. 7. In Table 2, we reported the values of the fitting coefficients \(\phi \) and \(\alpha \) and to evaluate the goodness of fit we reported the so-called determination coefficient \(R^2\), where \(R^2\approx 1\) indicates perfect fit. We can observe that the optimization gives acceptable results for the different forms of the incidence function especially in the right column of Fig. 7, it appears clearly that the functions \(H_2\) and \(H_3\) are able to better explain the estimated values of H especially after the epidemic peak. In particular, the fits of the model with the available data when \(H_3\) is used are particular good. This fact may indicate that people are rather fast to apply social distancing, and therefore to reduce their average number of contacts, whereas they tend to restore the pre-pandemic average contact rate more slowly, possibly due to further memory effects.
Test 3. Fitting of the parameters of model (31) where \(\beta ,\gamma >0\) were estimated before the lockdown measures assuming \(H \equiv 1\). The parameters characterizing the function \(H(\cdot )\) in (31) have been computed during and after lockdown at regular interval of time, up to July 15. The lockdown measures change in each country (dashed line)
Table 2 Fitting parameters for the estimate of the contact functions \(H_1\), \(H_2\), and \(H_3\) in different countries based on the evolution of H(t) solution of the optimisation problem (43). The corresponding determination coefficient \(R^2\) is also reported
Test 3. Estimated shape of the function H in several European countries (left plots) and its dependency on the variables I(t) and \(\int _{0}^{t}I (s) ds \) (right plots)
Test 4: S-SIR model with fitted contact function
In this last part, we discuss the results of the social-SIR model when the contact function has the shape extrapolated in the previous paragraph. In particular, we aim in studying the role of the extrapolated incidence function H in the fitting of the model with the experimental data. Our choice consists in considering H dependent on the total number of infected, where however, to leave freedom to the model to produce qualitative trends that are in agreement with the data, we leave three free parameters. In details the incidence function takes the form
$$\begin{aligned} H(t,I(t))=\frac{c}{\left( 1+\phi \int _{0}^{t}I(s)ds\right) ^{1/\alpha }}, \end{aligned}$$
where the starting point are the parameters of Table 2 with slight modification with the scope of trying to reproduce the best fit possible with the data at disposal. In order to compare qualitatively the observed curve of infected and the theoretical one, we consider the following setting for the three countries under study: \(\nu =1.65, \delta =1\) and \(x_J=10.25\), \(\Delta t=0.01\), \(\epsilon =0.01\), and \(M=10^5\) particles for the DSMC numerical approximation of the kinetic model. Moreover, we suppose \(S(t=0)\) and \(I(t=0)\) to match the relative number of susceptible and infected of each country at the time in which we start our comparison.
In Fig. 8 we show the profiles of the infected over time together with the shape of the function H again over time. The results show that with the choices done for the incidence rate function, it is possible to reproduce, at least qualitatively, the shape of the trend of infected during the pandemic observed in Italy and in France.
Test 4. Left: Number of infected over time for the S-SIR model when memory effects are taken into account in the contact function. Top left France, Top right Italy, Bottom left Spain, Bottom right effective value of the incidence function
It is worth to remark that the considered social parameters have been estimated only in the case of France, see (Béraud 2015), and we assumed that the initial contact distribution is the same for the Italian case.
We now focus on the case of Spain. For this country, according to Fig. 6, the trend of infected undergoes a deceleration during the lockdown period. This can be also clearly observed in Fig. 7 where the extrapolated shape of the contact function H is shown. Let also observe that while the global behavior of this function is captured by the fitting procedure, we however lose the minimum which takes place around end of April. This minimum is responsible of the deceleration in the number of infected and can be brought back to a strong external intervention in the lifestyle of the Spain country with the scope of reducing the hospitalizations. This effect can be reproduced by our model by imposing the same behavior in the function H. The Fig. 8 reports on the bottom right the shape of the function H over time for in particular this last case. The results show that the S-SIR model is capable to qualitatively reproduce the data.
The development of strategies for mitigating the spreading of a pandemic is an important public health priority. The recent case of COVID-19 pandemic has seen as main strategy restrictive measures on the social contacts of the population, obtained by household quarantine, school or workplace closure, restrictions on travels, and, ultimately, a total lockdown. Mathematical models represent powerful tools for a better understanding of this complex landscape of intervention strategies and for a precise quantification of the relationships between potential costs and benefits of different options (Ferguson 2006). In this direction, we introduced a system of kinetic equations coupling the distribution of social contacts with the spreading of a pandemic driven by the rules of the SIR model, aiming to explicitly quantify the mitigation of the pandemic in terms of the reduction of the number of social contacts of individuals. The kinetic modeling of the statistical distribution of social contacts has been developed according to the recent results in (Béraud 2015), which present an exhaustive description of the contact dynamic in the France population, divided by categories. The characterization of the equilibrium distribution of social contacts in the form of a Gamma density allowed to obtain a new macroscopic system (25)–(27) of six differential equations giving the joint evolution of mass fractions and local mean values of daily contacts for the different classes of individuals: susceptible, infected and recovered. It is worth to notice that the obtained system (27), driving the evolution of the local mean values of social contacts, was found explicitly dependent from a parameter which can be directly linked to the variance of the equilibrium Gamma distribution. This permitted to naturally include in the set of forecasting equations a measurable effect of the heterogeneity of the social contacts. In this direction, with respect to a direct choice of a nonlinear incidence rate in the classical SIR model, as first considered in Capasso and Serio (1978), the system (25)–(27) allows for an explanation of the relation between the contacts among individuals and the spread of an epidemic. Moreover, the new presented model gives a better description of the effects of the contacts reduction policies in the spread of a virus in a population. The numerical experiments confirm that the kinetic system is able to capture most of the macroscopic phenomena related to the effects of partial lockdown strategies, and, eventually to maintain pandemic under control.
Presidenza del Consiglio dei Ministri, https://github.com/pcmdpc/COVID-19
Albi G, Pareschi L, Zanella M (2021) Control with uncertain data of socially structured compartmental models. J. Math. Biol. 82:63
Anderson RM, May RM (1985) Vaccination and herd immunity to infectious diseases. Nature 318:323–329
Anderson RM, May RM (1991) Infectious Diseases of Humans: Dynamics and Control. Oxford Univ. Press, Oxford, UK
Barthélemy B, Barrat A, Pastor-Satorras R, Vespignani A (2005) Dynamical patterns of epidemic outbreaks in complex heterogeneous networks. J Theor Biol 235:275–288
Béraud G et al (2015) The French Connection: the first large population-based contact survey in france relevant for the spread of infectious diseases. PLoS ONE 10(7):e0133203
Block P et al (2020) Social network-based distancing strategies to flatten the COVID-19 curve in a post-lockdown world. Nat Human Behav 4:588–596
Bobylev A (1988) The theory of the nonlinear, spatially uniform Boltzmann equation for Maxwellian molecules. Sov Sco Rev C Math Phys 7:111–233
MATH Google Scholar
Bonaccorsi G et al (2020) Economic and social consequences of human mobility restrictions under COVID-19. PNAS 117(27):15530–15535
Brauer F, Castillo-Chavez C, Feng Z (2019) Mathematical Models in Epidemiology. With a foreword by Simon Levin. Texts in Applied Mathematics, 69. Springer, New York
Britton T, Ball F, Trapman P (2020) A mathematical model reveals the influence of population heterogeneity on herd immunity to SARS-CoV-2. Science 369(6505):846–849
Capaldi A et al (2012) Parameter estimation and uncertainty quantification for an epidemic model. Math Biosci Eng 9(3):553–576
Capasso V, Serio G (1978) A generalization of the Kermack-McKendrick deterministic epidemic model. Math Biosci 42:43–61
Cercignani C (1988) The Boltzmann Equation and its Applications, Springer Series in Applied Mathematical Sciences, vol. 67. Springer-Verlag, New York, NY
Chowell G (2017) Fitting dynamic models to epidemic outbreaks with quantified uncertainty: a primer for parameter uncertainty, identifiability, and forecast. Infect Dis Model 2(3):379–398
Cooke K, Van Den Driessche P, Zou X (1999) Interaction of maturation delay and nonlinear birth in population and epidemic models. J Math Biol 39:332–352
Cordier S, Pareschi L, Toscani G (2005) On a kinetic model for a simple market economy. J Stat Phys 120:253–277
Diekmann O, Heesterbeek JAP, Metz JAJ (1990) On the definition and the computation of the basic reproduction ratio \(R_0\) in models for infectious diseases in heterogeneous populations. J Math Biol 28(4):365–382
Diekmann O, Heesterbeek JAP (2000) Mathematical epidemiology of infectious diseases: model building, analysis and interpretation. Wiley, Chichester, UK
Dimarco G, Toscani G (2019) Kinetic modeling of alcohol consumption. J Stat Phys 177:1022–1042
Dimarco G, Pareschi L, Toscani G, Zanella M (2020) Wealth distribution under the spread of infectious diseases. Phys Rev E 102:022303
Dolbeault J, Turinici G (2020) Heterogeneous social interactions and the COVID-19 lockdown outcome in a multi-group SEIR model. Math Model Nat Pheno 15(36):1–18
Dong E, Du H, Gardner L (2020) An interactive web-based dashboard to track COVID-19 in real time. The Lancet Infectious Diseases . https://plague.com
Ferguson NM et al (2006) Strategies for mitigating an influenza pandemic. Nature 442:448–452
Flaxman et al (2020) Estimating the number of infections and the impact of non-pharmaceutical interventions on COVID-19 in 11 European countries, Report 13. Imperial College COVID-19 Response Team
Fumanelli L et al. (2012) Inferring the Structure of Social Contacts from Demographic Data in the Analysis of Infectious Diseases Spread. Salathé M., editor. PLoS Comput Biol 8: e1002673
Furioli G, Pulvirenti A, Terraneo E, Toscani G (2017) Fokker-Planck equations in the modelling of socio-economic phenomena. Math Mod Meth Appl Sci 27(1):115–158
Furioli G, Pulvirenti A, Terraneo E, Toscani G (2020) Non-Maxwellian kinetic equations modeling the evolution of wealth distribution. Math Mod Meth Appl Sci 30(4):685–725
Gabetta E, Pareschi L, Toscani G (1997) Relaxation schemes for nonlinear kinetic equations. SIAM J Num Anal 34:2168–2194
Gaeta G (2021) A simple SIR model with a large set of asymptomatic infectives. Math Eng 3:1–39
Gatto M et al (2020) Spread and dynamics of the COVID-19 epidemic in Italy: Effects of emergency containment measures. PNAS 117(19):10484–10491
Giordano G et al (2020) Modelling the COVID-19 epidemic and implementation of population-wide interventions in Italy. Nat Med 26:855–860
Gualandi S, Toscani G (2019) Human behavior and lognormal distribution. A kinetic description. Math Mod Meth Appl Sci 29(4):717–753
Hernandez-Vargas EA, Alanis AY, Tetteh J (2019) A new view of multiscale stochastic impulsive systems for modeling and control of epidemics. Annu. Rev. Control 48:242–249
Hethcote HW (2000) The mathematics of infectious diseases. SIAM Rev. 42(4):599–653
Kahneman D, Tversky A (1979) Prospect theory: an analysis of decision under risk. Econometrica 47(2):263–292
Kahneman D, Tversky A (2000) Choices, Values, and Frames. Cambridge University Press, Cambridge, UK
Kehoe T et al (2012) Determining the best population-level alcohol consumption model and its impact on estimates of alcohol-attributable harms. Popul Health Metrics 10(6):1–19
Korobeinikov A, Maini PK (2005) Non-linear incidence and stability of infectious disease models. Math Med Biol 22:113–128
Lienhard JH, Meyer PL (1967) A physical basis for the generalized Gamma distribution. Q Appl Math 25(3):330–334
Liu Y, Gayle AA, Wilder-Smith A, Rocklöv J (2020) The reproductive number of COVID-19 is higher compared to SARS coronavirus. J Travel Med 27(2):1–4
Mossong J et al (2008) Social contacts and mixing pat- terns relevant to the spread of infectious diseases. PLoS Med 5:e74
Nielsen BF, Simonsen L, Sneppen K (2021) COVID-19 Superspreading suggests mitigation by social network modulation. Phys Rev Lett 126:118301
Novozhilov AS (2008) On the spread of epidemics in a closed heterogeneous population. Math Biosci 215:177–185
Pareschi L, Russo G (2001) Time Relaxed Monte Carlo Methods for the Boltzmann Equation. SIAM J Sci Comput 23:1253–1273
Pareschi L, Toscani G (2014) Interacting multiagent systems: kinetic equations and Monte Carlo methods. Oxford University Press, Oxford
Presidenza del Consiglio dei Ministri, Dipartimento della Protezione Civile. GitHub: COVID-19 Italia - Monitoraggio Situazione, https://github.com/pcmdpc/COVID-19
Rehm J et al (2010) Statistical modeling of volume of alcohol exposure for epidemiological studies of population health: the US example. Popul Health Metrics 8(3):1–12
Riley S et al (2003) Transmission dynamics of the etiological agent of SARS in Hong Kong: Impact of public health interventions. Science 300:1961–1966
Roberts MG (2013) Epidemic models with uncertainty in the reproduction. J Math Biol 66:1463–1474
Stacy EW (1962) A generalization of the Gamma distribution. Ann Math Statist 33:1187–1192
Toscani G (2020) Statistical description of human addiction phenomena. In: Nota A, Albi G, Merino-Aceituno S, Zanella M (eds) Trails in Kinetic Theory: foundational aspects and numerical methods. Springer, Berlin
Toscani G (2020) Entropy-type inequalities for generalized Gamma densities. Ric Mat (in press)
Van den Driessche P, Watmough J (2002) Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission. Math Biosci 180:29–48
This work has been written within the activities of GNFM group of INdAM (National Institute of High Mathematics), and partially supported by MIUR project "Optimal mass transportation, geometrical and functional inequalities with applications". The research was partially supported by the Italian Ministry of Education, University and Research (MIUR): Dipartimenti di Eccellenza Program (2018–2022) - Dept. of Mathematics "F.Casorati", University of Pavia. G.D. would like to thank the Italian Ministry of Instruction, University and Research (MIUR) to support this research with funds coming from PRIN Project 2017 (No. 2017KKJP4X entitled "Innovative numerical methods for evolutionary partial differential equations and applications"). B.P. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 740623).
Open access funding provided by Universitá degli Studi di Pavia within the CRUI-CARE Agreement.
Mathematics and Computer Science Department, University of Ferrara, Ferrara, Italy
G. Dimarco
Sorbonne Université, CNRS, Université de Paris, Inria Laboratoire Jacques-Louis Lions, 75005, Paris, France
B. Perthame
Mathematics Department, University of Pavia, Pavia, Italy
G. Toscani & M. Zanella
G. Toscani
M. Zanella
Correspondence to M. Zanella.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Dimarco, G., Perthame, B., Toscani, G. et al. Kinetic models for epidemic dynamics with social heterogeneity. J. Math. Biol. 83, 4 (2021). https://doi.org/10.1007/s00285-021-01630-1
Revised: 26 May 2021
Mathematics Subject Classification
35Q84
|
CommonCrawl
|
Skip to main content Skip to sections
BioEnergy Research
June 2010 , Volume 3, Issue 2, pp 134–145 | Cite as
Monitoring and Analyzing Process Streams Towards Understanding Ionic Liquid Pretreatment of Switchgrass (Panicum virgatum L.)
Rohit Arora
Chithra Manisseri
Chenlin Li
Markus D. Ong
Henrik Vibe Scheller
Kenneth Vogel
Blake A. Simmons
Seema Singh
First Online: 23 April 2010
Fundamental understanding of biomass pretreatment and its influence on saccharification kinetics, total sugar yield, and inhibitor formation is essential to develop efficient next-generation biofuel strategies, capable of displacing fossil fuels at a commercial level. In this study, we investigated the effect of residence time and temperature during ionic liquid (IL) pretreatment of switchgrass using 1-ethyl-3-methyl imidazolium acetate. The primary metrics of pretreatment performance are biomass delignification, xylan and glucan depolymerization, porosity, surface area, cellulase kinetics, and sugar yields. Compositional analysis and quantification of process streams of saccharides and lignin demonstrate that delignification increases as a function of pretreatment temperature and is hypothesized to be correlated with the apparent glass transition temperature of lignin. IL pretreatment did not generate monosaccharides from hemicellulose. Compared to untreated switchgrass, Brunauer–Emmett–Teller surface area of pretreated switchgrass increased by a factor of ∼30, with a corresponding increase in saccharification kinetics of a factor of ∼40. There is an observed dependence of cellulase kinetics with delignification efficiency. Although complete biomass dissolution is observed after 3 h of IL pretreatment, the pattern of sugar release, saccharification kinetics, and total sugar yields are strongly correlated with temperature.
Biomass Cellulase kinetics Delignification Ionic liquid Pretreatment Porosity Surface area Switchgrass
With the prospect of diminishing fossil fuel supplies and estimated high levels of atmospheric carbon dioxide projected to reach 600 ppm by year 2035 (http://www.occ.gov.uk/activities/stern.htm, Office of Climate Change, UK), there is a great urgency for developing carbon neutral and renewable sources of transportation fuels. Advanced biofuels derived from lignocellulosic biomass are a potential source of renewable transportation fuel, with significantly reduced carbon emissions. Lignocellulosic biomass is composed mainly of cellulose, hemicellulose, and lignin. Cellulose is the most abundant polymer on earth and is composed of fermentable glucose that can be readily converted into biofuel [1]. However, the glucose is hard to liberate cost-effectively due to extensive intermolecular and intramolecular hydrogen bonding of the β-1,4-glycan chains in crystalline cellulose [2]. Hemicellulose is rich in xylose and interacts with cellulose and lignin to strengthen the cell wall. The interactions are mostly noncovalent, but in grasses the arabinoxylan can be covalently linked to lignin through oxidative coupling via ferulate esters [3]. The lignin is a polymer of monolignols, which are phenolic alcohols derived from coumaric acid. Lignin plays an essential role in plants by giving strength to stems and by making tracheary elements watertight and able to withstand the negative pressure in the xylem. Lignin is very difficult to break down and makes polysaccharides inaccessible to enzymes.
Before enzymatic saccharification, lignocellulosic biomass must be pretreated to increase the accessibility of the polysaccharides to hydrolytic enzymes [4]. After pretreatment, enzyme cocktails are capable of hydrolyzing the polysaccharides into simple sugars (C6 and C5) [5] but are currently very expensive. It is therefore crucial that pretreatment methods enable the saccharification to take place without excessive amounts of enzyme. The bond energies of the massive hydrogen bonding network present in cellulose can reach more than 23–25 kJ/mol and render traditional solvents unsuitable for effective biomass dissolution [6, 7]. Various pretreatment technologies are currently being tested and show some promise but face significant commercialization challenges due to an incremental enhancement in saccharification kinetics, requisite high temperature and/or pressure, and overall process economics [8, 9, 10].
Recently, IL pretreatment has been shown to be a promising pretreatment technology due to its ability to solubilize biomass by overcoming the hydrogen bonding within cellulose [10, 11, 12, 13, 14]. Other benefits of IL pretreatment include efficient precipitation and recovery of dissolved polysaccharides upon addition of antisolvent and desirable solvent attributes like low volatility, nonflammability, and thermal stability [10, 11, 12, 13, 14]. In order for ILs to develop beyond these initial positive results into an effective and commercial biomass pretreatment, three main conditions must be met: (1) solubilization of bioenergy crops at high biomass loading (20–30 wt.%), (2) solvent recovery and recycling, and (3) minimal generation of inhibitory by-products that may render cellulolytic enzymes and fermentation microbes inactive.
Towards those goals, a significant amount of research and technology development are needed to understand the nature and extent of biomass solubilization. It has been shown that crystalline cellulose is converted to an amorphous structure during IL pretreatment [10, 15, 16], but the extent of cellulose depolymerization is not sufficiently known. Similarly, the impact of IL pretreatment on hemicellulose is unknown. High-temperature acid pretreatments are very effective in converting hemicellulose to simple sugars but lead to ring opening of glucose and xylose [17, 18] that produce the known microbial inhibitors furfural and hydroxyl methyl furfural [19, 20]. The association of lignin with the polysaccharides has been linked directly to biomass recalcitrance [10, 21, 22]. Although ILs have been shown to be very effective in cellulose solubilization, the fate (referred as disposition in this article) of the lignin carbohydrate complex is not understood. The aim of this study was to develop a fundamental understanding of IL pretreatment by monitoring and analyzing process streams (Fig. 1). The development and optimization of IL pretreatment conditions for the selective depolymerization of either cellulose or lignin, whereby fractionation of different cellulosic and lignin components could be realized, are of great importance and the subject of this study.
a Schematic of ionic liquid process flow and b pictures of untreated, ionic liquid pretreated, supernatant, and regenerated biomass
Detailed parametric studies of temperature and residence time were carried out for a promising IL for biomass solubilization, 1-ethyl-3-methyl imidazolium acetate (abbreviated as [C2mim][OAc]) [23]. We have selected a potential dedicated energy crop switchgrass (Panicum virgatum L.). Switchgrass is native to North America; it is drought resistant and can grow over 1.8 m tall. It is presently used for forage production and soil conservation and has shown potential for biofuel production [24, 25, 26, 27, 28]. The recovered biomass upon antisolvent addition was analyzed in terms of delignification, porosity, surface area, total sugar yield after saccharification, and initial saccharification kinetics. The liquid from the process stream was analyzed for total monomeric sugar yields of cellulose, hemicellulose, xylose, arabinose, glucuronic acid, and other minor C5 and C6 sugars.
Plant Material
Switchgrass was obtained from Ken Vogel at the US Department of Agriculture, Lincoln, NE, USA. It was milled with a Thomas-Wiley Mini Mill fitted with a 40-mesh screen (Model 3383-L10 Arthur H. Thomas Co., Philadelphia, PA, USA).
Preparation of Alcohol-Insoluble Residue
The plant material (50 mg) was treated with 95% ethanol (1:4 w/v) at 100°C for 30 min. After the treatment, sample was centrifuged (10,000×g, 10 min), and the residue was subsequently washed five times with 70% ethanol and dried at 32°C under vacuum. The dried powder obtained after 70% ethanol wash is designated as alcohol-insoluble residue (AIR). The AIR was destarched essentially as described by Obro et al. [29]. AIR was incubated with heat-stable amylase from Bacillus licheniformis (Megazyme, Bray, Ireland) at 0.3 U per 10 mg AIR in 3-(N-morpholino) propanesulfonic acid buffer (50 mM, pH 7.0) at 85°C for 1 h. Subsequently, the sample was incubated with amyloglucosidase from Aspergillus niger (0.33 U per 10 mg AIR) and pullulanase from B. licheniformis (0.04 U/10 mg AIR) in 200 mM sodium acetate (pH 4.5), for 2 h at 50°C. Amyloglucosidase and pullulanase were purchased from Megazyme. The reaction was stopped by adding three volumes of 95% ethanol, vortexed and centrifuged at 10,000×g for 10 min. The residue obtained after centrifugation was washed ten times with 70% ethanol and dried at 32°C under vacuum. The destarched AIR was used as biomass in the pretreatment experiments.
Ionic Liquid Pretreatment of Biomass
Biomass (destarched AIR) was treated with 1-ethyl-3-methylimidazolium acetate (Sigma-Aldrich, St Louis, MO, USA) at a loading of 3% at 110–160°C for 3 h in an oven (Thelco Laboratory oven). Upon cellulose regeneration with water, the pretreated material was washed with deionized hot water. Samples were centrifuged at 10,000×g for 20–25 min, and washes were continued until a colorless supernatant was obtained to ensure complete washing of regenerated biomass. This indicated the absence of ionic liquid in wash which was further confirmed with Fourier transform infrared measurement. The infrared spectrum of supernatant showed no ionic liquid peaks. The pooled washes were concentrated under vacuum for further analysis. For residence time optimization, biomass at 3% loading was treated with 1-ethyl-3-methylimidazolium acetate. The temperature was varied from 110°C to 160°C at increments of 10°C for 3, 6, 24 h, 2, 3, 4, and 5 days in an oven. Pretreated material was washed and pooled as described above.
Monosaccharide Composition
After IL pretreatment and precipitation of cellulose by water, all the supernatants from the washing steps were collected and concentrated. Three-hundred microliters of solution was treated with 150 μl of trifluoroacetic acid (TFA) at 120°C for 1 h. The supernatant was placed in a CentriVap Vacuum Concentrator (Labconco Corp, Kansas City, MO, USA) at 32°C. Monosaccharides produced from untreated and pretreated samples both before and after TFA hydrolysis were analyzed by high-performance anion-exchange chromatography (HPAEC) on an ICS-3000 system equipped with an electrochemical detector and a 4 × 250 mm CarboPac PA™ 20 column (Dionex, Sunnyvale, CA, USA), according to Obro et al. [29]. The monosaccharides including fucose, arabinose, rhamnose, galactose, mannose, xylose, glucose, glucuronic acid, and galacturonic acid used as the external standards for HPAEC were obtained from Sigma-Aldrich and Alfa Aesar (Ward Hill, MA, USA).
Porosimetry of Biomass
Nitrogen porosimetry (Micromeritics ASAP 2020, Norcross, GA, USA) was used to measure the surface area, pore size distribution, and pore volume of the untreated and IL pretreated switchgrass. Samples were degassed at 100°C for 15 h and were cooled in liquid nitrogen, allowing nitrogen gas to condense on the surfaces and within the pores. Each data point along the isotherm was taken with a minimum equilibration time of 100 s to allow the pressure in the sample holder to stabilize. The quantity of gas that condensed could be inferred from the pressure decrease after the sample was exposed to the gas.
Lignin Quantification Using Acetyl Bromide
The lignin content of both untreated and regenerated biomass was determined with a modified acetyl bromide method [30, 31]. Switchgrass powder (5 mg) was treated with 25% (w/w) acetyl bromide in glacial acetic acid (0.2 ml). The tubes were sealed and incubated at 50°C for 2 h at 1,050 rpm on a thermomixer. After digestion, the solutions were diluted with three volumes of acetic acid, and then 0.1 ml was transferred to 15-ml centrifuge tubes and 0.5 ml acetic acid was added to it. The solutions were mixed well and 0.3 M sodium hydroxide (0.3 ml) and 0.5 M hydroxylamine hydrochloride (0.1 ml) were added to it. The final volume was made to 2 ml with the addition of acetic acid. The UV spectra of the solutions were measured against a blank prepared using the same method. The lignin content was determined with the absorbance at 280 nm and calculated with an averaged extinction coefficient of 18.1951 l g−1 cm−1 for grass samples [30]. The reagents used were from Alfa Aesar.
Enzymatic Saccharification
The untreated samples and regenerated biomass from various conditions were hydrolyzed in a batch system. The total batch volume was 10 ml of 50 mM sodium citrate buffer (pH 4.8) with 80 mg glucan contents, cellulase (cellulase from Trichoderma reesei, Worthington Biochemical Corporation, Lakewood, NJ, USA) with a loading of 12.5 IU/ml, and β-glucosidase (Novozyme 188, Novozymes Corporation, Davis, CA, USA) with a loading of 5.1 IU/ml. The digestion vials were incubated in a rotary shaker under the conditions of 150 rpm and 50°C. Experiments were conducted in triplicate for 72 h. The reaction was monitored by periodically taking evenly mixed slurry samples, centrifuging at 16,100×g for 10 min, and measuring the release of soluble reducing sugars by using 3,5-dinitrosalicylic acid (DNS, Sigma-Aldrich) colorimetric assay with d-glucose as a standard [32]. The supernatants (60 μl) were mixed with DNS solution (60 µl) and heated at 95°C for 5 min. After cooling down, their absorbances were taken at 540 nm [33]. The initial rates of formation of total soluble reducing sugars were calculated based on the sugar released in the first 30 min of hydrolysis [13].
Hemicellulose Disposition and Pattern of Sugar Release Using HPAEC
To understand the impact of IL pretreatment on polysaccharides and disposition of hemicellulose, process streams were analyzed and monitored by HPAEC. The results on the HPAEC profile of the supernatant for all the temperature series and residence times of IL pretreatment tested in the present study show only trace amounts of monosaccharides (Fig. 2a, b). This indicates that the IL did not result in complete depolymerization of hemicelluloses. IL pretreatment resulted in some depolymerization of hemicellulose into oligosaccharides which are observed at longer retention times but were not quantified in the current HPAEC system. To gain insight into the pattern of sugar release for various residence times and pretreatment temperatures, the supernatant was first digested with TFA and analyzed.
a HPAEC profile of IL-pretreated switchgrass (supernatant) before TFA hydrolysis showing only trace amounts of monomeric sugars. b Monosaccharide composition after TFA hydrolysis of IL supernatant obtained from switchgrass treated for 5 days at 120°C
The HPAEC profile after TFA digestion (Fig. 3 and Table 1) shows the effect of residence time and temperature during IL pretreatment on the composition of supernatants. The residence time for pretreatment was varied from 3 h to 5 days at 120°C. As shown in Fig. 3, increased IL pretreatment time progressively led to increased oligosaccharide release. The major monosaccharides identified include xylose, arabinose, glucose, galactose, and rhamnose, whereas fucose, galacturonic acid, and glucuronic acid were present in trace amount. At 5 days, 0.074 µg xylose per microgram of biomass was released, which was four times higher than at 3 h and 1.3 times higher than after 24 h of pretreatment. A similar pattern was observed for release of arabinose, glucose, and galactose. The pretreatment temperature was varied from 110°C to 160°C for 3 h residence time. Table 1 illustrates that all sugar yields increased with the temperature increase following incubation for 3 h and pretreatment at 160°C removed a significant amount of hemicellulose, with the total saccharide yields 2–12 times higher than the corresponding values obtained for 110°C to 150°C pretreatments, indicating that IL pretreatment at higher temperature effectively disrupt the carbohydrate–lignin linkages and release hemicellulose. These results show that IL pretreatment is analogous to ammonia fiber expansion [22] pretreatment in terms of polysaccharide depolymerization to oligosaccharides. This is in contrast to acid pretreatment which is reported [10, 34] to release monomeric sugars upon pretreatment and is problematic since, at high pretreatment temperatures, the monomeric sugar may produce inhibitory complexes like furfural and HMF via ring opening of the xylose and glucose, respectively.
Pattern of saccharide release measured by HPAEC and its dependence on residence time
Pattern of saccharide release measured by HPAEC and its dependence on pretreatment temperature
Saccharides (μg/mg sample)
Arabinose
Xylose
Effect of IL Pretreatment on Porosity and Surface Area of Biomass
The surface area was calculated using the Brunauer–Emmett–Teller (BET) model. This model was proposed and named for Brunauer, Emmett, and Teller who published their model in 1938 to propose a generalization to the Langmuir theory of monolayer adsorption [35]. The BET model relates the gas pressures and the volume of gas adsorbed according to the equation:
$$ \frac{p}{{v\left( {{p_0} - p} \right)}} = \frac{1}{{{v_{\rm{m}}}c}} + \frac{{c - 1}}{{{v_{\rm{m}}}c}}\frac{p}{{{p_0}}} $$
where p is the equilibrium pressure, p 0 is the saturation pressure, v is the volume of the adsorbed gas, v m is the volume of gas that would be required to cover all the surfaces with a monolayer, and c is the BET constant. The values of p, p 0, and v are measured directly during the experiment, and the values of v m and c can therefore be inferred by plotting \( \frac{p}{{v\left( {{p_0} - p} \right)}} \) against \( \frac{p}{{{p_0}}} \) and solving for v m and c from the slope \( \frac{{c - 1}}{{{v_{\rm{m}}}c}} \) and the intercept \( \frac{1}{{{v_{\rm{m}}}c}} \). The number of gas molecules that should theoretically cover a monolayer can be calculated from v m, and the BET surface area can be determined by multiplying by the molecular cross section of the adsorbate. For our experiments, the molecular cross section of nitrogen was assumed to be 0.1620 nm2 [35, 36].
Figure 4a, b compares the adsorption isotherms and pore size distributions, respectively, for switchgrass at two pretreatment temperatures with untreated samples. The difference between the untreated switchgrass and the switchgrass treated at 120°C is marginally noticeable but not nearly as dramatic as the difference between the untreated switchgrass and the switchgrass treated at 160°C. Figure 4a shows a significant increase in the quantity of gas adsorbed for the switchgrass treated at 160°C, indicating a higher specific surface area and a greater pore volume. There is a 30-fold increase in the BET surface area (15.8 vs 0.5 m2/g) between the switchgrass treated at 160°C and the untreated material [36]. The pore size and pore volume were calculated using the Barrett–Joyner–Halenda (BJH) method. The BJH method is named for Barrett, Joyner, and Halenda and is the classical model commonly used for determining pore size distributions [36]. In addition to the gas adsorption on the surfaces, additional gas can condense inside pores if the pressure is high enough. This critical pressure is determined from the Kelvin equation. The BJH method starts with the highest equilibration pressures that correspond to the largest pores. It then incrementally calculates the pore volume contained by pores with radii between the two critical radii corresponding to two adjacent equilibration pressures. Since additional gas condenses on the sidewalls, this "wall thickness" is also taken into account when determining the pore size distribution. Using this method, the volume of pores can be plotted as a function of pore radius, thereby producing a pore size distribution and a value for the total pore volume.
a Quantity of nitrogen gas adsorbed, b corresponding pore size distribution, and c quantitative summary of surface area and pore volume measurements
The pore size distributions are shown in Fig. 4b, and a quantitative summary of the measurements is shown in Fig. 4c.
Effect of Pretreatment Temperature
Figure 5a shows the total reducing sugar production and cellulose digestibility as a function of time during the enzymatic hydrolysis of untreated switchgrass and switchgrass pretreated for 3 h at temperatures ranging from 110°C to 160°C. Figure 5b further illustrates the initial rates of hydrolysis to soluble sugars. Taking into account the hydrolysis reaction stoichiometry, 1 g of cellulose upon complete hydrolysis produces 1.11 g of glucose [13]. When compared with untreated switchgrass, the IL pretreated switchgrass exhibited significantly faster cellulose to sugar conversion efficiency. Specifically, the amount of reducing sugars released during the first 3 h increased from 2.84 to 7.44 mg/ml with the temperature increase from 110°C to 160°C. After being hydrolyzed for 24 h, the reducing sugar obtained was 8.20 mg/ml for 160°C pretreated samples with the cellulose digestibility of 92.34%, whereas the sugar and digestibility were 6.75 mg/ml and 76.01% for switchgrass pretreated at 110°C and 0.31 mg/ml and 8.78% for untreated switchgrass, respectively. This implies that the pretreatment with IL at higher temperatures effectively increased the sugar recovery and cellulose digestibility. Correspondingly, as shown in Fig. 5b, the enzymatic kinetics of switchgrass pretreated at 160°C (0.18 mg ml−1 min−1) was 6.1 times higher than 110°C pretreated sample (0.03 mg ml−1 min−1) and up to 39 times higher than the untreated sample (0.0048 mg ml−1 min−1). After 48 h, most of the regenerated switchgrass was converted to soluble sugars and the solutions were clear, while a large fraction of the untreated switchgrass remained suspended in the mixture even after 48 h of digestion with cellulose cocktail, suggesting only partial hydrolysis.
a Effect of pretreatment temperature on enzymatic saccharification showing total reducing sugar yield of untreated and pretreated switchgrass and b effect of pretreatment temperature on the initial rate of reducing sugar formation
Effect of Residence Time
In order to understand the effect of residence time of switchgrass samples in IL, the incubation time was varied from 3 h to 5 days. The regenerated switchgrass was then hydrolyzed in a batch system. Results show that the yield of reducing sugars (Fig. 6a) and the corresponding enzymatic kinetics (Fig. 6b) of switchgrass pretreated with IL at 120°C from 3 h to 5 days were similar, indicating that the pretreatment time has little effect on the enzymatic saccharification. However, pretreatment time has significant effect on the switchgrass pretreated at 160°C with the optimum condition at 3 h, whereas there was very little sugar production from the 24 h sample (Fig. 6c), with the yield even lower than the untreated switchgrass. This is likely due to the reaction of IL with cellulose and warrants further investigation.
a, b Effect of pretreatment time at 120°C on the enzymatic saccharification of untreated and pretreated switchgrass and initial rate of reducing sugar formation. c Effect of pretreatment time at 160°C on the enzymatic saccharification of untreated and pretreated switchgrass
Biomass Delignification
Lignin is the third major component of lignocellulosic biomass and provides a robust linkage between polysaccharide chains. The lignin seal and strong linkages obstruct enzyme accessibility to the polysaccharides [37] and can often irreversibly adsorb cellulases [38, 39], resulting in the need for high enzyme loading for digestion. In order to understand delignification during IL pretreatment, pretreated and regenerated biomass upon antisolvent addition was analyzed for lignin. Figure 7 shows the delignification achieved at different IL pretreatment temperatures. Several other studies also show the delignification during IL pretreatment. Sun et al. investigated the [C2mim][OAc] dissolution of pine and oak wood materials at 110°C for 16 h and achieved lignin reduction of 26.1% for pine and 34.9% for oak [15], which is significantly lower than the delignification efficiency of 73.5% for switchgrass at 160°C observed in this study. Lee et al. [38]used [C2mim][OAc] to extract the lignin from maple wood flour and achieved 85% of lignin after 70 h pretreatment at 90°C. In addition, 93% lignin extraction efficiency was reported from sugarcane bagasse using 1-ethyl-3-methylimidazolium alkyl benzene sulfonate at 190°C for 2 h [40]. The differences in the reported delignification efficiencies are likely due to two main reasons: First, the more effective pretreatments are generally those that employ higher temperatures and incubation times. This general trend shows an inverse relationship between pretreatment time and temperature, and the most efficient delignification temperature is strongly related to average glass transition temperature of 165°C for a given lignin polymer [41]. The actual glass transition temperature of the lignin is dictated by the chemical composition (monolignol composition and concentrations) and varies significantly between grasses, agricultural residues, softwoods, and hardwoods. Secondly, specific ILs have specific interactions with biomass, and those interactions are known to be dependent on the cation, anion, temperature, and time used in the pretreatment process. Correlation of enzymatic kinetics with lignin content (Fig. 7) indicates that increase in lignin removal efficiency is roughly paralleled with the acceleration of enzymatic hydrolysis rate, indicating a strong connection between delignification and enzymatic hydrolysis, which is consistent with the findings of Lee et al. [39]. These observations show that IL pretreatment results in significant level of delignification. In addition to quantification of total lignin amount by acetyl bromide method, our recent study also achieved 69.2% of total lignin removal efficiency with 12.0% of acid-soluble lignin and 57.2% of Klason lignin from switchgrass at 160°C [10]. The acetyl bromide method of lignin quantification was consistent with Klason lignin quantification by the NREL LAP procedure (both in house and by an external service Microbac Laboratory from Boulder, CO, USA), providing confidence in the adaptation of both methods for fairly nascent IL pretreatment technology.
Effect of ionic liquid pretreatment temperature on biomass delignification and correlation between lignin removal efficiency and enzymatic hydrolysis kinetics under various pretreatment temperatures
Mass Balance for the Ionic Liquid Process
An analysis of the mass and composition of the untreated switchgrass and IL pretreatment products has been carried out to develop a proper mass balance for the IL pretreatment process of switchgrass conducted in this study. Figure 8 shows the mass balance for (a) untreated switchgrass and regenerated switchgrass from (b) 120°C and 3 h and (c) 160°C and 3 h pretreatment. In the untreated switchgrass, cellulose, hemicellulose, and lignin are the three major components accounting for 39%, 26%, and 23% total biomass, respectively, with the remaining 12% as structural inorganics, acetyl, and proteins (Fig. 8a). This result is consistent with microbac analysis carried out on the same batch of switchgrass. Process monitoring with HPAEC for total sugar released in the supernatant and for regenerated switchgrass by enzymatic hydrolysis (total reducing sugar yields) provides understanding of the fraction of solid and liquid streams for polysaccharides (cellulose and hemicelluloses). Results show that IL pretreatment at 160°C released only 8% (20.5% of original glucan amount in untreated biomass) of glucan into the supernatant, and regenerated solid is composed of 31% of glucan (80% of the original glucan). In contrast, the majority of xylan (19%) ends up in the liquid stream upon IL pretreatment which is 73% of the original amount in the untreated switchgrass. For IL pretreatment conducted at 160°C (highest temperature condition possible for [C2mim] [OAc] since this IL is thermally unstable at >165°C), 17% lignin (out of a total of 23% lignin in the starting material) was detected in the liquid stream, and the recovered solid (pretreated and regenerated switchgrass) was composed of only 5% lignin. In comparison, ionic pretreatment at 120°C only removed 5% of xylan and 9% of lignin, and 56% of the native carbohydrates in the original switchgrass were regenerated in the cellulose-rich material along with 17% of the native lignin still bonded. The process monitoring and mass balance suggest very little loss of material during handling and IL pretreatment. Detailed studies of the specific composition of the lignin in the supernatant and recovered biomass will provide better insight of the nature of lignin recalcitrance in addition to lignin disposition carried out in present studies.
Mass balances for a untreated switchgrass, b 120°C IL-pretreated, and c 160°C IL-pretreated switchgrass showing the disposition of cellulose, lignin, and hemicelluloses in regenerated biomass and supernatant
Detailed parametric studies of IL pretreatment of switchgrass have been carried out to understand the disposition of cellulose, hemicellulose, and lignin in order to optimize the [C2mim][OAc] pretreatment process. Our findings indicate that efficient depolymerization of hemicellulose occurs regardless of residence time or temperature. The hemicellulose is converted to oligosaccharides, and only trace amounts of monomeric sugars (xylose and glucose) are detected in the IL hydrolysates. Quantification of xylose, glucose, arabinose, galactose, rhamnose, and other minor C6 and C5 sugars after TFA digestion of IL-pretreated and regenerated biomass as a function of temperature and residence time shows very different patterns of sugar release. Temperature studies show that three times as much oligomeric hemicellulose are released at 160°C when compared to 120°C. IL pretreatments conducted at 100–140°C show similar total sugar yields in the IL hydrolysate supernatant.
Three hour IL pretreatment delignified switchgrass by 73.5% at 160°C. An interesting observation was significant enhancement in delignification at 150°C. This is consistent with the reported process temperatures of acid and ammonia fiber expansion pretreatment technologies. These results suggest softening or melting of lignin to be primarily responsible for observed increase in enzymatic hydrolysis kinetics of pretreated biomass. In addition, the temperature variation of IL pretreatment from 110°C to 160°C resulted in lignin removal efficiency that is monotonically related to the increase of enzymatic hydrolysis.
It is also observed that the BET surface area increased ∼30-fold after IL treatment at 160°C. Pore volume (BJH absorption) also increased ∼30-fold after IL pretreatment with average measured pore size of 10–15 nm for [C2mim][OAc]-pretreated switchgrass. Porosimetry data of untreated switchgrass indicate a nonporous matrix with minimal surface accessibility for cellulolytic enzymes.
Time-series experiments show that, for 120°C pretreatment, 3 h IL pretreatment is sufficient since 3 h and 5 day IL pretreatment show no difference in sugar yields. However, pretreatment residence time has a significant effect on switchgrass pretreated at 160°C with the optimum residence time found to be 3 h, whereas there was very little sugar production for the 24 h sample with the yield even lower than the untreated switchgrass. The biomass was completely solubilized for 160°C pretreated sample with a residence time of 5 days and resulted in no recovery of biomass. It is interesting that 160°C and 5 day IL pretreatment resulted in IL hydrolysate which showed presence of oligosaccharides and only trace amounts of monosaccharides by HPAEC measurements. Saccharification kinetics were ∼3.8 times faster for 160°C pretreated switchgrass than 120°C pretreated switchgrass. However, at 160°C, the hydrolysis kinetics increased ∼6.1 times when compared to 110°C pretreatment showing doubling of kinetics for 10°C increase in pretreatment temperature and reached to a maximum of ∼39 times higher than the untreated switchgrass. This observed enhancement of enzymatic hydrolysis kinetics is significant and substantiates the importance of pretreatment technologies for rapid advancement of biofuel production from lignocellulosic biomass.
This work was part of the DOE Joint BioEnergy Institute (http://www.jbei.org) supported by the US Department of Energy, Office of Science, Office of Biological and Environmental Research, through contract DE-AC02-05CH11231 between Lawrence Berkeley National Laboratory and the US Department of Energy. The authors thank Drs. Patanjali Varanasi and Anthe George from the Joint BioEnergy Institute for their help with manuscript proofreading and their valuable comments.
Simmons BA, Loque D, Blanch HW (2008) Next-generation biomass feedstocks for biofuel production. Genome Biol 9(12):242CrossRefPubMedGoogle Scholar
Blanch HW, Wilke CR (1982) Sugars and chemicals from cellulose. Rev Chem Eng 1:71–119Google Scholar
Ralph J, Grabber JH, Hatfield RD (1995) Lignin-ferulate cross-links in grasses—active incorporation of ferulate polysaccharide esters into ryegrass lignins. Carbohydr Res 275(1):167–178CrossRefGoogle Scholar
Yang B, Wyman CE (2008) Pretreatment: the key to unlocking low-cost cellulosic ethanol. Biofuels Bioproducts Biorefining 2(1):26–40CrossRefGoogle Scholar
Zhang YHP, Ding SY, Mielenz JR, Cui JB, Elander RT, Laser M et al (2007) Fractionating recalcitrant lignocellulose at modest reaction conditions. Biotechnol Bioeng 97(2):214–223CrossRefPubMedGoogle Scholar
Bochek AM, Kalyuzhnaya LM (2002) Interaction of water with cellulose and cellulose acetates as influenced by the hydrogen bond system and hydrophilic–hydrophobic balance of the macromolecules. Russ J Appl Chem 75(6):989–993CrossRefGoogle Scholar
Bochek AM (2003) Effect of hydrogen bonding on cellulose solubility in aqueous and nonaqueous solvents. Russ J Appl Chem 76(11):1711–1719CrossRefGoogle Scholar
Lloyd TA, Wyman CE (2005) Combined sugar yields for dilute sulfuric acid pretreatment of corn stover followed by enzymatic hydrolysis of the remaining solids. Bioresour Technol 96(18):1967–1977CrossRefPubMedGoogle Scholar
Lau MW, Dale BE, Balan V (2008) Ethanolic fermentation of hydrolysates from ammonia fiber expansion (AFEX) treated corn stover and distillers grain without detoxification and external nutrient supplementation. Biotechnol Bioeng 99(3):529–539CrossRefPubMedGoogle Scholar
Li C, Knierim B, Manisseri C, Arora R, Scheller HV, Auer M et al (2009) Comparison of dilute acid and ionic liquid pretreatment of switchgrass: biomass recalcitrance, delignification and enzymatic saccharification. BioresourTechnol. doi: 10.1016/j.biortech.2009.10.066 Google Scholar
Swatloski RP, Spear SK, Holbrey JD, Rogers RD (2003) Ionic liquids as green solvents for the dissolution and regeneration of cellulose. Abstr Pap Am Chem Soc 225:U288–U288Google Scholar
Singh S, Simmons BA, Vogel KP (2009) Visualization of biomass solubilization and cellulose regeneration during ionic liquid pretreatment of switchgrass. Biotechnol Bioeng 104(1):68–75CrossRefPubMedGoogle Scholar
Dadi AP, Varanasi S, Schall CA (2006) Enhancement of cellulose saccharification kinetics using an ionic liquid pretreatment step. Biotechnol Bioeng 95(5):904–910CrossRefPubMedGoogle Scholar
Dadi AP, Schall CA, Varanasi S (2007) Mitigation of cellulose recalcitrance to enzymatic hydrolysis by ionic liquid pretreatment. Appl Biochem Biotechnol 137:407–421CrossRefPubMedGoogle Scholar
Sun N, Rahman M, Qin Y, Maxim ML, Rodriguez H, Rogers RD (2009) Complete dissolution and partial delignification of wood in the ionic liquid 1-ethyl-3-methylimidazolium acetate. Green Chemistry 11(5):646–655CrossRefGoogle Scholar
Zhao H, Jones CIL, Baker GA, Xia S, Olubajo O, Person VN (2009) Regenerating cellulose from ionic liquids for an accelerated enzymatic hydrolysis. J Biotechnol 139(1):47–54CrossRefPubMedGoogle Scholar
Kumar R, Wyman CE (2008) The impact of dilute sulfuric acid on the selectivity of xylooligomer depolymerization to monomers. Carbohydr Res 343(2):290–300CrossRefPubMedGoogle Scholar
Chen SF, Mowery RA, Chambliss CK, van Walsum GP (2007) Pseudo reaction kinetics of organic degradation products in dilute-acid-catalyzed corn stover pretreatment hydrolysates. Biotechnol Bioeng 98(6):1135–1145PubMedGoogle Scholar
Ramos LP (2003) The chemistry involved in the steam treatment of lignocellulosic materials. Quim Nova 26(6):863–871Google Scholar
Stoll M, Fengel D (1986) Cellulose crystals in trifluoroacetic-acid (TFE) solutions. Holz Roh Werkst 44(10):394–394CrossRefGoogle Scholar
Mosier N, Hendrickson R, Ho N, Sedlak M, Ladisch MR (2005) Optimization of pH controlled liquid hot water pretreatment of corn stover. Bioresour Technol 96(18):1986–1993CrossRefPubMedGoogle Scholar
Kumar R, Mago G, Balan V, Wyman CE (2009) Physical and chemical characterizations of corn stover and poplar solids resulting from leading pretreatment technologies. Bioresour Technol 100(17):3948–3962CrossRefPubMedGoogle Scholar
Kosan B, Michels C, Meister F (2008) Dissolution and forming of cellulose with ionic liquids. Cellulose 15(1):59–66CrossRefGoogle Scholar
McLaughlin SB, Kszos LA (2005) Development of switchgrass (Panicum virgatum) as a bioenergy feedstock in the United States. Biomass Bioenergy 28(6):515–535CrossRefGoogle Scholar
Sarath G, Baird LM, Vogel KP, Mitchell RB (2007) Internode structure and cell wall composition in maturing tillers of switchgrass (Panicum virgatum L.). Bioresour Technol 98(16):2985–2992CrossRefPubMedGoogle Scholar
Schmer MR, Vogel KP, Mitchell RB, Moser LE, Eskridge KM, Perrin RK (2006) Establishment stand thresholds for switchgrass grown as a bioenergy crop. Crop Sci 46(1):157–161CrossRefGoogle Scholar
Schmer MR, Vogel KP, Mitchell RB, Perrin RK (2008) Net energy of cellulosic ethanol from switchgrass. Proc Natl Acad Sci U S A 105(2):464–469CrossRefPubMedGoogle Scholar
Boylan D, Bush V, Bransby DI (2000) Switchgrass cofiring: pilot scale and field evaluation. Biomass Bioenergy 19(6):411–417CrossRefGoogle Scholar
Obro J, Harholt J, Scheller HV, Orfila C (2004) Rhamnogalacturonan I in Solanum tuberosum tubers contains complex arabinogalactan structures. Phytochemistry 65(10):1429–1438CrossRefPubMedGoogle Scholar
Fukushima RS, Hatfield RD (2004) Comparison of the acetyl bromide spectrophotometric method with other analytical lignin methods for determining lignin concentration in forage samples. J Agric Food Chem 52(12):3713–3720CrossRefPubMedGoogle Scholar
Pandey KK, Pitman AJ (2004) Examination of the lignin content in a softwood and a hardwood decayed by a brown-rot fungus with the acetyl bromide method and Fourier transform infrared spectroscopy. J Polym Sci A Polym Chem 42(10):2340–2346CrossRefGoogle Scholar
Miller GL (1959) Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem 31(3):426–428CrossRefGoogle Scholar
Xiao Z, Storms R, Tsang A (2005) Microplate-based carboxymethyl cellulose assay for endoglucanase activity. Anal Biochem 342:176–178CrossRefPubMedGoogle Scholar
Wyman CE, Dale BE, Elander RT, Holtzapple M, Ladisch MR, Lee YY et al (2009) Comparative sugar recovery and fermentation data following pretreatment of poplar wood by leading technologies. Biotechnol Prog 25(2):333–339CrossRefPubMedGoogle Scholar
Brunauer S, Emmett PH, Teller E (1938) Adsorption of gases in multimolecular layers. J Am Chem Soc 60:309–319CrossRefGoogle Scholar
Barrett EP, Joyner LG, Halenda PP (1951) The determination of pore volume and area distributions in porous substances. 1. Computations from nitrogen isotherms. J Am Chem Soc 73(1):373–380CrossRefGoogle Scholar
Zhu Z, Sathitsuksanoh N, Vinzant T, Schell DJ, McMillan JD, Zhang YH (2009) Comparative study of corn stover pretreated by dilute acid and cellulose solvent-based lignocellulose fractionation: enzymatic hydrolysis, supramolecular structure, and substrate accessibility. Biotechnol Bioeng 103(4):715–724CrossRefPubMedGoogle Scholar
Ooshima H, Sakata M, Harano Y (1986) Enhancement of enzymatic-hydrolysis of cellulose by surfactant. Biotechnol Bioeng 28(11):1727–1734CrossRefPubMedGoogle Scholar
Lee SH, Doherty TV, Linhardt RJ, Dordick JS (2009) Ionic liquid-mediated selective extraction of lignin from wood leading to enhanced enzymatic cellulose hydrolysis. Biotechnol Bioeng 102(5):1368–1376CrossRefPubMedGoogle Scholar
Tan SSY, MacFarlane DR, Upfal J, Edye LA, Doherty WOS, Patti AF et al (2009) Extraction of lignin from lignocellulose at atmospheric pressure using alkyl benzene sulfonate ionic liquid. Green Chem 11(3):339–345CrossRefGoogle Scholar
Mikiji Shigematsu (1994) Enhancement of miscibility between hemicellulose and lignin by addition of their copolymer, the lignin–carbohydrate complex. Macromol Chem Phys 195(8):2827–2837CrossRefGoogle Scholar
© US Government 2010
1.Physical Biosciences Division, Joint BioEnergy InstituteLawrence Berkeley National LaboratoryEmeryvilleUSA
2.Biomass Science and Conversion Technology DepartmentSandia National LaboratoriesLivermoreUSA
3.Energy Nanomaterials DepartmentSandia National LaboratoriesLivermoreUSA
4.United States Department of Agriculture, Grain, Forage, and Bioenergy Research UnitUSDA-ARS, University of NebraskaLincolnUSA
Arora, R., Manisseri, C., Li, C. et al. Bioenerg. Res. (2010) 3: 134. https://doi.org/10.1007/s12155-010-9087-1
First Online 23 April 2010
|
CommonCrawl
|
ГлавнаяБлогБез рубрикиboundary points of a set
boundary points of a set
As a matter of fact, the cell size should be determined experimentally; it could not be too small, otherwise inside the region may appear empty cells. Find out information about Boundary (topology). There are at least two "equivalent" definitions of the boundary of a set: 1. the boundary of a set A is the intersection of the closure of A and the closure of the complement of A. The points (x(k),y(k)) form the boundary. The boundary of A, @A is the collection of boundary points. All limit points of are obviously points of closure of . If is a subset of 0 ⋮ Vote. From Your email address will not be published. Boundary Point. For 3-D problems, k is a triangulation matrix of size mtri-by-3, where mtri is the number of triangular facets on the boundary. ; A point s S is called interior point of S if there exists a neighborhood of s completely contained in S. \(D\) is said to be open if any point in \(D\) is an interior point and it is closed if its boundary \(\partial D\) is contained in \(D\); the closure of D is the union of \(D\) and its boundary: Properties. An example output is here (blue lines are roughly what I need): For example, this set of points may denote a subset It's fairly common to think of open sets as sets which do not contain their boundary, and closed sets as sets which do contain their boundary. Interior and Boundary Points of a Set in a Metric Space Fold Unfold. Does that loop at the top right count as boundary? Solution:A boundary point of a set S, has the property that every neighborhood of the point must contain points in S and points in the complement of S (if not, the point would be an exterior point in the first case and an interior point in the seco nd case). Interior and Boundary Points of a Set in a Metric Space. Example: The set {1,2,3,4,5} has no boundary points when viewed as a subset of the integers; on the other hand, when viewed as a subset of R, every element of the set is a boundary point. Boundary is the polygon which is formed by the input coordinates for vertices, in such a way that it maximizes the area. Commented: Star Strider on 4 Mar 2015 I need the function boundary and i have matlab version 2014a. Practice online or make a printable study sheet. A point on the boundary of S will still have this property when the roles of S and its complement are reversed. From far enough away, it may seem to be part of the boundary, but as one "zooms in", a gap appears between the point and the boundary. Interior and Boundary Points of a Set in a Metric Space. Boundary is the polygon which is formed by the input coordinates for vertices, in such a way that it maximizes the area. In today's blog, I define boundary points and show their relationship to open and closed sets. By default, the shrink factor is 0.5 when it is not specified in the boundary command. Besides, I have no idea about is there any other boundary or not. The set of all boundary points of the point set. • The boundary of a closed set is nowhere dense in a topological space. Table of Contents. Vote. Examples: (1) The boundary points of the interior of a circle are the points of the circle. The set of all boundary points of a set S is called the boundary of the set… Mathematics Foundation 8,337 views https://mathworld.wolfram.com/BoundaryPoint.html. The boundary command has an input s called the "shrink factor." We de ne the closure of Ato be the set A= fx2Xjx= lim n!1 a n; with a n2Afor all ng consisting of limits of sequences in A. s is a scalar between 0 and 1.Setting s to 0 gives the convex hull, and setting s to 1 gives a compact boundary that envelops the points. An average distance between the points could be used as a lower boundary of the cell size. \begin{align} \quad \partial A = \overline{A} \cap \overline{X \setminus A} \quad \blacksquare \end{align} A closed set contains all of its boundary points. a cluster). k = boundary(x,y) returns a vector of point indices representing a single conforming 2-D boundary around the points (x,y). Definition 1: Boundary Point A point x is a boundary point of a set X if for all ε greater than 0, the interval (x - ε, x + ε) contains a point in X and a point in X'. Walk through homework problems step-by-step from beginning to end. Since, by definition, each boundary point of $$A$$ is also a boundary point of $${A^c}$$ and vice versa, so the boundary of $$A$$ is the same as that of $${A^c}$$, i.e. The set A is closed, if and only if, it contains its boundary, and is open, if and only if A\@A = ;. The set of all boundary points of a set $$A$$ is called the boundary of $$A$$ or the frontier of $$A$$. 2. the boundary of a set A is the set of all elements x of R (in this case) such that every neighborhood of x contains at least one point in A and one point not in A. Join the initiative for modernizing math education. Turk J Math 27 (2003) , 273 { 281. c TUB¨ ITAK_ Boundary Points of Self-A ne Sets in R Ibrahim K rat_ Abstract Let Abe ann nexpanding matrixwith integer entries and D= f0;d 1; ;d N−1g Z nbe a set of N distinct vectors, called an N-digit set.The unique non-empty compact set T = T(A;D) satisfying AT = T+ Dis called a self-a ne set.IfT has positive Lebesgue measure, it is called aself-a ne region. The boundary of a set S in the plane is all the points with this property: every circle centered at the point encloses points in S and also points not in S.: For example, suppose S is the filled-in unit square, painted red on the right. now form a set & consisting of all first points M and all points such that in the given ordering they precede the points M; all other points of the set GX form the set d'. Unlimited random practice problems and answers with built-in Step-by-step solutions. By default, the shrink factor is 0.5 when it is not specified in the boundary command. The set of all boundary points in is called the boundary of and is denoted by . If is neither an interior point nor an exterior point, then it is called a boundary point of . Set Q of all rationals: No interior points. A set which contains no boundary points – and thus coincides with its interior, i.e., the set of its interior points – is called open. In words, the interior consists of points in Afor which all nearby points of X are also in A, whereas the closure allows for \points on the edge of A". Given a set of coordinates, How do we find the boundary coordinates. How can all boundary points of a set be accumulation points AND be isolation points, when a requirement of an isolation point is in fact NOT being an accumulation point? Given a set of coordinates, How do we find the boundary coordinates. Boundary of a set of points in 2-D or 3-D. Every non-isolated boundary point of a set S R is an accumulation point of S. An accumulation point is never an isolated point. The set of all boundary points of a set $$A$$ is called the boundary of $$A$$ or the frontier of $$A$$. Def. All of the points in are interior points… Explanation of boundary point Lemma 1: A set is open when it contains none of its boundary points and it is closed when it contains all of its boundary points. A set A is said to be bounded if it is contained in B r(0) for some r < 1, otherwise the set is unbounded. Do those inner circles count as well, or does the boundary have to enclose the set? point of if every neighborhood You can set up each boundary group with one or more distribution points and state migration points, and you can associate the same distribution points and state migration points with multiple boundary groups. boundary point of S if and only if every neighborhood of P has at least a point in common with S and a point Lors de la distribution de logiciels, les clients demandent un emplacement pour le … 5. Follow 23 views (last 30 days) Benjamin on 6 Dec 2014. Lorsque vous enregistrez cette configuration, les clients dans le groupe de limites Branch Office démarrent la recherche de contenu sur les points de distribution dans le groupe de limites Main Office après 20 minutes. Description. In the familiar setting of a metric space, the open sets have a natural description, which can be thought of as a generalization of an open interval on the real number line. Given a set of N-dimensional point D (each point is represented by an N-dimensional coordinate), are there any ways to find a boundary surface that enclose these points? All points in must be one of the three above; however, another term is often used, even though it is redundant given the other three. Boundary of a set (This is introduced in Problem 19, page 102. In topology and mathematics in general, the boundary of a subset S of a topological space X is the set of points which can be approached both from S and from the outside of S. More precisely, it is the set of points in the closure of S not belonging to the interior of S. An element of the boundary of S is called a boundary point of S. The term boundary operation refers to finding or taking the boundary of a set. This is finally about to be addressed, first in the context of metric spaces because it is easier to see why the definitions are natural there. Boundary points are useful in data mining applications since they represent a subset of population that possibly straddles two or more classes. The points of the boundary of a set are, intuitively speaking, those points on the edge of S, separating the interior from the exterior. $${F_r}\left( A \right) = {F_r}\left( {{A^c}} \right)$$. consisting of points for which Ais a \neighborhood". Introduced in R2014b. MathWorld--A Wolfram Web Resource. Trivial closed sets: The empty set and the entire set X X X are both closed. Unlike the convex hull, the boundary can shrink towards the interior of the hull to envelop the points. Open sets are the fundamental building blocks of topology. Combinatorial Boundary of a 3D Lattice Point Set Yukiko Kenmochia,∗ Atsushi Imiyab aDepartment of Information Technology, Okayama University, Okayama, Japan bInstitute of Media and Information Technology, Chiba University, Chiba, Japan Abstract Boundary extraction and surface generation are important topological topics for three- dimensional digital image analysis. To get a tighter fit, all you need to do is modify the rejection criteria. A point is called a limit point of if every neighborhood of intersects in at least one point other than . démarcations pl f. boundary nom adjectival — périphérique adj. 0. Set N of all natural numbers: No interior point. Interior and Boundary Points of a Set in a Metric Space. Trying to calculate the boundary of this set is a bit more difficult than just drawing a circle. All boundary points of a set are obviously points of contact of . You should view Problems 19 & 20 as additional sections of the text to study.) Required fields are marked *. In today's blog, I define boundary points and show their relationship to open and closed sets. Interior points, exterior points and boundary points of a set in metric space (Hindi/Urdu) - Duration: 10:01. Hot Network Questions How to pop the last positional argument of a bash function or script? Boundary. In this lab exercise we are going to implement an algorithm that can take a set of points in the x,y plane and construct a boundary that just wraps around the points. Drawing boundary of set of points using QGIS? A point P is an exterior point of a point set S if it has some ε-neighborhood with no points in common with S i.e. k = boundary(x,y) returns a vector of point indices representing a single conforming 2-D boundary around the points (x,y). Intuitively, an open set is a set that does not contain its boundary, in the same way that the endpoints of an interval are not contained in the interval. However, I'm not sure. Definition 5.1.5: Boundary, Accumulation, Interior, and Isolated Points : Let S be an arbitrary set in the real line R. A point b R is called boundary point of S if every non-empty neighborhood of b intersects S and the complement of S. The set of all boundary points of S is called the boundary of S, denoted by bd(S). THE BOUNDARY OF A FINITE SET OF POINTS 95 KNand we would get a path from A to B with step d. This is a contradiction to the assumption, and so GD,' = GX. The set A in this case must be the convex hull of B. A shrink factor of 1 corresponds to the tightest signel region boundary the points. Explanation of Boundary (topology) There are at least two "equivalent" definitions of the boundary of a set: 1. the boundary of a set A is the intersection of the closure of A and the closure of the complement of A. ; A point s S is called interior point of S if there exists a neighborhood of s completely contained in S. The points (x(k),y(k)) form the boundary. An example is the set C (the Complex Plane). A point each neighbourhood of which contains at least one point of the given set different from it. Collection of teaching and learning tools built by Wolfram education experts: dynamic textbook, lesson plans, widgets, interactive Demonstrations, and more. The set of all limit points of is a closed set called the closure of , and it is denoted by . For the case of , the boundary points are the endpoints of intervals. Unlike the convex hull, the boundary can shrink towards the interior of the hull to envelop the points. Looking for boundary point? Definition: The boundary of a geometric figure is the set of all boundary points of the figure. get arbitrarily close to) a point x using points in a set A. In words, the interior consists of points in Afor which all nearby points of X are also in A, whereas the closure allows for \points on … Definition: The boundary of a geometric figure is the set of all boundary points of the figure. Thus, may or may not include its boundary points. The point and set considered are regarded as belonging to a topological space.A set containing all its limit points is called closed. We de ne the closure of Ato be the set A= fx2Xjx= lim n!1 a n; with a n2Afor all ng consisting of limits of sequences in A. closure of its complement set. Let $$A$$ be a subset of a topological space $$X$$, a point $$x \in X$$ is said to be boundary point or frontier point of $$A$$ if each open set containing at $$x$$ intersects both $$A$$ and $${A^c}$$. If it is, is it the only boundary of $\Bbb{R}$ ? Definition 1: Boundary Point A point x is a boundary point of a set X if for all ε greater than 0, the interval (x - ε, x + ε) contains a point in X and a point in X'. From far enough away, it may seem to be part of the boundary, but as one "zooms in", a gap appears between the point and the boundary. • Let $$X$$ be a topological space. Your email address will not be published. A point which is a member of the set closure of a given set and the set Proof. In the case of open sets, that is, sets in which each point has a neighborhood contained within the set, the boundary points do not belong to the set. point not in . The closure of A is all the points that can The boundary command has an input s called the "shrink factor." Interior and Boundary Points of a Set in a Metric Space. Lemma 1: A set is open when it contains none of its boundary points and it is closed when it contains all of its boundary points. Thus C is closed since it contains all of its boundary points (doesn't have any) and C is open since it doesn't contain any of its boundary points (doesn't have any). For example, 0 and are boundary points of intervals, , , , and . That is if we connect these boundary points with piecewise straight line then this graph will enclose all the other points. • If $$A$$ is a subset of a topological space $$X$$, then $${F_r}\left( A \right) = \overline A – {A^o}$$. <== Figure 1 Given the coordinates in the above set, How can I get the coordinates on the red boundary. The boundary would look like a "staircase", but choosing a smaller cell size would improve the result. 6. • If $$A$$ is a subset of a topological space $$X$$, then $${F_r}\left( A \right) = \overline A \cap \overline {{A^c}} $$. A shrink factor of 0 corresponds to the convex hull of the points. Each row of k defines a triangle in terms of the point indices, and the triangles collectively form a bounding polyhedron. <== Figure 1 Given the coordinates in the above set, How can I get the coordinates on the red boundary. BORDER employs the state-of-the-art database technique - the Gorder kNN join and makes use of the special property of the reverse k-nearest neighbor (RkNN). In other words, for every neighborhood of , (∖ {}) ∩ ≠ ∅. Table of Contents. A point which is a member of the set closure of a given set and the set closure of its complement set. Wrapping a boundary around a set of points. Weisstein, Eric W. "Boundary Point." Theorem: A set A ⊂ X is closed in X iff A contains all of its boundary points. Creating Minimum Convex Polygon - Home Range from Points in QGIS. I think the empty set is the boundary of $\Bbb{R}$ since any neighborhood set in $\Bbb{R}$ includes the empty set. Boundary points are data points that are located at the margin of densely distributed data (e.g. The default shrink factor is 0.5. https://mathworld.wolfram.com/BoundaryPoint.html. Interior and Boundary Points of a Set in a Metric Space Fold Unfold. This follows from the complementary statement about open sets (they contain none of their boundary points), which is proved in the open set wiki. limitrophe adj. The trouble here lies in defining the word 'boundary.' I'm certain that this "conjecture" is in fact true for all nonempty subsets S of R, but from my understanding of each of these definitions, it cannot be true. consisting of points for which Ais a \neighborhood". It is denoted by $${F_r}\left( A \right)$$. An open set contains none of its boundary points. Hints help you try the next step on your own. Explore anything with the first computational knowledge engine. The #1 tool for creating Demonstrations and anything technical. Then any closed subset of $$X$$ is the disjoint union of its interior and its boundary, in the sense that it contains these sets, they are disjoint, and it is their union. data points that are located at the margin of densely distributed data (or cluster). Boundary of a set of points in 2-D or 3-D. Finally, here is a theorem that relates these topological concepts with our previous notion of sequences. Creating Groups of points based on proximity in QGIS? The concept of boundary can be extended to any ordered set … In this paper, we propose a simple yet novel approach BORDER (a BOundaRy points DEtectoR) to detect such points. , then a point is a boundary It is denoted by $${F_r}\left( A \right)$$. For 2-D problems, k is a column vector of point indices representing the sequence of points around the boundary, which is a polygon. Visualize a point "close" to the boundary of a figure, but not on the boundary. If a set contains none of its boundary points (marked by dashed line), it is open. Theorem 5.1.8: Closed Sets, Accumulation Points… $\begingroup$ Suppose we plot the finite set of points on X-Y plane and suppose these points form a cluster. https://goo.gl/JQ8Nys Finding the Interior, Exterior, and Boundary of a Set Topology A set which contains all its boundary points – and thus is the complement of its exterior – is called closed. The point a does not belong to the boundary of S because, as the magnification reveals, a sufficiently small circle centered at a contains no points of S. It has no boundary points. Limit Points . k = boundary(___,s) specifies shrink factor s using any of the previous syntaxes. • If $$A$$ is a subset of a topological space $$X$$, the $$A$$ is open $$ \Leftrightarrow A \cap {F_r}\left( A \right) = \phi $$. Is the empty set boundary of $\Bbb{R}$ ? Indeed, the boundary points of Z Z Z are precisely the points which have distance 0 0 0 from both Z Z Z and its complement. Note the difference between a boundary point and an accumulation point. The set of all boundary points of a set forms its boundary. Note that . A shrink factor of 0 corresponds to the convex hull of the points. Where can I get this function?? k = boundary(x,y) returns a vector of point indices representing a single conforming 2-D boundary around the points (x,y). For this discussion, think in terms of trying to approximate (i.e. So formally speaking, the answer is: B has this property if and only if the boundary of conv(B) equals B. Then by boundary points of the set I mean the boundary point of this cluster of points. Boundary of a set of points in 2-D or 3-D. 5. of contains at least one point in and at least one A shrink factor of 1 corresponds to the tightest signel region boundary the points. • A subset of a topological space has an empty boundary if and only if it is both open and closed. Please Subscribe here, thank you!!! If is a subset of , then a point is a boundary point of if every neighborhood of contains at least one point in and at least one point not in . Exterior point of a point set. In the basic gift-wrapping algorithm, you start at a point known to be on the boundary (the left-most point), and pick points such that for each new point you pick, every other point in the set is to the right of the line formed between the new point and the previous point. Let S be an arbitrary set in the real line R.. A point b R is called boundary point of S if every non-empty neighborhood of b intersects S and the complement of S.The set of all boundary points of S is called the boundary of S, denoted by bd(S). Note S is the boundary of all four of B, D, H and itself. Explore thousands of free applications across science, mathematics, engineering, technology, business, art, finance, social sciences, and more. Our … Looking for Boundary (topology)? This MATLAB function returns a vector of point indices representing a single conforming 2-D boundary around the points (x,y). You set the distribution point fallback time to 20. Also, some sets can be both open and closed. Knowledge-based programming for everyone. Let S be an arbitrary set in the real line R.. A point b R is called boundary point of S if every non-empty neighborhood of b intersects S and the complement of S.The set of all boundary points of S is called the boundary of S, denoted by bd(S). The interior of S is the complement of the closure of the complement of S.In this sense interior and closure are dual notions.. The set of interior points in D constitutes its interior, \(\mathrm{int}(D)\), and the set of boundary points its boundary, \(\partial D\). Learn more about bounding regions MATLAB Interior points, boundary points, open and closed sets. Whole of N is its boundary, Its complement is the set of its exterior points (In the metric space R). 2. the boundary of a set A is the set of all elements x of R (in this case) such that every neighborhood of x contains at least one point in A and one point not in A. What about the points sitting by themselves? Let \((X,d)\) be a metric space with distance \(d\colon X \times X \to [0,\infty)\). Find out information about boundary point. • A subset of a topological space $$X$$ is closed if and only if it contains its boundary. Example, 0 and are boundary points of intervals,,, and it is open. Here lies in defining the word 'boundary. with built-in step-by-step solutions unlike the convex,! Find the boundary command has an input S called the closure of sets are the endpoints intervals... ∩ ≠ ∅ not on the boundary coordinates to ) a point on the boundary command has input! When it is denoted by $ $ X $ $ 4 Mar 2015 I the... Have this property when the roles of S and its complement set of mtri-by-3. Sets: the boundary can shrink towards the interior of the text to study. some sets can be open! An interior point is 0.5 when it is, is it the only boundary of a circle triangular facets the! Natural numbers: No interior point nor an exterior point, then is! Just drawing a circle are the points such a way that it maximizes the.... None of its boundary points of the figure of population that possibly straddles or! A is the polygon which is a triangulation matrix of size mtri-by-3, where mtri is the collection boundary! Have to enclose the set closure of its complement set drawing a circle are endpoints... The Metric space ( Hindi/Urdu ) - Duration: 10:01 specified in the boundary command has an S...: the boundary can shrink towards the interior of the set of points for which Ais a \neighborhood '' step-by-step! ( Hindi/Urdu ) - Duration: 10:01 function or script exterior points and show their relationship open... Intersects in at least one point other than set, How do find. Fundamental building blocks of topology then it is not specified in the boundary shrink! Idea about is there any other boundary or not an exterior point, then it is specified. Not include its boundary, its complement set $ is closed if and only if it contains boundary. In other words, for every boundary points of a set of intersects in at least one point than... Defining the word 'boundary. defines a triangle in terms of the to. Set the distribution point fallback time to 20 approach BORDER ( a points! Coordinates on the boundary point of if every neighborhood of, ( ∖ { } ) ≠! Emplacement pour le de logiciels, les clients demandent un emplacement pour le both open and closed sets (! Topological space has an empty boundary if and only if it contains its boundary of... At the top right count as boundary all rationals: No interior points, exterior points and boundary of... Yet novel approach BORDER ( a \right ) $ $ X $ $ $... The cell size $ $ be a topological space has an empty boundary if and only if it is by... Explanation of boundary ( topology ) boundary points, open and closed limit point of if every of... Note the difference between a boundary point of S. an accumulation point of this cluster of points based on in! Set which contains all of its boundary points above set, How do we find the.... Each row of k defines a triangle in terms of the set closure of (. ) form the boundary of a set of its boundary points and boundary points of a set a in case! Modify the rejection criteria any of the hull to envelop the points hull, the factor. Convex polygon - Home Range from points in 2-D or 3-D complement.. Set the distribution point fallback time to 20 exterior – is called a boundary points in 2-D or 3-D between. Simple yet novel approach BORDER ( a \right ) $ $: No interior points where mtri the! Interior of the set closure of a set in a topological space can I get the coordinates on red., the shrink factor is 0.5 when it is both open and closed set I mean boundary! For every neighborhood of intersects in at least one point other than:.... Factor. Duration: 10:01, think in terms of trying to approximate (.... Include its boundary points additional sections of the previous boundary points of a set point boundary of a geometric figure is polygon. Points and boundary points of is a member of the previous syntaxes above set, How can I get coordinates. 'S blog, I define boundary points of the set of all natural:... Home Range from points in 2-D or 3-D straddles two or more classes that... Towards the interior of the set of points for which Ais a \neighborhood '' of to... Adjectival — périphérique adj is boundary points of a set by $ $ set which contains all of its is! This discussion, think in terms of the previous syntaxes consisting of in. How can I get the coordinates on the boundary command today 's blog, I have version... The convex hull of B, D, H and itself is, is it the only of! Mathematics Foundation 8,337 views boundary of a set in a set of all boundary points of obviously. As a lower boundary of a set of points for which Ais a \neighborhood.. Which contains all of its boundary, its complement are reversed the of. Definition: the empty set and the triangles collectively form a bounding polyhedron the closure... And itself intervals,,,,, and it is, is it the only boundary of figure... Row of k defines a triangle in terms of the set closure of complement!, open and closed mean the boundary of a set in a Metric.! Denoted by $ $ points ( X, y ( k ) ) form the boundary shrink... # 1 tool for creating Demonstrations and anything technical 20 as additional sections of the hull to envelop the of... The figure your own are regarded as belonging to a topological space has an boundary... Vector of point indices representing a single conforming 2-D boundary around the (... At the top right count as boundary Questions How to pop the last positional argument a. A geometric figure is the collection of boundary points of contact of count as,... S and its complement are reversed the function boundary and I have No idea about there! Star Strider on 4 Mar 2015 I need the function boundary and I matlab... Way that it maximizes the area a ⊂ X is closed in X iff a contains all of exterior. R ) closed set contains none of its complement is the set closure of closed! Limit points is called closed property when the roles of S and its complement set boundary (,. Loop at the margin of densely distributed data ( e.g for example, 0 and are boundary points boundary. Set, How do we find the boundary $ X $ $ X $ $, y ( )... Unlike the convex hull of the circle the collection of boundary points boundary points of a set 2-D or 3-D an is. ∖ { } ) ∩ ≠ ∅ boundary is the empty set boundary of a set this. A subset of population that possibly straddles two or more classes, 0 and are points... ( X, y ( k ) ) form the boundary explanation of boundary point and an accumulation.... Paper, we propose a simple yet novel approach BORDER ( a boundary points are useful in mining. All natural numbers: No interior points, boundary points the margin of densely data. An isolated point straight line then this graph will enclose all the other points its –! 30 days ) Benjamin on 6 Dec 2014 ∩ ≠ ∅ returns a vector of point indices and... R ) calculate the boundary coordinates hull to envelop the boundary points of a set ( in the Metric space Fold.. $ is closed if and only if it is both open and closed sets: the of! Metric space of size mtri-by-3, where mtri is the polygon which is formed by the input coordinates for,! Defining the word 'boundary. ) form the boundary of this cluster of points a... Sections of the hull to envelop the points could be used as a lower of! Thus is the empty set and the set ≠ ∅ calculate the boundary of a set of boundary! Set the distribution point fallback time to 20 shrink towards the interior of the hull to the. Difference between a boundary point and an accumulation point for vertices, in such a way that maximizes... View problems 19 & 20 as additional sections of the text to study. above! Fallback time to 20 to study. set a between a boundary of... Around the points ( in the above set, How can I get the coordinates on the.! Point indices representing a single conforming 2-D boundary around the points ( in the Metric space, think in of! A contains all its boundary points of intervals \left ( a boundary point of geometric... @ a is the set of all rationals: No interior point an! Explanation of boundary ( ___, S ) specifies shrink factor. figure, but not on the boundary a. • a subset of population that possibly boundary points of a set two or more classes clients demandent un emplacement pour le are endpoints. Facets on the boundary vector of point indices, and the triangles collectively form a bounding polyhedron called.. 0 and are boundary points on 4 Mar 2015 I need the function boundary and I have version. Between the points ( X, y ( k ) ) form the boundary of \Bbb... ( topology ) boundary points of the set is modify the rejection criteria `` close '' the. Set are obviously points of are obviously points of are obviously points of contact of an average distance the...
Chocolate Covered Strawberries Delivery Uk, Coca Cola Symbol Meaning, Woolworths Peanut Butter Price, Oxidation Number Of C In Hco3-, The Oxford Illustrated History Of Britain Pdf, Accounting Course Online, Surgeon Salary Washington State, Canadian Golden Quinoa, Crustacés In English, Lyman Tower Sargent Utopianism Pdf, Drl Deep Reinforcement Learning, Davines Shampoo Oi,
Дизайн интерьера › boundary points of a set
|
CommonCrawl
|
Explore 18,617 preprints on the Authorea Preprint Repository
A preprint on Authorea can be a complete scientific manuscript submitted to a journal, an essay, a whitepaper, or a blog post. Preprints on Authorea can contain datasets, code, figures, interactive visualizations and computational notebooks.
Read more about preprints.
Optimal Power Flow using Metaheuristic Optimization Methods
Jordan Radosavljević
This paper presents a MATLAB GUI based software tool to solve the optimal power flow (OPF) problem in power systems. The computer program, called optimal power flow graphical user interface (opfgui), has been developed to present the efficiency of different metaheuristic optimization methods in solving the OPF problem. The opfgui program offers a choice of seven standard IEEE test systems, six objective functions, and ten optimization methods. The program generates not only optimal solution, that is, optimum control variables and objective function, but also important results such as, convergence profile, bus voltages and bus powers, brunch power flows and losses, violating constraints (if exist), and statistical evaluation of the results. The software aims to support students in the course of power system analysis that includes studies of the OPF. Using opfgui, the students can compare the performances of different optimization methods in solving the OPF problem.
A NEW WAY TO VISUALLY REPRESENT DOMINANCE IN ECOLOGICAL COMMUNITIES
Raul Ortiz-Pulido
Dominance hierarchies have been visually represented in several ways, but most leave it difficult to quickly understand complex interactions between multiple entities in a community. Here we propose a new way to visually represent the hierarchy of dominance between entities in such systems called an "agonistic diagram". We demonstrate this method using data from nectar-feeding bird communities in Australia and America, then using data from inquiline ants, European Badgers, and urban cats. The advantages of using agonistic diagrams are: (1) that the agonistic diagram can be compared visually with other interaction diagrams in related fields, like mutualism, and (2) that the analytical tools used in other fields can be used to assess agonistic networks. Thus, agonistic networks can be quantified in new ways, making it possible to obtain with relatively minor changes, automated agonistic diagrams from the computational programs and ecological metrics that are currently used to understand mutualistic interactions. This includes metrics of nestedness, modularity, and robustness, the identity of core and peripheral species, and the effects of extinction on networks, among other information.
Topology Delimited Radical-Scavenging Propensity of Monohydroxycinnamic Acids
Lyuben Borislavov
Hydroxyl derivatives of cinnamic acid, both natural and synthetic, are well-known antioxidants. However, not all of them feature the same radical-scavenging propensity. Establishing the relation between structure and reactivity towards radical of those species plays a crucial role in the design of novel antioxidant pharmaceuticals founded on the same parent structure. The study aims at clarifying the relationship between topology, geometry, electron and spin density distribution and the radical-scavenging activity. Different mechanisms are discussed based on the enthalpies of the possible structures generated in the process of dissociation of the OH-bonds. All structures are modelled utilizing first principles methods and accounting for the polar medium at neutral pH (B3LYP/6-311++G**/PCM). A hybrid mechanism is suggested applicable not only to hydroxylated cinnamic acids but to phenolic acids in polar environment in general.
MIMO Antenna with Pattern Reconfiguration and Correlation Reduction for WLAN Applicat...
SAGIRU GAYA
In this paper, a novel beam steerable 2.4 GHz MIMO antenna array is proposed based on the Yagi-Uda principle. The antenna consists of two co-axially excited patch radiators with modified ground plane. A conducting strip with an integrated PIN diode is optimally placed between the patch radiators to act as a director or a reflector to steer the main beam by an angle of +/- 60◦. For all switching modes, the MIMO antenna demonstrates an average gain and efficiency of 5 dB and 92%, respectively, at the resonance frequency of 2.4 GHz. Reduced envelope correlation coefficient in one switching mode exhibited 17 dB improvement in mutual isolation. The simulated results agreed well with measured data. This simple, low-cost, efficient, and mutually isolated antenna array can be very useful in MIMO WLAN applications.
SIGNAL STABILIZATION OF LIMIT CYCLING TWO DIMENSIONAL MEMORY TYPE NON LINEAR SYSTEMS...
kartik Patra
Quenching of Limit cycles in nonlinear multivariable systems is a formidable task and for memory type systems in particular. The phenomenon of signal stabilization with random inputs has been investigated for 2x2 memory type nonlinear self oscillating systems. The results obtained developing a computer programmes for a novel digital simulation process have been substantiated by use of SIMULINK of MATLAB.
Non-polynomial Cubic Spline Method for the Solution of Second-order Linear Hyperbolic...
nazan caglar
Second-order linear hyperbolic equations are solved by using a new three level method based on non-polynomial spline in the space direction and Taylor expansion in the time direction. Numerical results reveal that three level method based on non-polynomial spline is implemented and effective.
Existence and Uniqueness Results for Hilfer-Generalized Proportional Derivatives with...
Idris Ahmed
In this paper, motivated by Hilfer and Hilfer-Katugampola fractional derivatives, we introduce new Hilfer-generalized proportional derivatives which interpolate the classical fractional derivatives of Hilfer, Riemann-Liouville, Caputo and generalized proportional fractional derivatives. We also present some important properties of the proposed derivatives. Furthermore, as an application, we show that this equation is equivalent to the Volterra integral equation and prove the existence, uniqueness of the solution to the Cauchy problem with the nonlocal initial condition. Finally, two examples were given to illustrate the results.
Validating prediction models for use in clinical practice: concept, steps and procedu...
Mohammad Chowdhury
Prediction models are extensively used in numerous areas including clinical settings where a prediction model helps to detect or screen high-risk subjects for early interventions to prevent an adverse outcome, assist in medical decision-making to help both doctors and patients to make an informed choice regarding the treatment, and assist in healthcare services with planning and quality management. There are two main components of prediction modeling: model development and model validation. Once a model is developed using an appropriate modeling strategy, its utility is assessed through model validation. Model validation provides a true test of a model's predictive ability when the model is applied on an independent data set. A model may show outstanding predictive accuracy in a dataset that was used to develop the model, but its predictive accuracy may decline radically when applied to a different dataset. In the era of precision health where disease prevention through early detection is highly encouraged, accurate prediction of a validated model has become even more important for successful screening. Different clinical practice guidelines also recommend incorporating only those prediction models in clinical practice that has demonstrated good predictive accuracy in multiple validation studies. Our purpose is to introduce the readers with the basic concept of model validation and illustrate the fundamental steps and procedures that are necessary to implement model validation.
Mathematical Analysis of Memristor through Fractal-Fractional Differential Operator:...
Kashif Ali Abro
The newly generalized energy storage component namely memristor is a fundamental circuit element so called universal charge-controlled mem-element is proposed for controlling the analysis and coexisting attractors. The governing differential equations of memristor are highly non-linear for mathematical relationships. The mathematical model of memristor is established in terms of newly defined fractal-fractional differential operators so called Atangana-Baleanu, Caputo-Fabrizio and Caputo fractal-fractional differential operator. A novel numerical approach is developed for the governing differential equations of memristor on the basis of Atangana-Baleanu, Caputo-Fabrizio and Caputo fractal-fractional differential operator. We discussed chaotic behavior of memristor under three criteria as (i) varying fractal order, we fixed fractional order, (ii) varying fractional order, we fixed fractal order and (ii) varying fractal and fractional orders simultaneously. Our investigated graphical illustrations and simulated results via MATLAB for the chaotic behaviors of memristor suggest that newly presented Atangana-Baleanu, Caputo-Fabrizio and Caputo fractal-fractional differential operator has generates significant results as compared with classical approach.
A new study on Riesz summability method
Şebnem YILDIZ
Quite recently, Bor \cite{Bor4} has proved a new result on weighted arithmetic mean summability factors of non-decreasing sequences, which includes some known results. In this paper, we extend his result to more general matrix summability method by using an almost increasing sequence and normal matrices in place of a positive non-decreasing sequence and weighted mean matrices, respectively.
Asymptotic profiles of the endemic equilibrium of a diffusive SIS epidemic system wit...
Zhang Jialiang
An SIS epidemic reaction-diffusion model with saturated incidence rate and spontaneous infection is considered. We establish the existence of endemic equilibrium by using a fixed point theorem. We mainly investigate the effects of diffusion and saturation on asymptotic profiles of the endemic equilibrium. Our analysis shows that the spontaneous infection can enhance persistence of infectious disease.
An algorithm for two-variable rational interpolation suitable for matrix manipulation...
Katerina Hadjifotinou
An algorithm for two-variable rational interpolation is developed. The algorithm is suitable for interpolation cases where neither the number of interpolation points to be used nor the final degrees of the rational interpolant are known a priori. Instead, a maximum degree for the interpolant's numerator and denominator is assumed, and, by testing the condition number of the interpolation system's matrix at each step, the necessary reductions are made so as to cope with non-normality and unattainability occasions. The algorithm can be used for applications of the Evaluation-Interpolation technique in matrix manipulations, such as finding the inverse of a matrix with elements rational functions in two variables. The algorithm avoids completely symbolic calculations, thus keeping the execution time very low even if the system size is large, and achieves accurate function recoveries for greater polynomial degrees than other bivariate rational interpolation methods.
Sign-changing solutions for the nonlinear Schrödinger equation with generalized Chern...
Liejun Shen
We study the existence and asymptotic behavior of least energy sign-changing solutions for the nonlinear Schr\"{o}dinger equation coupled with the Chern-Simons gauge theory \[ \left\{ \begin{gathered} -\Delta u+ \omega u+\lambda \sum_{j=1}^k\bigg( \frac{h^2(|x|)}{|x|^2}u^{2(j-1)} +\frac{1}{j}\int_{|x|}^\infty \frac{h(s)}{s}u^{2j}(s) ds \bigg)u= f(u) \ \ \text{in}\ \ \mathbb{R}^2 , \hfill \\ {\text{ }}u \in {H^1_r}({\mathbb{R}^2}), \hfill \\ \end{gathered} \right. \] where $\omega, ~\lambda >0$ are constants, $k\in \mathbb{N}^+$ and \[ h(s)=\int_0^s\frac{r}{2}u^2(r)dr. \] Under some suitable assumptions on $f\in C(\R)$, with the help of the Gagliardo-Nirenberg inequality, we apply the constraint minimization argument to obtain a least energy sign-changing solution $u_\lambda$ with precisely two nodal domains. Furthermore, we prove that the energy of $u_\lambda$ is strictly larger than two times of the ground state energy and analyze the asymptotic behavior of $u_\lambda$ as $\lambda\searrow0^+$. Our results cover and improve the existing ones for the gauged nonlinear Schr\"{o}dinger equation when $k\equiv1$.
A family of novel exact solutions to $(2+1)$-dimensional Boiti-Leon-Manna-Pempinelli...
Nadia Mahak
In this manuscript, some novel exact traveling wave solutions are constructed for $(2+1)$-dimensional Boiti-Leon-Manna-Pempinelli(BLMP) equation. The analytical techniques, namely extended rational sine-cosine method and extended rational sinh-cosh method are utilized for constructing the new solitary wave solutions of BLMP equation. The proposed techniques provides different types of solutions which are expressed in terms of singular periodic wave, solitary waves, bright solitons, dark solitons, periodic wave and kink wave solutions with specific values of parameters.
Solutions of sum-type singular fractional q-integro-differential equation with $m$-po...
Ali Ahmadian
In this study, we investigate the sum-type singular nonlinear fractional q-integro-differential $m$-point boundary value problem. The existence of positive solutions is obtained by the properties of the Green function, standard Caputo $q$-derivative, Riemann-Liouville fractional $q$-integral and the means of a fixed point theorem on a real Banach space $(\mathcal{X}, \|.\|)$ which has a partially order by using a cone $P \subset \mathcal{X}$. The proofs are based on solving the operator equation $\mathcal{O}_1 x + \mathcal{O}_2 x = x $ such that the operator $\mathcal{O}_1$, $\mathcal{O}_2$ are $r$-convex, sub-homogeneous, respectively and define on cone $P$. As applications, we provide an example illustrating the primary effects.
Advanced thermoelastic fractional heat conduction model with two-parameters and phase...
Amr Soleiman
The present paper treats with constructing a generalized two-fractional-parameter heat conduction model of thermoelasticity with multi-phase-lags. In this the new model, the Fourier heat conduction is replaced by a formula that is more general. In the limited cases, the proposed model reduces to several models of generalized thermoelasticity in the presence and absence of fractional derivatives. The model is then adopted to investigate the problem of a semi-infinite medium subjected a body force and exposed to decaying varying heat. Using the Laplace transform procedure, we obtain the analytical solution for various physical fields. Numerical calculations are depicted in tables and graphs to clarify the effects of the two fractional parameters, external force, and decaying parameter. Finally, the results obtained are discussed in detail and also confirmed with those in the previous literature.
Decay of solutions for a viscoelastic wave equation with acoustic boundary conditions
abita rahmoune
In this report we prove that the hypothesis on the memory term $g$ in \cite% {WenjunYunSun} can be modified to be $g^{\prime }(t)\leq -\zeta (t)g^{p}(t)$% , $t\geq 0,$ $1\leq p<\frac{3}{2}$ where $\zeta (t)$ provides% \begin{equation*} \zeta \left( 0\right) >0,\text{ }\zeta ^{\prime }(t)\leq 0,\text{ }% \int_{0}^{\infty }\zeta \left( s\right) ds=+\infty . \end{equation*}% So the optimal decay results are extended.
Dynamics of antibody levels: asymptotic properties
Katarzyna Pichór
We study properties of a piecewise deterministic Markov process modeling the changes in concentration of specific antibodies.The evolution of densities of the process is described by a stochastic semigroup. The long-time behaviour of this semigroup is studied. In particular we prove theorems on its asymptotic stability.
Multi-Robot System Dynamics and Path Tracking
Yousif Kheerallah
Leader detection and follow it are the main challenges in designing a leader-follower multi-robot system, in addition to the challenge of achieving the formation between the members while tracking the leader. The biological system is one of the main sources of inspiration for understanding and designing such multi-robot systems, especially, the aggregations that follow an external stimulus such as the population of Artemia. In this paper, a dynamic model of a multi-robot system following a spot of light, as a leader will design based on the collective motion behavior of the aggregations of Artemia. The kinematic model will derive based on observation of Artemia behavior under external stimuli, while the dynamic model will be derived based on the newton equation and its parameters will be evaluated by two methods: first one based on the physical structure of the mobile robot and the other based on Least Square Parameter Estimation method. Several experiments have been implemented in order to check the success of the proposed system, which are divided into four scenarios of simulation according to four trajectories, the straight line, circle, zigzag and compound path pattern. V-Rep software has been used for the simulation and results appeared the success of the proposed system and the high performance when robots are tracking the leader.
EXISTENCE OF ALMOST AUTOMORPHIC SOLUTION IN DISTRIBUTION FOR A CLASS OF STOCHASTIC IN...
Solym MANOU-ABI
We investigate a new class of stochastic integro-differential equations driven by L´evy noise. Particularly, based on Schauder's fixed point theorem, the existence of square-mean almost automorphic mild solution in distribution is obtained by using some conditions which are weaker than Lipschitz conditions. Our result can be seen as a generalisation of the result of [17] and [28] based on the compactness of solution semigroup operators of our slightly different stochastic model. We provide an example to illustrate ours results.
Well-Conditioned Galerkin Spectral Method for Two-Sided Fractional Diffusion Equation...
Xudong Wang
In this paper, we focus on designing a well-conditioned Glarkin spectral methods for solving a two-sided fractional diffusion equations with drift, in which the fractional operators are defined neither in Riemann-Liouville nor Caputo sense, and its physical meaning is clear. Based on the image spaces of Riemann-Liouville fractional integral operators on Lp([a, b]) space discussed in our previous work, after a step by step deduction, three kinds of Galerkin spectral formulations are proposed, the final obtained corresponding scheme of which shows to be well-conditioned—the condition number of the stiff matrix can be reduced from O(N2α) to O(Nα), where N is the degree of the polynomials used in the approximation. Another point is that the obtained schemes can also be applied successfully to approximate fractional Laplacian with generalized homogeneous boundary conditions, whose fractional order α ∈ (0, 2), not only having to be limited to α ∈ (1, 2). Several numerical experiments demonstrate the effectiveness of the derived schemes. Besides, based on the numerical results, we can observe the behavior of mean first exit time, an interesting quantity that can provide us with a further understanding about the mechanism of abnormal diffusion.
Study and assembly of Quadrotor UAV for telecommunication applications
hicham megnafi
UAVs are defined to be aerial vehicles controlled without humans onboard and are used in a large array of missions where tasks automation and human user protection are necessary. The use of UAVs is growing quickly increasing in many application domains as military surveillance, military fight, and frameworks monitoring…etc. UAVs can carry multiple devices in order to execute these functions like cameras, weapons, and equipment of chemical and biological detection. Nowadays, the development of UAVs became the centre of interest of many research workers who are looking to explore its fields of application. There is currently a large array of projects and research subjects emerging in this field. Our work revolves around an assembly and configuration of Quadrotor Drones in telecommunication inspections operations of transmission networks because of their easiest construction and their rapidly services. The user of the realized UAV can control and schedule the operation so intuitive thanks to its graphic control interface.
On establishing qualitative theory to nonlinear Boundary Value Problem of fractional...
Amjad Ali
In concerned article, we investigate a class of boundary value problem of non-linear fractional differential equations. The aforesaid work is committed to the existence, uniqueness and stability analysis for boundary value problem of fractional differential equation. We used the tools of analysis and fixed point theory to establish the conditions for deserted results. At the end, we provided two examples to illustrate the concerned problem.
ENCRYPTION THROUGH MOLECULAR GRAPHS
Shobana L
Encryption and decryption mostly emerge from mathematical discipline. Molecular graphs are models of molecules in which atoms are represented by vertices and chemical bonds by edges of a graph. Graph invariant numbers reflect certain structural features of a molecule that are derived from its molecular graph, known as topological indices. A topological index is a numerical descriptor of a molecule, based on a certain topological features of the corresponding molecular graph. One of the most widely known topological descriptor is the Wiener index. Wiener number is employed to predict boiling point, molar volumes and large number of physico-chemical properties of alkenes. In this paper a new technique is employed to encrypt and decrypt message through the topological index of molecular graph using the linear congruence equations.
← Previous 1 2 … 736 737 738 739 740 741 742 743 744 … 775 776 Next →
|
CommonCrawl
|
Bots & Agents
Probabilistic Programming
Time Series and Forecasting
Categorical Quantum Computing
Topological Computing
Anyons
Quantum Computing, Topological Quantum Computing
The world around us consists of two families; bosons and fermions. The two families coexist in harmony but have a very different behavior, among other things their statistical behavior is different. On a quantum level this is noticeable when you try to swap two identical members; if you swap two bosons no blip but if you swap fermions you get a minus sign. The minus sign is, ultimately, the reason why our universe does not collapse (via the Pauli exclusion principle). So, one can effectively split the universe in a +1 family and a -1 family.
Things get weird when you travel from one dimension to another. It's a scientific curiosity (pointed out by Hawking and Penrose) that one cannot have the same organic life in 2D as we have in 3D. Mammals fall apart in 2D.
A similar curiosity is that a 5D-being can escape any prison in 4D via a translation in time. The playing around with dimensions is a common technical trick in physics; dimensional regularization, dimensional analysis and so on. At the same time, certain models require higher dimensions, like superstrings and Kaluza-Klein theories. There are also interesting effects in nature when 3D things are constrained to lower dimensions and the effect of forcing electrons to move in a plane under intense magnetic field is one of them. The fractional quantum Hall effect creates via a collective effect quasi-particles (quarticles) which behave as if they do not belong to either fermions or bosons but something in between, hence the name any-ons. Rather than having +1 or -1 the particle get an arbitrary complex number $e^{i\theta}$ when swapped.
The reason that one has anyons in 2D but not in 3D can be explained in various ways. The easiest conceptual explanation is that in three dimensions one can retract any loop to a point, even if there are point-like obstacle in space. Not so in a plane. Look at the picure below first in 3D and then in 2D; in the first case you can move the loop off the plane away from the point p but not in 2D.
One says that the the homotopy group $\pi_1$ is different. From a different perspective the issue can be casted in group theoretic language; the rotations in three or more dimensions has a double covering leading to the first homotopy group equal to $\mathbb{Z}_2$. This directly translates to the existence of fermions and bosons. On the other hand, the first homotopy group of the rotations in two dimensions is equal to $\mathbb{Z}$. When looking at permutations (i.e. swapping things) the permutation group in two dimensions becomes the first braid group while in three or more dimensions it has two irreducible one-dimensional representation (+1 and -1).
Identical things means that swapping them should have no effect. That's only true in a classical world however, the Aharonov-Bohm effect shows that in quantum world it can induced a phase shift. If you look at the swapping in the image below you will understand that all of these changes have no effect in three or more dimensions because the paths can be retracted to a point while in 2D the complementary point is a topological obstruction.
The total wave function has a factor $\Phi(\phi)$ which depends on the angle. In fact, from the point of view of the complementary particle the swapping is a loop of one around the other. The additivity of the factor $\Phi(\phi)$ meands $\Phi(\phi_1)\Phi(\phi_2) = \Phi(\phi_1 + \phi_2)$ and this only works with $\Phi(\phi) = e^{i\eta\phi}$ with $\eta$ an arbitrary real number. This matches with the Aharonov-Bohm factor of course. The arbitrary complex number (with modulus one) is what makes anyons special, you can have a swapping behavior that is neither bosonic nor fermionic but anything in between.
In order to describe an anyonic system it's obvious that the usual Lagrangians (say, Maxwell or Klein-Gordon) will not do since they do not reproduce anyonic stats. At the same time there are not that many relativistic gauge-invariant Lagrangians out there. The Chern-Simons does the trick and we'll show next how Abelian anyons make it happen.
Take the Abelian Chern-Simons dynamics bound to a matter current $J$:
$$\mathcal{L}_{CS} = \frac{1}{2} \kappa\epsilon^{\alpha\beta\rho} A_{\alpha} \partial_{\beta} A_{\rho} – A_{\mu} J^\mu $$
This does not seem to be gauge invariant but if you assume the usual boundary conditions and $\partial_\mu J^\mu = 0$ conservation it's straighforward to see that $A_\mu\mapsto A_\mu+\partial_\mu \Lambda$ induces
$$\frac{1}{2}\kappa\,\partial_\mu(\Lambda \epsilon^{\alpha\beta\rho}\partial_\beta A_\rho).$$
The Euler-Lagrange dynamics is
$$\frac{\delta \mathcal{L}_{CS}} {\delta A_\mu} = 0 = \frac{1}{2}\kappa\epsilon^{\alpha\beta\rho} F_{\beta\rho} – J^\alpha$$
or with the help of $\epsilon^{\alpha\beta\sigma}\epsilon_{\alpha\mu\nu} = \delta^\beta_\mu\delta^\sigma_\nu – \delta^\sigma_\mu\delta^\beta_\nu$
$$F_{\mu\nu} = \frac{\kappa}{2}\,\epsilon_{\mu\nu\rho}J^\rho.$$
If you set $J = (\rho, \bar{J})$ and use the last equation in 2D you get
$$\begin{align}\rho & = \kappa B \\ J^i & = \kappa\epsilon^{ij} E_j\end{align}$$
with $B$ the magnetic field and $E$ the electric field. This result is a bit unusual since it says that the magnetic strength is proportional to the electric charge. This, however, is precisely what one needs to describe anynionic dynamics because if use electrons constrained to 2D you get a magnetic flux off the plane and through this a phase shift when looping around one another. So, let's set
$$\rho(x,t) = e\sum_{a\in N} \delta(x – x_a(t))$$
describing $N$ particles with worldline $x_a$. The current density is correspondingly
$$\bar{j}(x,t) = e\sum_{a\in N} \dot{x}_a\,\delta(x – x_a(t))$$
and the (total) action of the system becomes
$$ S = \frac{m}{2}\sum_a\int dt \,\bar{v}_a^2 +\frac{\kappa}{2} \int d^3x \,\epsilon^{\mu\nu\rho}\, A_{\mu} \partial_{\nu} A_{\rho} – \int d^3x \, A_{\mu} J^{\mu}.$$
As it stands we are still free to pick out any gauge we like, so let's take $A_0=0$ and $\nabla \cdot A =0$. Recall that in two dimensions the Green function is
$$\Delta\left(\frac{1}{2\pi}\log|\bar{x}-\bar{y}|\right) = \delta^{(2)}(\bar{x}-\bar{y})$$
so the vector potential can be resolved to
$$A^i(\bar{x},t) = \frac{1}{2\pi\,\kappa}\int d^2y \,\epsilon^{ij}\frac{x^j-y^j}{|\bar{x}-\bar{y}|^2}\rho(\bar{y},t)$$
and with the density as above this reduces to
$$A^i(\bar{x},t) = \frac{e}{2\pi\kappa}\sum_a\epsilon^{ij}\frac{x^j-x^j_a(t)}{|\bar{x}-\bar{x}_a(t)|^2}.$$
From this the magnetic field can be computed and is indeed as seen earlier
$$ B(\bar{x}_a) = \frac{e}{\kappa} \sum_{a\neq b} \delta(\bar{x}_a- \bar{x}_b)$$
and each anyon carries a flux $\Phi = e/\kappa$. Finally, the phase shift from moving one anyon around another is given by the Wilson loop
$$\exp ie\oint A.dx = \exp\left(\frac{ie^2}{\kappa}\right).$$
This, in a nutshell, shows that Chern-Simons theory is the bridge between QFT and anyons.
The details of how anyons behave are however not necessary towards quantum computations. The situation is a bit similar to computing scattering amplitudes from Feynman diagrams rather than from first principles. In fact, how do anyons combine and interact? What happens when anyons split?
At this point we'll assume that
there is a vacuum which we denote by 1
anyon and anti-anyon pairs can be created out of the vacuum (initialization)
interactions occurs in the shape of braid (a quantum algorithm)
the final state can be measured (result)
As a toy-example, imagine the anyon anti-anyon pair $a, \bar{a}$. In standard QFT the following would be the same
but in the case of anyons the twisting has an effect on the end-result. Also, in the following situation; can one map the results?
Recall that the n-dimensional Artin braid group $B_n$ is defined as
$$\sigma_i\sigma_{i+1}\sigma_i= \sigma_{i+1}\sigma_i\sigma_{i+1} \\ \sigma_i\sigma_j =\sigma_j\sigma_i \;\text{if}\;|i-j|>1.$$
The first relation is called the Yang-Baxter relation and describes the equivalence of the diagram below
At this point at quantum algorithm can be seen as a representation of the braid group or as a representation of anyonic interactions. Let's focus on the braid group first.
If we denote by $R$ the swap of two particles the Yand-Baxter relation can be seen as an expression relating direct products:
$$(R\otimes 1)(1\otimes R)(R\otimes 1) = (1\otimes R)(R\otimes 1)(1\otimes R).$$
The general solution of the Yang-Baxter relation is not easy but in the case of 4×4 matrices the generic solution can be one of the three matrices:
where $a, b, c, d$ are unit complex numbers. If you take the standard basis ${|00\rangle\, |01\rangle\, |10\rangle\, |11\rangle}$ you get for example
$$\begin{align}
R|00\rangle &= \frac{1}{\sqrt{2}}|00\rangle – \frac{1}{\sqrt{2}}|11\rangle \\
R|01\rangle &= \frac{1}{\sqrt{2}}|01\rangle + \frac{1}{\sqrt{2}}|10\rangle \\
R|10\rangle &= -\frac{1}{\sqrt{2}}|01\rangle + \frac{1}{\sqrt{2}}|10\rangle \\
\end{align}$$
which can be recognized as the Bell basis via an Hadamard transform. The general theorem this little computation suggests is the following theorem:
the $R, R', R"$ are universal quantum gates.
The proof of this is not complex, see the wonderful articles by Louis Kauffman. The essence is this: there is an intimate connection between representations of the braid group and quantum algorithms (in the sense that such an algorithm is just a sequence of quantum gates). An even more fascinating result in this direction is the Brylinski theorem which says that any universal quantum gate has to be entangling and vice-versa. In other words, quantum algorithms are entangling qubits. Entanglement is the essence of quantum algorithms. Looking at famous examples like the Shor algorithm it's quite obvious that the magic happens because of entanglement.
Returning to anyons and their interaction one can formalize general interactions by means of basic interactions as follows. The two pair creation processes are related by a linear mapping $R$:
The intertwinning of three anyons as in the following diagrams are equivalent:
The double-splitting of anyons can also be mapped
This is called a recoupling and the theory behind this is called recoupling theory. If you look at the recoupling figure and view it as a mirroring of the tree you will have no difficulty to recognize that if you use this on a tree with an extra branch you get the so-called pentagon identity:
Finally, if you combine the recoupling with the intertwinning identity you get a more involved hexagon identity:
Just take a piece of paper and go through the steps one by one, it's a lot of fun. Well, behind the fun there is a serious body of mathematics related to angular momenta, cobordism, category theory and spin networks.
Let's look at a concrete example of recoupling, the so-called Fibonacci anyons.
A Fibonacci anyon has only two types of interactions:
two anyons merge into the vacuum: $a,a\mapsto 1$
two anyons merge into a new one: $a,a\mapsto a.$
These rules are usually called fusion rules and one can summarize it in one line:
$$a\times a = 1 \times a$$
where you have to read it like you read the decomposition of direct group products $3\otimes 3 \otimes 3 = 10 \oplus 8 \oplus 8 \oplus 1.$ How can one obtain an anyon from a fusion process using the Fibonacci rule?
0: if there is nothing it gives nothing
1: if there is one anyon you get one anyon
2: with two anyons there is only one way to get an anyon
3: in this case there are two ways to obtain an anyon (see figure)
If you continue like this you notice that the next level depends on the previous one and the combinatorics reproduce the Fibonacci series, hence the name.
There is also another popular fusion model called the Ising anyonic model but the thinking is very much like the Fibonacci. See "A short introduction to Fibonacci anyon models" for more details.
In the next article we'll go deeper on the relation between Chern-Simons and the Jones polynomial.
January 4, 2018 /by Orbifold
https://orbifold.net/default/wp-content/uploads/2018/01/symmetric2.png 1042 2709 Orbifold /default/wp-content/uploads/2016/11/OrbifoldNextLogo.png Orbifold2018-01-04 15:34:022018-12-08 15:30:14Anyons
© Copyright 2005-2018 - Orbifold Consulting |Terms | Privacy
Polynomials from quantum loops Some Chern-Simons theory
|
CommonCrawl
|
Marking And Breaking Sticks
A person makes two marks - randomly and independently - on a stick, after which the stick is broken into $n$ pieces. What is the probability that the two marks are found on the same piece?
Compare two cases: when the pieces are equal and when the division is random.
As usual, when no distribution is specified the word "random" refers to the uniform distribution. "Independently" means independent of any previous action. This is especially important in the the second part of the problem. To avoid ambiguity, assume that, prior to breaking the stick, the $n-1$ marks are made randomly and independently of all the marks already made.
For the first part, think of the probability of the second mark falling onto the piece which contains the first mark. The second part is rather combinatorial. In all there are $n+1$ marks; of interest are those markings in which the first two are located successively, with no "break" marks between them.
For the case of equal pieces, one of the marks ought to be located on one of the pieces. The second mark is located on the same piece with the probability of $1/n.$
For the random lengths, imagine that first $n-1$ marks are placed at the break points, making the total number of marks $n+1.$ There are ${n+1\choose 2}$ ways to pick $2$ marks out of $n+1.$ Of interest are those in which two original marks follow each other, with no "break" marks in-between. In other words, if the marks are numbered from $1$ through $n+1,$ we are interested in the distributions where the original marks bear successive numbers: $1$ and $2,$ or $2$ and $3,\ldots,$ or $n$ and $n+1.$ There are $n$ such cases (out of ${n+1\choose 2}),$ implying that the thought probability is
$\displaystyle\frac{n}{{n+1\choose 2}}=\frac{n\cdot 2!(n-1)!}{(n+1)!}=\frac{2}{n+1},$
showing a rather remarkable increase compared to the case of equal pieces.
Question to ponder
We actually had two problems, each with its own solution. It is obvious that the solution of the first problem could not apply to resolve the second one. However, it is worth giving a thought to the question whether the solution to the second problem could not be used to solve the first one. If it could, then the two answers conflict with each other. If it could not, then it is reasonable to inquire, Why?
Paul Nahin, Will You Be Alive In 10 Years From Now?, Princeton University Press, 2013 (36-41)
Geometric Probability
Geometric Probabilities
Are Most Triangles Obtuse?
Eight Selections in Six Sectors
Three Random Points on a Circle
Barycentric Coordinates and Geometric Probability
Stick Broken Into Three Pieces (Trilinear Coordinates)
Stick Broken Into Three Pieces. Solution in Cartesian Coordinatess
Bertrand's Paradox
Birds On a Wire (Problem and Interactive Simulation)
Birds on a Wire: Solution by Nathan Bowler
Birds on a Wire. Solution by Mark Huber
Birds on a Wire: a probabilistic simulation. Solution by Moshe Eliner
Birds on a Wire. Solution by Stuart Anderson
Birds on a Wire. Solution by Bogdan Lataianu
Buffon's Noodle Simulation
Averaging Raindrops - an exercise in geometric probability
Averaging Raindrops, Part 2
Rectangle on a Chessboard: an Introduction
Random Points on a Segment
Semicircle Coverage
Hemisphere Coverage
Overlapping Random Intervals
Random Intervals with One Dominant
Points on a Square Grid
Flat Probabilities on a Sphere
Probability in Triangle
|Contact| |Front page| |Contents| |Algebra| |Probability|
|
CommonCrawl
|
M5-S8: Solubility Product and Predicting Formation of Precipitate
Derive equilibrium expressions for saturated solutions in terms of Ksp and calculate the solubility of an ionic substance from its Ksp value
Predict the formation of a precipitate given the standard reference values for Ksp
Solubility Product Ksp
Dissolution of ionic compounds forms an equilibrium between dissolved ions and undissolved solids.
Dissolution of lead (II) fluoride:
$${\rm PbF}_{2(s)}\rightleftharpoons {\rm Pb}_{(aq)}^{2+}+{2F}_{(aq)}^-$$
The expression of equilibrium constant or solubility product for this reaction is:
$$K_{sp}=[{Pb}^{2+}][F^-]^2$$
· The concentration of solids (ionic compounds) is not included in the expression because it stays relatively constant compared to the dissolved ions. The concentrations of dissociated ions are always expressed in mol L-1 (M).
· Solids and liquids are also heterogenous in that they do not disperse throughout the solution.
· At equilibrium, the rates of dissolution and re-formation of ionic compound are equal. This is known as the point of saturation as no more ions can dissolve in the solvent. Adding more solutes to a solution at saturation point (equilibrium) will cause precipitation to happen.
Figure: Dissociation of solid PbI2 in a beaker.
Solubility product constant varies between ionic compounds. It indicates how far the dissolution proceeds at equilibrium.
A large solubility product value indicates high solubility as a relatively large quantity of ions are dissolved at equilibrium.
Vice versa, a small solubility product value indicates low solubility as relatively small quantity of ions are dissolved at equilibrium.
Table: Solubility product expressions of selected few ionic compounds.
Dissolution reaction
Solubility product
$${\rm MgCO}_{3(s)}\rightleftharpoons{\rm Mg}_{(aq)}^{2+}+{\rm CO}_{3(aq)}^{2-}$$ $$K_{sp}=[{\rm Mg}^{2+}][{\rm CO}_3^{2-}]$$
Iron (II) hydroxide
$${\rm Fe(OH)}_{2(s)}\rightleftharpoons{\rm Fe}_{(aq)}^{2+}+{\rm 2OH}_{(aq)}^-$$ $$K_{sp}=[{\rm Fe}^{2+}][{\rm OH}^-]^2$$
$${{\rm Ca}_3(PO_4)}_{2(s)}\rightleftharpoons{\rm 3Ca}_{(aq)}^{2+}+{\rm 2PO}_{4(aq)}^{3-}$$ $$K_{sp}=[{\rm Ca}^{2+}]^3[{\rm PO}_4^{3-}]^2$$
The following table shows the solubility product constants of selected ionic compounds at 25ºC.
Name, formula
Aluminium hydroxide, Al(OH)3
`3 xx 10^(-34)`
Lead(II) fluoride, PbF2
`3.6 xx 10^(-8)`
Silver sulfide, Ag2S
`8.0 xx 10^(-48)`
Cobalt(II) carbonate, CoCO3
Rank the ionic compounds in the table in order of increasing solubility.
Solubility of Ionic Compounds
The solubility of an ionic compound is usually expressed as the amount of solute dissolved per volume of solvent. This can be either expressed as mass per volume e.g. g/100 mL or as molarity, mol L-1 (M) .
When solubility of an ionic compound is known, its Ksp can be determined. For example, at 25ºC, the solubility of PbF2 is found to be 0.64 g/L. Calculate the Ksp of PbF2.
Step 1: Convert solubility into mol L-1 by dividing by the molar mass of the ionic compound. In this case, the molar mass of PbF2 = 245.2 g mol-1
$$n=\frac{0.64\ g}{245.2\ g\ {\rm mol}^{-1}}$$
$$n=0.0026\ mol$$
Therefore, solubility = 0.0026 mol L-1 (M)
Step 2: Write an equation representing the dissociation of the ionic compound.
$${\rm PbF}_{2(s)}\rightleftharpoons{\rm Pb}_{(aq)}^{2+}+{\rm 2F}_{(aq)}^-$$
Step 3: Write an expression for the solubility product of the ionic compound and find the concentration of each ion.
$$K_{sp}=[{\rm Pb}^{2+}][{\rm F}^-]^2$$
$$K_{sp}=[0.0026][2×0.0026]^2$$
$$K_{sp}=7.1\times{10}^{-8}$$
Using Ksp values from the data sheet, solubility of various ionic compounds can be determined.
$${\rm PbI}_{2(s)}\rightleftharpoons{\rm Pb}_{(aq)}^{2+}+{\rm 2I}_{(aq)}^-$$
Step 2: Write an expression for the solubility product of the ionic compound.
$$K_{sp}=[{\rm Pb}^{2+}][{\rm I}^-]^2$$
Step 3: Let s be the solubility of the ionic compound. Assign concentration of ions in terms of s.
s s 2s
Step 4: Substitute Ksp (from data sheet) and concentrations of ions in terms of s into Ksp expression.
$$9.8\times{10}^{-9}=[s][2s]^2$$
Step 5: Solve for solubility (s).
$$9.8\times{10}^{-9}=4s^3$$
$$s=1.3\times{10}^{-3} mol L^{-1}$$
Determining Ksp from solubility
(a) Lead(II) sulfate (PbSO4) is a key component in lead acid car batteries.
(b) Its solubility in water at 25°C is 4.25 `xx` 10-3 g/100 mL. What is the Ksp of PbSO4?
(c) In terms of their constituent ions, explain why lead(II) fluoride has a much greater solubility than lead(II) sulfate.
The solubility of silver chloride in water at 25ºC is 1.34 `xx` 10-5.What is the Ksp of silver chloride?
The solubility of calcium hydroxide in water at 25ºC is 0.074 g/100 mL. What is the Ksp of calcium hydroxide?
Determining solubility from Ksp
Ksp of barium sulfate is 11.1 `xx` 10-10. What is the solubility of barium sulfate in moles per litre? What about in grams per mL?
Ksp of iron(III) hydroxide is 14.4 `xx` 10-38. What is the solubility of iron(III) hydroxide in moles per litre? What about in grams per mL?
Quotient Solubility Product Q
Similar to equilibrium quotient Q, the solubility quotient is the value assigned to a dissociation reaction when it is not at equilibrium.
When Q < Ksp, the dissociation system has not reached the point of saturation, it is unsaturated. This means more ions will dissolve and become hydrated by solvent molecules e.g. water.
When Q > Ksp, the dissociation system has exceeded the point of saturation, it is saturated. This means precipitate will form as no more ions can be hydrated by solvent molecules.
Important: high Ksp indicates more ions can dissolve before saturation (equilibrium) is reached. This means high solubility. In contrast, low Ksp indicates less ions can dissolve before saturation is reached, low solubility.
Determining formation of precipitate
0.100 g of BaSO4 is added to 500.0 mL of water at 25 ºC. Calculate the Qsp of this barium sulfate solution and thus determine whether a precipitate will form.
Will lead(II) chloride precipitate when 50 mL of 0.10 M Pb(NO3)2 solution is mixed with 50 mL of 0.10 M NaCl solution? Support your answer with a balanced chemical equation and calculations.
Equal volumes of 0.25 mol L–1 solutions of silver nitrate and calcium chloride are mixed at 25 ºC. Predict whether a precipitate will form. Support your answer with calculations.
Previous section: Dissolution of Ionic Compound
Next section: Common Ion Effect
|
CommonCrawl
|
Journal of Classification
pp 1–27 | Cite as
Suboptimal Comparison of Partitions
Jonathon J. O'Brien
Michael T. Lawson
Devin K. Schweppe
Bahjat F. Qaqish
First Online: 11 July 2019
The distinction between classification and clustering is often based on a priori knowledge of classification labels. However, in the purely theoretical situation where a data-generating model is known, the optimal solutions for clustering do not necessarily correspond to optimal solutions for classification. Exploring this divergence leads us to conclude that no standard measures of either internal or external validation can guarantee a correspondence with optimal clustering performance. We provide recommendations for the suboptimal evaluation of clustering performance. Such suboptimal approaches can provide valuable insight to researchers hoping to add a post hoc interpretation to their clusters. Indices based on pairwise linkage provide the clearest probabilistic interpretation, while a triplet-based index yields information on higher level structures in the data. Finally, a graphical examination of receiver operating characteristics generated from hierarchical clustering dendrograms can convey information that would be lost in any one number summary.
Classification Clustering Sensitivity Specificity Triplet index Hierarchical receiver operating characteristic
The authors thank the National Cancer Institute for supporting this research through the training grant "Biostatistics for Research in Genomics and Cancer," NCI grant 5T32CA106209-07 (T32), and the National Institute of Environmental Health Sciences for supporting it through the training grant T32ES007018.
A.1 Proofs
Here, we present the proof from Section 2.2 that when G = 2 the clustering induced by the optimal classifier is equivalent to the optimal clustering.
We identify the two classes with the symbols 0 and 1. Any partition of the set N := {1,…,n} into two (or fewer) subsets corresponds to exactly two classifications which are mirror images of each other. The term "mirror-image" refers to flipping 0's to 1's and 1's to 0's. The optimal classifier is based on the posterior probabilities pi := P(Yi = 1|xi),i = 1,…,n. Observation i is classified as a "1" if pi > 0.5 and to "0" if pi < 0.5. The case pi = 0.5 is ignored since the xi's are continuous (by assumption) and that event has probability 0. For a given data set X, we refer to this as the optimal classification. This optimal classification induces a partition of N. Our task is to show that the posterior probability of that partition exceeds that of any other partition (into two or fewer subsets).
Define ai = max(pi, 1 − pi) and bi = 1 − ai. The posterior probability of the optimal classification is \(A = {\prod }_{i=1}^{n} a_{i}\). The mirror-image classification has posterior probability \(B = {\prod }_{i=1}^{n} b_{i}\). Both the optimal classifier and its mirror image induce the same unique partition induced by no other classification. The posterior probability of that partition is then Q = A + B. Now we wish to prove that Q is larger than the the posterior probability of any other partition. Any alternative partition can be viewed as one induced by a modification of the optimal classification in which the classification of observation i is flipped for each i ∈ S, where S is some proper subset of N. We do not allow S = N since that leads to the same partition. We use Sc to denote the complement of S in N. Define
$$ A_{1} = \prod\limits_{i \in S^{c}} a_{i}, \quad A_{2} = \prod\limits_{i \in S} a_{i}, \quad B_{1} = \prod\limits_{i \in S^{c}} b_{i}, \quad B_{2} = \prod\limits_{i \in S} b_{i}. $$
Note that A1 > B1 and A2 > B2 because ai > bi for each i. Now, clearly, A = A1A2,B = B1B2,Q = A1A2 + B1B2, and the posterior probability of the alternative partition is Q∗ = A1B2 + B1A2. Our aim is to prove that Q > Q∗.
Define λ = A2/(A2 + B2) and note that 1 > λ > 0.5 since A2 > B2 > 0. Since A1 > B1, we obtain the following obvious statement about convex combinations of A1 and B1,
$$ \lambda A_{1} + (1 - \lambda) B_{1} > (A_{1} + B_{1} ) / 2 > (1 - \lambda) A_{1} + \lambda B_{1}. $$
Multiplying the leftmost and rightmost terms in the above inequality by (A2 + B2), and using the relationships λ(A2 + B2) = A2 and (1 − λ)(A2 + B2) = B2, we obtain
$$ A_{1} A_{2} + B_{1} B_{2} > A_{1} B_{2} + B_{1} A_{2}, $$
which is simply Q > Q∗, the desired result.
A.2 Surprising Examples
In this section, we show two examples that provide informative lessons regarding optimal criteria. Using the notation defined in Section 2, we consider a mixture of two normals; a N(0, 1) with probability π1 and a N(μ,σ2) with probability π2 = 1 − π1. The parameters are: (π1,μ,σ2). Since a linear transformation leaves the problem essentially unchanged, it suffices to consider μ > 0,σ2 ≥ 1 and π1 ∈ (0, 1).
Now we will derive optimal classifiers for two special cases, i.e., the ones that assign \(\hat {g}_{i} = \hat {g}(x_{i}) = 1\) if π1f1(xi) > π2f2(xi), and \(\hat {g}_{i} = 2\) otherwise. We will use ϕ(.,a,b) to denote the pdf of the normal distribution with mean a and variance b, and Φ() to denote the cdf of the standard normal distribution.
A.2.1 The Relationship Between K and G
One surprising result from our exploration is that optimal classification can occur when the number of true groups has been misspecified. Here, we show how this can occur.
Suppose σ2 > 1. The ratio
$$ \frac{f_{1}(x)}{f_{2}(x)} = \sigma \exp\left( \frac{\mu^{2}}{2(\sigma^{2}-1)} \right) \sqrt{2 \pi \tau^{2}} \phi\left( x, \theta, \tau^{2} \right) $$
has a maximum of
$$ M := {\sigma} \exp\left( \frac{\mu^{2}}{2(\sigma^{2}-1)} \right). $$
In the above,
$$ \theta := \frac{-\mu}{\sigma^{2} - 1}, \quad \tau^{2} := \frac{\sigma^{2}}{\sigma^{2} - 1}. $$
Note that M > 1 since σ2 > 1. So if M < π2/π1 then π1f1(x) < π2f2(x) and \(\hat {g}(x) = 2\) for all x.
This can arise only if π2 > 1/2. An example: π2 = 0.75,μ = 1.5,σ2 = 4,M = 2.91 < π2/π1 = 3. In this case, the optimal classifier assigns all observations to a single class, even though the model with two classes and all its parameters are completely known. It shows that accurate estimation of the number of classes is not a requirement for optimal classification.
The fact that optimal solutions can occur with the number of classes misspecified might cause some concern for researchers who design algorithms for selecting the number of classes. Popular methods for achieving this objective include the Gap Statistic (Tibshirani et al. 2001) and the Silhouette Score (Kaufman and Rousseeuw 2005).
A.2.2 Compactness
The above example also demonstrates that optimally assigned classes need not be compact. If M ≥ π2/π1 then \(\hat {g}(x) = 1\) if
$$ |x - \theta| < \sqrt{ 2 \tau^{2} \log \frac{M \pi_{1}}{\pi_{2}} }. $$
Otherwise, we assign \(\hat {g}(x) = 2\). That is, x values within \( \sqrt { 2 \tau ^{2} \log \frac {M \pi _{1}}{\pi _{2}} } \) from 𝜃 are assigned to class 1. More extreme values, above and below 𝜃 are assigned to class 2. Hence, the region assigned to class 2 is a union of two disjoint sets. This shows that the notion that observations close together should be placed in the same class is generally false.
A.3 Theoretical Derivation of the Pairwise Indices
Now we give general expressions for the indices and the probabilities of relevant events. Let πg = P(Ai = g),g = 1,⋯ ,G, and let π denote the column vector (π1,⋯ ,πG)⊤. Of course, \({\sum }_{g=1}^{G} \pi _{g} = 1\).
For two independent observations, say observations 1 and 2, \(P(A_{1} = A_{2} ) = {\sum }_{g=1}^{G} {\pi _{g}^{2}} = \pi ^{\top } \pi = ssq(\pi )\), ssq denotes the sum of squares.
Define \(b_{gj} = P(\hat {g}_{i} = j | A_{i} = g)\) for g = 1,⋯ ,G; j = 1,⋯ ,K. The bgj's are collected into the G × K matrix B.
The marginal distribution of \(\hat {g}_{i}\) is given by
$$ \begin{array}{@{}rcl@{}} P(\hat{g}_{1} = j ) &=& \sum\limits_{g=1}^{G} P(A_{1} = g) P(\hat{g}_{1} = j | A_{1} = g) \\ &=& \sum\limits_{g=1}^{G} \pi_{g} b_{gj} = (B^{\top} \pi)_{j}. \end{array} $$
That is, the K-vector B⊤π is the pmf of \(\hat {g}_{i}\), and
$$ P(\hat{g}_{1} = \hat{g}_{2}) = \pi^{\top} B B^{\top} \pi = ssq(B^{\top} \pi). $$
The joint probability of true linkage and its detection in a sample is
$$ \begin{array}{@{}rcl@{}} P(\hat{g}_{1} = \hat{g}_{2} , A_{1} = A_{2}) &=& \sum\limits_{g=1}^{G} \sum\limits_{j=1}^{K} P(\hat{g}_{1} = \hat{g}_{2} =j, A_{1} = A_{2} = g) \\ &=& \sum\limits_{g=1}^{G} \sum\limits_{j=1}^{K} P(A_{1} = A_{2} = g) P(\hat{g}_{1} = \hat{g}_{2} =j | A_{1} = A_{2} = g) \\ &=& \sum\limits_{g=1}^{G} \pi_{g} \sum\limits_{j=1}^{K} b_{gj}^{2} \\ &=& trace(B^{\top} diag({\pi_{g}^{2}}) B) \\ &=& trace(C^{\top} C ), \end{array} $$
where C = diag(πg)B.
The relevant probabilities can be displayed in a 2 × 2 table as shown in Table 2. Now we easily obtain
$$ \begin{array}{@{}rcl@{}} \gamma_{2} &=& ssq(B^{\top} \pi) - trace(C^{\top} C ), \\ \gamma_{3} &=& ssq(\pi) - trace(C^{\top} C ), \\ \gamma_{1} &=& 1 - ssq(\pi) - ssq(B^{\top} \pi) + trace(C^{\top} C ), \\ CSENS &=& \gamma_{4} / (\gamma_{3} + \gamma_{4}), \\ CSPEC &=& \gamma_{1} / (\gamma_{1} + \gamma_{2}), \\ CPPV &=& \gamma_{4} / (\gamma_{2} + \gamma_{4}), \\ CNPV &=& \gamma_{1} / (\gamma_{1} + \gamma_{3}). \\ \end{array} $$
Rand's paremeter is γ2 + γ4. The kappa parameter is
$$ \kappa = \frac{2 \delta} {1 - \gamma_{1} - \gamma_{4} + 2 \delta}, $$
where δ = γ1γ4 − γ2γ3.
The 2 × 2 table of probabilities used to compute the clustering indices
\(\hat {g}_{1} \ne \hat {g}_{2}\)
\(\hat {g}_{1} = \hat {g}_{2}\)
A1≠A2
γ 1
1 − ssq(π)
A1 = A2
γ4 = trace(C⊤C)
ssq(π)
1 − ssq(B⊤π)
ssq(B⊤π)
A.4 Tables
A.4.1 Mixed Tumor Data
Below are the full index values for the mixed tumor example studied in Appendix. Indices are computed across clustering algorithms and the number of assumed subgroups.
CSENS
Average linkage HC
Single linkage HC
Complete linkage HC
CSPEC
CSENS + CSPEC
CPPV
CNPV
CPPV + CNPV
A.4.2 Lung Cancer Subtype Data
Below are the full index values for the lung cancer example studied in Appendix. Indices are computed across clustering algorithms and the number of assumed subgroups. We also show the pairwise kappa index for both datasets in Fig. 7. Notice that these plots are almost identical to the sensitivity plus specificity plots but have clear differences when compared with the triplet kappa.
Pairwise kappa for the mixed tumor dataset (Hoshida et al. 2007) (top panel) and lung cancer dataset (Bhattacharjee et al. 2001) (bottom panel). There are noticeable differences between this plot and the one presented in the paper for triplet kappa. This is highly suggestive that the triplet based index is capable of picking up information from higher order structures in the data
CPPV+CNPV
6.5 Distribution of Hierarchical Clustering Assignments
The below contingency tables show distribution of subtypes in each cluster. Hierarchical clustering has a tendency to put outliers into separate clusters. Consequently, we see all of the subgroups being placed into the same cluster when K = 6. As more clusters are allowed we start to see some separation among the subtypes. This corresponds to the improved performance indices discussed in the paper.
Contingency table of cluster assignments for the mixed tumor data when K = 6
Pathology/cluster
Aidos, H., Duin, R., Fred, A. (2013). The area under the ROC curve as a criterion for clustering evaluation. In ICPRAM 2013 - proceedings of the 2nd international conference on pattern recognition applications and methods (pp. 276–280).Google Scholar
Albatineh, A.N., Niewiadomska-Bugaj, M., Mihalko, D. (2006). On similarity indices and correction for chance agreement. Journal of Classification, 23, 301–313.MathSciNetCrossRefzbMATHGoogle Scholar
Arbelaitz, O., Gurrutxaga, I., Muguerza, J., Pérez, J.M., Perona, I. (2013). An extensive comparative study of cluster validity indices. Pattern Recognition, 46, 243–256.CrossRefGoogle Scholar
Ashburner, M., Ball, C.A., Blake, J.A., Botstein, D., Butler, H., Cherry, J.M., Davis, A.P., Dolinski, K., Dwight, S.S., Eppig, J.T., Harris, M.A., Hill, D.P., Issel-Tarver, L., Kasarskis, A., Lewis, S., Matese, J.C., Richardson, J.E., Ringwald, M., Rubin, G.M., Sherlock, G. (2000). Gene ontology: tool for the unification of biology. Nature Genetics, 25(1), 25–29.CrossRefGoogle Scholar
Baulieu, F. (1997). Two variant axiom systems for presence/absence based dissimilarity coefficients. Journal of Classification, 14(1), 159–170.MathSciNetCrossRefzbMATHGoogle Scholar
Baulieu, F.B. (1989). A classification of presence/absence based dissimilarity coefficients. Journal of Classification, 6(1), 233–246.MathSciNetCrossRefzbMATHGoogle Scholar
Bhattacharjee, A., Richards, W.G., Staunton, J., Li, C., Monti, S., Vasa, P., Ladd, C., Beheshti, J., Bueno, R., Gillette, M., Loda, M., Weber, G., Mark, E.J., Lander, E.S., Wong, W., Johnson, B.E., Golub, T.R., Sugarbaker, D.J., Meyerson, M. (2001). Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proceedings of the National Academy of Sciences of the United States of America, 98, 13790–13795.CrossRefGoogle Scholar
Brun, M., Sima, C., Hua, J., Lowey, J., Carroll, B., Suh, E., Dougherty, E.R. (2007). Model-based evaluation of clustering validation measures. Pattern Recognition, 40(3), 807–824.CrossRefzbMATHGoogle Scholar
Daws, J.T. (1996). The analysis of free-sorting data: beyond pairwise cooccurrences. Journal of Classification, 13(1), 57–80.CrossRefzbMATHGoogle Scholar
Dougherty, E.R., & Brun, M. (2004). A probabilistic theory of clustering. Pattern Recognition, 37(5), 917–925.CrossRefzbMATHGoogle Scholar
Gower, J.C., & Legendre, P. (1986). Metric and Euclidean properties of dissimilarity coefficients. Journal of Classification, 3(1), 5–48.MathSciNetCrossRefzbMATHGoogle Scholar
Handl, J., Knowles, J., Kell, D.B. (2005). Computational cluster validation in post-genomic data analysis. Bioinformatics, 21(15), 3201–3212.CrossRefGoogle Scholar
Hennig, C. (2015). What are the true clusters? Pattern Recognition Letters, 64, 53–62.CrossRefzbMATHGoogle Scholar
Hennig, C., & Liao, T.F. (2013). How to find an appropriate clustering for mixed-type variables with application to socio-economic stratification. Journal of the Royal Statistical Society: Series C (Applied Statistics), 62(3), 309–369.MathSciNetCrossRefGoogle Scholar
Hoshida, Y., Brunet, J.P., Tamayo, P., Golub, T.R., Mesirov, J.P. (2007). Subclass mapping: identifying common subtypes in independent disease data sets. PLoS ONE, 2(11), e1195.CrossRefGoogle Scholar
Hubalek, Z. (1982). Coefficients of association and similarity, based on binary (presence-absence) data: an evaluation. Biological Reviews, 57(4), 669–689.CrossRefGoogle Scholar
Hubert, L., & Arabie, P. (1985). Comparing partitions. Journal of Classification, 2, 193–218.CrossRefzbMATHGoogle Scholar
Jain, A.K. (2010). Data clustering: 50 years beyond k-means. Pattern Recognition Letters, 31(8), 651–666.CrossRefGoogle Scholar
Kaufman, L., & Rousseeuw, P.J. (Eds.). (2005). Finding groups in data: an introduction to cluster analysis. Wiley series in probability and statistics. Hoboken: Wiley.Google Scholar
McLachlan, G.J., & Basford, K.E. (1987). Mixture models: inference and applications to clustering. New York: Taylor & Francis.zbMATHGoogle Scholar
Olsen, J.V., Vermeulen, M., Santamaria, A., Kumar, C., Miller, M.L., Jensen, L.J., Gnad, F., Cox, J., Jensen, T.S., Nigg, E.A., Brunak, S., Mann, M. (2010). Quantitative phosphoproteomics reveals widespread full phosphorylation site occupancy during mitosis. Science Signaling, 3(104), ra3–ra3.CrossRefGoogle Scholar
Qaqish, B.F., O'Brien, J.J., Hibbard, J.C., Clowers, K.J. (2017). Gene expression accelerating high-dimensional clustering with lossless data reduction. Bioinformatics, 33(18), 2867–2872.CrossRefGoogle Scholar
Rand, W.M. (1971). Objective criteria for the evaluation of clustering methods. Journal of the American Statistical Association, 66, 846–850.CrossRefGoogle Scholar
Rezaei, M., & Franti, P. (2016). Set matching measures for external cluster validity. IEEE Transactions on Knowledge and Data Engineering, 28(8), 2173–2186.CrossRefGoogle Scholar
Seber, G.A.F. (2009). Multivariate observations. New York: Wiley.zbMATHGoogle Scholar
Sing, T., Sander, O., Beerenwinkel, N., Lengauer, T. (2005). ROCR: visualizing classifier performance in R. Bioinformatics, 21(20), 3940–3941.CrossRefGoogle Scholar
Thalamuthu, A., Mukhopadhyay, I., Zheng, X., Tseng, G.C. (2006). Evaluation and comparison of gene clustering methods in microarray analysis. Bioinformatics, 22 (19), 2405–2412.CrossRefGoogle Scholar
Tibshirani, R., Hastie, T., Narasimhan, B., Soltys, S., Shi, G., Koong, A., Le, Q.T. (2004). Sample classification from protein mass spectrometry, by 'Peak Probability Contrasts'. Bioinformatics, 20, 3034–3044.CrossRefGoogle Scholar
Tibshirani, R., Walther, G., Hastie, T. (2001). Estimating the number of clusters in a data set via the gap statistic. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(2), 411–423.MathSciNetCrossRefzbMATHGoogle Scholar
Warrens, M.J. (2008a). On association coefficients for 2 × 2 tables and properties that do not depend on the marginal distributions. Psychometrika, 73(4), 777–789.MathSciNetCrossRefzbMATHGoogle Scholar
Warrens, M.J. (2008b). On the equivalence of cohen's kappa and the Hubert-Arabie adjusted rand index. Journal of Classification, 25(2), 177–183.MathSciNetCrossRefzbMATHGoogle Scholar
Xuan Vinh, N, Julien Epps, U., Bailey, J. (2010). Information theoretic measures for clusterings comparison: variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11, 2837–2854.MathSciNetzbMATHGoogle Scholar
© The Classification Society 2019
1.Department of Cell BiologyHarvard Medical SchoolBostonUSA
2.Department of BiostatisticsUniversity of North Carolina at Chapel HillChapel HillUSA
O'Brien, J.J., Lawson, M.T., Schweppe, D.K. et al. J Classif (2019). https://doi.org/10.1007/s00357-019-09329-1
First Online 11 July 2019
DOI https://doi.org/10.1007/s00357-019-09329-1
|
CommonCrawl
|
Transformation from calcium sulfate to calcium phosphate in biological environment
Part of a collection:
Biomaterials Synthesis and Characterization
Ying-Cen Chen1,
Wei-Hsing Tuan ORCID: orcid.org/0000-0001-9000-06181 &
Po-Liang Lai2
Journal of Materials Science: Materials in Medicine volume 32, Article number: 146 (2021) Cite this article
252 Accesses
The formation of a nano-apatite surface layer is frequently considered a measure of bioactivity, especially for non-phosphate bioceramics. In the present study, strontium-doped calcium sulfate, (Ca,Sr)SO4, was used to verify the feasibility of this measure. The (Ca,Sr)SO4 specimen was prepared by mixing 10% SrSO4 by weight with 90% CaSO4·½H2O powder by weight. A solid solution of (Ca,7.6%Sr)SO4 was then produced by heating the powder mixture at 1100 °C for 1 h. The resulting (Ca,Sr)SO4 specimen was readily degradable in phosphate solution. A newly formed surface layer in the form of flakes was formed within one day of specimen immersion in phosphate solution. Structural and microstructure–compositional analyses indicated that the flakes were composed of octacalcium phosphate (OCP) crystals. An amorphous interface containing OCP nanocrystals was found between the newly formed surface layer and the remaining (Ca,Sr)SO4 specimen. The specimen was also implanted into a rat distal femur bone defect. In addition to new bone, fibrous tissue and inflammatory cells were found to interlace the (Ca,Sr)SO4 specimen. The present study indicated that a more comprehensive evaluation is needed to assess the bioactivity of non-phosphate bioceramics.
The newly formed surface layer on the (Ca,Sr)SO4 specimen after soaking in phosphate solution for 28 days.
Though bioactivity is an important issue for bioactive ceramics, it is challenging to define the extent of bioactivity. Since apatite comprises much of the structure and composition of bone, the presence of a nano-apatite surface layer was frequently considered a measure of bioceramic bioactivity [1]. Indeed, the presence of calcium phosphate could induce the differentiation of mesenchymal stem cells to bone tissue [2]. A recent study further indicated that the attachment of cells could be related to the re-structuring of protein on apatite [3]. Thamma et al. suggested that the bioactivity of the bioceramic was determined only by its surface, with the composition of the bulk bioceramic itself mattering little [3].
An interesting example of bioceramics is bioactive glass [4]. The bioactive glass is usually composed of silicate units, SiO44−, and calcium ions, Ca2+. During the soaking of the bioactive glass in simulated body fluid, a carbonated hydroxyapatite (CHA) surface layer was observed within a matter of minutes [5]. The size of the apatite crystals in this CHA layer was on the nanoscopic scale. The formation of such a nano-apatite layer was believed to be the key to the bioactivity of bioactive glass [6, 7].
In the present study, such a hypothesis is evaluated with a non-phosphate bioceramic, the solid solution of calcium sulfate and strontium sulfate, (Ca,Sr)SO4. This solid solution had been implanted into rats, and new bone had been observed at the interface between implant and bone [8]. However, the bioactivity of this solid solution had not been investigated.
Calcium sulfate exists in three levels of hydration, calcium sulfate dihydrate (CaSO4·2H2O), calcium sulfate hemihydrate (CaSO4·½H2O) and calcium sulfate anhydrite (CaSO4). The hemihydrate is unique in its self-setting capability, namely, it transforms to the dihydrate through the addition of water [9]. Furthermore, this transformation is accompanied with the release of a small amount of heat. The hemihydrate had thus been used either as the outside fixture for bone fracture or the inner filler for bone defect for more than a hundred years [10]. Apart from the hemihydrate, recent studies had demonstrated that the anhydrite could also be used as a filler for bone defects [8]. The anhydrite could be prepared by heating the hemihydrate or the dihydrate above 220 °C [9, 11]. During heat treatment at elevated temperatures, some useful ions, such as silicon [12] or strontium [13], could be dissolved into the anhydrite phase. For example, the addition of strontium sulfate into calcium sulfate hemihydrate could produce a solid solution of strontium-doped calcium sulfate, (Ca,Sr)SO4, after heating above 1000 °C. The Sr ions could enhance bone formation and delay bone resorption [14]. Furthermore, the addition of Sr ions barely affected the degradation of CaSO4 [8]. Regardless, this solid solution is not a phosphate, and the mechanism for its bioactivity has not been addressed. In the present study, the degradation behavior of a (Ca,Sr)SO4 solid solution both in vitro and in vivo is investigated. Further attention is given to the surface layer of the solid solution. The relationship between the surface apatite layer and its bioactivity is investigated.
Specimen preparation and its characterization
Two powders, calcium sulfate hemihydrate (CaSO4·1/2H2O, JT Baker, USA) and strontium sulfate (SrSO4, Alfa Aesar, USA) powders, were used as the starting materials. The weight fraction of SrSO4 in the powder mixture was 10%. Two powders were mixed together in a turbo mixer with milling media of zirconia balls and ethyl alcohol. The milling time was 4 h. After drying in a rotary evaporator, the dried powder lumps were crushed with mortar and pestle. Disc specimens were prepared by die-pressing at 25 MPa. A heat treatment was carried out at 1100 °C for 1 h. Since the solubility of strontium sulfate into calcium sulfate was higher than 10 wt% [13], it would result in a (Ca,7.6%Sr)SO4 solid solution.
The density of the specimen was assessed through the measurement of dimensions and weight. The microstructure of the specimen was characterized using two techniques, scanning electron microscopy (SEM, JSA6510, JEOL, Japan) and transmission electron microscopy (TEM, Talos F200X, Thermo Fisher Scientific, USA). For the SEM observation, both the surface and cross section of the degraded specimens were observed. The polished section of the specimen was also observed. To estimate the grain size, a thermal etching at 1000 °C for 1 h was used to reveal the grain boundaries. A line intercept technique was applied to determine the grain size. More than 200 grains were counted. For the TEM characterization, a focus ion beam technique (FIB, Helios NanoLab 400 s, Thermo Fisher Scientific Co., USA) was used to collect a thin section from the specimen. The TEM electron beam was 2 nm; the area for diffraction analysis was about 200 nm. The local composition was analyzed through TEM-EDX (Energy-dispersive X-ray) technique.
The phase analysis was conducted using X-ray diffraction (XRD, TTRAX 3, Rigaku Co., Japan) and Raman spectroscopic techniques (iHR550, Horiba Co., Japan). For Raman technique, Ar+ laser beam with wavelength of 514.5 nm was used to excite the specimen. Both the biaxial strength and compressive strength were measured. The compressive strength of specimens was measured using a universal testing machine (MTS 810, MTS Co., USA). The ratio of height to diameter was close to unity. The loading rate was 1 mm/min; five specimens were measured. The flexural strength was measured under a biaxial load. The load was applied through a ball-on-three-balls jig. The diameter of the disc specimens was 21 mm. The displacement rate was 0.48 mm/min; four specimens were used.
Degradation behavior in phosphate solution
The degradation behavior of (Ca,Sr)SO4 specimen was evaluated in a phosphate buffered saline (PBS, Gibco Co. USA) solution. The composition of the solution, as reported by the manufacturer, was potassium chloride (KCl, 200 mg/L), potassium phosphate monobasic (KH2PO4, 200 mg/L), sodium chloride (NaCl, 8000 mg/L) and sodium phosphate dibasic (Na2HPO4.7H2O, 2160 mg/L). The specimen weight to the solution volume was kept at a constant ratio of 1 mg: 10 ml. The specimen and solution were shaken within a test tube to simulate the dynamic situation in vivo. The solution was refreshed every day. After each day, the remain of disc specimen was collected from the solution, then dried in an oven at 100 °C for 1 h. The weight of the specimen was measured. The rate of degradation was expressed in terms of weight loss on a daily basis. The concentration of Ca and Sr ions in the collected solution was measured with an inductively coupled plasma-mass spectroscopy (ICP-MS, Perkin Elmer Co., USA). The [Ca2+] and [Sr2+] in the fresh PBS were measured with the same technique. The pH value of the solution was also monitored.
In vivo study
The animal model used in the present study was a rat distal femur model. The study was approved by Chung Gung Memorial hospital (approval number: IACUC 2016092004). Four Sprague Dawley rats were used. Cylindrical defect, with the size of 3 mm in diameter and 4 mm in depth, was introduced into the distal femur by a drill. The (Ca,Sr)SO4 specimen of the same size was press fit into the defect. The recovery of these rats after surgery was normal. The rats were sacrificed 3 months postoperatively. The micro computed tomography (micro-CT, NanoSPECT/CT, MediSo Co., Hungary) of the femur was taken first. Before the histology observation, the femur was dehydrated, decalcificated, and then fixed in paraffin, cut into thin slices (thickness: 4-5 um). Masson's trichrome (ArrayBiotech Co., Taiwan) were used as the stain.
Characteristics of the specimens
The calcium sulfate hemihydrate releases its water of crystallization at a temperature above 220 °C [9, 11]. The hemihydrate thus transforms to the anhydrite before the heat-treatment temperature, 1100 °C, is reached. The starting composition of the specimen is 90% CaSO4·½H2O and 10% SrSO4 by weight. After heating at 1100 °C for 1 h, only the anhydrite is detected in the sintered specimen (Fig. 1). This indicates that the SrSO4 dissolved into the CaSO4 to form (Ca,Sr)SO4 at the elevated temperature. The resulting composition after heat treatment is (Ca,7.6%Sr)SO4. Since the theoretical densities of CaSO4 and SrSO4 are 2.96 g/cm3 [15, 16] and 3.96 g/cm3 [17], respectively, the theoretical density of (Ca,7.6%Sr)SO4 is 3.06 g/cm3. The density after heat treatment at 1100 °C for 1 h is 2.84 ± 0.01 g/cm3, corresponding to a relative density of 93.5 ± 0.3%. The remaining pores within the specimen are not likely interconnected [18]. This is confirmed by dropping water onto the specimen, where no water penetration is observed. The interactions of the specimen in aqueous media would therefore start from the surface. These interactions include the degradation of (Ca,Sr)SO4 in phosphate solution (in vitro), as well as the formation of new bone within bone defects (in vivo). In contrast to porous bone grafts [19], the use of a dense disc specimen allows for unambiguous opportunity to locate the sites where reactions take place.
XRD patterns of the (Ca,Sr)SO4 specimen after heat treatment (lower pattern) and after degradation in phosphate solution for 28 days (upper pattern).
The grain size, as determined with the line intercept technique, is 20 μm (Table 1). The biaxial strength of the specimen is 29 ± 2 MPa. The compressive strength is 70 ± 18 MPa, which is close to that of cortical bone [20, 21].
Table 1 Characteristics of the (Ca,Sr)SO4 specimen investigated in the present study.
Degradation of the (Ca,Sr)SO4 specimen in phosphate solution
The degradation behavior of the (Ca,Sr)SO4 specimen is evaluated by soaking it in a phosphate solution. Figure 2 shows the weight loss of the disc specimen in the solution within a time span of 28 days. The phosphate solution was refreshed every 24 h. The weight of the specimen was monitored every day, and the pH value of the solution was also measured daily. The weight loss takes place at a steady rate of ~2.2% per day. From this figure, we can estimate that a complete degradation may take place at roughly 45 days. The pH value of the solution is lower than 7 and decreases slightly with time. The concentration of Ca and Sr ions in the solution is shown in Fig. 3. The [Ca2+] in the fresh phosphate solution is very low, at 4.0 ± 0.3 ppm, as determined with the ICP technique. The [Ca2+] in the solution after soaking the specimen is much higher than this value. In the first few days, [Ca2+] in the solution is ~300 ppm, before gradually increasing to ~600 ppm. There is no [Sr2+] in the fresh phosphate solution. After the specimen is soaked, ~80 ppm of Sr ions is released into the solution per day.
Daily weight loss of the (Ca,Sr)SO4 specimen in phosphate solution. The daily pH value of the phosphate solution after soaking of the specimen is also shown. The phosphate solution is refreshed every day.
Concentration of Ca and Sr ions in the phosphate solution after soaking the (Ca,Sr)SO4 specimen. The phosphate solution is refreshed every day.
The surface of the disc specimen before degradation is shown in Fig. 4(a). The specimen surface is dense and rather smooth after heat treatment. The SEM micrograph demonstrates that the size of the (Ca,Sr)SO4 grains is around 20 μm. After soaking in the phosphate solution for one day, small flakes with a width of 1 to 2 μm and thickness of <0.1 μm are observed (Fig. 4b). The size of the flakes increases with soaking time (Fig. 4c–e). After degradation in phosphate solution, the XRD analysis detects a new phase (Fig. 1) apart from the calcium sulfate anhydrite (PDF #37-1496). The XRD peaks for the new phase are broad and low. These peaks may belong to either hydroxyapatite (HAp, Ca10(PO4)6(OH)2; PDF #73-0294) or octacalcium phosphate (OCP, Ca8(HPO4)2(PO4)4·5H2O; PDF #26-1056). Apart from the XRD analysis, Raman spectroscopy was also performed on the specimen after degradation (Fig. 5). The spectrum for the calcium sulfate specimen before degradation is also shown for comparison. The fingerprint for the PO43− of OCP is detected [22, 23]. Both phase analysis techniques, XRD and Raman spectroscopy, confirm the formation of apatite on the surface of the (Ca,Sr)SO4 specimen after degradation in phosphate solution. The crystalline structure of the apatite is mainly OCP.
a Surface morphology of the (Ca,Sr)SO4 specimen before soaking. b–e Surface morphology of the specimen after soaking in phosphate solution for (b) 1, (c) 7, (d) 21, and (e) 28 days. The phosphate solution is refreshed every day.
Raman spectroscopy of the (Ca,Sr)SO4 specimen after heat treatment (lower spectrum) and after degradation in phosphate solution for 28 days (upper spectrum).
Transformation from calcium sulfate to calcium phosphate in phosphate solution
Observation of a cross-section of the degraded specimen is then conducted. Figure 6(a) shows a typical cross-section of the (Ca,Sr)SO4 specimen after soaking in phosphate solution for 28 days. A newly formed layer, around 200 μm, is observed on the specimen surface (Fig. 6b). The outer part of the newly formed layer is mainly composed of large flakes (Fig. 6c). The flakes reach a length of up to 50 μm; nevertheless, the flakes remain thin (<1 μm) after soaking in phosphate solution for 28 days. The flakes are smaller toward the inner part of the surface layer. The inner part is relatively dense (Fig. 6d). A porous region containing large grains (Fig. 6e) is found under the newly formed layer. This porous region is around 500 μm in thickness (Fig. 6a). Some small flakes are observed on the surface of the large grains (Fig. 6e).
a Cross-section of the (Ca,Sr)SO4 specimen after soaking in phosphate solution for 28 days. b Newly formed surface layer. The (c) outer and (d) inner parts of the surface layer. e The coarse-grain layer under the newly formed surface layer.
Structural and compositional analyses on the newly formed layer are also conducted using TEM techniques. A low-magnification bright-field image for the newly formed layer is shown in Fig. 7(a). This TEM micrograph confirms that the newly formed surface layer is composed of flakes. The flakes on the outer region are relatively large. A typical lattice image for the large flake is shown in Fig. 7(b). Apart from several diffraction spots, diffraction rings are observed. This suggests that the large flakes are composed of many finer grains. The ring pattern indicates that the crystalline phase of the flake comprises octacalcium phosphate (OCP) [24]. Another high-resolution bright-field image is taken near the bottom of the outer porous layer (Fig. 7c). The flakes in this region are much smaller in size. The ring pattern indicates that many fine OCP crystals are present within these small flakes. Phase analyses using the TEM technique (Fig. 7b, c) confirms the results from the XRD and Raman analyses indicating that the crystalline phase of the newly formed surface layer is OCP. Furthermore, the size of the OCP crystals decreases toward the bottom of this surface layer.
a TEM micrograph for the newly formed surface layer on the (Ca,Sr)SO4 specimen after soaking in phosphate solution for 28 days. b–e Corresponding high-resolution TEM micrographs at the locations indicated in (a). The diffraction pattern is shown in the inset.
A thin layer is observed at the interface between the surface layer and the large grains (Fig. 7d). This interface is relatively thin, measuring ~300 nm. A high-resolution bright field image is taken at this layer. This high-resolution TEM micrograph depicts an amorphous characteristic; nevertheless, the diffused ring pattern suggests that there are many OCP nanocrystals (<5 nm) within this interface.
The grains under the newly formed surface layer are large, and two such grains are shown in Fig. 7(a). The diffraction pattern of one large grain, shown in the inset, suggests that it is an orthorhombic CaSO4 grain [25]. This confirms that this porous layer is a remnant of the specimen after degradation.
The corresponding TEM-EDX analysis (Table 2) confirms the phase analysis results. The composition of the newly formed layer is mainly calcium and phosphorus. The Ca/P ratio is around 1.4, which is a value close to that of OCP (Ca8(HPO4)2(PO4)4·5H2O). Furthermore, the amount of Sr in this calcium phosphate layer is very low. The remaining substance after degradation is calcium sulfate (Fig. 7e). The Ca content of the large grain is relatively low; instead, the Sr content is high. The interface, shown in Fig. 7(d), is a transition region between sulfate and phosphate, containing Ca2+, Sr2+, PO43−, and SO42− ions (Table 2).
Table 2 The TEM-EDX results for the micrographs shown in Fig. 7b–e.
In vivo testing
Since the (Ca,Sr)SO4 specimen is relatively dense, all of the interactions with the surrounding bone tissue originate from the surface. The three-month post-operative micro-CT image of the rat distal femur is shown in Fig. 8(a). The remaining implant is indicated with an arrow. The (Ca,Sr)SO4 implant is no longer one cylindrical disc. The implant has broken into two large pieces and several smaller pieces (Fig. 8b). This indicates that the (Ca,Sr)SO4 specimen has degraded in vivo. New bone together with fibrous tissue can be observed (Fig. 8c). Three regions can be defined between the remaining implant (which has since decalcified) and the bone tissue (Fig. 8d). Within the new bone (region 1), some lacunae are observed. Osteocytes are found inside the lacunae. A small gap between region 1 and region 2 is noted; this may have resulted from the dehydration pre-treatment. The location of the (Ca,Sr)SO4 implant is surrounded by fibrous tissue (region 2). This fibrous tissue is also found between the broken pieces of the implant. Abundant small arterioles are noted within this region. A transition zone (region 3) is observed between the remaining (Ca,Sr)SO4 implant and the fibrous tissue. In this transition zone, fibrous cells and inflammatory cells are observed to interlace the location of implant.
a A 3-month post-operative micro-CT image of the bone defect on the rat distal femur. The remaining (Ca,Sr)SO4 specimen is indicated by an arrow. The corresponding histology after decalcification is shown in (b–d). The histology at (b) low, (c) intermediate, and (d) high magnifications. In (d), region 1 corresponds to mineralized bone matrix (new bone), with some osteocytes present inside the lacune; region 2 corresponds to aligned fibrous tissue, where many arterioles are observed; and region 3 corresponds to a transition zone between the specimen and fibrous tissue. In this region, fibrous cells and inflammatory cells interlace the surface of the disc.
Transformation from calcium sulfate to calcium phosphate
The chemistry is one of the key factors that can affect the bioactivity of this bone graft. Since the structure and chemistry of human bone mainly comprises calcium phosphates, the formation of apatite is a must for a successful bone graft [26], especially in non-phosphate bioceramics. For example, the formation of an apatite surface layer on bioactive glass is taken as evidence for its bioactivity. Indeed, the presence of an apatite layer induces the attachment, proliferation, and differentiation of cells [3, 27]. In the present study, a non-phosphate bioceramic, CaSO4, is used. The transformation from calcium sulfate to calcium phosphate in vitro and in vivo is investigated.
Calcium sulfate is commonly known for its inability to induce osteoinduction and osteogenesis [28]. A small amount of strontium sulfate is thus added to the calcium sulfate. Strontium may enhance bone formation and inhibit bone resorption [14]. Furthermore, the charge of Sr ions is the same as that of Ca ions, and the size of Sr ions is close to that of Ca ions. Therefore, the Sr ions can replace the Ca ions during heat treatment at elevated temperatures [13]. In the present study, Sr replaces 7.6% of the Ca (by mole) in the CaSO4. After heating to 1100 °C, apart from the solution of Sr, most of the pores are removed from the (Ca,Sr)SO4 specimen. The residual pores are no longer interconnected. The degradation process would therefore start from the surface of the specimen.
A newly formed layer materialized on the surface of the (Ca,Sr)SO4 specimen within one day of soaking in PBS (Fig. 4b). Microstructure analysis indicates that the surface layer is composed of nano-crystallized apatite. Structural analyses (XRD and Raman) suggest that the apatite layer is composed of octacalcium phosphate (OCP) crystals. These crystals take the form of flakes. The outer surface is observed to be relatively porous, while the inner layer is observed to be relatively dense (Figs. 6, 7). Furthermore, an amorphous layer is found to be located between the (Ca,Sr)SO4 grains and the OCP flakes (Fig. 7d). The TEM-EDX analysis detects Ca2+, Sr2+, PO43−, and SO42− ions in this interface (Table 2). A schematic for the transformation from calcium sulfate to calcium phosphate is shown in Fig. 9. The newly formed surface layer is composed of OCP flakes. The flakes decrease in their size toward the inner part of the layer. Many small flakes are found in the inner part of the surface layer; furthermore, the size of these flakes is smaller than the size of the (Ca,Sr)SO4 grains. This implies that a transformation from calcium sulfate to calcium phosphate is likely a multi-step process, similar to the transformation of calcium sulfate from the hemihydrate form to the dihydrate form [29]. The thin interface provides a mass transport path between sulfate and phosphate. Since the weight loss shows a linear relationship with time (Fig. 2), this suggests that the degradation process is a reaction-controlled process. Furthermore, the thin interface is likely to be porous in nature.
Schematic for the newly formed surface layer on the (Ca,Sr)SO4 specimen. The outer surface layer is composed of large Ca8(HPO4)2(PO4)4·5H2O (OCP) flakes. The OCP flakes decrease in size, moving toward the inner surface layer. A thin layer composed of amorphous and OCP nanocrystals is found at the interface between the newly formed surface layer and the remaining (Ca,Sr)SO4 specimen. The remaining (Ca,Sr)SO4 specimen is no longer dense, demonstrating degradation of the specimen.
The reaction on the surface of the (Ca,Sr)SO4 specimen is likely one of dissolution and precipitation. The (Ca,Sr)SO4 specimen releases both Ca and Sr ions into solution during degradation (Fig. 3). However, the precipitation process involves the formation of OCP flakes only, and Sr ions remain in solution. The following dissolution and precipitation reactions are likely to have taken place during the degradation process:
Dissolution of strontium-doped calcium sulfate:
$$\left( {Ca,Sr} \right)SO_4 = Ca^{2 + } + Sr^{2 + } + SO_4^{2 - }$$
Precipitation of octacalcium phosphate in phosphate solution:
$$\begin{array}{l}8\,Ca^{2 + } + 2\left( {HPO_4} \right)^{2 - } + 4\left( {PO_4} \right)^{3 - }\\ \qquad \quad \,+ \,5\,H_2O = Ca_8\left( {HPO_4} \right)_2\left( {PO_4} \right)_4 \cdot 5\,H_2O\end{array}$$
Since the dissolution and precipitation processes take place at a relatively low temperature (37 °C), the OCP crystals that formed are relatively small. The dissolution of Ca from (Ca,Sr)SO4 is faster than the dissolution of Sr from (Ca,Sr)SO4; therefore, many Sr ions are left behind within the (Ca,Sr)SO4 grain (Table 2).
Through the dissolution and precipitation processes described above, the calcium sulfate in the (Ca,Sr)SO4 specimen transforms to calcium phosphate. The present study is the first investigation to provide microstructure evidence on the transformation from calcium sulfate to calcium phosphate.
Bioactivity of (Ca,Sr)SO4
The present study conducts both an in vitro degradation test and an in vivo evaluation. The implantation of (Ca,Sr)SO4 induces the formation of new bone (Fig. 8), and fibrous tissue and giant cells surround the degradable implant 3 months post-operation (Fig. 8d). Though fibrous tissue and inflammatory cells are necessary for the healing processes, they should disappear sooner rather than later.
The formation of a nano-apatite layer is frequently considered a sign of bioactivity [26, 30]. With the help of such an apatite interface, a bioactive ceramic can form strong bonds with bone tissue. Calcium sulfate has been generally regarded as bio-compatible [10, 28, 31, 32]. However, many previous studies have also indicated that calcium sulfate lacks osteoinductivity [10, 32,33,34,35,36]. The present study demonstrates two important characteristics that support the bioactivity of a (Ca,Sr)SO4 specimen. The first characteristic is the formation of a nano-apatite layer (Fig. 7), and the second characteristic is the formation of new bone (Fig. 8). These two pieces of evidence indicate that (Ca,Sr)SO4 is not only osteoconductive but also osteoinductive. One of the issues, however, is that the degradation rate of (Ca,Sr)SO4 specimen is too high; the degradation rate of the (Ca,Sr)SO4 is higher than the rate of new bone formation.
The (Ca,Sr)SO4 specimen is degradable both in vitro and in vivo; the specimen is broken into small pieces after degradation. A gap is formed between two broken pieces of the specimen because of degradation of the implant (Fig. 8). Only fibrous tissue is observed within this gap; this demonstrates that the fibrous tissue forms rapidly during the absorption process. At the same time, giant cells are generated during this stage of the healing process (Fig. 8d). This implies that the properties of osteoconductivity and osteoinduction are not sufficient for a resorbable bioceramic; that is, a third characteristic, a degradation rate slower than the formation rate of new bone, is also necessary. Otherwise, any gap that is formed only encourages the formation of fibrous tissue. These findings thus indicate that the degradation rate of the (Ca,Sr)SO4 specimen must be slowed down. In the present study, a heat-treatment technique is applied to remove interconnected pores. However, the degradation rate is not low enough. Other possible strategies for decreasing the degradation rate should be considered.
In the present study, the transformation from strontium-doped calcium sulfate to calcium phosphate in phosphate solution is investigated. Apart from the kinetics of the degradation behavior, the structure and composition of the newly formed calcium phosphate is characterized. Several key findings are listed below:
The formation of calcium phosphate on the surface of strontium-doped calcium sulfate is a fast process. This can be related to the fact that the degradation rate of the (Ca,7.6%Sr)SO4 specimen is fast, at ~2.2% per day.
The newly formed apatite layer is composed of OCP nanocrystals.
Both Ca and Sr ions are released from the (Ca,Sr)SO4 specimen; however, Sr ions are not precipitated. The amount of Sr in the newly formed calcium phosphate is very low.
A rat distal-femur model detects the formation of new bone. However, fibrous cells and inflammatory cells space the gap between new bone and the remaining (Ca,Sr)SO4 implant.
The present study indicates that calcium sulfate is both osteoconductive and osteoinductive. Nevertheless, the bioactivity of bioceramics is a complicated issue. The formation of a nano-sized apatite surface layer does not guarantee osteogenesis. The degradation rate is another factor that must also be considered.
Jodati H, Yılmaz B, Evis Z. A review of bioceramic porous scaffolds for hard tissue applications: effects of structural features. Ceram Int.2020;46:15725–39.
Yamasaki H, Sakai OH. Osteogenic response to porous hydroxyapatite ceramics under the skin of dogs. Biomaterials. 1992;13:308–12.
Thamma U, Kowal TJ, Falk MM, Jain H. Nanostructure of bioactive glass affects bone cell attachment via protein restructuring upon adsorption. Sci Rep. 2021;11:57–63.
Hench L. Bioceramics : from concept to clinic. Am Ceram Soc Bull. 1993;72:93–98.
Kim CY, Clark AE, Hench LL. Early stages of calcium-phosphate layer formation in bioglasses. J Non-Crystalline Solids. 1989;113:195–202.
Greenspan DC. Glass and medicine: the Larry Hench story. Int J Glass Sci. 2016;7:134–8.
Greenspan DC. Bioglass at 50–a look at Larry Hench's legacy and bioactive materials. Biomed Glasses. 2019;5:178–84.
Chen YC, Hsu PY, Tuan WH, Chen CY, Wu CJ, Lai PL. Long-term in vitro degradation and in vivo evaluation of resorbable bioceramics. J Mater Sci Mater Med. 2021;32:13.
Singh NB, Middendorf B. Calcium sulfate hemihydrate hydration leading to gypsum crystallization. Prog Cryst Growth Charact Mater. 2007;53:57–77.
Pietrzak WS, Ronk R. Calcium sulfate bone void filler: a review and a look ahead. J Craniofac Surg. 2000;11:327–33.
Mirwald PW. Experimental study of the dehydration reactions gypsum-bassanite and bassanite-anhydrite at high pressure: indication of anomalous behavior of H2O at high pressure in the temperature range of 50–300 C. J Chem Phys. 2008;128:074502.
Hsu PY, Kuo HC, Tuan WH, Shih SJ, Naito M, Lai PL. Manipulation of the degradation behavior of calcium sulfate by the addition of bioglass. Prog Biomater. 2019;8:115–25.
Chen YC, Hsu PY, Tuan WH, Lai PL. From phase diagram to the design of strontium-containing carrier. J Asian Ceram Soc. 2020;8:677–84.
Jiménez M, Abradelo C, Román JS, Rojo L. Bibliographic review on the state of the art of strontium and zinc based regenerative therapies, Recent developments and clinical applications. J Mater Chem. 2019;B7:1974–85.
Kirfel A, Will G. Charge density in anhydrite, CaSO4, from X-ray and neutron diffraction measurements. Acta Crystallogr Sect B Struct Crystallogr Cryst Chem. 1980;36:2881–90.
Vasiliu CEC. Assembly of hydroxy apatite: β tricalcium phosphate: calcium sulfate bone engineering scaffolds. Oklahoma State University 2008.
Pina CM, Enders M, Putnis A. The composition of solid solutions crystallising from aqueous solutions: the influence of supersaturation and growth mechanisms. Chem Geology. 2000;168:195–10.
Coble RL. Sintering crystalline solids. I. Intermediate and final state diffusion models. J Appl Phys. 1061;32:787–92.
Chang HY, Tuan WH, Lai PL. Biphasic ceramic bone graft with biphasic degradation rates. Mater Sci Eng C. 2021;118:111421.
Manzoor K, Ahmad M, Ahmad S, Ikram S. Resorbable biomaterials: role of chitosan as a graft in bone tissue engineering. in Materials for Biomedical Engineering, ed. by Holban AM, Grumezescu AM. Elsevier, 2019; page 23–44.
Moore WR, Graves SE, Bain GI. Synthetic bone graft substitutes. ANZ J Surg. 2001;71:354–61.
Ramírez-Rodríguez GB, Delgado-López JM, Gómez-Morales J. Evolution of calcium phosphate precipitation in hanging drop vapor diffusion by in situ Raman microspectroscopy. Cryst Eng Comm. 2013;15:2206–12.
Tsuda H, Arends J. Raman spectra of human dental calculus. J Dent Res. 1993;72:1609–13.
Saha A, Lee J, Pancera SM, Bräeu MF, Kempter A, Tripathi A, et al. New Insights into the transformation of calcium sulfate hemihydrate to gypsum using time-resolved cryogenic transmission electron microscopy. Langmuir. 2012;28:11182–87.
Wirsching F. Calcium sulfate. Ullmann's Encycl Ind Chem. 2012;6:519–50.
Seah RK, Garland M, Loo JS, Widjaja E. Use of Raman microscopy and multivariate data analysis to observe the biomimetic growth of carbonated hydroxyapatite on bioactive glass. Anal Chem. 2009;81:1442–9.
Das I, Hupa GE, Vallittu PK. Porous SiO2 nanofiber grafted novel bioactive glass-ceramic coating: a structural scaffold for uniform apatite precipitation and oriented cell proliferation on inert implant. Mater Sci Eng C. 2016;62:206–14.
Pförringer D, Harrasser N, Mühlhofer H, Kiokekli M, Stemberger A, Griensven M, et al. Osteoinduction and-conduction through absorbable bone substitute materials based on calcium sulfate: in vivo biological behavior in a rabbit model. J Mater Sci Mater Med. 2018;29:1–14.
Le NTN, Le NTT, Nguyen QL, Pham TLB, Le MTN, Nguyen DH. A facile synthesis process and evaluations of α-calcium sulfate hemihydrate for bone substitute. Mater. 2020;13:3099.
Bellantone M, Coleman NJ, Hench LL. Bacteriostatic action of a novel four‐component bioactive glass. J Biomed Mater Res. 2000;51:484–90.
Shih TC, Teng NC, Wang PD, Lin CT, Yang JC, Fong SW, et al. In vivo evaluation of resorbable bone graft substitutes in beagles: histological properties. J Biomed Mater Res Part A. 2013;101:2405–11.
Nilsson M, Zheng MH, Tägil M. The composite of hydroxyapatite and calcium sulfate: a review of preclinical evaluation and clinical applications. Expert Rev Med Devices. 2013;10:675–84.
Cortez PP, Silva MA, Santos M, Armada-da-Silva P, Afonso A, Lopes MA, et al. A glass-reinforced hydroxyapatite and surgical-grade calcium sulfate for bone regeneration: In vivo biological behavior in a sheep model. J Biomater Appl. 2012;27:201–17.
Sony S, Babu SS, Nishad KV, Varma H, Komath M. Development of an injectable bioactive bone filler cement with hydrogen orthophosphate incorporated calcium sulfate. J Mater Sci Mater Med. 2015;26:31–44.
Oh CW, Kim PT, Ihn JC. The use of calcium sulfate as a bone substitute. J Orthop Surg. 1998;6:1–10.
Intini G, Andreana S, Intini FE, Buhite RJ, Bobek LA. Calcium sulfate and platelet-rich plasma make a novel osteoinductive biomaterial for bone regeneration. J Transl Med. 2007;5:1–13.
Financial support for this project was provided by the Ministry of Science and Technology (MOST 109-2221-E-002-122) and Chang Gung Memorial Hospital at Linkou (CMRPG3G0573 & 107-2923-B-182A-001-MY3). The authors wish to thank for the technical support from Laboratory Animal Center and Microscope Core Laboratory, Chang Gung Memorial Hospital, Linkou. The help from Dr. P.Y. Hsu on the measurement of strength is appreciated.
Department of Materials Science and Engineering, National Taiwan University, Taipei, Taiwan
Ying-Cen Chen & Wei-Hsing Tuan
Department of Orthopedic Surgery, Bone and Joint Research Center, Chang Gung Memorial Hospital at Linkou, College of Medicine, Chang Gung University, Taoyuan, Taiwan
Po-Liang Lai
Ying-Cen Chen
Wei-Hsing Tuan
Corresponding authors
Correspondence to Wei-Hsing Tuan or Po-Liang Lai.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Chen, YC., Tuan, WH. & Lai, PL. Transformation from calcium sulfate to calcium phosphate in biological environment. J Mater Sci: Mater Med 32, 146 (2021). https://doi.org/10.1007/s10856-021-06622-7
|
CommonCrawl
|
How does the generalized linear model generalize the general linear model?
The general linear model (GLM) is a statistical linear model. It may be written as1 $$ \mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{U}, $$ where $Y$ is a matrix with series of multivariate measurements, $X$ is a matrix that might be a design matrix, $B$ is a matrix containing parameters that are usually to be estimated and $U$ is a matrix containing errors or noise. The errors are usually assumed to follow a multivariate normal distribution.
It then says
If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about $Y$ and $U$.
I was wondering how the generalized linear models relax assumptions about $Y$ and $U$ in the general linear models?
Note that I can understand their another relation in the opposite direction:
The general linear model may be viewed as a case of the generalized linear model with identity link.
But I doubt this will help with my question.
regression generalized-linear-model assumptions
Consider a case where your response variable is a set of 'successes' and 'failures' (also represented as 'yeses' and 'nos', $1$s and $0$s, etc.). If this were true, it cannot be the case that your error term is normally distributed. Instead, your error term would be Bernoulli, by definition. Thus, one of the assumptions that are alluded to is violated. Another such assumption is that of homoskedasticity, but this would be violated as well, because the variance is a function of the mean. So we can see that the (OLS) GLM is inappropriate for this case.
Note that, for a typical linear regression model, what you are predicting (i.e., $\hat y_i$) is $\mu_i$, the mean of the conditional normal distribution of the response at that exact spot where $X=x_i$. What we need in this case is to predict $\hat\pi_i$, the probability of 'success' at that spot. So we think of our response distribution as Bernoulli, and we are predicting the parameter that controls the behavior of that distribution. There is one important complication here, however. Specifically, there will be some values for $\bf X$ that, in combination with your estimates $\boldsymbol\beta$ will yield predicted values of $\hat y_i$ (i.e, $\hat\pi_i$) that will be either $<0$ or $>1$. But this is impossible, because the range of $\pi$ is $(0,~1)$. Thus we need to transform the parameter $\pi$ so that it can range $(-\infty,~\infty)$, just as the right hand side of your GLiM can. Hence, you need a link function.
At this point, we have stipulated a response distribution (Bernoulli) and a link function (perhaps the logit transformation). We already have a structural part of our model: $\bf X \boldsymbol \beta$. So now we have all the required parts of our model. This is now the generalized linear model, because we have 'relaxed' the assumptions about our response variable and the errors.
To answer your specific questions more directly, the generalized linear model relaxes assumptions about $\bf Y$ and $\bf U$ by positing a response distribution (in the exponential family) and a link function that maps the parameter in question to the interval $(-\infty,~\infty)$.
For more on this topic, it may help you to read my answer to this question: Difference between logit and probit models.
gung - Reinstate Monica♦gung - Reinstate Monica
$\begingroup$ (+1) Concise and understandable answer. $\endgroup$ – COOLSerdash Jun 27 '13 at 12:10
$\begingroup$ Thanks, gung! In general linear models, "the errors are usually assumed to follow a multivariate normal distribution." Is it correct that general linear models are not necessarily parametric in the sense that they don't fully specify the form of the distribution of Y given X? Because a generalized linear model always specify the distribution of Y given X to be an exponential family distribution, is it correct that a general linear model may not be a generalized linear model? $\endgroup$ – StackExchange for All Aug 1 '13 at 13:42
$\begingroup$ No, the general linear model is fully specified; it is always a special case of the generalized linear model. $\endgroup$ – gung - Reinstate Monica♦ Aug 1 '13 at 13:46
$\begingroup$ Does "the errors are usually assumed to follow a multivariate normal distribution" in WIkipedia for the general linear model mean that the error may not be normally distributed? $\endgroup$ – StackExchange for All Aug 1 '13 at 13:55
Not the answer you're looking for? Browse other questions tagged regression generalized-linear-model assumptions or ask your own question.
Difference between logit and probit models
How can generalized linear mixed model produce non-linear regression?
Difference between general linear model and generalized linear model
Linear model with log-transformed response vs. generalized linear model with log link
Generalized Linear Model estimating multiple parameters concurrently
Is the explanatory variable in the generalized linear model viewed as a random variable or parameter?
Can the likelihood function in generalized linear model be written in terms of the model parameter and the input variable?
Multiplicative error and additive error for generalized linear model
Generalized linear models and central limit theorem
Why does the linear test statistic of GLM follow F-distribution?
|
CommonCrawl
|
A counterexample for Sard's theorem in $C^1$ regularity
I can't seem to find an example of a function $f \colon \mathbb{R}^2\to \mathbb{R}$ which is $C^1$ and such that the set of its critical values is not of zero measure.
What examples are there? $\phantom{aaa}$
real-analysis ca.classical-analysis-and-odes counterexamples
YCor
Benny ZakBenny Zak
$\begingroup$ Remark: the question originally asked by the OP asked about functions $\mathbb{R}^2 \to \mathbb{R}$. An edit (by another party?) changed the question to be about $\mathbb{R}^2 \to \mathbb{R}^2$, which is why there are two different flavors of answers below, some addressing the original question, and some addressing the updated one. $\endgroup$ – Willie Wong Mar 18 '18 at 2:11
$\begingroup$ I re-edited the question to its original form about functions from $\mathbb{R}^2$ to $\mathbb{R}$. $\endgroup$ – Piotr Hajlasz Mar 18 '18 at 14:22
My favourite example is as follows. Let the simple curve $\kappa:[0,1]\to K\subset \mathbb{R}^2$ be a parametrization of (half of) the Koch curve, and let $\phi:K\to[0,1]$ be its inverse; it is a continuous function, and, due to the fact that $\kappa$ has infinite variation on any non-empty interval $J\subset [0,1]$, it can be chosen in such a way that it satisfies $$|\phi(x)-\phi(y)|=o(|x-y|)$$ uniformly on $K$. Therefore the data $\phi$ together with the zero field on $K$ satisfy the hypotheses of the Whitney extension theorem for the case of $C^1$ regularity. Thus $\phi$ extends to a $C^1$ function $f:\mathbb{R}^2\to\mathbb{R}$ whose gradient vanishes identically on $K$.
$$*$$ Details. The standard parametrization of the Koch curve may be defined as the unique bounded function $\kappa:[0,1]\to\mathbb{C}$ satisfying the (linear, non-homogeneous) functional equation
$$3\kappa(x)=\cases{\kappa(4x)& if $\;0\le x< {1\over4}$\\\\ 1+e^{i\pi/3}\kappa(4x-1)& if $\;{1\over4}\le x< {2\over4}$\\\\ 1+e^{i\pi/3}-e^{i\pi/3}\kappa(4x-2)& if $\;{2\over4}\le x< {3\over4}$\\\\ 2+\kappa(4x-3)& if $\;{3\over4}\le x\le 1$}$$
that is $\kappa$ is the fixed point of an affine $1/3$-norm contraction on the Banach space of $\mathbb{C}$-valued bounded functions on $[0,1]$, whence its existence and uniqueness. It also follows from this, that $\kappa$ is $\alpha$-Hölder, with $\alpha:={\log3\over\log4}$, and in fact, for some constants $0<c<C$ it verifies, for all $x$ and $y$ in $[0,1]$ $$c|x-y|^\alpha\le|\kappa(x)-\kappa(y)|\le C|x-y|^\alpha,$$
which implies that its inverse $\phi$ satisfies a Hölder condition with exponent $1/\alpha$, larger than $1$ (a phenomenon that is not possible for non-constant functions on an interval, or more generally on metric spaces connected by rectifiable curves); in particular, it satisfies the stated $|\phi(x)-\phi(y)|=o(|x-y|)$.
Pietro MajerPietro Majer
$\begingroup$ Very interesting. I actually had not heard about Whitney's extension theorem (Tras. AMS 1934) before. It is a very neat analogue to Tietze-Urysohn extension theorem. $\endgroup$ – BigM Dec 27 '16 at 23:13
$\begingroup$ For generalizations of this example see mathscinet.ams.org/mathscinet-getitem?mr=1991757 $\endgroup$ – Piotr Hajlasz Mar 16 '18 at 20:50
$\begingroup$ $\kappa$ having infinite variation is not sufficient to get the estimate on $\phi$: build "one third" of the Koch snowflake starting from the trivial map $\gamma_0:[0,1]\to\mathbb R^2$ (with $\gamma_0(t)=(t,0)$), subdividing $[0,1]$ into three thirds and replacing the straight segment with a tent on the middle interval (thus obtaining $\gamma_1$), and so on. The limiting map $f_\infty:[0,1]\to\mathbb R^2$ will have $f_\infty(0)=(0,0)$ and $f_\infty(3^{-k})=(3^{-k},0)$! Maybe you are reparametrizing by arclength before taking the limit? (I don't know how to deal with $f_\infty$ in that case...) $\endgroup$ – Mizar Jun 8 '18 at 16:24
$\begingroup$ Ah yes, now it's clear! So at each iteration you are reparametrizing by arclength, right? If you don't do that, having infinite variation does not give $|\phi(x)-\phi(y)|=o(|x-y|)$ (this fails if you don't reparametrize by arclength, as I was saying in my previous comment). $\endgroup$ – Mizar Apr 13 '19 at 11:43
$\begingroup$ (I edited and added details and rectified the unclear sentence about infinite variation. Thank you!) $\endgroup$ – Pietro Majer Apr 13 '19 at 16:41
If $f\in C^1(\mathbb{R}^2,\mathbb{R})$, then the set of critical values may have positive measure.
The classical construction due to Whitney was mentioned in the answer by T. Amdeberhan. However, the simplest one is due to E.L.Grinberg. The idea is as follows:
If $C$ is the Cantor middle thirds set, then $C+C=[0,2]$.
There is a $C^1$ function $g:\mathbb{R}\to\mathbb{R}$ such that the critical values of $g$ contain $C$.
Now $f:\mathbb{R}^2\to\mathbb{R}$ defined by $f(x,y)=g(x)+g(y)$ is of class $C^1$ and $C+C=[0,2]$ is contained in the set of critical values of $f$.
Piotr HajlaszPiotr Hajlasz
This has been known for some time, including the higher-dimensional problem, in $\mathbb{R}^n$, that if $f\in C^k$ where $k<n$ then the set of critical points need not be of zero measure.
H. Whitney, A function not constant on a connected set of its critical points, Duke Math. J. 1 (1935), 514-517.
T. AmdeberhanT. Amdeberhan
I decided to challenge myself to make pictures of Piotr Hajlasz's example, partly for fun and partly for the next time I teach this. Let $C_3$ be the standard Cantor middle thirds set: $$C_3 = \left\{ \sum_{k=1}^{\infty} \frac{a_k}{3^k} : a_1, a_2, a_3, \cdots, \in \{ 0,2 \} \right\}.$$ Let $C_4$ be the variant "middle halves set" $$C_4 = \left\{ \sum_{k=1}^{\infty} \frac{b_k}{4^k} : b_1, b_2, b_3, \cdots, \in \{ 0,3 \} \right\}.$$ Note that every $z$ in $[0,1]$ can be written as $(2/3) x + (1/3) y$ for $x$, $y \in C_4$ in either $1$ or $2$ ways. Here is a drawing of $C_4 \times C_4$ and its intersection with the lines $(2/3)x + (1/3) y = k/16$ for $0 \leq k \leq 16$:
Here is a $C^1$ function $\phi(x)$ which maps $C_3$ to $C_4$ in an order preserving manner, with derivative $0$ at each point of $C_3$. On each interval of $[0,1] \setminus C_3$, it is an appropriately chosen sine curve.
And here is the final product, a depiction of the function $f(x,y) = (2/3) \phi(x) + (1/3) \phi(y)$. The level curves show $f(x,y) = k/16$ for $0 \leq k \leq 16$. The black dots are the critical points $C_3 \times C_3$; the red dots demonstrate how every level curve contains $1$ or $2$ critical points.
David E Speyer
Not the answer you're looking for? Browse other questions tagged real-analysis ca.classical-analysis-and-odes counterexamples or ask your own question.
The Koch snow flake, Holder exponents of conformal mappings
Analogues of Luzin's theorem
Nondifferentiability set of an arbitrary real function
Symmetric functions and regularity
Measuring almost-critical values of smooth functions.
Counterexample to Sard Theorem for a not-C1 map
Is it possible to have the set $f^{-1}(\lbrace x \rbrace)$ perfect for every $x$?
An open mapping theorem for homogeneous functions?
Baire's simple limit theorem "almost everywhere"
Whether $\varphi(E)$ is a Jordan measurable set?
|
CommonCrawl
|
Why do we care about $L^p$ spaces besides $p = 1$, $p = 2$, and $p = \infty$?
I was helping a student study for a functional analysis exam and the question came up as to when, in practice, one needs to consider the Banach space $L^p$ for some value of $p$ other than the obvious ones of $p=1$, $p=2$, and $p=\infty$. I don't know much analysis and the best thing I could think of was Littlewood's 4/3 inequality. In its most elementary form, this inequality states that if $A = (a_{ij})$ is an $m\times n$ matrix with real entries, and we define the norm $$||A|| = \sup\biggl(\left|\sum_{i=1}^m \sum_{j=1}^n a_{ij}s_it_j\right| : |s_i| \le 1, |t_j| \le 1\biggr)$$ then $$\biggl(\sum_{i,j} |a_{ij}|^{4/3}\biggr)^{3/4} \le \sqrt{2} ||A||.$$ Are there more convincing examples of the importance of "exotic" values of $p$? I remember wondering about this as an undergraduate but never pursued it. As I think about it now, it does seem a bit odd from a pedagogical point of view that none of the textbooks I've seen give any applications involving specific values of $p$. I didn't run into Littlewood's 4/3 inequality until later in life.
[Edit: Thanks for the many responses, which exceeded my expectations! Perhaps I should have anticipated that this question would generate a big list; at any rate, I have added the big-list tag. My choice of which answer to accept was necessarily somewhat arbitrary; all the top responses are excellent.]
fa.functional-analysis mathematics-education inequalities big-list
Timothy Chow
$\begingroup$ Looking again, here comes the "silly" answer: There are reasons why $L^1$ and $L^\infty$ are bad e.g. not uniformly convex. Then we are left with $L^2$. Now, there are functions you can be interested in, that are not in $L^2$. For example $f(x) = (1 + x^2)^{-\gamma/2}$ for $\gamma > 0$ small enough. That justifies $p > 2$. For $p < 2$, you have to look at local singularities. $\endgroup$ – Helge Jun 14 '10 at 20:44
$\begingroup$ Or look at the Fourier Transformation which is in its classical (i.e. non-distributional) form a map $L^p \to L^p$ with $p\in [1,2]$. $\endgroup$ – Johannes Hahn Jun 15 '10 at 0:06
$\begingroup$ There is another related question: Why do we care so much about the L^p spaces compared to other more general Banach spaces. (Orlitz spaces to start with and completely general spaces.) $\endgroup$ – Gil Kalai Jun 28 '10 at 15:51
Huge chunks of the theory of nonlinear PDEs rely critically on analysis in $L^p$-spaces.
Let's take the 3D Navier-Stokes equations for example. Leray proved in 1933 existence of a weak solution to the corresponding Cauchy problem with initial data from the space $L^2(\mathbb R^3)$. Unfortunately, it is still a major open problem whether the Leray weak solution is unique. But if one chooses the initial data from $L^3(\mathbb R^3)$, then Kato showed that there is a unique strong solution to the Navier-Stokes equations (which is known to exist locally in time). $L^3$ is the "weakest" $L^p$-space of initial data which is known to give rise to unique solutions of the 3D Navier-Stokes.
In some cases the structure of the equations suggests the choice of $L^p$ as the most natural space to work in. For instance, many equations stemming from non-Newtonian fluid dynamics and image processing involve the $p$-Laplacian $\nabla\left(|\nabla u|^{p-2}\nabla u\right)$ with $1 < p < \infty.$ Here the $L^p$-space and $L^p$-based Sobolev spaces provide a natural framework to study well-posedness and regularity issues.
Yet another example from harmonic analysis (which goes back to Paley and Zigmund, I think). Let $$F(x,\omega)=\sum\limits_{n\in\mathbb Z^d} g_n(\omega)c_ne^{inx},\quad x\in \mathbb T^d,$$ where $g_n$ is a sequence of independent normalized Gaussians and $(c_n)$ is a non-random element of $l^2(\mathbb Z^d)$. Then the function $F$ belongs almost surely to any $L^p(\mathbb T^d)$, $2\leq p <\infty$ and it does not belong almost surely to $L^{\infty}(\mathbb T^d)$.
There have been very recent applications of this resut to the existence of solutions to the nonlinear Schrodinger equations with random initial data (due to Burq, Gérard, Tzvetkov et al).
Andrey Rekalo
I feel as though this question may have come up before. Anyhow, the $\ell_4$ norm, and more generally the $\ell_{2k}$ norm for any positive integer $k$, come up naturally in Fourier analysis, since the $\ell_{2k}$ norm of the Fourier transform of $f$ equals the sum of $f(x_1)...f(x_k)\overline{f(y_1)...f(y_k)}$ over all $x_1+...+x_k=y_1+...+y_k$. That sort of sum comes up a lot in additive combinatorics, especially when $f$ is closely related to the characteristic function of a set. And you can get other norms by duality -- for instance the $4/3$ norm is the dual of the 4-norm, and therefore comes up too.
gowers
$\begingroup$ And the additive combinatorics can be applied to non-linear PDEs on tori, related to restriction theorems for Fourier transform. Bourgain's 1993 GAFA paper comes to mind. (Long title starting with "Fourier transform restriction phenomena for certain lattice subsets...") $\endgroup$ – Willie Wong Jun 15 '10 at 22:21
$\begingroup$ The article @WillieWong references: Bourgain - Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations I II (MSN). $\endgroup$ – LSpice Feb 14 at 16:26
Tim, I've got two words for you: interpolation theorems (e.g., Riesz-Thorin and Marcinkiewicz interpolation theorems). Such theorems let you pass from information about some operators on $L^1$ and $L^\infty$ to some operators on $L^2$ using all the intermediate exponents $p$.
The point here is not that one actually cares about $L^{37.24}$ for its own sake, but the interpolation theorems show you that such "exotic" $L^p$-spaces can be at the service of her majesty $L^2$. I think for a student, these interpolation theorems provide an attractive reason to care about $L^p$ for all $p \geq 1$.
This is not my area at all, so I welcome follow-up comments from analysts on this answer.
KConrad
$\begingroup$ I tend to agree with you to some extent. With $L^\infty$ and $L^1$, things are much simpler computationally, and with $L^2$ you have the immense advantage of Hilbert space tools like Plancherel's identity. If not for Hilbert space techniques, one would likely find $L^2$ just as exotic as the others. The interpolation theorems basically say that understanding two of these spaces is enough to understand everything in between. So long as the adjoint of the operator is similarly behaved, just two is enough to understand all $p\ge 1$. This is the case with the Hilbert transform, for instance. $\endgroup$ – Peter Luthy Jun 14 '10 at 22:11
$\begingroup$ @KConrad: I screwed around with your post to make the LaTeX render properly. $\endgroup$ – Harry Gindi Jun 15 '10 at 11:32
In PDEs, various values of p arise as degrees of regularity. The Sobolev embedding theorems let you "trade in" generalized derivatives for classical derivatives. You might need the exponent p to be above a certain threshold to get a desired regularity result.
Still, I agree with your observation that much of the time the values of p that matter are 1, 2, and infinity.
John D. Cook
$\begingroup$ This is interesting...can you provide a few more details or a concrete example? $\endgroup$ – Timothy Chow Jun 14 '10 at 18:27
$\begingroup$ A good reference for this stuff is the book by Adams and Fournier, Sobolev Spaces. Wikipedia also has the relevant statements. $\endgroup$ – Dan Ramras Jun 14 '10 at 18:48
$\begingroup$ Short version of one statement: suppose $f \in L^p(\mathbb{R}^n)$, and that the first k generalized derivatives (in the sense of distributions, say) are also in $L^p$. Then f is actually a $C^m$ function, where $m=k−\frac{n}{p}−1$ (i.e. all classical partial derivatives of order up to m exist and are continuous.) So if you're in $\mathbb{R}^4$ and have 5 derivatives in $L^2$, you're $C^2$; if you have 5 derivatives in $L^4$ you're $C^3$. $\endgroup$ – Nate Eldredge Jun 14 '10 at 21:57
$\begingroup$ I think Terence Tao wrote a blog post on this subject, although I can't find it at the moment. $\endgroup$ – Qiaochu Yuan Jun 14 '10 at 22:11
$\begingroup$ I think it's not that these spaces are the ones that matter but rather these spaces are the only ones where it is convenient to perform computations. Like I said in a comment to another answer, $L^2$ would be just as exotic if not for access to great Hilbert space tools. $\endgroup$ – Peter Luthy Jun 15 '10 at 4:40
In probability, the $L^p$ norms give you the $p$-th moments of a random variable, and the relationship between them can tell you a lot about its distribution. For example, the 3rd and 4th moments tell you something about how symmetric and how concentrated about its mean a distribution is. Statisticians give them cool names like "skewness" and "kurtosis".
I'll also mention a recent startling theorem of Nualart et al, which says that a sequence of random variables taken from a Wiener chaos converge in distribution to a certain limit if and only if their fourth moments are converging to the right thing. (First and second moments are not sufficient.)
Tim, here is one very specific example that a computer scientist who cares only about $L_1$ and $L_2$ should find appealing. The norm of $\ell_1^n$ is, up to a constant, the same as the $\ell_p^n$ norm when the conjugate index to $p$ is $\log n$. As was already mentioned in this thread, the $L_p^n$ norm is uniformly convex when $1<p<\infty$ and the modulus of convexity is known. This fact, frequently used by researchers in Banach space theory, was used by Lee and Naor to give a strikingly simple proof of the Brinkman-Charikar result on the impossibility of dimension reduction in $L_1$.
Bill Johnson
I think a nice example are Lieb--Thirring inequalities. Consider the Schroedinger operator $H = \Delta + V$ with potential $V \in L^{\gamma + d/2}(\mathbb{R}^{d})$, where $d \geq 3$. Then H defines an (unbounded) operator on $L^2(\mathbb{R}^d)$, whose essential spectrum is $[0,\infty)$ and which has negative eigenvalues $E_j$ (countably many). The Lieb--Thirring inequality then tells us
$$ \sum |E_j|^{\gamma} \leq const \|V\|_{L^{\gamma + d/2}(\mathbb{R}^{d})}. $$
This inequalities requires $L^p$ for $p \in (0,\infty)$.
There are other examples, but they are somewhat more technical to state ...
$\begingroup$ What is $f\mbox{}$? $\endgroup$ – Mariano Suárez-Álvarez Jun 14 '10 at 21:22
$\begingroup$ Fixed. It was meant to be V. $\endgroup$ – Helge Jun 14 '10 at 23:11
Although nonlinear PDE's are mentioned by John Cook, he seems to still concede that $p = 1, 2, \infty$ are the most important. I beg to differ. I will give only one specific example that I am familiar with. As others have noted, Terry Tao has written a lot about using $L_p$ estimates to study other types of nonlinear PDE's.
In the 70's and 80's, there were major breakthroughs in the use of elliptic PDE's to prove global theorems in geometry and topology by Sacks-Uhlenbeck and Schoen-Simon-Yau in minimal surfaces, Uhlenbeck, Taubes, and Donaldson in Yang-Mills theory, and Gao and Anderson-Cheeger in Einstein manifolds. The critical tool used here were sharp Sobolev inequalities and $L_p$ estimates of the first derivative to the solution of a nonlinear PDE. The technical tool that is often used is called Moser iteration, where initial $L_p$ bounds on the gradient are bootstrapped into stronger bounds on the solution. These types of estimates can also be applied to nonlinear parabolic PDE's, including the Ricci flow.
All of this has led to a tremendous growth in the study of nonlinear elliptic PDE's, along with their applications to global differential geometry and topology, as well as mathematical physics. The $L_p$ theory, where $p \ne 2$, plays a crucial role in most of this work.
Deane Yang
Here is an algebraic answer to your question. See a related answer of mine for more details.
First, as I explain in the answer cited above, it is very natural to replace the number p by its reciprocal 1/p, i.e., define L_p := L^{1/p}. Thus L^1, L^2, L^∞ are denoted by L_1, L_{1/2}, and L_0 in this notation.
For an arbitrary measurable space Z (i.e., a commutative von Neumann algebra), and, more generally, for an arbitrary noncommutative measurable space Z (i.e., a noncommutative von Neumann algebra) we can define the space L_p(Z) for all p∈CP, where CP is the set of all complex numbers with a nonnegative real part. Note that no choice of measure (weight) on Z is needed to construct L_p(Z). Note that L_0 just consists of bounded functions on Z and L_1 consists of finite complex-valued measures (weights) on Z. These spaces are graded components of a complex unital CP-graded *-algebra. In a certain precise sense one can say that this CP-graded *-algebra is a free algebra generated by all bounded functions on Z in grading 0 and all finite complex-valued measures (weights) on Z in grading 1, with the obvious relations coming from the Radon-Nikodym theorem and (in the noncommutative case) the modular automorphism group (which essentially explains how weights (i.e., noncommutative measures) commute with bounded functions).
If ℜp∈[0,1] then L_p(Z) is a Banach space, otherwise it is a quasi-Banach space. Also, if ℜp∈[0,1], then L_p(Z) can be obtained as the complex interpolation of L_0(Z) and L_1(Z) corresponding to the parameter p.
A lot of theorems in the (noncommutative) integration theory can be proved by simple algebraic manipulations in this CP-graded *-algebra. These algebraic manipulations require that one has access to all graded components, not just components with gradings 0, 1/2, and 1.
Let me give just one example for L_p spaces where p is imaginary. Suppose μ is a weight on M (if μ is bounded, i.e., μ(1)<∞, then μ∈L_1(Z), otherwise we should think of μ as an element of the extended positive cone EL_1^+(Z)). Suppose furthermore that t is an imaginary number and x∈L_0(Z). Then we have μ^t∈L_t(Z), x∈L_0(Z), μ^{-t}∈L_{-t}(Z) and their product is σ^μ_t(x) := μ^t x μ^{-t} ∈ L_0(Z). The one-parameter automorphism group σ^μ is called the modular automorphism group of the weight μ and it is a very important notion of noncommutative geometry. (In the commutative case we always have σ^μ_t(x)=x.)
Dmitri Pavlov
$\begingroup$ Pretty cool stuff! +1 :-) $\endgroup$ – Johannes Hahn Jun 15 '10 at 0:03
$\begingroup$ I agree. Is there a canonical reference for this? (I usually approach non-comm L^p spaces through interpolation, which does need one to fix a weight). $\endgroup$ – Matthew Daws Jun 15 '10 at 11:25
$\begingroup$ @Matthew: In my opinion, the best reference for noncommutative L_p-spaces is Yamagami's paper "Algebraic aspects in modular theory". It explains how weights, Radon-Nikodym derivatives, modular automorphism groups, operator valued weights etc. fit together in a nice algebraic formalism of noncommutative L_p-spaces (including their relative versions). His other paper "Modular theory for bimodules" extends this formalism to bimodules, including spatial derivative and index theory. $\endgroup$ – Dmitri Pavlov Jun 15 '10 at 15:54
$\begingroup$ @Dmitri: Very nice! Thank you very much! $\endgroup$ – Matthew Daws Jun 15 '10 at 19:22
$\begingroup$ Every time I see an interesting-looking reference here, I feel a compulsion to go and hunt it down. The Yamagami paper is at ems-ph.org/journals/…. $\endgroup$ – LSpice Feb 17 '13 at 18:26
I had a similar question when I was first learning about the Lebesgue spaces: does anyone actually use these spaces when p<1? There are obvious technical problems with these spaces since the unit balls are not convex; despite this fact, the answer is yes. There are a number of interesting multilinear operators which are bounded maps from $L^{p_1}\times L^{p_2}\times ...\times L^{p_n}\rightarrow L^r$ where the exponents satisfy the condition $\displaystyle{\sum_{j=1}^n\frac{1}{p_j}=\frac{1}{r}}$. Now, if $n\ge 3$ and $p_i=2$ for all i, then this forces r to be some fraction less than 1. So, even if one only cared about $L^2$ and $L^\infty$, one would still run into values of $p<1$.
One such class of operators is comprised of multilinear variants of the Hardy-Littlewood maximal operator. They were studied fairly recently by Ciprian Demeter, Terence Tao, and Christoph Thiele, e.g. in this paper: http://arxiv.org/abs/math/0510581. Another type of operator in this spirit is the Biest operator studied by Camil Muscalu, Terence Tao, and Christoph Thiele (e.g. http://front.math.ucdavis.edu/math.CA/0102084).
Peter Luthy
The last paragraph of section 6.1, "Basic theory of $L^p$ spaces", in Folland's Real Analysis neatly summarizes a lot of the points made in other answers:
We conclude this section with a few remarks about the significance of the $L^p$ spaces. The three most obviously important ones are $L^1$, $L^2$, and $L^\infty$. With $L^1$ we are already familiar [from the development of Lebesgue integration in earlier chapters]; $L^2$ is special because it is a Hilbert space; and the topology on $L^\infty$ is closely related to the topology of uniform convergence. Unfortunately, $L^1$ and $L^\infty$ are pathological in many respects, and it is more fruitful to deal with the intermediate $L^p$ spaces. One manifestation of this is the duality theory in section 6.2; another is the fact that many operators of interest in Fourier analysis and differential equations are bounded on $L^p$ for $1 < p < \infty$ but not on $L^1$ or $L^\infty$.
Mark Meckes
$L^0$, which for a finite measure is just the set of all measurable functions, is important in probability. When you're modeling some physical phenomenom, there is often no canonical choice of probability measure, so you sometimes work with a class of equivalent probabilities, i.e. ones that have the same null sets. The only two $L^p$ spaces that are invariant under changing to an equivalent probability are $L^\infty$ and $L^0$. The former space is often too small for modeling purposes, and so you are forced to work with $L^0$. It's topologized by convergence in measure/probability, and it's pretty horribly non-convex.
weakstar
I guess that $L^1$, $L^2$, and $L^\infty$ just seem natural because they are so intimately related to obvious everyday concepts -- sums, averages, maxima, root-mean-square (Hilbert space, ...). So I doubt that the occasional theorem that involves another $L^p$ space explicitly will ever convince anyone that any other $L^p$ space is equally significant.
But let me just mention another one of them anyway: there's a marvelous theorem of Beurling that states that the family of functions of the form $f(x) = \sum_{k=1}^n a_k\rho(\theta_k/x)$---where $\rho(x) = x -\lfloor x\rfloor$ and $\sum a_k\theta_k = 0$---is dense in $L^p(0,1)$ (for some $p\in[1,\infty]$) iff the Riemann Zeta function has no zeros in the half-plane $\sigma > 1/p$.
Carl Offner
Here is an example from probability theory. Let $X_i$ be a sequence of independent identically distributed random variables in $L^1$. The strong law of large numbers asserts that the mean converges to the expectation.
$$a.e. \quad {1\over n}\ \sum_{k=0}^{n-1} X_k \rightarrow E(X_0).$$
What can be said about the speed of convergence ? If we assume that the $X_i$ are in $L^p$ for some $p\in ]1,2[$, then we have:
$$a.e. \quad {1\over n}\ \sum_{k=0}^{n-1} X_k = E(X_0) + o(n^{1/p-1}).$$
coudy
$\begingroup$ Of course, it suffices to have the $X_i$ in $L^2$, which in applications is often the condition you actually use. (There are not so many interesting distributions that are in $L^p$ for some $p \in ]1,2[$ but not in $L^2$.) So this may not really be a counterexample. $\endgroup$ – Nate Eldredge Jun 15 '10 at 14:58
$\begingroup$ Are you joking, Nate? The $p$ stable R.V. for $0<p<2$ are among the most important and studied R.V. after Gaussians and Bernoullis. A $p$ stable R.V. is in $L_r$ for $r<p$ but not in $L_p$. $\endgroup$ – Bill Johnson Jun 16 '10 at 6:49
$\begingroup$ Oh, good point. Mea culpa. $\endgroup$ – Nate Eldredge Jun 28 '10 at 16:38
Hypercontractivity is a powerful technique that makes heavy use of $L^p$ spaces for $p \in (1,\infty)$. Let $||\cdot||_p$ denote the $L^p$ norm. Such results establish for an operator $T$ and function $f$ that $$ ||f||_q \leq ||Tf||_p $$ where $1<p < q$. Perhaps the most striking example is the Bonami-Beckner inequality (originally due to Gross), which establish hypercontractivity for the Ornstein-Uhlenbeck operator and the noise operator, both parameterized by some variable $\epsilon$, on Boolean functions for appropriate values of $p, q$ and $\epsilon$. The most famous application of the Bonami-Beckner inequality to the analysis of Boolean functions is the KKL inequality, which has had an enormous influence on the field. Moreover, any time you see the words log-Sobolev inequality (which happens a lot when studying concentration of measure), hypercontractivity is lurking.
There are also reverse hypercontractivity results for $q < p < 1$. In particular, there are reverse versions for the noise operator and Ornstein-Uhlenbeck operator. These are used in the proof that the majority function is the most stable Boolean function (see here).
You can read more about hypercontractivity for Boolean functions in Ryan O'Donnell's book.
Zachary Hamaker
In fact $L^1$ and $L^\infty$ though natural in a naive sense are less well behaved than the $L^p$ spaces for $1<p<\infty$ from the perspective of dealing with PDEs. Elliptic operators are not well behaved on the Sobolev spaces based on $L^1$ and $L^\infty$ while they are on the other $L^p$ spaces. From the PDE perspective the more subtle friends of $L^1$ and $L^\infty$, namely the Hardy space and its dual BMO of functions of bounded mean oscillation a have a good elliptic theory. (See Stein's big book.)
Tom Mrowka
$L^{-1}$, "harmonic mean", makes sense sometimes in engineering applications. If one would dare to call such thing $L^{-1}$, that is. :-)
Not the answer you're looking for? Browse other questions tagged fa.functional-analysis mathematics-education inequalities big-list or ask your own question.
|
CommonCrawl
|
Entropy and specific heat capacity
I have seen the equation $S(T_2)=S(T_1)+C_p\ln(T_2/T_1)$ where $C_p$ is the molar heat capacity at a constant pressure. I understand that this assumes that the temperature range is sufficiently small that the constant pressure heat capacity does not vary significantly over it.
My question pertains to the derivation of this expression. One of the first steps is substituting $q=C_pdT$ into $dS=\frac{q_{rev}}{T}$. I am not 100% sure as to why these enthalpies are equated. Is it because the heat exchanged by a system at a constant pressure is always reversible heat, or is this not true?
If the above were true, it would explain to me why I never see an expression involving constant volume heat capacity. If the above is not true, could someone also explain why suh a form does not exist?
Finally, is the equation $S(T_2)=S(T_1)+C_p\ln(T_2/T_1)$ only applicable where the pressures are the same at the initial and final temperatures? On the one hand, the derivation assumes a constant pressure in the infintesimal changes from the initial to final temperature (and of course uses a constant pressure heat capacity!); however on the other hand entropy is a state function. So I would think that if one could find how entropy varies wit pressure at a constant temperature, and the change from the final to the initial state involved both a temperature and pressure change, then I would think that the above expression can be used to find the change in entropy due to the temperature change only, and then the additional factor from the pressure could be used?
EDIT: For the second part, from having a quick think about the reversible heat enxchange in and isothermal process, I think $S(T_2,p_2)=S(T_2,p_1)+R\ \ln(p_1/p_2)$. So then one could construct a Hess-like cycle to go from one temperature and pressure/volume, to another temperature and pressure/volume. The use of the cycle and step-wise calulcation works because entropy is a state function I think?
Hopefully I answered the second part of my query (I would appreciate if someone could verify that it is correct), although I am still not sure about why the heats used are the same in the derivation, and whether the expression really does refer to the same pressure at the initial and final temperatures, even though the process in getting from these two temperatures can involve pressure changes (again, thinking about $S$ being a state function, so the process itself when changing states does not matter).
thermodynamics enthalpy temperature entropy pressure
21joanna1221joanna12
It is not true that heat exchanged at constant pressure is always reversible.
But, if you want to determine the change in entropy from thermodynamic equilibrium state 1 at $(T_1,P)$ to state 2 at $(T_2,P)$, you need to forget entirely about the actual irreversible process path that took you from state 1 to state 2. It is of no further use. You instead need to focus exclusively on the two end states. And, in order to apply the entropy equation, you need to devise a reversible path between these exact same two ends states. Any reversible path will do, because they will all give the same value for the entropy change.
In the case of your example, you can choose a constant pressure reversible path for convenience. So, for such a path, from the first law we know that dH=dq. But, now how do we figure out a reversible path between the two states in which the system temperature is changing reversibly along the path? If we contact the system with at constant temperature reservoir held at the final thermodynamic equilibrium temperature T2, that path would not be reversible because there would be a finite temperature difference between the system and the reservoir over most of the path. But, now, suppose we contact the system, not with a single constant temperature reservoir, but with a sequence of constant temperature reservoirs, each at a slightly different temperature over the range T1 to T2. This would guarantee that, over the entire path, the system would differ only slightly from the reservoir it is currently in contact with. So, in this case, $dq_{rev}=dH=C_pdT$, where dT is the change in the system temperature over the present increment of heat exchanged. We use the equation $$dH=C_pdT$$ because of the precise definition of constant pressure heat capacity in thermodynamics: $$C_p\equiv\left(\frac{\partial H}{\partial T}\right)_P$$So now, for the change in entropy, we have: $$\Delta S=\int{\frac{dq_{rev}}{T}}=\int_{T_1}^{T_2}\frac{C_pdT}{T}$$ So the key thing to remember here is that, if you want to determine the change in entropy for an irreversible process, you first need to determine the initial and final end states (say, using the first law of thermo) and then devise a reversible path between these same two end states to calculate the integral of $\frac{dq_{rev}}{T}$ for that path.
Chet MillerChet Miller
Not the answer you're looking for? Browse other questions tagged thermodynamics enthalpy temperature entropy pressure or ask your own question.
Derivation of the relation between temperature and pressure for an irreversible adiabatic expansion
What is the entropy of mixing of two ideal gases starting out with different pressures?
Reversible thermodynamic machine
Is ln(K2/K1) = (∆h°rxn/R)(1/T1 - 1/T2) even when ∆h°rxn varies strongly with temperature?
Entropy increase when heating a gas
How do you find the change in internal energy (q+w)?
General expression for entropy and free energy
Finding final temperature for adiabatic expansion
How to derive the relation between gibbs energy and equilibrium constant?
Heat of formation method
|
CommonCrawl
|
How can magnets be used to pick up pieces of metal when the force from a magnetic field does no work?
I learned that the force from a magnetic field does no work. However I was wondering how magnets can be used to pick up pieces of metal like small paperclips and stuff. I also was wondering how magnets can stick to pieces of metal like a refrigerator?
electromagnetism everyday-life magnetic-fields work
Ben Crowell
sTr8_StrugginsTr8_Struggin
$\begingroup$ A reason that magnets can stick to a refrigerator without doing work is that, since the magnet does not move while sticking, the distance moved is 0, and thus no work is done, by magnetic forces or otherwise. $\endgroup$ – Lily Chung Jun 12 '13 at 19:33
$\begingroup$ related: physics.stackexchange.com/q/10565 $\endgroup$ – leongz Jun 12 '13 at 19:59
$\begingroup$ The related question is good, but the top voted answers are wrong. This link has a truly great discussion on the issue of magnetic fields and work. They start with the opinion the magnetic fields actually do work but eventually (with some very nice arguments) they all agree that there is never any work done by magnetic fields/forces $\endgroup$ – Jim Jun 12 '13 at 20:28
$\begingroup$ Magnets do not pick up "metal". They pick up materials with a high permeability, including those that are ferromagnetic. Most metals are not picked up. $\endgroup$ – Kaz Jun 12 '13 at 23:54
$\begingroup$ @Kaz That's useless nitpicking - the OP quite rightly said that magnets can pick up metal, and never said they can pick up all types of metal. I can buy milk from the store, but I can't buy all kinds of milk from every store. $\endgroup$ – Ken Williams Feb 16 '16 at 20:23
The Lorentz force $\textbf{F}=q\textbf{v}\times\textbf{B}$ never does work on the particle with charge $q$. This is not the same thing as saying that the magnetic field never does work. The issue is that not every system can be correctly described as a single isolated point charge
For example, a magnetic field does work on a dipole when the dipole's orientation changes. A nonuniform magnetic field can also do work on a dipole. For example, suppose that an electron, with magnetic dipole moment $\textbf{m}$ oriented along the $z$ axis, is released at rest in a nonuniform magnetic field having a nonvanishing $\partial B_z/\partial z$. Then the electron feels a force $F_z=\pm |\textbf{m}| \partial B_z/\partial z$. This force accelerates the electron from rest, giving it kinetic energy; it does work on the electron. For more detail on this scenario, see this question.
You can also have composite (non-fundamental) systems in which the parts interact through other types of forces. For example, when a current-carrying wire passes through a magnetic field, the field does work on the wire as a whole, but the field doesn't do work on the electrons.
When we say "the field does work on the wire," that statement is open to some interpretation because the wire is composite rather than fundamental. Work is defined as a mechanical transfer of energy, where "mechanical" is meant to distinguish an energy transfer through a macroscopically measurable force from an energy transfer at the microscopic scale, as in heat conduction, which is not considered a form of work. In the example of the wire, any macroscopic measurement will confirm that the field makes a force on the wire, and the force has a component parallel to the motion of the wire. Since work is defined operationally in purely macroscopic terms, the field is definitely doing work on the wire. However, at the microscopic scale, what is happening is that the field is exerting a force on the electrons, which the electrons then transmit through electrical forces to the bulk matter of the wire. So as viewed at the macroscopic level (which is the level at which mechanical work is defined), the work is done by the magnetic field, but at the microscopic level it's done by an electrical interaction.
It's a similar but more complicated situation when you use a magnet to pick up a paperclip; the magnet does work on the paperclip in the sense that the macroscopically observable force has a component in the direction of the motion of the paperclip.
$\begingroup$ Wikipedia, " It is often claimed that the magnetic force can do work to a non-elementary magnetic dipole, or to charged particles whose motion is constrained by other forces, but this is incorrect[19] because the work in those cases is performed by the electric forces of the charges deflected by the magnetic field." $\endgroup$ – Jim Jun 12 '13 at 15:15
$\begingroup$ that qualifier is used because claims were only made that it did work on the non-elementary dipoles because it was accepted that no work is done on elementary dipoles $\endgroup$ – Jim Jun 12 '13 at 15:25
$\begingroup$ @BenCrowell here's an excellent link to read. A magnetic field does no work period. It turns out to be indirectly caused by it being divergence-less. The magnetic field can redirect and/or carry work but not do work itself. What may seem in some cases to be work done by a magnetic field is actually work done by induced electric fields. $\endgroup$ – Jim Jun 12 '13 at 19:29
$\begingroup$ I took the last couple hours to look this up in more detail. It seems that there is some confusion everywhere. In physics forums (including Physics.SE) it is the general opinion the magnetic fields and forces DO work (mainly based on intuition and anecdotes) and most of the top answers state this. However, in pretty much all official sources (texts, paper, etc) it is that they DON'T do work; it is always the result of induced electric fields/forces or external forces that there is any work done along the direction of the magnetic force. $\endgroup$ – Jim Jun 12 '13 at 19:31
$\begingroup$ Furthermore, were a magnetic field to accelerate an electron, acceleration is not itself doing work; work requires a net displacement. Once the electron starts moving though, we can agree that the Lorentz force would quickly become the dominant force, as it is now a charged particle with a velocity. You also mentioned that macroscopically, it looks like the magnetic field does work. I agree completely. A magnet does do work on the paperclip. As long as everyone agrees and understands that deeper down this work is being done by induced electric fields in the magnet, I have absolutely no issue $\endgroup$ – Jim Jun 13 '13 at 14:18
Although what Ben and others have said might be sufficient, I would like to state my point.
Consider a piece of conductor being lifted up by the magnetic force. The current is towards the right(with velocity $w$), and the magnetic field is going into the page. Hence, the magnetic force is upwards. Now, as the conductor moves up, it gains a velocity $u$ in the upward direction. Hence the magnetic force changes direction as shown in the figure, but the upward component remains the same.
Now, observe that the horizontal component of the magnetic force is acting against the current. To maintain the current, the battery responsible for the current does work against this force and is the source of work done.
A popular analogue in Classical Mechanics is that of the role of normal force in pushing a block upwards a slope. The normal force does no work but is required to move the block up the slope. It's role is simply to redirect $F_{mop}$ in the upward direction. This is exactly the role of magnetic force in lifting stuff.$
Source of Images and Knowledge:Griffiths' Introduction to Electrodynamics
CheekuCheeku
$\begingroup$ I think this is the clearest answer here, although it does not really answer the question. How do you explain a magnet lifting a paperclip? $\endgroup$ – Andrea Jan 22 '16 at 12:54
$\begingroup$ @AndreaDiBiagio The permanent magnet induces poles in the paper clip in such way that the currents in the permanent magnet and the paper clip are parallel.These parallel currents attract each other. The magnetic field is doing some positive work on these objects and exactly same amount of negative work on the moving charges producing the current. So, net amount of work done by magnetic field is zero. The electric field responsible for generating the currents is doing exactly same amount of positive work to keep the current flowing.So, finally, it is that electric field which is doing the work. $\endgroup$ – Archisman Panigrahi Apr 8 '18 at 4:40
$\begingroup$ @Cheeku We are not obliged to "maintain the current". We can connect two plates in a parallel plate capacitor for current flow. $\endgroup$ – Razor Nov 14 '18 at 15:46
The Lorentz force is the only force on a classical charged point particle (charge $q$ - see Ben Crowell's answer about nonclassical particles with fundamental magnetic moment such as the electron). The magnetic component of the Lorentz force $q \mathbf{v} \wedge \mathbf{B}$, as you know, is always at right angles to the velocity $\mathbf{v}$, so there is no work done "directly" by a magnetic field $\mathbf{B}$ on this charged particle.
However, it is highly misleading to say that the magnetic field cannot do work at all because:
A time varying magnetic field always begets an electric field which can do work on a classical point charge - you can't separate the electric and magnetic field from this standpoint. "Doing work" is about making a change on a system, and "drawing work from a system" is about letting the system change so that it can work on you. So we're always talking about a dynamic field in talking about energy transfer and in this situation you must think of the electromagnetic field as a unified whole. This is part of the meaning of the curl Maxwell equations (Faraday's and Ampère's laws). Moreover, once things (i.e. charges and current elements) get moving, it becomes easier sometimes to think about forces from reference frames stationary with respect to them: Lorentz transformations then "mix" electric and magnetic fields in a fundamental way.
A classical point charge belonging to a composite system (such as a "classical" electron in a metal lattice in a wire) acted on by the magnetic field through $q \mathbf{v} \wedge \mathbf{B}$ thrusts sideways on the wire (actually it shifts sideways a little until the charge imbalance arising from its displacement begets an electric field to support it in the lattice against the magnetic field's thrust). The magnetic field does not speed the charge up, so it does not work on the charge directly, but the sideways thrust imparted through the charge can do work on the surrounding lattice. Current elements not aligned to the magnetic field have torques on them through the same mechanism and these torques can do work. These mechanisms underly electric motors.
Another way to summarise statements 1. and 2. is (as discussed in more detail below) that magnetic field has energy density $\frac{|\mathbf{B}|^2}{2\mu_0}$. To tap the energy in this field, you must let the magnetic field dwindle with time, and electric field arising from the time varying magnetic field can work on charges to retrieve the work stored in the magnetic field.
The thinking of current elements shrunken down to infinitesimal sizes is a classical motivation for thinking about the interaction between magnetic fields and the nonclassical particles with fundamental magnetic moments, as in Ben Crowell's answer (I say a motivation because if you go too far classically with this one you have to think of electrons as spread out charges spinning so swiftly that their outsides would be at greater than light speed - an idea that put Wolfgang Pauli into quite a spin).
We can put most of the mechanisms discussed in statements 1. and 2. into symbols: suppose we wish to set up a system of currents of current density $\mathbf{J}$ in perfect conductors (so that there is no ohmic loss). Around the currents, there is a magnetic field; if we wish to increase the currents, we will cause a time variation in this magnetic field, whence an electric field $\mathbf{E}$ that pushes back on our currents. So in the dynamic period when our current changes, to keep the current increasing we must do work per unit volume on the currents at a rate of $\mathrm{d}_t w = -\mathbf{J} \cdot \mathbf{E}$.
However, we can rewrite our current system $\mathbf{J}$ with the help of Ampère's law:
$\mathrm{d}_t w = -\mathbf{J} \cdot \mathbf{E} = -(\nabla \wedge \mathbf{H}) \cdot \mathbf{E} + \epsilon_0 \mathbf{E} \cdot \partial_t \mathbf{E}$
then with the help of the standard identity $\nabla \cdot (\mathbf{E} \wedge \mathbf{H})=(\nabla \wedge \mathbf{E})\cdot\mathbf{H} - (\nabla \wedge \mathbf{H})\cdot\mathbf{E}$ we can write:
$\mathrm{d}_t w = -(\nabla \wedge \mathbf{E}) \cdot \mathbf{H} + \nabla \cdot (\mathbf{E} \wedge \mathbf{H})+\partial_t\left(\frac{1}{2}\epsilon_0 |\mathbf{E}|^2\right)$
and then with the help of Faraday's law:
$\mathrm{d}_t w = +\mu_0 \mathbf{H} \cdot \partial_t \mathbf{H} + \nabla \cdot (\mathbf{E} \wedge \mathbf{H})+\frac{1}{2}\epsilon_0 |\mathbf{E}|^2 = + \nabla \cdot (\mathbf{E} \wedge \mathbf{H})+ \partial_t\left(\frac{1}{2}\epsilon_0 |\mathbf{E}|^2+\frac{1}{2}\mu_0 |\mathbf{H}|^2\right)$
and lastly if we integrate this per volume expression over a volume $V$ that includes all of our system of currents:
$\mathrm{d}_t W = \mathrm{d}_t \int_V\left(\frac{1}{2}\epsilon_0 |\mathbf{E}|^2+\frac{1}{2}\mu_0 |\mathbf{H}|^2\right)\,\mathrm{d}\,V + \oint_{\partial V} (\mathbf{E} \wedge \mathbf{H}).\hat{\mathbf{n}} \,\mathrm{d}\,S$
(the volume integral becomes a surface integral by dint of the Gauss divergence theorem). For many fields, particularly quasi-static ones, as $V$ gets very big, the Poynting vector ($\mathbf{E} \wedge \mathbf{H}$ - which represents radiation), integrated over $\partial V$ is negligible, which leads us to the idea that the store of our work is the volume integral of $\frac{1}{2}\epsilon_0 |\mathbf{E}|^2+\frac{1}{2}\mu_0 |\mathbf{H}|^2$, so the magnetic field contributes to the stored work. It should be clear that this discussion is a general description of any dynamic electromagnetic situation and is wholly independent of the sign of $\mathrm{d}_t W$. So it applies equally whether we are working through the currents on the field or the field is working on us.
The above is very general: we can bring it into sharper focus with a specific example where it is almost wholly the magnetic field storing and doing work: say we have a sheet current circulating around in a solenoid shape so that there is a near-uniform magnetic field inside. For a solenoid of radius $r$, the flux through the solenoid is $\pi r^2 |\mathbf{B}|$ and the magnetic induction if the sheet current density is $\sigma$ amperes for each metre of solenoid is $|\mathbf{B}| = \mu_0 \sigma$. If we raise the current density, there is a back EMF (transient electric field) around the surface current which we must work against and the work done per unit length of the solenoid is:
$\mathrm{d}_t W = \sigma \pi r^2 \mathrm{d}_t |\mathbf{B}| = \frac{1}{2} \mu_0 \pi r^2 \mathrm{d}_t \sigma^2 = \pi r^2 \times \mathrm{d}_t \frac{|\mathbf{B}|^2}{2 \mu_0}$
This all assumes the rate of change is such that the wavelength is much, much larger than $r$. So now, the energy store is purely magnetic field: the electric field energy density $\frac{1}{2}\epsilon_0 |\mathbf{E}|^2$ is negligible for this example, as is the contribution from the Poynting vector (take the volume $V$ in the above argument to be a cylindrical surface just outside the solenoid: just outside the solenoid, the magnetic field vanishes and the Poynting vectors are radial at the ends of the cylinder so they don't contribute either. The above analysis works in reverse: if we let the currents run down, the electromagnetic field can work of the currents and thus the stored magnetic energy can be retrieved.
WetSavannaAnimalWetSavannaAnimal
$\begingroup$ I would suggest weakening the statement in the first paragraph, since it only applies to point charges. In particular, it's false for a fundamental dipole such as an electron. $\endgroup$ – Ben Crowell Aug 13 '13 at 2:32
$\begingroup$ @BenCrowell Hopefully done now without nicking too much of your answer. $\endgroup$ – WetSavannaAnimal Aug 13 '13 at 4:15
$\begingroup$ Nice answer. Two comments. (1) It may be "highly misleading to say that the magnetic field cannot do work at all", but I think your answer is consistent with the following assertions for classical physics: magnetic fields don't do work, electric fields do, and changing magnetic fields produce electric fields (and vice versa). (2) Also, not every "change on a system" does work, since that's precisely what the Lorentz force doesn't do... $\endgroup$ – Art Brown Dec 9 '13 at 6:41
$\begingroup$ @ArtBrown I guess it's a matter of taste: I guess I'm saying that you can't really sunder electric from magnetic fields. You need it in particular to understand the $\frac{1}{2} |\mathbf{B}|^2/\mu_0$ energy density term, which becomes a bit baffling if you take "magnetic fields can't do work" too literally. Your last sentence is true, but I guess emphasizing its converse: change is a necessary but not sufficient condition to do work $\endgroup$ – WetSavannaAnimal Dec 9 '13 at 23:34
$\begingroup$ Although the Maxwell equations (and relativity) mixes up the magnetic and electric field in a fundamental way, there is an unambiguous distinction as to which force is doing the work! The magnetic field (whether time-dependent or not) never works on a classical charged particle by itself. As you said, it induces the electric field which in turn does the work. Unless we invoke the objects with intrinsic magnetic dipole moment (such as an electron with a quantum mechanical spin), there is no way a magnetic field can perform work by itself. $\endgroup$ – Dvij Mankad Nov 4 '18 at 21:31
A magnet picks up pieces of iron because someone has configured that system to have the initial conditions such that this happens. The magnet was moved into a particular location near some pieces of ferro-magnetic metal, or vice versa.
The pieces of metal move because doing so reduces their potential energy in the magnetic field by a greater amount than it increases their gravitational potential.
The system releases energy. When the iron piece strikes the magnet and sticks to it, it produces a sound, and heat. It's not really a question or who or what does the work, but a situation in which a physical system has rearranged itself and changed energy from one form to another.
When the pieces are next to the magnet, they cause the field to be concentrated through them because they are highly permeable. As the magnet is covered with pieces, more and more of its field is concentrated through the pieces, and less and less of it is available for attracting new pieces. It is like a discharged battery.
Eventually you have to "recharge" the system by cleaning up the magnet so that you can keep using it. When you separate the pieces from the magnet, you have to put in energy.
$\begingroup$ It's not really a question or who or what does the work But that's the question that was asked. A given process may be describable both in terms of mechanical work and in terms of an energy transformation. $\endgroup$ – Ben Crowell Jun 13 '13 at 0:31
Below is opinion of Landau & Lifshitz.
Quote from "ELECTRODYNAMICS OF CONTINUOUS MEDIA" (Second Edition), page 128:
"When a conductor moves, the forces
$$\overrightarrow f=\overrightarrow j\times \overrightarrow H\frac{1}{c};\;\text{(1)}$$
($\overrightarrow j$ current density. $\overrightarrow H$ magnetic field)
do mechanical work on it.
At first sight it might appear that this contradicts the result that the Lorentz forces do no work on moving charges.
In reality, of course, there is no contradiction, since the work done by the Lorentz forces in a moving conductor includes not only the mechanical work but also the work done by the electromotive forces induced in the conductor during its motion.
These two quantities of work are equal and opposite.
In the expression (1) $\overrightarrow H$ is the true value of the magnetic field due both to external sources and to the currents themselves on which the force (1) acts.
The total force exerted by a magnetic field on a conductor carrying a current is given by the integral
$$\overrightarrow F=\int\overrightarrow j\times \overrightarrow H\frac{dV}{c};\;\text{(2)}$$
In calculating the total force from (2), however, we can take $\overrightarrow H$ to be simply the external field in which the conductor carrying a current is placed.
The field of the conductor itself cannot, by the law of conservation of momentum, contribute to the total force acting on the conductor."
End of the quote.
Martin GalesMartin Gales
$\begingroup$ Interesting, thanks. I would call L&L's style "terse". It's clever how "Lorentz" force is expanded to include the E-field, which is certainly true but sidesteps the original question. I think Feynman's lecture does quite a bit more preparation for that statement "These two quantities of work are equal and opposite." Are L&L just invoking conservation of energy, or do they have additional reasons? $\endgroup$ – Art Brown Dec 11 '13 at 18:11
$\begingroup$ When a conductor moves, the forces f=j×H do mechanical work on it... In reality, of course, there is no contradiction, since the work done by the Lorentz forces in a moving conductor includes not only the mechanical work but also the work done by the electromotive forces induced in the conductor during its motion. L&L text does talk about the problem "how can magnetic force do work ?" here, but they do not give any explanation, only say that there is another work connected to the magnetic force involved. It is not clear how this statement helps to resolve the problem. $\endgroup$ – Ján Lalinský Jun 1 '14 at 20:09
$\begingroup$ They return to this problem in sec. 63 in a footnote and explain what they mean: they show that magnetic electromotive intensity $\mathbf v \times \mathbf B$ acting on the conduction current produces heat in the conductor per unit time equal to the kinetic energy lost by the conductor per unit time due to mechanical work of the magnetic force damping its motion. This is true, but they mistakenly claim that this resolves the problem with magnetic force doing work. It does not. $\endgroup$ – Ján Lalinský Jun 1 '14 at 20:15
$\begingroup$ First, using the formula for the electromotive intensity $\mathbf v \times \mathbf B$, although correct, has the same problem they are trying to address: it looks like magnetic force doing work on the conduction charges. Second, they only showed that kinetic energy turns into internal energy of the conductor while magnetic field playing some role in the description, not how magnetic force can do any energy transfer (mechanical or thermal) at all. $\endgroup$ – Ján Lalinský Jun 1 '14 at 20:22
$\begingroup$ I have a high opinion L&L but this quote does not go to the core of the issue. $\endgroup$ – my2cts May 25 '19 at 9:59
The work in picking up something is not done by the magnet, but by you!
Were a magnet and a piece of iron in free space (i.e. vacuum and no gravity), they'd simply start approaching one another, converting the potential energy of the magnetic field into kinetic energy. In gravity field, both would fall downwards, but e.g. if the magnet were above the iron, the magnet would fall slightly faster and the iron slightly slower due to the common attraction.
But now there's you (or e.g. a crane) holding the magnet in a fixed position (and the floor preventing the iron from falling via reactive forces). There are two scenarios:
You put the magnet on the iron. In that case they simply stick together and when you lift the two, it is obviously you doing the work
You hover the magnet over the iron. When close enough, the iron will rise to the magnet. But the magnet is also attracted to the iron and pulls downwards. But you won't let it get down, you strain your muscles slightly more in order to counter this. You are doing work when putting the magnetic above the iron, and it is (basically) exactly the amount required to add potential energy to the iron now attached to your magnet.
Tobias KienzlerTobias Kienzler
$\begingroup$ No for the second case. The definition of "work" in the context of physics is the product of force and distance, and as the magnet does not move over a distance, you do no "work" on the magnet. Indeed, we could replace your arm by a bracket and literally no work would be done (except for the tiny distortion of the bracket). However, your muscles are not a conservative system - resisting a force with your muscles does require energy through chemical metabolism, which is a form of "work" but not work which acts on the magnet. $\endgroup$ – Chris Stratton Jun 13 '13 at 21:17
$\begingroup$ @ChrisStratton You're right, it's more like Kaz's answer states - you're adding energy to the system by bringing magnet and iron closer together while forcing the iron to stay on ground. $\endgroup$ – Tobias Kienzler Jun 16 '13 at 8:49
$\begingroup$ Also no for the first case: when you pull up, you are doing work on the magnet. The energy necessary to lift the clip is definitely inputted by you, but you are doing no work directly on the clip! $\endgroup$ – Andrea Jan 22 '16 at 12:48
$\begingroup$ @AndreaDiBiagio I wasn't stating otherwise, though my wording is probably improvable... $\endgroup$ – Tobias Kienzler Jan 22 '16 at 12:52
$\begingroup$ So the question is still open! How does the magnet do work on the clip, if it does no work on any of the particles composing it! $\endgroup$ – Andrea Jan 22 '16 at 12:55
From the formula of the Lorentz force it appears that the magnetic field does not do work by definition. The magnetic contribution is perpendicular to the displacement it causes. However the time derivative of the magnetic field is identical to the rotation of the electric field, so it implies the existence of an electric field that does work. So while formally B does not do work, a changing magnetic field is directly associated with work.
The root cause of this confusion is that E and B are not independent quantities, although from the Lorentz force alone they seem to be.
my2ctsmy2cts
The magnetic field causes the orientation without actually doing any work. The formula for work is not $F$, it is $W = \int F \cdot dr$, where the integral is along the path and $\cdot$ is the vector dot-product.
If you do your calculations correctly, you will see $W = 0$.
Mag FieldMag Field
Well, here we go in maths.
W = SF.ds F = ma = m d2/dt2. F = qv x B = - vB x v = vmB x (v x dv/dt) F = vmB x 0.5x(v x ds2/d2t + dv/dt x dv/dv) F = When v = constant => dv/dt & other derivates are 0), thus SF.ds = 0
$\begingroup$ I am thoroughly confused as to how you transitioned between those steps specifically how you got to F = -vB x v. I would appreciate a bit of explanation as I cannot follow your thought process. $\endgroup$ – sTr8_Struggin Jun 12 '13 at 19:28
$\begingroup$ @Mag Field: Welcome to physics.SE. Instead of writing two answers, it would be better to simply edit your original answer to add more detail. You can still do that now, and delete the superfluous answer after copying it into the other answer. Your math will be much more readable if you mark it up using LaTeX, as explained here: physics.stackexchange.com/help/notation $\endgroup$ – Ben Crowell Jun 12 '13 at 22:36
Not the answer you're looking for? Browse other questions tagged electromagnetism everyday-life magnetic-fields work or ask your own question.
Work done by magnetic field
How can magnetic fields have energy and yet be unable to do work with that energy?
Questions about Magnetic field
Work done by magnetic forces
Magnetic Forces and Junkyard Magnets: What's Going On?
Work by magnetic field
Contradiction in the fact that magnetic fields do zero work
Work done by a magnetostatic field
How does permanent magnets attract each other if the $B$-field can do no work?
Magnetic field and magnet
How can a magnetic field produce an emf if the force produced does no work?
Can the magnetic force between two magnets be explained classically via magnetization current?
Do metal shavings affect the magnetic field they are used to visualize?
What would the magnetic field of two (practically identical) static cylindrical permanent magnets stuck together (called compound magnets) look like?
How can a refrigerator magnet be stronger than the Earth's magnetic field?
Does the magnetic part of the Lorentz force do work?
|
CommonCrawl
|
Malaria Journal
Willingness-to-pay for long-lasting insecticide-treated bed nets: a discrete choice experiment with real payment in Ghana
Y. Natalia Alfonso
Matthew Lynch
Elorm Mensah
Danielle Piccinini
David Bishai
Expanding access to long-lasting insecticidal nets (LLINs) is difficult if one is limited to government and donor financial resources. Private commercial markets could play a larger role in the continuous distribution of LLINs by offering differentiated LLINs to middle-class Ghanaians. This population segment has disposable income and may be willing to pay for LLINs that meet their preferences. Measuring the willingness-to-pay (WTP) for LLINs with specialty features that appeal to middle-class Ghanaians could help malaria control programmes understand what is the potential for private markets to work alongside fully subsidized LLIN distribution channels to assist in spreading this commodity.
This study conducted a discrete choice experiment (DCE) including a real payment choice among a representative sample of 628 middle-income households living in Ashanti, Greater Accra, and Western regions in Ghana. The DCE presented 18 paired combinations of LLIN features and various prices. Respondents indicated which LLIN of each pair they preferred and whether they would purchase it. To validate stated willingness-to-pay, each participant was given a cash payment of $14.30 (GHS 65) that they could either keep or immediately spend on one of the LLIN products.
The households' average probability of purchasing a LLIN with specialty features was 43.8% (S.D. 0.07) and WTP was $7.48 (GHS34.0). The preferred LLIN features were conical or rectangular one-point-hang shape, queen size, and zipper entry. The average WTP for a LLIN with all the preferred features was $18.48 (GHS 84). In a scenario with the private LLIN market, the public sector outlay could be reduced by 39% and private LLIN sales would generate $8.1 million ($311 per every 100 households) in revenue in the study area that would support jobs for Ghanaian retailers, distributors, and importers of LLINs.
Results support a scenario in which commercial markets for LLINs could play a significant role in improving access to LLINs for middle-income Ghanaians. Manufacturers interested could offer LLIN designs with features that are most highly valued among middle-income households in Ghana and maintain a retail price that could yield sufficient economic returns.
Malaria Long-lasting insecticide nets Commercial private markets Discrete choice experiment Willingness-to-pay Middle-income Ghana
LLIN
long-lasting insecticidal bed nets
disability-adjusted-life-year
insecticide-treated net
Global Technical Strategy
willingness-to-pay
discrete choice experiment
Ghana Statistical Services
rectangular four-point hang
rectangular single-point hang
fractional factorial design
socio-demographic variables
One of the most cost-effective strategies for reducing the global malaria burden is sleeping under long-lasting insecticidal bed nets (LLINs). It is estimated that LLINs offer a cost of $27 per disability-adjusted-life-year (DALY) averted [1, 2]. They are effective even in areas with mosquito resistance to insecticides [3]. Between 2010 and 2016, the proportion of people at risk of malaria in Africa sleeping under an insecticide-treated net (ITN), including LLINs, increased from 30 to 54% [4]. The 2016–2030 goal of the Malaria Global Technical Strategy (GTS) and the World Health Organization (WHO) is to achieve and maintain universal coverage with LLINs, specifically one net for every two persons at risk of malaria. Multiple strategies will be required to grow more coverage including mass free net distribution campaigns and the growth of new and under-utilized channels, such as commercial sector channels [5].
Expanding access to malaria control measures is difficult given the many demands on limited -government health budgets [6]. A strategy that has gained traction but needs further research is expanding the role private commercial markets could play in the distribution of LLINs. The essential principle is focusing scarce government resources on offering protection to those who cannot afford commercially sold products, but to allow the emergence of an upscale market to take some of the financial pressure off of the government. This pressure relief system cannot take root unless commercial firms choose to enter markets where they are competing against a flood of free bed net products. Commercial private sector sales can be an important source for supplying LLINs to non-poor households willing-to-pay for a LLIN. The commercial sector can also serve as a backstop for poor households in the event that public sector funding and channels cannot supply enough nets to increase coverage or replace worn-out nets [2].
Prior to the era of publicly-funded mass distribution campaigns for LLINs, a commercial market for LLINs existed in several African countries. While mass campaigns rapidly increased ownership of LLINs bringing major benefits to millions of families, it came at a cost to the commercial market, which has diminished due to a lack of incentives for the private sector given users' dependency on donor-provided free nets. The absence of a commercial market puts the entire financial burden of LLINs for both low- and middle-income households on the public sector and that burden can be overwhelming.
In 2016, governments and international partners spent US$ 2.7 billion on global malaria control and elimination (e.g. below $2 per person at risk of malaria) [4]. Out of the total, 74% was spent in Africa and the African governments' contribution was 31%. These expenditure level would need to triple by 2030 in order to meet global malaria reduction targets [4, 7, 8]. Households also bear significant out-of-pocket (OOP) costs related to the treatment of malaria. For example, in 2014, Ghanaian households paid an average of $2.10 and $11.8 OOP in direct and indirect costs, respectively, per malaria treatment at formal health facilities [9]. Similarly, local businesses in high-burden countries are also affected by increased staff absenteeism and private healthcare costs.
Likewise, while the WHO predicted that 21 countries could eliminate malaria by 2020, 11 of these have shown marginal but consistent increases in malaria cases, and another 25 countries, mostly in Africa, show case increases of 20% [4, 6]. One contributory cause for stalled progress is a lack of sustainable and predictable funding [6]. As the stakes for controlling malaria increase, strengthening and identifying efficient strategies to widen funding sources for LLINs is crucial.
Little is known about the potential demand for high quality LLINs that could be sold in commercial markets, particularly among non-poor households. Three studies in Tanzania, Madagascar and India, using rigorous consumer preference measures, found low demand for LLINs among mostly poor-income households, but also indicated the potential for making demand stronger through micro-consumer loans and voucher subsidies [10, 11, 12]. The study in Tanzania also found strong demand (44%) for LLINs among a sub-group of least-poor households and significant willingness-to-pay for LLINs that matched consumer preferences for nets' shape and size. However, the focus of the Tanzania study was not the non-poor households. Understanding what is the market potential among the non-poor households is important given that they make 52% of the population needing LLINs in high malaria risk countries in Africa and that they are the population more likely to create a sustainable commercial market of LLINs [13].
This study seeks to evaluate the demand and willingness-to-pay (WTP) for LLINs with characteristics that match consumer preferences among the middle-income households in a high malaria risk country such as Ghana. The study assesses whether there is a statistically significant demand for buying LLINs among middle-income Ghanaians and determines what LLIN features or "attributes" consumers find most attractive. The study also estimates how many LLINs are likely to sell and the government's net costs and savings in a scenario where all LLINs are distributed free (i.e., only a public programme exists, "the status-quo") versus a scenario in which only the poor get subsided LLINs (i.e., a public programme of free LLINs targeting poor households and a private market of improved LLINs targeting non-poor households coexists). Evidence from this study can inform decision-making whether private commercial markets can play a larger role in the continuous distribution of LLINs, help increase access to LLINs, and reduce funding gaps in malaria prevention.
Estimates of the demand and WTP of LLINs were based on a representative sample of middle-income households from three regions in Ghana: Ashanti, Greater Accra, and Western. The evaluation used a discrete choice experiment (DCE) design with a real payment choice. A DCE is a quantitative technique based on conjoint-analysis theory that elicits consumer stated preferences for commodities from a sample population. The DCE technique was selected over other stated preference techniques, such as contingent valuation, because it allows for the valuation of trade-offs between multiple net characteristics or "attributes" (i.e., size, shape) and characteristic types or "attribute levels" (i.e., colour types: white, blue, green) [14]. DCEs are a widely applied approach in research associated with health commodities [15].
The study targeted the regions in Ghana where people are at risk of contracting malaria, have the lowest saturation of household LLIN ownership and have purchasing power to buy their own LLINs. These three criteria ensured that the evaluation focused on the areas with the potential to capture a market share for a sustainable commercial market of LLIN. In 2017, the number of confirmed malaria cases in Ghana was 150 per 1000 people [16]. Out of the ten regions in Ghana, Ashanti, Greater Accra, and Western regions had high incidence of malaria ranging between 100 and more than 300 cases per 1000 people, with a very few small areas with lower rates) [16]. These three regions were also the most urbanized (64%, 92% and 45%, respectively) and had low LLIN ownership (70%, 61% and 67%, respectively, owning at least one LLIN) [17, 18]. The selection of areas with high purchasing power (i.e., the middle-income population) was based on areas with the lowest poverty rates. The poverty rates used to select regions and districts into the sampling strategy were those estimated by the Ghana Statistical Services (GSS) 2015 Ghana poverty mapping report [19]. The GSS estimates poverty at the district and lower levels of disaggregation based on estimates of per capita consumption by combining information from censuses and household consumption surveys. Ashanti, Greater Accra, and Western regions had the lowest poverty incidence (15%, 5.6%, and 21%, respectively) [17, 18, 19]. Households in these regions had a per capita income of at least US$4 (GHS 18) per day, with an average yearly household income of US$7775 (GHS 34,445), which is two times the yearly national average of household income, US$3757 (GHS 16,645) [17]. Within these three regions, the study focused on the 28 non-poor districts (i.e., districts with poverty rates lower than 9.6%) [19]. Individuals eligible to participate in the DCE were adult (18+ year olds) household members with knowledge about the use of bed nets and finances in the household.
A cross-sectional study design was used. Out of 1075 households recruited for a broader LLIN household survey [20] evaluating malaria ideation and LLIN usage among the same study population, a random sub-sample of 628 households were selected to take part in the DCE. The sampling frame used a stratified two-stage cluster sampling method, see Additional file 1: Appendix A for sampling details.
To evaluate household preferences between 13 different LLIN attribute levels and survey questions with 3 choice set alternatives in a DCE (see DCE sections below for details), the minimum sample size required was 600 respondents. This sample size provides sufficient statistical power for the DCE based on having a minimum of 50 respondents per alternative plus an additional 50 (3 × 50 + 50 = 200) [21, 22] and 200 participants per sub-group analysis (200 × 3 regions = 600) [23]. This sampling strategy has been used in prior DCE studies and meets other literature's minimum sample size criterion of having at least 30 respondents for every level tested (13 levels × 30 = 390) [24, 25]. This statistical power is needed in order to statistically differentiate the effect of price between different attribute levels.
Identifying the DCE attributes and assigning attribute levels
The selection of LLIN attributes and attribute levels included in the questionnaire was supported by data from a pilot DCE with 50 respondents and qualitative techniques collected from the same study population [26]. These mixed research methods were designed to understand what improvements to the LLIN attributes were the most desired and which were affordable and feasible for manufacturing. Qualitative techniques included focus groups (with 60 adults and 30 teenagers) employing both semi-structured opened-ended questions and human-centered design (HCD) [26]. These methods focused discussion to understand people's facilitators and inhibitors to LLIN use and product preferences. Other qualitative methods included retail audit reviews, key informant interviews with LLIN supply chain importers and wholesalers, and recommendations from malaria programme experts [20, 27]. Evidence from the qualitative data was used to construct the list of LLIN attributes tested in the DCE, see Table 1 for the final list of the four attributes (i.e. shape, size, entry-design and price) and 13 levels. The attributes shape and entry-design each included 3 levels. The attribute size included two levels, and the price included 5 levels. The minimum and maximum prices tested were $1.10 (5 Ghanaian Cedi, GHS) and $14.30 (65 GHS), respectively. The maximum price was equivalent to a 25% margin above the estimated cost for manufacturing and distributing the LLIN with all the most expensive attribute levels (i.e. size queen, zipper entry, rectangular 4-point-hang). The maximum LLIN price estimate, $11.44 (GHS 52), was derived from information provided by retailers during retail audit interviews.
List of LLIN attributes and attribute levels
The 13 attribute levels were the final list out of 17 levels considered for the model. See Additional file 1 for details on the inclusion criteria for attribute levels
Designing the DCE choice sets
The DCE survey instrument was composed of 18 choice questions. Each question provided participants three alternatives: whether to buy "LLIN A", to buy "LLIN B", or to buy neither "to opt out". LLIN alternatives A and B each specified a level for each attribute tested in the model. The different types of LLINs that could be created combining 13 levels from four attributes is 90 (= 3 × 2 × 3 × 5). These 90 LLINs could be combined into (90 × (90 − 1)) = 8010 choice pairs (i.e. LLIN choice A or B pairs), known as the "Full Factorial Design". This large array of choice questions was reduced to a manageable number using orthogonal fractional factorial design (FFD) [14, 28, 29]. FFD is a statistical technique commonly used for DCE designs that draws a small sample of choice-pairs such that each level appears enough times in the survey for the analysis to capture the effect of changes to each level on the probability of purchase ("LLIN demand"). The maximum and minimum number of survey questions recommended for a DCE, ensuring both the collection of enough data for drawing statistical inferences and the reduction of participant exhaustion, is between K/(J − 1) and 18, where K is the number of attributes (4) and J is the number of choice "alternatives" (A, B, Neither = 3), thus between 2 and 18 questions. As such, we designed a survey with the maximum of 18 choice questions to maximize statistical power, where 15 choice questions were used for the FFD and the remaining 3 choice questions were used to test for participants' response rationality and consistency, see Additional file 1: Appendix B for details and the questionnaire design. The FFD was calculated using the statistical software R version 3.4.3.
Lastly, after the DCE, the survey also included questions about malaria ideation, reasons for bed net ownership and use as well as the use of insect sprays and the presence of air-conditioners in the home.
Binding intention to buy the product
The DCE survey questions asked participants: "Which LLIN are you most likely to purchase: Bed Net A, Bed Net B, or Neither A nor B is preferred?" To mimic as close as possible an everyday purchasing situation, each participant was given money to elicit a validated "bidding" purchase choice (i.e., a true stated preference) instead of a hypothetical choice [27]. Each respondent received a cash payment of $14.30 (65GHS) in the local currency. This amount was sufficient to pay for the highest LLIN price in the survey. Thus, for each of the 18 choice questions, respondents knew they would be immediately able to buy any LLIN option if they wanted to buy it and retain any remaining change (the difference between $14.30 and the LLIN price specified in the alternative). Likewise, they were explicitly told they could opt out and keep all of the cash—just like in a real shopping situation.
It was not logistically possible for survey administrators to carry 18 types of nets with them in the field, thus only four real net types were stocked. The four net types in stock were the nets specified in two out of the 18 choice questions. At the end of the survey an electronically generated number randomized one out of the two survey questions with nets in stock. Survey administrators were blind to the randomly generated number. The respondent's actual stated preference corresponding to the randomly chosen question was reviewed with them and based off their response the participant received either the Net A and the balance of remaining change, Net B and the balance of remaining change, or all of the cash payment and no net. Participants were alerted from the outset about how their stated preferences would be made binding and have real consequences.
Before the participants were administered the DCE survey, they were provided detailed contextual information about how the study procedures worked. LLIN attributes and levels were defined using both standardized text and 11 × 11 inch laminated cards with pictures, and the experiment was preceded by a practice mini-DCE using candies to help participants understand the exercise, see Additional file 1: Appendix B for details. During the experiment, each choice set was also illustrated in laminated cards with pictures making each choice set visual and facilitating comparison. DCE survey facilitators were trained in administrating the survey and answering participant questions. Participants signed consent forms and interviewers administered the DCE survey using electronic tablets. DCE survey questions appeared in random order and the order in which each question was answered for each participant was recorded and used in the analysis as a control for participant survey exhaustion. See Additional file 1: Appendix B for DCE design and procedure details. All human subject research activities were reviewed and approved by both the Johns Hopkins University and Ghana Health Service internal research review boards.
Statistical strategy
Assessment of LLIN demand was done using multivariate logistic regression with random effects, a model of observation errors clustered by person-question-ID to correct for unobserved or random preference variation [23], with controls for attribute levels, competing alternatives, and respondents' demographic and socio-economic characteristics, using the following equation:
$$\begin{aligned} BuysLLIN & = \beta_{0} + \beta_{1} Price + \beta_{2} Shape + \beta_{3} Size + \beta_{4} EntryDesign \\ & \quad + \beta_{5} OtherALTlevels + \beta_{6} SES + \beta_{7} HHmmbrs + \beta_{8} Qorder \\ & \quad + \beta_{9} InterviewID + \beta_{10} Price*Female + \beta_{11} Price*Rural + \varepsilon . \\ \end{aligned}$$
Although a respondent was making only 18 declarations of (A, B, or Neither), we can view the exercise as a set of 36 forced choice binary declarations of "Yes I would buy the option on this card" albeit each of these declarations was made in the context of a defined competing alternative. BuysLLIN is a binary response variable equal to one if the respondent's choice was to buy the net and zero otherwise. Price is the LLIN price coded as a continuous variable. As shown in Table 1 above, shape is a vector of three levels, each coded as dummy variable, including rectangular 4-point hang (R4p), rectangular 1-point hang (R1p) and conical (omitted in the analysis as a base-level of comparison); size is a dummy equal to one for queen and zero for double; and entry design is a vector of three levels including lift-over-head, zipper, and the base-level flap-overlapping. Coefficients can be interpreted as the change in the probability of buying a LLIN with that attribute level compared to the base-level, holding other variables at their means.
Other ALT levels is a set of vectors of the Price, Shape, Size, and EntryDesign of the alternative card that was the context for the one that was under consideration. The model included various socio-economic and demographic variables about the respondent including: a sex dummy equal to one for females and zero otherwise, a secondary education dummy, a married dummy, a vector of three region variables, including Ashanti, Western and the base-level Greater Accra, a SES vector of five wealth dummies (all pertaining to "non-poor" Ghanaians) where the base-level is the lowest wealth, and a dummy for residency type equal to one if rural. HHmmbrs is a continuous variable on the number of household members. Qorder is a continuous variable on the order in which that survey question appeared, InterviwerID is a vector of all 12 survey interviewer ID dummies added to control for the influence of individual interviewers on the choice to buy, and lastly, Female*Price and Rural*Price are interaction terms on the difference between the price females and males pay, and between the price individuals in rural and urban areas pay, respectively.
The error term, ε, was modelled as a random intercept that could be decomposed into components from within the individual and within a particular card set to account for the non-independence of observations that were clustered.
Three of the 18 questions were planted only to test for invalid responses (e.g., they contained "no brainer" options where one alternative was superior across all domains). The analysis was run on the subset of 15 questions from the FFD. The logistic regression coefficient values were converted to marginal effects to ease their interpretation as elasticities of the probability of purchase. Demand curves plotted the predicted probability of purchase from the model vs. price. Estimates of average total revenue (ATR) at any given price was calculated as the product of price times probability of purchase for every 100 individuals encountering an opportunity to purchase. Over 50 regression specifications were tested adding one SES variable a time and testing consistency of results. The robustness of results were also explored removing the irrational, inconsistent or always-buyers ("disengaged") responses [23]. Probability of purchase and WTP estimates were stratified by key LLIN attributes and individual characteristics. Similarly, the probability of purchase for both the least and most attractive LLINs were estimated at the average WTP price as well as at the low and high price points of $4.40 and $13.20 (GHS 20 and 60). The analysis was computed using STATA software version 14.
Lastly, we used a microsimulation model to estimate the total public cost and coverage outcomes under two policy scenarios. In scenario one, the public sector buys at least one LLIN for households in a defined population. In scenario two, there is a commercial market that conforms to the WTP estimated by the model. A Monte Carlo simulation with 1,000,000 iterations of the model was run to produce confidence intervals (CI) around cost and savings estimates.
Among the 628 DCE participants, a majority were females (61.3%), had secondary education (70.0%), and were heads of the household (75.2%), see Table 2. About half of the population were over age 35 and married (52.2%), most lived in urban areas (89.2%) and were employed. The average household size was 3.1 members and the majority of households (69.3%) did not have a bed net. The average net ownership was 1.7 nets per household. Among the minority (30.7%) with at least one bed net only 0.9% of them reported using it. The majority of the study population believed that malaria is not easy to treat, that insecticide-treated nets are effective and that it is easier to get a good night sleep when sleeping under a bed net. But, they also believed that it is difficult to sleep well under a bed net when the weather is warm. See Additional file 1: Appendix C for more results on net ownership and malaria ideation.
Study population descriptive statistics, total n = 628
Participants were also asked about the reasons for not owning a bed net and ownership of other mosquito control products (results not listed in the table). The main reasons for not owning a bed net were use of other malaria control products (58%), the weather was too hot for using them (32%), and not getting a LLIN during the mass campaign (17%). Less than 10% of the households mentioned other reasons for not owning a bed net, including could not afford it, it feels restrictive, has adverse health reactions and gets worn out. A total of 82% of the study participants used other mosquito control products including air conditioning, electric fans, aerosol insecticide sprays and coils. Out of the study population, about one-fifth (19%) had air conditioners in their household, 5% of them used the air conditioners exclusively to protect against mosquito bites and none of the households used LLINs. Out of the study population, 94% had electric fans in the household, one-fifth (18%) of them used the electric fans exclusively to protect against mosquito bites, and a very small percentage (0.2%) used both LLINs and electric fans complementarily. Likewise, 50% of the air-conditioner users and 61% of the electric fan users used these mosquito control products instead of LLINs because these were easier to use. Another 50% of the air-conditioner users and 34% of the electric fan users used these mosquito control products instead of LLINs because they believed those were more effective preventing mosquito bites. Aerosol insecticide sprays were the most widely non-LLIN malaria control product used (61%) among the study population. Out of the households using insecticide sprays, 24%, 56% and 20% used them because they believed that compared to LLINs those were more effective controlling against Malaria, easier to use and more affordable, respectively. The majority of mosquito coils and repellent users believed that those products where either easier to use or more affordable than LLINs.
Internal validity tests revealed that 9.24% (58), 5.41% (34) and 48.25% (303) of participants made choices that were irrational, inconsistent or anchored to buying every choice ("always-buyers"), respectively. See Additional file 1: Appendix C Tables S2 and S3 for details. Knowing that some DCE respondents violated assumptions about rationality and consistency led us to restricted results the sub-sample of 541 respondents who were neither irrational nor inconsistent.
The LLIN demand curve
The sample had an overall probability of purchasing a LLIN of 43.8%. On average, purchasers were willing to pay $7.48 (GHS 34.0) for a LLIN across all presented attributes. Based on these results, for every 100 middle-income Ghanaian households in the study areas, the average total revenue (ATR = price x quantity) from LLINs purchased would be $327.49 (or GHS 1488.61). The difference in the mean probability of purchase between those who already owned a LLIN and those who did not own at least one LLIN was a 1.5% percentage point difference (p value < 0.00). As expected, regression results showed that price negatively affected the probability of LLIN purchase, see Table 3. For every price increase of $0.22 (or GHS 1), the probability of purchase decreased by an average of 0.13% (p < 0.00), holding all other net attribute levels and respondent characteristics at their means.
Estimate of the DCE demand model
R4p and R1p is rectangular 4-point hang and 1-point hang, respectively. The number of asterisks indicates the level of statistical significance where: *** is p-value < 0.00, ** is p-value < 0.05, * is p-value 0.10, and no asterisk means not statistically significant. The list of interviewer dummies is not shown to reduce the size of the table. The marginal effect for each region and price interactions were not estimable. However, we show results for each region in the table of stratified analysis
Figure 1 shows that within the range of prices tested in the DCE, $1.10–14.30 (GHS 5.0–65.0), the price elasticity of demand was inelastic, with an average elasticity of − 0.11 (C.I. − 0.087 to − 0.13). This means that changes in the price only produce modest changes in the quantity demanded. An elasticity of − 0.11 means that a one percent increase in the LLINs' price would decrease demand by 0.11% (on average, a one percent price hike is $0.07 or GHS 0.34). Because of the inelasticity, in a hypothetical population of 100 middle-income households, increasing the price by one percentage points above the average would increase total revenue from $327.49 to $366.85. Likewise, the proportion of respondents willing to pay the highest price tested in the analysis, $14.30 (GHS 65.0), was only slightly lower than the proportion willing to pay for the average WTP (39.5% vs. 43.8%). At the highest price tested, $14.30, the average total revenue is $565.06 (or GHS 2,568,44) per 100 households. See Additional file 1: Appendix Tables S4a, b for tabulations of demand probabilities and price elasticities of demand by WTP.
Long-lasting insecticide-treated bed nets (LLIN) demand curve. Note: The dotted line is the mean linearly fitted demand probabilities. The solid line are the raw data (responses from the 628 respondents to each of 15 choice questions)
Effect of attribute changes on demand
Table 3 also shows the effect of substituting levels on the probability of purchase. The tradeoff between shape R4p and R1p, or from R4p to conical, increased the average probability of purchase by 2.22% (p < 0.05) and 3.56% (p < 0.00) percentage points, respectively. The difference in the probability of purchase between R1p and conical was not statistically significantly different. Increasing net size from double to queen increased demand by 3.31% (p < 0.00) and changing the net entry design from either lift-over-head to zipper, or from flap-overlapping to zipper, increased demand by 3.83% (p < 0.00) and 4.22% (p < 0.00), respectively. The difference in the demand between lift-over-head and flap-overlapping was not statistically significantly different.
Effect of respondent characteristics on demand
Various sociodemographic characteristics also changed demand for LLINs. Males' had a slight 0.31% percentage point (p < 0.00) higher demand than females. Demand by those 18–25 years old was also slightly higher, by 2.08% percentage points (p < 0.10 and 0.00), compared to those 46 and older. Having secondary education or being from the highest income group (within our middle-income population sample) slightly decreased demand by 1.03% (p < 0.00) and 1.52% (p ≪ 0.05), respectively. Living in a rural area increased demand by 0.99 percentage points (p < 0.00).
Sensitivity analysis of demand changes
Some analysts believe that real consumers exhibit features of non-rationality and anchoring in their market behavior. Thus, additional models were examined to assess whether the results would change by including responses from the respondents who showed irrational or inconsistent choice behaviour. In general, findings were very similar to the main results, see Additional file 1: Appendix C Table S5.
Similarly, models excluding the sub-group (n = 303) who anchored to always buying in all 18 choice-sets were also examined. Excluding the always-buyers made the shape level R1p not statistically preferable to R4p (i.e., only conical would be the preferred shape) and all other results remained constant, see Additional file 1: Appendix C for details. In our main results, including the always-buyers, both the R1p and conical shapes were preferred to the R4p. This may indicate that the R1p feature was especially appealing to this subset of always-buyers.
Stratification of analysis by sub-populations
Sub-group analysis of individuals with and without LLIN in the household, females vs. males, and urban vs. non-urban were generally the same with demand elasticities ranging between 0.04 and 0.15% (p-values < 0.00). See Additional file 1: Appendix C Table S6.
Attribute combinations most and least likely to increase LLIN demand
Changing LLIN attributes from the least attractive levels (e.g., R4p, double size, lift or flap entry) to the most attractive (e.g., conical or R1p, queen size, and a zipper entry) shifts the demand curve to the right, see Fig. 2. With this shift, holding the average probability of purchase constant (43.8%), the average WTP increased from $3.30 (GHS 15) to $18.48 (GHS 84), respectively. Re-engineering product attributes from the least attractive, which are the most common type of nets given through free distribution channels, to the most attractive LLIN would improve ATR from $144.48 (GHS 656.74) to $809.10 (GHS 3677.73) per every 100 households.
Demand shifts. Notes: Lines are mean linearly fitted demand probabilities using multivariate regressions and margins
Public cost savings and commercial market revenue
Given that the data sample is representative of all the 2.6 million households living in the three study regions (e.g. 32 districts with lower than the average poverty rate: 0.7–9.6%) it is possible to extrapolate data from the sample to the population in these regions. Assuming that current LLIN coverage levels are met and the cost per LLIN is the manufacturers' price of $4.38 [20], the total cost of providing one standard unenhanced LLIN to each current owner (0.87 million households) would be $3.7 million in 2017 USD, see Additional file 1: Appendix Figure S1a and Table S7a, b for model calculations and parameters. In an alternative scenario in which the commercial market offers an enhanced LLIN for sale at a price of $7.48 per LLIN, we project private sales of LLINs producing revenue valued at $8.1 million (95% C.I. 5.9 –10.6 M) from 1.08 million non-poor households (or $311 per every 100 households in the study area). Revenue from private sales would support jobs for Ghanaian retailers, distributors, and manufacturers of LLINs. Total LLIN coverage would increase by 85% from new LLIN owners from non-poor households who did not already have a LLIN. Of note, we project that the public sector outlay would be reduced by 39% (95% C.I. 50.30–27.89%), from $3.7 million to $2.3 million, from people who are now buying their own LLINs, rather than relying on the ones given through free distribution. Estimates assume a conservative scenario in which poor households do not buy LLINs. The Additional file 1: Appendix Figure S1c shows additional estimates for scenarios in which LLINs are purchased to close the LLIN coverage gap as opposed to meeting current coverage levels.
The results support a scenario in which commercial markets for LLINs could play a significant role in improving access to LLINs for non-poor Ghanaians in malaria-endemic areas. Despite low bed net ownership and significant ownership of non-LLIN malaria control products the discrete choice experiment showed strong demand and high willingness to pay for improved LLINs. Low ownership and use of LLINs among the population may be due to low supply of nets that consumers like [20]. For instance, only 7% [20] of local markets in the study area sold nets and those most commonly available had the combination of characteristics that were the least preferred out of all the attributes tested in the DCE (i.e., rectangular 4-point-hang, size double, and lift overhead entry LLINs).
The study showed that the average demand for LLINs with a variety of attributes is strong among both current net owners (44.8%) and non-net owners (43.3%) asserting and demonstrating a high willingness to purchase. The study validated their statements by observing not just their statements in a DCE but their actual purchases with money that they could have kept for any alternative use. Likewise, participants' mean willingness-to-pay, $7.48, was much higher than the mean price of $4.38 for imported nets currently sold in local markets [20]. The price elasticity for LLINs was inelastic, thus, changes in price around the range tested will not significantly decrease demand. However, the price elasticity of demand may be elastic (see limitations below) at prices higher than the price tested in this analysis (e.g., $14.30). Marketing LLINs with the most attractive attributes has the potential to increase average demand and WTP by up to 8.27% percentage points and $18.48, respectively. Results also indicate that individual characteristics, such as living in a rural area, being male, being less wealthy and living in the Ashanti Region were also associated with higher LLIN demand.
The private and public commercial markets for LLINs would augment each other offering different products and targeting different segments of the population. The private market would develop if suppliers focus on addressing the demands of the middle-income population. The public market of free LLINs could then continue meeting the public sector's demands for LLINs distributed to low-income populations with freed up resources. Together, the private and public strategies encouraging commercial markets for LLINs will help increase the population's access to LLINs as long as there remains a public commitment to ensure access to free LLINs for those for who cannot pay for them out-of-pocket. Likewise, building a stronger private commercial market for the LLINs with improved attributes could spare the public sector more than half of the cost of supplying the current free LLINs to individuals who have the means and willingness to buy LLINs with improved features. The public sector could increase the low-income populations' access to LLINs by allocating newly freed up funds to either expand free LLINs distribution campaigns or subsidize LLINs available in the private market. The demand exists to generate substantial revenue for a private commercial market. This market would also benefit society by creating new economic opportunities for local Ghanaians retailers. The strong demand for LLINs also has the potential to increase LLIN usage because middle-income households would be acquiring a product that they like and thus be more likely to use it.
Six studies in sub-Saharan Africa have looked at the WTP of LLINs, including one from Ghana. All studies find a negative association between price and demand, but some find different levels of demand and WTP. Some of these studies used less robust preference-based study designs that suffer from significant bias for estimating WTP, targeted poor households, or tested different products, making results incomparable. For example, Gingrich et al. using a DCE in Tanzania (testing preferences for insecticide-treatment, shape, and size) also found significant demand for LLINs, inelastic price elasticity (among both poor and least poor populations), increased demand for specialty features (e.g. rectangular, larger size, and insecticide-treatment), and higher demand in rural areas [12]. However, their study found a much lower WTP (between $0.5 and 1.4). The difference between the studies' results may be explained by differences in the study designs. Gingrich et al. capped prices tested much lower (8000TSH or about $3.5 vs. $14.30 in our study) and focused on poor-income groups–which have lowered WTP, and this study tested other new improved net features.
Another study in Ghana using an auction study design found significant WTP for a solar-panel net-fan (about $13), suggesting that their fan product could be a complementary good for bed nets, presumably also increasing demand for bed nets [30]. However, they did not measure WTP for bed nets.
Two other studies, in Ethiopia [31] and Nigeria [32], found large demand for LLIN for low WTP values and an elastic WTP. However, both studies employed direct consumer survey stated preference techniques which are less robust than conjoint analysis [33]. Another study in Madagascar by Comfort and Krezanoski, using revealed preference data from a RCT field experiment and subsidizing LLINs at 0%, 25%, 50%, 75% and 100% for a maximum cost of $2.20 among low-income groups, found a very elastic price of demand for higher prices (i.e. for each $0.55 increment, demand decreased by 23.1% points) [10]. Similarly, Tarozzi et al. using a field experiment in India among poor individuals found very elastic prices of demand for LLINs (i.e., demand decreased by 50% when the price increased by 20%) [11]. Likewise, Dupas using subsidies in a field experiment in Kenya found elastic LLIN prices [34]. These three field experiments are consistent with prior literature by Cohen and Dupas indicating high price elasticity of demand for health products among poor households [35]. The diminished price sensitivity of the study's sample is to be expected precisely because the population intentionally sampled were the non-poor households for whom the next best use of funds was much less likely to be an essential need.
The results are unlike the prior studies that included poor households, for which the price elasticity of demand for LLINs was typically elastic. These results are consistent with higher-income individuals being less responsive to changes in prices than low-income individuals [36]. However, in this study, estimates are restricted by the top LLIN price tested, $14.30. It is possible that higher prices than those tested would have revealed more sensitivity to the price. Future experiments involving middle-income populations would need to incorporate wider price ranges to determine the price elasticity at higher LLIN prices [37].
DCEs can be limited when there is poor respondent understanding of attribute levels or cognitive fatigue. However, this bias was mitigated by piloting the DCE so that the study instruments could improve contextual information. The study also used visual aids, illustrating each of the 18 question choices and each attribute level. The DCE module occurred early in the household survey modules to reduce respondent fatigue and included a short 2-question practice DCE before administrating the actual DCE to respondents. Similarly, survey administrators reported in general having no issues with participants' understanding of the various attribute levels or the exercise. Likewise, to mitigate the influence of priming respondents about the importance of malaria, which could bias results upward, we administered the malaria ideation question after the DCE.
Likewise, prior studies have suggested that providing a cash transfer might bias choice behaviour upward toward buying. However, the study design took pains to emphasize that the respondent could absolutely keep the endowment if they did not spend it on the purchase of a net. The presence of the cash transfer in the study was a necessary component for this DCE to validate stated preferences with observed real market behaviour [11, 33, 38].
Also, a sub-set of the study population behaved unexpectedly in that they anchored to buying a net for all 18 choice-questions in the experiment. Such behaviour could bias choice data if participants are not providing their actual preference between the various net features. Excluding the 303 households anchoring to always buying made the shape rectangular-1-point-hang not statistically preferable indicating that this feature is especially appealing to this subset of always-buyers. Other than this difference, sensitivity analysis showed results where robust to the removal of either irrational respondents or always-buyers.
Lastly, out of all the attributes tested in the DCE, the rectangular-1-point-hang was the only attribute that participants did not have experience with prior to the DCE, given that the idea for a rectangular-1-point-hang came directly from the qualitative work preceding the DCE. This unfamiliarity with the design may have also impacted the results.
An important strength of this study is that the respondents knew there was money at stake. They knew they would have to back up their statements of willingness to purchase LLINs with actual purchases of the nets they said they would purchase at the prices they agreed to. They knew they could leave the encounter with cash payments if they felt they were better off with having cash instead of immediately using the cash to purchase an LLIN. This design feature improves the internal validity and helps us support claims that stated preferences and willingness to purchase are reflective of actual revealed preferences about how people would spend their own money on. However, there may still be some leftover social acceptability bias that may have skewed some respondents to be more willing to spend their new cash on the offer of a LLIN.
Using rigorous techniques from a sample of middle-income households in Ghana, this study shows that there is a strong demand for LLINs in the private market, particularly for LLINs with the characteristics that meet consumers' preferences. This evidence shows that the private commercial sector could be a viable channel for distributing LLINs. A private market would augment the public market. It could help increase LLIN coverage, and help the public sector save money from households buying their own nets. Evidence from this study may help manufacturers and retailers better understand the revenue opportunities linked to supplying key consumer-preferred LLINs. The public and donor sectors should incorporate policies into their national malaria prevention plans supporting commercial markets for consumer-preferred LLINs. However, policies would have to remain in place that assists poor households in acquiring LLINs so that an equitable market could combine both efficiency and fairness.
Supplementary information accompanies this paper at https://doi.org/10.1186/s12936-019-3082-6.
The authors thank URIKA research, a Ghana-based market research firm, for their work administrating the DCE survey.
All authors conceived of the experiment. YNA designed the DCE questionnaire, trained survey administrators, oversaw data collection, conducted the DCE analysis and wrote the first manuscript draft. DB and ML conceptualized the project idea, helped develop the DCE design and review the manuscript. DB helped oversee the DCE design, data analysis, and edited the manuscript. EM managed data collection and cleaning, survey administrators' trainings, helped develop the questionnaire design and review the manuscript. DP develop the DCE questionnaire design, oversaw data collection and review the manuscript. All authors read and approved the final manuscript.
This research was supported by the Government of the United Kingdom of Great Britain and Northern Ireland, acting through the Department for International Development (DFID) (grant 300191, Components 101, 102 and 103). The content is solely the responsibility of the authors and does not necessarily represent the official views of DFID or its member countries.
All human subject research activities were reviewed and approved by both the Johns Hopkins University and Ghana Health Service internal research review boards. The household survey only recruited adults 18 and older and all participants were consented.
12936_2019_3082_MOESM1_ESM.docx (1.4 mb)
Additional file 1. The appendix supporting the conclusions of this article is included within the article ("LLIN DCE Appendix").
White MT, Conteh L, Cibulskis R, Ghani AC. Costs and cost-effectiveness of malaria control interventions—a systematic review. Malar J. 2011;10:337.CrossRefGoogle Scholar
WHO. Achieving and maintaining universal coverage with long-lasting insecticidal nets for malaria control. Geneva: World Health Organization; 2017.Google Scholar
WHO. Implications of insecticide resistance for malaria vector control. Geneva: World Health Organization; 2016.Google Scholar
WHO. World malaria report 2017. Geneva: World Health Organization; 2017.Google Scholar
Theiss-Nyland K, Lynch M, Lines J. Assessing the availability of LLINs for continuous distribution through routine antenatal care and the Expanded Programme on Immunizations in sub-Saharan Africa. Malar J. 2016;15:255.CrossRefGoogle Scholar
Alonso P, Noor AM. The global fight against malaria is at crossroads. Lancet. 2017;390:2532–4.CrossRefGoogle Scholar
WHO. Global Technical Strategy for Malaria 2016–2030. Geneva: World Health Organization; 2015.Google Scholar
Patouillard E, Griffin J, Bhatt S, Ghani A, Cibulskis R. Global investment targets for malaria control and elimination between 2016 and 2030. BMJ Global Health. 2017;2:e000176.CrossRefGoogle Scholar
Nonvignon J, Aryeetey GC, Malm KL, Agyemang SA, Aubyn VNA, Peprah NY, et al. Economic burden of malaria on businesses in Ghana: a case for private sector investment in malaria control. Malar J. 2016;15:454.CrossRefGoogle Scholar
Comfort AB, Krezanoski PJ. The effect of price on demand for and use of bednets: evidence from a randomized experiment in Madagascar. Health Policy Plan. 2017;32:178–93.PubMedGoogle Scholar
Tarozzi A, Mahajan A, Blackburn B, Kopf D, Krishnan L, Yoong J. Micro-loans, insecticide-treated bednets, and malaria: evidence from a randomized controlled trial in Orissa, India. Am Econ Rev. 2014;104:1909–41.CrossRefGoogle Scholar
Gingrich CD, Ricotta E, Kahwa A, Kahabuka C, Koenker H. Demand and willingness-to-pay for bed nets in Tanzania: results from a choice experiment. Malar J. 2017;16:285.CrossRefGoogle Scholar
Kochhar R. A global middle class is more promise than reality: from 2001 to 2011, nearly 700 million step out of poverty, but most only barely. Washington, D.C: Pew Research Center; 2015.Google Scholar
Adamowicz W, Boxall P, Williams M, Louviere J. Stated preference approaches for measuring passive use values: choice experiments and contingent valuation. Am J Agric Econ. 1998;80:64–75.CrossRefGoogle Scholar
Johnson FR, Lancsar E, Marshall D, Kilambi V, Mühlbacher A, Regier DA, et al. Constructing experimental designs for discrete-choice experiments: report of the ISPOR conjoint analysis experimental design good research practices task force. Value Health. 2013;16:3–13.CrossRefGoogle Scholar
Ghana Statistical Service (GSS). Ghana Living Standards Service Round 6. 2014.Google Scholar
Ghana Statistical Service (GSS), Ghana Health Service (GHS), ICF. Ghana Malaria Indicator Survey 2016. Accra, Ghana, and Rockville, Maryland, USA: GSS, GHS, and ICF; 2017.Google Scholar
Ghana Statistical Service (GSS). Ghana Poverty Mapping Report. 2015.Google Scholar
Mensah E, Piccinini D, Osei T, Dontoh A, Kim S, Alfonso YN. Catalyzing the commercial market for LLINs in Ghana, a market analysis. URIKA Research and PSMP project; 2018.Google Scholar
Louviere J, Hensher D, Swait J. Stated choice methods analysis and application. Cambridge: Cambridge University Press; 2000.CrossRefGoogle Scholar
Lancsar E, Louviere J. Conducting discrete choice experiments to inform healthcare decision making. PharmacoEconomics. 2008;26:661–77.CrossRefGoogle Scholar
Bridges JFP, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, et al. Conjoint analysis applications in health—a checklist: a report of the ISPOR good research practices for conjoint analysis task force. Value Health. 2011;14:403–13.CrossRefGoogle Scholar
Adams J, Bateman B, Becker F, Cresswell T, Flynn D, McNaughton R, et al. Effectiveness and acceptability of parental financial incentives and quasi-mandatory schemes for increasing uptake of vaccinations in preschool children: systematic review, qualitative study and discrete choice experiment. Health Technol Assess. 2015;19:1–76.CrossRefGoogle Scholar
Carlsson F, Martinsson P. Design techniques for stated preference methods in health economics. Health Econ. 2003;12:281–94.CrossRefGoogle Scholar
Louviere JJ, Hout M. Analyzing decision making: Metric conjoint analysis. Thousand Oaks: Sage; 1988.CrossRefGoogle Scholar
Moser R, Raffaelli R, Notaro S. Testing hypothetical bias with a real choice experiment using respondents' own money. Eur Rev Agric Econ. 2014;41:25–46.CrossRefGoogle Scholar
Louviere JJ, Woodworth G. Design and analysis of simulated consumer choice or allocation experiments: an approach based on aggregate data. J Mark Res. 1983;20:350–67.CrossRefGoogle Scholar
Aizaki H, Nishimura K. Design and analysis of choice experiments using R: a brief introduction. Agricult Inform Res. 2008;17:86–94.CrossRefGoogle Scholar
Yukich JO, Briët OJT, Ahorlu CK, Nardini P, Keating J. Willingness to pay for small solar powered bed net fans: results of a Becker–DeGroot–Marschak auction in Ghana. Malar J. 2017;16:316.CrossRefGoogle Scholar
Aleme A, Girma E, Fentahun N. Willingness to pay for insecticide-treated nets in Berehet District, Amhara Region, Northern Ethiopia: implication of social marketing. Ethiop J Health Sci. 2014;24:75–84.CrossRefGoogle Scholar
Onwujekwe O, Hanson K, Fox-Rushby J. Inequalities in purchase of mosquito nets and willingness to pay for insecticide-treated nets in Nigeria: challenges for malaria control interventions. Malar J. 2004;3:6.CrossRefGoogle Scholar
Breidert C, Hahsler M, Reutterer T. A review of methods for measuring willingness-to-pay. Innovative Marketing. 2006;2:8–32.Google Scholar
Dupas P. Short-run subsidies and long-run adoption of new health products: evidence from a field experiment. Econometrica. 2014;82:197–228.CrossRefGoogle Scholar
Cohen J, Dupas P. Free distribution or cost-sharing? Evidence from a randomized malaria prevention experiment. Q J Econ. 2010;125:1–45.CrossRefGoogle Scholar
Jack W. The demand for health care services. Principles of health economics for developing countries. Washington: The World Bank; 1999. p. 55–90.Google Scholar
Nghiem N, Wilson N, Genç M, Blakely T. Understanding price elasticities to inform public health research and intervention studies: key issues. Am J Public Health. 2013;103:1954–61.CrossRefGoogle Scholar
Loureiro ML, Umberger WJ, Hine S. Testing the initial endowment effect in experimental auctions. Appl Econ Lett. 2003;10:271–5.CrossRefGoogle Scholar
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
1.Johns Hopkins Bloomberg School of Public HealthBaltimoreUSA
2.Johns Hopkins Center for Communication ProgramsBaltimoreUSA
3.URIKA ResearchTemaGhana
Alfonso, Y.N., Lynch, M., Mensah, E. et al. Malar J (2020) 19: 14. https://doi.org/10.1186/s12936-019-3082-6
Publisher Name BioMed Central
|
CommonCrawl
|
Only $35.99/year
IB HISTORY INDIA VOCAB QUIZ 1
Vocab set for matching quiz
Royal document granting a specified group the right to form to voyage to an area on behalf of the crown in exchange for a trade monopoly.
Muslim state (1526-1857) exercising dominion over most of India in the sixteenth and seventeenth centuries.
The East India Company was an English, and later British, joint-stock company founded in 1600 and dissolved in 1874. It was formed to trade in the Indian Ocean region, initially with the East Indies. Formed government and a military
Battle of Plassey
Victory for the East India Company in Bengal. Led by Robert Clive this battle confirmed biritish supremacy in the region
Robert Clive
led the comapny's 3000 man army and became Benegal's governor
Sepoy Rebellion
The revolt of Indian soldiers in 1857 against certain practices that violated religious customs such as making soldiers come into contact with beef and pork grease; also known as the Sepoy Mutiny.
Government of India Act 1858
The Government of India Act of 1858 was an Act of the British parliament that transferred the government and territories of the East India Company to the British Crown. The company's rule over British territories in India came to an end and it was passed directly to the British government.
A governor or ruler exercising authority on behalf of a sovereign in a province or colony
Indian Civil Service
The elite professional class of officials who administered the government of British India. Originally composed exclusively of well-educated British men, it gradually added qualified Indians.
Princely States
Domains of Indian princes allied with the British Raj; agents of East India Company were stationed at the rulers courts to ensure compliance; made up over one-third of the British Indian Empire
Indian National Congress
Founded in 1885 as the first major nationalist movement to occur in India. Did not want self-government at first. Originally quite weak and full of tension . Later became the main leader in furthering india's independence.
Muslim League
an organization formed in 1906 to protect the interests of India's Muslims, which later proposed that India be divided into separate Muslim and Hindu nations
Dominion Status
a nation within the British Empire that controls its own domestic and foreign affairs, but is tied to Britain by allegiance to the British monarch
Bal Gangadhar Tilak
1856-1920; Indian nationalist who demanded immediate independence from Britain, mobilizing Hindu religious symbolism to develop a mass following and arguing that violence was an acceptable tactic for anticolonial partisans
Put forward the theory of "drain of wealth". The constant flow of wealth from india to England for which india did not get compensation or benefits.
Head of the Muslim League, and founder of Pakistan. He successfully demanded partition of India into Muslim and Hindu countries, an idea to which Gandhi was fundamentally opposed.
Gandhi's message to people of India about self-rule. Swaraj literally means "self rule"
Swadeshi
boycott of British goods to make the English make concessions for Indians. Furthered by boycotts of British policy
Drain of Wealth Theory
Claims that is colonialism had not happened, then Indian surpluses would have been invested into Indian growth instead of British
Partition of Bengal
In 1905 Viceroy Lord Curzon decided to divide the province of Bengal into two halves for administrative efficiency. He did not see that he was dividing West Bengal and East Bengal into Hindu-majority and Muslim-majority regions and thus angered locals because he was breaking them apart. This action was protested by all Indians because of its 'divide-and-rule tactic. Britain revoked the partition in 1911.
Morley-Minto Reforms
Provided educated Indians with considerably expanded opportunities to elect and serve on local and all-India legislative councils.
Lucknow Pact
(December 1916), agreement made by the Indian National Congress headed by Maratha leader Bal Gangadhar Tilak and the All-India Muslim League led by Muhammad Ali Jinnah; it was adopted by the Congress at its Lucknow session on December 29 and by the league on Dec. 31, 1916. The meeting at Lucknow marked the reunion of the moderate and radical wings of the Congress.
Defense of India Act
Intended to combat subversive activities in WW1. Many believed that by the end of the war the act would be repealed and india would receive more autonomy.
Rowlatt Acts
Laws passed in 1919 which essentially extended the repressive wartime measures. Allowd the Raj to intern indians suspected of crimes without trial.
Amritsar Massacre
Attack on a gathering of 10000 indians present at Jullianuala Bagh by Brigadier General Reginald Dyer. The people present were there to celebrate Baisakhi. Almost 400 died and 1200 injured. Sparked outrage across india leading to a greater sense of unity. Occured due to prior mob violence. Dyer resigned.
Indian statesman. He succeeded Mohandas K. Gandhi as leader of the Indian National Congress. He negotiated the end of British colonial rule in India and became India's first prime minister (1947-1964).
Government of India Act of 1935
the British retained control of the central administration and turned over provincial governments to Indians chosen by an expanded electorate.
Leader of Indian National Congress faction who believed violence in defense of Indian nationalism was justifiable; formed the Indian National Army in 1942 and allied with the Axis Powers. Didn't do much
Purna Swaraj Resolution
A declaration urging for India to fight for complete independence/self-rule from the British. The INC asked that January 26th be recognized as independence day.
Salt March
A non-violent campaign where Gandhi led a march over 240 miles to protest the British monopoly on salt in India. He made illegal salt and inspired many others to do the same.
Satyagraha
"Truth force," a term used by Gandhi to describe peaceful boycotts, strikes, noncooperation, and mass demonstrations to promote Indian independence. Ex: Salt March
padyatra
A journey taken by politicians or political leaders involving the close interaction with all aspects of society. Ex: Salt March
an Indian homespun cotton cloth that was spun in a boycott against British cloth.
Irwin Declaration
The Irwin Declaration was a statement made by Lord Irwin, then Viceroy of India, on 31 October 1929 regarding the status of India in the British empire. It was intended to placate leaders of the Indian nationalist movement who had become increasingly vocal in demanding dominion status for India. However, the promise was empty, only further upsetting the indian leaders.
You know this. Connection: India was manipulated by Britain in WWII for manpower and resouces, increasing dissent.
mass civil disobedience campaign against the British rulers of India in 1942.
Civil War Test
IB Spanish 1 Midterm Vocab
IB History Midterm Socrative Review
Geometry quadrilateral vocab credit
On the lines provided, rewrite the following sentences, correcting errors in the use of abbreviations. **Example 1.** We sent the supplies to Dr. Jeffrey Hellmer, MD, in Rochester, NY. $\underline{\text{We sent the supplies to Dr. Jeffrey Hellmer in Rochester, New York.}}$ 10. The boards are all cut at $10 \mathrm{ft}$ long, but we can make them shorter if you like, Mr Jackson.
King and Malcolm X use the boldfaced words to express their opposing views. Restate each phrase, using a different word or words for the boldfaced term. cannot **succumb** to defeatist attitudes
Choose a name that reflects the unit topic. Our group's name: _____
For each term or name, write a sentence explaining its significance. •Napoleon Bonaparte •coup d'état •plebiscite •lycée •concordat •Napoleonic Code •Battle of Trafalgar
dmangan_wgu
Instrumental and Operant Conditioning
wesley_lewis50
KrazyPotato
|
CommonCrawl
|
coordination number and oxidation state of fecl2 in water, plus religious boarding schools and military boarding schools.
coordination number and oxidation state of fecl2 in water
Home » coordination number and oxidation state of fecl2 in water
Coordination compound - Coordination compound - Aqua complexes: Few ligands equal water with respect to the number and variety of metal ions with which they form complexes. "��]�ƨU�g�oڕ�ZR��.���]d ���а��c����xN���1h�{�*�V������-���*S�$7>p �' �U�w����؝X}ɭI Our aerobic life is based on this strategy, mastered by the natural Photosystem II enzyme, using a tetranuclear Mn–oxo complex as oxygen evolving center. Thanks in advance. ; Solution: The oxidation number of C is -3.; The oxidation number of H is +1 (H+ has an oxidation number of +1). Some Cl ... • The coordination number is (mostly) constant for a metal with a given oxidation number. Contains:-(1)- introduction of coordination compound The oxidation number of diatomic and uncombined elements is zero. However, only the oxidation states of ruthenium and osmium are reviewed here. Oxidation number or state of periodic table elements in a chemical compound or molecule is the formal charges (positive or negative) which assigned to the element if all the bonds in the compounds are ionic. Write down the IUPAC name for each of the following complexes and indicate the oxidation state, electronic configuration and coordination number. has an eight coordination number (dodecahedron, sq antiprism). The oxidation state can be given as: x + 0 = +2. Only in 1893 was the mode of bonding in the complexes established, by Alfred Werner (who was awarded the Nobel Prize in 1913 for this work). Ligand coordination shifts the E⁰ of metal ions NOTE: Oxidation-reduction (Redox) reactions are written from the reduction perspective. The coordination number of an atom in a molecule is the number of atoms bonded to the atom. has high conductivity. Although coordination complexes are particularly important in the chemistry of the transition metals, some main group elements also form complexes. Join Now. Reaction between acetone and methyl magnesium chloride followed by hydrolysis will give : Identify the correct statements from the following: Coordination number is determined by compound. Mn +2 is a d 5 case, paramagnetic. of electrons required to get octect configuraion or pseudo inert gas configuration. Metal ions can be described as consisting of series of two concentric coordination spheres, the first and second. The oxidation number must be calculated, however. The second coordination sphere consists of a water of crystallization and sulfate, which interact with the [Fe(H 2 O) 6] 2+ centers. The number of ligands bound to the transition metal ion is called the coordination number. Examples of oxidation number in the following topics: Oxidation Numbers of Metals in Coordination Compounds. Identify which reactants are being oxidized (the oxidation number increases when it reacts) and which are being reduced (the oxidation number goes down). The term was originally defined in 1893 by Swiss chemist Alfred Werner (1866–1919). In chemistry and crystallography, the coordination number describes the number of neighbor atoms with respect to a central atom. The oxidation number in coordination chemistry has a slightly different meaning. For example, Fe has preferrable CN=6 for fluorine anions, but CN=4 for chloride anions. The coordination number of a cation in a solid phase is an important parameter. The coordination number of an atom in a molecule is the number of atoms bonded to the atom. It is the number of pairs of electrons that coordinate to the transition metal atom. Note the change of name when water is a ligand from the previous "aquo" to the new "aqua". Write down the transfer of electrons. 5.5 Coordination number of the lanthanide metal in the lanthanide higher oxides. The coordination number is 6. • Some metals, such as chromium(III) and cobalt(III), consistently have the same coordination number (6 in the case of these two metals). Such aqua complexes include hydrated ions in aqueous solution as well as hydrated salts such as hexaaquachromium(3+) … The number of bonds depends on the size, charge, and … %%EOF The number of ions formed on dissolving one molecule of FeSO4(NH4)2SO4.6H2O in water is a) 4 b) 5 c) 3 d) 6 4.The number of unidentate ligands in the complex ion is called a. Given that the ionic product of $Ni(OH)_2$ is $2 \times 10^{-15}$. contains free electrons. (2)-most of them display numerous oxidation states which vary by steps of 1 rather than 2 as is usually the case with those main-group elements which exhibit more than one oxidation state. Father of coordination chemistry First inorganic chemist to win Nobel Prize •suggested in 1893 that metal ions have primaryand secondaryvalences. The oxidation number is the charge it would have if all bonds were ionic, and involves the ligands and also the ions not bound in the complex ion itself. The d orbital occupation for Mn is t 2g 3 e g 2. This is demonstrated herein by comparing crystalline oxides consisting of Mn3+ … I�*�:]g3���xF>u���ޝ�.s9I�� �aU:p��"�l�yમf�uzu4�c{��h�-�`Jo.o���������~��9x=�@�W�KV4�s�i�k��9aL����>&J�u�w�0���l�H"$d�S�/���8�G6D:"tXd��H�����I�KE:R�.�D����0[��û~U}#������!��B��@�E������> ��w�U1�����͖���WY�]�!È1:rY��z墰�����g…:�:T�p�����M[�����!Ofp\Ϊy^.�u^��&ߎ�yݸ�2��{�����}�Z�P1M�nf4��Om�p�ݲ��g��H)}c�0�p:xN��[xo8q~��(&�D$���f8'Zoc���zD8�M�ODIp7��{� "�Ra#=.�㶴֊!�I�� ��^��H�C��y�N���{0}�� HO�k�-�JX�*����^P�zB��x�r��;�&�-�}�s���K��4��=��aE,[m5T� �-Ƶ"4�(���_e�B.-4D�gC5{�?-a��V�*J|ԋ��C Can someone please explain this to me. In chemistry and crystallography, the coordination number describes the number of neighbor atoms with respect to a central atom. The coordination number is 6. contains free electrons. The oxidation number is synonymous with the oxidation state. forms many complexes with neutral donors. The oxidation number is synonymous with the oxidation state. The coordination number is the the number of attachments that all of the ligands make to the transition metal. [Co(NH3)5Cl]Cl2. h��R�n�0���f���6)�.� forms halides in the oxidation state of +3 only. Oxidation state = +2 d-orbital occupation is d 5 (t 2 g 3 e g 2) 2 the oxidation state of the transition metal 3 the type of ligand; The transition metal. (a) $CO_2(g)$ is used as refrigerant for ice-cream and frozen food. A square planar molecule can never be cis and trans at the same time, because the coordination number is 4. The chemical properties in the same oxidation state of the 4f and 5f elements are 3B. FeCl2. Z.C. here Cobalt ,atomic no. It is normal for an element to have several different CN. Consider Fe3+: Half-reaction- Fe3+(aq) + e-Fe2+(aq) E⁰ = +0.77 V vs Standard Hydrogen Electrode (SHE) In a reducing environment, ligand-free Fe has an oxidation state … is 6 because 6 NH3 are surronuding the Co. Oxidation no. ���>��� By the mid-1870s Sophus Jørgensen in Denmark had systematized the synthetic methods for preparing the coordination compounds that were known at that time, especially those of cobalt(III). h�bbd```b``Y"CA$C'�d� �L��$����DF���s�30120.�g`$@�g`8� � � Werner concluded that most coordination complexes were essentially octahedral , with six ligands bonded to a central metal ion (more or less, one above, one below, and fou… The term was originally defined in 1893 by Swiss chemist Alfred Werner (1866–1919). The study of rhenium hydrides can be traced to the 1950s and included reports of the "rhenide" anion, supposedly Re −. • Dissolve in water and add AgNO3. For K[Pt(NH3)Cl5], the coordination number for Pt is 6, because there are 6 ligands on the Pt atom. Note: It has been pointed out to me that there are a handful of obscure compounds of the elements sodium to caesium where the metal forms a negative ion - for example, Na-.That would give an oxidation state of -1. The transition metals have certain colours, or colour ranges that are typical of that metal. exists in black and gold forms. Determining oxidation numbers from the Lewis structure (Figure 1a) is even easier than deducing it from the molecular formula (Figure 1b). The oxidation state, sometimes referred to as oxidation number, describes the degree of oxidation (loss of electrons) of an atom in a chemical compound.Conceptually, the oxidation state, which may be positive, negative or zero, is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic, with no covalent component. We got [math]Ni(+II)[/math]. 2 the oxidation state of the transition metal 3 the type of ligand; The transition metal. For the copper(II) ion in [CuCl 4 ] 2− , the coordination number is four, whereas for the cobalt(II) ion in [Co(H 2 O) 6 ] 2+ the coordination number … Co+3. When we assign an oxidation number, we strip down the metal complex/compound such that we find the eventual charge of the metal after we strip the ligands away. In Volume 5, the material for each element is organised by oxidation state of the metal and also by the nature of the ligands involved, with additional sections covering special features of the coordination chemistry and applications of the complexes. O2- and S2- have oxidation numbers of -2.; In a molecule or compound, the oxidation number is the sum of the oxidation numbers of its constituent atoms. The interaction between a metal atom and the ligands can be thought of as Lewis acid-base reaction. 1.6 Relative Stability of their Oxidation States 1.7 Coordination number and Geometry 1.8 Summary 1.9 Terminal Questions 1.10 Answers 1.1 OBJECTIVES The objective of writing the text material of this unit is to acquaint the readers to the characteristic properties of the d … Consider Fe3+: Half-reaction- Fe3+(aq) + e-Fe2+(aq) E⁰ = +0.77 V vs Standard Hydrogen Electrode (SHE) In a reducing environment, ligand-free Fe has an oxidation state … endstream endobj 276 0 obj <> endobj 277 0 obj <> endobj 278 0 obj <>stream The two chloride ions attached to the complex ion tells us that the complex ion has a net charge of +2. [Cr (H_2O)_6]^3+ b. The ReH 2− 9 anion is a rare example of a coordination complex bearing only hydride ligands.. History. The pressure oxidation of acidic FeCI~ solution with oxygen. forms halides in the oxidation state of +3 only. The oxidation of aqueous solutions of FeCl~ in HC1 media has been investigated in a flow reactor. For K4[Fe(CN)6], coordination number and oxidation state are respectively. Calculate Coordination Number of an Element. Solar-to-fuel energy conversion relies on the invention of efficient catalysts enabling water oxidation through low-energy pathways. In a particular isomer of $\left[Co\left(NH_{3}\right)_{4}Cl_{2}\right]^{0},$ the $Cl-Co-Cl$ angle is $90^\circ$, the isomer is known as, Compounds, $ {{[PtC{{l}_{2}}N{{H}_{3}})}_{4}}]B{{r}_{2}} $ and $ [PtB{{r}_{2}}{{(N{{H}_{3}})}_{4}}]C{{l}_{2}} $ , shows the following type of isomerism. Co 2+ is a d 5 case, paramagnetic (iv)[Mn(H 2 0) 6] 2+ S0 4 2-x+0f+2.•. Within artificial devices, water can be oxidized efficiently on tailored metal-oxide surfaces such as RuO2. It is well known that in the lanthanide higher oxides the coordination number of the lanthanide cation is 6, 7, or 8. 3B. endstream endobj 279 0 obj <>stream There is only one possible way to bond the two ligands to the metal in the coordination … as for why I too am dying to know as I haven't been able to find any good explanations on the internet $\endgroup$ – bon Jan 10 '15 at 21:50 Low Oxidation States In [Co(NH3)5Cl]^2+ cobalt would have an oxidation number of +3 since ammonia is zero and Cl is -1. The oxidation number is represented by a Roman numeral in parenthesis following the name of the coordination entity. Which of the following set of molecules will have zero dipole moment ? chemistry. �l8N���;A���c�JA�NGܝ��#[�@PX�A��7B���#t��>�'�Ж����A=��Ps��]�a�(�T�7n��g�^l�sOQ,cT��{��8�>�P��ȡr���` �. %PDF-1.5 %���� Nearly all metallic elements form aqua complexes, frequently in more than one oxidation state. The coordination number of the central metal ion or atom is the number of donor atoms bonded to it. The reduced coordination number of Ni(Fe)‐O and elevated valence state of Ni(Fe) in NiFe‐MOFs layer greatly bolster OER, and the shifting of oxygen evolution sites from Ov‐BiVO 4 to NiFe‐MOFs promotes Ov stabilization. The oxidation and coordination numbers of cobalt in the compound [Co(NH3)5Cl)Cl2 are, respectively 3 and 6 In the compound K[Co(C2O4)2(H20)2] (where C2O4 2-=oxalate) the oxidation number and coordination number of cobalt are, respectively Oxidation state, x = + 2 Coordination number=4. Coordination Number! (iv) [Mn(H2O)6]SO4 The central metal ion is Mn. Conceptually, the oxidation state, which may be positive, negative or zero, is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic, with no covalent component. Which kind of isomerism is exhibited by octahedral $[Co(NH_3)_4Br_2]Cl$ ? forms many complexes with neutral donors. 284 0 obj <>/Filter/FlateDecode/ID[<555F95D379893CC2D6C24F9C567AAAAB><4E49F0E988384A418AFA5C39DBE2F409>]/Index[275 18]/Info 274 0 R/Length 70/Prev 465748/Root 276 0 R/Size 293/Type/XRef/W[1 3 1]>>stream What is the coordination number of Fe in [F e C l 2 (e n) 2 ] C l? Class XII Chapter 9 – Coordination Compounds Chemistry The oxidation state can be given as: x − 4 = −2 x=+2 The d orbital occupation for Co2+ is eg4 t2g3. Due to the relatively low reactivity of unpaired d electrons, these metals typically form several oxidation states and therefore can have several oxidation numbers. Identify compound X in the following sequence of reactions: Identify a molecule which does not exist. h�*9��-���%����VD���iJГ�C,�c��(���'`���m":��Uf��Qo#2 The oxidation state can be given as: x + 0 = +2 x = +2 The d orbital occupation for Mn is t2g3 eg2. Oxidation Number: The number that is assigned to an element to indicate the loss or gain of electrons by an atom of that element is called as the oxidation number. —��,�~3�3��G� D��� The oxidation number of each atom can be calculated by subtracting the sum of lone pairs and electrons it gains from bonds from the number of valence electrons. Nearly all metallic elements form aqua complexes, frequently in more than one oxidation state. Our aerobic life is based on this strategy, mastered by the natural Photosystem II enzyme, using a tetranuclear Mn–oxo complex as oxygen evolving center. Oxidation number are typically represented by … Such aqua complexes include hydrated ions in aqueous solution as well as hydrated salts such as hexaaquachromium(3+) … In the examples above, the hexaaqua complexes have a coordination number of 6 and the tetrachlorocuprate (II) complex has a coordination number of 4. Correct increasing order for the wavelength of absorption in the visible region for the complexes of $Co^{3+}$ is: When $Ag^+$ reacts with excess of sodium-thiosulphate then he obtained species having charge and geometry respectively : In Wolff‐Kishner reduction, the carbonyl group of aldehydes and ketones is converted into. Ligand, Coordination Number, Coordination Sphere & Oxidation Number Ligand. The oxidation state is important. The oxidation state. The atomic radiusis: Find out the solubility of $Ni(OH)_2$ in 0.1 M NaOH. The oxidation state of an atom is the charge of this atom after ionic approximation of its heteronuclear bonds. When heated or under extreme pressure preferable CN may change as well. ��q �r�$�փ�]����5�j�>������}��^|���1����(Kb���i��˴sA�7%��.&��C�|��+Y���ޥ�D��j2���DX��4��1� Thorium: forms in the 2, 3, and 4 oxidation states. eg: [Co(NH3)6] here coordination no. eg:1. I initially thought that EDTA Tetra-dentate and H2O was Mono-dentate so i figured the Coordination number of Fe was 5 and that was wrong. • The … ØPrimary valence equals the metal's oxidation number ØSecondary valence is the number of atoms directly bonded to the metal (coordination number) Co(III) oxidation state Coordination # is 6 Cl- Here, the ligand is the NEUTRAL water molecule. They are positive and negative numbers used for balancing the redox reaction. The coordination number for the silver ion in [Ag(NH 3 ) 2 ] + is two ( Figure 19.14 ). Coordination number c. Primary valency c. Oxidation number 5.The oxidation state … Since iron is in the oxidation state +2, the compound is called iron (ii) chloride. Coordination compound - Coordination compound - Aqua complexes: Few ligands equal water with respect to the number and variety of metal ions with which they form complexes. In coordination chemistry, the coordination number is the number of donor atoms attached to the central ion. Which of the following compounds show optical isomerism? h��UmO�0�+��i*~w Uj���HU?��k��I� ����t݀1>m���?���$"ք���:�>&�c�e�O�`p of ligands that are surrounding a centrl metal ion. Within artificial devices, water can be oxidized efficiently on tailored metal-oxide surfaces such as RuO2. Carefully, insert coefficients, if necessary, to make the numbers of oxidized and reduced atoms equal on the two sides of each redox couples. Which one of the following is heteroleptic complex? has all whites are solids. Since linear complexes only have a 180 degree bond angle, it cannot have cis or trans isomers. Hydrometallurgy: 6: 339--346. The coordination numbers and geometries of copper complexes vary with oxidation state. The coordination number, oxidation number and the number of d-electrons in the metal ion of the complex $\ce{[COCl_2 -(en)2]Cl}$, are respectively (atomic number of Co=27) KEAM 2015 2. has all whites are solids. In the complex [CO(en)2 Cl2 ]Br, the total number of donor atoms is 6 (4 from two 'en' moles + 2 Cl- ions).Let the oxidation state of 'Co' be x.x + (0 × 2) + (-1 × 2) + (-1) = 0 x + 0 - 2 - 1 = 0 x - 3 = 0 x = +3 $\begingroup$ the coordination number is the number of coordinate bonds to the metal ion: six for the hexaaqua complex and four for the tetrathiocyanate complex. Which of the following complex is optically inactive, The compound $\ce{[Pt(NH3)2Cl2]}$ can exhibit. Coordination numbers are normally between 2 and 9. Protactinium: Question35. The coordination number is 6. means no. Protactinium: Copper salts, for example, are usually blue or green, iron has salts that are pale green, yellow or orange. Which of the following complexes exists as pair of enantiomers? hR�kQ�Mv7[M�n٦[ܦ��[�^��6mI=t�vI�-$�~���H� m`��P�`=HP�� 0 ��1�X Կ�������a-�霑%��ųS�0t!�s82��;��`�*F(����qo}C�.p�#]�o��8�K�z���a�b���|�z����I���lX���ı?�%�{55�lJ��Y���w ��X$�2���! Effective atomic number b. 292 0 obj <>stream M 2 O 3, to permit estimation of the metal's two oxidation states.) Aluminum, tin, and lead, for example, form complexes such as the AlF 6 3-, SnCl 4 2-and PbI 4 2-ions. Oxidation state, x- + 2 Coordination number is 6. • Pale blue [Cu(H2O)4] 2+ can be converted into dark blue [Cu(NH x = +2. exists in black and gold forms. Im assuming that the EDTA is 4- and the overall charge is 1- so the charge on Fe should be 3+. Preferable CN can change for an element depending on the atom's neighbors. Surface-directed corner-sharing MnO6 octahedra within numerous manganese oxide compounds containing Mn3+ or Mn4+ oxidation states show strikingly different catalytic activities for water oxidation, paradoxically poorest for Mn4+ oxides, regardless of oxidation assay (photochemical and electrochemical). The highest observed coordination number (the number of atoms that a central atom has as its neighbours in a compound) of chlorine (oxidation state of +7) toward oxygen is 4 (i.e., the chlorine atom is surrounded by four oxygen atoms), as found in the perchlorate ion, (ClO 4) −, whereas… Determine the oxidation state and coordination number of the metal ion in each complex ion. Copper salts, for example, are usually blue or green, iron has salts that are pale green, yellow or orange. Oxidation Number of Periodic Table Elements. endstream endobj startxref William Adamson, Chen Jia, Yibing Li, Chuan Zhao, Cobalt oxide micro flowers derived from hydrothermal synthesised cobalt sulphide pre-catalyst for enhanced water oxidation, Electrochimica Acta, 10.1016/j.electacta.2020.136802, 355, (136802), (2020).
Farms For Sale Near Columbus Ohio, Rosemary Plant Care Indoor, Mandarin Collar Short Sleeve Shirt, Light Mountain Hair Color Reviews, Homes For Sale Vancleave, Ms, How To Get Strawberry Seeds, German Engineering Abbreviations,
Coordination compound - Coordination compound - Aqua complexes: Few ligands equal water with respect to the number and variety of metal ions with which they form complexes. "��]�ƨU�g�oڕ�ZR��.���]d ���а��c����xN���1h�{�*�V������-���*S�$7>p �' �U�w����؝X}ɭI Our aerobic life is based on this strategy, mastered by the natural Photosystem II enzyme, using a tetranuclear Mn–oxo complex as oxygen evolving center. Thanks in advance. ; Solution: The oxidation number of C is -3.; The oxidation number of H is +1 (H+ has an oxidation number of +1). Some Cl ... • The coordination number is (mostly) constant for a metal with a given oxidation number. Contains:-(1)- introduction of coordination compound The oxidation number of diatomic and uncombined elements is zero. However, only the oxidation states of ruthenium and osmium are reviewed here. Oxidation number or state of periodic table elements in a chemical compound or molecule is the formal charges (positive or negative) which assigned to the element if all the bonds in the compounds are ionic. Write down the IUPAC name for each of the following complexes and indicate the oxidation state, electronic configuration and coordination number. has an eight coordination number (dodecahedron, sq antiprism). The oxidation state can be given as: x + 0 = +2. Only in 1893 was the mode of bonding in the complexes established, by Alfred Werner (who was awarded the Nobel Prize in 1913 for this work). Ligand coordination shifts the E⁰ of metal ions NOTE: Oxidation-reduction (Redox) reactions are written from the reduction perspective. The coordination number of an atom in a molecule is the number of atoms bonded to the atom. has high conductivity. Although coordination complexes are particularly important in the chemistry of the transition metals, some main group elements also form complexes. Join Now. Reaction between acetone and methyl magnesium chloride followed by hydrolysis will give : Identify the correct statements from the following: Coordination number is determined by compound. Mn +2 is a d 5 case, paramagnetic. of electrons required to get octect configuraion or pseudo inert gas configuration. Metal ions can be described as consisting of series of two concentric coordination spheres, the first and second. The oxidation number must be calculated, however. The second coordination sphere consists of a water of crystallization and sulfate, which interact with the [Fe(H 2 O) 6] 2+ centers. The number of ligands bound to the transition metal ion is called the coordination number. Examples of oxidation number in the following topics: Oxidation Numbers of Metals in Coordination Compounds. Identify which reactants are being oxidized (the oxidation number increases when it reacts) and which are being reduced (the oxidation number goes down). The term was originally defined in 1893 by Swiss chemist Alfred Werner (1866–1919). In chemistry and crystallography, the coordination number describes the number of neighbor atoms with respect to a central atom. The oxidation number in coordination chemistry has a slightly different meaning. For example, Fe has preferrable CN=6 for fluorine anions, but CN=4 for chloride anions. The coordination number of a cation in a solid phase is an important parameter. The coordination number of an atom in a molecule is the number of atoms bonded to the atom. It is the number of pairs of electrons that coordinate to the transition metal atom. Note the change of name when water is a ligand from the previous "aquo" to the new "aqua". Write down the transfer of electrons. 5.5 Coordination number of the lanthanide metal in the lanthanide higher oxides. The coordination number is 6. • Some metals, such as chromium(III) and cobalt(III), consistently have the same coordination number (6 in the case of these two metals). Such aqua complexes include hydrated ions in aqueous solution as well as hydrated salts such as hexaaquachromium(3+) … The number of bonds depends on the size, charge, and … %%EOF The number of ions formed on dissolving one molecule of FeSO4(NH4)2SO4.6H2O in water is a) 4 b) 5 c) 3 d) 6 4.The number of unidentate ligands in the complex ion is called a. Given that the ionic product of $Ni(OH)_2$ is $2 \times 10^{-15}$. contains free electrons. (2)-most of them display numerous oxidation states which vary by steps of 1 rather than 2 as is usually the case with those main-group elements which exhibit more than one oxidation state. Father of coordination chemistry First inorganic chemist to win Nobel Prize •suggested in 1893 that metal ions have primaryand secondaryvalences. The oxidation number is the charge it would have if all bonds were ionic, and involves the ligands and also the ions not bound in the complex ion itself. The d orbital occupation for Mn is t 2g 3 e g 2. This is demonstrated herein by comparing crystalline oxides consisting of Mn3+ … I�*�:]g3���xF>u���ޝ�.s9I�� �aU:p��"�l�yમf�uzu4�c{��h�-�`Jo.o���������~��9x=�@�W�KV4�s�i�k��9aL����>&J�u�w�0���l�H"$d�S�/���8�G6D:"tXd��H�����I�KE:R�.�D����0[��û~U}#������!��B��@�E������> ��w�U1�����͖���WY�]�!È1:rY��z墰�����g…:�:T�p�����M[�����!Ofp\Ϊy^.�u^��&ߎ�yݸ�2��{�����}�Z�P1M�nf4��Om�p�ݲ��g��H)}c�0�p:xN��[xo8q~��(&�D$���f8'Zoc���zD8�M�ODIp7��{� "�Ra#=.�㶴֊!�I�� ��^��H�C��y�N���{0}�� HO�k�-�JX�*����^P�zB��x�r��;�&�-�}�s���K��4��=��aE,[m5T� �-Ƶ"4�(���_e�B.-4D�gC5{�?-a��V�*J|ԋ��C Can someone please explain this to me. In chemistry and crystallography, the coordination number describes the number of neighbor atoms with respect to a central atom. The coordination number is 6. contains free electrons. The oxidation number is synonymous with the oxidation state. forms many complexes with neutral donors. The oxidation number is synonymous with the oxidation state. The coordination number is the the number of attachments that all of the ligands make to the transition metal. [Co(NH3)5Cl]Cl2. h��R�n�0���f���6)�.� forms halides in the oxidation state of +3 only. Oxidation state = +2 d-orbital occupation is d 5 (t 2 g 3 e g 2) 2 the oxidation state of the transition metal 3 the type of ligand; The transition metal. (a) $CO_2(g)$ is used as refrigerant for ice-cream and frozen food. A square planar molecule can never be cis and trans at the same time, because the coordination number is 4. The chemical properties in the same oxidation state of the 4f and 5f elements are 3B. FeCl2. Z.C. here Cobalt ,atomic no. It is normal for an element to have several different CN. Consider Fe3+: Half-reaction- Fe3+(aq) + e-Fe2+(aq) E⁰ = +0.77 V vs Standard Hydrogen Electrode (SHE) In a reducing environment, ligand-free Fe has an oxidation state … is 6 because 6 NH3 are surronuding the Co. Oxidation no. ���>��� By the mid-1870s Sophus Jørgensen in Denmark had systematized the synthetic methods for preparing the coordination compounds that were known at that time, especially those of cobalt(III). h�bbd```b``Y"CA$C'�d� �L��$����DF���s�30120.�g`$@�g`8� � � Werner concluded that most coordination complexes were essentially octahedral , with six ligands bonded to a central metal ion (more or less, one above, one below, and fou… The term was originally defined in 1893 by Swiss chemist Alfred Werner (1866–1919). The study of rhenium hydrides can be traced to the 1950s and included reports of the "rhenide" anion, supposedly Re −. • Dissolve in water and add AgNO3. For K[Pt(NH3)Cl5], the coordination number for Pt is 6, because there are 6 ligands on the Pt atom. Note: It has been pointed out to me that there are a handful of obscure compounds of the elements sodium to caesium where the metal forms a negative ion - for example, Na-.That would give an oxidation state of -1. The transition metals have certain colours, or colour ranges that are typical of that metal. exists in black and gold forms. Determining oxidation numbers from the Lewis structure (Figure 1a) is even easier than deducing it from the molecular formula (Figure 1b). The oxidation state, sometimes referred to as oxidation number, describes the degree of oxidation (loss of electrons) of an atom in a chemical compound.Conceptually, the oxidation state, which may be positive, negative or zero, is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic, with no covalent component. We got [math]Ni(+II)[/math]. 2 the oxidation state of the transition metal 3 the type of ligand; The transition metal. For the copper(II) ion in [CuCl 4 ] 2− , the coordination number is four, whereas for the cobalt(II) ion in [Co(H 2 O) 6 ] 2+ the coordination number … Co+3. When we assign an oxidation number, we strip down the metal complex/compound such that we find the eventual charge of the metal after we strip the ligands away. In Volume 5, the material for each element is organised by oxidation state of the metal and also by the nature of the ligands involved, with additional sections covering special features of the coordination chemistry and applications of the complexes. O2- and S2- have oxidation numbers of -2.; In a molecule or compound, the oxidation number is the sum of the oxidation numbers of its constituent atoms. The interaction between a metal atom and the ligands can be thought of as Lewis acid-base reaction. 1.6 Relative Stability of their Oxidation States 1.7 Coordination number and Geometry 1.8 Summary 1.9 Terminal Questions 1.10 Answers 1.1 OBJECTIVES The objective of writing the text material of this unit is to acquaint the readers to the characteristic properties of the d … Consider Fe3+: Half-reaction- Fe3+(aq) + e-Fe2+(aq) E⁰ = +0.77 V vs Standard Hydrogen Electrode (SHE) In a reducing environment, ligand-free Fe has an oxidation state … endstream endobj 276 0 obj <> endobj 277 0 obj <> endobj 278 0 obj <>stream The two chloride ions attached to the complex ion tells us that the complex ion has a net charge of +2. [Cr (H_2O)_6]^3+ b. The ReH 2− 9 anion is a rare example of a coordination complex bearing only hydride ligands.. History. The pressure oxidation of acidic FeCI~ solution with oxygen. forms halides in the oxidation state of +3 only. The oxidation of aqueous solutions of FeCl~ in HC1 media has been investigated in a flow reactor. For K4[Fe(CN)6], coordination number and oxidation state are respectively. Calculate Coordination Number of an Element. Solar-to-fuel energy conversion relies on the invention of efficient catalysts enabling water oxidation through low-energy pathways. In a particular isomer of $\left[Co\left(NH_{3}\right)_{4}Cl_{2}\right]^{0},$ the $Cl-Co-Cl$ angle is $90^\circ$, the isomer is known as, Compounds, $ {{[PtC{{l}_{2}}N{{H}_{3}})}_{4}}]B{{r}_{2}} $ and $ [PtB{{r}_{2}}{{(N{{H}_{3}})}_{4}}]C{{l}_{2}} $ , shows the following type of isomerism. Co 2+ is a d 5 case, paramagnetic (iv)[Mn(H 2 0) 6] 2+ S0 4 2-x+0f+2.•. Within artificial devices, water can be oxidized efficiently on tailored metal-oxide surfaces such as RuO2. It is well known that in the lanthanide higher oxides the coordination number of the lanthanide cation is 6, 7, or 8. 3B. endstream endobj 279 0 obj <>stream There is only one possible way to bond the two ligands to the metal in the coordination … as for why I too am dying to know as I haven't been able to find any good explanations on the internet $\endgroup$ – bon Jan 10 '15 at 21:50 Low Oxidation States In [Co(NH3)5Cl]^2+ cobalt would have an oxidation number of +3 since ammonia is zero and Cl is -1. The oxidation number is represented by a Roman numeral in parenthesis following the name of the coordination entity. Which of the following set of molecules will have zero dipole moment ? chemistry. �l8N���;A���c�JA�NGܝ��#[�@PX�A��7B���#t��>�'�Ж����A=��Ps��]�a�(�T�7n��g�^l�sOQ,cT��{��8�>�P��ȡr���` �. %PDF-1.5 %���� Nearly all metallic elements form aqua complexes, frequently in more than one oxidation state. The coordination number of the central metal ion or atom is the number of donor atoms bonded to it. The reduced coordination number of Ni(Fe)‐O and elevated valence state of Ni(Fe) in NiFe‐MOFs layer greatly bolster OER, and the shifting of oxygen evolution sites from Ov‐BiVO 4 to NiFe‐MOFs promotes Ov stabilization. The oxidation and coordination numbers of cobalt in the compound [Co(NH3)5Cl)Cl2 are, respectively 3 and 6 In the compound K[Co(C2O4)2(H20)2] (where C2O4 2-=oxalate) the oxidation number and coordination number of cobalt are, respectively Oxidation state, x = + 2 Coordination number=4. Coordination Number! (iv) [Mn(H2O)6]SO4 The central metal ion is Mn. Conceptually, the oxidation state, which may be positive, negative or zero, is the hypothetical charge that an atom would have if all bonds to atoms of different elements were 100% ionic, with no covalent component. Which kind of isomerism is exhibited by octahedral $[Co(NH_3)_4Br_2]Cl$ ? forms many complexes with neutral donors. 284 0 obj <>/Filter/FlateDecode/ID[<555F95D379893CC2D6C24F9C567AAAAB><4E49F0E988384A418AFA5C39DBE2F409>]/Index[275 18]/Info 274 0 R/Length 70/Prev 465748/Root 276 0 R/Size 293/Type/XRef/W[1 3 1]>>stream What is the coordination number of Fe in [F e C l 2 (e n) 2 ] C l? Class XII Chapter 9 – Coordination Compounds Chemistry The oxidation state can be given as: x − 4 = −2 x=+2 The d orbital occupation for Co2+ is eg4 t2g3. Due to the relatively low reactivity of unpaired d electrons, these metals typically form several oxidation states and therefore can have several oxidation numbers. Identify compound X in the following sequence of reactions: Identify a molecule which does not exist. h�*9��-���%����VD���iJГ�C,�c��(���'`���m":��Uf��Qo#2 The oxidation state can be given as: x + 0 = +2 x = +2 The d orbital occupation for Mn is t2g3 eg2. Oxidation Number: The number that is assigned to an element to indicate the loss or gain of electrons by an atom of that element is called as the oxidation number. —��,�~3�3��G� D��� The oxidation number of each atom can be calculated by subtracting the sum of lone pairs and electrons it gains from bonds from the number of valence electrons. Nearly all metallic elements form aqua complexes, frequently in more than one oxidation state. Our aerobic life is based on this strategy, mastered by the natural Photosystem II enzyme, using a tetranuclear Mn–oxo complex as oxygen evolving center. Oxidation number are typically represented by … Such aqua complexes include hydrated ions in aqueous solution as well as hydrated salts such as hexaaquachromium(3+) … In the examples above, the hexaaqua complexes have a coordination number of 6 and the tetrachlorocuprate (II) complex has a coordination number of 4. Correct increasing order for the wavelength of absorption in the visible region for the complexes of $Co^{3+}$ is: When $Ag^+$ reacts with excess of sodium-thiosulphate then he obtained species having charge and geometry respectively : In Wolff‐Kishner reduction, the carbonyl group of aldehydes and ketones is converted into. Ligand, Coordination Number, Coordination Sphere & Oxidation Number Ligand. The oxidation state is important. The oxidation state. The atomic radiusis: Find out the solubility of $Ni(OH)_2$ in 0.1 M NaOH. The oxidation state of an atom is the charge of this atom after ionic approximation of its heteronuclear bonds. When heated or under extreme pressure preferable CN may change as well. ��q �r�$�փ�]����5�j�>������}��^|���1����(Kb���i��˴sA�7%��.&��C�|��+Y���ޥ�D��j2���DX��4��1� Thorium: forms in the 2, 3, and 4 oxidation states. eg: [Co(NH3)6] here coordination no. eg:1. I initially thought that EDTA Tetra-dentate and H2O was Mono-dentate so i figured the Coordination number of Fe was 5 and that was wrong. • The … ØPrimary valence equals the metal's oxidation number ØSecondary valence is the number of atoms directly bonded to the metal (coordination number) Co(III) oxidation state Coordination # is 6 Cl- Here, the ligand is the NEUTRAL water molecule. They are positive and negative numbers used for balancing the redox reaction. The coordination number for the silver ion in [Ag(NH 3 ) 2 ] + is two ( Figure 19.14 ). Coordination number c. Primary valency c. Oxidation number 5.The oxidation state … Since iron is in the oxidation state +2, the compound is called iron (ii) chloride. Coordination compound - Coordination compound - Aqua complexes: Few ligands equal water with respect to the number and variety of metal ions with which they form complexes. In coordination chemistry, the coordination number is the number of donor atoms attached to the central ion. Which of the following compounds show optical isomerism? h��UmO�0�+��i*~w Uj���HU?��k��I� ����t݀1>m���?���$"ք���:�>&�c�e�O�`p of ligands that are surrounding a centrl metal ion. Within artificial devices, water can be oxidized efficiently on tailored metal-oxide surfaces such as RuO2. Carefully, insert coefficients, if necessary, to make the numbers of oxidized and reduced atoms equal on the two sides of each redox couples. Which one of the following is heteroleptic complex? has all whites are solids. Since linear complexes only have a 180 degree bond angle, it cannot have cis or trans isomers. Hydrometallurgy: 6: 339--346. The coordination numbers and geometries of copper complexes vary with oxidation state. The coordination number, oxidation number and the number of d-electrons in the metal ion of the complex $\ce{[COCl_2 -(en)2]Cl}$, are respectively (atomic number of Co=27) KEAM 2015 2. has all whites are solids. In the complex [CO(en)2 Cl2 ]Br, the total number of donor atoms is 6 (4 from two 'en' moles + 2 Cl- ions).Let the oxidation state of 'Co' be x.x + (0 × 2) + (-1 × 2) + (-1) = 0 x + 0 - 2 - 1 = 0 x - 3 = 0 x = +3 $\begingroup$ the coordination number is the number of coordinate bonds to the metal ion: six for the hexaaqua complex and four for the tetrathiocyanate complex. Which of the following complex is optically inactive, The compound $\ce{[Pt(NH3)2Cl2]}$ can exhibit. Coordination numbers are normally between 2 and 9. Protactinium: Question35. The coordination number is 6. means no. Protactinium: Copper salts, for example, are usually blue or green, iron has salts that are pale green, yellow or orange. Which of the following complexes exists as pair of enantiomers? hR�kQ�Mv7[M�n٦[ܦ��[�^��6mI=t�vI�-$�~���H� m`��P�`=HP�� 0 ��1�X Կ�������a-�霑%��ųS�0t!�s82��;��`�*F(����qo}C�.p�#]�o��8�K�z���a�b���|�z����I���lX���ı?�%�{55�lJ��Y���w ��X$�2���! Effective atomic number b. 292 0 obj <>stream M 2 O 3, to permit estimation of the metal's two oxidation states.) Aluminum, tin, and lead, for example, form complexes such as the AlF 6 3-, SnCl 4 2-and PbI 4 2-ions. Oxidation state, x- + 2 Coordination number is 6. • Pale blue [Cu(H2O)4] 2+ can be converted into dark blue [Cu(NH x = +2. exists in black and gold forms. Im assuming that the EDTA is 4- and the overall charge is 1- so the charge on Fe should be 3+. Preferable CN can change for an element depending on the atom's neighbors. Surface-directed corner-sharing MnO6 octahedra within numerous manganese oxide compounds containing Mn3+ or Mn4+ oxidation states show strikingly different catalytic activities for water oxidation, paradoxically poorest for Mn4+ oxides, regardless of oxidation assay (photochemical and electrochemical). The highest observed coordination number (the number of atoms that a central atom has as its neighbours in a compound) of chlorine (oxidation state of +7) toward oxygen is 4 (i.e., the chlorine atom is surrounded by four oxygen atoms), as found in the perchlorate ion, (ClO 4) −, whereas… Determine the oxidation state and coordination number of the metal ion in each complex ion. Copper salts, for example, are usually blue or green, iron has salts that are pale green, yellow or orange. Oxidation Number of Periodic Table Elements. endstream endobj startxref William Adamson, Chen Jia, Yibing Li, Chuan Zhao, Cobalt oxide micro flowers derived from hydrothermal synthesised cobalt sulphide pre-catalyst for enhanced water oxidation, Electrochimica Acta, 10.1016/j.electacta.2020.136802, 355, (136802), (2020). Farms For Sale Near Columbus Ohio, Rosemary Plant Care Indoor, Mandarin Collar Short Sleeve Shirt, Light Mountain Hair Color Reviews, Homes For Sale Vancleave, Ms, How To Get Strawberry Seeds, German Engineering Abbreviations, More
|
CommonCrawl
|
Damage Estimation using Shock Zones: A case study of Amphan tropical cyclone
Medha, Biswajit Mondal, Gour Doloi, S.M. Tafsirul Islam, Murari Mohan Bera
posted 27 Oct, 2021
The tropical cyclone affects millions of people living in the coastal regions. The changing climate has led to an increased intensity and frequency of cyclones, therefore increasing the damage caused to people, the environment, and property. The Bay of Bengal is most prone to tropical cyclones, which affects Bangladesh and the eastern coastal region of India due to geographical proximity. Hence, a comprehensive understanding of the inundation damage and intensity caused becomes essential to focus the relief efforts on the affected districts. This study identified the shock zone and assessed the inundation associated damage caused by recent cyclone Amphan in the area of Bangladesh and West Bengal in India. The shock zonation was based on the track of cyclones, cyclone wind speed zones, elevation, wind impact potentiality, and agricultural population area. The identification of the affected area was done using integrated Landsat and SAR data, and economic damage cost was assessed using the Asian Development Bank's (ADB) Unit price approach. The total people affected due to inundation are 2.4 million in India and 1.4 million in Bangladesh and the damage totaled up to 5.4 million USD. The results of this study can be used by concerned authorities to identify the shock zones and be used for rapid assessment of the damages.
Tropical Cyclone
Landsat and SAR data
Shock Zonation
Tropical cyclones are storms that cause extensive damage to property, disruption of transport and communication networks, loss of human and animal lives, and environmental degradation (Dube et al., 2009; Krapivin et al., 2012; Sahoo and Bhaskaran, 2018, Ying et al., 2014; Needham et al., 2015; Bakkensen and Mendelsohn, 2019). Around the world, around ninety tropical cyclones are formed per year, which causes catastrophic disasters (Murakami et al., 2013). Globally, tropical cyclones have caused the deaths of about 1.9 million people over the past two centuries (Shultz et al., 2005; Hoque et al., 2018). The approximate damage estimated was 26 billion USD each year (Mendelsohn et al., 2012; Hoque et al., 2019). Many studies have predicted an increased number and intensity of tropical cyclones over the years (Mendelsohn et al., 2012; Ranson et al., 2014; Varotsos et al., 2015; Alam and Dominey-Howes, 2015; Walsh et al., 2016; Moon et al., 2019). This increases the risk of impact on coastal communities, animals, environment, and properties (Varotsos and Efstathiou, 2013; Hoque et al., 2019). According to UNISDR's recent report 'Economic loss, poverty, and disasters, 1998-2017 climate-related disaster made over 4.4 billion people homeless, displaced, and injured worldwide. In India and Bangladesh, approximately 5.5% of the population was directly exposed to disasters in this period. India faced an absolute economic loss of 79.9 billion USD during 1998-2017. World Bank estimates suggest disaster causing over 16 billion USD in total damage in Bangladesh during 1980-2008(UNISDR, 2018).
The Bay of Bengal(BOB) is frequently affected by tropical cyclones. The geographical proximity of Bangladesh and the eastern coast of India to BOB makes the regions highly prone to cyclonic disasters (Islam and Peterson, 2009; Paul et al., 2010; Ahmed et al., 2016; Islam et al., 2016). In the last 100 years, around 17% of tropical cyclones have made landfall on coastal areas of the Bay of Bengal (Hoque et al., 2019). High-intensity tropical cyclones re-occur frequently causing extensive damage in the coastal region of both countries (Alamand Collins, 2010; Mallick et al., 2017). Further, these coastal regions are highly vulnerable due to large population density, high poverty rates, and the presence of temporary infrastructure. According to Paul and Dutt (2010), more than 1 million people were killed by cyclonic disasters since 1877 in coastal Bangladesh. Further, sea-level rise due to global warming will intensify the impacts of tropical cyclones on people's lives and livelihood across the coastal districts of both India and Bangladesh (Karim and Mimura, 2008; Sarwar, 2013; Abedin et al., 2019).
On 20th May 2020, the tropical cyclone 'Amphan' hit the coast of India and Bangladesh, accompanied by severe storm surges and rainfall (wind speeds up to 195kmph or 121mph). The cyclone caused causalities, killing around 88 people and leaving thousands homeless in India and Bangladesh (Aljazeera, 2020). The cyclone struck at a time when the region had already been ailing with the impact of the COVID-19 pandemic. In such a situation, the relief and recovery measures get further complicated. Therefore, finding the risk zones and estimating the damage is essential to provide an idea about the loss of property, agricultural and livestock, and various primary livelihoods. Some news reports and government organizations published estimated damage for a particular area (Sud and Rajaram, 2020) or specific aspects. Detailed reports on risks and overall damages in the entire cyclonic affected coastal and adjacent districts were not available.
Risk mapping is a fundamental technique to derive spatial information. Risk mapping assesses the impacts of any hazard or disaster that make people, properties, and environments vulnerable (Pradhan and Lee, 2010; Mohammady et al., 2012; Zare et al., 2013; Rashid, 2013; Pradhan et al., 2014; Youssef and Al-Kathery, 2015; Dieu et al., 2016; Aghdam et al., 2016). Remote sensing and geospatial technique have been used effectively for mapping risk-prone areas (Yin et al., 2010; Poompavai and Ramalingam, 2013; World Meteorological Organization Communications and Public Affairs Office Final, 2011). MODIS, Sentinel, Landsat data, and Census information are frequently used to understand flood, landslide, earthquake, cyclone impact on land-use land cover and socio-economic situation (Agnihotri et al., 2019; Haraguchi et al., 2019; Jeyaseelan, 2003; Tay et al., 2020; Aksha et al., 2020). A review of existing literature shows the spatial analysis techniques are commonly used for mapping risk (Kunte et al., 2014; Mori and Takemi, 2016; Hoque et al., 2017; Hoque et al., 2018; Karim and Mimura, 2008; Kumar et al., 2011; Dasgupta et al., 2011; Roy and Blaschke, 2013) with the use of multi-criteria based approach being used the most (Poompavai and Ramalingam, 2013; Gao et al., 2014; Quader et al., 2017). The spatial risk assessment model has proven useful in minimizing the loss of life and the socio-economic impact (Yin et al., 2013; Mahapatra et al., 2015; Masuya et al., 2015), while the current GEOSOM based shock assessment model can assist in mitigation and current impact assessment for a specific event.
Studies related to tropical cyclone risk mapping are done widely, but studies on spatial damage and loss estimation and mapping due to cyclones are very inadequate. An understanding of the socio-economic damages caused by tropical cyclones is important to undertake the proper recovery measures(Ahmed et al., 2016; Joyce et al., 2009). Moreover, spatial loss assessment is crucial for the allocation of resources for agricultural activities, regeneration of jobs, and other socio-economic activities by the funding agencies. There are two popular typologies, direct and indirect estimation. Direct cost considers the immediate cost of any disaster, whereas indirect estimation focuses on the disaster-associated consequences. Rather than distinguishing direct and indirect loss, several studies focus on the assets and output loss approach (Hallegatte, 2015). The Multi-sectorial Input-output model (Haque & Jahan, 2015), unit cost-based model (GoB, 2008; Roy et al., 2009; Government of Odisha, 2013) are commonly used to estimate the output loss. Significant advancements have been made in the damage assessment framework (Dolman et al., 2018). The availability of real-time satellite data and global socio-economic datasets has significantly improved the damage and loss estimation accuracy. Therefore, our recent study provides deep and elaborate details of damage estimation of the entire flood inundated areas after the cyclone.
This study develops a spatial framework that includes cyclone shock zones and damage and output loss intensity. UN-SPIDER recommended damage estimation practice and unit cost methods were combined to estimates output loss for the entire flood inundated areas caused by cyclone Amphan. This study seeks to analyse the situation of the areas majorly affected by recent inundation and flooding caused by the Amphan cyclone. Firstly, the study assesses the categories of Amphan shock zones to identify potentially exposed areas, rather than following the common risk zonation approach. Secondly, developing a spatial damages assessment framework to account for the economic cost of inundation and flooding on the crop, livestock, and housing units. In this study, the maps produced by risk assessment would be very helpful to identify the spread and intensity of disaster to create the most effective disaster mitigation plan in this area. This understanding of the socio-economic damages caused by tropical cyclones is important for reducing the losses by adopting proper recovery measures(Ahmed et al., 2016; Joyce et al., 2009).
2. Data
This study uses socio-economic, disaster, and climate-related data, administrative GIS layers, and satellite data to estimate the cyclone severity and damage intensity. Socio-economic data was used to estimate human exposure to the disaster and the household crisis. Climate data depicts the last 48 hr. update of the cyclone event i.e., its track, intensity, and area of influence. The historical disaster records were used to assess the current loss and to compute area-wise disaster damage intensity. Further, GIS layers and remote sensing data served the local to regional level damage and loss information of crop, forest, property, and human life. The database and its preliminary preparation process are illustrated in Table 1.
Details of data used to estimate cyclone shock zones and damage intensity
DETAILS OF DATA PREPARATION
Bangladesh Socio-economic
District total population 2018, Crop Yield 2018-19; Unit price of crop and livestock unit 2019-2020; Unit price for property 2018; Poverty rate 2016
Bangladesh Bureau of Statistics; Yearbook of Agricultural Statistics 2019; Department of Agricultural Marketing; Food for Nation, Ministry of Agriculture; HIES 2016
• Official population data is used to validate the WorldPop population grid data
• Crop yield, livestock product/unit, and unit price data used to estimate the output value/unit crop and livestock
• Standard property reconstruction price/unit data used to estimate the total cost of reconstruction
IndiaSocio-economic
District total population 2011, Crop Yield; Unit price of crop and livestock 2019-20; Unit price for property 2018-19; the Poverty rate
Census of India 2011; DACNET; Agmarknet; NABARD;PMAY-G; Poverty Grid, Livemint
• Official population data is used to validate the World Pop population grid data
• Crop yield, total livestock, and unit price data were used to estimate the output value/unit crop and livestock
GIS layers
Previous cyclone path; global exposure (low-income group) 2015; Bangladesh district administrative boundary from HDX and
Indian district boundary 2011 from Datameet (open platform); FAO Livestock population 2010; GHSL 2014; Population grid 2020
Humanitarian Data Exchange (HDX);Data meet; FAO agricultural statistics; European commission; WorldPop
• History of previous cyclone track data is valuable to understand the regional risk from the cyclone events
• Spatial estimated population dataset is useful to estimate the current population under threats
• The reliability of the population spatial dataset is checked using 2011 as a reference year.
• Total low-income rural population data is extracted from the global human exposure dataset 2015.
• The 2010 man-animal proportion is used to compute the livestock population in 2020.
• Built-up grid data is used to validate the household unit for the year 2014.
• WorldPop population grid data is used to estimation the inundation impact on human life
Sentinel-1C-band, and Landsat 8, SRTM DEM data
Flood inundation extent map on 1st May 2020 and 22nd May 2020; LULC based standing cropland, built-up, flood inundation 4th May 2020; Elevation
LANCE; USGS
• SAR, Landsat 8 data, and random forest classification scheme is used to prepare the LULC map for the year May 2020 in the Google earth engine.
• DEM elevation data is used to validate the inundated area
CycloneAmphan track
IMD and BMD
• Details track of Amphan 2020 is used to demarcate the risk zones based on wind speed: High risk [wind speed: Above 120kmph], medium risk [wind speed: 90-120kmph], low risk zone [Below 90kmph]).
Category wise damage and loss from previous cyclones
Government reports on Feni, Alia andPhalin, Sidr; Bangladesh Disaster Report 2009-2014; Reliable online media
• Reviews on previous cyclones, its intensity,damage, and loss amount are collected; Current media gross damage information is used to validate our estimated information.
The study was carried out in four major steps. First, shock zones were defined using the cyclone characteristics and socio-demographic situation shock zones. Second, using remote sensing and GIS tools LULC and flood-affected areas were demarcated. Third, the impacts of inundation were estimated using LULC, inundation area, population, and poverty situation. Finally, cyclone shock zones and their associated cost are estimated to understand the association between cyclone intensity and damage. Detail workflow of these steps is illustrated in Figure 1.
3.1 LULC and flood mapping
Medium resolution optical (30 m x 30 m) Landsat 8 data along with 10 m resolution Sentinel-1 C-band GRD data dated 4th May 2020 was used. A combination of SAR and optical data is used to reduce the chances of misclassification. The present study follows the Anderson et al., (1976) LULC classification scheme to prepare the LULC data in the Google Earth Engine (GEE). A Random Forest (RF) algorithm is used for the large area, because of its high accuracy(Abdullah et al., 2019). Six broad spectral classes were used, i.e., Built-up, Open land, Cropland, Vegetation, Sand, and Water bodies using 100 training pixels for each class. The overall accuracy of the LULC data was computed to be 91% and the Kappa coefficient was 89%.
Inundation change analysis performed in the GEE Sentinel-1C-band GRD imagery with VV, VH polarization, and 'DESCENDING' pass direction for the dates 4th May 2020 and 22nd May 2020 was used for pre and post-inundation situation (Uddin et al., 2019). In the GEE, all the data is pre-processed (i.e., noise removal, radiometric correction, and terrain correction, and finally, backscatter scattering to decibel conversion). The intensity of change per pixel was estimated by dividing after flood mosaic with before flood mosaic. A binary flood layer was prepared using a threshold of 1.25, where values above 1.25 were assigned a score of 1 and all other pixels are assigned a score of 0 .
Grid level sector-wise loss estimation
Croploss estimation
\(Crop Cost=Innundation affected croppedarea*\sum \left({Yield}_{ij}*{unitprice}_{ij}\right)\)
Where i represent district and j represents crop types
Livestock loss estimation
\(Livestock Cost=\left({livestock density}_{ij}*{innundated crop and built-up area}_{ia}\right)*{unit price oflivestock}_{j}\)
Where i represent grid within a specific district and j represents livestock types; a represent grid within the ith district
Housing loss estimation
\(HousingCost=\left(\frac{{innundated built-up area}_{i}}{standard reconstruction area}\right)*{unit price ofpartial reconstruction}_{national}\)
Where i represent grid within a specific district
Low-income rural population around the flooded (<1km)was estimated for each grid
3.2 Damage and loss estimation
Details on average damage and loss pattern of cyclone events were compiled from different reports. Based on the data availability this study estimated damage for the housing, crop, and livestock sectors including human life. Damage and loss quantity vary significantly between rural and urban areas. The damage assessment for the infrastructure and forestry sectors was not attempted due to inconsistencies in the media report and the absence of reliable spatial data.
This study followed two steps to estimate the loss value which is in a comparable form. Firstly, inundation affected cropland, livestock, and housing loss area was estimated using the 5km x 5km grid. Secondly, inundated cropland and the built-up area within each grid was multiplied with the unit price of the crop, livestock, and housing. The values are converted into USD using country-specific exchange rate. Details steps are illustrated in Table 2.The total cyclone damage and loss amount are estimated for each cyclone shock zone and district.
3.3Amphan and poverty severity
Grid wise population data and flood-affected areas were intersected to estimate population exposure to inundation. Also, low income rural and urban population from the global exposure dataset 2014 were used to estimate the district-wise poor population share. Next, the association between district-level poverty rate and damage intensity was computed to understand the cyclone induced poverty. Further, food accessibility and total risk information were combined to assess the severity of poverty. This estimation identifies the areas which require urgent relief and the approximate amount of monetary support needed.
3.4Amphan cyclone severity zone
Cyclone wind speed zones, elevation, wind impact, distance from cropland and vegetation, business loss, and agricultural population area was used to prepare Amphan severity zones. All these variables were normalized such that higher values indicate high severity and lower value indicate low severity. These zones are then intersected with the damage data to understand the association between cyclone shocks intensity and damage intensity.
Four zones were extracted using GeoSOM Ward's criterion-based 'Contiguity- constrained hierarchical clustering' process, available in the SPAWNN toolkit algorithm (Hagenauer and Helbich, 2016). The spatial clustering approach is a two-step process to define the Best Matching Unit (BMU). Firstly, based on the input dataset i.e., wind speed, distance from the cyclone track, elevation, and proportion of rural area closest neurons are identified. Secondly, a distance criterion is applied to the identified closest neurons. This study used radius = 1 to reduce the spatial dependencies in BMU. Each input variable and its BMU structure are presented in a hexagonal space. This spatial clustering is based on self-organising neural network techniques which are useful for its information extraction capability from the large spatial dataset.
4.1 Bay of Bengal and tropical cyclones
The coastline of India and Bangladesh is highly vulnerable to tropical cyclones. Between 1877 and 2016, 525 cyclonic storms were recorded (Bandopadhay et al., 2018). The recorded trend shows that the yearly incidence of the cyclone has increased by about 35.9% during the past century. The maximum wind speed has increased by 18.5% in the past century, with a rapid rise after 1960. The median landfall locations of the tropical cyclone are distributed in a broad arc from the central coastline of Odisha to the Western coastline of Bangladesh. The median landfall has shifted over the years. Median landfalls were in Odisha from 1877 to 1920, the border of Odisha and West Bengal from 1921 to 1960, and the border of West Bengal and Bangladesh from 1961 to 2016. Between 1981 and 2000 the cyclone intensities were the highest at West Bengal and Northern Orissa with 32-37 strikes per year. Bandyopadhyay et al. (2018).This intensity declines incrementally in both the direction. While the highest mean speed of 58-63 knots occurred in the eastern coastal area of Bangladesh.Since2000 over 180 cyclones hit this coastal area. In general, the cyclones occur in the months of April-June and September-November. According Indian Meteorological Department during 2000-2019 cyclone intensification probability from depression to the severe cyclone is quite high in the month of April (60%), May (45%), November (40%) followed by December and October. Cyclones which developed in the Bay of Bengal between 2000-2019 moved towards two-dominant directions, one is towards north-west (Odisha, Andhra Pradesh, and central India), and the other move towards north-east (West Bengal and Bangladesh), as illustrated in Figure 2.
Cyclonic storms occur frequently over the regions of the eastern coast of India and Bangladesh. Since, 2007, 1-2 cyclonic storms occur each year. The wind speed ranges from 75km/hr to 260km/hr, with half of the cyclones falling in the super cyclonic storm or extremely severe cyclonic storm range. The overall losses depend upon the intensity of the cyclone and the geographic region where it hits (Table 3).
Amphan made landfall on the 20th of May affecting districts in the states of Odisha, West Bengal in India, and few districts in Bangladesh. This study considers the areas lying around 200Kms from the path of the cyclone. The area of interest includes 26 districts in Bangladesh, 12 districts from West Bengal, and 1 district of Odisha, India. The total track length of the cyclone Amphan was 1765 Km. It crossed the West Bengal coast as an extremely severe cyclonic storm with a speed of 185-190 Kmh. Rapid intensification was observed after landfall over the West Bengal coast, after which it moved north-east into Bangladesh as a severe cyclonic storm. It weakened into a deep depression as it moved further north-east. It causes storm surge with a height of around 4.5-5 meters and heavy rainfall in the coastal area of Odisha, West Bengal, and Bangladesh.
Estimated damages and wind speed of severe cyclones which occurred in the 21st century in coastal areas of India and Bangladesh
Cyclone Name
Date of occurrence
Wind speed (km/hr)
Damage/Losses [USD]
2007; May
Sidr
2007; November
2.31 billion
2008; October
12.9 billion
Vivaru
Phailin
2015; July
Roanu
5.58 million
Amphan
4.2Damage estimation
Amphan cyclone caused inundation, which severely affected the cropland, livestock, and property in 18 districts of India and 26 districts in Bangladesh. The districts of Purba Medinipur, Nadia, South 24 Pargana, and in Bangladesh &Barisal, Phirojpur, Bagerhat, Gopalganj, Meherpur, Chuadanga, Kushtia, Natore, Rajshahi, Bogra, and Gaibandhawere badly affected due to severe rainfall and storm surge. Inundation caused total damage of around 5.4billion USD in Bangladesh and India, which is 0.03% and 0.23% of the state GDP of Bangladesh and India, respectively(Table 4). The total damage amount is expected to increase exceptionally if all sectors are included in the estimation.
Estimated damage in Bangladesh and India
No of unit
Cost (USD)
GDP share
107215 ha
$10,09,46,527
GDP 2019-20 (current price)
$3,31,56,01,37,539
4.2.1 Crop damage
The agriculture and animal husbandry sectors are a major contributor to the State Gross Domestic Product of West Bengal. Almost 22% GSVA came from the agriculture, livestock, fisheries, and forestry sectors in the year 2017-18. According to the Census of India 2011, more than 68% of people depended on the agricultural sector and around 16 % of people (cultivators and agricultural labour) were directly engaged with this sector, which is spread over 5.25mha (60%) area. Paddy and Pulses are the dominant crops followed by Jute, Maize, Wheat, Potato, Sugarcane, Til, Mung, Vegetables. In the months of April-June Aus paddy, pulses, vegetables, and fruits (mango).
In Bangladesh, more than 10% of GDP came from the agriculture, livestock, fisheries, and forestry sectors in the financial year 2019. This sector manages to feed more than 40% of the labour force. More than 83% of households in the country live in rural areas and are dependent on agricultural and allied sectors. Agriculture is still the dominant activity in the country with 8 Mha net cropped area (54%). Paddy is the major crop (77%), followed by pulses (2.8%), oilseed (2.78%), spices and condiments (2.53%), and vegetable (1.27%).
Cyclone Amphan severely affected the agricultural sector in India and Bangladesh. Severe rainfall caused heavy crop damage. A total of 0.11 Mha and 0.06 Mha cropland was severely affected due to inundation in the districts of West Bengal, and Bangladesh respectively, which was under kharif crops (Figure 4). Mostly rice, pulses, vegetation, and mango fruit were affected in the districts of West Bengal, whereas largely paddy cultivation was affected in Gopalganj (4.7 million USD), Jassore (4.4 million) and Jhenaidah (3.8 million USD) districts of Bangladesh due to heavy rainfall. The loss in monetary terms was more than 288 million USD in West Bengal and 37 million USDin Bangladesh, which is around 0.15% and 0.01% of state GDP respectively. Purba Midnapur (146.6 million USD), Paschim Midnapur (55 million USD), and Howrah (17.9 million USD) were the worst affected districts in India.
4.2.2 Livestock damage
A large proportion of poor rural households are directly or indirectly dependent on the livestock product as a food source and household income in West Bengal, India, and Bangladesh. The livestock sector is an important contributor to the West Bengal state economy with a share of 4.41% of the state domestic product. Bangladesh's rural livestock economy contributes 1.43% of GDP..
The recent cyclone caused extensive damage to the livestock economy in West Bengal and Bangladesh. Livestock death has not been considered as this information was not available. This study assumed that flooded areas were more prone to livestock deaths; therefore damage associated with the inundation was estimated here. In West Bengal, a total of 3.2 thousand livestock and 11.3 thousand poultry units were affected due to the submergence of built-up and crop area, whereas 7.2 thousand livestock and 47.4 thousand poultry unit were affected in Bangladesh. Amphan flood caused around 3.04 million USD livestock damage in Bangladesh and 1.51 million USD in West Bengal, India. Purba Medinipur (0.59 million USD), Paschim Medinipur (0.30 million USD), South 24 Parganas (0.23 million USD), and Howrah (0.22 million USD) has been severely affected in West Bengal. Jhalokati (0.59 million USD), Barguna (0.54 million USD), and Bagerhat (0.46 million USD) districts have been severely affected due to flooding in Bangladesh (Figure 5).
4.2.3 Housing damage
Cyclone Amphan caused extensive damage to the housing units. Both kutcha and semi-pucca houses were affected due to flooding and associated cyclonic wind. The assessment considered household units that were submerged underwater for around 3-4 days after the cyclone, as spatial information on affected kutcha and pucca structures is unavailable. Therefore, this study assumes that both kutcha and pucca structures were partially damaged due to heavy rainfall and storm surge. Almost 0.2 million housing units were affected due to recent inundation in West Bengal, where districts Purba Medinipur, Hoogly, Howrah, S. 24 Parganas, and Paschim Medinipur were severely affected. In Bangladesh, approximately0.9 million housing units were severely affected due to flooding.
The partia lcost for reconstruction of houses in West Bengal amounted to 150.17 million USD and 60.8 million USD in Bangladesh. Housing units in Purba Medinipur, S 24 Parganas, N 24 Parganas in India, Gaibandha, Phirojpur, Natore, Khustia, and Bogradistrict were badly affected (Figure 6).
4.3 Impact on low-income household
Disaster damage has a strong positive relationship with the cancelation of poverty eradication. Bangladesh is still considered at high risk for disaster-induced poverty (ODI, 2013).The situation of Bangladesh is extremely critical as every year disaster-related damages increase the poverty incidence. A high proportion of the low-income rural populations, high disaster frequency, and spread of settlement over the low-lying areas are the major cause of high disaster damage. The situation in West Bengal is different from Bangladesh as the high-density population, physical setting and unplanned development activity increase the damage intensity and resultant poverty rate. Even though the disaster early warning has helped in reducing the degree of disaster losses, the displacement due to disaster and loss of livelihood loss due to disaster is increasing.
Recent inundation in Bangladesh directly affected 1.4 million people, which is almost 3 percent of the total population in the study region. Jhalokati (28% population), Phirojpur (23%), Bagerhat (11%), Barguna (11%), Barisal (9%) and Gopalganj (8%) districts are extremely affected due to the inundation. Among these districts a high number of poor populations accounted in Phirojpur (17.6%) are poor followed by Gopalganj (15.5%), Bagerhat (14%), Barisal (13.6%), and Barguna (12%). Bagerhat are in the most critical state in term of poverty rate and damage share (11.8%) followed by Barisal (19.6%), Gopalganj (10.7%), Phirojpur (7%) and Barguna (7%) (Figure 7). Although the number of people affected in the Patuakhali is small, but the poverty rate (24.4%) and damage share (8%) are significantly high. Similarly, a large number of poor populations are affected in India. A total of 2.3 million people is directly affected due to heavy rainfall and inundation damage. Purba Medinipur is the worst affected district in West Bengal (19% population), followed Howrah (10%), Paschim Medinipur (5%) and Balasore (7%) district of Odisha. The poverty rate of Purba Medinipur, Paschim Medinipur, Howrah and Baleshwar are 17.7 %, 23%, 14.3% and 33% respectively. Paschim Medinipur are in the most critical state in term of poverty rate and damage share (15.6%) followed by Purba Medinipur (52%) and S 24 Parganas (6%) (Figure 8). The share of the poor population in Bangladesh is quite high as compare to India, therefore Amphan damage intensity has made a severe crisis in Bangladesh.
4.4 Cyclone shock zones
The shock zones define the varied vulnerability (Figure 8). The districts were clustered into four shock zones using the variables- wind speed, distance from the cyclone track, elevation, and proportion of the rural area. The highest shock zone lies along the cyclone track, experienced the highest wind speed, was prone to storm surge due to low elevation, and had a greater proportion of rural areas (Table 5).
In India, the wind speed experienced was highest for Zone I, followed by Zone II, Zone III, and lastly Zone IV. The wind speed showed a declining trend from Southeast to North West. Zone I in Bangladesh faced the greatest wind speed, followed by Zone III, Zone IV, and Zone II respectively. Wind speed showed a decline from west to east and from south to north in Bangladesh.
The cyclone travelled from south-west to north-east. Hence, the north-west and south-east area of interest lie farthest from the cyclone track. In India, Zone I and II and in Bangladesh Zone IV and Zone I were closest to the path of the cyclone.
The lower elevation areas have the highest chance of being affected due to cyclone, experiencing flood and storm surges. In India, for elevation Zone I lies at the lowest elevation, followed by zone II and zone III respectively. In Bangladesh, Zone I had the lowest elevation followed closely by zone 3 and zone II.
Rural tracts have greater vulnerability owing to the damage caused to agricultural land, livestock and temporary infrastructures such as mud and thatched houses. Bangladesh is predominantly rural, hence more vulnerable as compared to India
Summary of cyclone Amphan shock zones
RISK ZONES
FROM TRACK
SHARE OF
AND ODISHA
PARTIAL AREA
ZONE I
Very Close
Lowest to Low
Medium to High
ZONE III
High and Low
Close to Medium
ZONE IV
Farthest to Far
Very High to High
Medium to Low
Far to Medium
Wind score: 0.0-0.2 (very low), 0.2-0.4 (low), 0.4-0.6 (medium), 0.6-0.8 (high), 0.8-1.0 (very high); Track proximity: 0.0-0.2 (farthest), 0.2-0.4 (far), 0.4-0.6 (medium), 0.6-0.8 (close), 0.8-1.0 (very close); Proportion of rural population: 0-20 (lowest rural), 20-40 (low rural), 20-60 (Medium rural), 60-80 (High rural), 80-100 (Highest rural); Elevation: 0.0-0.2 (Highest); 0.2-0.4 (high); 0.4-0.6 (medium); 0.6-0.8 (low); 0.8-1.0 (lowest)
Cyclone shock zones and damage intensity
Shock zones wise damage share (%)
Zone I for West Bengal and Zone II for Bangladesh was the most affected by Amphan(Table 6).Zone I lied directly in the path of the cyclone, hence experienced high wind speed. Zone I (India) had the lowest elevation, hence making the area highly vulnerable to the damage caused by the cyclone. Zone II (Bangladesh) was not close to the cyclone track but near to the coast and the entire region is in low-lying area, hence experienced the highest storm surge, rainfall, and high wind speed.
5. Discussion And Conclusion
Spatial risk zonation is commonly used to identify the exposed area to certain disasters. In general, historical data and large heterogeneous actors are incorporated into the risk assessment models. Based on the exposed area measurement approach this study demarcates four shock zones based on elevation, distance from cyclone track, wind speed, and rural area share based on GEOSOM clustering. These zones explain the degree of initial shock that Amphan created over India and Bangladesh. Extracted shock zones are useful to understand the potential exposed area and damage intensity. This study used these zones to estimate the inundation and flood-related damage caused by the cyclone. Estimation suggested that the cyclone intensity and inundation damage was not mutually inclusive. The northern and eastern parts of the study area did not face severe wind and were quite far from the eye of the cyclone, but inundation damage was still high. Physical setting and rainfall intensity caused huge damage within the 200km radius. A sudden weakening of the translation speed of Amphan caused an increase in rainfall and inundation damage in Bangladesh, as also noticed in Kossain's (2018) findings. These generalized climatic risk micro-zonation and ranking mechanisms can induce event-specific bias to future management plans.
The spatial damage assessment framework integrated UN-SPIDER SAR-based flood damage estimation and Asian Development Bank's (ADB) unit price approach to assess the economic loss. All these types of damage and loss estimation framework largely underestimate the actual cost. However, the current spatial approach is more useful and cost-effective to identify the worst affected location and damage intensity more clearly. Only the inundation associated with economic loss was captured. News reports and govt. organizations published estimated total damage, but a detailed report on Amphan inundation impact was not available for the entire cyclonic affected coastal and adjacent districts.According to government sources the total estimated damages of Amphan is 132 million USD in Bangladesh and PCD Global estimates show 12.4 billion USD in India, whereas our estimation suggests only flood and inundation caused total damage of 5.4 billion USD. Flood and inundation caused 31 % of total damage in India, 7% in Bangladesh. If we considered the global average flood impact, up to average of 5.5% people will be expected to fall below the poverty line in Bangladesh and West Bengal.
This framework is more reliable and fit for flood damage assessment but can be usedto estimate damage and loss for all types of natural disasters if the required information is available. However, the current approach has several limitations such as Grid level assessment (5km x 5 Km), partial flood damage assessment, which leads to gross generalization due to unit cost approach, and classification error. First, in general, the grid-based estimation is sometimes unable to provide high spatial accuracy, which can also increase the underestimation error. However, large coverage areas and different sources of information can be easily integrated using this approach. Second, partial estimation of inundation damage is captured through the above methodology as data on types of standing crops was not available. Therefore, district-level gross estimates were apportioned to the grid level. Similarly, damaged kutcha and pucca household locations were not available, even the degree of damage information was not available. Hence, partial damage cost is uniformly applied to estimate the property damage. Third, the unit cost approach introduces large generalization. Even though the previous estimates largely follow this approach, no other standard universal tools are available. Fourth, some LULC classification errors and flood detection errors are unavoidable, although these are minimized using rigorous accuracy checks and post-classification data standardization.
Even though the spatial damage estimation has always been a subject to underestimation as it only captures the material cost, hence underestimating the actual economic cost. The spatial approach is still useful as it is suitable to assess the pattern and degree of damage and identifies the worst affected location. This spatial damage estimate framework is indispensable to improve the relief and support mechanism. The damage estimation is essential to channelize the resources and funds to the affected people and proper locations. This method is quite cost-effective and efficient as large-scale spatial datasets as tools and techniques are readily available. Remote sensing and GIS technology can be used to for a rapid estimation of the aftereffect of any disaster event possible. This research adds to the research by developing a spatial direct damage estimation framework using freely available spatial dataset.
We confirm that any aspect of the work covered in this manuscript that has involved human patients has been conducted with the ethical approval of all relevant bodies and that such approvals are acknowledged within the manuscript.
Declared by Authors-
1. Medha, Centre for the Study of Regional Development, Jawaharlal Nehru University, New Delhi, India, [email protected]
2. Biswajit Mondal, Centre for the Study of Regional Development, Jawaharlal Nehru University, New Delhi, India, [email protected]
3. Gour Doloi, Assistant Professor (SACT), Panskura Banamali College, [email protected]
Competing interests: The authors declare no competing interests.
ABC Report, (2020). Cyclone Amphan slams into India and Bangladesh, evacuations complicated by coronavirus. ABC News. https://www.abc.net.au/news/2020-05-20/super-cyclone-amphan-india-bangladesh-brace-for-storm-surges/12266496
Abedin, M.A., Collins, A.E., Habiba, U., Shaw, R., 2019. Climate change, water scarcity, and health adaptation in southwestern coastal Bangladesh. International Journal of Disaster Risk Science 10 (1), 28–42.
Aghdam, I.N., Varzandeh, M.H.M., Pradhan, B., 2016. Landslide susceptibility mappingusing an ensemble statistical index (Wi) and adaptive neuro-fuzzy inference system(ANFIS) model at Alborz Mountains (Iran). Environ. Earth Sci. 75 (7), 553.
Agnihotri, A. K., Ohri, A., Gaur, S., Shivam, Das, N., & Mishra, S. (2019). Flood inundation mapping and monitoring using SAR data and its impact on Ramganga River in Ganga basin. Environmental Monitoring and Assessment, 191(12). https://doi.org/10.1007/s10661-019-7903-4
Ahmed, B., Kelman, I., Fehr, H., Saha, M., 2016. Community resilience to cyclone disasters in coastal Bangladesh. Sustainability 8 (8), 805.
Aksha, S. K., Resler, L. M., Juran, L., & Carstensen, L. W. (2020). A geospatial analysis of multi-hazard risk in Dharan, Nepal. Geomatics, Natural Hazards and Risk, 11(1), 88–111. https://doi.org/10.1080/19475705.2019.1710580
Alam, E., Collins, A.E., 2010. Cyclone disaster vulnerability and response experiences in coastal Bangladesh. Disasters 34 (4), 931–954.
Alam, E., Dominey-Howes, D., 2015. A new catalogue of tropical cyclones of the northern Bay of Bengal and the distribution and effects of selected landfalling events in Bangladesh. Int. J. Climatol. 35 (6), 801–835.
Bakkensen, L.A., Mendelsohn, R.O., 2019. Global tropical cyclone damages and fatalities under climate change: an updated assessment. Hurricane Risk. Springer, pp. 179–197.
Bandyopadhyay, S., Dasgupta, S., Khan, Z. H., & Wheeler, D. (2018). Cyclonic Storm Landfalls in Bangladesh, West Bengal and Odisha, 1877-2016. World Bank Group.
Biswas, S. (2020, May 19). Amphan: Why Bay of Bengal is the world's hotbed of tropical cyclones. BBC News. https://www.bbc.com/news/world-asia-india-52718531
Chaturvedi, A. (2020, May 20). Cyclone Amphan: List of deadly storms in Bay of Bengal in last 30 years. Hindustan Times. https://www.hindustantimes.com/india-news/cyclone-amphan-list-of-deadly-storms-in-bay-of-bengal-in-last-30-years/story-z5FEdpXnpJOYRb6dAVxL3L.html
Chou, J., Dong, W., Tu, G., & Xu, Y. (2019). Spatiotemporal distribution of landing tropical cyclones and disaster impact analysis in coastal China during 1990–2016. Physics and Chemistry of the Earth, Parts A/B/C, 102830.
Cortés-Ramos, J., Farfán, L. M., & Herrera-Cervantes, H. (2020). Assessment of tropical cyclone damage on dry forests using multispectral remote sensing: The case of Baja California Sur, Mexico. Journal of Arid Environments, 178, 104171.
Dasgupta, S., Huq, M., Khan, Z.H., Ahmed, M.M.Z., Mukherjee, N., Khan, M.F., Pandey, K., 2011. Cyclones in a Changing Climate: The Case of Bangladesh. Department for Environment, Food and Rural Affairs, London.
Dieu, T.B., Tien-Chung, H., Pradhan, B., Pham, B.T., Nhu, V-H., Revhaug, I., 2016. GIS-based modeling of rainfall-induced landslides using data mining-based functional trees classifier with AdaBoost, Bagging, and MultiBoost ensemble frameworks. Environ. Earth Sci. 75, 1101.
Dolman, D. I., Brown, I. F., Anderson, L. O., Warner, J. F., Marchezini, V., & Santos, G. L. P. (2018). Re-thinking socio-economic impact assessments of disasters: The 2015 flood in Rio Branco, Brazilian Amazon. International Journal of Disaster Risk Reduction, 31, 212–219. https://doi.org/10.1016/j.ijdrr.2018.04.024
Dube, S., Jain, I., Rao, A., Murty, T., 2009. Storm surge modelling for the bay of Bengal and Arabian Sea. Nat. Hazards 51 (1), 3–27.
Gao, Y.,Wang, H., Liu, G., Sun, X., Fei, X.,Wang, P., Lv, T., Xue, Z., He, Y., 2014. Risk assessment of tropical storm surges for coastal regions of China. Journal of Geophysical Research: Atmospheres 119 (9), 5364–5374.
GoB. (2008). Cyclone Sidr in Bangladesh- Damage, Loss and Needs Assessment for Disaster Recovery and Reconstruction. Power, February, 177.
Hallegatte, S. (2015). The Indirect Cost of Natural Disasters and an Economic Definition of Macroeconomic Resilience. Policy Research Working Papers, July, 1–40. https://www.gfdrr.org/sites/gfdrr.org/files/documents/Public finance and macroeconomics, Paper 3.pdf
Haque, A., & Jahan, S. (2015). Impact of flood disasters in Bangladesh: A multi-sector regional analysis. International Journal of Disaster Risk Reduction, 13, 266–275. https://doi.org/10.1016/j.ijdrr.2015.07.001
Haraguchi, M., Cian, F., & Lall, U. (2019). Leveraging Global and Local Data Sources for Flood Hazard Assessment and Mitigation: An Application of Machine Learning to Manila. Contributing Paper to the 2019 UN Global Assessment on Disaster Risk Reduction, 1–39.
Hoque, M. A. A., Pradhan, B., Ahmed, N., & Roy, S., 2019. Tropical cyclone risk assessment using geospatial techniques for the eastern coastal region of Bangladesh. Science of The Total Environment, 692, 10–22.
Hoque, M.A.-A., Phinn, S., Roelfsema, C., Childs, I., 2017. Tropical cyclone disaster management using remote sensing and spatial analysis: a review. International Journal of Disaster Risk Reduction 22, 345–354
Hoque, M.A.-A., Phinn, S., Roelfsema, C., Childs, I., 2018. Assessing tropical cyclone risks using geospatial techniques. Appl. Geogr. 98, 22–33.
Islam, M.A., Mitra, D., Dewan, A., Akhter, S.H., 2016. Coastal multi-hazard vulnerability assessment along the Ganges deltaic coast of Bangladesh–a geospatial approach. Ocean & Coastal Management 127, 1–15.
Islam, T., Peterson, R., 2009. Climatology of landfalling tropical cyclones in Bangladesh 1877–2003. Nat. Hazards 48 (1), 115–135.
Jeyaseelan, A. T. (2003). Droughts & floods assessment and monitoring using remote sensing and GIS. Satellite Remote Sensing and GIS Applications in Agricultural Meteorology, 291–313. http://www.wamis.org/agm/pubs/agm8/Paper-14.pdf
Joyce, K.E., Belliss, S.E., Samsonov, S.V.,McNeill, S.J., Glassey, P.J., 2009. A reviewof the status of satellite remote sensing and image processing techniques for mapping natural hazards and disasters. Prog. Phys. Geogr. 33 (2), 183–207.
Karim, M.F., Mimura, N., 2008. Impacts of climate change and sea-level rise on cyclonic storm surge floods in Bangladesh. Glob. Environ. Chang. 18 (3), 490–500.
Khan,M.S.A., 2008. Disaster preparedness for sustainable development in Bangladesh. Disaster PrevManag 17 (5), 662–671.
Krapivin, V.F., Soldatov, V.Y., Varotsos, C.A., Cracknell, A.P., 2012. An adaptive information technology for the operative diagnostics of the tropical cyclones; solar–terrestrial coupling mechanisms. J. Atmos. Sol. Terr. Phys. 89, 83–89.
Kumar, A., Done, J., Dudhia, J., Niyogi, D., 2011. Simulations of Cyclone Sidr in the Bay of Bengal with a high-resolution model: sensitivity to large-scale boundary forcing. Meteorog. Atmos. Phys. 114 (3–4), 123–137.
Kunte, P.D., Jauhari, N., Mehrotra, U., Kotha, M., Hursthouse, A.S., Gagnon, A.S., 2014. Multi-hazards coastal vulnerability assessment of Goa, India, using geospatial techniques. Ocean & Coastal Management 95, 264–281.
Mallick, B., Ahmed, B., Vogt, J., 2017. Living with the risks of cyclone disasters in the south-western coastal region of Bangladesh. Environments 4 (1), 13.
Mendelsohn, R., Emanuel, K., Chonabayashi, S., Bakkensen, L., 2012. The impact of climate change on global tropical cyclone damage. Nat. Clim. Chang. 2 (3), 205–209.
Mohammady, M., Pourghasemi, H.R., Pradhan, B., 2012. Landslide susceptibility mapping at Golestan Province, Iran: A comparison between frequency ratio, Dempster-Shafer, and weights-of-evidence models. J. Asia Earth Sci. 61 (SI), 221–236.
Moon, I.-J., Kim, S.-H., Chan, J.C., 2019. Climate change and tropical cyclone trend. Nature 570 (7759), E3.
Mori, N., Takemi, T., 2016. Impact assessment of coastal hazards due to future changes of tropical cyclones in the North Pacific Ocean. Weather and Climate Extremes 11, 53–69.
Murakami, H.,Wang, B., Li, T., Kitoh, A., 2013. Projected increase in tropical cyclones near Hawaii. Nat. Clim. Chang. 3 (8), 749.
Needham, H.F., Keim, B.D., Sathiaraj, D., 2015. A review of tropical cyclone-generated storm surges: global data sources, observations, and impacts. Rev. Geophys. 53, 545–591. https://doi.org/10.1002/2014RG000477.
Paul, B.K., Dutt, S., 2010. Hazard warnings and responses to evacuation orders: the case of Bangladesh's cyclone SIDR*. Geogr. Rev. 100 (3), 336–355.
Pinto. D.N. (2020, May 20). Sorrow of the Bay: Cyclone Hotbed Bay of Bengal Continues to Drive Storms Towards India, Bangladesh. The Weather Channel India. https://weather.com/en-IN/india/news/news/2020-05-20-cyclone-amphan-india-bangladesh-bay-of-bengal-hotbed-cyclones
Poompavai, V., Ramalingam, M., 2013. Geospatial analysis for coastal risk assessment to cyclones. Journal of the Indian Society of Remote Sensing 1–20.
Pradhan, B., Hasan, M.A., Jebur, M.N., Tehrany, M.S., 2014. Land subsidence susceptibility mapping at Kinta Valley (Malaysia) using the evidential belief function model in GIS. Nat. Hazards 73 (2), 1019–1042
Pradhan, B., Lee, S., 2010. Regional landslide susceptibility analysis using back-propagation neural network model at Cameron Highland, Malaysia. Landslides 7 (1), 13–30.
Pundir, D., Chowdhury, S. and Bhasin, S., 2020, May 21. Cyclone Amphan Highlights: Impact Worse Than Coronavirus: Mamata Banerjee on Cyclone Amphan. NDTV. https://www.ndtv.com/india-news/cyclone-amphan-live-updates-cyclone-amphan-likely-to-hit-west-bengal-today-2231828
Quader, M.A., Khan, A.U., Kervyn, M., 2017. Assessing risks from cyclones for human livesand livelihoods in the coastal region of Bangladesh. Int. J. Environ. Res. Public Health14 (8), 831.
Ranson, M., Kousky, C., Ruth, M., Jantarasami, L., Crimmins, A., Tarquinio, L., 2014. Tropical and extratropical cyclone damages under climate change. Clim. Chang. 127 (2), 227–241
Rashid, A.K.M.M., 2013. Understanding vulnerability and risks. In: Shaw, R., Mallick, F., Islam, A. (Eds.), Disaster Risk Reduction Approaches in Bangladesh. Disaster Risk Reduction. Springer Japan, pp. 23–43.
Ravikiran (2020). Cyclone and its Management in India. IAS Express. https://www.iasexpress.net/cyclone-india-management/
Roy, K., Kumar, U., Mehedi, H., Sultana, T., & Ershad, D. M. (2009). Initial Damage Assessment Report of Cyclone AILA with focus on Khulna District. May, 31pp. https://doi.org/10.13140/RG.2.1.5193.3925
Roy, D.C., Blaschke, T., 2013. Spatial vulnerability assessment of floods in the coastal regions of Bangladesh. Geomatics, Natural Hazards and Risk 1–24 ahead-of-print.
Sahoo, B., Bhaskaran, P.K., 2018. Multi-hazard risk assessment of coastal vulnerability from tropical cyclones–a GIS based approach for the Odisha coast. J. Environ. Manag. 206, 1166–1178.
Sarwar,M.G.M., 2013. Sea-level rise along the coast of Bangladesh. In: Shaw, R.,Mallick, F., Islam, A. (Eds.), Disaster Risk Reduction Approaches in Bangladesh. Springer, Tokyo, pp. 217–231.
Shultz, J.M., Russell, J., Espinel, Z., 2005. Epidemiology of tropical cyclones: the dynamics of disaster, disease, and development. Epidemiol. Rev. 27 (1), 21–35.
Tay, C. W. J., Yun, S. H., Chin, S. T., Bhardwaj, A., Jung, J., & Hill, E. M. (2020). Rapid flood and damage mapping using synthetic aperture radar in response to Typhoon Hagibis, Japan. Scientific Data, 7(1), 1–9. https://doi.org/10.1038/s41597-020-0443-5
TBS Report. (2020, May 19). A brief history of the deadliest cyclones in the Bay of Bengal. The Business Standard. https://tbsnews.net/environment/brief-history-deadliest-cyclones-bay-bengal-83323
TET Report. (2020, May 22). Cyclone Amphan considered even more destructive than Cyclone Aila: UN. The Economic Times. https://economictimes.indiatimes.com/news/politics-and-nation/cyclone-amphan-considered-even-more-destructive-than-cyclone-aila-un/articleshow/75886057.cms
Varotsos, C.A., Efstathiou, M.N., Cracknell, A.P., 2015. Sharp rise in hurricane and cyclone count during the last century. Theor. Appl. Climatol. 119 (3), 629–638.
Varotsos, C.A., Efstathiou,M.N., 2013. Is there any long-termmemory effect in the tropical cyclones? Theor. Appl. Climatol. 114 (3–4), 643–650.
Walsh, K.J., McBride, J.L., Klotzbach, P.J., Balachandran, S., Camargo, S.J., Holland, G., Knutson, T.R., Kossin, J.P., Lee, T.c., Sobel, A., 2016. Tropical cyclones and climate change. Wiley Interdiscip. Rev. Clim. Chang. 7 (1), 65–89.
Yin, J., Xu, S., Wang, J., Zhong, H., Hu, Y., Yin, Z., Wang, K., Zhang, X., 2010. Vulnerability assessment of combined impacts of sea level rise and coastal flooding for China's coastal region using remote sensing and GIS. Geoinformatics, 2010 18th International Conference on. IEEE, pp. 1–4.
Ying, M., Zhang, W., Yu, H., et al., 2014. An overview of the China meteorological administration tropical cyclone database. J. Atmos. Ocean. Technol. 3, 287–301. https://doi.org/10.1175/JTECH-D-12-00119.1.
Youssef, A.M., Al-Kathery, M., 2015. Landslide susceptibility mapping at Al-Hasher area, Jizan (Saudi Arabia) using GIS-based frequency ratio and index of entropy models. Geosci. J. 19 (1), 113–134.
Zare,M., Pourghasemi, H.R., Vafakhah, M., Pradhan, B., 2013. Landslide susceptibility mapping at Vaz Watershed (Iran) using an artificial neural network model: a comparison between multilayer perception (MLP) and radial basic function (RBF) algorithms. Arab. J. Geosci. 6 (8), 2873–2888.
Sud, Vedika; Rajaram, Prema (2020). "Cyclone Amphan caused an estimated $13.2 billion in damage in India's West Bengal: government source". CNN. Retrieved 22 May 2020.
Aljazeera (2020). "Many killed as Cyclone Amphan tears into India, Bangladesh coasts". Retrieved 30September 2020.
Kossin, J. P. (2018). A global slowdown of tropical-cyclone translation speed. Nature, 558(7708), 104–107. https://doi.org/10.1038/s41586-018-0158-3
Hagenauer, J., & Helbich, M. (2016). SPAWNN: A Toolkit for SPatial Analysis With Self-Organizing Neural Networks. Transactions in GIS, 20(5), 755–774.
Uddin, K., Matin, M. A., & Meyer, F. J. (2019). Operational flood mapping using multi-temporal Sentinel-1 SAR images: A case study from Bangladesh. Remote Sensing, 11(13). https://doi.org/10.3390/rs11131581
World Meteorological Organization Communications and Public Affairs Office Final. (2011). Strengthening of Risk Assessment and Multi-hazard Early Warning Systems for Meteorological, Hydrological and Climate Hazards in the Caribbean. 1(1082), 186.
ODI. (2013). The geography of poverty, disasters and climate extremes in 2030, Research Report and Study, Overseas Development Institute, UK. 88. http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/8633.pdf
Abdullah, A. Y. M., Masrur, A., Gani Adnan, M. S., Al Baky, M. A., Hassan, Q. K., & Dewan, A. (2019). Spatio-temporal patterns of land use/land cover change in the heterogeneous coastal region of Bangladesh between 1990 and 2017. Remote Sensing, 11(7), 1–28. https://doi.org/10.3390/rs11070790
Anderson, J. R., Hardy, E. E., Roach, J. T., & Witmer, R. E. (1976). A Land Use and Land Cover Classification System for Use with Remote Sensor Data. United States Geological Survey Professional Paper, 964.
Supplementaryfile.docx
|
CommonCrawl
|
How to prove that a language is not context-free?
We learned about the class of context-free languages $\mathrm{CFL}$. It is characterised by both context-free grammars and pushdown automata so it is easy to show that a given language is context-free.
How do I show the opposite, though? My TA has been adamant that in order to do so, we would have to show for all grammars (or automata) that they can not describe the language at hand. This seems like a big task!
I have read about some pumping lemma but it looks really complicated.
formal-languages context-free proof-techniques reference-question
frafl
Raphael♦Raphael
$\begingroup$ Ntpick: it is undecidable to show whether a language is context-free. $\endgroup$
– reinierpost
$\begingroup$ @reinierpost I don't see how your comment relates to the question. It's about proving things, not deciding (algorithmically). $\endgroup$
$\begingroup$ Just making the point that it's not easy to show that a language is context-free, in general. If it's easy for frafl, that must be owing to certain special conditions that don't hold for languages in general, such as being given a pushdown automaton that describes the language. $\endgroup$
$\begingroup$ @reinierpost That line of reasoning seems to assume that undecidable implies (equals?) hard to prove. I wonder if that's true. $\endgroup$
To my knowledge the pumping lemma is by far the simplest and most-used technique. If you find it hard, try the regular version first, it's not that bad. There are some other means for languages that are far from context free. For example undecidable languages are trivially not context free.
That said, I am also interested in other techniques than the pumping lemma if there are any.
EDIT: Here is an example for the pumping lemma: suppose the language $L=\{ a^k \mid k ∈ P\}$ is context free ($P$ is the set of prime numbers). The pumping lemma has a lot of $∃/∀$ quantifiers, so I will make this a bit like a game:
The pumping lemma gives you a $p$
You give a word $s$ of the language of length at least $p$
The pumping lemma rewrites it like this: $s=uvxyz$ with some conditions ($|vxy|≤p$ and $|vy|≥1$)
You give an integer $n≥0$
If $uv^nxy^nz$ is not in $L$, you win, $L$ is not context free.
For this particular language for $s$ any $a^k$ (with $k≥p$ and $k$ is a prime number) will do the trick. Then the pumping lemma gives you $uvxyz$ with $|vy|≥1$. Do disprove the context-freeness, you need to find $n$ such that $|uv^nxy^nz|$ is not a prime number.
$$|uv^nxy^nz|=|s|+(n-1)|vy|=k+(n-1)|vy|$$
And then $n=k+1$ will do: $k+k|vy|=k(1+|vy|)$ is not prime so $uv^nxy^nz\not\in L$. The pumping lemma can't be applied so $L$ is not context free.
A second example is the language $\{ww \mid w \in \{a,b\}^{\ast}\}$. We (of course) have to choose a string and show that there's no possible way it can be broken into those five parts and have every derived pumped string remain in the language.
The string $s=a^{p}b^{p}a^{p}b^{p}$ is a suitable choice for this proof. Now we just have to look at where $v$ and $y$ can be. The key parts are that $v$ or $y$ has to have something in it (perhaps both), and that both $v$ and $y$ (and $x$) are contained in a length $p$ substring - so they can't be too far apart.
This string has a number of possibilities for where $v$ and $y$ might be, but it turns out that several of the cases actually look pretty similar.
$vy \in a^{\ast}$ or $vy \in b^{\ast}$. So then they're both contained in one of the sections of continguous $a$s or $b$s. This is the relatively easy case to argue, as it kind of doesn't matter which they're in. Assume that $|vy| = k \leq p$.
If they're in the first section of $a$s, then when we pump, the first half of the new string is $a^{p+k}b^{p-k/2}$, and the second is $b^{k/2}a^{p}b^{p}$. Obviously this is not of the form $ww$.
The argument for any of the three other sections runs pretty much the same, it's just where the $k$ and $k/2$ ends up in the indices.
$vxy$ straddles two of the sections. In this case pumping down is your friend. Again there's several places where this can happen (3 to be exact), but I'll just do one illustrative one, and the rest should be easy to figure out from there.
Assume that $vxy$ straddles the border between the first $a$ section and the first $b$ section. Let $vy = a^{k_{1}}b^{k_{2}}$ (it doesn't matter precisely where the $a$s and $b$s are in $v$ and $y$, but we know that they're in order). Then when we pump down (i.e. the $i=0$ case), we get the new string $s'=a^{p-k_{1}}b^{p-k_{2}}a^{p}b^{p}$, but then if $s'$ could be split into $ww$, the midpoint must be somewhere in the second $a$ section, so the first half is $a^{p-k_{1}}b^{p-k_{2}}a^{(k_{1}+k_{2})/2}$, and the second half is $a^{p-(k_{1}+k_{2})/2}b^{p}$. Clearly these are not the same string, so we can't put $v$ and $y$ there.
The remaining cases should be fairly transparent from there - they're the same ideas, just putting $v$ and $y$ in the other 3 spots in the first instance, and 2 spots in the second instance. In all cases though, you can pump it in such a way that the ordering is clearly messed up when you split the string in half.
David Richerby
jmadjmad
$\begingroup$ indeed, kozen's game is the way to go about this. $\endgroup$
– socrates
Ogden's Lemma
Lemma (Ogden). Let $L$ be a context-free language. Then there is a constant $N$ such that for every $z\in L$ and any way of marking $N$ or more positions (symbols) of $z$ as "distinguished positions", then $z$ can be written as $z=uvwxy$, such that
$vx$ has at least one distinguished position.
$vwx$ has at most $N$ distinguished positions.
For all $i\geq 0$, $uv^iwx^iy\in L$.
Example. Let $L=\{a^ib^jc^k:i\neq j,j\neq k,i\neq k\}$. Assume $L$ is context-free, and let $N$ be the constant given by Ogden's lemma. Let $z=a^Nb^{N+N!}c^{N+2N!}$ (which belongs to $L$), and suppose we mark as distinguished all the positions of the symbol $a$ (i.e. the first $N$ positions of $z$). Let $z=uvwxy$ be a decomposition of $z$ satisfying the conditions from Ogden's lemma.
If $v$ or $x$ contain different symbols, then $uv^2wx^2y\notin L$, because there will be symbols in the wrong order.
At least one of $v$ and $x$ must contain only symbols $a$, because only the $a$'s have been distinguished. Thus, if $x\in L(b^*)$ or $x\in L(c^*)$, then $v\in L(A^+)$. Let $p=|v|$. Then $1\leq p\leq N$, which means $p$ divides $N!$. Let $q=N!/p$. Then $z'=uv^{2q+1}wx^{2q+1}y$ should belong to $L$. However, $v^{2q+1}=a^{2pq+p}=a^{2N!+p}$. Since $uwy$ has exactly $N-p$ symbols $a$, then $z'$ has $2N!+N$ symbols $a$. But both $v$ and $x$ don't have $c$'s, so $z'$ also has $2N!+N$ symbols $c$, which means $z'\notin L$, and this contradicts Ogden's lemma. A similar contradiction occurs if $x\in L(A^+)$ or $x\in L(c^*)$. We conclude $L$ is not context-free.
Exercise. Using Ogden's Lemma, show that $L=\{a^ib^jc^kd^{\ell}:i=0\text{ or }j=k=\ell\}$ is not context-free.
Pumping Lemma
This is a particular case of Ogden's Lemma in which all positions are distinguished.
Lemma. Let $L$ be a context-free language. Then there is a constant $N$ such that for every $z\in L$, $z$ can be written as $z=uvwxy$, such that
$|vx|>0$.
$|vwx|\leq N$.
Parikh's Theorem
This is even more technical than Ogden's Lemma.
Definition. Let $\Sigma=\{a_1,\ldots,a_n\}$. We define $\Psi_{\Sigma}:\Sigma^*\to\mathbb{N}^n$ by $$\Psi_{\Sigma}(w)=(m_1,\ldots,m_n),$$ where $m_i$ is the number of appearances of $a_i$ in $w$.
Definition. A subset $S$ of $\mathbb{N}^n$ is called linear if it can be written: $$ S = \{\mathbf{u_0} + \sum_{1 \le i \le k} a_i \mathbf{u_i} : \text{ for some set of $\mathbf{u_i} \in \mathbb{N}^n$ and $a_i \in \mathbb{N}$}\} $$
Definition. A subset $S$ of $\mathbb{N}^n$ is called semi-linear if it is the union of a finite collection of linear sets.
Theorem (Parikh). Let $L$ be a language over $\Sigma$. If $L$ is context-free, then $$\Psi_{\Sigma}[L]=\{\Psi_{\Sigma}(w):w\in L\}$$ is semi-linear.
Exercise. Using Parikh's Theorem, show that $L=\{0^m1^n:m>n\text{ or }(m\text{ is prime and }m\leq n)\}$ is not context-free.
Exercise. Using Parikh's Theorem, show that any context-free language over a unary alphabet is also regular.
JanomaJanoma
$\begingroup$ I accepted jmad's answer because the question explicitly mentions Pumping Lemma. I appreciate your answer a lot, though; having all major methods collected here is a great thing. $\endgroup$
$\begingroup$ That's fine, but note that the pumping lemma is a particular case of Ogden's lemma ;-) $\endgroup$
– Janoma
$\begingroup$ Of course. Still, most people will try PL first; many don't even know OL. $\endgroup$
$\begingroup$ A theorem by Ginsburg and Spanier, building on Parikh's theorem, gives a neccessary and sufficient condition for context-freeness in the bounded case. math.stackexchange.com/a/122472 $\endgroup$
– sdcvvc
$\begingroup$ Can you please define "distinguished positions" in terms of other operations? Or at least informally? I find the definition of OL copied verbatim in many different places, but none of them so far cared to explain what that means. $\endgroup$
– wvxvw
Closure Properties
Once you have a small collection of non-context-free languages you can often use closure properties of $\mathrm{CFL}$ like this:
Assume $L \in \mathrm{CFL}$. Then, by closure property X (together with Y), $L' \in \mathrm{CFL}$. This contradicts $L' \notin \mathrm{CFL}$ which we know to hold, therefore $L \notin \mathrm{CFL}$.
This is often shorter (and often less error-prone) than using one of the other results that use less prior knowledge. It is also a general concept that can be applied all kinds of class of objects.
Example 1: Intersection with Regular Languages
We note $\mathcal L(e)$ the regular language specified by any regular expression $e$.
Let $L = \{w \mid w \in \{a,b,c\}^*, |w|_a = |w|_b = |w|_c\}$. As
$\qquad \displaystyle L \cap \mathcal{L}(a^*b^*c^*) = \{a^nb^nc^n \mid n \in \mathbb{N}\} \notin \mathrm{CFL}$
and $\mathrm{CFL}$ is closed under intersection with regular languages, $L \notin \mathrm{CFL}$.
Example 2: (Inverse) Homomorphism
Let $L = \{(ab)^{2n}c^md^{2n-m}(aba)^{n} \mid m,n \in \mathbb{N}\}$. With the homomorphism
$\qquad \displaystyle \phi(x) = \begin{cases} a &x=a \\ \varepsilon &x=b \\ b &x=c \lor x=d \end{cases}$
we have $\phi(L) = \{a^{2n}b^{2n}a^{2n} \mid n \in \mathbb{N}\}.$
Now, with
$\qquad \displaystyle \psi(x) = \begin{cases} aa &x=a \lor x=c \\ bb &x=b \end{cases}\quad\text{and}\quad L_1 = \{x^nb^ny^n \mid x,y \in \{a,c\}\wedge n \in \mathbb{N}\},$
we get $L_1 = \psi^{-1}(\phi(L)))$.
Finally, intersecting $L_1$ with the regular language $L_2 = \mathcal L(a^*b^*c^*)$ we get the language $L_3 = \{a^n b^n c^n \mid n \in \mathbb{N}\}$.
In total, we have $L_3 = L_2 \cap \psi^{-1}(\phi(L))$.
Now assume that $L$ was context-free. Then, since $\mathrm{CFL}$ is closed against homomorphism, inverse homomorphism, and intersection with regular sets, $L_3$ is context-free, too. But we know (via Pumping Lemma, if need be) that $L_3$ is not context-free, so this is a contradiction; we have shown that $L \notin \mathrm{CFL}$.
Interchange Lemma
The Interchange Lemma [1] proposes a necessary condition for context-freeness that is even stronger than Ogden's Lemma. For example, it can be used to show that
$\qquad \{xyyz \mid x,y,z \in \{a,b,c\}^+\} \notin \mathrm{CFL}$
which resists many other methods. This is the lemma:
Let $L \in \mathrm{CFL}$. Then there is a constant $c_L$ such that for any integer $n\geq 2$, any set $Q_n \subseteq L_n = L \cap \Sigma^n$ and any integer $m$ with $n \geq m \geq 2$ there are $k \geq \frac{|Q_n|}{c_L n^2}$ strings $z_i \in Q_n$ with
$z_i = w_ix_iy_i$ for $i=1,\dots,k$,
$|w_1| = |w_2| = \dots = |w_k|$,
$|y_1| = |y_2| = \dots = |y_k|$,
$m \geq |x_1| = |x_2| = \dots = |x_k| > \frac{m}{2}$ and
$w_ix_jy_i \in L_n$ for all $(i,j) \in [1..k]^2$.
Applying it means to find $n,m$ and $Q_n$ such that 1.-4. hold but 5. is violated. The application example given in the original paper is very verbose and is therefore left out here.
At this time, I do not have a freely available reference and the formulation above is taken from a preprint of [1] from 1981. I appreciate help in tracking down better references. It appears that the same property has been (re)discovered recently [2].
Other Necessary Conditions
Boonyavatana and Slutzki [3] survey several conditions similar to Pumping and Interchange Lemma.
An "Interchange Lemma" for Context-Free Languages by W. Ogden, R. J. Ross and K. Winklmann (1985)
Swapping Lemmas for Regular and Context-Free Languages by T. Yamakami (2008)
The interchange or pump (DI)lemmas for context-free languages by R. Boonyavatana and G. Slutzki (1988)
$\begingroup$ There are nice closure properties of rich subclasses of CFL that can be used to the same effect. $\endgroup$
There is no general method since the set non-context-free-languages is not semi-decidable (a.k.a. r.e.). If there was a general method, we could use it to semi-decide this set.
The situation is even worse, since given two CFL's it is not possible to decide whether their intersection is also a CFL.
Reference: Hopcroft and Ullman, "Introduction to Automata Theory, Languages, and Computation", 1979.
KavehKaveh
$\begingroup$ An interesting (but probably more advanced and open-ended question) would be categorizing the subclass of non-CFLs that can be proved to be non-CFL using a particular method. $\endgroup$
– Kaveh
$\begingroup$ I am not looking for a computable method but for pen & paper proof techniques. The latter does not necessarily imply the former. $\endgroup$
A stronger version of the Ogden's condition (OC) is the
Bader-Moura's condition (BMC)
A language $L\subseteq \Sigma^*$ satisfies BMC if there exists a constant $n$ such that if $z \in L$ and we label in it "distinguished" positions $d(z)$ and $e(z)$ "excluded" positions, with $d(z) > n^{e(z)+1}$, then we may write $z = uvwxy$ such that:
$d(vx) \geq 1$ and $e(vx) =0$
$d(vwx) \leq n^{e(vwx)+1}$ and
for every $i \geq 0$, $uv^iwx^iy$ is in $L$.
We say that a language $L \in BMC(\Sigma)$ if $L$ satisfies the Bader-Moura's condition.
We have $CFL(\Sigma) \subset BMC(\Sigma) \subset OC(\Sigma)$, so BMC is strictly stronger than OC.
Reference: Bader, C., Moura, A., A Generalization of Ogden's Lemma. JACM 29, no. 2, (1982), 404–407
VorVor
$\begingroup$ Why not just go all the way to Dömösi and Kudlek's generalisation dx.doi.org/10.1007/3-540-48321-7_18 ... $\endgroup$
– András Salamon
$\begingroup$ @AndrásSalamon: I didn't know it! :-) ... perhaps you can post it as a new answer saying that OC, BMC, PC are special cases of it (all distinguished or no excluded positions). $\endgroup$
– Vor
$\begingroup$ you are welcome to post it, don't have time right now. $\endgroup$
$\begingroup$ This answer would profit from an example. $\endgroup$
Not the answer you're looking for? Browse other questions tagged formal-languages context-free proof-techniques reference-question or ask your own question.
Is $a^n b^n c^n$ context-free?
How is $a^nb^nc^{2n}$ not a context free language, where as $a^nb^mc^{n+m}$ is?
Proving the language of words with equal numbers of symbols non-context-free
Using the pumping lemma to prove that a language is context-free
For $\sum = \{ 0,1 \}$, $A$ has strings which contain a $1$ in their middle third, and a $B$ which contain two $1$'s in their middle third
Show Language is not context free without pumping lemma
Create CFG and pushdown automaton for {ww}
How can I prove this language is not CFL?
Is the language $\{a^{n^2-1} | n \in \mathbb{N}\}$ context free? and how to prove it?
Pumping Lemma on CFL Problem
How to prove that a language is not regular?
A pumping lemma for deterministic context-free languages?
Are regular and context free languages closed against making them prefix-free?
Show that the pumping lemmas for context-free and regular languages are equivalent for unary languages
Prove or disprove: L^2 context free implies L is context free
Prove or disprove that the following language is context-free
Is the language given by a context-free grammar always context-free?
|
CommonCrawl
|
Definition:Cayley Table
1.1 Entry
2 Also known as
3.1 Set of Self-Maps on Doubleton
3.2 Cyclic Group of Order $4$
3.3 Symmetric Group on $3$ Letters
3.4 Arbitrary Structure of Order 3
4 Also see
5 Source of Name
A Cayley table is a technique for describing an algebraic structure (usually a finite group) by putting all the products in a square array:
$\begin {array} {c|cccc} \circ & a & b & c & d \\ \hline a & a & a & b & a \\ b & b & c & a & d \\ c & d & e & f & a \\ d & c & d & a & b \\ \end {array}$
The column down the left hand side denotes the first (leading) operand of the operation.
The row across the top denotes the second (following) operand of the operation.
Thus, in the above Cayley table:
$c \circ a = d$
If desired, the symbol denoting the operation itself can be put in the upper left corner, but this is not essential if there is no ambiguity.
The order in which the rows and columns are placed is immaterial.
However, it is conventional, when representing an algebraic structure with an identity element, to place that element at the head of the first row and column.
The occurrences in a Cayley table of the elements of the algebraic structure it defines are called the entries of the Cayley table.
Some sources call this an operation table, but there exists the view that this sounds too much like a piece of hospital apparatus.
Another popular name for this is a multiplication table, but this holdover from grade school terminology may be considered irrelevant to a table where the operation has nothing to do with multiplication as such.
In the field of logic, a truth table in this format is often referred to as matrix form, but note that this terminology clashes with the definition of a matrix in mathematics.
Set of Self-Maps on Doubleton
Let $S$ be the set of self-maps on the doubleton $D = \set {a, b}$.
Let these be enumerated:
$\epsilon := \begin{pmatrix} a & b \\ a & b \end{pmatrix} \quad \alpha := \begin{pmatrix} a & b \\ b & a \end{pmatrix} \quad \beta := \begin{pmatrix} a & b \\ a & a \end{pmatrix} \quad \gamma := \begin{pmatrix} a & b \\ b & b \end{pmatrix}$
Let $\struct {S, \circ}$ be the semigroup of self-maps under composition of mappings.
The Cayley table of $\struct {S, \circ}$ can be written:
$\begin{array}{c|cccc} \circ & \epsilon & \alpha & \beta & \gamma \\ \hline \epsilon & \epsilon & \alpha & \beta & \gamma \\ \alpha & \alpha & \epsilon & \gamma & \beta \\ \beta & \beta & \beta & \beta & \beta \\ \gamma & \gamma & \gamma & \gamma & \gamma \\ \end{array}$
Cyclic Group of Order $4$
The Cayley table of the cyclic group of order $4$ can be written:
$\begin{array}{c|cccc} & e & a & b & c \\ \hline e & e & a & b & c \\ a & a & b & c & e \\ b & b & c & e & a \\ c & c & e & a & b \\ \end{array}$
Symmetric Group on $3$ Letters
The Cayley table of the symmetric group on $3$ letters can be written:
$\begin{array}{c|cccccc} \circ & e & p & q & r & s & t \\ \hline e & e & p & q & r & s & t \\ p & p & q & e & s & t & r \\ q & q & e & p & t & r & s \\ r & r & t & s & e & q & p \\ s & s & r & t & p & e & q \\ t & t & s & r & q & p & e \\ \end{array}$
Arbitrary Structure of Order 3
A Cayley table does not necessarily describe the structure of a group.
The Cayley table of an algebraic structure of order $3$ can be presented:
$\begin{array}{c|cccc} \circ & a & b & c \\ \hline a & b & c & b \\ b & b & a & c \\ c & a & c & c \\ \end{array}$
Cayley Table for Commutative Operation is Symmetrical about Main Diagonal
Source of Name
This entry was named for Arthur Cayley.
1854: Arthur Cayley: On the theory of groups, as depending on the symbolic equation $\theta^n - 1$ (Phil. Mag. Ser. 4 Vol. 7: pp. 40 – 47)
1951: Nathan Jacobson: Lectures in Abstract Algebra: Volume $\text { I }$: Basic Concepts ... (previous) ... (next): Chapter $\text{I}$: Semi-Groups and Groups: $1$: Definition and examples of semigroups
1964: W.E. Deskins: Abstract Algebra ... (previous) ... (next): $\S 1.4$
1964: Walter Ledermann: Introduction to the Theory of Finite Groups (5th ed.) ... (previous) ... (next): Chapter $\text {I}$: The Group Concept: $\S 5$: The Multiplication Table
1965: J.A. Green: Sets and Groups ... (previous) ... (next): $\S 4.1$. Binary operations on a set: Example $58$
1965: Seth Warner: Modern Algebra ... (previous) ... (next): $\S 2$
1966: Richard A. Dean: Elements of Abstract Algebra ... (previous) ... (next): $\S 1.4$
1967: George McCarty: Topology: An Introduction with Application to Topological Groups ... (previous) ... (next): Chapter $\text{II}$: Groups: Exercise $\text{A}$
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.) ... (previous) ... (next): $\S 3.2$: Groups; the axioms: Examples of groups $\text{(v)}$
2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): Entry: Cayley table
Retrieved from "https://proofwiki.org/w/index.php?title=Definition:Cayley_Table&oldid=493920"
Definitions/Named Definitions/Cayley
Definitions/Abstract Algebra
Definitions/Cayley Tables
This page was last modified on 11 October 2020, at 09:45 and is 0 bytes
|
CommonCrawl
|
Computer Science > Graphics
[Submitted on 1 Nov 2021]
Title:Principles towards Real-Time Simulation of Material Point Method on Modern GPUs
Authors:Yun Fei, Yuhan Huang, Ming Gao
Abstract: Physics-based simulation has been actively employed in generating offline visual effects in the film and animation industry. However, the computations required for high-quality scenarios are generally immense, deterring its adoption in real-time applications, e.g., virtual production, avatar live-streaming, and cloud gaming. We summarize the principles that can accelerate the computation pipeline on single-GPU and multi-GPU platforms through extensive investigation and comprehension of modern GPU architecture. We further demonstrate the effectiveness of these principles by applying them to the material point method to build up our framework, which achieves $1.7\times$--$8.6\times$ speedup on a single GPU and $2.5\times$--$14.8\times$ on four GPUs compared to the state-of-the-art. Our pipeline is specifically designed for real-time applications (i.e., scenarios with small to medium particles) and achieves significant multi-GPU efficiency. We demonstrate our pipeline by simulating a snow scenario with 1.33M particles and a fountain scenario with 143K particles in real-time (on average, 68.5 and 55.9 frame-per-second, respectively) on four NVIDIA Tesla V100 GPUs interconnected with NVLinks.
Subjects: Graphics (cs.GR); Hardware Architecture (cs.AR)
ACM classes: I.3.1; I.3.7
Cite as: arXiv:2111.00699 [cs.GR]
(or arXiv:2111.00699v1 [cs.GR] for this version)
From: Yun Fei [view email]
[v1] Mon, 1 Nov 2021 04:42:40 UTC (12,180 KB)
cs.GR
cs.AR
Yun Fei
Ming Gao
|
CommonCrawl
|
BMC Bioinformatics
Identification of driver genes based on gene mutational effects and network centrality
Volume 22 Supplement 3
Proceedings of the 2019 International Conference on Intelligent Computing (ICIC 2019): bioinformatics
Yun-Yun Tang1,
Pi-Jing Wei1,
Jian-ping Zhao4,
Junfeng Xia2,
Rui-Fen Cao1,3 &
Chun-Hou Zheng1,4
BMC Bioinformatics volume 22, Article number: 457 (2021) Cite this article
As one of the deadliest diseases in the world, cancer is driven by a few somatic mutations that disrupt the normal growth of cells, and leads to abnormal proliferation and tumor development. The vast majority of somatic mutations did not affect the occurrence and development of cancer; thus, identifying the mutations responsible for tumor occurrence and development is one of the main targets of current cancer treatments.
To effectively identify driver genes, we adopted a semi-local centrality measure and gene mutation effect function to assess the effect of gene mutations on changes in gene expression patterns. Firstly, we calculated the mutation score for each gene. Secondly, we identified differentially expressed genes (DEGs) in the cohort by comparing the expression profiles of tumor samples and normal samples, and then constructed a local network for each mutation gene using DEGs and mutant genes according to the protein–protein interaction network. Finally, we calculated the score of each mutant gene according to the objective function. The top-ranking mutant genes were selected as driver genes. We name the proposed method as mutations effect and network centrality.
Four types of cancer data in The Cancer Genome Atlas were tested. The experimental data proved that our method was superior to the existing network-centric method, as it was able to quickly and easily identify driver genes and rare driver factors.
Cancer is one of the most complex diseases that threaten human health [1]. The latest developments in next-generation sequencing (NGS) technology have provided us with an unprecedented opportunity to better characterize the molecular characteristics of human cancer [2, 3]. The Cancer Genome Atlas (TCGA) [4] and the International Cancer Genome Consortium (ICGC) [5] have produced and analyzed a large amount of genomic data of various cancers [6]. Cancer development involves many complex and dynamic cellular processes. These processes can be accurately described according to the pathological stages, and the extraction of reliable biomarkers is required to characterize the dynamics of these stages, including (1) stage-specific recurrence somatic copy number alterations (SCNAs), (2) the related aberrant genes, and (3) the enriched dysfunctional pathways [7,8,9,10,11,12]. The key challenge for cancer genomics is analyzing and integrating this information in the most efficient and meaningful way, which can promote cancer biology and then translate this knowledge into clinical practice [13, 14]; for example, the design of anticancer drugs and identification of drug-resistant genes [15]. Cancer is an evolutionary process in which normal cells accumulate various genomic and epigenetic changes, including single-nucleotide variations (SNVs) and chromosomal aberrations. Some of these alterations give mutant cells an advantage in growth and positive selection as well as cause intense proliferation, giving raise to tumors [16]. Although somatic mutations occur in normal cells, they are neutral or apoptosis-inducing, not leading to conversion to cancer cells [17]. One of the key questions in cancer genomics is how to distinguish 'driver' mutations that cause tumors from 'passenger' mutations that are functionally neutral [18].
The simplest way to identify driver genes is to classify mutations according to recurrence; in other words, the most frequently occurring mutations are more likely to be drivers [19, 20], or the background mutation rates are used to measure significantly mutated genes. Many computational methods based on mutation frequency recognition for driver mutations and driver genes have been widely used, such as MutSig [21] and MuSic [22]. MuSig estimates the background mutation rate of each gene and identifies mutations that deviate significantly from that rate. MuSic uses mutation rates that are significantly higher than expected, pathway mutation rates, and correlations with clinical features to detect driver genes. Tamborero et al. used a silent mutation in the coding region to construct a background model and proposed the OncodriveCLUST method, which is mainly used to identify genes with a significant mutation clustering tendency in protein sequences [23]. However, a portion of the driver genes are mutated at high frequencies (> 20%), and most cancer mutations occur at intermediate frequencies (2–20%) or lower frequencies than expected [24]. Although frequency-based methods can identify driver genes among genes that are frequently mutated in patients, they are ineffective in identifying drivers in infrequently or rarely mutated genes [25]. To obtain sufficient statistical power to detect cancer driver genes with low mutation frequency, a large number of cancer patients must be sequenced [26]. This situation has provoked a number of methods that assist in identifying driver genes. Generally, these methods can be categorized into machine learning-based methods and network-based methods.
Machine learning-based approaches use existing knowledge to identify driver genes or driver mutations. For example, CHASM uses random forests to classify driver mutations and uses known carcinogenic somatic cells for missense mutation training [27]. Moreover, the CHASM score has also been successfully applied to the CRAVAT algorithm [28]. In addition to CHASM, the CRAVAT algorithm integrates the results of the SNVBox [29] and VEST [30] tools and realizes the annotation of the effect of non-synonymous mutation functions [28]. The CanDrA algorithm integrates the results of more than 10 algorithms (such as CHASM, SIFT, and MutationAssessor); obtains 96 features in structure, evolution, and genes; and builds an algorithm based on machine learning prediction-driven missense mutations [31]. The FATHMM algorithm integrates homologous sequences and conserved protein domain information and uses a hidden Markov model-based algorithm to distinguish cancer-related amino acid mutations among passenger mutations [32, 33]. The DriverML algorithm proposed by Han et al. used statistical methods to quantify the scores of different mutation types on protein function and then combined them with machine learning algorithms to identify cancer driver genes [34]. However, the method of training prediction models using machine learning has some shortcomings. For example, in predicting driver mutations, it is difficult to obtain high-quality positive and negative sample datasets, which is a significant challenge for machine learning-based algorithms.
The development of network analysis science, such as in the fields of complex systems, social networks, communication networks, and transportation networks, has inspired many bioinformatics researchers to use network analysis methods to study the functional mechanism of molecular systems. Pathway- and network-based methods can easily simplify biological entities and their interactions into nodes and edges, allowing the systematic study of the nature of complex diseases [35] and the diagnosis, prevention, and treatment of cancer. Moreover, network- and pathway-based strategies have become one of the most promising approaches for identifying driver mutations, and some researchers have found that genes work together to form biological networks, which can be used to identify driver genes. MEMO [36] relies on the predictive pathway or the mutual exclusion of driving mutations in the sub-net to try to find a small sub-net of genes belonging to the same pathway. PARADIGM-Shift [37] uses pathway-level information and other features to infer the dysfunction of mutations. Researchers have also attempted to use protein–protein interaction network (PPI) data to integrate different omics data. For example, HotNet2 [38] combined with PPI used hotspot diffusion to find the small sub-networks of frequent mutations. However, the authors tried to identify a cancer-driving module composed of many genes, rather than genes that are crucial for cancer development. A recently published method, DriverNet [39], identifies a simple set of mutated genes associated with genes that experience mRNA expression disorders in a PPI network. OncolMPACT [40] prioritizes mutated genes based on linkages to dysregulated genes in cancer using matched expression data. The VarWalker algorithm, through sample-specific gene screening, constructs a sample-specific network, and integrates and recognizes driver genes [41]. The DawnRank algorithm analyzes the effect of a mutant gene on its downstream genes in a molecular interaction network, and used the PageRank algorithm sequences the genes of a single sample, finally resulting in the identification of driver genes [3]. The DEOD algorithm integrates genomic mutation data, expression data, and PPI network data; constructs a directed weighted graph based on the method of partial covariance selection; and identifies driver genes that have a significant effect on the target gene [42]. MUFFINN [43] considers mutations in neighboring genes in a network in two different ways, either consider mutations in the most frequently mutated neighbor (DNmax) or to consider mutations in all direct neighbors with normalization by their degree connectivity (DNsum) showing good predictive performance in large candidate sets.
In recent years, researchers have also attempted to identify driver genes from the perspective of individual networks. For example, the SSN algorithm is based on individual network identification of driver genes, which uses the Pearson Correlation Coefficient (PCC) of sample expression data to construct individual networks and then, through statistical analysis, determine cancer driver genes or modules [44]. The HIT'n DRIVE algorithm integrates each patient's individual genomic mutation data and expression data to construct a network and identify the driver genes and modules that affect transcriptional changes based on the expected value of the shortest random walk length in the network [45]. From the perspective of individuals, Guo et al. successively proposed the SCS [46] and PNC [47] algorithms. The SCS algorithm integrates mutation data, expression data, and molecular network data of each patient sample, and uses the network control method to evaluate the individual genes. Driver genes are then identified based on the effect of gene mutations on gene expression [46]. The PNC algorithm uses paired samples to construct individual networks, and then uses structure-based network control principles to identify individual driver genes [47]. The PRODIGY algorithm proposed by Dinstag et al. integrates individual mutation and expression data with pathways and PPI network data, uses reward collection Steiner tree models to quantify the regulatory effects of mutant genes on pathways and recognize driver genes [48]. However, owing to incomplete data in gene interaction networks, the false positive rate of these existing methods is still very high; therefore, further improvement is needed, which brings challenges to network-based prediction methods.
To overcome false positives and improve prediction accuracy, in this study, we introduced semi-local centrality and considered mutational information between genes to identify mutant genes in tumors. Unlike DriverNet, we considered the structure of the genes in the network. The introduction of network centrality can lead to the identification of genes at key locations in the network. These genes may be driven by genes or regulatory genes. MUFFINN considers the direct neighbor information of mutated genes in the network, but ignores the information of the secondary neighbor. Based on this, our method considered not only the nearest and the next-nearest neighbors of node but also the interaction between mutant gene nodes. We processed the cancer coding region mutation data from TCGA into a gene–patient mutation matrix as well as calculated the gene mutation score and the Euclidean distance between two genes according to the matrix. Increasing evidence shows that miRNAs are widely involved in the occurrence of cancer [49, 50]; therefore, we also performed gene expression analysis to obtain differentially expressed genes. Moreover, functional studies have suggested that driver mutations alter the expression of its downstream genes in the molecular interaction network [51]; therefore, we integrated differentially expressed genes and mutated genes into the PPI network and calculated the effect of the mutated genes based on the obtained local network. Experiment on TCGA datasets verified that our proposed mutations effect and network centrality (MENC) method was superior to the existing methods based on frequency and network centrality.
Most existing network methods for identifying driver genes are based on global networks. These global networks increase computational complexity. In addition, the accuracy of these methods needs to be improved. Our method employed a novel scheme: we first calculated the effect of the mutation, and then identified a local network for each mutated gene. We used the objective function to calculate the effect of mutated genes in the local network and sort the mutated genes according to the score to determine the driver genes. The top-ranking genes were more likely to become driver genes, which are more interesting to researchers and can even advance to further biological experiments for verification. Therefore, in the comparison analysis, we only used the top 50 candidate genes. To show the advantages of our model, we analyzed four large-scale publicly available datasets, including glioblastoma (GBM), bladder cancer (BLCA), prostate cancer (PRAD), and ovarian cancer (OVARIAN). The experimental results showed that our method was better than not only the network-centric method but also other types of methods. More importantly, our method was also able to recognize rare driver genes.
Datasets and resources
In this study, we mainly used two types of data: coding region mutation data and gene expression data. In particular, the coding region mutation data included copy number variations (CNVs) and SNVs. These data were obtained from 328 GBM samples, 379 BLCA samples, 252 PRAD samples, and 316 OVARIAN samples, and downloaded from the TCGA data portal (https://tcga-data.nci.nih.gov/tcga/). We used only samples that included both of them. The PPI network we used was downloaded from the Human Protein Reference Database (HPRD) [52, 53], which consists of 9617 genes and 74,078 edges. Table 1 shows the sample counts in the four cancers mapped on the PPI network mutated gene numbers and outlying gene numbers.
Table 1 Description of datasets
In the absence of basic facts, quantitative measurements using standard sensitivity/specific benchmarking techniques are impractical. To help assess the quality of our results, we obtained a list of 616 known drivers from the Cancer Gene Census (CGC) database (09/26/2016) [54].
Comparison with network-centric approaches
To evaluate the method's ability to identify known driver genes, we compared our method with network centricity-based methods. As mentioned above, we used the CGC as an approximate benchmark for known driver genes. For comparison, we used the following three metrics (precision and recall rates and F1score) in this study:
$$\begin{aligned} & Precision = \frac{{(\# {\text{Mutated genes in CGC}}) \cap (\# {\text{Genes found in MENC}})}}{{(\# {\text{Genes found in MENC}})}} \\ & {\text{Recall}} = \frac{{(\# {\text{Mutated genes in CGC}}) \cap (\# {\text{Genes found in MENC}})}}{{(\# {\text{Genes found in CGC}})}} \\ & {\text{F1score}} = 2 \times \frac{Precision \times Recall}{{Precision + Recall}} \\ \end{aligned}$$
We compared our method with two main network-centrality-based methods, SCS [46] and MUFFINN [43]. MUFFINN considers mutational information among direct neighbors, either in the most frequently mutated neighbor (DNmax) or in all direct neighbors with normalization by their degree of connectivity (DNsum). The results are shown in Fig. 1. Here, we only show the results for two types of cancer (GBM and OVARIAN). As shown in the figure, our method performed better than SCS and MUFFINN. For GBM cancer, our method was not as effective as SCS in identifying the first 15 candidate driving genes, but our method showed a great improvement in the latter. MENC was significantly superior to the other methods for the other three cancers. The number of CGCs covered among the top 50 genes identified was 27 genes with our method, 24 with SCS, 12 with DNsum, and 21 with DNmax. Our method achieved the best results for the BLCA and PRAD cancer data.
Comparison of precision, recall, and F1score for the top-ranking genes. The X axis represents the number of top-ranking genes, and the Y axis represents the score of the precision, recall, and F1score
For OVARIAN cancer, the top 50 genes analyzed by our method included 27 in the CGC database, while SCS had 17, DNsum had 10, and DNmax had 17. It can also be seen that the SCS method exhibits a large downward trend. The accuracy of the top 10 genes was 0.8, and the accuracy was reduced to below 0.4 in the top 30. Our method is relatively stable, and there is no significant decline. The results indicated that our method yielded reliable results for identifying driver genes.
Comparison with other approaches
Because our method not only considers the characteristics of the network but also calculates the mutation scores and interaction of the genes, we also compared MENC with DriverNet [39], a frequency-based method, and OncolMPACT [40]. As shown in Fig. 2, in general, relative to CGC, our approach was superior to DriverNet, Frequency, and OncolMPACT in analyzing all cancer datasets. Although only the results of BLCA and PRAD cancers are shown here, the same good results were obtained for other cancer data, which are not shown here.
The comparison of precision, recall and F1score for top ranking genes of MENC and other methods. The X axis represents the number of top ranking genes and the Y axis represents the score of the precision, recall and F1score respectively. The last row is the result of BLCA and the next row is the result of PRAD
Novel and reliable driver genes found using our method
In addition to identifying frequently mutated driver genes, MENC can identify important rare driver genes. According to DawnRank's [3] definition of novel and important driver genes, genes meeting the following requirements are rare genes: (1) the ranking of the driver gene is based on patient population; (2) frequency of the mutation is less than 2% of the patient population in the mutation data; (3) the gene has not been identified as a driver gene by CGC.
In OVARIAN, 316 samples were analyzed. Using our method, nine rare driving factors were identified as the top 20 genes according to the above definition, seven of which were in included in CGC (see Table 2). Although some rare driver genes such as EGFR, EP300, and CREBBP have been found in DNMax and DNSum, they rank higher in our method. In addition, SRC (1.58% of cases) is usually associated with disease and may lead to the development of human malignancies [55]. FYN (0.95% of cases) and PRKCA (1.58% of cases) have not been listed as driving genes by CGC, but studies have found that they are associated with many cancers and overexpressed in cancer patients [56, 57].
Table 2 Rare driver genes in OVARIAN
In BLCA, we identified 18 rare genes among 22 candidate driver genes (see Table 3), 12 of which were in CGC. For example, MENC recognized AKT1 (0.53% of cases) as a serine/threonine protein kinase, and its downstream proteins have been reported to be frequently activated in human cancers [58]. Most of the highest-ranked genes in BLCA are low-frequency mutant genes.
Table 3 Rare driver genes in BLCA
Considering that the identification of cancer driver genes is required for cancer treatment, we used the drug–genes interaction database (DGIdb) [59] and TARGET database [60] to determine whether our candidate driver genes are clinically relevant genes. The results are shown in Fig. 3. In all four cancer datasets, 80% or more candidate driver genes were identified as actionable targets. Approximately 40% of the genes were druggable. There is a partial intersection between the candidate genes and druggable genes. The union of the actionable and druggable genes in the four cancers BLCA, GBM, OVARIAN, PRAD was 42, 42, 39, and 42, respectively. These results indicate that the candidate driver genes are clinically relevant.
Actionable and druggable genes among candidate driver genes in four types of cancer
To test the biological function of the MENC-predicted candidate drivers, we used the DAVID tool (v6.8) for KEGG pathway and GO function enrichment analyses.
For OVARIAN, the important candidates were mainly enriched in pathways in cancer, viral carcinogenesis, proteoglycans in cancer, prostate cancer, and pancreatic cancer. They were also involved in biological process such as positive regulation of transcription from RNA polymerase II promoter and signal transduction. Regarding cellular components, the identified candidates were enriched in the nucleus, nucleoplasm, cytosol, cytoplasm, and plasma membrane. Furthermore, with regards to important molecular functions, the candidate drivers were enriched in identical protein binding, DNA binding, and transcription factor binding.
In BLCA, KEGG analysis showed that the candidate genes were enriched in pathways in cancer, chemokine signaling pathway, and PI3K-Akt signaling pathway. GO analysis revealed that the candidate genes were enriched in signal transduction, positive regulation of transcription, and DNA template. As for cellular components, the candidates were enriched in the cytoplasm and nucleus. In terms of molecular functions, the candidates were enriched in protein binding, enzyme binding, and transcription factor activity.
In GBM, the candidates were enriched in pathways in cancer, viral carcinogenesis, and hepatitis B. In terms of biological processes, the candidate drivers were enriched in signal transduction, viral processes, and protein phosphorylation. With respect to cellular components, the candidates were enriched in the nucleus, plasma members, cytoplasm, and nucleoplasm. As for molecular functions, the candidates were enriched in enzyme binding, transcription factor activity, and sequence-specific DNA binding.
In PRAD, the enriched KEGG pathways were proteoglycans in cancer, thyroid hormone signaling pathway, and microRNAs in cancer. The enriched GO functions were negative regulation of the apoptotic process and protein phosphorylation. As for cellular components, the candidates were enriched in the cytosol, nucleus, and plasma membrane. In terms of molecular functions, the candidate drivers were enriched in protein binding, ATP binding, transcription factor binding, and kinase activity.
In this study, we proposed the MENC method for identification of driver genes. Our approach not only considered mutation frequency in patients but also integrated mutation and gene expression data into a gene–gene interaction network. We considered the nearest and next-nearest nodes from the source node when calculating the network centrality. When tested on the GBM and OVARIAN datasets, our method performed significantly better than the network-based SCS and MUFFIN methods. In addition, our method was superior to other methods such as DriverNet in analyzing the PRAD and BLCA datasets. Our method even identified rare driver genes.
Nevertheless, our approach had some limitations. For example, in clinical practice, precision medicine and personalized medicine are important for the diagnosis and treatment of patients. However, using the proposed method, we could not diagnose driver genes in the individual. In the future, we will propose a new approach to identify patient-specific and rare driver genes based on individual mutations and gene expression profiles in tumors.
Overview of the MENC approach
We proposed a new method that combined mutation and expression data into a PPI network, and adopted a combination of semi-local centrality and mutation effect function to identify the driver genes of cancer. The method consisted of three main steps. First, we integrated SNV and CNV data to obtain a mutation matrix, and calculated the gene mutation score (Eq. 2) and the Euclidean distance (Eq. 3) between two genes according to the matrix. Next, the mutation effect function between genes was calculated according to Eq. 4. In the second step, we compared the expression profiles of tumor samples with those of normal samples to identify DEGs. We subsequently constructed a semi-local network for each mutation gene using DEGs and mutation genes according to the PPI network. The third step was to calculate the local centrality and mutation effect of the mutated genes according to the target function (Eq. 5). The top-ranking genes were regarded as candidate driver genes. Our method considered the nearest and next-nearest nodes when calculating the local centrality. Compared with global centrality measures (e.g., betweenness centrality and closeness centrality), our local centrality measure had a much lower computational complexity. We also added the mutational effect function, as to not ignore some genes that have a low degree but may have a much higher influence than high-degree genes [61]. A flowchart of the method is shown in Fig. 4.
Flowchart of comparative transcriptome analysis of the mutations effect and network centrality (MENC) method used in this study. The red nodes represent the mutated gene from the mutation-patient matrix, and the blue nodes represent the differentially expressed genes from the gene expression matrix
Calculation of gene mutation score and distance between genes
The downloaded TCGA coding region mutation data were summarized in a binary gene-patient matrix M, in which the rows represent the genes, and the columns represent the cancer samples (patients). For gene i, if the patient has SNVs or CNVs, M(i, j) = 1; otherwise, M(i, j) = 0. We used the MaxMIF [62] method to calculate the mutation score (Eq. 2). Based on the obtained gene-patient matrix, we calculated the mutation score of the gene. The mutation score M(i) for each gene i accounts for the contribution of its mutation to cancer, defined as follows:
$${\text{M}}({\text{i}}) = \left\{ {\begin{array}{*{20}l} {\sum\nolimits_{{k \in K_{i} }} {\frac{1}{{N_{k} }}} ,} \hfill & {\quad K_{i} \ne \Phi } \hfill \\ {\frac{1}{{N_{\max } }},} \hfill & {\quad K_{i} = \Phi } \hfill \\ \end{array} } \right.$$
where Ki is the set of patients with mutations in gene i. Nk is the total number of mutated genes in sample k. Nmax is the maximum number of mutated genes in all samples. If gene i has no mutation in all samples, that is, Ki is empty, then M(i) is assigned a background mutation score (BMS) that is no greater than any mutant gene.
We then calculated the Euclidean distance between two genes according to the distance formula (Eq. 3), where vector X, Y is the row vector of each gene in the gene-patient matrix, and xi, yi is an element in the row vector. In this study, we also tried other distance formulas, such as Jaccard and Manhattan, and brought the distance obtained by each distance formula into the final objective function. We found that the obtained driver genes were the same; therefore, we chose the Euclidean distance in the experiment.
$${\text{dist}}(X,Y) = \sqrt {\sum\limits_{i = 1}^{n} {(x_{i} - y_{i} )^{2} } }$$
Mutations effect function between genes
Reference MaxMIF measures the effect of interaction between two mutant genes on biological functions. In this experiment, we also used mutation impact function (MIF) values to calculate the effect of mutation between two genes. The value is driven by the gravity principle [63].
$${\text{MIF}}(i,j) = \frac{M(i)M(j)}{{r_{ij}^{2} }}$$
Here, M(i) and M(j) are the mutation scores of gene i and j, respectively. rij is the reciprocal of the Euclidean distance between gene i and gene j. Euclidean distance measures the similarity of two vectors (the similarity of two genes on the patient set). Two genes with high mutation scores and high similarity had high MIF values.
Identification of DEGs and construction of local network
In this study, expression data were processed the same way as SCS data. To indicate the DEGs of each patient, we first calculated the log2 fold-change in gene expression between the paired tumor and normal samples. Genes with an absolute value greater than 1 were considered as DEGs. We then collected the DEGs from each patient to obtain the DEGs of the cohort. All patient mutation genes were selected from the mutation matrix. In addition, we downloaded the PPI network as an interaction graph between the mutated genes and DEGs. If there are edges of mutant genes and DEGs in the network, the two genes are connected to the semi-local network. We built a semi-local network where mutated genes were considered the source node and DEGs were the target nodes. Moreover, we only considered the role of the mutant in two steps, which reduced the computational complexity. After preprocessing the data, the next step was performed.
Calculation of driver gene scores
Unlike some existing network-based methods, we constructed a new semi-local intersection network for each mutated gene by merging mutant genes, DEGs, and HPRD networks. Referring to the metric of the network local centrality measure CL(v) in [61], CL(v) calculates the number of neighbors of node v and the neighbors of the neighbors. We have made corresponding improvements to this formula: when counting the number of neighbors of a node gene, we performed different calculations for the neighbors of the node that were mutations and DEGs. If the neighbor of the node was a mutated gene, we used the MIF between the genes multiplied by the degree of the node, and if the neighbor was the DEGs, only the degree of the node was calculated. See formula (5):
$$\begin{aligned} score(v) & = N(v) + \sum\limits_{\begin{subarray}{l} u \in N(u) \\ u \in Mutation \end{subarray} } {c(u)*MIF(v,u)} + \sum\limits_{\begin{subarray}{l} u \in N(u) \\ u \in DEGs \end{subarray} } {b(u)} \\ c(u) & = \sum\limits_{\begin{subarray}{l} w \in N(u) \\ w \in Mutation \end{subarray} } {N(u)} *MIF(u,w) + \sum\limits_{\begin{subarray}{l} w \in N(u) \\ w \in DEGs \end{subarray} } {N(w)} \\ b(u) & = \sum\limits_{\begin{subarray}{l} w \in N(u) \\ w \in Mutation \end{subarray} } {c(w)} + \sum\limits_{\begin{subarray}{l} w \in N(u) \\ w \in DEGs \end{subarray} } {N(w)} \\ \end{aligned}$$
where N(v)/N(w) represents the set of neighbors of node v/w. We calculated the local centrality of the mutated gene. For mutation i, if the mutated gene u/w was ligated, we also considered the mutation effect between them as a weight, calculated by c(u)/c(w). Therefore, we can identify drivers that are important in the network and have a strong effect on other genes. If the neighbor u/v is a DEG, calculated by b(u)/b(w), which only considered the centrality of the network. Our main idea was to accord the function as the effect score in a local network. The higher the score, the greater the effect of the mutated gene on the DEGs in the local network. The presence of genes is both a mutation and a differential expression. Therefore, these genes may be more important. Therefore, when a gene is differentially expressed, it acts as a target node. However, when mutated, it acts as a source node. The score for this type of gene increased. Using this model, we obtained a score for each mutant gene. Then, according to the scores, we ranked the mutation genes to identify influential genes. We assumed that the higher the ranking, the more likely it was to be a driver gene.
All datasets analyzed in the current study were downloaded from the TCGA data portal (https://tcga-data.nci.nih.gov/tcga/). The evaluation data set used was from the CGC gene list of the COSMIC database (https://cancer.sanger.ac.uk/cosmic), version number (09/26/2016).
DEGs:
Differentially expressed genes
PPI:
Protein–protein interaction
MENC:
Mutations effect and network centrality
NGS:
TCGA:
The cancer genome atlas
ICGC:
International cancer genome consortium
SCNAs:
Somatic copy vumber alterations
SNVs:
Single nucleotide variants
PCC:
Pearson correlation coefficient
GBM:
BLCA:
PRAD:
OVARIAN:
CNVs:
Copy-number variations
HPRD:
Human protein reference database
CGC:
DNmax:
Direct neighbor max
DNsum:
Direct neighbor sum
DGIdb:
Drug–genes interaction database
Tumor alterations relevant for genomics-driven therapy
Database for annotation, visualization and integrated discovery
BMS:
Background mutation score
MIF:
Mutation impact function
Consortium GP. An integrated map of genetic variation from 1,092 human genomes. Nature. 2012;491(7422):56–65.
Capriotti E, Nehrt NL, Kann MG, Bromberg Y. Bioinformatics for personal genome interpretation. Brief Bioinform. 2012;13(4):495–512.
Hou JP, Ma J. DawnRank: discovering personalized driver genes in cancer. Genome Med. 2014. https://doi.org/10.1186/s13073-014-0056-8.
Weinstein JN, Collisson EA, Mills GB, Shaw KRM, Ozenberger BA, Ellrott K, et al. The cancer genome atlas pan-cancer analysis project. Nat Genet. 2013;45:1113–20.
Hudson TJ, Anderson W, Aretz A, Barker AD, Bell C, Bernabé RR, et al. International network of cancer genome projects. Nature. 2010;464(7291):993–8.
Zhang J, Zhang S, Wang Y, Zhang XS. Identification of mutated core cancer modules by intergrating somatic mutation, copy number variation, and gene expression data. BMC Syst Biol. 2013;7(Suppl 2):S4.
Chen L, Wang RS, Zhang XS. Biomolecular networks: methods and applications in systems biology. Hoboken: Wiley; 2009.
Lee JH, Zhao XM, Yoon L, Lee JY, Kwon NH, Wang YY, et al. Integrative analysis of mutational and transcriptional profiles reveals driver mutations of metastatic breast cancers. Cell Discov. 2016;2:16025.
Liang L, Fang JY, Xu J. Gastric cancer and gene copy number variation: emerging cancer drivers for targeted therapy. Oncogene. 2016;35:1475.
Wang H, Liang L, Fang JY, Xu J. Somatic gene copy number alterations in colorectal cancer: new quest for cancer drivers and biomarkers. Oncogene. 2011;2016:35.
Nibourel O, Guihard S, Roumier C, Pottier N, Terre C, Paquet A, et al. Copy-number analysis identified new prognostic marker in acute myeloid leukemia. Leukemia. 2017;31:555.
Zhu G, Yang H, Chen X, Wu J, Zhang Y, Zhao XM. CSTEA: a webserver for the cell state transition expression atlas. Nucleic Acids Res. 2017;45(W1):W103–8.
Green ED, Guyer MS. National human genome research I: charting a course for genomic medicine from base pairs to bedside. Nature. 2011;470:204–13.
Stratton MR. Journeys into the genome of cancer cells. EMBO Mol Med. 2013;5:169–72.
Wang YY, Chen WH, Xiao PP, Xie WB, Luo QB, Bork P, et al. GEAR: a database of genomic elements associated with drug resistance. Sci Rep. 2017;7:44085.
Stratton MR, Campbell PJ, Futreal PA. The cancer genome. Nature. 2009;458:719–24.
Vogelstein B, Papandopoulos N, Velculescu VE, Zhou S, Diaz LAJ, Kinzler KW. Cancer genome landscapes. Science. 2013;339:1546–58.
Greenman C, Stephens P, Smith R, Dalgliesh GL, Hunter C, Bignell G, et al. Patterns of somatic mutation in human cancer genomes. Nature. 2007;446(7132):153–8.
Ding L, Getz G, Wheeler DA, Mardis ER, McLellan MD, Cibulskis K, et al. Somatic mutations affect key pathways in lung adenocarcinoma. Nature. 2008;455(7216):1069–75.
Jones S, Zhang X, Parsons DW, Lin JC-H, Leary RJ, Angenendt P, et al. Core signaling pathways in human pancreatic cancers revealed by global genomic analyses. Science. 2008;321(5897):1801–6.
Banerji S, Cibulskis K, Rangel-Escareno C, Brown KK, Carter SL, Frederick AM, et al. Sequence analysis of mutations and translocations across breast cancer subtypes. Nature. 2012;486:405–9.
Dees ND, Zhang Q, Kandoth C, Wendl MC, Schierding W, Koboldt DC, et al. MuSic: identifying mutational significance in cancer genomes. Genome Res. 2012;22:1589–98.
Tamborero D, Gonzalezperez A, Lopezbigas N. OncodriveCLUST: exploiting the positional clustering of somatic mutations to identify cancer genes. Bioinformatics. 2013;29(18):2238–44.
Lawrence MS, Stojanov P, Mermel CH, Robinson JT, Garraway LA, Golub TR, et al. Discovery and saturation analysis of cancer genes across 21 tumour types. Nature. 2014;505(7484):495.
Wood LD, Parsons DW, Jones S, Lin J, Sjoblom T, Leary RJ, et al. The genomic landscapes of human breast and colorectal cancers. Science. 2007;318:1108–13.
Lawrence MS, Stojanov P, Mermel CH, Robinson JT, Garraway LA, Golub TR, et al. Discovery and saturation analysis of cancer genes across 21 tumour types. Nature. 2014;505:495–501.
Carter H, Chen S, Isik L, Tyekucheva S, Velculescu VE, Kinzler KW, et al. Cancer-specific high-throughput annotation of somatic mutations: computational prediction of driver missense mutations. Cancer Res. 2009;69:6660–7.
Douville C, Carter H, Kim R, Niknafs N, Diekhans M, Stenson PD, et al. CRAVAT: cancer-related analysis of variants toolkit. Bioinformatics. 2013;29(5):647–8.
Wong WC, Kim D, Carter H, Diekhans M, Ryan MC, Karchin R. CHASM and SNVBox: toolkit for detecting biologically important single nucleotide mutations in cancer. Bioinformatics. 2011;27(15):2147–8.
Carter H, Douville C, Stenson PD, Cooper DN, Karchin R. Identifying Mendelian disease genes with the variant effect scoring tool. BMC Genom. 2013;14(3):1–16.
Mao Y, Chen H, Liang H, Meric-Bernstam F, Mills GB, Chen K. CanDrA: cancer-specific driver missense mutation annotation with optimized features. PLoS ONE. 2013;8:e77945.
Shihab HA, Gough J, Cooper DN, Day INN, Gaunt TR. Predicting the functional consequences of cancer-associated amino acid substitutions. Bioinformatics. 2013;29(12):1504–10.
Shihab HA, Gough J, Cooper DN, Stenson PD, Barker GLA, Edwards KJ, et al. Predicting the functional, molecular, and phenotypic consequences of amino acid substitutions using hidden Markov models. Hum Mutat. 2013;34(1):57–65.
Han Y, Yang JZ, Qian XY, Cheng WC, Liu SH, Hua X, et al. DriverML: a machine learning algorithm for identifying driver genes in cancer sequencing studies. Nucleic Acids Res. 2019;47(8):e45.
Hu JX, Thomas CE, Brunak S. Network biology concepts in complex disease comorbidities. Nat Rev Genet. 2016;17(10):615–29.
Ciriello G, Cerami E, Sander C, Schultz N. Mutual exclusivity analysis identifies oncogenic network modules. Genome Res. 2012;22:398–406.
Ng S, Collisson EA, Sokolov A, Goldstein T, Gonzalez-Perez A, Lopez-Bigas N, et al. PARADIGM-SHIFT predicts the function of mutations in multiple cancers using pathway impact analysis. Bioinformatics. 2012;28:i640–6.
Leiserson MDM, Vandin F, Wu HT, Dobson JR, Eldridge JV, Thomas JL, et al. Pan-cancer network analysis identifies combinations of rare somatic mutations across pathways and protein complexes. Nat Genet. 2014;47:106–14.
Bashashati A, Haffari G, Ding JR, Ha G, Lui K, Rosner J, et al. DriverNet: uncovering the impact of somatic drive mutations on transcriptional network in cancer. Genome Biol. 2012;13:R124.
Bertrand D, Chng KR, Sherbaf FG, Kiesel A, Chia BKH, Sia YY, et al. Patient-specific driver gene prediction and risk assessment through integrated network analysis of cancer omics profiles. Nucleic Acids Res. 2015;43(7):e44.
Jia P, Zhao Z. VarWalker: personalized mutation network analysis of putative cancer genes from next-generation sequencing data. PLOS Comput Biol. 2014;10(2):e1003460.
Amgalan B, Lee H. DEOD: uncovering dominant effects of cancer-driver genes based on a partial covariance selection method. Bioinformatics. 2015;31(15):2452–60.
Ara C, Jung ES, Eriu K, Fran S, Ben L, Insuk L. MUFFINN: cancer gene discovery via network analysis of somatic mutation data. Genome Biol. 2016;17:129.
Liu XP, Wang YT, Ji HB, Aihara K, Chen LN. Personalized characterization of diseases using sample-specific networks. Nucleic Acids Res. 2016;44(22):e164.
Shrestha R, Hodzic E, Sauerwald T, Dao P, Wang K, Yeung J, et al. HIT'nDRIVE: patient-specific multidriver gene prioritization for precision oncology. Genome Res. 2017;27(9):1573–88.
Guo WF, Zhang SW, Liu LL, Liu F, Shi QQ, Zhang L, et al. Discovering personalized driver mutation profiles of single samples in cancer by network control strategy. Bioinformatics. 2018;34(11):1893–903.
Guo WF, Zhang SW, Zeng T, Li Y, Gao J, Chen L. A novel network control model for identifying personalized driver genes in cancer. PLOS Comput Biol. 2019;15(11):e1007520.
Dinstag G, Shamir R. PRODIGY: personalized prioritization of driver genes. Bioinformatics. 2019;36(6):1831–9.
PubMed Central Google Scholar
Qin GM, Li RY, Zhao XM. Identifying disease associated miRNAs based on protein domains. IEEE/ACM Trans Comput Biol Bioinform. 2016;13(6):1027–35.
Zhao XM, Liu KQ, Zhu GH, He F, Duval B, Richer JM, et al. Identifying cancer-related microRNAs based on gene expression data. Bioinformatics. 2015;37(8):1226–34.
Prahallad A, Sun C, Huang S, Di NF, Salazar R, Zecchin D, et al. Unresponsiveness of colon cancer to BRAF(V600E) inhibition through feedback activation of EGFR. Nature. 2012;483:100–3.
Prasad TSK, Goel R, Kandasamy K, Keerthikumar S, Kumar S, Mathivanan S, et al. Human protein reference database–2009 update. Nucleic Acids Res. 2009;37(suppl 1):D767–72.
Wei PJ, Zhang D, Xia JF, Zheng CH. LNDriver: identifying driver genes by integrating mutation and expression data based on gene-gene interaction network. BMC Bioinform. 2016;17(17):467.
Futreal P, Coin L, Marshall M, Down T, Hubbard T, Wooster R, et al. A census of human cancer genes. Nat Rev Cancer. 2004;4:177–83.
Frame MC. Src in cancer: deregulation and consequences for cell behaviour. Biochim Biophys Acta. 2002;1602(2):114–30.
Saito YD, Jensen AR, Salgia R, Posadas EM. Fyn a novel molecular target in cancer. Cancer. 2010;116(7):1629–37.
Cohen JN, Joseph NM, North JP, Onodera C, Zembowicz A, LeBoit PE. Genomic analysis of pigmented epithelioid melanocytomas reveals recurrent alterations in PRKAR1A, and PRKCA genes. Am J Surg Pathol. 2017;14(10):1333–46.
Lee D, Do IG, Choi K, Sung CO, Jang KT, Choi D, et al. The expression of phospho-AKT1 and phospho-MTOR is associated with a favorable prognosis independent of PTEN expression in intrahepatic cholangiocarcinomas. Mod Pathol Off J US Can Acad Pathol. 2012;25(1):131–9.
Griffith M, Griffith OL, Coffman AC, Weible JV, McMichael JF, Spies NC, et al. DGIdb: mining the druggable genome. Nat Methods. 2013;10(12):1209.
Van Allen EM, Wagle N, Stojanov P, Perrin DL, Cibulskis K, Marlow S, et al. Whole-exome sequencing and clinical interpretation of formalin-fixed, paraffin-embedded tumor samples to guide precision cancer medicine. Nat Med. 2014;20(6):682–8.
Chen DB, Lu LY, Shang MS, Zhang YC, Zhou T. Identifying influential nodes in complex networks. Physica A. 2012;391(4):1777–87.
Hou Y, Gao B, Li G, Su Z. MaxMIF: a new method for identifying cancer driver genes through effective data integration. Comput Biol. 2018;5:1800640.
Cheng FX, Liu C, Lin CC, Jia PL, Li WH, Zhao ZM. A gene gravity model for the evolution of cancer genomes: a study of 3000 cancer genomes across 9 cancer types. PLos Comput Biol. 2015;11:e1004497.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 22 Supplement 3, 2021: Proceedings of the 2019 International Conference on Intelligent Computing (ICIC 2019): bioinformatics. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-22-supplement-3.
The publication costs for this study were funded by the National Natural Science Foundation of China (Nos. U19A2064, 61873001, 61872220, and 61861146002). This study was supported by the Open Foundation of Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University (KF2020006), and the Xinjiang Autonomous Region University Research Program (XJEDU2019Y002). The funding bodies played no role in the design of the study; collection, analysis, and interpretation of data; and writing of the manuscript.
Key Lab of Intelligent Computing and Signal Processing of Ministry of Education, College of Computer Science and Technology, Anhui University, Hefei, China
Yun-Yun Tang, Pi-Jing Wei, Rui-Fen Cao & Chun-Hou Zheng
Institute of Physical Science and Information Technology, Anhui University, Hefei, China
Junfeng Xia
Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, Fujian, China
Rui-Fen Cao
College of Mathematics and System Sciences, Xinjiang University, Urumqi, China
Jian-ping Zhao & Chun-Hou Zheng
Yun-Yun Tang
Pi-Jing Wei
Jian-ping Zhao
Chun-Hou Zheng
YYT carried out the experiments and analyses presented in this work and wrote the manuscript. PJW performed data analysis. JX and CHZ helped with the project design, edited the manuscript, and provided guidance and feedback throughout the project. RFC and JZ supervised YYT and PJW in collecting the data and participated in the discussion of experimental results with all authors. All authors read and approved the final manuscript.
Correspondence to Chun-Hou Zheng.
Tang, YY., Wei, PJ., Zhao, Jp. et al. Identification of driver genes based on gene mutational effects and network centrality. BMC Bioinformatics 22 (Suppl 3), 457 (2021). https://doi.org/10.1186/s12859-021-04377-0
Accepted: 23 August 2021
Driver genes
Mutation data
Local centrality
Transcriptional network
Submission enquiries: [email protected]
|
CommonCrawl
|
Numerical methods and programming
RUS ENG JOURNALS PEOPLE ORGANISATIONS CONFERENCES SEMINARS VIDEO LIBRARY PACKAGE AMSBIB
What is RSS
Num. Meth. Prog.:
Personal entry:
2011, Volume 12, Issue 4
Вычислительные методы и приложения
Nonsmooth minimization problems for the difference of two convex functions
T. V. Gruzdeva, A. S. Strekalovskii, A. V. Orlov, O. V. Druzinina 384
Calculation of the density of states and the thermal properties of polymer chains and stars on a lattice by the Monte Carlo method with the use of the Wang–Landau algorithm
I. A. Silantyeva, P. N. Vorontsov-Velyaminov 397
Symbolic computations in the lattice space $\mathbb{R}_{c}^{n}$
G. G. Ryabov, V. A. Serov 409
Quantum-chemical modeling of excited states of Bismuth monocations
A. N. Romanov, O. A. Kondakova, A. Yu. Golovacheva, A. V. Sulimov, I. V. Oferkin, V. B. Sulimov 417
A parallel algorithm for solving strong separability problem on the basis of Fejer mappings
A. V. Ershova, I. M. Sokolinskaya 423
Numerical algorithms for the analysis of elastic waves in block media with thin interlayers
M. P. Varygina, M. A. Pokhabova, O. V. Sadovskaya, V. M. Sadovskii 435
Calculation of excited states of the polycation Bi$_5^3+$ by the spin-orbit configuration interaction method
A. N. Romanov, O. A. Kondakova, D. N. Vtyurina, A. V. Sulimov, V. B. Sulimov 443
On peculiarities of using heterogeneous cluster architecture for solving continuum mechanics problems
D. A. Gubaidullin, A. I. Nikiforov, R. Sadovnikov 450
Extraction and use of opinion words for the three-way review classification problem
N. V. Lukashevich, I. I. Chetviorkin 73
High-performance reconfigurable computer systems of new generation
A. I. Dordopulo, I. A. Kalyaev, I. I. Levin, E. A. Semernikov 82
An approach to cluster system task flow monitoring, analysis, and visualization
A. V. Adinetz, P. A. Bryzgalov, Vad. V. Voevodin, S. A. Zhumatii, D. A. Nikitenko 90
A test suite for analyzing the performance features of graphics processing units
A. V. Adinetz, P. A. Shvets 94
Terms of Use Registration to the website Logotypes © Steklov Mathematical Institute RAS, 2022
|
CommonCrawl
|
Categorifying the Heisenberg Algebra
Posted by Jeffrey Morton under algebra, categorification, quantum mechanics, reading, representation theory, spans, species
Marco Mackaay recently pointed me at a paper by Mikhail Khovanov, which describes a categorification of the Heisenberg algebra (or anyway its integral form ) in terms of a diagrammatic calculus. This is very much in the spirit of the Khovanov-Lauda program of categorifying Lie algebras, quantum groups, and the like. (There's also another one by Sabin Cautis and Anthony Licata, following up on it, which I fully intend to read but haven't done so yet. I may post about it later.)
Now, as alluded to in some of the slides I've from recent talks, Jamie Vicary and I have been looking at a slightly different way to answer this question, so before I talk about the Khovanov paper, I'll say a tiny bit about why I was interested.
Groupoidification
The Weyl algebra (or the Heisenberg algebra – the difference being whether the commutation relations that define it give real or imaginary values) is interesting for physics-related reasons, being the algebra of operators associated to the quantum harmonic oscillator. The particular approach to categorifying it that I've worked with goes back to something that I wrote up here, and as far as I know, originally was suggested by Baez and Dolan here. This categorification is based on "stuff types" (Jim Dolan's term, based on "structure types", a.k.a. Joyal's "species"). It's an example of the groupoidification program, the point of which is to categorify parts of linear algebra using the category . This has objects which are groupoids, and morphisms which are spans of groupoids: pairs of maps . Since I've already discussed the backgroup here before (e.g. here and to a lesser extent here), and the papers I just mentioned give plenty more detail (as does "Groupoidification Made Easy", by Baez, Hoffnung and Walker), I'll just mention that this is actually more naturally a 2-category (maps between spans are maps making everything commute). It's got a monoidal structure, is additive in a fairly natural way, has duals for morphisms (by reversing the orientation of spans), and more. Jamie Vicary and I are both interested in the quantum harmonic oscillator – he did this paper a while ago describing how to construct one in a general symmetric dagger-monoidal category. We've been interested in how the stuff type picture fits into that framework, and also in trying to examine it in more detail using 2-linearization (which I explain here).
Anyway, stuff types provide a possible categorification of the Weyl/Heisenberg algebra in terms of spans and groupoids. They aren't the only way to approach the question, though – Khovanov's paper gives a different (though, unsurprisingly, related) point of view. There are some nice aspects to the groupoidification approach: for one thing, it gives a nice set of pictures for the morphisms in its categorified algebra (they look like groupoids whose objects are Feynman diagrams). Two great features of this Khovanov-Lauda program: the diagrammatic calculus gives a great visual representation of the 2-morphisms; and by dealing with generators and relations directly, it describes, in some sense1, the universal answer to the question "What is a categorification of the algebra with these generators and relations". Here's how it works…
Heisenberg Algebra
One way to represent the Weyl/Heisenberg algebra (the two terms refer to different presentations of isomorphic algebras) uses a polynomial algebra . In fact, there's a version of this algebra for each natural number (the stuff-type references above only treat , though extending it to " -sorted stuff types" isn't particularly hard). In particular, it's the algebra of operators on generated by the "raising" operators and the "lowering" operators . The point is that this is characterized by some commutation relations. For , we have:
but on the other hand
So the algebra could be seen as just a free thing generated by symbols with these relations. These can be understood to be the "raising and lowering" operators for an -dimensional harmonic oscillator. This isn't the only presentation of this algebra. There's another one where (as in ) has a slightly different interpretation, where the and operators are the position and momentum operators for the same system. Finally, a third one – which is the one that Khovanov actually categorifies – is skewed a bit, in that it replaces the with a different set of so that the commutation relation actually looks like
It's not instantly obvious that this produces the same result – but the can be rewritten in terms of the , and they generate the same algebra. (Note that for the one-dimensional version, these are in any case the same, taking .)
Diagrammatic Calculus
To categorify this, in Khovanov's sense (though see note below1), means to find a category whose isomorphism classes of objects correspond to (integer-) linear combinations of products of the generators of . Now, in the setup, we can say that the groupoid , or equvialently , represents Fock space. Groupoidification turns this into the free vector space on the set of isomorphism classes of objects. This has some extra structure which we don't need right now, so it makes the most sense to describe it as , the space of power series (where corresponds to the object ). The algebra itself is an algebra of endomorphisms of this space. It's this algebra Khovanov is looking at, so the monoidal category in question could really be considered a bicategory with one object, where the monoidal product comes from composition, and the object stands in formally for the space it acts on. But this space doesn't enter into the description, so we'll just think of as a monoidal category. We'll build it in two steps: the first is to define a category .
The objects of are defined by two generators, called and , and the fact that it's monoidal (these objects will be the categorifications of and ). Thus, there are objects and so forth. In general, if is some word on the alphabet , there's an object .
As in other categorifications in the Khovanov-Lauda vein, we define the morphisms of to be linear combinations of certain planar diagrams, modulo some local relations. (This type of formalism comes out of knot theory – see e.g. this intro by Louis Kauffman). In particular, we draw the objects as sequences of dots labelled or , and connect two such sequences by a bunch of oriented strands (embeddings of the interval, or circle, in the plane). Each dot is the endpoint of a strand oriented up, and each dot is the endpoint of a strand oriented down. The local relations mean that we can take these diagrams up to isotopy (moving the strands around), as well as various other relations that define changes you can make to a diagram and still represent the same morphism. These relations include things like:
which seems visually obvious (imagine tugging hard on the ends on the left hand side to straighten the strands), and the less-obvious:
and a bunch of others. The main ingredients are cups, caps, and crossings, with various orientations. Other diagrams can be made by pasting these together. The point, then, is that any morphism is some -linear combination of these. (I prefer to assume most of the time, since I'm interested in quantum mechanics, but this isn't strictly necessary.)
The second diagram, by the way, are an important part of categorifying the commutation relations. This would say that (the commutation relation has become a decomposition of a certain tensor product). The point is that the left hand sides show the composition of two crossings and in two different orders. One can use this, plus isotopy, to show the decomposition.
That diagrams are invariant under isotopy means, among other things, that the yanking rule holds:
(and similar rules for up-oriented strands, and zig zags on the other side). These conditions amount to saying that the functors and are two-sided adjoints. The two cups and caps (with each possible orientation) give the units and counits for the two adjunctions. So, for instance, in the zig-zag diagram above, there's a cup which gives a unit map (reading upward), all tensored on the right by . This is followed by a cap giving a counit map (all tensored on the left by ). So the yanking rule essentially just gives one of the identities required for an adjunction. There are four of them, so in fact there are two adjunctions: one where is the left adjoint, and one where it's the right adjoint.
Karoubi Envelope
Now, so far this has explained where a category comes from – the one with the objects described above. This isn't quite enough to get a categorification of : it would be enough to get the version with just one and one element, and their powers, but not all the and . To get all the elements of the (integral form of) the Heisenberg algebras, and in particular to get generators that satisfy the right commutation relations, we need to introduce some new objects. There's a convenient way to do this, though, which is to take the Karoubi envelope of .
The Karoubi envelope of any category is a universal way to find a category that contains and for which all idempotents split (i.e. have corresponding subobjects). Think of vector spaces, for example: a map such that is a projection. That projection corresponds to a subspace , and is actually another object in , so that splits (factors) as . This might not happen in any general , but it will in . This has, for objects, all the pairs where is idempotent (so is contained in as the cases where ). The morphisms are just maps with the compatibility condition that (essentially, maps between the new subobjects).
So which new subobjects are the relevant ones? They'll be subobjects of tensor powers of our . First, consider . Obviously, there's an action of the symmetric group on this, so in fact (since we want a -linear category), its endomorphisms contain a copy of , the corresponding group algebra. This has a number of different projections, but the relevant ones here are the symmetrizer,:
which wants to be a "projection onto the symmetric subspace" and the antisymmetrizer:
which wants to be a "projection onto the antisymmetric subspace" (if it were in a category with the right sub-objects). The diagrammatic way to depict this is with horizontal bars: so the new object (the symmetrized subobject of ) is a hollow rectangle, labelled by . The projection from is drawn with arrows heading into that box:
The antisymmetrized subobject is drawn with a black box instead. There are also and defined in the same way (and drawn with downward-pointing arrows).
The basic fact – which can be shown by various diagram manipulations, is that . The key thing is that there are maps from the left hand side into each of the terms on the right, and the sum can be shown to be an isomorphism using all the previous relations. The map into the second term involves a cap that uses up one of the strands from each term on the left.
There are other idempotents as well – for every partition of , there's a notion of -symmetric things – but ultimately these boil down to symmetrizing the various parts of the partition. The main point is that we now have objects in corresponding to all the elements of . The right choice is that the (the new generators in this presentation that came from the lowering operators) correspond to the (symmetrized products of "lowering" strands), and the correspond to the (antisymmetrized products of "raising" strands). We also have isomorphisms (i.e. diagrams that are invertible, using the local moves we're allowed) for all the relations. This is a categorification of .
Some Generalities
This diagrammatic calculus is universal enough to be applied to all sorts of settings where there are functors which are two-sided adjoints of one another (by labelling strands with functors, and the regions of the plane with categories they go between). I like this a lot, since biadjointness of certain functors is essential to the 2-linearization functor (see my link above). In particular, uses biadjointness of restriction and induction functors between representation categories of groupoids associated to a groupoid homomorphism (and uses these unit and counit maps to deal with 2-morphisms). That example comes from the fact that a (finite-dimensional) representation of a finite group(oid) is a functor into , and a group(oid) homomorphism is also just a functor . Given such an , there's an easy "restriction" , that just works by composing with . Then in principle there might be two different adjoints , given by the left and right Kan extension along . But these are defined by colimits and limits, which are the same for (finite-dimensional) vector spaces. So in fact the adjoint is two-sided.
Khovanov's paper describes and uses exactly this example of biadjointness in a very nice way, albeit in the classical case where we're just talking about inclusions of finite groups. That is, given a subgroup , we get a functors , which just considers the obvious action of act on any representation space of . It has a biadjoint , which takes a representation of to , which is a special case of the formula for a Kan extension. (This formula suggests why it's also natural to see these as functors between module categories and ). To talk about the Heisenberg algebra in particular, Khovanov considers these functors for all the symmetric group inclusions .
Except for having to break apart the symmetric groupoid as , this is all you need to categorify the Heisenberg algebra. In the categorification, we pick out the interesting operators as those generated by the map from to itself, but "really" (i.e. up to equivalence) this is just all the inclusions taken at once. However, Khovanov's approach is nice, because it separates out a lot of what's going on abstractly and uses a general diagrammatic way to depict all these 2-morphisms (this is explained in the first few pages of Aaron Lauda's paper on ambidextrous adjoints, too). The case of restriction and induction is just one example where this calculus applies.
There's a fair bit more in the paper, but this is probably sufficient to say here.
1 There are two distinct but related senses of "categorification" of an algebra here, by the way. To simplify the point, say we're talking about a ring . The first sense of a categorification of is a (monoidal, additive) category with a "valuation" in that takes to and to . This is described, with plenty of examples, in this paper by Rafael Diaz and Eddy Pariguan. The other, typical of the Khovanov program, says it is a (monoidal, additive) category whose Grothendieck ring is . Of course, the second definition implies the first, but not conversely. The objects of the Grothendieck ring are isomorphism classes in . A valuation may identify objects which aren't isomorphic (or, as in groupoidification, morphisms which aren't 2-isomorphic).
So a categorification of the first sort could be factored into two steps: first take the Grothendieck ring, then take a quotient to further identify things with the same valuation. If we're lucky, there's a commutative square here: we could first take the category , find some surjection , and then find that . This seems to be the relation between Khovanov's categorification of and the one in . This is the sense in which it seems to be the "universal" answer to the problem.
Another note on Voting Systems and Social Choice
Posted by Jeffrey Morton under game theory, musing, philosophical, reading, social choice theory
Looks like the Standard Model is having a bad day – Fermilab has detected CP-asymmetry about 50 times what it predicts in some meson decay. As they say – it looks like there might be some new physics for the LHC to look into.
That said, this post is mostly about a particular voting system which has come back into the limelight recently, but also runs off on a few tangents about social choice theory and the assumptions behind it. I'm by no means expert in the mathematical study of game theory and social choice theory, but I do take an observer's interest in them.
A couple of years ago, during an election season, I wrote a post on Arrow's theorem, which I believe received more comments than any other post I've made in this blog – which may only indicate that it's more interesting than my subject matter, but I suppose is also a consequence of mentioning anything related to politics on the Internet. Arrow's theorem is in some ways uncontroversial – nobody disputes that it's true, and in fact the proof is pretty easy – but what significance, if any, it has for the real world can be controversial. I've known people who wouldn't continue any conversation in which it was mentioned, probably for this reason.
On the other hand, voting systems are now in the news again, as they were when I made the last post (at least in Ontario, where there was a referendum on a proposal to switch to the Mixed Member Proportional system). Today it's in the United Kingdom, where the new coalition government includes the Liberal Democrats, who have been campaigning for a long time (longer than it's had that name) for some form of proportional representation in the British Parliament. One thing you'll notice if you click that link and watch the video (featuring John Cleese), is that the condensed summary of how the proposed system would work doesn't actually tell you… how the proposed system would work. It explains how to fill out a ballot (with rank-ordering of candidates, instead of selecting a single one), and says that the rest is up to the returning officer. But obviously, what the returning officer does with the ballot is the key of the whole affair.
In fact, collecting ordinal preferences (that is, a rank-ordering of the options on the table) is the starting point for any social choice algorithm in the sense that Arrow's Theorem talks about. The "social choice problem" is to give a map from the set of possible preference orders for each individual, and produce a "social" preference order, using some algorithm. One can do a wide range of things with this information: even the "first-past-the-post" system can start with ordinal preferences: this method just counts the number of first-place rankings for each option, ranks the one with the largest count first, and declares indifference to all the rest.
The Lib-Dems have been advocating for some sort of proportional representation, but there are many different systems that fall into that category and they don't all work the same way. The Conservatives have promised some sort of referendum on a new electoral system involving the so-called "Alternative Vote", also called Instant Runoff Voting (IRV), or the Australian Ballot, since it's used to elect the Australian legislature.
Now, Arrow's theorem says that every voting system will fail at least one of the conditions of the theorem. The version I quoted previously has three conditions: Unrestricted Range (no candidate is excluded by the system before votes are even counted); Monotonicity (votes for a candidate shouldn't make them less likely to win); and Independence of Irrelevant Alternatives (if X beats Y one-on-one, and both beat Z, then Y shouldn't win in a three-way race). Most voting systems used in practice fail IIA, and surprisingly many fail monotonicity. Both possibilities allow forms of strategic voting, in which voters can sometimes achieve a better result, according to their own true preferences, by stating those preferences falsely when they vote. This "strategic" aspect to voting is what ties this into game theory.
In this case, IRV fails both IIA and monotonicity. In fact, this is involved with the fact that IRV also fails the Condorcet condition which says that if there's a candidate X who beats every other candidate one-on-one, X should win a multi-candidate race (which, obviously, can only happen if the voting system fails IIA).
So the IRV algorithm, one effectively uses the preference ordering to "simulate" a runoff election, in which people vote for their first choice from candidates, then the one with the fewest votes is eliminated, and the election is held again with candidates, and so on until a single winner emerges. In IRV, this is done by transferring the votes for the discarded candidate to their second-choice candidate, recounding, discarding again, and so on. (The proposal in the UK would be to use this system in each constituency to elect individual MP's.)
Here's an example of how IRV might fail these criteria, and permit strategic voting. The way assumes a close three-way election, but this isn't the only possibility.
Suppose there are three candidates: X, Y, and Z. There are six possible preference orders a voter could have, but to simplify, we'll suppose that only three actually occur, as follows:
Percentage Choice 1 Choice 2 Choice 3
36 X Z Y
33 Y Z X
31 Z Y X
One could imagine Z is a "centrist" candidate somewhere between X and Y. It's clear here that Z is the Condorcet winner: in a two-person race with either X or Y, Z would win by nearly a 2-to-1 margin. Yet under IRV, Z has the fewest first-choice ballots, and so is eliminated, and Y wins the second round. So IRV fails the Condorcet criterion. It also fails the Independence of Irrelevant Alternatives, since X is loses in a two-candidate vote against either Y or Z (by 64-36), hence should be "irrelevant", yet the fact that X is on the ballot causes Z to lose to Y, whom Z would otherwise beat
This tends to undermine the argument for IRV that it eliminates the "spoiler effect" (another term for the failure of IIA): here, Y is the "spoiler".
The failure of monotonicity is well illustrated by a slightly differente example, where Z-supporters are split between X and Y, say 16-15. Then X-supporters can get a better result for themselves if 6 of their 36 percent lie, and rank Y first instead of X (even though they like Y the least), followed by X. This would mean only 30% rank X first, so X is eliminated, and Y runs against Z. Then Z wins 61-39 against Y, which X-supporters prefer. Thus, although the X supporters switched to Y – who would otherwise have won – Y now loses. (Of course, switching to Z would also have worked – but this shows that in increase of support for the winning candidate could actually cause that candidate to LOSE, if it comes from the right place). This kind of strategic voting can happen with any algorithm that proceeds in multiple rounds.
Clearly, though, this form of strategic voting is more difficult than the kind seen in FPTP – "vote for your second choice to vote against your third choice", which is what usually depresses the vote for third parties, even those who do well in polls. Strategic voting always involves having some advance knowledge about what the outcome of the election is likely to be, and changing one's vote on that basis: under FPTP, this means knowing, for instance, that your favourite candidate is a distant third in the polls, and your second and third choices are the front-runners. Under IRV, it involves knowing the actual percentages much more accurately, and coordinating more carefully with others (to make sure that not too many people switch, in the above example). This sort of thing is especially hard to do well if everyone else is also voting strategically, disguising their true preferences, which is where the theory of such games with imperfect information gets complicated.
So there's an argument that in practice strategic voting matters less under IRV.
Another criticism of IRV – indeed, of any voting system that selects a single-candidate per district – is that it tends toward a two party system. This is "Duverger's Law", (which if it is a law in the sense of a theorem, it must be one of those facts about asymptotic behaviour that depend on a lot of assumptions, since we have a FPTP system in Canada, and four main parties). Whether this is bad or not is contentious – which illustrates the gap between analysis and conclusions about the real world. Some say two-party systems are bad because they disenfranchise people who would otherwise vote for small parties; others say they're good because they create stability by allowing governing majorities; still others (such as the UK's LibDems) claim they create instability, by leading to dramatic shifts in ruling party, instead of quantitative shifts in ruling coalitions. As far as I know, none of these claims can be backed up with the kind of solid analysis one has with strategic voting.
Getting back to strategic voting: perverse voting scenarios like the ones above will always arise when the social choice problem is framed as finding an algorithm taking voters' preference orders, and producing a "social" preference order. Arrow's theorem says any such algorithm will fail one of the conditions mentioned above, and the Gibbard-Satterthwaite theorem says that some form of strategic voting will always exist to take advantage of this, if the algorithm has unlimited range. Of course, a "limited range" algorithm – for example, one which always selects the dictator's preferred option regardless of any votes cast – may be immune to strategic voting, but not in a good way. (In fact, the GS theorem says that if strategic voting is impossible, the system is either dictatorial or a priori excludes some option.)
One suggestion to deal with Arrow's theorem is to frame the problem differently. Some people advocate Range Voting (that's an advocacy site, in the US context – here is one advocating IRV which describes possible problems with range voting – though criticism runs both ways). I find range voting interesting because it escapes the Arrow and Gibbard-Satterthwaite theorems; this in turn is because it begins by collecting cardinal preferences, not ordinal preferences, from each voter, and produces cardinal preferences as output. That is, voters give each option a score in the range between 0% and 100% – or 0.0 and 10.0 as in the Olympics. The winner (as in the Olympics) is the candidate with the highest total score. (There are some easy variations in non-single-winner situations: take the candidates with the top scores, or assign seats in Parliament proportional to total score using a variation on the same scheme). Collecting more information evades the hypotheses of these theorems. The point is that Arrow's theorem tells us there are fundamental obstacles to coherently defining the idea of the "social preference order" by amalgamating individual ones. There's no such obstacle to defining a social cardinal preference: it's just an average. Then, too: it's usually pretty clear what a preference order means – it's less clear for cardinal preferences; so the extra information being collected might not be meaningful. After all: many different cardinal preferences give the same order, and these all look the same when it comes to behaviour.
Now, as the above links suggest, there are still some ways to "vote tactically" with range voting, but many of the usual incentives to dishonesty (at least as to preference ORDER) disappear. The incentives to dishonesty are usually toward exaggeration of real preferences. That is, falsely assigning cardinal values to ordinal preferences: if your preference order is X > Y > Z, you may want to assign 100% to X, and 0% to Y and Z, to give your preferred candidate the strongest possible help. Another way to put this is: if there are candidates, a ballot essentially amounts to choosing a vector in , and vote-counting amounts to taking an average of all the vectors. Then assuming one knew in advance what the average were going to be, the incentive in voting is to pick a vector pointing from the actual average to the outcome you want.
But this raises the same problem as before: the more people can be expected to vote strategically, the harder it is to predict where the actual average is going to be in advance, and therefore the harder it is to vote strategically.
There are a number of interesting books on political theory, social choice, and voting theory, from a mathematical point of view. Two that I have are Peter Ordeshook's "Game Theory and Political Theory", which covers a lot of different subjects, and William Riker's "Liberalism Against Populism" which is a slightly misleading title for a book that is mostly about voting theory. I would recommend either of them – Ordeshook's is the more technical, whereas Riker's is illustrated with plenty of real-world examples.
I'm not particularly trying to advocate one way or another on any of these topics. If anything, I tend to agree with the observation in Ordeshook's book – that a major effect of Arrow's theorem, historically, has been to undermine the idea that one can use terms like "social interest" in any sort of uncomplicated way, and turned the focus of social choice theory from an optimization question – how to pick the best social choice for everyone – into a question in the theory of strategy games – how to maximize one's own interests under a given social system. I guess what I'd advocate is that more people should understand how to examine such questions (and I'd like to understand the methods better, too) – but not to expect that these sorts of mathematical models will solve the fundamental issues. Those issues live in the realm of interpretation and values, not analysis.
Disintegrations Integrated and Operations on Categories of Sheaves
Posted by Jeffrey Morton under 2-Hilbert Spaces, analysis, groupoids, papers, reading
Last week there was an interesting series of talks by Ivan Dynov about the classification of von Neumann algebras, and I'd like to comment on that, but first, since it's been a while since I posted, I'll catch up on some end-of-term backlog and post about some points I brought up a couple of weeks ago in a talk I gave in the Geometry seminar at Western. This was about getting Extended TQFT's from groups, which I've posted about plenty previously . Mostly I talked about the construction that arises from "2-linearization" of spans of groupoids (see e.g. the sequence of posts starting here).
The first intuition comes from linearizing spans of (say finite) sets. Given a map of sets , you get a pair of maps and between the vector spaces on and . (Moving from the set to the vector space stands in for moving to quantum mechanics, where a state is a linear combination of the "pure" ones – elements of the set.) The first map is just "precompose with ", and the other involves summing over the preimage (it takes the basis vector to the basis vector . These two maps are (linear) adjoints, if you use the canonical inner products where and are orthonormal bases. So then a span gives rise to a linear map (and an adjoint linear map going the other way).
There's more motivation for passing to 2-Hilbert spaces when your "pure states" live in an interesting stack (which can be thought of, up to equivalence, as a groupoid hence a category) rather than an ordinary space, but it isn't hard to do. Replacing with the category , and the sum with the direct sum of (finite dimensional) Hilbert spaces gives an analogous story for (finite dimensional) 2-Hilbert spaces, and 2-linear maps.
I was hoping to get further into the issues that are involved in making the 2-linearization process work with Lie groups, rather than finite groups. Among other things, this generalization ends up requiring us to work with infinite dimensional 2-Hilbert spaces (in particular, replacing with $\mathbf{Hilb}$). Other issues are basically measure-theoretic, since in various parts of the construction one uses direct sums. For Lie groups, these need to be direct integrals. There are also places where counting measure is used in the case of a discrete group . So part of the point is to describe how to replace these with integrals. The analysis involved with 2-Hilbert spaces isn't so different for than that required for (1-)Hilbert spaces.
Category theory and measure theory (analysis in general, really), have not historically got along well, though there are exceptions. When I was giving a similar talk at Dalhousie, I was referred to some papers by Mike Wendt, "The Category of Disintegration", and "Measurable Hilbert Sheaves", which is based on category-theoriecally dealing with ideas of von Neumann and Dixmier (a similar remark applies Yetter's paper "Measurable Categories"), so I've been reading these recently. What, in the measurable category, is described in terms of measurable bundles of Hilbert spaces, can be turned into a description in terms of Hilbert sheaves when the category knows about measures. But categories of measure spaces are generally not as nice, categorically, as the category of sets which gives the structure in the discrete case. Just for example, the product measure space isn't a categorical product – just a monoidal one, in a category Wendt calls .
This category has (finite) measure spaces as objects, and as morphisms has disintegrations. A disintegration from to consists of:
a measurable function
for each , the preimage becomes a measure space (with the obvious subspace sigma-algebra ), with measure
such that can be recovered by integrating against $\nu$: that is, for any measurable , (that is, ), we have
$\int_Y \int_{A_y} d\mu_y(x) d\nu(y) = \int_A d\mu(x) = \mu (A)$
where .
So the point is that such a morphism gives, not only a measurable function , but a way of "disintegrating" relative to . In particular, there is a forgetful functor , where is the category of measurable spaces, taking the disintegration to .
Now, is Cartesian; in particular, the product of measurable spaces, , is a categorical product. Not true for the product measure space in , which is just a monoidal category1. Now, in principle, I would like to describe what to do with groupoids in (i.e. internal to), , but that would involve side treks into things like volumes of measured groupoids, and for now I'll just look at plain spaces.
The point is that we want to reproduce the operations of "direct image" and "inverse image" for fields of Hilbert spaces. The first thing is to understand what's mean by a "measurable field of Hilbert spaces" (MFHS's) on a measurable space . The basic idea was already introduced by von Neumann not long after formalizing Hilbert spaces. A MFHS's on consists of:
a family of (separable) Hilbert spaces, for
a space (of "measurable sections" ) (i.e. pointwise inverses to projection maps ) with three properties:
measurability: the function is measurable for all
completeness: if and makes the function then
separability: there is a countable set of sections such that for all , the are dense in
This is a categorified analog of a measurable function: a measurable way to assign Hilbert spaces to points. Yetter describes a 2-category of MFHS's on , which is an (infinite dimensional) 2-vector space – i.e. an abelian category, enriched in vector spaces. is analogous to the space of measurable complex-valued functions on . It is also similar to a measurable-space-indexed version of , the prototypical 2-vector space – except that here we have . Yetter describes how to get 2-linear maps (linear functors) between such 2-vector spaces and .
This describes a 2-vector space – that is, a -enriched abelian category – whose objects are MFHS's, and whose morphisms are the obvious (that is, fields of bounded operators, whose norms give a measurable function). One thing Wendt does is to show that a MFHS on gives rise to measurable Hilbert sheaf – that is, a sheaf of Hilbert spaces on the site whose "open sets" are the measurable sets in $\mathcal{A}$, and where inclusions and "open covers" are oblivious to any sets of measure zero. (This induces a sheaf of Hilbert spaces on the open sets, if is a topological space and is the usual Borel -algebra). If this terminology doesn't spell it out for you, the point is that for any measurable set , there is a Hilbert space:
The descent (gluing) condition that makes this assignment a sheaf follows easily from the way the direct integral works, so that is the space of sections of with finite norm, where the inner product of two sections and is the integral of over .
The category of all such sheaves on is called , and it is equivalent to the category of MFHS up to equivalence a.e. Then the point is that a disintegration gives rise to two operations between the categories of sheaves (though it's convenient here to describe them in terms of MFHS: the sheaves are recovered by integrating as above):
which comes from pulling back along – easiest to see for the MFHS, so that , and
the "direct image" operation, where in terms of MFHS, we have . That is, one direct-integrates over the preimage.
Now, these are measure-theoretic equivalents of two of the Grothendieck operations on sheaves (here is the text of Lipman's Springer Lecture Notes book which includes an intro to them in Ch3 – a bit long for a first look, but the best I could find online). These are often discussed in the context of derived categories. The operation is the analog of what is usually called .
Part of what makes this different from the usual setting is that is not as nice as , the more usual underlying category. What's more, typically one talks about sheaves of sets, or abelian groups, or rings (which give the case of operations on schemes – i.e. topological spaces equipped with well-behaved sheaves of rings) – all of which are nicer categories than the category of Hilbert spaces. In particular, while in the usual picture is left adjoint to , this condition fails here because of the requirement that morphisms in are bounded linear maps – instead, there's a unique extension property.
Similarly, while is always defined by pulling back along a function , in the usual setting, the direct image functor is left-adjoint to , found by taking a left Kan extension along . This involves taking a colimit (specifically, imagine replacing the direct integral with a coproduct indexed over the same set). However, in this setting, the direct integral is not a coproduct (as the direct sum would be for vector spaces, or even finite-dimensional Hilbert spaces).
So in other words, something like the Grothendieck operations can be done with 2-Hilbert spaces, but the categorical properties (adjunction, Kan extension) are not as nice.
Finally, I'll again remark that my motivation is to apply this to groupoids (or stacks), rather than just spaces , and thus build Extended TQFT's from (compact) Lie groups – but that's another story, as we said when I was young.
1 Products: The fact that we want to look at spans in categories that aren't Cartesian is the reason it's more general to think about spans, rather than (as you can in some settings such as algebraic geometry) in terms of "bundles over the product", which is otherwise equivalent. For sets or set-groupoids, this isn't an issue.
"States" and Time – Hamiltonians, KMS states, and Tomita Flow
Posted by Jeffrey Morton under algebra, c*-algebras, musing, noncommutative geometry, philosophical, physics, quantum mechanics, reading
When I made my previous two posts about ideas of "state", one thing I was aiming at was to say something about the relationships between states and dynamics. The point here is that, although the idea of "state" is that it is intrinsically something like a snapshot capturing how things are at one instant in "time" (whatever that is), extrinsically, there's more to the story. The "kinematics" of a physical theory consists of its collection of possible states. The "dynamics" consists of the regularities in how states change with time. Part of the point here is that these aren't totally separate.
Just for one thing, in classical mechanics, the "state" includes time-derivatives of the quantities you know, and the dynamical laws tell you something about the second derivatives. This is true in both the Hamiltonian and Lagrangian formalism of dynamics. The Hamiltonian function, which represents the concept of "energy" in the context of a system, is based on a function , where is a vector representing the values of some collection of variables describing the system (generalized position variables, in some configuration space ), and the are corresponding "momentum" variables, which are the other coordinates in a phase space which in simple cases is just the cotangent bundle . Here, refers to mass, or some equivalent. The familiar case of a moving point particle has "energy = kinetic + potential", or for some potential function . The symplectic form on can then be used to define a path through any point, which describes the evolution of the system in time – notably, it conserves the energy . Then there's the Lagrangian, which defines the "action" associated to a path, which comes from integrating some function living on the tangent bundle , over the path. The physically realized paths (classically) are critical points of the action, with respect to variations of the path.
This is all based on the view of a "state" as an element of a set (which happens to be a symplectic manifold like or just a manifold if it's ), and both the "energy" and the "action" are some kind of function on this set. A little extra structure (symplectic form, or measure on path space) turns these functions into a notion of dynamics. Now a function on the space of states is what an observable is: energy certainly is easy to envision this way, and action (though harder to define intuitively) counts as well.
But another view of states which I mentioned in that first post is the one that pertains to statistical mechanics, in which a state is actually a statisticial distribution on the set of "pure" states. This is rather like a function – it's slightly more general, since a distribution can have point-masses, but any function gives a distribution if there's a fixed measure around to integrate against – then a function like becomes the measure . And this is where the notion of a Gibbs state comes from, though it's slightly trickier. The idea is that the Gibbs state (in some circumstances called the Boltzmann distribution) is the state a system will end up in if it's allowed to "thermalize" – it's the maximum-entropy distribution for a given amount of energy in the specified system, at a given temperature . So, for instance, for a gas in a box, this describes how, at a given temperature, the kinetic energies of the particles are (probably) distributed. Up to a bunch of constants of proportionality, one expects that the weight given to a state (or region in state space) is just , where is the Hamiltonian (energy) for that state. That is, the likelihood of being in a state is inversely proportional to the exponential of its energy – and higher temperature makes higher energy states more likely.
Now part of the point here is that, if you know the Gibbs state at temperature , you can work out the Hamiltonian
just by taking a logarithm – so specifying a Hamiltonian and specifying the corresponding Gibbs state are completely equivalent. But specifying a Hamiltonian (given some other structure) completely determines the dynamics of the system.
This is the classical version of the idea Carlo Rovelli calls "Thermal Time", which I first encountered in his book "Quantum Gravity", but also is summarized in Rovelli's FQXi essay "Forget Time", and described in more detail in this paper by Rovelli and Alain Connes. Mathematically, this involves the Tomita flow on von Neumann algebras (which Connes used to great effect in his work on the classification of same). It was reading "Forget Time" which originally got me thinking about making the series of posts about different notions of state.
Physically, remember, these are von Neumann algebras of operators on a quantum system, the self-adjoint ones being observables; states are linear functionals on such algebras. The equivalent of a Gibbs state – a thermal equilibrium state – is called a KMS (Kubo-Martin-Schwinger) state (for a particular Hamiltonian). It's important that the KMS state depends on the Hamiltonian, which is to say the dynamics and the notion of time with respect to which the system will evolve. Given a notion of time flow, there is a notion of KMS state.
One interesting place where KMS states come up is in (general) relativistic thermodynamics. In particular, the effect called the Unruh Effect is an example (here I'm referencing Robert Wald's book, "Quantum Field Theory in Curved Spacetime and Black Hole Thermodynamics"). Physically, the Unruh effect says the following. Suppose you're in flat spacetime (described by Minkowski space), and an inertial (unaccelerated) observer sees it in a vacuum. Then an accelerated observer will see space as full of a bath of particles at some temperature related to the acceleration. Mathematically, a change of coordinates (acceleration) implies there's a one-parameter family of automorphisms of the von Neumann algebra which describes the quantum field for particles. There's also a (trivial) family for the unaccelerated observer, since the coordinate system is not changing. The Unruh effect in this language is the fact that a vacuum state relative to the time-flow for an unaccelerated observer is a KMS state relative to the time-flow for the accelerated observer (at some temperature related to the acceleration).
The KMS state for a von Neumann algebra with a given Hamiltonian operator has a density matrix , which is again, up to some constant factors, just the exponential of the Hamiltonian operator. (For pure states, , and in general a matrix becomes a state by which for pure states is just the usual expectation value value for A, ).
Now, things are a bit more complicated in the von Neumann algebra picture than the classical picture, but Tomita-Takesaki theory tells us that as in the classical world, the correspondence between dynamics and KMS states goes both ways: there is a flow – the Tomita flow – associated to any given state, with respect to which the state is a KMS state. By "flow" here, I mean a one-parameter family of automorphisms of the von Neumann algebra. In the Heisenberg formalism for quantum mechanics, this is just what time is (i.e. states remain the same, but the algebra of observables is deformed with time). The way you find it is as follows (and why this is right involves some operator algebra I find a bit mysterious):
First, get the algebra acting on a Hilbert space , with a cyclic vector (i.e. such that is dense in – one way to get this is by the GNS representation, so that the state just acts on an operator by the expectation value at , as above, so that the vector is standing in, in the Hilbert space picture, for the state ). Then one can define an operator by the fact that, for any , one has
That is, acts like the conjugation operation on operators at , which is enough to define since is cyclic. This has a polar decomposition (analogous for operators to the polar form for complex numbers) of , where is antiunitary (this is conjugation, after all) and is self-adjoint. We need the self-adjoint part, because the Tomita flow is a one-parameter family of automorphisms given by:
An important fact for Connes' classification of von Neumann algebras is that the Tomita flow is basically unique – that is, it's unique up to an inner automorphism (i.e. a conjugation by some unitary operator – so in particular, if we're talking about a relativistic physical theory, a change of coordinates giving a different parameter would be an example). So while there are different flows, they're all "essentially" the same. There's a unique notion of time flow if we reduce the algebra to its cosets modulo inner automorphism. Now, in some cases, the Tomita flow consists entirely of inner automorphisms, and this reduction makes it disappear entirely (this happens in the finite-dimensional case, for instance). But in the general case this doesn't happen, and the Connes-Rovelli paper summarizes this by saying that von Neumann algebras are "intrinsically dynamic objects". So this is one interesting thing about the quantum view of states: there is a somewhat canonical notion of dynamics present just by virtue of the way states are described. In the classical world, this isn't the case.
Now, Rovelli's "Thermal Time" hypothesis is, basically, that the notion of time is a state-dependent one: instead of an independent variable, with respect to which other variables change, quantum mechanics (per Rovelli) makes predictions about correlations between different observed variables. More precisely, the hypothesis is that, given that we observe the world in some state, the right notion of time should just be the Tomita flow for that state. They claim that checking this for certain cosmological models, like the Friedman model, they get the usual notion of time flow. I have to admit, I have trouble grokking this idea as fundamental physics, because it seems like it's implying that the universe (or any system in it we look at) is always, a priori, in thermal equilibrium, which seems wrong to me since it evidently isn't. The Friedman model does assume an expanding universe in thermal equilibrium, but clearly we're not in exactly that world. On the other hand, the Tomita flow is definitely there in the von Neumann algebra view of quantum mechanics and states, so possibly I'm misinterpreting the nature of the claim. Also, as applied to quantum gravity, a "state" perhaps should be read as a state for the whole spacetime geometry of the universe – which is presumably static – and then the apparent "time change" would then be a result of the Tomita flow on operators describing actual physical observables. But on this view, I'm not sure how to understand "thermal equilibrium". So in the end, I don't really know how to take the "Thermal Time Hypothesis" as physics.
In any case, the idea that the right notion of time should be state-dependent does make some intuitive sense. The only physically, empirically accessible referent for time is "what a clock measures": in other words, there is some chosen system which we refer to whenever we say we're "measuring time". Different choices of system (that is, different clocks) will give different readings even if they happen to be moving together in an inertial frame – atomic clocks sitting side by side will still gradually drift out of sync. Even if "the system" means the whole universe, or just the gravitational field, clearly the notion of time even in General Relativity depends on the state of this system. If there is a non-state-dependent "god's-eye view" of which variable is time, we don't have empirical access to it. So while I can't really assess this idea confidently, it does seem to be getting at something important.
Update. Curvature. 2-monads.
Posted by Jeffrey Morton under category theory, geometry, higher dimensional algebra, musing, reading, talks, tqft, update
First off, a nice recent XKCD comic about height.
I've been busy of late starting up classes, working on a paper which should appear on the archive in a week or so on the groupoid/2-vector space stuff I wrote about last year. I resolved the issue I mentioned in a previous post on the subject, which isn't fundamentally that complicated, but I had to disentangle some notation and learn some representation theory to get it figured out. I'll maybe say something about that later, but right now I felt like making a little update. In the last few days I've also put together a little talk to give at Octoberfest in Montreal, where I'll be this weekend. Montreal is a lovely city to visit, so that should be enjoyable.
A little while ago I had a talk with Dan's new grad student – something for a class, I think – about classical and modern differential geometry, and the different ideas of curvature in the two settings. So the Gaussian curvature of a surface embedded in has a very multivariable-calculus feel to it: you think of curves passing through a point, parametrized by arclength. The have a moving orthogonal frame attached: unit tangent vector, its derivative, and their cross-product. The derivative of the unit tangent is always orthogonal (it's not changing length), so you can imagine it to be the radius of a circle, with length , the radius of curvature. Then you have curvature along that path. At any given point on a surface, you get two degrees of freedom – locally, the curve looks like a hyperboloid or an ellipse, or whatever, so there's actually a curvature form. The determinant gives the Gaussian curvature . So it's a "second derivative" of the surface itself (if you think of it as ). The Gaussian curvature, unlike the curvature in particular directions, is intrinsic – preserved by isometry of the surface, so it's not really dependent on the embedding. But this fact takes a little thinking to get to. Then there's the trace – the scalar curvature.
In a Riemannian manifold, you need to have a connection to see what the curvature is about. Given a metric, there's the associated Levi-Civita connection, and of course you'd get a metric on a surface embedded in , inherited from the ambient space. But the modern point of view is that the connection is the important object: the ambient space goes away entirely. Then you have to think of what the curvature represents differenly, since there's no normal vector to the surface any more. So now we're assuming we want an intrinsic version of the "second derivative of the surface" (or n-manifold) from the get-go. Here you look at the second derivative of the connection in any given coordinate system. You're finding the infinitesimal noncommutativity of parallel transport w.r.t two coordinate directions: take a given vector, and transport it two ways around an infinitesimal square, and take the difference, get a new vector. This all is written as a (3,1)-form, the Riemann tensor. Then you can contract it down and get a matrix again, and then contract on the last two indices (a trace!) and you get back the scalar curvature again – but this is all in terms of the connection (the coordinate dependence all disappears once you take the trace).
I hadn't thought about this stuff in coordinates for a while, so it was interesting to go back and work through it again.
In the noncommutative geometry seminar, we've been talking about classical mechanics – the Lagrangian and Hamiltonian formulation. So it reminded me of the intuition that curvature – a kind of second derivative – often shows up in Lagrangians for field theories using connections because it's analogous to kinetic energy. A typical mechanics Lagrangian is something like (kinetic energy) – (potential energy), but this doesn't appear much in the topological field theories I've been thinking about because their curvature is, by definition, zero. Topological field theory is kind of like statics, as opposed to mechanics, that way. But that's a handy simplification for the program of trying to categorify everything. Since the whole space of connections is infinite dimensional, worrying about categorified action principles opens up a can of worms anyway.
So it's also been interesting to remember some of that stuff and discuss it in the seminar – and it was inially suprising that it's the introduction to "noncommutative geometry". It does make sense, though, since that's related to the formalism of quantum mechanics: operator algebras on Hilbert spaces.
Finally, I was looking for something on 2-monads for various reasons, and found a paper by Steve Lack which I wanted to link to here so I don't forget it.
The reason I was looking was that (a) Enxin Wu, after talking about deformation theory of algebras, was asking after monads and the bar construction, which we talked about at the UCR "quantum gravity" seminar, so at some point we'll take a look at that stuff. But it reminded me that I was interested in the higher-categorical version of monads for a different reason. Namely, I'd been talking to Jamie Vicary about his categorical description of the harmonic oscillator, which is based on having a monad in a nice kind of monoidal category. Since my own category-theoretic look at the harmonic oscillator fits better with this groupoid/2-vector space program I'll be talking about at Octoberfest (and posting about a little later), it seemed reasonable to look at a categorified version of the same picture.
But first things first: figuring out what the heck a 2-monad is supposed to be. So I'll eventually read up on that, and maybe post a little blurb here, at some point.
Anyway, that update turned out to be longer than I thought it would be.
Arrow's Theorem, Strategic Voting etc.
Posted by Jeffrey Morton under musing, reading
I mentioned before that I wanted to try expanding the range of things I blog about here. One example is that I have a long-standing interest in game theory, which I think began when I was an undergrad at U of Waterloo. I don't (currently) do research in game theory, and have nothing particularly novel to say about it (though naturally a default question for me would be: can you categorify it?), but it is regularly used to analyze situations in evolutionary theory, and social sciences like economics, and politics. So it seems to be a fairly fundamental discipline, and worth looking at, I think.
For a little over a week now, Canada has been in a campaign leading up to a federal election in October. Together with the constant coverage of the US election campaign, this means the news is full of politicking, projections of results, etc. Also as usual, there are less-well-covered people agitating for electoral reform. So it seemed as good a time as any to blog something about social choice theory, which is an application of game theory to the special problem of making choices in groups which reflect the preferences of its members.
This is the defining problem for democracy, which is (in recent times) the world's most popular political system, so it has been pretty extensively studied. One aspect of democracy is voting. One aspect of social choice theory is the study of voting systems – algorithms which collect information about the populations preferences among some alternatives, and produce a "social choice function". For example, the US Presidential election process (in its ideal form, and excluding Primaries) collects the name of one candidate from each voter, and returns the name of a single winner. The Australian process collects more information from each voter: an ordered list of candidates. It uses a variant of the voting scheme called STV, which can return more than one winner.
Okay, so there are many different voting methods. Why should there be so many? Why not just use the best? For one thing, there are different criteria what is meant by "best", and every voting system has some sort of limitation. Depending on your precise criteria, you may prefer one system or another. A more precise statement of this comes in the form of Arrow's theorem.
Suppose we have a set of "choices" (a general term for alternatives, not necessarily candidates for office), and for each voter in the population of voters , there is , a total order on (a "preference order"). Call this information a "profile". Then we'd like to collect some information about the profile (perhaps the whole thing, perhaps not), and produce a "social preference order" . This is a function (where is the set of orders on ). The functon should satisfy some conditions, of course, and many possible conditions have been defined. Arrow's theorem is usually stated with five conditions, but an equivalent form uses these:
Unrestricted Domain: is surjective. (I.e. any preference order could potentially occur as output of the algorithm – e.g. in a single-winner election, the system should allow any candidate to win, given enough votes)
Independence of Irrelevant Alternatives: for any , the restriction agrees with (i.e. when applied to the restrictions of the orders to , it produces the restriction of to . In particular, removing a non-winning candidate from the ballot should not affect the outcome.)
Monotonicity: if prefers alternative to , then changing any so that it prefers to (assuming it did not) should not cause to prefer to (i.e. no voter's ballot has a harmful effect on their preferred candidate).
Arrow's theorem says that no algorithm satisfies all three conditions (and of course some algorithms don't satisfy any). (A different formulation and its proof can be found here.)
Most popular voting systems satisfy (1) and (3) since these are fairly obvious criteria of fairness: every candidate has a chance, and nobody's votes have negative weight (though note that a popular reform suggestion in the US, Instant Runoff Voting, fails monotonicity!). Condition (2) is the one that most people seem to find non-obvious. Failures of condition (2) can take various forms. One is "path dependence", which may occur if the decision is made through a number of votes, and the winner can (for some profiles) depend on the order in which the votes are taken (for instance, the runoff voting used in French presidential elections is path dependent). When a voting system has this property, the outcome can sometimes be manipulated by the "agenda-setter" (if there is one).
Another way (2) can fail is by creating the possibility of strategic voting: creating a situation in which voters have a rational incentive to give false information about their preferences. For instance, voters may want to avoid "splitting" the vote. In the US election in 2000, the presence of the "irrelevant" (i.e. non-winning) alternative Ralph Nader (allegedly) split the vote which otherwise would have gone to Gore, allowing Bush to win certain states. Somewhat similarly, in the current Canadian election, there is a single "right wing" party (Conservative), two or three "left wing" parties, depending on the riding (Liberal, NDP, Bloc Quebecois), and one which is neither (Green). In some competitive districts, for example, voters who prefer NDP to Liberal, but Liberal to Conservative, have a rational reason to vote for their second choice in order to avoid their third choice – assuming they know the "real" race is between Conservative and Liberal in their riding. (In my riding, North London Centre, this is not an issue since the outcome – Liberal – is not in doubt at all.)
These, like all voting system flaws, only become apparent when there are at least three options: with only two options, condition (2) doesn't apply, since removing one option leaves nothing to vote on. (This is one reason why many voting systems are said to "favour the two-party system", where its flaws are not apparent: when the vote is split, voters have an incentive to encourage parties to merge. This is why Canada now has only one "right-wing" party).
These two flaws also allow manipulation of the vote only when the manipulator knows enough about the profile of preferences. Apart from allowing parties to find key competitive ridings (or states, etc.), this is probably one of the most important uses of polling data. (Strategic voting is hard in Canada, since a good statistical sample of, say, 1000 in each of 308 ridings would require polling about 1% of the total population of the country, so one usually has only very imperfect information about the profile. Surveying 1000 people in each of 50 US states is relatively much easier. Even at that, projections are hard: try reading any of the methodology details on, say, fivethirtyeight.com for illustration.)
Now, speaking of strategic voting, the Gibbard-Satterthwaite theorem, closely related to Arrow's theorem, applies to social choice systems which aim to select a single winner based on a number of votes. (Arrow's theorem originally doesn't specifically apply to voting: it applies to any multi-criterion decision-making process). The G-S theorem says that any (deterministic) voting system satisfies one of three conditions:
It is dictatorial (i.e. there is a voter $v$ such that the winner depends only on $P_v$
It is restricted (i.e. there is some candidate who cannot win no matter the profile)
It allows strategic voting for some profiles
So it would seem that the possibility of strategic voting can't reasonably be done away with. This suggests the point of view that voting strategically is no more a problem than, say, playing chess strategically. The fact that an analysis of voting in terms of the theory of games of strategy suggests this point of view is probably not a coincidence…
As I remarked, here in London North Centre, the outcome of the vote is in no doubt, so, strategically speaking, any vote is as good as any other, or none. This curious statement is, paradoxically, only true if not believed by most voters – voting strategically in a context where other voters are doing the same, or worse yet, answering pollsters strategically, is a much more complicated game with incomplete (and unreliable) information. This sort of thing is probably why electoral reform is a persistent issue in Canada.
20th Century Space and 10th Century Space
Posted by Jeffrey Morton under c*-algebras, historical, noncommutative geometry, philosophical, physics, reading
A couple of posts ago, I mentioned Max Jammer's book "Concepts of Space" as a nice genealogy of that concept, with one shortcoming from my point of view – namely, as the subtitle suggests, it's a "History of Theories of Space in Physics", and since physics tends to use concepts out of mathematics, it lags a bit – at least as regards fundamental concepts. Riemannian geometry predates Einstein's use of it in General Relativity by fifty some years, for example. Heisenberg reinvented matrices and matrix multiplication (which eventually led to wholesale importation of group theory and representation theory into physics). More examples no doubt could be found (String Theory purports to be a counterexample, though opinions differ as to whether it is real physics, or "merely" important mathematics; until it starts interacting with experiments, I'm inclined to the latter, though of course contra Hardy, all important mathematics eventually becomes useful for something).
What I said was that it would be nice to see further investigation of concepts of space within mathematics, in particular Grothendieck's and Connes'. Well, in a different context I was referred to this survey paper by Pierre Cartier from a few years back, "A Mad Day's Work: From Grothendieck To Connes And Kontsevich, The Evolution Of Concepts Of Space And Symmetry", which does at least some of that – it's a fairly big-picture review that touches on the relationship between these new ideas of space. It follows that stream of the story of space up to the end of the 20th century or so.
There's also a little historical/biographical note on Alexander Grothendieck – the historical context is nice to see (one of the appealing things about Jammer's book). In this case, much of the interesting detail is more relevant if you find recent European political history interesting – but I do, so that's okay. In fact, I think it's helpful – maybe not mathematically, but in other ways – to understand the development of mathematical ideas in the context of history. This view seems to be better received the more ancient the history in question.
On the scientific end, Cartier tries to explain Grothendieck's point of view of space – in particular what we now call topos theory – and how it developed, as well as how it relates to Connes'. Pleasantly enough, a key link between them turns out to be groupoids! However, I'll pass on commenting on that at the moment.
Instead, let me take a bit of a tangent and jump back to Jammer's book. I'll tell you something from his chapter "Emancipation from Aristotelianism" which I found intriguing. This would be an atomistic theory of space – an idea that's now beginning to make something of a comeback, in the guise of some of the efforts toward a quantum theory of gravity (EDIT: but see comments below). Loop quantum gravity, for example, deals with space in terms of observables, which happen to take the form of holonomies of connections around loops. Some of these observables have interpretations in terms of lengths, areas, and volumes. It's a prediction of LQG that these measurements should have "quantized", which is to say integer, values: states of LQG are "spin networks", which is to say graphs with (quantized) labels on the edges, interpreted as areas (in a dual cell complex). (Notice this is yet again another, different, view of space, different from Grothendieck's or Connes', but shares with Connes especially the idea of probing space in some empirical way. Grothendieck "probes" space mainly via cohomology – how "empirical" that is depends on your point of view.)
The atomistic theory of space Jammer talks about is very different, but it does also come from trying to reconcile a discrete "quantum" theory of matter with a theory linking matter to space. In particular, the medieval Muslim philosophical school known as al Kalam tried to reconcile the Koran and Islamic theology with Greek philosophy (most of the "Hellenistic" world conquered by Alexander the Great, not least Egypt, is inside Dar al Islam, which is why many important Greek texts came into Europe via Arabic translations). Though they were, as Jammer says, "Emancipating" themselves from Aristotle, they did share some of his ideas about space.
For Aristotle, space meant "place" – the answer to the questions "where is it?" and "what is its shape and size?". In particular, it was first and foremost an attribute of some substance. All "where?" questions are about some THING. The answer is defined in terms of other things: my cat is on the ground, under the tree, beside the house. The "place" of an object was literally the inner shell of the containing body that held it (which was contained by some other body, and so on – there being no vacuum in Aristotle). So my "place" is defined by (depending how you look at it) my skin, my clothes, or the walls of the room I'm in. This is a relational view of space, though more hard-headed than, say, Leibniz's.
The philosophers of the Kalam had a similar relational view of space, but they didn't accept Aristotle's view of "substances", where each thing has its own essential identity, on which attributes are hung like hats. Instead, they believed in atomism, following Democritus and Leucippus: bodies were made out of little indivisible nuggets called "atoms". Macroscopic things were composites of atoms, and their attributes resulted from how the atoms were put together. Here's Jammer's description:
The atoms of the Kalam are indivisible particles, equal to each other and devoid of all extension. Spatial magnitude can be attributed only to a combination of atoms forming a body. Although a definite position (hayyiz) belongs to each individual atom, it does not occupy space (makan). It is rather the set of these positions – one is almost tempted to say, the system of relations – that constitutes spatial extension….
In the Kalam, these rather complicated and surprisingly abstract ideas were deemed necessary in order to meet Aristotle's objections against atomism on the ground that a spatial continuum cannot be constituted by, or resolved into, indivisibles nor can two points be continuous or contiguous with one another.
So like people who prefer a "background independent" quantum theory of gravity, they wanted to believe that space (geometry) derives from matter, and that matter is discrete, but space was commonly held to be continuous. Also alike, they resolved the problem by discarding the assumption of continuous space, and, by consideration of motion, to discrete time.
There are some differences, though. The most obvious is that the nodes of the graph in a spin network state don't represent units of matter, or "atoms". For that matter, quantum field theory doesn't really have "atoms" in the sense of indivisible units which don't break apart or interact. Everything interacts in QFT. (In some sense, interactions are more fundamental units in QFT than "particles" are – particles only (sic!) serve to connect one interaction with another.)
Another key difference is how space relates to matter. In Aristotle, and in the Kalam, space is defined directly by matter: two bits of matter "define" the space between them. In General Relativity (the modern theory with the "relational" view of space), there's still room for space as an actor in its own right, like Newton's absolute space-as-independent-variable – in other words, room for a vacuum, which Aristotle categorically denied could even conceivably exist. In GR, what matter determines is the curvature of space (more precisely the Einstein tensor of the curvature).
Well, so the differences are probably more informative than the similarities,
(Edit: To emphasize a key difference glossed over before… It was coupling to quantum matter which suggested quantizing the picture of space. Discreteness of the spectrum of various observables is a logically separate prediction in each case. Either matter or space(time) could have had continuous spectrum for the relevant observables and still been quantized – discrete matter would have given discreteness for some observed quantities, but not area, length, and so on. So in the modern setting, the link is much less direct.)
but the fact that theories of related discreteness in matter, space, and time, have been around for a thousand years or more is intriguing. The idea of empty space as an independent entity – in the modern form only about three hundred years old – appears to be the real novel part. One of the nice intuitions in Carlo Rovelli's book on Quantum Gravity, for me at least, was to say that, rather than there being a separate "space", we have a theory of fields defined on other fields as background – one of which, the "gravitational field" has customarily been taken for "space". So spatial geometry is a field, and it has some propagating (through space!) degrees of freedom – the particle associated to this field is a graviton. Nobody's ever seen one, mind you – but supposing they exist makes many of things easier.
To re-state a previous point: I think this is a nice aspect of categorification for dealing with space. Extending the "stuff/structure/properties" trichotomy to allow space to resemble both "stuff" and relations between stuff leaves room for both points of view.
I mention this because tomorrow I leave London (Ontario) for London (England), and thence to Nottingham, for the Quantum Gravity and Quantum Geometry Conference. It's been a while since I worked much on quantum gravity, per se, but this conference should be interesting because it seems to be a confluence of mathematically and physically inclined people, as the name suggests. I read on the program, for example, that Jerzy Lewandowski is speaking on QFT in Quantum Curved Spacetime, and suddenly remember that, oh yes, I did a Masters thesis (viz) on QFT in curved (classical) spacetime… but that was back in the 20th century!
It's been a while, and I only made a small start at it before, but that whole area of physics is quite pretty. Anyway, it should be interesting, and there are a number of people I'm looking forward to talking to.
Correspondences and Spans
Posted by Jeffrey Morton under c*-algebras, category theory, noncommutative geometry, philosophical, quantum mechanics, reading, spans
In the past couple of weeks, Masoud Khalkhali and I have been reading and discussing this paper by Marcolli and Al-Yasry. Along the way, I've been explaining some things I know about bicategories, spans, cospans and cobordisms, and so on, while Masoud has been explaining to me some of the basic ideas of noncommutative geometry, and (today) K-theory and cyclic cohomology. I find the paper pretty interesting, especially with a bit of that background help to identify and understand the main points. Noncommutative geometry is fairly new to me, but a lot of the material that goes into it turns out to be familiar stuff bearing unfamiliar names, or looked at in a somewhat different way than the one I'm accustomed to. For example, as I mentioned when I went to the Groupoidfest conference, there's a theme in NCG involving groupoids, and algebras of -linear combinations of "elements" in a groupoid. But these "elements" are actually morphisms, and this picture is commonly drawn without objects at all. I've mentioned before some ideas for how to deal with this (roughly: is easy to confuse with the algebra of matrices over ), but anything special I have to say about that is something I'll hide under my hat for the moment.
I must say that, though some aspects of how people talk about it, like the one I just mentioned, seem a bit off, to my mind, I like NCG in many respects. One is the way it ties in to ideas I know a bit about from the physics end of things, such as algebras of operators on Hilbert spaces. People talk about Hamiltonians, concepts of time-evolution, creation and annihilation operators, and so on in the algebras that are supposed to represent spaces. I don't yet understand how this all fits together, but it's definitely appealing.
Another good thing about NCG is the clever elegance of Connes' original idea of yet another way to generalize the concept "space". Namely, there was already a duality between spaces (in the usual sense) and commutative algebras (of functions on spaces), so generalizing to noncommutative algebras should give corresponding concepts of "spaces" which are different from all the usual ones in fairly profound ways. I'm assured, though I don't really know how it all works, that one can do all sorts of things with these "spaces", such as finding their volumes, defining derivatives of functions on them, and so on. They do lack some qualities traditionally associated with space – for instance, many of them don't have many, or in some cases any, points. But then, "point" is a dubious concept to begin with, if you want a framework for physics – nobody's ever seen one, physically, and it's not clear to me what seeing one would consist of…
(As an aside – this is different from other versions of "pointless" topology, such as the passage from ordinary topologies to, sites in the sense of Grothendieck. The notion of "space" went through some fairly serious mutations during the 20th century: from Einstein's two theories of relativity, to these and other mathematicians' generalizations, the concept of "space" has turned out to be either very problematic, or wonderfully flexible. A neat book is Max Jammer's "Concepts of Space": though it focuses on physics and stops in the 1930's, you get to appreciate how this concept gradually came together out of folk concepts, went through several very different stages, and in the 20th century started to be warped out of all recognition. It's as if – to adapt Dan Dennett – "their word for milk became our word for health".I would like to see a comparable history of mathematicians' more various concepts, covering more of the 20th century. Plus, one could probably write a less Eurocentric genealogy nowadays than Jammer did in 1954.)
Anyway, what I'd like to say about the Marcolli and Al-Yasry paper at the moment has to do with the setup, rather than the later parts, which are also interesting. This has to do with the idea of a correspondence between noncommutative spaces. Masoud explained to me that, related to the matter of not having many points, such "spaces" also tend to be short on honest-to-goodness maps between them. Instead, it seems that people often use correspondences. Using that duality to replace spaces with algebras, a recurring idea is to think of a category where morphism from algebra to algebra is not a map, but a left-right -bimodule, . This is similar to the business of making categories of spans.
Let me describe briefly what Marcolli and Al-Yasry describe in the paper. They actually have a 2-category. It has:
Objects: An object is a copy of the 3-sphere with an embedded graph .
Morphisms: A morphism is a span of branched covers of 3-manifolds over :
such that each of the maps is branched over a graph containing (perhaps strictly). In fact, as they point out, there's a theorem (due to Alexander) proving that ANY 3-manifold can be realized as a branched cover over the 3-sphere, branched at some graph (though perhaps not including a given , and certainly not uniquely).
2-Morphisms: A 2-morphism between morphisms and (together with their maps) is a cobordism , in a way that's compatible with the structure of the $lateux M_i$ as branched covers of the 3-sphere. The are being included as components of the boundary – I'm writing it this way to emphasize that a cobordism is a kind of cospan. Here, it's a cospan between spans.
This is somewhat familiar to me, though I'd been thinking mostly about examples of cospans between cospans – in fact, thinking of both as cobordisms. From a categorical point of view, this is very similar, except that with spans you compose not by gluing along a shared boundary, but taking a fibred product over one of the objects (in this case, one of the spheres). Abstractly, these are dual – one is a pushout, and the other is a pullback – but in practice, they look quite different.
However, this higher-categorical stuff can be put aside temporarily – they get back to it later, but to start with, they just collapse all the -categories into -sets by taking morphisms to be connected components of the categories. That is, they think about taking morphisms to be cobordism classes of manifolds (in a setting where both manifolds and cobordisms have some branched-covering information hanging around that needs to be respected – they're supposed to be morphisms, after all).
So the result is a category. Because they're writing for noncommutative geometry people, who are happy with the word "groupoid" but not "category", they actually call it a "semigroupoid" – but as they point out, "semigroupoid" is essentially a synonym for (small) "category".
Apparently it's quite common in NCG to do certain things with groupoids – like taking the groupoid algebra of -linear combinations of morphisms, with a product that comes from multiplying coefficients and composing morphisms whenever possible. The corresponding general thing is a categorical algebra. There are several quantum-mechanical-flavoured things that can be done with it. One is to let it act as an algebra of operators on a Hilbert space.
This is, again, a fairly standard business. The way it works is to define a Hilbert space at each object of the category, which has a basis consisting of all morphisms whose source is . Then the algebra acts on this, since any morphism which can be post-composed with one starting at acts (by composition) to give a new morphism starting at – that is, it acts on basis elements of to give new ones. Extending linearly, algebra elements (combinations of morphisms) also act on .
So this gives, at each object , an algebra of operators acting on a Hilbert space – the main components of a noncommutative space (actually, these need to be defined by a spectral triple: the missing ingredient in this description is a special Dirac operator). Furthermore, the morphisms (which in this case are, remember, given by those spans of branched covers) give correspondences between these.
Anyway, I don't really grasp the big picture this fits into, but reading this paper with Masoud is interesting. It ties into a number of things I've already thought about, but also suggests all sorts of connections with other topics and opportunities to learn some new ideas. That's nice, because although I still have plenty of work to do getting papers written up on work already done, I was starting to feel a little bit narrowly focused.
|
CommonCrawl
|
Infimal convolution and duality in convex optimal control problems with second order evolution differential inclusions
EECT Home
Uniform attractors of 3D Navier-Stokes-Voigt equations with memory and singularly oscillating external forces
March 2021, 10(1): 25-36. doi: 10.3934/eect.2020040
Global existence for a class of Keller-Segel models with signal-dependent motility and general logistic term
Wenbin Lv , and Qingyuan Wang
School of Mathematical Sciences, Shanxi University, Taiyuan 030006, China
* Corresponding author: Wenbin Lv
Received July 2019 Revised November 2019 Published March 2021 Early access March 2020
This paper focuses on the global existence for a class of Keller-Segel models with signal-dependent motility and general logistic term under homogeneous Neumann boundary conditions in a two-dimensional smoothly bounded domain. We show that if
$ \lambda\in\mathbb{R}, \, \mu>0 $
$ l>2 $
are constants, then for all sufficiently smooth initial data the system
$\left\{ \begin{array}{l}{u_t} = \Delta (\gamma (v)u) + \lambda u - \mu {u^l},\;\;\;\;\;x \in \Omega ,{\mkern 1mu} t > 0,\;\\{v_t} = \Delta v - v + u,\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;x \in \Omega ,{\mkern 1mu} t > 0,\end{array} \right.$
possesses a global classical solution.
Keywords: Chemotaxis, global existence, general logistic source, signal-dependent motility, singularity.
Mathematics Subject Classification: Primary: 35A01, 35B45, 35K51; Secondary: 35Q92.
Citation: Wenbin Lv, Qingyuan Wang. Global existence for a class of Keller-Segel models with signal-dependent motility and general logistic term. Evolution Equations & Control Theory, 2021, 10 (1) : 25-36. doi: 10.3934/eect.2020040
H. Amann, Nonhomogeneous linear and quasilinear elliptic and parabolic boundary value problems, Function Spaces, Differential Operators and Nonlinear Analysis (Friedrichroda, 1992), Teubner-Texte Math., Teubner, Stuttgart, 133 (1993), 9-126. doi: 10.1007/978-3-663-11336-2_1. Google Scholar
X. R. Cao, Large time behavior in the logistic Keller-Segel model via maximal Sobolev regularity, Discrete Contin. Dyn. Syst. Ser. B, 22 (2017), 3369-3378. doi: 10.3934/dcdsb.2017141. Google Scholar
M. A. Herrero and J. J. L. Velázquez, A blow-up mechanism for a chemotaxis model, Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 24 (1997), 633–683. Google Scholar
T. Hillen and K. J. Painter, A user's guide to PDE models for chemotaxis, J. Math. Biol., 58 (2009), 183-217. doi: 10.1007/s00285-008-0201-3. Google Scholar
D. Horstmann, From 1970 until present: The Keller-Segel model in chemotaxis and its consequences. Ⅱ, Jahresber. Deutsch. Math.-Verein., 106 (2004), 51-69. Google Scholar
D. Horstmann and M. Winkler, Boundedness vs. blow-up in a chemotaxis system, J. Differential Equations, 215 (2005), 52-107. doi: 10.1016/j.jde.2004.10.022. Google Scholar
H.-Y. Jin, Y.-J. Kim and Z.-A. Wang, Boundedness, stabilization, and pattern formation driven by density-suppressed motility, SIAM J. Appl. Math., 78 (2018), 1632-1657. doi: 10.1137/17M1144647. Google Scholar
E. F. Keller and L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theoret. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5. Google Scholar
R. Kowalczyk and Z. Szymańska, On the global existence of solutions to an aggregation model, J. Math. Anal. Appl., 343 (2008), 379-398. doi: 10.1016/j.jmaa.2008.01.005. Google Scholar
J. Lankeit, Eventual smoothness and asymptotics in a three-dimensional chemotaxis system with logistic source, J. Differential Equations, 258 (2015), 1158-1191. doi: 10.1016/j.jde.2014.10.016. Google Scholar
T. Suzuki, Free Energy and Self-Interacting Particles, Progress in Nonlinear Differential Equations and their Applications, 62. Birkhäuser Boston, Inc., Boston, MA, 2005. doi: 10.1007/0-8176-4436-9. Google Scholar
Y. S. Tao and M. Winkler, A chemotaxis-haptotaxis model: The roles of nonlinear diffusion and logistic source, SIAM J. Math. Anal., 43 (2011), 685-704. doi: 10.1137/100802943. Google Scholar
Y. S. Tao and M. Winkler, Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity, J. Differential Equations, 252 (2012), 692-715. doi: 10.1016/j.jde.2011.08.019. Google Scholar
Y. S. Tao and M. Winkler, Effects of signal-dependent motilities in a Keller-Segel-type reaction-diffusion system, Math. Models Methods Appl. Sci., 27 (2017), 1645-1683. doi: 10.1142/S0218202517500282. Google Scholar
J. I. Tello and M. Winkler, A chemotaxis system with logistic source, Comm. Partial Differential Equations, 32 (2007), 849-877. doi: 10.1080/03605300701319003. Google Scholar
J. P. Wang and M. X. Wang, Boundedness in the higher-dimensional Keller-Segel model with signal-dependent motility and logistic growth, J. Math. Phys., 60 (2019), 011507, 14 pp. doi: 10.1063/1.5061738. Google Scholar
Z. A. Wang and T. Hillen, Classical solutions and pattern formation for a volume filling chemotaxis model, Chaos, 17 (2007), 037108, 13 pp. doi: 10.1063/1.2766864. Google Scholar
M. Winkler, Chemotaxis with logistic source: Very weak global solutions and their boundedness properties, J. Math. Anal. Appl., 348 (2008), 708-729. doi: 10.1016/j.jmaa.2008.07.071. Google Scholar
M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Differential Equations, 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008. Google Scholar
M. Winkler, Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source, Comm. Partial Differential Equations, 35 (2010), 1516-1537. doi: 10.1080/03605300903473426. Google Scholar
M. Winkler, Global asymptotic stability of constant equilibria in a fully parabolic chemotaxis system with strong logistic dampening, J. Differential Equations, 257 (2014), 1056-1077. doi: 10.1016/j.jde.2014.04.023. Google Scholar
M. Winkler, How far can chemotactic cross-diffusion enforce exceeding carrying capacities?, J. Nonlinear Sci., 24 (2014), 809-855. doi: 10.1007/s00332-014-9205-x. Google Scholar
M. Winkler, Emergence of large population densities despite logistic growth restrictions in fully parabolic chemotaxis systems, Discrete Contin. Dyn. Syst. Ser. B, 22 (2017), 2777-2793. doi: 10.3934/dcdsb.2017135. Google Scholar
C. Yoon and Y.-J. Kim, Global existence and aggregation in a Keller-Segel model with Fokker-Planck diffusion, Acta Appl. Math., 149 (2017), 101-123. doi: 10.1007/s10440-016-0089-7. Google Scholar
Hai-Yang Jin, Zhi-An Wang. The Keller-Segel system with logistic growth and signal-dependent motility. Discrete & Continuous Dynamical Systems - B, 2021, 26 (6) : 3023-3041. doi: 10.3934/dcdsb.2020218
Hui Zhao, Zhengrong Liu, Yiren Chen. Global dynamics of a chemotaxis model with signal-dependent diffusion and sensitivity. Discrete & Continuous Dynamical Systems - B, 2021, 26 (12) : 6155-6171. doi: 10.3934/dcdsb.2021011
Abelardo Duarte-Rodríguez, Lucas C. F. Ferreira, Élder J. Villamizar-Roa. Global existence for an attraction-repulsion chemotaxis fluid model with logistic source. Discrete & Continuous Dynamical Systems - B, 2019, 24 (2) : 423-447. doi: 10.3934/dcdsb.2018180
Tomomi Yokota, Noriaki Yoshino. Existence of solutions to chemotaxis dynamics with logistic source. Conference Publications, 2015, 2015 (special) : 1125-1133. doi: 10.3934/proc.2015.1125
Masaaki Mizukami. Boundedness and asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2301-2319. doi: 10.3934/dcdsb.2017097
Masaaki Mizukami. Improvement of conditions for asymptotic stability in a two-species chemotaxis-competition model with signal-dependent sensitivity. Discrete & Continuous Dynamical Systems - S, 2020, 13 (2) : 269-278. doi: 10.3934/dcdss.2020015
Ling Liu, Jiashan Zheng. Global existence and boundedness of solution of a parabolic-parabolic-ODE chemotaxis-haptotaxis model with (generalized) logistic source. Discrete & Continuous Dynamical Systems - B, 2019, 24 (7) : 3357-3377. doi: 10.3934/dcdsb.2018324
Hong Yi, Chunlai Mu, Guangyu Xu, Pan Dai. A blow-up result for the chemotaxis system with nonlinear signal production and logistic source. Discrete & Continuous Dynamical Systems - B, 2021, 26 (5) : 2537-2559. doi: 10.3934/dcdsb.2020194
Chun Huang. Global boundedness for a chemotaxis-competition system with signal dependent sensitivity and loop. Electronic Research Archive, 2021, 29 (5) : 3261-3279. doi: 10.3934/era.2021037
Ke Lin, Chunlai Mu. Global dynamics in a fully parabolic chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2016, 36 (9) : 5025-5046. doi: 10.3934/dcds.2016018
Guoqiang Ren, Heping Ma. Global existence in a chemotaxis system with singular sensitivity and signal production. Discrete & Continuous Dynamical Systems - B, 2022, 27 (1) : 343-360. doi: 10.3934/dcdsb.2021045
Rachidi B. Salako, Wenxian Shen. Existence of traveling wave solutions to parabolic-elliptic-elliptic chemotaxis systems with logistic source. Discrete & Continuous Dynamical Systems - S, 2020, 13 (2) : 293-319. doi: 10.3934/dcdss.2020017
Guoqiang Ren, Bin Liu. Global boundedness of solutions to a chemotaxis-fluid system with singular sensitivity and logistic source. Communications on Pure & Applied Analysis, 2020, 19 (7) : 3843-3883. doi: 10.3934/cpaa.2020170
Ke Lin, Chunlai Mu. Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2233-2260. doi: 10.3934/dcdsb.2017094
Chunhua Jin. Global classical solution and stability to a coupled chemotaxis-fluid model with logistic source. Discrete & Continuous Dynamical Systems, 2018, 38 (7) : 3547-3566. doi: 10.3934/dcds.2018150
Langhao Zhou, Liangwei Wang, Chunhua Jin. Global solvability to a singular chemotaxis-consumption model with fast and slow diffusion and logistic source. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021122
Hai-Yang Jin. Boundedness and large time behavior in a two-dimensional Keller-Segel-Navier-Stokes system with signal-dependent diffusion and sensitivity. Discrete & Continuous Dynamical Systems, 2018, 38 (7) : 3595-3616. doi: 10.3934/dcds.2018155
Jie Zhao. Large time behavior of solution to quasilinear chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2020, 40 (3) : 1737-1755. doi: 10.3934/dcds.2020091
Liangchen Wang, Yuhuan Li, Chunlai Mu. Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems, 2014, 34 (2) : 789-802. doi: 10.3934/dcds.2014.34.789
Wenji Zhang, Pengcheng Niu. Asymptotics in a two-species chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems - B, 2021, 26 (8) : 4281-4298. doi: 10.3934/dcdsb.2020288
Wenbin Lv Qingyuan Wang
|
CommonCrawl
|
Physics and Astronomy » High Energy Physics »
HEP Seminars
UCL HEP Group
Internal homepage
ENLIGHT 2018
BOOST 2014
DIS 2008
SMLHC09
GridPP22
Young Theorists' Forum
Spreadbury Library
Summer Students
MSc Physics
Trigger+Software
Data Science CDT
ATLAS PhD Studentships
VHEeP and applications
CDF Summary
CREAM TEA
LUX-ZEPLIN (LZ)
Quantum Sensors
Xenon Futures
Low-background Facilities
Linear Collider
Cavity BPM
Papers and Talks
Liquid Argon
SBND
Neutrino Oscillations
Muons
NEMO III
SuperNEMO
PBT Wiki
QTNM
Beyond SM
PDF4LHC
MMHT PDFs
MSHT PDFs
UHE Neutrinos
ACORNE
Weak Measurement
EuXFEL
CC R&D
ZEUS Central Tracking Detector
ZEUS UK reports
CTD and MVD Electronics
Royal Society Summer Science Exhibition
Bright Club & Science Showoff
LHCsound
Computing, Engineering and Technical Staff
Visitors and Emeriti
UCL High Energy Physics
HEP Seminars Home
HEP Seminars (2023)
HEP Seminars (1994-2022)
HEP Seminar Archive
Other departmental Seminars
UCL HEP Seminars 1994-2022
Friday 2 December 2022, 16:00 : Matthew Mccullough (CERN)
The Higgs Boson Under a Microscope
What does the Higgs boson look like if you put it under a microscope? How does it move? Is it made up of smaller parts? Does it give itself mass? In this talk I will sketch a theoretical range of possible answers to these questions and, ultimately, how we might hope to answer them with the HL-LHC and future colliders.
Friday 18 November 2022, 16:00 : Giacomo Magni (Nikhef)
Evidence for intrinsic charm quarks in the proton
QCD, describes the proton in terms of quarks and gluons. The proton is a state of two up quarks and one down quark bound by gluons, but quantum theory predicts that in addition there is an infinite number of quark-antiquark pairs. Both light and heavy quarks, whose mass is respectively smaller or bigger than the mass of the proton, are revealed inside the proton in high-energy collisions. However, it is unclear whether heavy quarks also exist as a part of the proton wavefunction, which is determined by non-perturbative dynamics and accordingly unknown: so-called intrinsic heavy quarks. After a brief introduction how parton distribution functions (PDFs) can be extracted from high energy data, we provide a first evidence for intrinsic charm by exploiting a high-precision determination of the quark-gluon content of the nucleon based on machine learning and a large experimental dataset. We disentangle the intrinsic charm component from charm-anticharm pairs arising from high-energy radiation, showing a generally good agreement with models. Finally we show how these findings can be compared to recent data on Z-boson production with charm jets from the LHCb experiment.
Friday 11 November 2022, 16:00 : Andrew Stevens (UCL)
First results from LUX-ZEPLIN
The LUX-ZEPLIN (LZ) experiment is a dark matter detector centred on a dual-phase xenon time projection chamber operating at the Sanford Underground Research Facility in Lead, South Dakota, USA. Results from LZ's first search for Weakly Interacting Massive Particles (WIMPs) with an exposure of 60 live days using a fiducial mass of 5.5 tonnes were recently published. A profile-likelihood analysis shows the data to be consistent with a background-only hypothesis, setting new limits on spin-independent WIMP-nucleon cross-sections for WIMP masses above 9 GeV/c2 . The most stringent limit is set at 30 GeV/c2 , excluding cross sections above 5.9 × 10−48 cm2 at the 90% confidence level. This talk will give an overview of the LZ detector, a description of the first results, and a brief look at the science program that is now accessible with the LZ experiment.
Friday 28 October 2022, 16:00 : Lydia Beresford (DESY)
Measuring tau g-2 using Pb+Pb collisions
The electromagnetic moments of the tau lepton are highly sensitive to new physics but are challenging to measure due to the short tau lifetime. Recently, the ATLAS and CMS Collaborations set the first new constraints on the tau magnetic moment ('g-2') in nearly two decades, using LHC heavy ion collisions as an intense source of photon collisions to observe photo-production of tau pairs in Pb+Pb collisions. In this seminar I will discuss these new results and I will also highlight exciting future prospects.
Friday 21 October 2022, 16:00 : Sonia Escribano Rodriguez (UCL)
Prompt gamma-ray imaging of nanoparticles for in vivo range verification in proton therapy
Proton therapy is an emerging modality for cancer treatment. It produces a better dose conformation, reducing the damage to the surrounding healthy structures and tissues. However, in vivo range verification is desirable to minimize beam delivery errors during the treatment. The most promising range verification technique is prompt gamma-ray imaging (PGI). In addition, the use of nanoparticles (NPs) in proton therapy has been developed in the past years, as particles with high atomic number produce an enhancement in the dose received by the tumor. The aim of the project is to investigate the feasibility of performing prompt gamma-ray imaging measurements using characteristic gamma rays emitted from a nanoparticles target. To test the proof of principle an in-house designed target, consisting of a solution of magnetite NPs (Fe3O4) diluted in water, was irradiated with a proton beam at different energies. Two different detection systems were placed perpendicular to the target to measure the prompt gamma-rays emitted from the NPs after the proton irradiation. The experimental results, obtained at KVI-CART and the University of Birmingham, are compared to a Geant4 Monte Carlo simulation.
Friday 17 June 2022, 16:00 : Rafael Teixeira De Lima (SLAC)
Exploring Di-Higgs production with beauty quarks and Machine Learning
The Standard Model of particle physics dictates how our universe works. Decades of experimental research have probed it in details. In the last 10 years, the final piece of the puzzle, the Higgs boson, has been discovered and studied to great precision. However, one area of the Standard Model remains a mystery: the Higgs potential. Determining the parameters of the Higgs potential can have significant implications in the nature of the electroweak symmetry breaking (the process by which fundamental particles acquire mass) and the history of the universe. In this talk, I will explore how we study the Higgs potential at the Large Hadron Collider (LHC) through the search for events where two Higgs bosons are produced (Di-Higgs). In particular, I will describe how the ATLAS experiment at the LHC conducts this search via the Higgs decays to beauty quarks. This search is only possible through the use of different Machine Learning algorithms, which will also be discussed.
Friday 10 June 2022, 16:00 : Kevin Lesko (LBL)
Low Background Assay & Cleanliness Requirements for Future Rare-Search Experiments
I will review low background assay for fixed contaminants and surface cleanliness efforts from SNO, LUX, and LZ. Future large-scale rare-search experiments such as a G3 LXe or Neutrinoless Double-beta Decay may benefit from these experiences and lessons-learned. The challenges of large scale, underground construction efforts, and increasingly lower acceptable levels of fixed contamination are examined.
Friday 27 May 2022, 16:00 : Robin G. Stuart
Searching For Shackleton's Vessel Endurance – in the numbers
On 21 November 1915 Sir Ernest Shackleton's vessel Endurance was crushed by ice in the Weddell Sea and sank at a position given as 68°39¢30²S 52°26¢30²W. Close examination of the sextant sights recorded in Captain Frank Worsley's original logbooks showed, however, that the location of the wreck would be offset from this position. The navigational methods and calculations that Worsley used to fix the position will be discussed as well as the reanalysis of the observations using modern techniques. Lunar occultation timings, performed in the southern winter of 1915 to rate the chronometers for longitude, were reduced using JPL ephemerides for the Moon and Hipparcos star positions and yield results that are significantly different from those obtained from tables in the Nautical Almanac of the time. Various factors are at play but when taken together they predicted that the wreck should lie to the south and likely to the east of the recorded position.
Friday 13 May 2022, 16:00 : Chris Hays (Oxford)
High-precision measurement of the W boson mass with the CDF II detector
The mass of the W boson, a mediator of the weak force between elementary particles, is tightly constrained by the symmetries of the standard model of particle physics. The Higgs boson was the last missing component of the model. After the observation of the Higgs boson, a measurement of the W boson mass provides a stringent test of the model. We measure the W boson mass using data corresponding to 8.8 inverse femtobarns of integrated luminosity collected in proton-antiproton collisions at a 1.96 TeV centre-of-mass energy with the CDF II detector at the Fermilab Tevatron collider. The measured value is in tension with the prediction of the model.
Friday 29 April 2022, 16:00 : Louie Corpe (CERN)
The search for exotic long-lived particles: illuminating a blind spot of the LHC programme
Exotic long-lived particles (LLPs) occur in many well-motivated extensions to the Standard Model (SM) of particle physics, and could explain the nature of Dark Matter (DM). However, LLPs could have been missed by the traditional LHC search programme to date due to their non-standard energy deposition patterns, which would often be thrown away as noise by standard reconstruction techniques. LLP signatures could be hiding in the "Blind Spot" of LHC searches! This talk will include a general motivation for LLP searches, and highlight the sort of signatures which are expected at CMS and ATLAS, as well as the unusual background which must be mastered. Recent examples of LLP results from the ATLAS collaboration will be used to highlight some of the novel techniques which are employed in these searches. Finally, prospects for the development of the LLP programme at CERN in coming years will be presented.
Friday 25 March 2022, 10:00 : IoP Practice Talks — E7/Zoom
IoP Practice Talks
Practice Talks for IoP Conference.
Friday 18 March 2022, 16:00 : Sean Mee (Graz)
On the construction of theories of composite dark matter
The dark matter remains one of the most famous unanswered questions in physics today. Despite the popular weakly interacting massive particle (WIMP) paradigm, a new class of theories posits that dark matter could be described as the lightest bound state in a QCD-like confining hidden sector. We discuss the construction of theories which describe the interactions of these states with other interesting bound states of the hidden sector, with a particular focus on pseudoreal symmetry groups. We highlight the key differences between such theories and the more familiar SM QCD. We present some results for the low-energy quantities describing the spectrum of these composite theories (masses and decay constants). Finally, we discuss the simplest portals which could allow us to look for such sectors at colliders and elsewhere, the associated symmetry breaking patterns and the potential phenomenological consequences.
Wednesday 16 March 2022, 15:30 : Prof. David Waters — Harrie Massey LT
XXXI Elizabeth Spreadbury Lecture: The Mystery of Neutrino Mass
We don't know the mass of the most abundant fermion in the universe. We suspect that the origin of neutrino masses might be quite different from those of other fundamental particles. The talk will begin with a review of what we know about neutrino mass and its generation. We will present experiments, some of which are UCL-led, that aim to determine the nature of neutrino mass. A crude neutrino mass measurement will be attempted during the lecture itself, before presenting a perspective on how this field may develop over the next few years.
Friday 11 March 2022, 16:00 : Nicola McConkey (Manchester)
Latest results from MicroBooNE and the outlook for the SBN programme
The MicroBooNE experiment is a liquid argon neutrino detector making groundbreaking neutrino physics measurements in the BNB and NuMI beamlines at Fermilab. I will present the latest results from new physics searches and neutrino cross-section measurements with MicroBooNE, and the outlook for the future, with a particular view to the physics potential of the Short Baseline Neutrino programme at Fermilab.
Friday 04 March 2022, 16:00 : Inwook Kim (LANL)
Results from Baksan Experiment on Sterile Transitions (BEST)
The Baksan Experiment on Sterile Transitions (BEST) was designed to investigate the deficit of electron neutrinos, ƞe, observed in previous gallium-based radiochemical measurements with high intensity neutrino sources, commonly referred to as the gallium anomaly. Based on the Gallium-Germanium Neutrino Telescope (GGNT) of the SAGE experiment, the BEST setup is comprised of two zones of liquid Ga target to explore neutrino oscillations on the meter scale. Any deficits in the 71Ge production rates in the two zones, as well as the differences between them, would be an indication of nonstandard neutrino properties at this short scale. From July 5th to October 23rd 2019, the two-zone target was exposed to the mainly monoenergetic 51Cr neutrino source ten times with 20 independent 71Ge extractions from the two Ga targets. The 71Ge decay rates were measured from July 2019 to March 2020 to determine the total production rate from the neutrino source. At the end of the experiment, the counting systems were calibrated using 71Ge isotope data taken in November 2020. We report the results from the BEST sterile neutrino oscillation experiment. 4σ deviations from unity were observed for the ratio of the measured 71Ge production rate to the predicted rate from the known cross section in both zones and confirm the previously reported Ga anomaly. If interpreted in the context of neutrino oscillations, the deficits give best fit oscillation parameters of Δm2 = 3.3+∞-2.3 ~eV2 and sin2 2θ = 0.42+0.15-0.17, consistent with νe → νs oscillations governed by a surprisingly large mixing angle.
Friday 25 February 2022, 16:00 : Veronique Boisvert (RHUL)
The Climate Emergency: can Particle Physics ever be sustainable?
We live in a climate emergency and consequently all countries are putting in place measures to reduce their carbon emissions in order to reach a so-called "net zero emissions" by 2050. All aspects of economic life will be affected by such measures, including particle physics research. I will present some examples of sources of carbon emissions within the field of particle physics. This will include emissions associated with building and running accelerators, detector operations, high-performance computing and activities associated with our research life like travel. I will also present solutions being developed for addressing this in the near and long term as well as recommendations for the field.
Friday 18 February 2022, 16:00 : Alexander Booth (Queen Mary)
Latest 3-flavour Oscillation Results form the NOvA Experiment
NOvA is a long-baseline neutrino oscillation experiment searching for electron neutrino appearance and muon neutrino disappearance. To do this, NOvA uses the NuMI beam at Fermi National Accelerator Laboratory along with two functionally identical detectors, separated by a baseline of 809 km. A near detector, which is close to the point of neutrino production, provides a measurement of initial beam energy spectra and flavour composition. The spectra are then extrapolated to a far detector and compared to data to look for oscillations. The experiment is able to constrain several parameters of the PMNS matrix and is sensitive to the neutrino mass hierarchy. This seminar presents a 3 flavour oscillation analysis of 6 years of NuMI data collected by the NOvA far detector corresponding to a 14 ktonne equivalent exposure of 3.60\times 10^{20}$ and 2.50\times 10^{20}$ protons on target, in neutrino and antineutrino beam modes respectively. The analysis, shown for the first time at the International Conference on Neutrino Physics and Astrophysics in 2020, builds on previous results with a new simulation, updated reconstruction and roughly 50\% more neutrino data.
Friday 11 February 2022, 16:00 : Sunny Vagnozzi (Cambridge)
Terrestrial, cosmological, and astrophysical direct detection of dark energy
Most of the efforts in searching for dark energy, the component responsible for the accelerated expansion of the Universe, have focused on its gravitational signatures and in particular on constraining its equation of state: however, there is a lot to be learned about dark energy by getting off the beaten track. I will argue that non-gravitational interactions of dark energy with visible matter are natural and somewhat unavoidable, and lead to the possibility of direct
Friday 17 December 2021, 16:00 : Mario Campenelli
Forward detectors in ATLAS
While most of the particles produced in LHC collisions are emitted in the forward detectors, most of the physics program and detectors focus on high-pt small-rapidity production. But the ATLAS experiment (as well as CMS) has several detectors aiming at measuring particles emitted very close to the beam, covering a large class of measurements, different and complementary to those performed with only the central detectors. In this seminar I will describe the forward detectors from ATLAS, their goals, their current status and their use in past and future physics measurements.
Friday 10 December 2021, 16:00 : Elisabetta Pallante (Groningen)
Theory of the Muon g-2
I review the status of the theoretical prediction for the anomalous magnetic moment of the muon within the Standard Model of Particle Physics. Guided by a historical perspective, the seminar will cover physics and anecdotes from the first calculations to current efforts. I will discuss the challenges, strategies, status, and open questions of the Standard Model prediction. Is the discrepancy between the latter and current experimental results hinting at new physics?
Friday 26 November 2021, 16:00 : Melissa van Beekveld (Oxford)
SUSY wanted - dead or alive
Due to the null-results at the LHC in the search for supersymmetry (SUSY), there is a growing belief that the concept of SUSY is just another failed theory. In this talk, we will examine where this belief comes from. We will take a careful look at the fine-tuning problem, and see that the story is a lot more nuanced than often suggested. If time allows, we will also discuss the dark side of SUSY.
Friday 19 November 2021, 16:00 : Yoshi Uchida (Imperial)
CP-Violation: the next stop on our neutrino journey of discovery?
The field of neutrino oscillations has experienced several breakthrough moments over the past couple of decades, each time with a number of vastly different experiments coming together to point the way forward. The most recent breakthrough, from the T2K Experiment, shows that we are already able to start probing whether our description of neutrinos should include a significant CP-violating complex phase, something that even the most optimistic of us might not have bet too much on when we set out to build the experiment. As we enter a new era, with DUNE and Hyper-K—the next generation of very long-baseline experiments—well into construction, I will introduce the field of neutrino oscillations and our most recent results, how we got here and the challenges we face as we pursue the next breakthroughs that neutrinos have in store for us.
Friday 12 November 2021, 16:00 : Clara Barker (Oxford)
Scattering atoms, electrons and perceptions
In this talk I will discuss the state of equity and inclusion in the UK in the field of STEM, with examples from my own journey. I will discuss why it is something we should consider, how science can benefit from making STEM more equitable, and suggest some thinking points for bring about this change.
Friday 28 May 2021, 16:00 : Andy Buckley (Glasgow)
Accelerating physics impact: the why & how of Rivet analysis preservation
In a decade, the LHC experiments have issued over 3000 analysis papers, covering a huge spectrum of physics from soft QCD to myriad high-scale regions sensitive to hypothetical BSM physics. Central to the vast majority of these studies have been MC event generators, whose sophistication has also increased manyfold though the period of LHC operation. In this talk I'll review the ways in which the Rivet toolkit -- a key tool in this rise -- emerged from MC development and tuning, the development of its many hundreds of encoded analyses, through to its current leading-edge use as a powerful complement to BSM direct-searches in global fits of new physics. Whether your interest is QCD, electroweak, or BSM physics, I will show how the small amount of work in preparing a Rivet routine acts as a gateway to analysis re-use and greater physics impact.
Friday 21 May 2021, 16:00 : Giovanni De Lellis (Naples)
Collider neutrinos: the SND@LHC experiment at CERN
SND@LHC is a compact and stand-alone experiment to perform measurements with neutrinos produced at the LHC in a hitherto unexplored pseudo-rapidity region of 7.2<𝜂<9.6, complementary to all the other experiments at the LHC. The experiment is to be located 480 m downstream of IP1 in the unused TI18 tunnel. The detector is composed of a hybrid system based on an 800 kg target mass of tungsten plates, interleaved with emulsion and electronic trackers, followed downstream by a calorimeter and a muon system. The configuration allows efficiently distinguishing between all three neutrino flavours, opening a unique opportunity to probe physics of heavy flavour production at the LHC in the region that is not accessible to ATLAS, CMS and LHCb. This region is of particular interest also for future circular colliders. The detector concept is also well suited to searching for Feebly Interacting Particles via signatures of scattering in the detector target. The first phase aims at operating the detector throughout LHC Run 3 to collect a total of 150 fb−1. The experiment was recently approved by the Research Board at CERN. A new era of collider neutrino physics is just starting.
Friday 14 May 2021, 16:00 : Sneha Malde (Oxford)
Exploiting the strengths of the BESIII and LHCb experiments to make the most precise CKM angle gamma measurement
The BESIII and LHCb detectors – while both dedicated to flavour physics are different in collision energy, collision particle, geometry and size. Nonetheless in the quest to understand the Standard Model or potential new physics models they both play an important role. BESIII has the largest sample of quantum-correlated decays of the Psi(3770) meson – useful for measurement of the seemingly obscure charm strong-phase parameters. However, these are vital for studies at LHCb where their combination with the huge B decay samples leads to the most precise measurement of the CKM angle gamma. In this talk I will describe the significant developments over the last 18 months in this sector and discuss the leading measurements at both experiments.
Friday 07 May 2021, 16:00 : Mitesh Patel (Imperial)
Update on the B-anomalies
I will give an update on the so-called B-anomalies, focussing on the recent evidence for lepton flavour universality breaking from the LHCb experiment.
Friday 30 April 2021, 16:00 : Liz Kneale (Sheffield)
WATCHMAN: Project Overview and New Techniques for Reactor Antineutrino Detection
Monitoring known nuclear reactors and identifying an unknown nuclear reactor in a complex nuclear landscape is challenging. WATCHMAN will demonstrate for the first time a scalable anti-neutrino detector for mid- to far-field nuclear non-proliferation applications. In this talk, I will discuss the nuclear power-weapons connection and the proposed WATCHMAN anti-neutrino detector and remote reactor monitor prototype, before going on to show how using new and updated reconstruction and analysis methods can improve anti-neutrino signal detection and background suppression to optimise the sensitivity of Gd-doped water Cherenkov detection for remote reactor monitoring.
[recording]
Friday 23 April 2021, 16:00 : Paolo Franchini (Imperial)
The Muon Ionization Cooling Experiment and the muon ionization cooling demonstration
Future muon colliders can study lepton-antilepton collisions up to several TeV while neutrino beams produced in neutrino factories from stored beams will have an unique precision in measuring neutrino oscillations. Both muon colliders and neutrino factories will require intense muon sources with low emittance. Muons produced from decays of pions (created in proton-target interactions) occupy a large phase space which makes difficult the acceleration and storage of the beam. Cooling techniques like stochastic cooling (successfully used for protons at CERN) do not prove to be efficient methods due to the short lifetime of the muons. Ionization cooling, proposed in the late '60s and only recently demonstrated by MICE (Muon Ionization Cooling Experiment) consists in reducing the amplitude of the muon beams, passing the muons through an absorber in order to remove part of the transverse and longitudinal momenta, eventually restoring the longitudinal component with radiofrequency cavities. The result is a reduction of the transverse emittance which can be traduced as an increase of the phase space density. In the present talk I will introduce MICE, describe the recent results and outline possible future scenarios.
Friday 16 April 2021, 16:00 : Kristin Lohwasser (Sheffield)
Photon collisions at the LHC: Probing the electroweak sector
The observation of WW production in photon collisions, yy->WW, represents a mile stone in the quest to fully characterize the electroweak sector of the SM. The talk will discuss the details of this challenging measurement: Origin of photons at the LHC, treatment of pile-up and the modelling of the signal as well as that of the underlying event for background processes. An outlook on how this will play a role for future tests of the Standard Model and in searches of new physics at the TeV scale is given.
Friday 19 March 2021, 16:00 : John Nugent (Glasgow)
WAGASCI: A New Near Detector at T2K
WAGASCI is a newly installed detector at the J-PARC facility and part of the T2K experiment. Located at the end of the neutrino beam line it provides measurements of the neutrino flux before oscillation. Its design is fundamentally different from the existing near detectors and offers the first opportunity to measure the ratio of neutrino cross-sections on water and hydrocarbon to within a total uncertainty of 3%. This will allow neutrino nuclear scattering to be probed to a never before achievable extent. Developing our understanding of neutrino nuclear interactions is critical to the future of T2K and the long baseline neutrino program in general. Only through understanding these interactions can T2K reached the desired sensitivity to CP violation in leptons. A measurement of CP violation in neutrino oscillation would constitute one of the most significant breakthroughs in the field of particle physics for decades. In this seminar the WAGASCI detector will be introduced and the status after its first commissioning and physics runs reported.
Friday 19 February 2021, 16:00 : Jost Migenda (Kings College London)
The Hyper-Kamiokande Experiment
Hyper-Kamiokande is a next generation neutrino and nucleon decay experiment that is expected to start taking data in 2027. In this talk, I will introduce the experiment and current construction progress. I will then give an overview over its broad physics programme, with a special focus on neutrino astronomy.
Friday 05 February 2021, 16:00 : Ben Kilminster (UZH)
Probing 10 orders of magnitude of dark matter mass using CCDs: New results from DAMIC@SNOLAB and prospects for DAMIC-M
"The DAMIC (Dark Matter in CCDs) experiment uses CCD detectors to search for the direct interaction of galactic dark matter. Scientific CCD detectors provide an unprecedented low energy threshold and spatial resolution to probe for light dark matter. Given the current lack of evidence for a WIMP of mass around the weak scale, DAMIC focuses its search on lighter WIMPs, as well as the interaction of hidden-sector photons that could mediate the interaction of DM or even comprise DM. The current experiment, DAMIC@SNOLAB pioneered the search for hidden-photon interactions of DM and set world-leading constraints for low-mass WIMPs with a silicon-based target. The next experiment, DAMIC-M at LSM (Laboratoire Souterrain de Modane in France) will be sensitive to never-before probed potential DM models, covering a broad range of models spanning from eV to 10 TeV. In this talk, exciting new results from DAMIC@SNOLAB and prospects from DAMIC-M will be presented."
Friday 29 January 2021, 16:00 : Artur Sztuc (UCL)
Recent neutrino oscillation results from the T2K experiment
T2K is a long baseline neutrino experiment using a beam of mostly muon neutrinos from the Japan Particle Accelerator Research Centre in Japan, and measuring their oscillated state 295 km away in the Super-Kamiokande detector. Measuring the change in the neutrino flavour at Super-Kamiokande constrains the neutrino oscillation parameters of the PMNS matrix, specifically Δm232, sin2 θ23 and the CP-violating phase ΔCP. The 2019 results from T2K have shown a strong constraint on ΔCP, with large regions of ΔCP excluded at 3σ CL. This talk will describe results from T2K with the data collected until early 2020, and the future prospects for the experiment.
[slides]
Friday 15 January 2021, 16:00 : Martin Slezak (Max Planck Inst.)
First neutrino mass results from the KATRIN experiment
Knowledge of the absolute neutrino mass scale is of particular importance to particle physics, astrophysics and cosmology. The Karlsruhe Tritium Neutrino (KATRIN) experiment aims to search for the effective electron antineutrino mass with an unprecedented sensitivity of 200 meV from the shape of tritium beta-decay spectrum near its kinematical endpoint. In 2019 KATRIN performed the first measurement yielding a new upper limit of 1.1 eV (90 % C.L.) for the neutrino mass using beta-decay. In this talk, I will give an overview of the KATRIN experiment and discuss the first neutrino mass campaign in detail. I will also briefly mention perspectives for the next measurement which took place in 2020 and is currently under analysis.
Friday 11 December 2020, 16:00 : Jens Weingarten (Dortmund)
A Second Life - ATLAS Pixel Detectors in Medical Physics
The field of medical physics has been growing steadily over the past decades, with more and more hospitals using more and more complex machinery in diagnosis and treatment of patients. Reacting to the increased demand for well-trained medical physics experts, TU Dortmund university offers a medical physics study programme since 2011. Close collaborations with hospitals and dedicated centers for radiotherapy and proton therapy open up a great many possibility for the resident HEP detector physicists to find new applications for our well-known technologies. Coming from the development of silicon pixel detectors for the ATLAS tracking detector, we started looking into applications where those detectors could be useful. These range from beam monitoring for machine QA for proton therapy, via small-field dosimetry to proton radiography and proton CT. In the talk I will report on some of our activities, as well as some future developments.
Friday 27 November 2020, 16:00 : Juan Miguel Carceller (UCL)
Ultra-High Energy Cosmic Rays with the Pierre Auger Observatory
Even though cosmic rays were discovered more than one hundred years ago, there are still many questions unanswered about them. Some of them reach energies that are macroscopic (above 10^18 eV or .16 J). What are they? What is their origin? How are they accelerated to such energies? To answer these questions the Pierre Auger Observatory was built. It comprises more than 1600 surface detector that measure the arrival time and signal left by secondary particles of air showers, initiated when of these cosmic rays collides with a molecule of air in the atmosphere. The array of surface detector is overlooked by fluorescence telescopes that measure the fluorescence light emitted. I will introduce cosmic rays, the air showers that they produce and how the Pierre Auger Observatory measures them. A few recent results on composition and hadronic interactions will be shown, and some of my contributions to these topics.
Friday 20 November 2020, 16:00 : Thorben Quast (RWTH)
The CMS High Granularity Calorimeter upgrade for HL-LHC
The CMS collaboration is preparing to build replacement endcap calorimeters for the HL-LHC era. The new calorimeter endcap will be a highly-granular sampling calorimeter (HGCAL) featuring unprecedented transverse and longitudinal readout segmentation for both its electromagnetic and hadronic compartments. The granularity together with a foreseen timing precision on the order of a few tens of picoseconds will allow for measuring the fine structure of showers, will enhance pileup rejection and particle identification, whilst still achieving good energy resolution. The regions exposed to higher-radiation levels will use silicon as active detector material. The lower-radiation environment will be instrumented with scintillator tiles with on-tile SiPM readout. In addition to the hardware aspects, the reconstruction of signals, both online for triggering and offline, represents a challenging task - one where modern machine learning approaches are well applicable. In this talk, the reasoning and ideas behind the HGCAL, the proof-of-concept of its design in test beam experiments, and the challenges ahead will be presented.
Friday 30 October 2020, 16:00 : Sarah Malik (UCL)
Quantum computing for simulating high energy collisions
The simulation of high energy collisions at experiments like the Large Hadron Collider (LHC) relies on the performance of full event generators and their accuracy and speed in modeling the complexity of multi-particle final states. The rapid improvement of quantum devices presents an exciting opportunity to construct dedicated algorithms to exploit the potential quantum computers can provide. I will present general and extendable quantum computing algorithms to calculate two key stages of an LHC collision; the hard interaction via helicity amplitudes and the subsequent parton shower process. These algorithms fully utilise the quantum nature of the calculations and the machine's ability to remain in a quantum state throughout the computation. It is a first step towards a quantum computing algorithm to describe the full collision event at the LHC.
Friday 23 October 2020, 16:00 : Julieta Gruszko (UNC)
Shedding 'Nu' Light on the Nature of Matter: NuDot and the Search for Majorana Neutrinos
Why is the universe dominated by matter, and not antimatter? Neutrinos, with their changing flavors and tiny masses, could provide an answer. If the neutrino is its own antiparticle, it would reveal the origin of the neutrino's mass, demonstrate that lepton number is not a conserved symmetry of nature, and provide a path to leptogenesis in the early universe. To discover whether this is the case, we must search for neutrinoless double-beta decay. As the upcoming ton-scale generation of experiments is built, it is key that research and development (R&D) efforts continue to explore how to extend experimental sensitivities to 1029 years and beyond. These next-next-generation experiments could make a discovery, if neutrinoless double-beta decay is not found at the ton-scale, or offer insight into the mechanism behind lepton number violation, if it is. NuDot is a proof-of-concept liquid scintillator experiment that will explore new techniques for isotope loading and background rejection in future detectors. I'll discuss the progress we've already made in demonstrating how previously-ignored Cherenkov light signals can help us distinguish signal from background, and the technologies we're developing with an eye towards the coming generations of experiments.
Friday 31 July 2020, 16:00 : Chris Young (CERN)
Testing lepton flavour universality
Friday 19 June 2020, 16:00 : Susanne Westhoff (Heidelberg)
Co-scattering dark matter at the LHC
In this talk I will show how to search for feebly coupled dark matter at the LHC. If dark matter freezes out through co-annihilation or co-scattering in the early universe, the observed relic abundance predicts a clear signature at colliders: soft displaced objects. I will present a strategy to search for soft displaced leptons at the LHC during Runs 2+3.
Friday 05 June 2020, 16:00 : Anna Sfyrla (Geneva)
The FASER experiment is a new small and inexpensive experiment that will be placed 480 meters downstream of the ATLAS experiment at the CERN LHC. The experiment will shed light on currently unexplored phenomena, having the potential to make a revolutionary discovery. FASER is designed to capture decays of exotic particles, produced in the very forward region, out of the ATLAS detector acceptance. FASERnu, a newly proposed FASER sub-detector, is designed to detect collider neutrinos for the first time and study their properties. This talk will present the physics prospects, the detector design, and the construction progress of FASER. The experiment is expected to be installed in 2020 and to take data during the LHC Run-3, starting in 2021.
Friday 29 May 2020, 16:00 : Stefan Schoernert (TUM)
The Large Enriched Germanium Experiment for Neutrinoless Double-Beta Decay
Friday 22 May 2020, 16:00 : Robert Thorne (UCL)
Inferring the effective fraction of the population already infected with Covid-19 by comparing rates in different regions of the same country
I use a very simple deterministic model for the spread of Covid-19 in a large population. Using this to compare the relative decay of the number of deaths per day between different regions in Italy, Spain and England, each applying in principle the same social distancing procedures across the whole country, I obtain an estimate of the total fraction of the population which has already become infected. In the most heavily affected regions, Lombardy, Madrid and London, this fraction is higher than expected, i.e. ~ 0.3. This result can then be converted to a determination of the infection fatality rate ifr, which appears to be ifr ~ 0.0025-0.005, and even smaller in London, somewhat lower than usually assumed. Alternatively, this can also be interpreted as a effectively larger fraction of the population than simple counting would suggest if there is a variation in susceptibility to infection with a variance of up to a value of about 2. The implications are very similar for either interpretation or for a combination of effects.
Friday 15 May 2020, 16:00 : Matthias Becker (Dortmund)
The Neutrino Portal To Dark Matter
If Dark Matter is an electroweak gauge singlet, it cannot interact with the standard model via these interactions, thereby requiring the existence of so-called portal couplings. The three renormalizable portal couplings are the Higgs portal, the vector portal, and the neutrino portal. In this talk, we investigate the neutrino portal to Dark Matter and inspect the viable production mechanisms and different constraints on the resulting parameter space.
Friday 24 April 2020, 16:00 : Ioannis Katsioulas (Birmingham)
Search for Light dark matter with Spherical Proportional Counters
Friday 10 April 2020, 16:00 : EASTER
No seminar
Friday 03 April 2020, 16:00 : IOP Practice TALKS
Friday 27 March 2020, 16:00 : Kirsty Duffy (FNAL)
Latest neutrino cross-section results from MicroBooNE
MicroBooNE, the Micro Booster Neutrino Experiment at Fermilab, is an 85-ton active mass liquid argon time projection chamber (LArTPC) located in the Booster Neutrino Beam at Fermilab. The LArTPC technology with 3mm wire spacing enables high-precision imaging of neutrino interactions, which leads to high-efficiency, low-threshold measurements with full angular coverage. As the largest liquid argon detector worldwide taking neutrino beam data, MicroBooNE provides a unique opportunity to investigate neutrino interactions in neutrino-argon scattering at O(1 GeV) energies. These measurements are of broad interest to neutrino physicists because of their application to Fermilab's Short Baseline Neutrino program and the Deep Underground Neutrino Experiment (which will both rely on LArTPC technology), as well as the possibility for new insights into A-dependent effects in neutrino scattering on heavier targets such as argon. In this seminar I will present the most recent cross-section results from MicroBooNE, including measurements of inclusive charged-current neutrino scattering, neutral pion production, and low-energy protons. Many of the results I will show represent the first measurements of these interactions on argon nuclei, as well as an exciting demonstration of the potential of LArTPC detector technology to improve our current understanding of neutrino scattering physics.
Friday 20 March 2020, 16:00 : Theresa Fruth (UCL)
Searching for Dark Matter with the LZ experiment
The nature of dark matter remains one of the biggest mysteries of the universe. Extensions to the Standard Model of particle physics provide potential candidates for it. Such dark matter particles can be searched for using direct detection experiments. The LUX-ZEPLIN (LZ) experiment is a next-generation direct detection experiment, which employs a two-phase, liquid xenon time projection chamber. It is currently under construction 4850 feet underground in an old gold mine in South Dakota. In this talk I will give an overview of the experiment and its projected sensitivity reach, as well as the current status of construction and integration.
Friday 13 March 2020, 16:00 : Susanne WestHoff (Heidelberg) POSTPONED
Dark matter searches with long-lived particles POSTPONED
In cosmological scenarios beyond thermal freeze-out dark matter interactions with standard-model particles can be tiny. This leads to mediators with a lifetime that is long compared with the scales of particle colliders. In this talk I will discuss two new ideas for collider searches with long-lived mediators: soft displaced objects as signs of compressed dark sectors at the LHC; and displaced vertices from long-lived light scalars at flavor and long-distance experiments. I will show that novel search strategies allow us to explore dark matter interactions ranging over several orders of magnitude.
Friday 06 March 2020, 16:00 : Alex Martyniuk (UCL)
Recent results from the LHC
Between 2015 to 2018 (a.k.a. Run 2) the LHC delivered around 160fb^-1 to both of its general purpose detectors: ATLAS and CMS. In this talk I will try to roughly outline what the experiments have done so far with this bounty of data, and where they are headed in the future. (This talk was previously given as the opening talk of the Lake Louise Winter Institute).
Friday 14 February 2020, 16:00 : Ilektra Christidi (UCL)
Research Software Development at UCL and the software landscape in HEP
With computing and large amounts of data becoming more and more an everyday reality in all research domains, the world is waking up to what High Energy Physicists knew all along: software is an integral part of research, and as such, it is a necessity to have the infrastructure and people to support its sustainable development. The UCL Research Software Development Group, a centralised service for the whole university research community, and the first group of its kind in the UK, has been here since 2012 to serve exactly this purpose. In the first part of this talk, I'll introduce the scope and work of our group, and will try to identify (with your help!) ways that we can be of service to the UCL HEP researchers. In the second part, I'll present a review of the latest PyHEP workshop, a forum where developments in the use of Python in Particle Physics are presented.
Friday 07 February 2020, 16:00 : Katharina Behr (DESY)
The puzzle of dark matter: missing pieces at the LHC?
Unravelling the particle nature of dark matter is one of the key goals of the LHC physics programme. Dark matter cannot be detected directly by the LHC experiments but would manifest itself as missing energy in the detector signature of collision events. Complementary resonance searches targeting new mediator particles between dark and known matter provide an additional approach to explore the interactions of dark matter. To date, no evidence for dark matter or related mediators has been found. Could dark matter interactions be more complex or have otherwise have evaded detection? I will review the diverse programme of dark matter searches on LHC Run 2 data and address strategies to extend our coverage of possible dark matter signatures at the LHC.
Friday 31 January 2020, 16:00 : Tevong You (Cambridge)
Where art thou, new physics?
Searching for new fundamental physics beyond the Standard Model, by experimenting, observing and theorising, is a tremendously exciting journey. One of our most reliable guides in this voyage of exploration is the framework of effective field theory. Through this lens, I will survey the landscape of where new physics may be hiding, from electroweak precision observables, diboson, Higgs, and flavour physics to light dark sectors. I conclude with the question of what, if anything, could we ultimately discover at future colliders?
Friday 24 January 2020, 16:00 : POSTPONED till 14.02.
Enjoy cake and puppies instead
Friday 13 December 2019, 16:00 : Jessica Turner (FNAL)
Neutrino masses from gravity
In this talk I will discuss neutrino masses in general and demonstrate that non-zero neutrino masses can be generated from gravitational interactions. In this work we solve the Schwinger-Dyson equations to find a non-trivial vacuum thereby determining the scale of the neutrino condensate and the number of new particle degrees of freedom required for gravitationally induced dynamical chiral symmetry breaking. We show for minimal beyond the Standard Model particle content, the scale of the condensation occurs close to the Planck scale.
Friday 06 December 2019, 16:00 : Tevong You (Cambridge) — CANCELED!
Friday 29 November 2019, 16:00 : Ilektra Christidi (UCL) — CANCELED!!
Research Software Development Group
Friday 22 November 2019, 16:00 : Lucas Lombriser (Geneva)
Towards Understanding the Cosmic Expansion at Late Times
I will first discuss how recent gravitational wave measurements have brought the predicted challenges to explaining cosmic acceleration by a modification of General Relativity as alternative to the cosmological constant. In a second part, I will describe a possible solution of the cosmological constant problem. I will show how interpreting the Planck mass in the Einstein-Hilbert action as a global Lagrange multiplier prevents vacuum energy from gravitating and with account of the inhomogeneous cosmic small-scale structure predicts an energy density parameter of the cosmological constant of 0.704, in good agreement with observations. Finally, I will argue that there is no Hubble tension. Rather, the discrepant measurements imply that we are located in a 50% underdense 40 Mpc region of the Cosmos, within cosmic variance and in good agreement with the measured local distributions of galaxies and clusters.
Friday 08 November 2019, 16:00 : Raffaella Radogna (UCL)
A calorimeter for particle therapy range verification
Particle beam therapy provides significant benefits over conventional X-ray radiotherapy. Protons and heavier ions lose most of their energy in the last few millimetres of their path (Bragg Peak), enabling tumours to be targeted with greater precision and reducing the collateral damage to surrounding healthy tissue. An important challenge in particle therapy is the uncertainty in the range of the beam. To ensure that treatment is delivered safely, a range of quality assurance (QA) procedures are carried out each day before treatment starts. A detector is currently under development at University College London to provide fast and accurate proton range verifications, and speed up the daily QA process. The new system utilises a multi-layer calorimeter to record the depth-dose distribution of a proton therapy treatment beam and make direct measurements of the Water Equivalent Path Length (WEPL) with high resolution at clinical rates. The range calorimeter is also used to test the achievable sensitivity of real-time theranostics for carbon treatment using a mixed He/C beam. Range uncertainties caused by intra-fractional motion during carbon ion treatment could be monitored online using a small contamination of helium ions in the beam. At the same energy per nucleon, helium ions have about three times the range of carbon ions, which could allow, for certain tumours, a simultaneous use of the carbon beam for treatment and the helium beam for imaging. In this talk, the design and performance of the Quality Assurance Range Calorimeter (QuARC) are presented.
Friday 01 November 2019, 14:00 : Sarah Heim (DESY): UNUSUAL TIME and PLACE: Physics E7
Higgs differential cross section measurements in the H->ZZ*->4l decay channel with the ATLAS detector.
One of the most promising approaches for probing Higgs boson production and decays are differential cross sections. Measuring the Higgs boson transverse momentum and other distributions can shed lights on the couplings of different Standard Model particles to the Higgs boson. I will discuss fiducial and differential cross section measurements in the H->ZZ*->4l decay channel, often called the golden channel, and highlight the important role of lepton reconstruction and identification. Furthermore I will discuss a number of interpretations and give an outlook for Higgs differential cross section measurements at the High-Luminosity LHC.
Friday 25 October 2019, 16:00 : Katharina Behr (DESY) — CANCELED!
The puzzle of dark matter: missing pieces at the LHC
Friday 18 October 2019, 16:00 : Ralf Kaiser (Glasgow)
Cosmic Ray Muography
Muons are fundamental, charged particles that form part of our naturally-occurring background radiation. They are produced in particle showers in the upper atmosphere from the impact of cosmic rays. These muons are incident at sea-level at a rate of about one per square centimetre per minute and with average energies of about 3 GeV - approximately four orders of magnitude more than typical X-rays. Muons are highly penetrating and can traverse hundreds of metres of rock, which has opened up the possibility to use them for challenging imaging applications. Muography is an established technique in volcanology, it has been used to find a cavity in the pyramid of Khufu in Egypt and over the last years a wide variety of applications have been explored – ranging from cargo screening to nuclear waste characterisation and carbon storage monitoring. Lynkeos Technology is a spin-out company from the University of Glasgow, founded in 2016 following a 7-year, £4.8M research programme funded by the NDA. The Lynkeos Muon Imaging System is the worldwide first, CE-marked muon imaging system for the characterization of nuclear waste containers. It has been successfully deployed on the Sellafield site in October 2018. This talk will give an overview of muography applications worldwide and present the activities of Lynkeos Technology in detail, with a focus on the characterization of nuclear waste containers.
Monday 14 October 2019, 16:00 : Nivedita Ghosh (IACS Kolkata)
Associated $Z^\prime$ production in the flavorful $U(1)$ scenario for $R_{K^{(*)}}$
The flavorful $Z^\prime$ model with its couplings restricted to the left-handed second generation leptons and third generation quarks can potentially resolve the observed anomalies in $R_K$ and $R_{K^*}$. After examining the current limits on this model from various low-energy processes, we probe this scenario at 14 TeV high-luminosity run of the LHC using two complementary channels: one governed by the coupling of $Z'$ to $b$-quarks and the other to muons. We also discuss the implications of the latest LHC high mass resonance searches in the dimuon channel on the model parameter space of our interest.
Friday 20 September 2019, 16:00 : Yanhui Ma (UCL)
Observation of H-->bb decays and VH production with the ATLAS detector
A search for the decay of the Standard Model Higgs boson into a bb pair when produced in association with a W or Z boson is performed with the ATLAS detector. The analysis of 13 TeV data collected by ATLAS during Run 2 of the LHC in 2015, 2016 and 2017 leads to a significance of 4.9σ – alone almost sufficient to claim observation. This result was combined with those from a similar analysis of Run 1 data and from other searches by ATLAS for the H→bb decay mode, namely where the Higgs boson is produced in association with a top quark pair or via a process known as vector boson fusion (VBF). The significance achieved by this combination is 5.4σ.
Friday 13 September 2019, 16:00 : Evan Grohs (Berkeley)
Neutrino dynamics in big bang nucleosynthesis
The laboratory of the early universe provides a setting for testing Beyond Standard Model (BSM) physics in the particle and cosmological sectors. Any BSM physics in operation at early times may produce slight deviations in the primordial element abundances and cosmic microwave background observables predicted within the standard cosmology. The identification and characterization of such BSM signatures require a precise numerical treatment of the neutrino energy and flavor wave functions when the neutrinos decouple from the electromagnetic plasma. This weak decoupling process occurs during Big Bang Nucleosynthesis (BBN) and we employ Quantum Kinetic Equations (QKEs) to follow the out-of-equilibrium neutrino evolution. I will give an overview on the role neutrinos play in BBN, as well as give an introduction to the full QKE problem with neutrino oscillations and collisions. A QKE treatment of early-universe neutrino physics will greatly assist observers and theorists as the next-generation-cosmological experiments come on line in the near future.
Monday 19 August 2019, 16:00 : Jon Butterworth (UCL)
Highlights from EPS HEP 2019
I gave the highlights talk for this year's EPS meeting in Ghent and will repeat it for anyone in the group who is around and interested.
Monday 22 July 2019, 16:00 : Alfredo Galindo-Ubarri (Oak Ridge National Laboratory) — Harry Massey Lecture Theatre
Neutrino Physics Opportunities at ORNL
The Physics Division at ORNL is exploring key opportunities for neutrino physics and supporting the formation of an experimental program at the intersection of particle, nuclear, and astrophysics. The Spallation Neutron Source (SNS) and the High Flux Isotope Reactor (HFIR) of ORNL are two very powerful neutrino sources that open new physics opportunities. Two new experiments, PROSPECT and COHERENT, make use of these unique capabilities and enable us to broaden the understanding of neutrino properties. PROSPECT consists of segmented 6Li-loaded liquid scintillator antineutrino detectors designed to probe short-baseline neutrino oscillations and precisely measure the reactor antineutrino spectrum. The COHERENT collaboration aims to measure CEvNS (Coherent Elastic Neutrino-Nucleus Scattering) at the SNS. The CEvNS process is cleanly predicted in the Standard Model and its measurement provides a Standard Model test. I will present a novel neutrino experiment which consists of a differential measurement of coherent-elastic neutrino-nucleus scattering using isotopically enriched Ge detectors. I will review some of the current activities taking place at ORNL including the development of ultrasensitive analytical techniques to detect trace elements of interest for neutrinoless double beta decay experiments and will present recent results.
Thursday 18 July 2019, 16:00 : Alex Keshavarzi (Fermilab)
The Muon g-2: theory and experiment
The study of the muon g-2 stands as an enduring and stringent test of the Standard Model (SM), where the current 3.5 standard deviations (or higher) discrepancy between the theoretical prediction and the experimental measurement could be an indication of new physics beyond the SM. The precision of the SM prediction is limited by hadronic contributions and, therefore, the Muon g-2 Theory Initiative are working hard to improve the SM evaluation in time for the next experimental result. In tandem, the Muon g−2 experiment at Fermilab is set to measure the muon anomaly with a four-fold improvement in the uncertainty with respect to previous experiment, with an aim to determine whether the g−2 discrepancy is well established. The experiment recently completed its first physics run and a summer programme of essential upgrades, before continuing on with its experimental programme. The Run-1 data alone are expected to yield a statistical uncertainty of 350 ppb and the publication of the first result is expected in late-2019. In this talk, I will discuss the advances in both the theoretical and experimental determinations of the muon magnetic anomaly, placing focus on my own evaluations of the hadronic vacuum polarisation contributions to the Muon g-2 and my current contributions to the Muon g-2 experiment at Fermilab.
Friday 07 June 2019, 16:00 : Fabon Dzogang (ASOS)
Friday 31 May 2019, 16:00 : Erika Catano-Mur (William and Mary)
Recent results and outlook for the NOvA neutrino experiment
NOvA is a long-baseline neutrino oscillation experiment, which consists of two finely-segmented liquid-scintillator detectors operating 14 mrad off-axis from Fermilab's NuMI muon neutrino beam. With an 810 km baseline, the measurements of muon neutrino disappearance and electron neutrino appearance allow the determination of neutrino oscillation unknowns, namely the mass hierarchy, the octant of the largest neutrino mixing angle, and the CP violating phase. In this talk, I will present the latest results of the NOvA oscillation analyses from four years of data taking, and discuss the experiment's projected sensitivity to determine the mass hierarchy and to discover CP violation in the neutrino sector in future analyses with increased exposure.
Friday 24 May 2019, 16:00 : Peter Wijeratne (UCL)
From the Higgs to Huntington's: methods for learning from data
With the advent of datasets of unprecedented size and dimensionality in both physics and medicine, there is high demand for new methodologies that can extract hidden or obscured information. This is particularly important in ill-posed problems - such as the reconstruction of particles from detector interactions - and in problems with non-informative priors, which occur frequently in biological processes. In this talk I will give a high level view of the types of computational methods used to tackle these problems at the UCL Centre for Medical Image Computing, with a particular focus on Bayesian methods and unsupervised machine learning. I will also discuss how my previous research on ATLAS led to my current research in computational modelling of Huntington's disease, a devastating neurological condition that computational methods are shining new light on.
Friday 17 May 2019, 16:00 : Kate Scholberg (Duke)
Coherent elastic neutrino-nucleus scattering
Coherent elastic neutrino-nucleus scattering (CEvNS) is a process in which a neutrino scatters off an entire nucleus at low momentum transfer, and for which the observable signature is a low-energy nuclear recoil. It represents a background for direct dark matter detection experiments, as well as a possible signal for astrophysical neutrinos. Furthermore, because the process is cleanly predicted in the Standard Model, a measurement is sensitive to beyond-the-Standard-Model physics, such as non-standard interactions of neutrinos. The process was first predicted in 1973. It was measured for the first time by the COHERENT collaboration using the high-quality source of pion-decay-at-rest neutrinos from the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory and a CsI[Na] scintillator detector. This talk will describe COHERENT's recent measurement of CEvNS, the status and plans of COHERENT's suite of detectors at the SNS, and future physics reach. I will also cover prospects for supernova neutrino detection if time permits.
Friday 10 May 2019, 16:00 : Monika Wielers (RAL)
HL-LHC and HE-LHC physics prospects
The Large Hadron Collider (LHC) has been successfully delivering proton-proton collision data at a centre of mass energy of 13 TeV. An upgrade is planned to increase the instantaneous luminosity delivered by the LHC by a factor of 5-7 (HL-LHC) for running in 2026 and beyond and there is also another possible future upgrade considered for running at an energy of 27 TeV at the high-energy LHC (HE-LHC). In the last 1.5 years, the LHC experiments prepared a CERN Yellow Report which summarises the physics reach for HL-LHC and HE-LHC and serves as input to the European Strategy this year. This talk shows highlights of the physics reach at the HL-LHC and HE-LHC detailed in the report. The physics prospects are shown for Higgs couplings measurements, di-Higgs boson production sensitivity, Vector Boson Scattering prospects as well as the discovery potential for electroweak SUSY and other exotic benchmark scenarios.
Friday 03 May 2019, 16:00 : NO SEMINAR
Friday 26 April 2019, 16:00 : Sudan Paramesvaran (Bristol)
The CMS Trigger: Through the Ages
A trigger represents a fundamental component of hadron collider experiments. The ability to separate signal from background on extremely short time-scales represents a major technological and intellectual challenge. This seminar will seek to explain how the trigger system for the CMS Experiment at the LHC has evolved from the first run (2010 - 2012) through to its first major upgrade in 2016, where a significantly more advanced trigger was deployed. The focus will be on discussing custom electronics, architecture choices, and ultimate performance. With preparations well underway for High Luminosity LHC, the challenges and status of the next major overhaul of the trigger system will be also be described.
Friday 19 April 2019, 16:00 : EASTER FRIDAY
Friday 12 April 2019, 16:00 : Zara Grout (UCL)
A detector-corrected ATLAS measurement of four leptons designed for re-interpretation
This recently published analysis is at the forefront of designing precision measurements using the ATLAS detector which are sensitive to new physics. Traditional ATLAS searches provide maximally sensitive results on specific models with a fast turnaround using highly optimised event selections. I will discuss how these searches can be complemented by detector-corrected measurements which have a long shelf-life and can be used to search for Beyond the Standard Model scenarios by those within and external to the collaboration. In this instance the invariant mass spectrum of four leptons is shown to provide insight into a number of Standard Model processes and BSM scenarios.
Friday 05 April 2019, 16:00 : Preema Rennee Pais (EPFL)
'The LHCb detector upgrades, and prospects for rare decays and LFU measurements'
The LHCb detector is a single-arm forward spectrometer, designed to study decays of hadrons containing beauty and charm quarks. A major upgrade of the experiment is being performed during the ongoing LHC long shutdown 2. The upgraded detector will operate at an instantaneous luminosity five times higher than in Run 2. It can be read out at the full LHC bunch-crossing frequency of 40 MHz, enabling the use of a flexible software trigger system. The high luminosity LHC could provide peak luminosities of up to $2 \times 10^{34$} cm$^{-2}$ s$^{-1}$ to LHCb, about an order of magnitude above Upgrade I conditions. The collaboration is planning an Upgrade II detector, to be installed during long shutdown 4 of the LHC (2030), which will enhance sensitivity to a wide range of physics signatures. In this talk, I will present an overview of the Upgrade I detectors, and highlight detector design options and recent R&D to meet the challenge of real-time reconstruction in the HL-LHC environment. The talk will conclude with a discussion of the outlook for measurements of rare decays of $b$-hadrons and tests of lepton flavour universality with the upgrade datasets.
Friday 29 March 2019, 16:00 :
IoP practce talks
Friday 22 March 2019, 16:00 : TBC
Friday 15 March 2019, 16:00 : Sebastian Trojanowski (Sheffield)
Looking forward to new physics with FASER: ForwArd Search ExpeRiment at the LHC
One of the most rapidly developing areas of research in particle physics nowadays is to look for new, light, extremely weakly-interacting particles that could have avoided detection in previous years due to the lack of luminosity. These, so-called intensity frontier searches, have also broad cosmological connections to e.g. dark matter, as well as can help to unravel the mystery of neutrino masses. In this talk, we will summarize the current status of this field with a particular emphasis on a newly proposed experiment to search for such particles produced in the far-forward region of the LHC, namely FASER, the ForwArd Search ExpeRiment. FASER has been proposed as a relatively cheap detector to supplement traditional experimental programmes searching for heavy new physics particles in the high-pT region and, therefore, to increase the whole BSM physics potential of the LHC. On top of potentially far-reaching implications to BSM particle physics and cosmology, the newly proposed detector can also be used to measure high-energy SM neutrino cross sections.
Friday 08 March 2019, 16:00 : Patrick Dunne (Imperial)
Latest neutrino oscillation results from the T2K experiment
T2K is a long baseline neutrino experiment situated in Japan. We fire beams of muon neutrinos and antineutrinos 295km across the country then observe them using the 50 kTon Super Kamiokande detector. By studying how many of these neutrinos have oscillated into different flavours and whether the oscillations occur differently for antineutrinos we have sensitivity to CP violation in the neutrino sector, the neutrino mass hierarchy and the mixing angles between the neutrino flavours. I will present the latest results from the T2K collaboration including limits on the CP violating parameter \delta_{CP}.
Friday 01 March 2019, 16:00 : Louie Corpe (UCL)
Hacking the ATLAS detector: looking for exotic long-lived particles using displaced jets
Long-lived particles (LLPs) are nothing new: semi-stable particles abound in the SM. There's no reason why they wouldn't occur in extensions to the SM too. The lifetime of exotic LLPs in BSM models is typically unconstrained, and since collider detectors are usually designed assuming that the action happens near the beam crossing, LLPs that decay far from the beamline could easily have been missed by standard searches. To look for them, we therefore need to 'hack' our detectors to do something they were not designed to do: search for decays deeper in the detector volume. This talk describes one such search for pairs of neutral, long-lived particles decaying in the ATLAS calorimeter, leading to the formation of narrow, trackless displaced jets.
Friday 22 February 2019, 16:00 : Fredrik Bjorkeroth (Frascati)
Flavourful axion phenomenology (and impact on Mu3e/Mu2e)
Traditional axion (or ALP) models assume the axion does not distinguish between fermion generations, i.e. it is flavour-universal. This is not the case in flavoured axion models, where the symmetry that dictates fermion mass structures is (or generates) a Peccei-Quinn symmetry. Such models predict flavour-violating axion-fermion couplings which, in highly constrained flavour models, can be fixed by mass and mixing data. I will discuss the phenomenology of flavoured axions, in particular contributions to heavy meson decays and lepton flavour violating processes.
Friday 15 February 2019, 16:00 : Agni Bethani (Manchester)
Double Higgs production in ATLAS
In the post-Higgs discovery era, studying the Higgs properties and understanding the Higgs mechanism is crucial. The next major milestone in understanding the Higgs mechanism is measuring the tirlinear self-coupling (λΗΗΗ) via Higgs boson pair- production (HH). This would be, arguably, the most important result at the LHC since the Higgs discovery. The HH cross-section is predicted to be very small in the Standard Model (SM) however it can be enhanced in case new physics is present. In the seminar I will discuss the current status of HH searches in ATLAS along with some estimations of what we can achieve in the future.
Friday 08 February 2019, 16:00 : Nicola McConkey (Manchester)
SBND - a state of the art Liquid Argon TPC for Neutrino Physics
The field of neutrino physics is now moving towards the era of precision physics in order to test the 3-neutrino paradigm, neutrino mass hierarchy and CP asymmetry in the lepton sector. The next generation of neutrino detectors, currently under development and construction, will have sensitivity to the fundamental parameters which describe these phenomena. Liquid argon is an excellent detector medium, with good scintillation and charge transport properties. Coupled with the three dimensional position reconstruction possible with a time projection chamber, it makes for a powerful particle detector which has become one of the detectors of choice for rare event physics, especially in neutrino detection. This rapidly developing field has many technical challenges as the desired detector volume increases to the multi-kiloton scale. I will discuss the Short Baseline Neutrino (SBN) programme, with a focus on the detector technology used, current status and future prospects for the Short Baseline Near Detector (SBND).
Friday 01 February 2019, 16:00 : Celeste Carruth (Berkley)
Testing CPT Symmetry with Antihydrogen at ALPHA
One of the biggest unsolved questions in physics is the absence of any large amount of antimatter in the universe. Charge-Parity-Time Symmetry requires that energy convert to equal quantities of matter and antimatter, so at the creation of the universe, the large amount of energy present should have produced equal amounts of matter and antimatter. If an antimatter galaxy existed in the observable universe, we would expect to see radiation coming from particles annihilating in the interstellar space between the matter and antimatter galaxy, but no such signature has yet been discovered. The ALPHA collaboration at CERN combines cold plasmas of antiprotons and positrons to make and trap antihydrogen, and then performs precision measurements on the trapped antimatter. I'll discuss our method for trapping antihydrogen and our recent results of the hyperfine transition and the 1S-2S and 1S-2P spectroscopies.
Friday 25 January 2019, 16:00 : NO SEMINAR
Friday 18 January 2019, 16:00 : Tessa Baker (QMUL)
The Gravitational Landscape for the LSST Era
The Large Synoptic Survey Telescope (LSST), due for first light later this year, spearheads the next generation of cutting-edge astronomical survey facilities. One of its key science goals is to settle questions surrounding dark energy or possible corrections to General Relativity, which are posited to resolve outstanding problems of the standard cosmological model. In this talk I'll explain how we plan to use LSST to test of the landscape of extended gravity theories. In particular, I'll focus on the new wave of parameterised techniques developed as the smartest way to probe the landscape of gravity/dark energy models in the current literature. I'll also discuss some exciting theoretical developments, sparked by recent gravitational wave detections, that offer powerful insights on this model space.
Friday 11 January 2019, 16:00 : Jennifer Ngadiuba (CERN)
Deep Learning on FPGAs for L1 trigger and data acquisition for particle physics
Machine learning methods are becoming ubiquitous across particle physics. However, the exploration of such techniques in low-latency environments like Level-1 (L1) trigger systems has only just begun. In this talk, I will present a new software, based on High Level Synthesis, to generically port several kinds of network models (BDTs, DNNs, CNNs) into FPGA firmware. The task of tagging high-pT jets as H->bb candidates using jet substructure is considered here as benchmark physics use case. The resource usage and latency are mapped out versus types of machine learning algorithms and their hyper-parameters. A set of general practices to efficiently design low-latency machine-learning algorithms on FPGAs will be discussed.
Friday 14 December 2018, 16:00 : Alexander Deisting (RHUL)
Taking a Time Projection Chamber to high pressure
Time Projection Chambers (TPCs) have been employed with great success as tracking detectors and to provide particle ID, since they provide a large active volume and low momentum threshold for particle detection. We currently develop a high pressure TPC (HPTPC) which can operate at gas pressures of up to 5 barA. Increasing the pressure allows to increase the target mass inside the active volume, thus making a HPTPC a promising candidates to characterise neutrino beams at the next generation long baseline neutrino oscillation experiments such as DUNE or Hyper-K. Our HPTPC prototype features a gas amplification stage with three parallel meshes, charge readout at each mesh and an optical readout with four CCD cameras. The talk will focus on the hardware of the detector, its operation and the according challenges as well as some measured results. Possible future developments will be discussed as well.
Friday 07 December 2018, 16:00 : Loredana Gastaldo (Heidelberg)
Metallic Magnetic Calorimeters for Neutrino Physics
The wish to understand properties of neutrinos has been driving very challenging experiments since the discovery of these elusive particles. In the early '80, the development of low temperature detectors, operated at millikelvin temperatures, was strongly motivated by the possibility to enhance our knowledge on neutrino Physics. Even today, in the landscape of experiments investigating neutrino properties, low temperature detectors are still playing a very important role. This talk will describe the use of a particular type of low temperature detectors, metallic magnetic micro-calorimeters in experiments for the determination of the electron neutrino mass and for the search of neutrinoless double beta decay. Finally we will discuss other applications as search for keV sterile neutrinos or measurements of coherent neutrino nucleus scattering in which metallic magnetic micro-calorimeters could play a very important role.
Friday 30 November 2018, 16:00 : Chris Stoughton (Fermilab)
Muon g-2 and other news
After 50 years of operation, Fermilab is still going strong. I will discuss the motivation, status, and prospects of the FNAL g-2 experiment. I'll place it in historical context, especially regarding Fermilab's future programs. I will end with a brief explanation of Fermilab's "smallest" experiment, the Holometer, which measures effect of Planck-scale physics.
Friday 16 November 2018, 16:00 : Andi Chisholm (Birmingham)
Searching for Higgs boson decays to charm quark pairs with charm jet tagging at ATLAS
The Standard Model (SM) Higgs boson is expected to decay to a charm quark pair in around 3% of cases. While this number seems small, the success of the LHC Higgs boson measurement programme is such that this contribution represents one of the largest expected contributions to the total Higgs boson decay width for which we have no experimental evidence. Furthermore, all experimental evidence for Yukawa couplings is limited to the third generation fermions and the smallness of the SM charm quark Yukawa coupling makes it particularly sensitive to modifications from potential physics beyond the SM. I will describe a novel charm jet tagging algorithm recently commissioned by the ATLAS experiment and discuss how it can be employed to perform the first direct search for Higgs boson decays to charm quark pairs with the ATLAS experiment (Phys. Rev. Lett. 120 (2018) 211802, arXiv:1802.04329). Looking beyond this LHC Run 2 result, the prospects and projected sensitivity for this channel at the HL-LHC will also be discussed.
Friday 09 November 2018, 16:00 : Jeanne Wilson (QMUL)
SNO+: Current Status and Prospects
The SNO+ experiment is a multi-purpose low energy neutrino experiment based in the SNOLAB deep underground facility in Canada. The experiment builds on the infrastructure of the successful SNO experiment, with liquid scintillator replacing the original heavy water detection medium. The main goal of SNO+ is to search for neutrino-less double beta decay of tellurium-130, which will be dissolved in the liquid scintillator. If observed, neutrino-less double beta decay would confirm the Majorana nature of neutrinos and provide information on absolute neutrino mass. Additionally, SNO+ plans to make measurements of reactor neutrinos, geo-neutrinos, solar neutrinos and will be sensitive to neutrinos from galactic supernovae. For the past year, SNO+ detector commissioning has involved collecting data with H2O as a detection medium, allowing observation of solar neutrinos with extremely low background and a search for invisible modes of nucleon decay, which will be presented in this talk.
Friday 02 November 2018, 16:00 : Ioana Maris (ULB)
Current status and future of Ultra High Energy Cosmic Rays experiments
The Earth's atmosphere is constantly bombarded by ultra high energy cosmic rays (UHECRs). These particles carry the largest energies known to us: they can reach more than 10²⁰ eV. Their flux is very low, and thus very large detectors were built to be able to detect the secondary particles produced by UHECRs after entering the atmosphere. I will present the results of the forerunner experiments, Pierre Auger Observatory and Telescope Array, regarding the energy spectrum, mass composition and arrival directions. Even though a large progress has been made in the last 10 years, we still do not know where these particles are coming from. In the last part of this talk I will present the future plans to advance in the quest of the origin of UHECRs.
Friday 26 October 2018, 16:00 : Eram Rizvi (QMUL)
Precision EW Measurements from ATLAS - sin^2\theta_eff
The phenomenal operation of the LHC in Run-1 has allowed high precision measurements to be attained for single vector boson production in pp collisions. A new measurement of the cross section for Z\gamma production at \sqrt{s}=8 TeV will be presented triple differentially in dilepton invariant mass, |y| and \cos\theta covering the region 46$<$m$<$200 GeV; 0$<|y|<$3.6; and -1$<$cos\theta$<$+1. The measurement is designed to be simultaneously sensitive to the proton PDFs and to the weak mixing angle. A precision of better than 0.5% in the region m ~ mZ (excluding luminosity uncertainty) is achieved. The value of sin^2\theta_eff is extracted using this cross section data, and also using the approach of scattering amplitude coefficients. An accuracy of ±36 x 10^{-5} is achieved reaching the combined CDF+D0 accuracy and approaching the LEP and SLD results.
Friday 19 October 2018, 16:00 : Physics Gala
No seminar this week
Friday 12 October 2018, 16:00 : Christoph Andreas Ternes (IFIC Valencia)
Status of 3-neutrino oscillations
In this talk I will present the current status of neutrino mixing angles and masses. I will explain how simulations are performed and give a brief introduction to the theory of neutrino oscillations. Afterwards I will discuss all the experiments included in the global fit, before going to the results of the combined analysis. There I will focus on the currently open problems, such as atmospheric octant, CP-violation and neutrino mass ordering.
Thursday 14 June 2018, 16:00 : Teppei Katori (QMUL)
Observation of a Significant Excess of Electron-Like Events in the MiniBooNE Short-Baseline Neutrino Experiment
The MiniBooNE experiment at Fermilab reports results from an analysis of electron neutrino (nue) appearance data from 1.3E21 protons on target (POT) in neutrino mode, an increase of approximately a factor of two over previously reported results (PRL110(2013)161801). A nue charged-current quasi-elastic (CCQE) event excess of 381.2 +/- 85.2 events (4.5sigma) is observed in the energy range 200 < EnuQE < 1250 MeV. Combining these data with the electron anti-neutrino (nuebar) appearance data from 1.1E21 POT in antineutrino mode, a total nue plus nuebar CCQE event excess of 460.5 +/- 95.8 events (4.8 sigma) is observed. If interpreted in a standard two-neutrino oscillation model (numu to nue), the best oscillation fit to the excess has a probability of 20.1% while the background-only fit has a chi2-probability of 5E-7 relative to the best fit. The MiniBooNE data are consistent in energy and magnitude with the excess of events reported by the Liquid Scintillator Neutrino Detector (LSND), and the signicance of the combined LSND and MiniBooNE excesses is 6.1 sigma. All of the major backgrounds are constrained by in-situ event measurements, so non-oscillation explanations would need to invoke new anomalous background processes. Although the data are fit with a standard oscillation model, other models may provide better fits to the data. https://arxiv.org/abs/1805.12028
Friday 01 June 2018, 16:00 : Justin Evans (Manchester)
A combined view of sterile-neutrino constraints from CMB and neutrino-oscillation measurements.
I will present a comparative analysis of constraints on sterile neutrinos from the Planck experiment and from current and future neutrino-oscilllation experiments, as reported in S. Bridle et al., Phys. Lett. B764, 322 (2017). In this paper, we, for the first time, expressed joint constraints on N_eff and m_eff^sterile from the CMB in the Δm^2, sin^2(2θ) parameter space used by oscillation experiments, and expressed constraints from oscillation experiments in the N_eff, m_eff^sterile cosmology parameter space. We focused on oscillation experiments that probed mixing of the muon flavour into a fourth mass state: MINOS, IceCube, and the SBN programme. I will then present new work in which we have looked at mixing of the electron flavour to allow comparison of CMB constraints with limits from the reactor experiments NEOS and Daya Bay. Finally, we allow both the electron and muon flavours to simultaneously mix with a fourth mass state, and compare CMB constraints to the numubar->nuebar appearance signals from LSND and MiniBooNE.
Friday 18 May 2018, 16:00 : Anne Norrick (William&Mary)
Recent and upcoming Results from the MINERvA Experiment
The MINERvA experiment is a dedicated neutrino cross section experiment stationed in the Neutrinos from the Main Injector (NuMI) beam line at Fermi National Accelerator Laboratory in Batavia, IL, USA. Recent results from the MINERvA experiment, as well as upcoming results using a beam of increased energy and intensity and their impact on the field will be discussed.
Friday 11 May 2018, 16:00 : Jim Dobson (UCL)
Searching for dark matter with the LUX and LUX-ZEPLIN direct detection experiments
For the past decade liquid xenon time-projection chambers (TPCs) hidden deep underground have led the race to make a first direct detection of dark matter here on Earth. In this talk I'll present the final results from the recently completed Large Underground Xenon (LUX) experiment as well as the status and physics reach of its successor, the 40 times as massive LUX-ZEPLIN experiment currently being constructed and due to start data taking in 2020.
Friday 04 May 2018, 16:00 : Giuliana Galati (Naples)
Final results of the OPERA experiment on tau neutrino appearance in the CNGS beam
The OPERA experiment (Oscillation Project with Emulsion tRacking Apparatus) was designed to conclusively prove the muon neutrino into tau neutrino oscillation in appearance mode. In this talk the improved and final analysis of the full data sample, collected between 2008 and 2012, will be shown. The new analysis is based on a multivariate approach for event identification, fully exploiting the expected features of tau neutrino events, rather than on the sheer selection of candidate events by independent cuts on topological or kinematical parameters as in previous analyses. It is performed on candidate events preselected with looser cuts than those applied in the previous cut-based approach. Looser cuts increase the number of tau neutrino candidates, thus leading to a measurement of the oscillation parameters and of the tau neutrino properties with a reduced statistical uncertainty. For the first time ever, Delta m^2_{23} has been measured in appearance mode, tau neutrino CC cross-section has been measured with a negligible contamination from τ antineutrinos and tau neutrino lepton number has been observed. Moreover, given the higher discrimination power of the multivariate analysis, the significance of the tau neutrino appearance is increased.
Friday 27 April 2018, 16:00 : Michela Massimi (Edinburgh)
Perspectival modeling at LHC
The goal of this paper is to address the philosophical problem of inconsistent models, and the challenge it poses for realism and perspectivism in philosophy of science. I analyse the argument, draw attention to some hidden premises behind it, and deflate them. Then I introduce the notion of "perspectival models" as a distinctive class of modeling practices, whose primary function is heuristic or exploratory. I illustrate perspectival modeling with two examples taken from contemporary high-energy physics at LHC, CERN (one from ATLAS concerning pMSSM-19 and one from the CMS experiment). These examples are designed to show how a plurality of seemingly incompatible models (once suitably understood) is in fact methodologically very important for scientific progress and for advancing knowledge in cutting-edge areas of scientific inquiry.
Friday 13 April 2018, 16:00 : Stephen West (RHUL)
Dark Matter models
I will start by reviewing a range of non-standard dark matter models including models employing asymmetric dark matter and freeze-in. I will then go on to outline models of Nuclear Dark Matter, where the dark matter states are composite objects consisting of ``dark nucleons". I will outline some of the novel features of this idea and how the composite nature can lead to interesting signatures in direct detection experiments.
Friday 06 April 2018, 16:00 : Tuomas Rajala (UCL)
Detecting local scale interactions in highly multivariate point patterns
Highly multivariate point patterns, such as the location patterns of 300 different plant species in a rainforest, are the big data of point pattern statistics. One of the main data analysis goals is the detection of local scale interactions between different point types/species. This talk will discuss two approaches for detecting such interactions: A Monte Carlo framework based on pairwise non-parametric tests, and a multivariate Gibbs model technique relying on automatic variable selection.
Thursday 22 March 2018, 16:00 : Tianlu Yuan (UW-Madison)
IceCube: a nu-window into the Universe
The IceCube Neutrino Observatory, a cubic-kilometer in-ice detector at the South Pole, offers a unique window into the smallest and largest scales of our universe. Over the past several years, IceCube has detected the first high-energy neutrinos of astrophysical origin, measured atmospheric neutrino oscillations, and performed searches of neutrino sources throughout the sky. As more data is collected, a reduction of systematic uncertainties becomes ever more important for neutrino astronomy and neutrino property measurements in IceCube. These two paths are connected as, at the highest energies, the angular resolution of events without an observable muon is limited primarily by ice-property uncertainties. To pave the road forward, in this talk I will explore improvements to event reconstruction and systematic treatment in the high-energy starting event (HESE) analysis. I will discuss a new high-energy cross-section measurement using the HESE sample and a novel calculation of the atmospheric neutrino background. I will conclude with an outlook for the future with IceCube-Gen2.
Friday 16 March 2018, 16:00 : Seth Zenz (Imperial)
Understanding the Higgs Boson: Where We Are, Where We're Going, and How To Get There
In 2012, the ATLAS and CMS experiments at the CERN Large Hadron Collider discovered a new particle. With analysis of data through 2016 now largely completed, we know more precisely than ever that this particle is highly consistent with the Standard Model (SM) Higgs boson. But is the SM realized exactly, or do some differences in the Higgs boson's properties provide a window into new physics? During the 2020's and 2030's, the High Luminosity LHC will supply a large enough dataset to answer this question with very precise and fine grained measurements. I will outline the current understanding of the Higgs boson, the plans for long-term studies at the LHC, and the measurements we can make now to build up our knowledge in the medium-term and prepare better for the long program ahead.
Friday 09 March 2018, 16:00 : Kate Pachal (SFU)
Searches for new low-mass resonances in jetty events at ATLAS
"Searches for beyond-standard-model particles in dijet invariant masses have been a been a key feature of physics programs across 30 years of collider experiments. As no new particles at high masses have yet been discovered, some analyses have moved from pushing this final state towards higher and higher scales in favour of searching for small cross section or small branching ratio signals at lower masses. This is ideal for searching for dark matter mediator candidates, where the dijet channel can be a powerful constraint. Low masses pose a serious challenge for searches in fully hadronic final states because of the trigger prescales which make dataset accumulation difficult in this regime. Two ways around this constraint have been explored in Run II: first, the analysis can be performed with minimal event information at the level of the trigger. This poses a lot of unique technical challenges because of the need for a custom jet calibration. Second, the analysis can search for dijet resonances produced in association with an object which can be used for triggering: a higher-pT jet or a photon. This seminar will discuss the most recent public results for both analysis methods."
Friday 02 March 2018, 16:00 : Kate Pachal (SFU) — POSTPONED!!!
POSTPONED!!! Searches for new low-mass resonances in jetty events at ATLAS
Searches for beyond-standard-model particles in dijet invariant masses have been a been a key feature of physics programs across 30 years of collider experiments. As no new particles at high masses have yet been discovered, some analyses have moved from pushing this final state towards higher and higher scales in favour of searching for small cross section or small branching ratio signals at lower masses. This is ideal for searching for dark matter mediator candidates, where the dijet channel can be a powerful constraint. Low masses pose a serious challenge for searches in fully hadronic final states because of the trigger prescales which make dataset accumulation difficult in this regime. Two ways around this constraint have been explored in Run II: first, the analysis can be performed with minimal event information at the level of the trigger. This poses a lot of unique technical challenges because of the need for a custom jet calibration. Second, the analysis can search for dijet resonances produced in association with an object which can be used for triggering: a higher-pT jet or a photon. This seminar will discuss the most recent public results for both analysis methods.
Friday 23 February 2018, 16:00 : Francesco Coradeschi (Cambridge)
Precision physic at colliders: introducing reSolve, a transverse momentum resummation tool
Since the early days of the Large Hadron Collider (LHC), a large part of the experimental effort was focused on direct searches for signals of New Physics. It is however important (and increasingly so, given the absence of any direct detection so far) to also explore alternative strategies, and foremost among these is precision physics. Traditionally, hadron machines such as the LHC were not considered particularly well-suited to precision studies, but experimental collaborations at CERN have already provided us with excellent measurements which exceed our best theoretical predictions precision-wise, and can only be expected to get better as the LHC continues its run. It is crucial for the theoretical community to keep up with this trend, especially considering that most realistic, still viable, extensions of the Standard Model (SM) are compatible with only small deviations from SM predictions, at LHC energies, for arbitrary observables. The transverse momentum spectrum is a particularly interesting observable for the precision program: in general processes, a majority of events is produced at relatively soft transverse momentum scales, and the physical behaviour at these soft scales is an highly nontrival prediction of perturbative QCD (and thus of the SM) which requires a resummation of logarithmically-enhanced contributions to all orders in the strong coupling \alpha_s. In this talk, I will introduce the new tool reSolve, a Monte Carlo differential cross-section and parton-level event generator whose main new feature is to add transverse momentum resummation to a general class of inclusive processes at hadron colliders. reSolve uses the impact parameter formalism, which is particularly well-suited to general studies. During the talk I will briefly review transverse momentum resummation in general, the peculiarities of its implementation in reSolve, and conclude commenting some of the possible phenomenological applications and future developments.
Friday 16 February 2018, 16:00 : Nassim Bozorgnia (Durham)
The dark halo of Milky Way-like galaxies
One of the major sources of uncertainty in the interpretation of dark matter direct and indirect detection data is due to the unknown astrophysical distribution of dark matter in the halo of our Galaxy. Realistic numerical simulations of galaxy formation including baryons have recently become possible, and provide important information on the properties of the dark matter halo. I will discuss the dark matter density and velocity distribution of Milky Way-like galaxies obtained from high resolution hydrodynamical simulations. To make reliable predictions for direct and indirect detection searches, we identify simulated galaxies which satisfy the Milky Way observational constraints. Using the dark matter distribution extracted from the selected Milky Way-like galaxies, I will present an analysis of current direct detection data, and discuss the implications for the dark matter interpretation of the Fermi GeV excess.
Friday 09 February 2018, 16:00 : Stefan Guindon (CERN)
Recent ATLAS Results in the search for ttH production
The observation of the production of ttH is an important test of the top Yukawa coupling of the Higgs boson, and one of the main goals of Run-2 of the Large Hadron Collider. It remains one of the most important characteristics of the Higgs boson which has yet to be directly observed. Many models of new physics beyond the Standard Model predict significant deviations of this coupling, which would be directly observable via the measurement of ttH production. The new ATLAS searches for ttH associated production at centre-of-mass energies of 13 TeV will be presented. The search targets several Higgs boson decays, including final states with multiple b-quarks, multileptons, and two photons.
Friday 02 February 2018, 16:00 : Yiannis Andreopoulos (UCL)
Deep Learning from Compressed Spatio-Temporal Representations of Data
Deep learning has allowed for (and incentivised) researchers to look at data volumes and processing tasks that have previously only been hypothesized. While the first-generation of deep supervised learning achieved significant advances over shallow learning methods, it is increasingly becoming obvious that many approaches were naive in their design and we are only scratching the surface of what is possible. The notion of strong supervision (i.e., the use of labels during training) is impractical and easy to fool by adversarial examples and, perhaps most importantly, operating with uncompressed samples (e.g., input image pixels of video or audio) does not scale. For instance, data generated from visual sensing in Internet-of-Things (IoT) application contexts will occupy more than 82% of all IP traffic by 2021, with one million minutes of video crossing the network every second [Cisco VNI Report, Jun. 2017]. This fact, in conjunction with the rapidly-increasing video resolutions and video format inflation (from standard to super-high definition, 3D, multiview, etc.), makes the scale-up of deep learning towards big video datasets unsustainable. To address this issue, we propose to go beyond the pixel representations and design advanced deep learning architectures for classification and retrieval systems that directly ingest compressed spatio-temporal activity bitstreams produced by: (i) mainstream video coders and (ii) neuromorphic vision sensing cameras. By exploiting the compressed nature of our inputs, our approach can deliver 100-fold increase in processing speed with comparable classification or retrieval accuracy to state-of-the-art pixel-domain systems and has the potential to be extended to self-supervised deep learning. The talk will explain the key steps of our approach and can motivate researchers to think carefully about the sensing and supervision modalities of their problems prior to embarking on the use of deep learning tools for data analysis. Related paper: https://arxiv.org/abs/1710.05112
Friday 26 January 2018, 16:00 : Jonathan Davis (Kings)
CNO Neutrino Grand Prix: The race to solve the solar metallicity problem
Several next-generation experiments aim to make the first measurement of the neutrino flux from the Carbon-Nitrogen-Oxygen (CNO) solar fusion cycle. This will provide crucial new information for models of the Sun, which currently are not able to consistently explain both helioseismology data and the abundance of metal elements, such as carbon, in the solar photosphere. The solution to this solar metallicity problem may involve new models of solar diffusion or even the capture of light dark matter by the Sun. I look at how soon electronic-recoil experiments such as SNO+, Borexino and Argo will measure the CNO neutrino flux, and the challenges this involves. I also consider experiments looking for nuclear-recoils from CNO neutrinos, which requires sensitivity to very low energies, and discuss how the same technology is also key to direct searches for sub-GeV mass dark matter.
Friday 19 January 2018, 16:00 : Chris Backhouse (UCL)
New results from NOvA
The phenomenon of neutrino oscillations, which implies that neutrinos are not massless as we had previously believed, raises a wealth of new and intriguing questions. What is the ordering of the neutrino mass states? Might neutrino oscillations violate matter/antimatter symmetry? What structure, if any, does the neutrino mixing matrix have? The NOvA experiment directly addresses these questions by measuring the changes undergone by a powerful neutrino beam over an 810 km baseline, from its source at Fermilab, Illinois to a huge 14 kton detector in Ash River, Minnesota. I will give a brief overview of neutrino oscillations, then present updated NOvA measurements of the disappearance of muon neutrinos and their transformation into electron neutrinos, the implications of these results, and prospects for the future.
Friday 15 December 2017, 16:00 : Silvia Pascoli (Durham)
Going beyond the standard 3-neutrino mixing scenario
Although the standard 3 neutrino mixing scenario fits very well most existing data, the possibility that new effects, e.g. sterile neutrinos, NSI…, exist is still open. In this talk I will focus mainly on sterile neutrinos. They are neutral fermions which do not have Standard Model interactions but mix with light neutrinos. They constitute a minimal extension of the standard model. They have interesting signatures in cosmology and in laboratory searches. I will review the current situation, discussing the hints for their existence and their tests in neutrino oscillation, beam dump, SBN, DUNE and other experiments. I will conclude presenting a possible explanation of the MiniBooNE excess which is compatible with current data.
Friday 08 December 2017, 16:00 : Mauricio Bustamante (Niels Bohr Institute)
Heaven-sent neutrino interactions from TeV to PeV
Neutrino interactions are vital to particle physics and astrophysics. Yet, so far, they had remained unprobed beyond neutrino energies of 350 GeV. Now, thanks to the discovery of high-energy astrophysical neutrinos by IceCube, we have measured neutrino-nucleon cross sections from a few TeV up to a few PeV. We did this by using the Earth itself as a target: neutrino interactions with matter inside the Earth imprint themselves on the distribution of neutrino arrival directions at the detector. When measuring high-energy particle interactions, the sky is the limit.
Friday 01 December 2017, 16:00 : Anthony Hartin (UCL)
Strong field QED effects in the quantum vacuum generated by laser-electron interactions
When considered in a non-perturbative QFT, the interactions of intense lasers and relativistic charged particles lead to novel strong field QED effects amenable to experimental tests with available technology. I review the theory and simulation strategy necessary in order to design discovery experiments for these effects. The intense laser field is taken into account exactly in the bound Dirac equation whose solutions are required for different field configurations. The simulation strategy requires a PIC code combined with a monte-carlo of the quantum interactions. The workhorse process for such experiments is the trident process which I will also review.
Friday 24 November 2017, 16:00 : Mitesh Patel (Imperial)
Anomalous measurements at the LHCb experiment
A number of measurements of B-meson decays made at the LHCb experiment give anomalies with respect to the predictions of the Standard Model. The status of the relevant measurements, the connections between them and their theoretical interpretation will be discussed, along with the prospects for the future.
Friday 17 November 2017, 16:00 : Pontus Stenetorp (UCL)
DIS CDT seminar: Artificial Intelligence for Reading the Scientific Literature
We are currently experiencing an unprecedented increase of interest in applying Artificial Intelligence methods for various tasks. Spurred on advances using modern reincarnations of neural networks – Deep Learning – machine learning-based approaches have seen recent successes in complex games like Go, autonomous vehicles, and language-based question answering. In this talk I will present the current state of the art in Natural Language Processing (NLP) and its limitations. I will then relate this to previous and ongoing efforts in applying NLP-methods to the scientific literature – allowing scientists to cope with an ever increasing number of publications. Primarily I will cover ongoing work from the UCL Machine Reading group on text-based multi-step inference applied to the biomedical literature and a collaboration with the UCL Space & Climate Physics to track astronomical measurements in the astrophysics literature.
Friday 10 November 2017, 16:00 : John LoSecco (Notre Dame)
The History of the Atmospheric Neutrino Anomaly
This talk covers the early days of particle astrophysics when it was slowly realized that the major background to proton decay, atmospheric neutrino interactions, were an important discovery in their own right.
Friday 03 November 2017, 16:00 : Moritz Backes
Recent results from Supersymmetry Searches at ATLAS
This talk discusses a selection of the latest ATLAS results for searches for supersymmetric (SUSY) particles performed with pp collisions at a centre-of-mass energy of 13 TeV, using the full 2015 and 2016 dataset. Weak and strong production of SUSY particles in both R-Parity conserving and violating scenarios are considered assuming either prompt decays or longer-lived states.
Friday 27 October 2017, 16:00 : Jens Dopke (RAL)
Reading charcoal: Using HEP detectors to decipher papirii
I will introduce the problem of reading ancient documents, in this particular project working on reconstruction of text from scrolls found in Herculanuem near Mt. Vesuvis. These are made from papyrus and have been written on using organic inks. After the eruption of Mt. Vesuvius in 79 AD, these been slow-cooked under exclusion of oxygen, and hence turned into lumps of charcoal. All attempts of unrolling these subjects have been destructive. First indications show that dark-field x-ray imaging allows to make the ink of these documents visible and there is good hope that this assumption holds for multi-layered documents that cannot be unrolled. I will introduce dark-field x-ray imaging, report on where we are with the project, what's lacking at the moment and where we plan to get to within the next year(s).
Friday 20 October 2017, 15:00 : Valerio Dao: NOTE: unusual time and location: Harrie Massey
Physics with Hbb at ATLAS
Since its discovery in 2012 the Higgs boson particle has opened new possibilities to further improve our understanding of the Standard Model physics landscape. Precise characterisation of its production and its properties could be used to indirectly probe for new physics effects and, at the same time, the decay of new particles could directly lead to signatures with a SM-like Higgs boson in the final state. The decay of the Higgs boson into a b-quark pair has, in this respect, a key role; while experimentally very challenging, the large branching ratio makes it the largest contributor to the total width of the Higgs boson and, thanks to the large rate, provides the best chance to see evidence for new phenomena that lead to deviations at high pT. Recently the ATLAS collaboration achieved an important milestone by obtaining the evidence for the H->bb decay in the associated production of the Higgs boson and a vector boson using Run2 data at 13 TeV. This seminar will mainly focus on this recent result discussing the experimental challenges that have been faced to extract the signal from a very large background. In addition, key examples of how H->bb signature could be used for directly searching for new physics effects will be given.
Friday 01 September 2017, 16:00 : Evan Niner (Fermilab)
Deep Learning Applications of the NOvA Experiment
The NOvA experiment is a long-baseline neutrino oscillation experiment that uses two detectors separated by 809 kilometers to measure muon neutrino disappearance and electron neutrino appearance in the beam produced at Fermilab. These oscillation channels are sensitive to unknown parameters in neutrino oscillations including the mass hierarchy, θ23, and CP violation. In this talk I will focus on the development and application of deep learning algorithms to the task of event reconstruction and classification in NOvA. These algorithms, adapted from computer vision applications, resulted in a performance gain equivalent to a 30% increase in exposure in the 2016 analysis. I will also look at future deep learning applications.
Wednesday 12 July 2017, 16:00 : Steven Prohira (Kansas University)
Radio interactions with particle-shower plasmas, with implications for high-energy astro-particle physics
The collision of a high-energy particle with stationary matter, such as ice, results in a shower of secondary particles. As these secondary particles traverse the interaction medium, cold ionization electrons are produced. For high enough primary particle energies, this cloud of ionization electrons is dense enough to form a tenuous plasma which may reflect radio-frequency (RF) energy. As such, RF scatter techniques have been proposed as a robust technology to remotely detect such high-energy particle interactions by illuminating a detection volume with RF energy and remotely monitoring the same volume for a reflected RF signal from a particle-shower plasma. Though the Telescope Array RADAR (TARA) experiment, the first dedicated attempt at the radio-scatter method, yielded no results, the intriguing lack of signal has caused renewed interest in the technique. Surprisingly, a dedicated laboratory measurement of RF scatter from particle-showers has never been performed at high (GeV) energies. This talk will discuss experimental results from TARA and will detail a laboratory measurement to take place at the End-Station facilities at SLAC to quantify the parameters of the particle-shower plasma. A high-energy electron beam will be fired into a large target to initiate a shower from which RF will be reflected. The observables from this measurement will be discussed, along with outlook.
Friday 23 June 2017, 16:00 : Prof. Tsutomu Yanagida (IPMU, U. of Tokyo)
The Origin of Matter in the Universe
Paul Dirac proposed the baryon symmetric universe in 1933. This proposal has become very attractive now since it seems that all pre -existing asymmetry would have been diluted if we had an inflationary stage in the early universe. However, if our universe began baryon symmetric, the tiny imbalance in numbers of baryons and anti-baryons which leads to our existence, must have been generated by some physical processes in the early universe. In my talk I will show why the small neutrino mass is a key for solving this long standing problem in understanding the universe we observe today.
Friday 16 June 2017, 16:00 : Sarah Bridle (Manchester)
Gastrophysics: Food security for astro/physicists
In this seminar I will describe my motivations for being interested in food research, and my belief that STFC researchers bring a lot of relevant skills to the challenges of providing food that consumers want. The STFC Food Network+ aims to bring STFC researchers and facilities together with food researchers and industry, through network meetings and funding for new projects. Some background about food: There is an impending perfect storm of pressure on our food production system, with increasing population and changing consumer tastes, in the face of rising temperatures and extreme weather events. Tim Gore, head of food policy and climate change for Oxfam, said "The main way that most people will experience climate change is through the impact on food: the food they eat, the price they pay for it, and the availability and choice that they have.". Yet, at the same time, food production is a bigger contributor to climate change than transport. A 2014 Chatham House report states "Dietary change is essential if global warming is not to exceed 2C.".
Friday 02 June 2017, 16:00 : Edward Daw (Sheffield)
Axions and Axion-Like Particles
In recent years, searches for new physics beyond the standard model have focussed on the electroweak scale, using channels that are reachable in high energy accelerators and through induced interactions of hypothesised electroweak-scale relic particles (WIMPs) in tonne-scale direct search experiments. A second possibility which is scientifically just as well-motivated is that the dark matter consists of much lighter pseudoscalars, called axions, whose origin is in low energy quantum chromodynamics, whose coupling to ordinary matter is feeble because their low mass, and consequently their feeble couplings to other particles, renders them 'invisible' to ordinary accelerator searches. The same feeble couplings mean that axions have very long decay times, so that relic axions generated in the very early Universe may be the dark matter evidenced through its gravitational effects today. I shall survey the field of axion-sector experimental searches. Experiments that look directly for the axion itself attempt to induce dark matter axions to convert into microwave photons in closed electromagnetic resonators. Other experiments identify other by-products of the symmetry breaking mechanism that may have yielded axions; these other by-products are sometimes called ALPs (axion-like particles), and experiments to search for ALPs typically use a 'light shining through a wall' technique. Overall this is an exciting and growing field. I will also discuss some of my own work on axion searches with the ADMX experiment, aimed towards improvements in the sensitivity and search rate of such experiments by means of novel modifications to the resonant detector design.
Friday 26 May 2017, 16:00 : Leigh Whitehead (CERN)
Sterile Neutrino searches with MINOS and MINOS+
Three-flavour neutrino oscillations have proved very successful in describing the observed neutrino oscillation data. However, there are also some anomalies, including the excesses of appeared electron neutrino interactions in LSND and MiniBooNE, and a sterile neutrino state at a larger mass-splitting scale can provide an explanation for these results. The MINOS/MINOS+ experiment was a long-baseline neutrino experiment in the US, collecting beam and atmospheric neutrino interactions from 2003 until 2016. MINOS was optimised for the study of muon neutrino disappearance in the NuMI beam at Fermilab. The continuation of the experiment with a medium energy beam configuration is called MINOS+. A sterile neutrino in MINOS/MINOS+ would appear as a modulation on the three-flavour oscillations. A search for sterile neutrinos has been performed using charged-current and neutral-current interactions in two detectors separated by a distance of 734km. The inclusion of two years of MINOS+ data and an improved fit method provides a much increased sensitivity over the previous MINOS result that was combined with Daya Bay.
Friday 19 May 2017, 16:00 : Gabriel Facini (UCL)
Searches for new phenomena with the ATLAS detector
Many theories beyond the Standard Model (BSM) predict new phenomena accessible by the LHC which prevent the need of fine-tuning of the Higgs Boson mass or expand the gauge sectors of the SM or the nature of Dark Matter for example. Searches for new physics models are performed using the ATLAS experiment at the LHC focusing on exotic signatures that can be realized in serval BSM theories. The results reported use the pp collision data sample collected in 2015 and 2016 by the ATLAS detector at the LHC with a centre-of-mass energy of 13 TeV.
Friday 05 May 2017, 16:00 : Christopher McCabe (GRAPPA)
Low energy signals in xenon detectors: from supernova neutrinos to light dark matter
One of the major achievements of the LUX collaboration was to accurately calibrate xenon dark matter direct detection experiments to sub-keV energies. This means that reliable predictions of low-energy signals can now be performed. In this talk, I'll explore new low-energy signals from neutrinos and low-mass dark matter that could be measured with the forthcoming generation of multi-tonne xenon detectors. Based primarily on arXiv:1606.09243 and arXiv:1702.04730.
Friday 28 April 2017, 16:00 : Tommi Tenkanen (QMUL)
Observational properties of feebly interacting dark matter
Can dark matter properties be constrained if dark matter particles interact only feebly with the Standard Model fields? The answer is yes. By studying both cosmological and astrophysical constraints, we show that stringent constraints on dark matter properties can be derived even in case the dark matter sector is practically uncoupled from the Standard Model sector. By taking the Higgs portal model as a representative example, we study in detail scenarios where the hidden sector does not thermalize with the Standard Model sector and, among other things, derive a lower bound on dark matter self-interaction strength.
Friday 21 April 2017, 16:00 : Jonas Lindert (IPPP Durham)
High-precision predictions for V+jet production
In LHC searches for Dark Matter one of the dominant systematic uncertainties arises from the determination of the irreducible Z(->vv)+jet background. The modelling of this background relies on accurate measurements of V+jet production processes with visible final states and their extrapolation to the signal region via a global fit based on high-precision Standard Model predictions for pp->W+jets, pp->Z+jets and pp->gamma+jets including higher-order QCD and EW corrections. I will present such predictions with a focus on the mandatory determination of robust estimates of remaining theoretical uncertanties at the percent level and their correlation amongst processes.
Friday 07 April 2017, 16:00 : Luise Skinnari(Cornell)
Track-triggering at CMS for the High-Luminosity LHC
The high luminosity upgrade of the LHC, scheduled for 2024-2025, will increase the luminosity by a factor of 10 beyond the original LHC design. The resulting large datasets will allow precise measurements of Higgs properties, searches for rare processes, and much more. To cope with the challenging environment from the high luminosity, significant upgrades will be required for the LHC experiments. A key upgrade of the CMS detector is to incorporate tracking information in the hardware-based trigger. We are exploring different strategies for performing the hardware-based track finding, including a fully FPGA-based approach. I will give an overview of the CMS track trigger upgrade, describe its expected performance, and show results from developments of a hardware demonstrator system.
Friday 31 March 2017, 16:00 : Andreas Warburton (McGill/[UCL])
Searches for New Physics on the Intensity Frontier: The Belle II Experiment
The Belle II collaboration comprises over 600 physicists from 23 countries building a detector on the high-luminosity SuperKEKB electron-positron collider in Japan. The detector, a successor to the successful BaBar and Belle experiments, has capabilities complementary to efforts with similar goals at the LHC and is the latest experimental tool in the now three-decade-long B-factory era. I will outline the prospects for discoveries of new physics at Belle II and discuss the latest status and schedule of the accelerator and detector upgrades, currently in progress.
Friday 24 March 2017, 16:00 : Jose No (KCL)
Beyond simplified models for dark matter searches @LHC: Making a case for the pseudoscalar portal
The Higgs sector is a well-motivated portal to dark matter (DM). I discuss scalar/pseudoscalar portal models for DM, as a powerful tool to exploit the complementarity between LHC searches and direct/indirect DM detection experiments, and their connection to Higgs physics. I then analyze the shortcomings of so-called "simplified DM models" in this context, highlighting the key physics these models fail to capture and its impact on LHC searches.
Friday 10 March 2017, 16:00 : Mark Lancaster (UCL)
The Fermilab Mu2e Experiment
In the SM the only mechanism to violate charged lepton flavour conservation is via neutrino oscillations which results in a branching rate for neutrinoless muon interactions of order 10^-50. As such any observation of a neutrinoless muon interaction would be evidence of new physics. In this talk I will describe the Fermilab Mu2e experiment that is seeking to detect the neutrinoless conversion of a muon to an electron in the field of a nucleus using 10^20 muons with a branching ratio sensitivity down to 6x10^-17: a factor of 10^4 better than the previous limit which allows the experiment to probe new physics mass scales up to 8000 TeV, well beyond that probed by direct searches at the LHC.
Friday 03 March 2017, 16:00 : Fady Bishara (Oxford)
The next frontier for Higgs couplings
The LHC experiments have, so far, measured many of the Higgs couplings and found excellent agreement with the minimally-realized electroweak symmetry breaking (EWSB) mechanism in the Standard Model. Nevertheless, there are important couplings that are currently out of reach which test the nature of EWSB and fermion mass generation. This talk will focus on two of them: the charm Yukawa and the hhVV couplings. A measurement of the first would confirm that the 125 GeV Higgs which gives mass to third generation fermions also gives mass the second generation. To this end, I will describe recent ideas to probe the charm Yukawa coupling in particular by using Higgs differential distributions. In the second case, deviations of the hhVV coupling from the SM would signal non-linearity and herald new physics at higher energies. As I will show, double Higgs production in VBF at the LHC can provide such a test at the 20% level by the end of the high luminosity run while a percent level constraint can be obtained at a future circular collider.
Friday 10 February 2017, 16:00 : Mercedes Paniccia (Geneva)
The Alpha Magnetic Spectrometer on the International Space Station: the era of precision cosmic-ray physics
The Alpha Magnetic Spectrometer (AMS) is the most powerful and sensitive cosmic-ray detector ever deployed in space to produce a complete inventory of charged particles and nuclei in cosmic rays near Earth in the energy range from GeV to few TeVs. Its physics goals are the study of cosmic-ray properties, indirect search for Dark Matter and direct search for primordial antimatter. The improvement in accuracy over previous measurements is made possible through its long duration time in space, large acceptance, built in redundant systems and its thorough pre-flight calibration in the CERN test beam. These features enable AMS to analyse the data to an accuracy of ~1%. Since its installation on the International Space Station in May 2011, AMS has collected more than 90 billion cosmic-ray events and has produced precision measurements of electron, positron, proton, antiproton, He and light nuclei fluxes and of their ratios in cosmic rays of energy ranging from GeV to few TeVs. The percent precision of the AMS results challenges the current understanding of the origin and of the acceleration and propagation mechanisms of cosmic rays in the galaxy and thereby requires new theories to be developed by the physics and astrophysics community. In this talk, after a brief introduction to cosmic-ray physics, I will present the latest AMS results based on its first five years of data taking, pointing out their implication for cosmic-ray modelling and for Dark Matter searches.
Friday 27 January 2017, 16:00 : Jan Kretzschmar (Liverpool)
Precision W and Z cross-sections and the first measurement of the W boson mass at ATLAS
The Large Hadron Collider has produced more W and Z bosons than any other collider before. The large samples of leptonic bosons decays provide a unique opportunity for precision studies of the strong interaction and the electroweak interaction. These studies are facilitated by the high experimental precision achieved in these measurements after a careful detector calibration. New cross-section measurements allow novel insights into the proton structure. Specifically, strong constraints of the poorly known strange-quark distribution are demonstrated in a NNLO QCD analysis. The mass of the W boson is a key parameter in the global electroweak fit to test the overall consistency of the Standard Model. The first complete W-boson mass measurement at the LHC is presented, which requires an extraordinary control over both experimental and theoretical effects.
Friday 20 January 2017, 16:00 : Jon Butterworth (UCL)
Making measurements and constraining new physics at the LHC
Particle-level differential measurements made in fiducial regions of phase-space at colliders have a high degree of model-independence. These measurements can therefore be compared to BSM physics implemented in Monte Carlo generators in a very generic way, allowing a wider array of final states to be considered than is typically the case. A new method providing general consistency constraints for Beyond-the-Standard-Model (BSM) theories, using measurements at particle colliders, is presented.
Monday 09 January 2017, 16:00 : Jordan Myslik (LBNL) — NOTE: UNUSUAL DATE/PLACE!!! Physics E7
The MAJORANA DEMONSTRATOR
Abstract: The MAJORANA DEMONSTRATOR is an experiment searching for neutrinoless double-beta decays of germanium-76. This lepton-number-violating process is connected to the nature, absolute scale, and hierarchy of the neutrino masses. The MAJORANA DEMONSTRATOR consists of two modular arrays of natural and 76Ge-enriched germanium detectors totalling 44.1 kg, located on the 4850' level of the Sanford Underground Research Facility in Lead, South Dakota. While seeking to demonstrate backgrounds low enough to justify a tonne-scale experiment and the feasibility of its construction, the MAJORANA DEMONSTRATOR's ultra-low backgrounds and excellent energy resolution also allow it to probe additional physics beyond the Standard Model. This talk will discuss the physics and the design elements of the MAJORANA DEMONSTRATOR, its results to date, and its future prospects, along with the plans for a future 1 tonne germanium-76 neutrinoless double-beta decay experiment.
Friday 16 December 2016, 16:00 : Eric Jansen (CMCC)
Numerical ocean modelling and the case of MH370
In this seminar I will take you on a small excursion into the world of numerical ocean modelling, and more specifically: data assimilation. The field of ocean data assimilation is concerned with updating ocean forecasts using daily observations of the ocean state in order to produce more accurate forecasts for the next day. Observations are incredibly diverse: buoys and moorings that measure time-series in a specific location, autonomous vehicles that can dive and resurface, coastal radar systems that measure currents, but also satellite measurements of the sea surface. I will discuss how we can assimilate these observations using a Kalman filter technique, applied to an ensemble of ocean model realisations. The second part of the seminar will focus on how we can apply these ocean models to a real-world problem: the disappearance of flight MH370. Malaysia Airlines flight 370 was a commercial flight from Kuala Lumpur to Beijing, which vanished without a trace in March 2014. It is believed to have crashed somewhere off the west-coast of Australia, but despite extensive search operations the main wreckage was never found. Now, more than two years later, parts belonging to the aircraft are found 4000km away on the African coast. How can we use this information to locate the crash site?
Wednesday 14 December 2016, 16:00 : Prof. Un-Ki Yang (Seoul National Univ.) — UNUSUAL LOCATION: E7
Searches for heavy neutrinos at the LHC
Heavy neutrinos occur in various extensions of the Standard Model (SM) and may explain the observed small masses of the SM neutrinos via several possible variants of the seesaw mechanism. We present results on searches for heavy neutrinos at the LHC. Searches are performed in the dilepton+jets channel and the trilepton channel, and results are interpreted in terms of the Left-Right Symmetrical model, Type-I, and the Type-III seesaw mechanism.
Friday 02 December 2016, 16:00 : Clare Burrage (Nottingham)
Detecting Dark Energy with Atom Interferometry
I will discuss the possibility that the nature of the dark energy driving the observed acceleration of the Universe on giga-parsec scales may be determined first through metre scale laboratory based atom interferometry experiments. I will begin by discussing why our attempts to solve the cosmological constant problem lead to the introduction of new, light degrees of freedom. In order to be compatible with fifth force constraints these fields must have a screening mechanism to hide their effects dynamically. However, this doesn't mean that they are impossible to detect. I will discuss the constraints that arise from a range of laboratory experiments from precision atomic spectroscopy to collier physics. Finally I will show that atom-interferometry experiments are ideally suited to detect a large class of the screening mechanisms known as chameleon. This will then allow us to either rule out large regions of the chameleon parameter space or to detect the force due to the dark energy field in the laboratory.
Friday 25 November 2016, 16:00 : Lucian Harland-Lang (UCL))
Photon-photon collisions at the LHC
I will discuss the possibilities for using the LHC as a photon colliding machine. The colour singlet nature of the photon means that it can readily lead to exclusive or semi-exclusive events, with limited or no extra particle production in the final state. I will show how such processes provide a well understood environment in which to test the Standard Model and search for BSM physics. This is particularly relevant at the LHC, where these events may be selected using dedicated 'proton tagging' detectors installed in association with ATLAS and CMS. I will also consider the impact of photon-initiated processes on the more common inclusive production modes at the LHC. Here, the photon must be included in addition to the quarks and gluons as a parton in the proton. I will demonstrate that, despite indications to the contrary from earlier studies, the photon parton density in the proton is in fact quite precisely known, and consider the implications for LHC (and FCC) phenomenology.
Friday 18 November 2016, 16:00 : James Currie (IPPP Durham)
Jet Production at NNLO
Jet production is one of the most ubiquitous yet important reactions at the LHC. Calculating higher order corrections to this observable allows us to gain sensitivity to phenomenological parameters such as the strong coupling and the gluon PDF as well as providing a rigorous test of QCD over a vast kinematic range. It is well known that calculating higher order corrections is a non-trivial task and requires a powerful method to subvert the various infrared singularities present in the calculation. I will introduce the Antenna Subtraction method as a means of calculating finite cross sections and present some recent results obtained by applying this method to the problem of jet production at NNLO.
Friday 11 November 2016, 16:00 : Marcel Vos (IFIC Valencia))
Future of top Physics
In this seminar I discuss the potential of planned facilities - lepton and hadron colliders - to bring our understanding of the top quark to the next level. The focus is on measurements of top quark properties and interactions with exquisite sensitivity for signals of the physics that lies behind the Standard Model: the top quark mass measurement, the search for FCNC interactions involving top quarks and those characterizing the top quark couplings to the gluon, EW gauge bosons and the Higgs boson. The precision that new experiments can achieve in these key measurements is compared to the current state-of-the-art and the expectation for the complete LHC programme.
Friday 04 November 2016, 16:00 : Marco Pappagallo (University of Bari & INFN)
exotic hadronic states in LHCb (Penta/Tetra quarks)
The latest years have seen a resurrection of interest in searches for exotic states. Using the data collected at pp collisions at 7 and 8 TeV by the LHCb experiment, unambiguous new observations of exotic charmonia hadrons produced in B and Lambdab decays are presented. Results of a search for a tetraquark state decaying to Bs pi+- are reported as well
Friday 28 October 2016, 15:00 : Jeff Foreshaw (Manchester) — NOTE EARLIER TIME AND DIFFERENT LOCATION E7!
"Probabilities and Signalling in Quantum Field Theory"
I will talk about a way to compute transition probabilities that works directly that the level of probabilities and not amplitudes. The formalism guarantees that the initial and final states are always linked by a chain of retarded propagators and it has a nice diagrammatic approach.
Wednesday 26 October 2016, 14:00 : Blackett Lab LT1, Imperial College London
Gravitational Wave Symposium
A half day physics meeting on gravitational waves. Leading scientists from the Astro and HEP communities will present the recent discovery of gravitational waves and future prospects of this new field in fundamental physics. The talks are aimed at interested researchers at all levels across all of London's universities. See: https://indico.cern.ch/event/563302
Friday 21 October 2016, 13:00–17:00 : 2nd year PhD Students — NOTE different time/place!
2nd year PhD talks!
1pm-5pm in E3/E7
Friday 14 October 2016, 16:00 : Sally Shaw (UCL)
The LUX & LZ Dark Matter Experiments: WIMP Hunting in the Black Hills
Discovery of the nature of dark matter is recognised as one of the greatest contemporary challenges in science, fundamental to our understanding of the universe. Weakly Interacting Massive Particles (WIMPs) that arise naturally in several models of physics beyond the Standard Model are compelling candidates for dark matter. The LUX experiment, operated 1.5 km underground in the Davis Cavern of the SURF laboratory, USA, is the world leader in the direct hunt for WIMPs. I will present the latest LUX results and describe the unique in-situ calibrations that have allowed low energy nuclear recoil measurements in liquid xenon, greatly enhancing our sensitivity to low mass WIMPs. I will also discuss the status of LUX's multi-tonne successor, LZ, demonstrating how its sensitivity is ideally matched to explore the bulk of the remaining theoretically favoured electroweak phase space towards galactic dark matter discovery.
Friday 07 October 2016, 16:00 : Leszek Roszkowski (Sheffield and NCBJ)
Particle dark matter: what it is and how to determine its properties
After a brief introduction I will comment on some recently discussed WIMP candidates for dark matter. In particular, I will provide arguments that a supersymmetric neutralino with mass around 1 TeV and well defined properties has emerged as an attractive and experimentally testable candidate for dark matter in light of measured Higgs boson properties and ensuing implications for supersymmetry searches at the LHC. On the other hand, for a wide range of WIMPs, and independent of any specific particle physics scenarios, when a dark matter signal is eventually measured in direct or indirect search experiments, or both, it may prove rather challenging to work out ensuing WIMP properties.
Tuesday 19 July 2016, 16:00 : Mark Scott (TRIUMF)
T2K and NuPRISM: An experimental solution to the problems of neutrino interactions in long baseline neutrino experiments
T2K is a long baseline neutrino experiment in Japan that has published world-leading measurements of neutrino oscillations and has the potential to find evidence of CP violation in the lepton sector. The first half of this talk will give a brief description of neutrino oscillations before presenting the latest neutrino oscillation results from the T2K experiment. The difficulties of neutrino interaction modelling and how this affects oscillation analyses will be discussed before introducing the NuPRISM detector. NuPRISM is an intermediate water Cherenkov detector that continuously samples the neutrino beam across a range of off-axis angles. This detector can remove the problems associated with neutrino interactions by recreating the oscillated far detector spectrum using near detector data. The second half of the talk will describe the NuPRISM analysis method, highlighting the unique abilities of the detector and will present an oscillation analysis that is insensitive to incorrect neutrino interaction modelling.
Friday 01 July 2016, 16:00 : Srubabati Goswami (Physical Research Laboratory, Navrangpura, Ahmedabad)
Probing the Leptonic CP violation in current and future experiments
In this talk I will summarize the current status of the CP phase in the lepton sector. I will discuss the main difficulties associated with the measurement of this parameter. In particular I will discuss the problem of parameter degeneracies. How the synergy between beam based and atmospheric neutrino experiments can help in the determination of the CP phase will be shown. I will also discuss the sensitivity of the DUNE experiment to measure this parameter.
Friday 17 June 2016, 16:00 : Andreas Korn (UCL)
Impressions from DM@LHC2016
I will give my personal impressions from the Dark Matter workshop DM@LHC2016. Slides from my own talk, titled "Latest results in the mono-jet and di-jet channels" will be reused and augmented by selected highlights from a spectrum of other talks.
Friday 10 June 2016, 16:00 : Suchita Kulkarni (HEPHY Viena)
Impact of LHC monojet searches on new physics scenarios
Dark matter searches at the LHC are exploring new models and new regime with every new result. I take a specific example of monojet dark matter searches at the LHC and sketch their impact on two dark matter scenarios. I discuss the complementarity of the results with the direct and indirect detection searches. The two models under considerations are, dark matter motivated explanations of the 750 GeV diphoton excess and dark matter interactions with the Standard Models involving derivative couplings.
Friday 03 June 2016, 16:00 : Miguel Arratia (Cambridge)
"Inelastic proton-proton cross-section at 13 TeV with ATLAS,
The inelastic proton-proton cross-section is a basic property of proton interactions, yet it cannot be calculated from first principles. In 1973 experiments at CERN discovered that it rises with energy—as Heisenberg had predicted. Today, the LHC sets the energy frontier at 13 TeV, and theory predicts an asymptotic "black-disk" limit. In this seminar, I will present a recent measurement of the inelastic cross-section with the ATLAS detector. One of the key ingredients for this study is the rate at which the LHC produces proton collisions—the luminosity. I will illustrate how we measure the LHC luminosity and achieve a percent-level accuracy. Finally, I will describe how this result relates to one of the open questions in cosmic ray physics. "
Friday 27 May 2016, 16:00 : Heidi Sandaker (Oslo)
"The AEGIS experiment"
The AEGIS experiment, situated at the Antimatter Decelerator (AD) at CERN aim to measure for the first time the Earth's gravitational acceleration of anti-Hydrogen. To achieve this the AEGIS collaboration plan to produce a pulsed cold anti-Hydrogen beam and send it through a classical Moire deflectometer before the anti-Hydrogen is detected by a system of position sensitive detectors. Beyond the gravitational measurements, AEGIS will also provide long-term anti-matter spectroscopy measurements. This talk will present both the scope and current status of the AEGIS experiment as well as discuss future measurements.
Friday 13 May 2016, 16:00 : Kazuki Sakurai (Durham)
"Search for Sphalerons: LHC vs. IceCube"
In a recent paper, Tye and Wong (TW) have argued that sphaleron-induced transitions in high-energy interactions should be enhanced compared to previous calculations, based on a construction of a Bloch wave function in the periodic sphaleron potential and the corresponding pass band structure. In this talk, I present our resent work studying future prospects of observing sphaleron transitions at high energy hadron colliders and IceCube, based on TW results. I first discuss the production rate and possible signatures of the sphaleron-induced processes at high energy hadron colliders. We recast the early ATLAS Run-2 search for microscopic black holes to constrain the rate of sphaleron transitions at 13 TeV LHC. In the second half of the talk, I will discuss the possibility of observing sphaleron transitions induced by cosmogenic neutrinos at IceCube. I calculate the sphaleron event rate at IceCube and discuss the signature of such events. Finally I compare the performance of the sphaleron searches at the LHC and IceCube and find complementarity of these experiments.
Friday 06 May 2016, 16:00 : David Hesketh (Tradinghub)
Financial Rogue Trading and Market Abuse – Finding the Signal in the Noise
The actions of Jerome Kerviel and Kweku Adobole resulted in billion dollar losses for SocGen and UBS. Similarly, the FX and LIBOR scandals have seen banks fined in the hundreds of millions of dollars. This presentation asks why the banks struggle to identify the signals left by these traders. Topics covered include what the traders were doing in each case, the big data statistical approaches used by the anti-terrorist sections of the CIA and MI5 that have been co-opted by the banks and the unique statistical performance metrics used by TradingHub to solve the problem.
The amount of data from various sources: intelligence networks, traffic monitoring, social media, mobile phones, genomics, health etc) data now available for analysis has increased substantially in the last couple of years. However at the same time the requirements on the privacy of the underlying data have become more stringent requiring the need for algorithms to be developed that preserve the privacy of the underlying data while still extracting information with value (economic and otherwise). In this talk the algorithms and methodology being deployed by Privatar will be described.
Jason Mcfall (Privitar)
Privacy in the age of data science
The collection of vast data sets, and the availability huge compute power to analyse them, brings striking new threats to individual privacy. I'll talk through some of the risks and examples of privacy breaches, and survey some promising techniques for protecting data privacy by statistical means, and talk about why this is a hard problem to solve.
Friday 15 April 2016, 16:00 : Richard Amos (UCL/UCLH)
Targeting cancer with proton beams: developments at UCL Hospital
Proton beam therapy (PBT) offers potential clinical advantages over conventional radiotherapy for cancer due to the physical characteristics of charged particle interaction. As protons traverse patient anatomy they lose energy, becoming more densely ionising as they approach their end-of-range, at which point they stop. This manifests into dose deposition as a function of depth that increases to a maximum, the Bragg peak, towards their end-of-range, with no dose beyond. By choosing proton beams of initial energy such that the Bragg peak region is delivered at the depth of the clinical target volume, the therapeutic dose can be realized with reduced dose to surrounding healthy tissue compared to that delivered by photons. Reduced dose to surrounding tissue offers the potential for reduced acute toxicity and secondary cancer risk. These potential clinical advantages have led to a rapid growth in availability of PBT worldwide in recent years and such capability will soon be available at UCL Hospital (UCLH) for NHS patients indicated for protons.
This presentation will describe contemporary PBT technology and practice with particular emphasis given to recent developments and the current status of the UCLH PBT project.
Friday 08 April 2016, 16:00 : Nick Ryder (Oxford)
A SoliD seminar: The SoLid experiment: searching for neutrino oscillations within 10 m from a nuclear reactor
The SoLid collaboration aim to solve the reactor neutrino anomaly by determining whether it is due to oscillations to a new type of `sterile' neutrino. By measuring the anti-neutrino flux as a function of energy and distance between 5 and 10 m from a reactor core, a direct search for oscillations can be performed without relying on flux calculations. I will explain the reactor anomaly and other motivations for our experiment from unexpected structure in the reactor flux spectrum and nuclear safeguards. Performing a neutrino experiment so close to a reactor core presents a number of challenges. I will explain these challenges and the novel, highly segmented, composite scintillator anti-neutrino detector we have developed to overcome them. I will discuss the prototype detectors that we have built and our plans for a phased deployment of our full scale detector starting later this year.
Friday 18 March 2016, 16:00 : Richard Amos (UCL/UCLH) — POSTPONED TO 15/04/2016!
Friday 11 March 2016, 16:00 : IOP Practise TALKS!
3rd years!
Exciting new results!
Wednesday 09 March 2016, 16:00 : Prof. Chris Done (Durham University)
XXVI SPREADBURY LECTURE: Black Holes - Science fact, fiction or fantasy ?
Black holes are a key plot device in science fiction and fantasy: wormholes through space and time ! In this lecture I'll separate out the fact from the fiction, and talk about how black holes went from a speculative extension of Einstein's gravity to a mainstream observational science via the development of rockets at the start of the space age.
Friday 26 February 2016, 16:00 : Chiara Casella (Geneva)
The SAFIR (Small Animal Fast Insert for mRi) experiment
SAFIR - Small Animal Fast Insert for mRi - is a non conventional positron emission tomography (PET) detector for fast and simultaneous hybrid PET/MRI imaging on small animals. The PET detector is designed specifically to be used inside the bore of a commercial 7T pre-clinical magnetic resonance scanner and with ultra short acquisition durations of the order of a few seconds, to enable quantitative dynamic studies of fast biological processes (e.g. blood perfusion and cerebral blood flow with 15O tracers). To compensate for the statistics losses due to the short scan durations, SAFIR will use up to 500 MBq injected activities, one order of magnitude increase with respect to state-of-the-art preclinical systems. Beside the clear MR compatibility, severe important challenges have to be met by the SAFIR detector, mainly in terms of spatial resolution, timing resolution and sensitivity, high number of readout channels, high rate capability per channel and huge data throughput. The detector will rely on matrices of L(Y)SO crystals, optically coupled to SiPM arrays and readout by existing fast readout ASICs. The seminar will describe the SAFIR goal, its concept and the status of the project, with particular emphasis on the characterization measurements of the detector components, including the recent high rate tests performed with different ASICs for the SiPM readout.
Friday 19 February 2016, 16:00 : Uli Haisch (Oxford)
Indirect probes of Higgs effective theory
I review the existing indirect constraints on Wilson coefficients entering the Higgs effective theory. These limits are compared to the direct bounds obtained from LHC and LEP physics.
Friday 12 February 2016, 16:00 : Katharina Bierwagen (MIT)
Measurement of inclusive W/Z production cross sections and CMS performance at sqrt(s)=13 TeV
Precise measurements of W and Z boson production provide an important test of the SM and can be used to further constrain the Parton Distribution Functions. The production of W and Z bosons is well understood in the SM and the clean signatures of the decay particles provide an ideal test bench for the commissioning of the electron, muon and missing energy reconstruction algorithms in the LHC environment. The first results of inclusive W and Z production cross section measurements in pp collisions at sqrt(s)=13 TeV with the CMS detector are presented.
Friday 05 February 2016, 16:00 : Will Barter (CERN)
Recent LHCb measurements of electroweak boson production in Run-1
We present the latest LHCb measurements of forward Electroweak Boson Production using proton-proton collisions recorded in LHC Run-1. The seminar shall discuss measurements of the 8 TeV W & Z boson production cross-sections. These results make use of LHCb's excellent integrated luminosity determination to provide constraints on the parton distribution functions which describe the inner structure of the proton. These LHCb measurements probe a region of phase space at low Bjorken-x where the other LHC experiments have limited sensitivity. We also present measurements of cross-section ratios, and ratios of results in 7 TeV and 8 TeV proton-proton collisions. These results provide precision tests of the Standard Model.
The seminar shall also present a measurement of the forward-backward asymmetry (A_FB) in Z boson decays to two muons. This result allows for precision tests of the vector and axial-vector couplings of the Z boson, providing sensitivity to the effective weak mixing angle (sin^2(theta_W^eff)). The A_FB distribution visible in the LHCb acceptance is particularly sensitive to this angle, as the forward phase-space means that the initial state quark direction is better known than in the central region. This reduces theoretical uncertainties in extracting sin^2(theta_W^eff) from A_FB, and allows LHCb to make the currently most precise determination of sin^2(theta_W^eff) at the LHC.
Friday 22 January 2016, 16:00 : Prof. Andre Schoening (Heidelberg)
The Mu3e Experiment
The Mu3e experiment will search for the Lepton Flavour violating decay mu^+ -> e^+ e^+ e^- with unprecedented sensitivity of 1 out of 10^16 muon decays. This process is heavily suppressed in the Standard Model and any signal would be a clear sign of new physics. The Mu3e experiment is based on the new HV-MAPS detector technology which allows to build ultra-light and fast high resolution pixel detectors.
Friday 15 January 2016, 16:00 : Ben Allanach (Cambridge)
Anatomy of the ATLAS diboson excess
I shall discuss some measurements from 8 TeV Run I LHC data that showed an excess over 2 sigma with respect to Standard Model expectations. In particular, we shall review the ATLAS di-boson excess, and show a couple of new physics scenarios that explain the excess. If there is time, I shall then discuss some excesses in CMS data: in a W_R search, in a di-leptoquark search and a new physics explanation that has implications for neutrinoless double beta decay.
Friday 11 December 2015, 16:00 : Dr. Juan Rojo (Oxford)
Reduction strategies for the combination of PDF sets and the PDF4LHC15 recommendations
We discuss recently developed strategies for the statistical combination of PDF sets. These methods allow to combine the predictions of many different PDF sets into a single combined set, in terms of optimized sets of Hessian eigenvectors or Monte Carlo replicas. These combination strategies have been used for the implementation of the 2015 PDF4LHC recommendations, required for the estimation of the total PDF+alphas uncertainties for LHC cross-sections at Run II. We discuss the improvements as compared to the 2011 PDF4LHC prescription, and the impact of the new prescription on precision measurements at Run II.
Friday 04 December 2015, 16:00 : Dr. Sofiane Boucenna (Frascati)
Common Frameworks for DM and Baryogenesis
The nature of dark matter (DM) and the origin of the baryon asymmetry of the Universe (BAU) are two enduring mysteries in particle physics and cosmology. Many approaches have been developed to tackle them simultaneously leading to diverse models and paradigms, such as Asymmetric dark matter (ADM). In the first part of the talk I will give a (brief) overview of what has been done so far, highlighting the important concepts of these constructions. Then, in the second part, I will present a new framework relating the weakly interacting massive particles paradigm to ADM in a minimal and model-independent way.
Friday 27 November 2015, 16:00 : Frank Tackmann (DESY)
XCone: N-jettiness as an Exclusive Cone Jet Algorithm
I will discuss a new jet algorithm XCone, which is based on minimizing the event shape N-jettiness and has several attractive features. It is exclusive and always returns a given fixed number of jets. Well-separated jets are conical and practically identical to anti-kT jets of the same size. Overlapping jets are automatically partitioned by nearest-neighbor. In this way, XCone allows to smoothly transition between the resolved regime where all jets are well separated and the boosted regime where they overlap. It also inherits the theoretical factorization properties of the underlying N- jettiness variable. I will show examples of its application for dijet resonances, Higgs decays to bottom quarks, and all-hadronic top pairs.
Friday 20 November 2015, 16:00 : Katharine Leney (UCL)
Searching for Di-Higgs→bbττ at ATLAS
Several models for new physics predict TeV scale resonances that decay to pairs of Higgs bosons. This talk will review the ATLAS search for pair-produced Higgs bosons decaying to bbττ final states using 20 fb-1 proton-proton collision data with √s = 8 TeV and present the combination of all Run 1 ATLAS di-Higgs searches. I will also review the prospects for di-Higgs searches in the bbττ channel for Run 2 and beyond.
Friday 13 November 2015, 16:00 : Geoff Hall (Imperial)
Tracking and trigger upgrades of CMS
CMS will upgrade its trigger during the forthcoming 13 TeV run to maintain performance under increasingly challenging pileup conditions due to the success of the LHC machine in delivering high luminosity collisions. In the longer term, a new tracker is proposed for installation around 2023, which will for the first time provide data to be used in the L1 trigger. LHC data taking is expected to continue until well into the 2030 decade. The motives for these developments and progress towards them will be explained.
Friday 06 November 2015, 16:00 : Sarah Malik (Imperial)
Review of the status and future prospects of dark matter searches at colliders
Understanding the nature of dark matter is one of the most compelling, long standing questions in physics. Reports of possible dark matter signals from several direct detection experiments have further highlighted the need for independent verification from non-astrophysical experiments, such as colliders. I will review the results of searches for dark matter at CMS in Run 1 of the LHC, what we can expect in Run 2 and beyond, as well as recent developments in dark matter phenomenology.
Friday 30 October 2015, 16:00 : Doug Cowen (Penn State)
High Energy Atmospheric Neutrino Appearance and Disappearance with IceCube
The IceCube neutrino observatory, buried deep in the ice at the South Pole, has detected neutrinos that span over five orders of magnitude in energy. Fulfilling one of its original stated goals of discovering cosmological ultrahigh energy neutrinos, its large instrumented volume also provides us with a surprisingly powerful instrument for studying neutrino oscillations with an unprecedented statistical sample of energetic atmospheric neutrinos. In this presentation we will describe the IceCube detector and focus on its current and future atmospheric neutrino oscillation measurements with DeepCore, IceCube's low-energy in-fill array. We will also describe a new proposed low-energy extension, the Precision IceCube Next Generation Upgrade (PINGU), highlighting its ability to measure one of the remaining fundamental unknowns in particle physics, the neutrino mass hierarchy.
Friday 23 October 2015, 16:15 : Gavin Hesketh (UCL) — Auditorium XLG2 | Christopher Ingold Building
Life after the Higgs: Where next for particle physics?
The Large Hadron Collider, the world's largest particle accelerator, recently restarted after a 2 year break. Smashing protons together at almost twice the energies previously achieved, the data taken next few years may reveal more about the fundamental building blocks of the universe. But, with the discovery of the Higgs Boson in 2012, the theory of these building blocks, the Standard Model, was "completed". So what could we hope to learn next, either at the LHC or elsewhere? http://events.ucl.ac.uk/event/event:hbc-icokdek5-siapwd/life-after-the-higgs-where-next-for-particle-physics
Friday 16 October 2015, 16:00 : Louis Lyons (Oxford/Imperial)
"Statistical Issues in Searches for New Physics
Given the cost, both financial and even more importantly in terms of human effort, in building High Energy Physics accelerators and detectors and running them, it is important to use good statistical techniques in analysing data. This talk covers some of the statistical issues that arise in searches for New Physics. They include topics such as:
Blind analyses
Should we insist on the 5 sigma criterion for discovery claims?
$P(A|B)$ is not the same as $P(B|A)$.
The meaning of $p$-values.
Example of a problematic likelihood.
What is Wilks' Theorem and when does it not apply?
How should we deal with the `Look Elsewhere Effect'?
Dealing with systematics such as background parametrisation.
Coverage: What is it and does my method have the correct coverage?
Combining results, and combining $p$-values
The use of $p_0$ v $p_1$ plots.
This is an extended version of a talk given at the LHCP2014 Conference in New York in June 2014.
Friday 09 October 2015, 16:00 : Linda Cremonesi (UCL)
Neutrino interactions at the T2K near detector complex
The T2K long-baseline neutrino oscillation experiment observed electron neutrino appearance in 2011 and reported the first results of a search for electron anti-neutrino appearance in 2015. Systematic uncertainties relating to the models of neutrino interactions on atomic nuclei are increasingly problematic as the precision of oscillation measurements improves. Interaction cross-section measurements are therefore vital for the correct interpretation of neutrino data, and consequently reducing the uncertainties on the oscillation measurements. The near detector complex of T2K, with scintillating tracking detectors on-axis (INGRID) and a magnetised fine-grained tracking system off-axis (ND280), offers a unique opportunity to study neutrino interactions in the region of 0.6~1GeV. In this seminar I will report the latest cross-section measurements performed at ND280 and INGRID, which include muon neutrino charged current interactions on different targets (scintillator or water) and with various final states (inclusive, zero pion, one pion and coherent pion production). I will illustrate the value of this data and the difficulties that still remain.
Thursday 27 August 2015, 16:00 : Foteini Oikonomou (Penn State)
Note: Unsual time/date 2pm:
A multi-messenger quest for the sources of the highest energy cosmic rays
The sources of cosmic rays with energy exceeding 10^18 electron volts remain unknown, despite decades of observations. The discovery of the sources of these fascinating cosmic messengers, termed ultra-high energy cosmic rays (UHECRs), will unravel the workings of the Universe's most violent accelerators. I will discuss the constraints imposed by UHECR observations on their sources, focusing on the arrival direction distribution of the UHECRs detected at the Pierre Auger observatory. Constraints on the sources of UHECRs are also imposed by observations of the secondary particles (gamma-rays and neutrinos) that UHECRs produce during their propagation in the intergalactic medium. By way of illustration, I will present models of UHECR emission in blazars, and discuss the detectability of the signatures of such (hadronic) processes in blazar gamma-ray spectra. Finally, I will present the current status of experimental efforts to pin down the origin of UHECRs, by detecting UHECR, neutrino, and electromagnetic transient emission, through real-time coincidence searches, within a global, multi-messenger, alert network.
Friday 19 June 2015, 16:00 : Diego Aristizabal (Université de Liège)
Neutrino masses beyond the tree level
The standard approach to Majorana neutrino masses relies on the type-I seesaw, where neutrino masses are induced through tree level exchange of heavy right-handed neutrinos. Other scenarios, however, are possible and they typically offer testable predictions and in some cases even connections with dark matter. In this talk, after reviewing some well-known loop-induced neutrino mass models, I will discuss a generic approach for two-loop-induced neutrino masses. Finally, I will comment on some possible "generic" accelerator tests of these scenarios.
Friday 12 June 2015, 16:00 : Alex Martyniuk (UCL)
Search for diboson resonances at ATLAS using boson-tagged jets
With the advent of 13TeV proton-proton collisions at the LHC it is natural to trawl through this data searching for new resonances with the highest possible masses. Do we have any clues of what we might expect in this new energy regime? An ATLAS paper released in the dying moments of LHC Run-1* offers us a possible direction. This paper describes a search for a heavy resonance (either a W' or Kaluza-Klein Graviton) decaying into a diboson pair, leading to highly boosted hadronic jets. By exploiting jet substructure techniques to pick the signal out of the dominant QCD backgrounds the analysis can take advantage of the high branching fraction enjoyed by the fully hadronic decay channel. In this seminar I will describe the jet-substructure techniques explored by the analysis, present the results of the analysis of the 20fb^-1 of 8TeV ATLAS data and finally look to the prospects with the new collision data.
Friday 05 June 2015, 16:00 : Yue-Lin Sming Tsai (IPMU)
Singlet Majorana fermion dark matter: LHC14 and ILC
Exploring a dark matter (DM) candidate using an effective field theory (EFT) framework is a popular approach. However, since the central energy of LHC is going to be 14 TeV and future ILC experiment could also reach 1 TeV, the EFT is not valid any more. In this talk, I will first review our previous work, 1407.1859, then show how we conservatively estimate the power LHC&ILC on EFT by comparing with the UV completed model.
Friday 29 May 2015, 16:00 : Ben Cooper/David Wardrope (UCL)
Searching for new physics with di-Higgs to 4b final states at ATLAS.
It is not often these days that a completely new channel is developed to search for evidence of new physics at the LHC. When Run-1 of the LHC began, the general consensus was that, compared to final states containing leptons, fully hadronic final states would not be competitive in the search for new physics, because the enormous QCD backgrounds would be overwhelming. In this seminar we will tell the story of how this notion was reversed for Higgs pair production, present the latest Run-1 results of the di-Higgs to 4b searches by ATLAS and CMS, and give the prospects for exploiting this channel in the future, including the potential for dramatically extending the physics reach of the HL-LHC.
Friday 22 May 2015, 12:30 : Leigh Whitehead (UCL) — NOTE UNUSUAL TIME
The CHIPS Experiment
CHIPS (Cherenkov Detectors In mine PitS) is an R&D project aiming to blaze the trail towards affordable megaton scale neutrino detectors whilst contributing to the world knowledge on the neutrino mass hierarchy and delta_cp. The first step on the way to this goal was the deployment of CHIPS-M, a small prototype in an open mine pit in northern Minnesota, exposed by the NuMI beam from Fermilab. The second 10 kton prototype, CHIPS-10, is in the design process and is due for deployment in the summer of 2016. CHIPS-10 combined with NOvA and T2K will give over three sigma sensitivity to the mass hierarchy and delta_cp. I will give an overview of the experiment, show the first data from CHIPS-M, and discuss the design of CHIPS-10 and our plans for the future.
Friday 15 May 2015, 16:00 : Josef Pradler (HEPHY Vienna)
Dark Vectors in Cosmology and Experiment
More often than not, astrophysical probes are superior to direct laboratory tests when it comes to light, very weekly interacting particles, and it takes clever strategies and/or ultra-pure experimental setups for direct tests to be competitive. In this talk, I will highlight this competition on the example of dark photons. When they are dark matter, direct detection probes can be superior to stellar constraints. When they decay, cosmology offers unique sensitivity through BBN and CMB.
Thursday 30 April 2015, 16:00 : Alain Blondel (Geneva)
CERN: the next 60 years and 100 kilometers
CERN is hosting the design study of Future Circular Colliders fitting in a new tunnel of 100km circumference around Geneva. A possible first step is the "Electroweak Factory", a high luminosity electron-positron (lepton) collider covering the energy range from the Z pole to above the top threshold, for the study of several TeraZ, okuW, MegaHiggs and Megatops. The tunnel would fit, as ultimate goal, a 100 TeV pp collider. The project will be described with special attention to the electron machine. The combination of the two machines offers a remarkable potential for discoveries, from a blend of precision measurements, high statistics, high energies and sensitivity to very small couplings. In particular the search for sterile right-handed neutrinos (aka neutral heavy leptons), with mass up to the Z mass, will be shown to reach couplings as small as predicted by the see-saw limit.
Friday 17 April 2015, 16:00 : Deepak Kar (Glasgow/TBC)
All about showering at the LHSea!
Improving the parton shower model in Monte Carlo generators is important for precision measurements as well as for searches at the LHC. ATLAS and CMS performed many interesting measurements sensitive to to non-perturbative QCD effects and compared the results with existing MC models and tunes, and clear discrepancies and new features have been observed in many cases. All these data are being used in improving the modelling. Also, many jet substructure techniques depend on modelling the shower accurately, and I will briefly discuss one such technique, called shower deconstruction, and the promising results it yields.
Friday 10 April 2015, 16:00 : Cheryl Patrick (NorthWestern)
Neutrino-nucleus interactions at MINERvA
Fermilab's MINERvA experiment is designed to make precision measurements of neutrino scattering cross sections on a variety of materials. After introducing the MINERvA detector, I will explain why these measurements are so important to the current neutrino program. I will then describe several recently published results that are already being used by the neutrino community to improve their modelling of neutrino interactions, focusing particularly on the quasi-elastic analysis. There will also be a chance to look at interesting analyses that will be published in the coming months, and at the plans for MINERvA's longer-term future.
Friday 27 March 2015, 16:00 : UCL HEP Students
IOP practise talks
Note unusual start time: 2pm
Friday 20 March 2015, 16:00 : Phillip Litchfield (UCL)
The AlCap experiment // A tour of muon physics from NuFact
The AlCap experiment is a joint project between the COMET and Mu2e collaborations. Both experiments intend to look for the lepton-flavour violating conversion μ+A→e+A, using tertiary muons from high-power pulsed proton beams. In these experiments the products of ordinary muon capture in the muon stopping target are an important concern, both in terms of hit rates in tracking detectors and radiation damage to equipment. The goal of the AlCap experiment is to provide precision measurements of the products of nuclear capture on Aluminium, which is the favoured target material for both COMET and Mu2e. The results will be used for optimising the design of both conversion experiments, and as input to their simulations. Time allowing, I will also present a (necessarily brief) tour of active and planned muon experiments, as presented at NuFact. The muon is something of a special case. Although we know there are three generations of Standard Model particles, the world around us is essentially built up from the first generation. The low mass and correspondingly long lifetime of the muon means that is one of very few higher-generation particles that can be manipulated for study, and as such provides a complimentary window to the standard 'brute force' approach of bringing first-generation particles together at ever higher energies and intensities. I will give a very brief summary of the muon projects discussed at NuFact 2014.
Friday 13 March 2015, 16:00 : Jennifer Jentzsch (Dortmund)
Quality assurance measurements during the ATLAS Insertable B-Layer production and integration
The ATLAS Detector is one of the four big particle physics experiments at CERN's LHC. Its inner tracking system consisted of a 3-Layer silicon Pixel Detector (~ 80M readout channels) in the first run (2010-2012) and has been upgraded by an additional layer over the last two years. The Insertable B-Layer (IBL) adds ~12M readout channels for improved vertexing, tracking robustness and b-tagging performance for the upcoming runs before the high luminosity of the LHC will take place. The active part of the detector is roughly 66cm long and consists of 14 parylene coated carbon foam support structures, so-called staves, at an average distance of 33.25mm away from the beam. The IBL includes new sensor and readout chip designs finding their first application in high energy physics experiments. Production accompanying measurements as well as preliminary results after integration into the ATLAS Detector, right before the start of the second LHC run, will be presented and discussed.
Wednesday 11 March 2015, 16:00 : Jenny Thomas — Harrie Massey LT
XXV Spreadbury Lecture: Neutrino Oscillations At Work
The observation that the three types of neutrino flavour oscillate among themselves led to the realisation that neutrinos have a very small but non-zero mass. This is extremely important because the supremely successful Standard Model of particle physics had expected, and indeed needed, the neutrinos to have exactly zero mass. Since the discovery of neutrino oscillations over the last 15 years, the parameters of the oscillations have been sufficiently well measured to turn neutrino oscillations into a tool for learning more about the elusive neutrino. I will explain the concept of neutrino oscillations, and report on the recent results from around the world and the new challenges now facing researchers trying to infer the remaining unknown neutrino properties. I will talk briefly about an exciting new project on the horizon for the very near future.
Friday 06 March 2015, 16:00 : Freya Blekman (Brussels)
Exploring new physics in the Top quark sector using Beyond-Two-Generations Quarks with the Compact Muon Solenoid
In many models of physics beyond the Standard model the coupling of new physics to third generation quarks is enhanced or signatures are expected that mimic top production. I will present a review of mostly non-MSSM-inspired searches for new physics beyond the standard model in final states containing top quarks or bottom quarks performed by the CMS experiment. Many of these techniques used have solid roots in precision measurements of the standard model, and applying these techniques from measurements to searches has opened a rich and varied searches program in the CMS experiment. Examples include searches for heavy gauge bosons, excited quarks, sequential and vector-like top quark partners. The analyses span a range of final states, from multi-leptonic to entirely hadronic, and many use convoluted analysis techniques to reconstruct the highly boosted final states that are created in these topologies. I will focus on the recent results, using data collected with the CMS experiment in proton-proton collisions at the LHC at a centre-of-mass energy of 8 TeV.
Friday 27 February 2015, 16:00 : Anastasia Basharina-Freshville (UCL)
Calorimetry for Cancer Proton Therapy - Can We Help?
Proton therapy is an advanced form of radiotherapy that provides significantly improved cancer treatment to patients. In Summer 2015 UCLH will commence the building of a 250 MeV proton beam treatment centre. Challenges currently exist in the field of proton therapy, such as the requirements of precise measurements of the beam energy and spread. We address some of these challenges using a calorimeter module designed for the SuperNEMO experiment, which we have tested at the only currently running proton treatment beam in the UK at the Clatterbridge Centre for Oncology.
Friday 20 February 2015, 16:00 : Andrea Banfi (Sussex)
A general method for final-state resummations in QCD
We present a novel method that makes it possible to resum event-shape distributions and jet rates at NNLL accuracy. We present results for suitable observables in e+e- annihilation and discuss the generalisation of the method to hadron-hadron collisions and higher logarithmic accuracy.
Friday 13 February 2015, 16:00 : Adam Gibson (UCL)
Tracking emissions during proton therapy
Proton radiotherapy uses a beam of protons at up to 250 MeV to deliver a dose of ionising radiation to the body, usually with the intention to cure cancer. The physics of proton interactions with tissue provides a particular advantage: most of the energy is deposited in the Bragg peak, meaning that deeper tissues are largely spared a substantial radiation dose. This reduces the likelihood of side effects, which include an increased risk of cancer later in life. Medical imaging provides excellent knowledge of the internal anatomy of the body, allowing the dose distribution to be precisely predicted. However, patient movement, weight loss and shrinkage of the tumour mean that imaging is not always sufficient to determine the dose distribution delivered. In this talk, I will discuss the physical interactions of the proton beam with tissue, particularly concentrating on the possibility of measuring x-ray, gamma, optical and acoustic emissions so as to predict in real time the distribution of radiation dose to the body.
Friday 30 January 2015, 16:00 : Simon Bevan/John Loizides
From Particles to Electronic Trading
We will give a detailed overview of the exciting world of FX electronic trading (eFX), detailing the cutting edge technology and mathematics that drive the modern markets. Throughout we will highlight how the skills developed during our stint in HEP translated directly into eFX and why particle physicists are still in such demand.
Friday 12 December 2014, 16:00 : Teppei Katori (QMUL)
Liquid argon detector R&D in USA
Liquid Argon time projection chamber (LArTPC) is the candidate technology for the next generation large neutrino detectors. Unprecedented resolution and capability of particle ID (ionization energy loss, scintillation light) make it attractive for future high precision neutrino experiments. The necessary technologies were developed by the ICARUS collaboration in Italy and they are further studied in USA. In this talk, I would like to describe the overview of the LArTPC efforts in USA, with a special emphasis on the liquid argon scintillation light detection technology.
Friday 05 December 2014, 16:00 : Gary Royle (UCL)
Proton and Advanced Radiotherapy
Radiation therapy is a technology based clinical area which uses an array of photons and particles to target cancer sites. It has a number of areas in common with high energy physics. The talk will cover the basis of radiation therapy, the future technological needs, areas where high energy physicists can get involved from research to careers, and will highlight some clinical problems within the treatment of cancer patients that are relevant to the translation of high energy physics concepts and technology.
Monday 01 December 2014, 16:00 : Jorge S Diaz (KIT)
Extra ordinary seminar: Testing Lorentz and CPT invariance with neutrinos (NEMO-3 and (Super)NEMO data)
Lorentz symmetry is a cornerstone of modern physics. As the spacetime symmetry of special relativity, Lorentz invariance is a basic component of the standard model of particle physics and general relativity, which to date constitute our most successful descriptions of nature. Deviations from exact symmetry would radically change our view of the universe and current experiments allow us to test the validity of this assumption. In this talk, I will describe how we can search for deviations from exact Lorentz and CPT invariance with neutrino oscillations, time-of-flight measurements, ultra-high-energy neutrinos, and double beta decay.
Friday 28 November 2014, 16:00 : Prof. Philip Burrows (Oxford)
Precison Higgs Physics: The International Linear Collider Higgs Factory
An international team has recently completed the Technical Design Report for the International Linear Collider (ILC). The ILC is an electron-positron collider with a design target centre-of-mass energy of 500 GeV. Following the Higgs boson discovery it has been proposed to realise the ILC by building a 250 GeV 'Higgs Factory', and subsequently to upgrade it in stages to higher energies of 350 GeV, where it would also serve as a 'top factory', and eventually to 500 GeV to allow access to the top-Higgs and Higgs self- couplings. The Japanese particle physics community has proposed to host the collider in Japan. I will describe the programme of precision Higgs-boson measurements at the ILC. I will give an overview of the collider design, and report on the project status.
Friday 21 November 2014, 16:00 : Richard Savage (Warwick)
Using machine learning to cure cancer (!)
Medicine is undergoing a data revolution. From whole-genome sequencing to digital imaging and electronic health records, new sources of data are promising to revolutionise how we treat disease. With these opportunities, however, come significant challenges. The data are often high-dimensional, noisy, with complex underlying structure. And we may wish to combine multiple data types from very different sources. I'll give a tour of some of these issues, focusing on some of the projects we're working on to use statistical machine learning to get the most out of these data and hence improve, in particular, the treatment and curing of cancer.
Friday 14 November 2014, 16:00 : Stuart Mangles (Imperial)
Laser wakefield accelerators: a laboratory source of femtosecond x-ray pulses
Laser wakefield accelerators are now capable of accelerating electron beams up to 1 GeV in just 1 centimetre of plasma. During the acceleration process the electron beam can oscillate, producing very bright femtosecond duration x-rays. In this talk I will introduce some of the key concepts of laser wakefield acceleration and x-ray generation. The x-rays we can generate have some highly useful properties including an ultra short (few femtosecond) duration, micrometer sized source and broad spectral coverage. I will discuss the use of this unique source of x-rays for applications such as probing matter under extreme conditions and medical imaging.
Friday 07 November 2014, 16:00 : Cristina Lazzeroni (Birmingham)
Search for rare and forbidden Kaon decays at NA62
The NA62 Kaon programme will be reviewed. A selection of recent results on rare and Standard Model forbidden kaon decays will be presented, and the current status of the experiment and prospects for the measurement of the decay K+ to pi+ nu nubar will be summarised.
Friday 31 October 2014, 16:00 : Malcolm Fairbarn (Kings College)
Future Dark Matter searches and the Neutrino Background.
I will describe very simple models of dark matter and show how even with reasonable masses, effects such as resonances can lead to very small cross sections at direct detection experiments. I will then briefly discuss the usefulness in slightly more complicated simplified models than the effective operator approach. I will then present some work constraining such models using dijet studies at the LHC and indirect detection and show how direct detection cross sections can be very small in such scenarios, even with good relic abundance. Small cross sections run the risk of being potentially undetectable due to the neutrino background, even with very large future direct detection experiments. I will spend the rest of my talk explaining future strategies for getting the dark matter signal out of the neutrino background.
Friday 17 October 2014, 16:00 : Aurélien Benoit-Lévy (UCL)
Inflation, B-modes and dust: Planck's view on BICEP2 results.
The Planck collaboration has recently published new results on the characterisation of polarised dust emission at intermediate and high Galactic latitudes. Although these results specifically focus on the properties of Galactic dust, they are relevant for cosmological studies. Indeed dust is a known contaminant of the long sought-after primordial B modes from Inflation. In this talk, I will focus on how these new results from Planck impact the interpretation of the recent claims from the BICEP2 collaboration.
Wednesday 24 September 2014, 16:00 : HEP Group Day (E3/E7)
Tuesday 23 September 2014, 16:00 : 1st Year PhD Talks (E3/E7)
Thursday 11 September 2014, 16:00 : Richard Ruiz (University of Pittsburgh)
State-of-the-Art Tests of Lepton Number Violation and Seesaw Mechanisms at Hadron Colliders
Friday 13 June 2014, 16:00 : Prof. Ulrik Egede (Imperial)
Search for HIdden Particles (SHIP) at the SPS
Particles physics faces the contradiction of a Standard Model, that seems perfect and can exist without corrections to the Planck scale, and the inability of the same Standard Model to explain dark matter, neutrino masses and baryogenesis. I will give a brief overview of the vMSM, a minimal extension to the Standard Model that may solve this contradiction. The vMSM predicts the existence of new GeV mass neutral leptons and I will present how a fixed target experiment located at a new beamline of the SPS is ideal to search for these. Possibilities for involvement of UK groups will be discussed.
Tuesday 27 May 2014, 14:00 : Jouni Suhonen (University of Jyväskylä)
Rare Weak Decays and Nuclear Structure
I will discuss different types of rare decays. I divide these decays in three categories: A. Decays with (ultra) low Q values; B. Decays between states with large differences in the initial and final angular momenta; C. Weak-interaction processes of higher order. The A category contains potential candidates for neutrino-mass measurements in beta decays. Category B highlights cases where single and double beta decays compete. Category C highlights the large variety of different double beta decays that are possible via exchange of a light Majorana neutrino. In particular, the less discussed positron-emission modes of double beta decays, including the interesting resonant neutrinoless double electron capture mode, are elucidated.
Tuesday 27 May 2014, 16:00 : Mikhail Shaposhnikov (EPFL)
Higgs inflation at the critical point
Higgs inflation can occur if the Standard Model is a self-consistent effective field theory up to inflationary scale. This leads to a lower bound on the Higgs boson mass, M_h > M_crit. If M_h is more than a few hundreds of MeV above the critical value, the Higgs inflation predicts the universal values of inflationary indexes, r= 0.003 and n_s= 0.97, independently on the Standard Model parameters. We show that in the vicinity of the critical point M_crit the inflationary indexes acquire an essential dependence on the mass of the top quark m_t and M_h, and can be consistent with BICEP 2 data.
Friday 16 May 2014, 16:00 : Phillipe Mermod (Geneva)
"Magnetic monopoles at the LHC and in the Cosmos"
Dirac showed in 1931 that the existence of one magnetic monopole in the Universe would explain why electric charge is quantised. The monopole also arises as a natural consequence of Grand Unification theories. While collider experiments provide direct laboratory studies, stable particles with masses beyond the reach of man-made accelerators could have been produced in the early Universe and still be present today. I shall review experimental searches for monopoles at colliders, focusing on recent developments at the LHC. I shall also provide a survey of monopole searches with cosmic-ray detectors and trapped in matter, and propose a few promising avenues for the future.
Friday 09 May 2014, 16:00 : Moritz McGarrie (University of the Witwatersrand)
SUSY model building for a 126 GeV Higgs
his talk will review the current status of minimal models of supersymmetry breaking and explore two possible models which can achieve the correct Higgs mass and still allow for sparticles accessible to the LHC's reach. In the first example we use the HEP tool SARAH to build two tailor made spectrum generators to analyse a Higgs extension (non decoupled D-terms) of the MSSM. We then explore the LHC & ILC's capability to determine their effect through their enhancement of Higgs branching ratios with respect to the Standard Model. In the second model, we explore flavour-gauge mediation to obtain light 3rd generation squarks, whilst keeping the 1st and 2nd generation above exclusions. In particular this model can also generate non-degenerate 1st and 2nd generation squarks and, although in the framework of gauge mediation, leads to mild flavour changing neutral currents at, below and above current limits.
Friday 04 April 2014, 16:00 : Dimitris Varouchas(LPNHE-Paris)
H -> tautau in ATLAS
In this seminar, a search for the Standard Model (SM) Higgs boson with a mass of 125 GeV decaying into a pair of tau leptons will be reported. The analysis is based on data samples of p-p collisions collected by the ATLAS experiment at the LHC, corresponding to an integrated luminosity of 20.3.0 1/fb at centre-of-mass energy of sqrt(s)=8 TeV. The observed (expected) deviation from the background-only hypothesis corresponds to a significance of 4.1 (3.2) standard deviations, and the measured signal strength is μ = 1.4+0.5 −0.4. This is evidence for the existence of H → τ+τ− decays, consistent with the Standard Model expectation for a Higgs boson with mH = 125 GeV. A brief comparison with the respective CMS result will be also presented.
Friday 28 March 2014, 16:00 : UCL third year students — MOVED TO 31/03/2014 !
MOVED! UCL third year student IOP practise talks!
The UCL third year student IOP practise talks will now be on Monday 31/03/2014!
Friday 21 March 2014, 16:00 : Prof. Tegid Jones (UCL)
Forty years since the Neutral Current (Z0) Discovery in Gargamelle. The UCL Contribution.
The discovery of neutral currents (1973/74) was the first confirmation of the SU(2)xU(1) electro-weak unified theory. UCL HEP was part of the Gargamelle collaboration which made the first truly significant discovery at CERN. Forty years later the UCL-HEP group contributed to the discovery of the Higgs Boson, thereby completing the understanding of symmetry breaking in the SU(2)xU(1) model.
Friday 14 March 2014, 16:00 : Ian P. Shipsey (Oxford)
The Large Synoptic Survey Telescope (for particle physicists)
Recent technological advances have made it possible to carry out deep optical surveys of a large fraction of the visible sky. These surveys enable a diverse array of astronomical and fundamental physics investigations including: the search for small moving objects in the solar system, studies of the assembly history of the Milky Way, the exploration of transient sky, and the establishment of tight constraints on models of dark energy using a variety of independent techniques. The Large Synoptic Survey Telescope (LSST) brings together astrophysicists, particle physicists and computer scientists in the most ambitious project of this kind that has yet been proposed. With an 8.4 m primary mirror, and a 3.2 Gigapixel, 10 square degree CCD camera, LSST will provide nearly an order of magnitude improvement in survey speed over all existing optical surveys, or those which are currently in development. Expected to begin construction later in 2014, and to enter commissioning in 2020, in its first month of operation LSST will survey more of the universe than all previous telescopes built by mankind. Over the full ten years of operation, it will survey half of the sky in six optical colors down to 27th magnitude. Four billion new galaxies and 10 million supernovae will be discovered. At least 800 distinct images will be acquired of every field, enabling a plethora of statistical investigations for intrinsic variability and for control of systematic uncertainties in deep imaging studies. LSST will produce 15 terabytes of data per night, yielding a data set of over 100 petabytes over ten years. Dedicated Computing Facilities will process the image data in near real time, and issue worldwide alerts within 60 seconds for objects that change in position or brightness. In this talk some of the science that will be made possible by the construction of LSST, especially dark energy science, which constitutes a profound challenge to particle physics and cosmology, and an overview of the technical design and current status of the project will be given.
Friday 28 February 2014, 16:00 : Jo van den Brand (NIKHEF)
Probing dynamical spacetimes
Albert Einstein's theory of general relativity, published in 1915, gave science a radically new way of understanding how space, time and gravity are related. Gravity is defined as the curvature of spacetime and is caused by the four-momentum of matter and radiation. Einstein predicted that accelerating objects will cause vibrations in the fabric of spacetime itself, so-called gravitational waves. The detection of gravitational waves is the most important single discovery to be made in the physics of gravity. Gravitational waves exist in any theory of gravity that incorporates a dynamical gravitational field, be it a metric theory such as general relativity (or one of its generalizations), or a non-metric theory such as string theory. Observations of binary pulsars, whose orbital motion evolves in agreement with general relativity, revealed that gravitational radiation must exist. However, no direct observation of gravitational waves has been reported to date. Discovering gravitational waves would confirm once and for all that gravity is a fundamental dynamical phenomenon. The Virgo detector for gravitational waves consists mainly of a Michelson laser interferometer made of two orthogonal arms being each 3 kilometres long. Virgo is located within the site of EGO, European Gravitational Observatory, based at Cascina, near Pisa on the river Arno plain. Virgo scientists, in collaboration with LIGO in the USA and GEO in Germany, have developed advanced techniques in the field of high power ultra-stable lasers, high reflectivity mirrors, seismic isolation and position and alignment control. In 2015 these collaborations with turn on their advanced instruments in their quest for first detection of gravitational wave events.
Wednesday 26 February 2014, 16:00 : George Efstathiou (Cambridge)
SPREADBURY LECTURE (JZ Young LT): The Birth Of The Universe
Modern physics attempts to explain the full complexity of the physical world in terms of three principles: gravity, relativity and quantum mechanics. This raises important fundamental questions such as why is our Universe so large and old? Why is it almost, but not perfectly, homogeneous and isotropic? I will describe how recent measurements of the cosmic microwave background radiation made with the Planck Satellite can be used to answer these questions and to elucidate what happened within 10-35 seconds of the creation of our Universe.
Friday 21 February 2014, 16:00 : Nikos Konstantinidis (UCL)
The High Luminosity LHC programme
As the LHC machine and experiments are preparing frantically to start data taking at design energy and luminosity (and slightly above), an equally intense and exciting programme of R&D and physics studies is ongoing for the High Luminosity (HL-) LHC project, proposed to start in about 10 years, that would deliver 3000/fb to each general purpose detector by the mid-2030s. I will discuss the science case for HL-LHC, the challenges for the accelerator and the experiments, and the ongoing R&D, particularly on the tracking and triggering systems of the experiments
Friday 14 February 2014, 16:00 : Werner Vogelsang (UNI Tuebingen)
QCD resummation for jet and hadron production
Cross sections for the production of jets or identified hadrons in pp collisions play an important role in particle physics. At colliders, jets are involved in many reactions sensitive to new physics and their backgrounds. In lower-energy collisions, produced hadrons probe the inner structure of the nucleon and the fragmentation process. Both observables have in common that their use crucially relies on our ability to do precision computations of the underlying hard-scattering reactions in QCD perturbation theory. In this talk, we discuss the role of higher-order QCD corrections to these reactions. Specifically, we address the resummation of large logarithmic "threshold" corrections to the relevant partonic cross sections. Among other things, this allows us to determine dominant next-to-next-to-leading order QCD corrections to jet production at the LHC and Tevatron. Detailed phenomenological studies are presented.
Friday 07 February 2014, 16:00 : Paschal Coyle (In2p3, France)
Neutrinos out of the blue
The road to neutrino astronomy has been long and hard. The recent observation of a diffuse flux of cosmic neutrinos by IceCube heralds just the start of this new astronomy. In this seminar a brief outline of the various experimental efforts worldwide to detect cosmic neutrinos are described and a selection of the physics results presented. Particular emphasis is given to ANTARES, a neutrino telescope located in the deep sea 40km off the southern coast of France. The European neutrino astronomy community has recently started the construction of KM3NeT, a several cubic kilometre neutrino telescope in the Mediterranean Sea. The plans of this new research infrastructure are described. Finally, the potential for a measurement of the neutrino mass hierarchy, with a densely instrumented detector configuration in ice (PINGU) and water (ORCA) is discussed.
Friday 31 January 2014, 16:00 : Alexander Mitov (Cambridge)
Recent developments in top physics at hadron colliders
I will review the available NNLO results for top pair production at hadron colliders and will demonstrate their effect on various analyses of SM and bSM physics. I will then discuss the prospects for further NNLO level calculations in top physics and how they may influence existing results and open problems in top physics and beyond.
Friday 24 January 2014, 16:00 : Jennifer Smillie (University of Edinburgh)
Jets, Jets, Higgs & Jets
The LHC is pushing the limits of our theoretical descriptions, especially in multi-jet processes. I will discuss the challenges posed by the large higher-order perturbative corrections, and describe the High Energy Jets framework which provides an alternative all-order description encoding the dominant pieces of the hard-scattering matrix elements. I will illustrate the effectiveness of this method with comparisons to LHC data, and what it teaches us about QCD at the LHC. In the last part of the talk, I will discuss the importance of this in the light of Higgs+dijets studies with an emphasis on vector boson fusion (VBF) channels and will discuss the implications of the previous results. I will show some results from ongoing work to describe the impact of VBF cuts on the gluon-gluon fusion contribution.
Friday 13 December 2013, 16:00 : Lea Reichhart (UCL)
First results from the LUX Dark Matter Experiment
A vast number of astronomical observation point towards the existence of an unknown dark component dominating the matter content of our Universe. The most compelling candidates for dark matter are the Weakly Interacting Massive Particles (WIMPs), which have great potential to be detected in deep underground low background experiments, looking for direct interactions of WIMPs with dedicated target materials. Very recently, the Large Underground Xenon (LUX) experiment, operated in the Davis Cavern of the SURF laboratory, USA, has announced results from its first science run. From an exposure of 85 days, having found no evidence of signal above expected background, LUX has set constraints on scalar WIMP-nucleon interactions above 7.6x10-46 cm2 at 33 GeV/c2 WIMP mass (90% C.L.) - three times more sensitive than any competing experiment. This first result also seriously challenges the interpretation of hints of signal detected in other experiments as arising from low-mass WIMPs.
Friday 06 December 2013, 16:00 : Bryan Lynn (UCL/CERN)
Chiral Symmetry Restoration, Naturalness and the Absence of Higgs-Mass Fine-Tuning 1: Global Theories
The Standard Model (SM), and the scalar sector of its zero-gauge-coupling limit -- the chiral-symmetric limit of the Gell Mann-Levy Model (GML) -- have been shown not to suffer from a Higgs Fine-Tuning (FT) problem due to ultraviolet quadratic divergences (UVQD). In GML all UVQD are absorbed into the mass-squared of pseudo Nambu-Goldstone (pNGB) bosons. Since chiral SU(2)_{L-R} symmetry is restored as the pNGB mass-squared or as the Higgs vacuum expectation value (VEV) are taken to zero, small values of these quantities and of the Higgs mass are natural, and therefore not Fine-Tuned. Our results on the absence of FT also apply to a wide class of high-mass-scale (M_{Heavy}>>m_{Higgs}) extensions to a simplified SO(2) version of GML. We explicitly demonstrate naturalness and no-FT for two examples of heavy physics, both SO(2) singlets: a heavy (M_S >> m_{Higgs}) real scalar field (with or without a VEV); and a right-handed Type 1 See-Saw Majorana neutrino with M_R >> m_{Higgs}. We prove that for |q^2| << M_{Heavy}^2, the heavy degrees of freedom contribute only irrelevant and marginal operators. The crucial common property of such high-mass-scale extensions is that they respect chiral SO(2)_{L-R} symmetry. GML is therefore natural and not FT, not just as a stand-alone renormalizable field theory, but also as a low energy effective theory with certain high-mass-scale extensions. Phenomenological consequences include the renewed possibility of thermal lepto-genesis, and subsequent baryon-number asymmetry, in the neutrino-MSM. We conjecture that, since gravity couples democratically to particles, certain quantum gravitational theories that respect chiral symmetry will also retain low-energy naturalness, and avoid FT problems for GML (and maybe the SM). Absent a SM FT problem, there should be no expectation that LHC will discover physics beyond the SM which is unrelated to neutrino mixing, the only known experimental failure of the SM.
Friday 29 November 2013, 16:00 : Matt Lilley (Imperial)
Feeling the Fusion Burn
The age of fusion energy is almost upon us, creating and sustaining hot plasmas of 150 million degrees is now a routine operation performed all around the world. We are ready for next big challenge - the burning plasma - in which the fusion reactions are self sustaining. This is a highly non thermal system which is prone to instability. The nonlinear character of the instability determines the fate of the plasma, either ignition or a mere fizzle. In this presentation we will explore the physical processes behind these burning plasma instabilities and discus the challenges that lie ahead.
Friday 22 November 2013, 16:00 : Nikos Konstantinidis (UCL) — CANCELED!
Friday 15 November 2013, 16:00 : TBA
Friday 08 November 2013, 16:00 : Gabriel Facini (CERN)
H -> bb in ATLAS
Since the discovery of a Higgs-like boson by the ATLAS and CMS experiments at the LHC, the emphasis has shifted towards measurements of its properties and the search in more challenging channels in order to determine whether the new particle is the Standard Model (SM) Higgs boson. Of particular importance is the direct observation of the coupling of the Higgs boson to fermions. A comprehensive review of the latest ATLAS result in the search for the Higgs boson decaying to a b-quark pair int associated production channel will be given.
Friday 25 October 2013, 16:00 : Lauren Tompkins (University of Chicago)
FTK: A hardware-based track finder for the ATLAS trigger
The spectacular performance of the LHC machine challenged the ATLAS and CMS detectors to contend with an average of 25 proton-proton interactions per beam crossing in 2012. Projections for 14 TeV running in 2015 and beyond suggest that the detectors should prepare for up to 80 interactions per crossing. In these dense environments, identifying the physics objects of interest, such as isolated leptons, taus and b-jets is of paramount importance for a successful physics program. The ATLAS experiment is developing a hardware based track finder, FTK, which will perform full silicon detector tracking within 100 microseconds of a Level 1 trigger accept at luminosities of 3x10^34 cm^-2 s^-1, significantly improving the track-based isolation, secondary vertex tagging and track-based tau finding done at Level 2. I will discuss the FTK design and performance prospects, as well as report on successful prototype tests completed thus far.
Friday 18 October 2013, 16:00 : TBA
Friday 11 October 2013, 16:00 : Bhupal Dev, University of Manchester
Heavy Neutrino Searches at the LHC
One of the simplest extensions of the Standard Model to explain the non-zero neutrino masses is to introduce heavy neutrinos. In this talk, we will review the existing experimental constraints on the masses and mixing of these heavy neutrinos. We will then discuss their ongoing searches at the LHC and some recent efforts to improve their sensitivity.
Tuesday 24 September 2013, 13:00–18.00 : 1st Year Student PhD Talks
B05 Lecture Theatre in the Chadwick Building.
Monday 23 September 2013, 13:00–17.00 : Group Day + drinks (E1)
B17 (Basement) 1-19 Torrington Place.
Wednesday 18 September 2013, 10:00–17.00 : 2nd Year Student PhD Talks
E3/E7 Physics and Astronomy Department.
Thursday 04 July 2013, 16:00 : Prof. Jose Valle, (IFIC/CSIC - U. Valencia)
Neutrinos and Dark Matter
I will review the status of neutrino mass and mixing parameters, theoretical modeling and cosmological implications. In particular I discuss how neutrino mass and dark matter may be closely connected and indicate possible direct, indirect and collider detection prospects.
Friday 28 June 2013, 16:00 : Brian Rebel, Fermilab
Liquid Argon Detectors at Fermilab: From R&D to LBNE
Liquid argon time projection chambers (LArTPCs) are an exciting new technology for neutrino detectors. This technology provides excellent position resolution that rivals bubble chamber images, but in a digital format. The striking advantage of liquid argon time projection chambers for neutrino physics is the ability to distinguish between electrons, produced in charged current interactions, and gammas, produced by the decay of neutral pions created in neutral current interactions, with high efficiency. This talk will outline the Fermilab R&D program aimed toward development of the multi-kiloton LBNE detector for long baseline neutrino physics. Results from the various aspects of the program will be presented, as well as the status of LBNE.
Wednesday 29 May 2013, 16:00 : Alexander Grohsjean, DESY
A quark comes of age: latest highlights in top quark physics.
The discovery of the top quark in 1995 at the Fermilab Tevatron collider was a remarkable confirmation of the standard model of particle physics. Its short lifetime provides the possibility to probe the properties of a bare quark With increasingly large integrated luminosities, the characteristics of this particle, as well as its production and decay properties have been measured with ever greater precision. The analysis of top-quark events triggered the development of new analysis tools and offered an excellent starting point for searches of new phenomena. In this summary, following a short historic perspective, I present recent measurements from the D0 experiment, as well as new ATLAS results from the LHC at 7 and 8 TeV.
Friday 17 May 2013, 16:00 : Jonathan Hays, Queen Mary
LHC Higgs results
Last year the ATLAS and CMS experiments announced the discovery of a new particle with a mass of around 126 GeV and a strong candidate for being a Higgs boson. Measurements with the full 2011+2012 dataset have further confirmed this. The latest results in experimental Higgs physics are presented from both experiments, concentrating on the new particle. This includes the continuing search for signals in those modes yet to find evidence for the new particle, property measurements in the diboson modes, and a variety of global fits to data across different channels to investigate compatibility with the Standard Model. Additionally, the compatibility of the results across experiments will be briefly discussed along with some thoughts on the outlook for the Higgs programme at the LHC.
Friday 10 May 2013, 16:00 : Dr Will Thomas, Centre for the History of Science, Technology and Medicine, Imperial College
Problems in Particle Detection 1930-1950: New Ways to Talk about the History of Physics
Early developments in particle physics were based on the analysis of cosmic rays and radioactive materials before these sources were supplanted by high-energy accelerators circa 1950. Commonly used particle detection technologies included cloud chambers, arrays of coincidence counters, and, after 1945, nuclear emulsions. These technologies all yielded fairly imprecise information about the particles passing through them, necessitating experimenters to deploy strategies to arrive at what they viewed as legitimate interpretations of events. These strategies included using inference to establish what sorts of particles were being detected, the aggregation of evidence, and an increasingly intensive use of nuclear physics knowledge to narrow a range of possible interpretations. It will be suggested that articulating the nature of these strategies, and paying attention to how experimenters deployed them, allows for a good way of discussing historical experimenters' skill, certainly over and above a simple cataloguing of their discoveries.
Friday 03 May 2013, 16:00 : Gino Isidori, INFN, Frascati National Laboratories
Standard Model and beyond after the the Higgs discovery
We discuss the implications of the recent Higgs discovery, and particularly of the Higgs mass measurement, for the stability of Higgs potential and, more generally, for the completion of the Standard Model at high energies.
Friday 15 March 2013, 16:00 : Dr. Bela Majorovits, Max-Planck-Institut für Physik
Understanding Neutrinos?GERDA and the Neutrinoless Double Beta-Decay
Observation of neutrinoless double beta (0vbb) -decay could answer the question whether Neutrinos are their own anti-particles or not and could yield information on the absolute mass scale of neutrinos. The most stringent half-life limit for 76Ge is T1/2>1.9 1025 years. This can be translated to the lowest present limit for the effective Majorana neutrino mass of < 0.3eV. Part of the Heidelberg-Moscow collaboration claims to have observed 0vbb-decay in 76Ge with T1/2=1.2 1025 years, however this result is controversial. A short motivation for 0vbb-decay searches will be given. The principle of 0vbb-searches utilizing High Purity Germanium enriched in the isotope 76Ge detectors will be introduced. The general design features of the GERDA experiment - designed to confirm or refute the claim within one year of measurement will be shown. Results from the GERDA commissioning runs and the status of GERDA data taking with enriched detectors will be discussed. Plans and status of preparations of the second phase of the GERDA experiment will be shown.
Friday 08 March 2013, 16:00 : Mat Charles, Oxford
Charm results from LHCb
Highlights from LHCb's charm physics programme are presented, including searches for the highly suppressed decays D+ -> pi+ mu- mu+, D+ -> pi- mu+ mu+, and D0 -> mu- mu+; a measurement of meson mixing in D0 -> K+ pi-; and a search for CP violation in two-body D0 decays.
Friday 01 March 2013, 16:00 : Prof. Chris Mabey, Middlesex Business School
Big Lessons from Big Science
The ATLAS collaboration comprises 3000 physicists from 140 Institutes in 37 countries collaborating on a 'big science' project based at CERN near Geneva. As a loosely-coupled, global network of knowledge activists working at the forefront of science, it is prototypical of many knowledge-intensive agencies and firms. What can be learnt from this unusual collaboration about the way tacit knowledge is surfaced and exchanged across professional, cultural and geographic boundaries? ATLAS is feted as a remarkably democratic and highly productive partnership. How does it achieve this and what are the lessons for the effective leadership of knowledge? Chris will share insights from his recent ESRC-funded project (2009-12).
Friday 15 February 2013, 16:00 : Prof. Robert Thorne, UCL
Parton Distribution Functions at the LHC
I discuss the current status of parton distributions, compare to LHC data and present the range of predictions for LHC processes. Some significant discrepancies are found between different PDF sets, particularly regarding predictions for Higgs boson cross sections and the asymmetry between W^+ and W^- production. I examine possible causes for this, concentrating on issues of parameterisation dependence and the treatment of heavy flavours in the fits.
Friday 08 February 2013, 16:00 : Dr. Bobby Acharya, Kings College London
Generic Predictions from string/M theory for Particle Physics and Dark Matter
Friday 01 February 2013, 16:00 : Dr. Ricardo Silva, Department of Statistical Science, UCL
The Structure of the Unobserved
Hidden variables are important components in many multivariate models, as they explain dependencies among recorded variables and may provide a compressed representation of the data. In this talk, I will provide some overview of my line of work on how latent structure can be exploited in machine learning and computational statistics applications. In particular, we will go through the following topics: 1. How to measurement error problems have a causal interpretation and what can potentially be done to identify probabilistic and causal relations among variables that cannot be recorded without error 2. How dependencies among interacting individuals in a network can be explained by hidden common causes and what their roles are in prediction problems 3. How measurements can be compressed into fewer items without losing relevant information from the data, as postulated from a latent variable model, with applications in social sciences
Friday 25 January 2013, 16:00 : Prof. Buzz Baum, LMCB, UCL
A noisy path to order: refinement of a developing tissue
In my talk I will discuss the process of tissue refinement, whereby an ordered epithelial is generated from an initially disordered state through noisy processes that cause cells to compete for space and fate.
Thursday 24 January 2013, 16:00 : Jennifer Smillie, (University of Edinburgh)
Friday 18 January 2013, 16:00 : Dr. Maurizio Piai, Swansea
Holographic techni-dilaton
I review the status of theoretical and phenomenological studies on the holographic techni-dilaton, a light composite scalar present in the spectrum of a class of strongly-coupled models of electroweak symmetry breaking. The experimental signatures of such scalar are similar to those of the Higgs particle of the minimal version of the Standard Model, with important observable differences is some of the search channels. The 125-126 GeV scalar discovered by ATLAS and CMS could be such particle, and I will discuss how to test this hypothesis in future theoretical as well as experimental studies.
Friday 11 January 2013, 16:00 : Prof. Jon Butterworth, UCL
Standard Model physics
I'll review a selection of LHC measurements of jets, photons and weak bosons, show comparisons to Standard Model predictions, and discuss some lessons learned and future prospects.
Friday 14 December 2012, 16:00 : Prof. Jeff Forshaw, Manchester
The breakdown of collinear factorization in QCD
Collinear factorization underpins the calculation of many particle physics cross sections, through the use of parton distribution and fragmentation functions. All of the principal Monte Carlo event generators exploit it in their design. In this talk I will explain that in general the factorization does not hold in hadron-hadron collisions and shed light on the mechanism of the breakdown in the language of perturbation theory.
Friday 07 December 2012, 16:00 : Dr. Stephen West, RHUL
Models of Dark Matter
I will outline some alternatives to the standard neutralino dark matter scenario. In particular, I will review asymmetric dark matter and the production of dark matter via ``freeze-in". I will present the possible ways in which we can search for these candidates in dedicated dark matter search experiments and at colliders.
Friday 30 November 2012, 16:00 : Dr. Chamkaur Ghag, UCL
Direct Dark Matter detection
Discovery of the nature of dark matter is internationally recognized as one of the greatest contemporary challenges in science, fundamental to our understanding of the Universe. The most compelling candidates for dark matter are Weakly Interacting Massive Particles (WIMPs) that arise naturally in several models of physics beyond the Standard Model. Although no definitive signal has yet been discovered, the worldwide race towards direct detection has been dramatically accelerated by the progress and evolution of liquid xenon (LXe) time projection chambers (TPCs). The XENON phased programme operates LXeTPCs at Gran Sasso, Italy, and has released results from analysis of 225 days of WIMP search data from the XENON100 detector - presently the most sensitive instrument in the worldwide hunt for WIMPs. XENON100 finds no evidence of signal above expected background and constrains scalar WIMP-nucleon interactions above 2x10-45 cm2 at 55 GeV/c2 WIMP mass (90% C.L.) - over an order of magnitude more stringent than any competing experiment. This result seriously challenges interpretation of the DAMA, CoGeNT or CRESST-II observations as being due to scalar WIMP-nucleon interactions.
Friday 23 November 2012, 16:00 : Prof. David Evans, Birmingham
Probing the Quark-Gluon Plasma - recent results from ALICE at the LHC
ALICE is a general purpose heavy-ion experiment aimed at studying QCD at extreme energy densities and the properties of the deconfined state of matter, known as a quark-gluon plasma. A selection of the latest results will be presents, together with the first results from proton-lead collisions.
Friday 26 October 2012, 16:00 : Dr. Simon Jolly
Proton Accelerators for Cancer Therapy
Proton beam therapy (PBT) is a more sophisticated form of radiotherapy for the treatment of cancer. Due to the Bragg Peak, protons can deliver the necessary dose to the tumour site much more precisely than the 6-18 MeV photons used in conventional radiotherapy. This is particularly valuable for tumours in the head and neck and central nervous system and for treating children, whose growing organs need to be protected from excessive dose. Until now proton therapy was only available from the Clatterbridge Centre for Oncology on the Wirral, and then only for eye treatments using 62 MeV protons. Two new sites are planned for the UK to deliver large-scale proton therapy treatment for the first time: at the Christie Hospital in Manchester and at UCL Hospital. I will describe the reasons for using protons over photons and the accelerator technologies used to deliver 70-250 MeV protons for the full range of treatment we will offer at UCLH. I will also cover some of the issues in selecting the right accelerator technology in my role as the accelerator lead for the new UCLH PBT facility and as an advisor to the Christie programme.
Friday 19 October 2012, 16:00 : Prof. Kael Hanson, Brussels
Particle astrophysics at 90° south: reports from the IceCube Neutrino Observatory
The IceCube Neutrino Observatory is a kilometer-scale cosmic ray muon and neutrino telescope deployed in the deep ice at the South Pole. It detects the Cherenkov radiation emitted by charged particles in transit through the transparent glacial medium by means of a huge array of photomultiplier tubes, each of which independently registers the intensity and arrival time of the radiated photons. The resulting ensemble of hits is processed by event reconstruction algorithms which determine the energy, direction, and type of particle underlying the event. Under construction since 2003, the so-called IC86 array was finally completed December 2010 with the installation of the 86th deep ice string and the full detector was commissioned and placed in operation in May 2011. This talk outlines the science goals of the facility and highlights the results to date that have been released by the IceCube collaboration: searches for high-energy and ultrahigh-energy cosmic neutrinos and lower-energy neutrinos from dark matter; measurements of the cosmic ray anisotropy; detection of neutrino oscillations at high energies. Planned extensions to IceCube are additionally described. Finally, the Askaryan Radio Array (ARA) is introduced. It is an array of radio antennas located next to the current IceCube array which is expected to eventually cover an area of over 100 square kilometers and which targets the detection of the GZK neutrino flux at extreme high-energies which should result from the observed absorption of cosmic rays at these energies.
Friday 12 October 2012, 16:00 : Dr. Tamsin Edwards, Bristol
Predicting future changes in climate and sea level
How can we predict the future of our planet? I will give an overview of modelling and assessment of uncertainty for future climate change and sea level, focusing on the world-leading UK Climate Projections 2009 and our recent research on Antarctica.
Friday 05 October 2012, 16:00 : Dr. Kumiko Kotera
From the magnetized Universe to neutrinos: a life of an ultrahigh energy cosmic ray
The origin of ultrahigh energy cosmic rays (UHECRs, particles arriving on the Earth with energy 10^17- 10^21 eV) is still a mystery. I will review the experimental and theoretical efforts that are being deployed by the community to solve this long-standing enigma, including the recent results from the Auger Observatory. I will describe in particular the interactions experienced by UHECRs while propagating from their sources to us, in the cosmic magnetic fields and the various intergalactic backgrounds. These interactions, that induce deflections and multi-messenger production (neutrinos, gamma-rays and gravitational waves) could reveal crucial information about the path taken by these particles, and help us track down their progenitors. I will also focus on one candidate source that has been little discussed in the literature: young rotation-powered pulsars. The production of UHECRs in these objects could give a picture that is surprisingly consistent with the latest data measured with the Auger Observatory.
Friday 29 June 2012, 16:00 : Prof. Mark Lancaster & Dr. Dave Waters
History of the Tevatron & The Mass of the W Boson
CDF, with significant involvement of the UCL group, have published a W mass measurement with greater precision that all previous measurements combined. We'll take a look at the history of the Tevatron project and particularly the latest ground-breaking W mass measurement. This is an important legacy of the Tevatron as all eyes are now focused on the LHC ...
[slides(Dave),slides(Mark)]
Thursday 14 June 2012, 16:00 : Prof. Kam-Biu Luk, University of California at Berkeley and Lawrence Berkeley National Laboratory
Latest results of the Daya Bay Reactor Antineutrino Experiment
The goal of the Daya Bay Reactor Antineutrino Experiment is to determine the neutrino-mixing angle, &theta13, with a precision better than 0.01 in sin2(&theta13). The value of sin2(&theta13) is measured by comparing the observed electron-antineutrino rates and energy spectra with functionally identical detectors located at various baselines from the reactors. This kind of relative measurement using a near-far configuration significantly reduces the systematic errors. Daya Bay began data taking near the end of 2011 and reported the observation of a non-zero value for &theta13 recently. In this seminar, an overview of the experiment and the latest results from Daya Bay will be presented.
[ slides]
Friday 01 June 2012, 16:00 : Basil Hiley, Birkbeck College and Rob Flack, UCL
Weak measurement: a new type of quantum measurement and its experimental implications
We will discuss the notion of a 'weak measurement', originally introduced by Aharonov et al and carefully analyzed by Duck et al. This technique opens up new experimental possibilities for exploring quantum phenomena. It has already been used to measure the spin Hall effect of light and to measure photon 'trajectories' in a two slit interference set up, traditionally deemed to be impossible without destroying the interference effects. We will discuss the theoretical basis for the experimental technique and propose new experiments to explore foundational issues, throwing new light on the Bohm interpretation.
[slides-1, slides-2]
Friday 18 May 2012, 16:00 : Dr. Frank Deppisch, UCL
Lepton Flavour and Number Violation in Left-Right Symmetrical Models
We discuss lepton flavour and number violating processes induced in the production and decay of heavy right-handed neutrinos at the Large Hadron Collider. Such particles appear in Left-Right symmetrical extensions of the Standard Model as the "messengers"' of neutrino mass and may have masses of order TeV, potentially accessible at the LHC. We determine the expected sensitivity on the right-handed neutrino mixing matrix, as well as on the right-handed gauge boson and heavy neutrino masses, and compare the results with low-energy probes such as searches for mu-e conversion in nuclei and neutrinoless double beta decay.
Friday 11 May 2012, 16:00 : Prof. Max Klein, Liverpool
The LHeC Project at CERN
An overview is given on the physics, detector and accelerator designs of the Large Hadron electron Collider. The LHeC is a new electron- proton/ion collider, which, operating at TeV energy and using the intense LHC p/A beams, is designed to open a new chapter of deep inelastic lepton-hadron physics.
Friday 04 May 2012, 16:00 : Dr. Marumi Kado, LAL
Higgs searches at the LHC
This talk will give an overview of the searches for the Higgs boson with the ATLAS and CMS detectors at the LHC with the full 2011 dataset, corresponding to an integrated luminosity of nearly 5 fb-1. Both experiments have explored the Higgs boson mass hypotheses range from 110 GeV up to 600 GeV. Most of this range is now excluded at a high confidence level. However, at its low end, for Higgs boson mass hypotheses close to 125 GeV, both experiments observe an excess of events above the background expectation. More data are required to determine the origin of this excess. These results will be reviewed and the short to medium term prospects will be briefly discussed.
Friday 27 April 2012, 16:00 : Dr. Tracey Berry, Royal Holloway
Searching for Extra Dimensions at ATLAS
The ATLAS experiment at the LHC has been recording data from proton-proton collisions at a centre-of-mass energy of 7 TeV since March 2010. In 2011 it collected an integrated luminosity of over 5 inverse femtobarns. The combination of the large amount of data available and the excellent detector performance has enabled searches for evidence of new physics at this unprecedented energy scale to be performed. In this talk I will give an overview of the ATLAS searches for extra dimensions.
Friday 20 April 2012, 16:00 : Prof. Mark Lancaster, UCL
The Muon: a probe for new physics
The muon whose discovery in 1937 caused a furore at the time is about to have a renaissance. The availability of new high intensity proton sources at PSI, J-PARC and FNAL will allow the muon's decay modes and dipole moments to be probed to an unprecedented precision. Lepton violation measurements can probe physics far beyond the LHC energy scale and elucidate and resolve degeneracy in new physics models potentially exposed by the LHC. In conjunction with measurements of neutrinoless double beta decay and neutrino oscillations, the muon measurements can also shed light on the mechanism that has generated the universe's matter anti-matter asymmetry. In this talk I will discuss the motivation for, and describe, the next generation of muon experiments and particularly the UK involvement in the COMET experiment.
Friday 23 March 2012, 16:00 : Dr. Aidan Robson, Glasgow
Final Higgs results from the Tevatron
I will present Higgs search results from the complete Tevatron dataset, which were shown for the first time two weeks ago, and discuss them in the context of recent LHC results.
Friday 09 March 2012, 16:00 : Dr. Dan Browne, UCL
Putting Bell inequality violation to work
Entanglement and the violation of Bell inequalities are the most striking examples of the incompatibility of quantum physics and the classical world. Quantum experiments can exhibit correlations which would be impossible in any classical world, unless information could travel faster than light. This behaviour is captured by Bell inequalities and other effects. Giving a general introduction to these effects for the non-specialist, I will give examples of the non-classical correlations which can lead to Bell inequality violations and describe how, in my own recent research, Bell inequality violation has been shown to represent something useful - computation.
Friday 02 March 2012, 16:00 : Dr. Boris Kayser, Fermilab
Neutrino Phenomenology, News, and Questions
We will explain the quantum mechanics of neutrino oscillation, which is a quintessentially quantum mechanical phenomenon. Then we will summarise what has been learned so far about neutrino oscillation experiments and discuss several experimental surprises. Finally, we will turn to the future, focusing on neutrino questions that will be addressed by the search for neutrinoless double beta decay.
[slides(B)]
Friday 24 February 2012, 16:00 : Prof. Ben Allanach, Cambridge
LHC versus SUSY
We review what last year's searches mean for supersymmetry, focusing on what they mean in terms of naturalness and fits to indirect data.
Friday 17 February 2012, 16:00 : Dr. Morgan Wascko, Imperial
T2K's First Neutrino Oscillation Result
The discovery of non-zero neutrino mass, via neutrino flavor oscillation, is the only confirmed observation of physics beyond the standard model of particle physics. Neutrino oscillation experiments have so far measured two of three mixing angles. I will describe the T2K experiment, a long baseline accelerator neutrino experiment in Japan searching for the third mixing angle, and present our first neutrino oscillation results.
Friday 10 February 2012, 16:00 : Dr. Sarah Bridle, UCL
Quantifying Dark Energy using Cosmic Lensing
I will describe the great potential and possible limitations of using the bending of light by gravity (gravitational lensing) to constrain the mysterious dark energy which seems to dominate the contents of our Universe. In particular we have to remove the blurring effects of our telescopes and the atmosphere to extreme precision, and account for possibly coherent distortions of galaxy shapes due to processes in galaxy formation. I will discuss these issues in more detail and review some recent progress in tackling them, putting them into the context of the upcoming Dark Energy Survey.
Friday 03 February 2012, 16:00 : Prof. Dmitri Vassiliev, UCL
Is God a geometer or an analyst?
The speaker is a specialist in the analysis of partial differential equations (PDEs) and the talk is an analyst's take on theoretical physics. We address the question: why do all the main equations of theoretical physics such as the Maxwell equation, Dirac equation and the linearized Einstein equation of general relativity contain the same physical constant - the speed of light? The accepted point of view is that this is because our world was designed on the basis of geometry, with the speed of light encoded in the concept of Minkowski metric. We suggest an alternative explanation: electromagnetism, fermions and gravity are different solutions of a single nonlinear hyperbolic system.
Friday 27 January 2012, 16:00 : Dr. Chris White, Glasgow
Polarisation Studies in Ht and Wt Production
The polarisation of the top quark can be an efficient probe of new physics models. In this seminar, I will focus on the associated production of a single top quark with either a charged Higgs boson or a W boson. Angular and energy observables relating to leptonic decay products of the top will be presented, which carry strong imprints of the top polarisation. These can be used to constrain the parameter space of two Higgs doublet models, as well as reduce backgrounds to either Ht or Wt production. The talk is based on arXiv:1111.0759.
Friday 16 December 2011, 16:00 : Jenny Thomas
MINOS + MINOS+, so good they named it twice
MINOS has delivered a number of important measurements which have moved the field of neutrino oscillations into the precision arena. The whole field is in a state of heightened excitement with recent results from a number of experiments which point to a large value of he mixing angle theta13. This will enable investigation of the mass hierarchy and CP violation in the experiments which are presently being constructed and being thought about. MINOS+ will be unique in its ability to probe with precision the correctness of the 3x3 PMNS mixing model which is presently assumed to be correct, both via the interference of other models on the oscillation probability over long distances and also via the search for sterile neutrinos, which would imply at least one extra neutrino family. In all cases, the next few years will be a great time for the neutrino field.
Friday 21 October 2011, 16:00 : Jocelyn Monroe (Royal Holloway London)
Searching for the Dark Matter Wind: Recent Progress from the DMTPC Experiment
The DMTPC directional dark matter detection experiment is a low-pressure CF4 gas time projection chamber, instrumented with charge and scintillation photon readout. This detector design strategy emphasizes reconstruction of WIMP-induced nuclear recoil tracks, in order to determine the direction of incident dark matter particles. Directional detection has the potential to make a definitive observation of dark matter using the unique angular signature of the dark matter wind, which is distinct from all known backgrounds. This talk will review the experimental technique and current status of DMTPC.
Friday 07 October 2011, 16:00 : Mitesh Patel (Imperial College London)
Friday 03 June 2011, 16:00 : Luca Panizzi (CNRS-IN2P3)
Lorentz violation in neutrinos from SN 1987a and MINOS
Lorentz invariance can be precisely tested using neutrinos from supernovae or long baseline experiments. I will discuss which limits can be imposed on general phemoneological parametrisations of Lorentz violation which go beyond the usual linear or quadratic power-law behaviour inspired by quantum-gravitational models.
Friday 27 May 2011, 16:00 : Costas Andreopoulos (RAL)
First Neutrino Oscillation Results from T2K
T2K is a front-runner, second-generation long-baseline neutrino oscillation experiment. It utilises a new and powerful, relatively pure muon-neutrino beam produced at Japan Proton Accelerator Research Complex (JPARC). The beam is aimed almost 2 degrees off-axis from the position of the Super-Kamiokande water Cherenkov detector in western Japan, 295 km away. The experiment also benefits from a near detector complex, instrumented with finely segmented solid scintillator and TPCs, located 280 m downstream of the beam-line target. The experiment aims to accumulate 8E+21 protons-on-target over the next 5 years. At the end of this running period, T2K aims to have improved present knowledge of sin^{2} (2\theta_{13}), sin^{2} (2\theta_{23}) and \Delta m^{2}_{23} by an order of magnitude. I will present initial muon-neutrino disappearance and electron-neutrino appearance results using the dataset accumulated during the first T2K physics run (January-June 2010), which corresponds to an integrated JPARC neutrino flux exposure of 3.23E+19 protons-on-target.
Friday 20 May 2011, 15:00 : Clive Speake (Birmingham) — Maths 505
Experimental Gravitation
I will describe experimental work currently underway at University of Birmingham in gravitation. We are building an experimental test of the inverse square law of gravity at short ranges using a superconducting suspension. I will describe this experiment and the challenges that need to be overcome. We have used a room temperature torsion balance to search for time variations of the gravitational constant over diurnal and semidiurnal periods as predicted by Kostelecky and Tasson (PRL 010402 2009). I will report on recent results from this experiment and improvements that are underway.
Friday 13 May 2011, 16:00 : Tony Padilla (Nottingham)
Meddling with Einstein
Einstein gravity has reigned supreme for over 100 years, but is it right? Of course not. We know it is definitely wrong in the super Planckian regime of quantum gravity, but perhaps it is also wrong at very low energies. I give an overview of why we might want to meddle with Einstein, and how we might do so without screwing everything up. I also give a taste of some recent ideas I've had in attempting to "solve" the cosmological constant problem.
Friday 29 April 2011, 16:00 : Tony Padilla (Nottingham)
Modified gravity
Friday 15 April 2011, 16:00 : Samuel Wallon (Laboratoire de Physique Theorique)
Mueller Navelet Jets
Friday 01 April 2011, 16:00 : Sam Harper (RAL)
Recent CMS Searches for Gauge Bossons Decaying into High Pt Leptons
Many new physics scenarios beyond the Standard Model predict the existence of new heavy gauge bosons decaying to electrons and muons. Evidence of these new particles has so far not been found experimentally. In 2010, the LHC made available a new energy frontier, significantly extending the potential experimental search region in these channels. In this talk I will briefly motivate these models and then describe the CMS searches for them using 40pb of 7 TeV proton-proton collision data. Finally I will conclude with the outlook for the near term future.
Friday 11 March 2011, 16:00 : Matthew Wing (UCL)
Proton Driven Plasma Wakefield Accelerators
Friday 04 March 2011, 16:00 : Guennadi Borissov (Lancaster)
Search for new sources of CP violation with DZero detector
I will review the latest results of the DZero experiment on the search for new sources of CP violation and will discuss in detail the measurement of the like-sign dimuon charge asymmetry. This result will be compared with other Tevatron studies of CP violation in Bs system.
Friday 25 February 2011, 16:00 : Mario Campanelli (UCL)
Jet Physics at ATLAS
Friday 04 February 2011, 16:00 : Gary Barker (University of Warwick)
Liquid Argon Detectors
Friday 17 December 2010, 16:00 : John Baines (RAL)
Performance of the ATLAS Trigger in 2010 running
The ATLAS trigger has been used very successfully to collect collision data during 2009 and 2010 LHC running at centre-of-mass energies of 900 GeV, 2.36 TeV, and 7 TeV. The trigger system reduces the event rate, from the design bunch-crossing rate of 40 MHz, to an average recording rate of 200 Hz. The ATLAS trigger is composed of three levels. The first (L1) uses custom electronics to reject most background collisions, in less than 2.5 us, using information from the calorimeter and muon detectors. The upper two trigger levels, known collectively as the High Level Trigger (HLT), are software-based triggers. As well as triggers using global event features, such as missing transverse energy, there are selections based on identifying candidate muons, electrons, photons, tau leptons or jets. I will give an overview of the performance of these trigger selections based on extensive online running during LHC collisions and describe how the trigger has evolved with increasing LHC luminosity. I will end with a brief overview of plans for forthcoming LHC running including future trigger upgrades.
Friday 05 November 2010, 16:00 : Justin Evans (UCL)
Latest Results from the MINOS experiment.
The MINOS experiment uses a muon neutrino beam, over a baseline of 735 km, to measure neutrino oscillation parameters. A number of new results have been released this year. The observation of muon neutrino disappearance has allowed the world's most precise measurement of the largest neutrino mass splitting, and a competitive measurement of the mixing angle θ_23. A search for the appearance of electron neutrinos yields limits on the as-yet-unmeasured mixing angle θ_13. A measurement of the neutral current interaction rate allows limits to be placed on the existence of sterile neutrinos. Data has also been taken with a dedicated muon antineutrino beam. This data has been used to make the first precision measurement of the muon antineutrino oscillation parameters; any observed difference from the neutrino parameters would be evidence for physics beyond the standard model.
Friday 29 October 2010, 16:00 : Gunther Roland (MIT)
Long-range correlations in high-multiplicity proton-proton collisions at the LHC
The CMS collaboration recently announced the observation of long-range near-side angular correlations in proton-proton collisions at LHC. In this measurement, we have found a novel correlation where particles produced in the collision are aligned in their azimuthal angle over a large pseudorapidity region. This "ridge"-like structure is absent in minimum bias events but emerges as the produced particle multiplicity reaches very high values. This phenomenon has not been observed before in proton-proton collisions but resembles similar effects seen in collisions of nuclei such as copper and gold ions at RHIC. In this talk, I will report on the experimental aspects of this measurement and discuss some of the recent proposals regarding the physical origin of the effect.
Friday 08 October 2010, 16:00 : Adam Davison (UCL)
Exciting results from the ICHEP 2010 conference
A review of the exciting results presented at this year's International Conference on High Energy Physics in Paris this summer.
Monday 10 May 2010, 16:00 : Subir Sarkar
Antimatter in cosmic rays: new physics or old astrophysics?
There has been considerable excitement generated by recent 'anomalies' in galactic cosmic ray and gamma-ray observations, indicative of dark matter annihilation or decay. It is essential to have a reliable evaluation of the astrophysical 'background' in order to evaluate such claims. I will focus on the PAMELA, Fermi and HESS data and discuss whether these can be accommodated in the 'standard model' of galactic cosmic ray origin and propagation or whether new physics is needed.
Thursday 15 April 2010, 16:00 : Krisztian Peters (University of Manchester)
Higgs searches - latest results from the Tevatron
The current status of Standard Model Higgs searches at the Tevatron is presented. A comprehensive program of searches in many Higgs boson production and decay channels is underway, with recent results using up to 5.4/fb of data collected with the CDF and D0 detectors. The major contributing processes include associated production (WH→lνbb, ZH→ννbb, ZH→llbb) and gluon fusion (gg→H→WW). Improvements across the full accessible mass range resulting from the larger data sets, improved analyses techniques and increased signal acceptance will be discussed. A new CDF and D0 combined result with the updated data set will also be presented. Finally, I will discuss prospects for Higgs searches at the Tevatron.
Friday 19 March 2010, 16:00 : Steve Biller (University of Oxford)
The SNO+ Project
The continuously surprising and peculiar nature of neutrinos and the weak interaction have been a source of puzzlement since they were first discovered. In recent years, a remarkable paradigm has emerged that seeks to explain hidden symmetries, the scale of neutrino masses and the reason for the imbalance between matter and antimatter in the universe. A cornerstone of this includes the notion that the physical neutrinos we see are their own antiparticles. The only viable mechanism known to have a chance of testing this is the process of neutrinoless double beta decay (0nbb). A pioneering new approach to searching for this rare process involves a novel use of the unique SNO detector in Canada as part of the SNO+ programme, which has a large UK involvement. In concert with other experiments using different isotopes, Phase I aims to convincingly establish or bound 0nbb for equivalent neutrino masses in excess ~100meV and, if seen, constrain the physics mechanism by 2015. Methods to push beyond this are also being explored. In addition, SNO+ will perform precision measurements of solar neutrinos at the transition between matter and vacuum-dominated oscillations to critically test fundamental neutrino couplings as well as studing geo-neutrinos, reactor neutrinos and a variety of other physics.
Friday 12 March 2010, 16:00 : Yoshi Uchida (Imperial College London)
Charged Lepton Flavour Violation: a factor one million improvement
Flavour-changing transitions of charged leptons have been a topic of experimental investigation since the early days of particle physics, and this helped shape some of the basic laws that any successful model of particle physics would have to obey, including what we now call the Standard Model. While such phenomena have never been observed to date, when the discovery of neutrino masses and oscillations broke the Standard Model, it transformed the question "Does charged lepton flavour violation exist?" into "How much?" and "How?", and even "Why haven't we seen it yet?". In this seminar, I will describe the current experimental and theoretical state of the field, and why the next generation of experiments could hold the keys that lead the way to a fuller understanding of our universe, offering complementary discoveries that experiments at the high-energy frontier cannot reach. I will then focus on mu-e conversion experimentation, and specifically the COMET/PRISM programme which is promising a sensitivity improvement of four orders of magnitude compared to the current record, potentially improving to six orders, which could open the path to precision measurements with multiple lepton flavour-violating probes of Physics Beyond the Standard Model.
Wednesday 03 March 2010, 16:00 : Friedrich Hehl, University of Cologne (M304 Kathleen Lonsdale Building - Origins Seminar)
Nonlocal gravity simulates dark matter
A nonlocal generalization of Einstein's theory of gravitation is constructed within the framework of the translational gauge theory of gravity. In the linear approximation, the nonlocal theory can be interpreted as linearized general relativity but in the presence of dark matter that can be simply expressed as an integral transform of matter. It is shown that this approach can accommodate the Tohline-Kuhn treatment of the astrophysical evidence for dark matter.
Monday 22 February 2010, 16:00 : Friedrich Hehl, University of Cologne (Maths room 500 - Origins Seminar)
On the change in form of Maxwell's equations during the last 150 years --- spotlights on the history of classical electrodynamics ---
Starting with Maxwell's equations for the electromagnetic field (1865), we first point out how Maxwell brought his system of equations into quaternionic form. Subsequently, we recognize that what we call Maxwell's equation nowadays is a creation of Heaviside and Hertz. We touch the development of vector calculus (Hamilton, Grassmann, Gibbs, Foeppl) and of tensor calculus (Riemann, Christoffel, Ricci, Levi-Civita) both around 1900. Then we study the impact of special and of general relativity on Maxwell's equations. In particular we follow up the metric-free and topological version of Maxwell's equations via exterior differential forms and period integrals. Some alternative formulations via spinors, Clifford algebras, chains and cochains... are mentioned.
Friday 12 February 2010, 16:00 : Jon Butterworth (UCL)
Subjet structure as physics tool at the LHC: and some early LHC Data
I will show some of the first jet data from the 2009 LHC run, and discuss the prospects for using jet substructure to search for new physics in high energy running.
Friday 05 February 2010, 16:00 : Adrian Bevan (Queen Mary)
Super Flavour Factories: SuperB
The Standard Model of particle physics is the pinnacle of achievements of science. In one simple model we can describe the behaviour of all known particles extremely well. However we know that there are missing pieces to this puzzle. We can use SuperB to learn about the missing pieces through a multi-prong approach: i) study rare decays, ii) search forbidden decays, and iii) over-constrain measurements of Standard Model sensitive observables to see if they all agree. I will discuss just a few of the many measurements that can be made at SuperB and how these can be used to improve our understanding of particle physics. Having motivated the reasons for a new experiment, I will briefly discuss some of the aspects of such a proposed facility.
Friday 22 January 2010, 16:00 : Morgan Wascko (Imperial College London)
Neutrino Cross Section Measurements at SciBooNE
As we enter the era of precision neutrino oscillation measurements, the need to improve neutrino interaction cross section measurements is paramount. SciBooNE at Fermilab uses a fine grained tracking detector to make precise neutrino and antineutrino cross section measurements on carbon and iron. I will present SciBooNE's latest physics results, which are measurement of neutral current neutral pion production by neutrinos. These measurements are of direct relevance to the future global long baseline accelerator neutrino program, especially the T2K experiment in Japan.
Friday 11 December 2009, 16:00 : Alain Blondel (University of Geneva)
R&D for neutrino factory and muon collider: the MICE experiment at RAL
Neutrino Factory and Muon Collider are novel accelerators and their development is of great interest for the particle physics community. Producing, capturing and accelerating enough of these particles that live 2.2 microseconds is a challenge at every step. One of the novel technologies is ionization cooling. The Muon Ionization Cooling Experiment at RAL is set to demonstrate the feasibility and performance of a section of cooling channel on a real muon beam. After a brief introduction to the specific virtues of muon colliders and neutrino factory, the MICE experiment and its present status will be described. We will conclude with an invitation to study future possibilities.
Friday 04 December 2009, 16:00 : Mark Dorman (UCL)
Preliminary CCQE Neutrino-Nucleus Scattering Results from MINOS
Charged-current quasi-elastic scattering is the dominant neutrino interaction mode at low energies and accurate knowledge of the cross section is important for current and future oscillation experiments. CCQE scattering is also hugely interesting in it's own right as a fundamental process, a probe for the axial nature of the nucleon and a window into the complex world of nuclear effects. I will first introduce the MINOS experiment and discuss CCQE scattering theory and the current status of CCQE cross section measurements. I will then present preliminary results made with a high statistics neutrino scattering dataset collected by the MINOS Near Detector.
Friday 06 November 2009, 16:00 : Henrique Araujo (Imperial College)
Direct dark matter searches with ZEPLIN-III and beyond
ZEPLIN-III is a two-phase xenon experiment deployed 1100 m underground at the Boulby mine (UK) to search for galactic dark matter WIMPs. These Weakly Interacting Massive Particles are the lead candidate to explain the missing non-baryonic matter in the universe. ZEPLIN-III operates on the principle that electron and nuclear recoils, produced in liquid xenon by different particle species, generate different relative amounts of scintillation light and ionisation charge. WIMPs are expected to scatter elastically off Xe atoms (much like neutrons), and the recoiling atom will produce a different signature to gamma-rays, which create electron recoils. The first science run at Boulby placed a 90%-confidence upper limit on the WIMP-nucleon cross-section of 8.1x10-8 pb, at the level of the world's most sensitive experiments. We are now embarking on the second run after upgrading the instrument and adding an anti-coincidence veto system. With one year of running we expect to improve our sensitivity ten-fold, biting significantly into the parameter space favoured by Supersymmetry. To fully probe the SUSY prediction requires target masses in the tonne scale and above. Achieving this whilst keeping sensitivity to nuclear recoils in the keV scale - and possibly looking for no more than a hand-full of events per year - is a serious technical challenge. This is now the priority for the most competitive technologies, namely cryogenic germanium and the noble liquids. For this next phase we have teamed up with the US LUX collaboration to deploy a tonne-scale xenon target at Homestake (South Dakota, US), possibly followed by a 20-tonne experiment. The nature of dark matter is one the main open questions in Physics today, and the race is on to claim a discovery!
Friday 30 October 2009, 16:00 : Mark Lancaster (UCL)
Latest Results from CDF
The latest results from CDF since the spring of 2009, 50 in total, from the world's highest energy collisions (until the LHC beats it by 120 MeV !) at the proton anti-proton Tevatron collider will be presented. These include results on CP anomalies in the b-sector, top quark physics, searches for physics beyond the Standard Model and the latest searches for the Higgs boson.
Friday 16 October 2009, 16:00 : Evgueni Goudzovski (University of Birmingham)
A precision test of lepton universality in K --> l nu decays at the CERN NA62 experiment
Measurement of the helicity-suppressed ratio of charged kaon leptonic decay rates BR(K --> e nu)/BR(K --> mu nu) has long been considered as excelent test of lepton universality and the Standard Model description of weak interactions. However it was realised only recently that the helicity suppression enhances the sensitivity to SUSY-induced effects to an experimentally accessible level. The NA62 experiment at CERN has collected a record number of over 10^5 K --> e nu decays during a dedicated run in 2007, aiming at achiving a 0.5% precision. Experimental strategy, details of the analysis, preliminary results, and future prospects of the measurement will be discussed.
[(slides)]
Friday 25 September 2009, 16:00 : Geoff Mills (Subatomic Physics Group, Los Alamos National Laboaratory)
Nus and Anti-Nus from MiniBooNE
The MiniBooNE experiment, a short baseline neutrino oscillation experiment currently running at Fermilab, has spent the last two years building up its supply of anti-neutrino data, and has combed through it and the already substantial neutrino data stockpile. The intriguing results will be explored along with future possibilities for short baseline programs.
Friday 07 August 2009, 16:00 : Freya Blekman (Cornell University)
CMS Pixel Detector
The Compact Muon Solenoid (CMS) is one of two general purpose experiments at the Large Hadron Collider. The CMS experiment prides itself on an ambitious, all silicon based, tracking system.
After over 10 years of design and construction the CMS tracker detector has been installed and commissioned. The tracker detector consists of ten layers of silicon microstrip detectors while three layers of pixel detector modules are situated closest to the interaction point. The pixel detector consists of 66M pixels of 100um*150um size, and is designed to use the shape of the actual charge distribution of charged particles to gain hit resolutions that will eventually be down to 12 um. This presentation will focus on commissioning activities in the CMS tracker, with extra attention on the pixel detector. Results from cosmic ray studies will be presented, in addition to results obtained from the integration of the detector within the CMS detector and various calibration and alignment analyses.
Friday 05 June 2009, 16:00 : Doug Cowen (Pennsylvania State University)
Physics with IceCube's Deep Core Sub-array
The low energy reach of the IceCube Neutrino Observatory will be significantly extended with the addition of a sub-array called "DeepCore." DeepCore will be fully deployed in February 2010 in the clearest ice at the bottom center of the larger array. It will feature a 10x higher pixel density and 40% higher quantum efficiency photomultiplier tubes. It will also benefit greatly from the use of the surrounding IceCube array as an extremely effective veto against the copious background from downward-going cosmic-ray muons.
In this talk we will show that DeepCore extends the energy reach of IceCube to neutrino energies as low as 5-10 GeV. This will allow IceCube to search for lower mass solar WIMP annihilations, astrophysical neutrino sources in the southern sky, and to measure atmospheric neutrino oscillations. After an overview of IceCube and the design and deployment schedule for DeepCore, we will focus on neutrino oscillations with DeepCore. We will present some early results and predictions showing how well we can measure muon neutrino disappearance, how well we might be able to measure tau neutrino appearance, and whether we have a chance to determine the sign of the neutrino hierarchy.
Friday 15 May 2009, 16:00 : Mark Dorman (UCL) -- Physics E7
Recent Results from the MINOS Experiment
The MINOS long baseline neutrino oscillation experiment has been taking data in the NuMI neutrino beam since 2005. In this seminar I will introduce the experiment and present the latest physics results from MINOS. I'll first discuss atmospheric neutrino mixing and then present a number of other oscillation results; the search for sterile neutrinos, the search for electron neutrino appearance (theta_13) and anti-neutrino oscillations in MINOS. Finally I'll present neutrino cross section measurements from the Near Detector and summarise the outlook for MINOS.
Friday 08 May 2009, 16:00 : Peter Krizan (University of Ljubljana, Jozef Stefan Institute)
From Belle to SuperBelle
The seminar will first review some recent highlights of measurements of B and D meson properties that have been carried out by the Belle collaboration. We will discuss the motivation for a future Super B factory at KEK, as well as the requirements for the detector. Finally, the present status of the project will be presented together with the plans for the future.
Friday 01 May 2009, 15:15 : Michela Massimi (UCL) — E7
Are we justified to believe in colored quarks? A philosopher's look at the debate
Friday 24 April 2009, 16:00 : Filipe Abdalla (UCL)
Neutrino Mass contstraints from cosmology
Friday 17 April 2009, 16:00 : Jim Hinton (Leeds)
Gamma Ray Astronomy
Friday 27 March 2009, 16:00 : Are Raklev (University of Cambridge)
Gravitino Dark Matter
Friday 20 March 2009, 16:00 : Phil Harris (University of Sussex)
Testing Time Reversal
Friday 13 March 2009, 16:00 : Marcella Bona (CERN)
Flavour physics as a test of the SM and a probe of new physics
The vast amount of flavour physics results delivered by the B factories and the Tevatron continuously improving Bs system measurements allows for precision test of the Standard Model (SM). The Unitarity Triangle (UT) analysis for the extraction of the CKM matrix parameters is a powerful tool for combining all the available experimental data in the flavour sector and the lattice QCD calculations to check SM consistency and determine the values of SM observables. The measurements of the UT angles recently performed at B factories provide a determination of the UT comparable in accuracy with the one performed using the other available data. Thus the UT fit is now overconstrained. It is therefore possible to add new physics (NP) contributions to all quantities entering the UT analysis and to perform a combined fit of NP contributions and SM parameters. Thus the UT fit analysis can be turned to a new physics search.
Friday 27 February 2009, 15:00 : Gustave Tuck Lecture Theatre
Origins Launch
The keynote speeches will be given by Sir Paul Nurse and Prof. John Ellis.
Wednesday 25 February 2009, 16:00 : Tom McLeish
Friday 13 February 2009, 13:30 : Maths 706
Origins -- Mathematical Foundations
Friday 23 January 2009, 16:00 : Andrej Gorisek
The ATLAS Diamond PIXEL Upgrade
The goal of this project is to construct diamond pixel modules as an option for the ATLAS pixel detector upgrade. This is made possible by progress in three areas: the recent reproducible production of high quality polycrystalline Chemical Vapor Deposition diamond material in wafers, the successful completion and test of the first diamond ATLAS pixel module, and the operation of a diamond after irradiation to 1.8x10^16 p/cm2. I will summarize the results in these three areas and describe our plan to build and characterize a number of ATLAS diamond pixel modules, test their radiation hardness, explore the cooling advantages made available by the high thermal conductivity of diamond and demonstrate industrial viability of bump-bonding of diamond pixel modules.
Friday 05 December 2008, 16:00 : Prof. Alain Blondel (University of Geneva)
Status of MICE
Friday 28 November 2008, 16:00 : Harry van der Graaf
Alternative ATLAS Upgrade?
Friday 31 October 2008, 16:00 : Dr Robert Flack
Results from Neutrino '08
In May 2008 I attended the conference Neutrino08 in Christchurch New Zealand and gave a plenary talk about the latest results from NEMO 3 and SuperNEMO. There were approximately fifty plenary talks and 100 "beer and pizza" talks in the evening. I will give a summary of the main plenary talks.
Friday 24 October 2008, 16:00 : Simon Bevan
Ultra High Energy Neutrino Astronomy with ACoRNE
An overview of UHE neutrino astronomy will be presented emphasising on the exciting new results from ACoRNE. The ACoRNE collaboration has successfully been taking data from the Rona array since December 2005. Here the progress in match filters, co-incident pulse finding tools, and neural networks will be discussed, showing the development of a successful reduction and analysis package used on the Rona data. The results of this analysis will be presented as the UK's first acoustic limit.
Friday 17 October 2008, 16:00 : Prof. Gary Varner (University of Hawai'i)
Microwave Radio Detection of UHE Air Showers
Extensive air showers deposit such large amounts of energy into the atmosphere that even relatively weak emission mechanisms may be observable at considerable distances. It has been known since the 1960's that UHE Cosmic Ray interactions lead to a concurrent radio emission, though the process was not well understood or characterized at the time. In recent years there has been renewed interest in this lower frequency radio mechanism, attributed to geosynchrotron emission during shower evolution. However since this emission is "beamed" onto the ground, observation is similar to traditional ground array observations of the shower particles. By contrast, in analog to distant shower observation via Nitrogen fluorescence, molecular bremsstrahlung emission of shower electrons leads to isotroptic radiation at microwave radio frequencies. While incoherent emission may be detectable at kilometer scale distances, accelerator measurements indicate that at least partial coherence is present in such cascades, significantly enhancing the fiducial acceptance of this technique. Since the atmosphere is almost completely transparent to microwave and operation is insensitive to light background or weather, an order of magnitude better livetime may be expected compared with fluorescent observation, with potentially lower systematic energy uncertainty. A preliminary satellite dish-based system has been operated in Hawaii and a follow-on prototype system is described that will be deployed at the Auger Observatory in spring 2009.
Friday 10 October 2008, 16:00 : Prof. Yoshitaka Kuno (Osaka University)
Physics of Lepton Flavor Violation of Charged Leptons and an Experimental Proposal to Search for Muon to Electron Conversion at J-PARC
Lepton flavor violation of charged leptons has attracted much interest from theorists and experimentalists since it would have potentials to get important hints for new physics beyond the Standard Model, such as SUSY-GUT and/or SUSY-Seesaw models. In particular, a process of muon to electron conversion in a muonic atom is considered to be one of the best to search for. In this talk, a new experimental proposal to search for muon to electron conversion with sensitivity of less than 10^{-16} at J-PARC in Japan will be presented.
Friday 03 October 2008, 16:00 : Dr. Chris Hays (University of Oxford)
Search for High-Mass Resonances in CDF Dimuon Data.
Neutral resonances have a long and illustrious history, and could well provide the next big discovery in particle physics. I present a new search for high-mass neutral resonances in the CDF dimuon data, the most sensitive such search to date. The analysis applies the novel technique of probing the inverse mass spectrum, for which the detector resolution is constant in the search region. The results are interpreted in terms of spin 0, 1, and 2 resonances, using sneutrino, Z', and graviton models, respectively.
Friday 26 September 2008, 16:00 : Jonathon Coleman (SLAC)
The Status of mixing and CP searches in the charm sector.
During 2007 the Flavor Factories surprised the physics community with unexpected results in charm-mixing. Since this time there has been several confirmations of this phenomena. I will present an overview of recent experimental results from for the $D0$ meson oscillating into its own anti-particle, (or vice versa).
Friday 19 September 2008, 16:00 : Fabrizio Palla (INFN-Pisa)
Motivation and possible architechtures for a Level-1 track Trigger in CMS and the SuperLHC
Friday 13 June 2008, 16:00 : Carlo Carloni Calame — Maths 500
Electroweak radiative corrections to Drell-Yan processes at hadron colliders
The status of the electro-weak radiative corrections to Drell-Yan processes is summarized. Their impact on observables which are important for Tevatron and LHC physics is discussed. Particular emphasis will be given on their implementation in the event generator Horace.
Friday 30 May 2008, 16:00 : Dave Waters (UCL) -- E7
Weighing Up the Weak Force: W Boson Mass and Width Measurements from CDF Run II
The W boson, carrier of the weak nuclear force, is the least well measured of the Standard Model's force carriers. Precision measurements of the mass and the lifetime of the W boson provide a stringent test of the Standard Model and, indirectly, allow us to probe the physics that may lie beyond the Standard Model. I present recent direct measurements of the W boson mass and width from the CDF experiment, both of which are now the single most precise measurements in the world. I discuss several of the challenges involved in performing these analyses, and outline the prospects for such measurements in the future.
Friday 16 May 2008, 16:00 : Aidan Robson (University of Glasgow)
Higgs Searches at CDF
2008 is going to be significant for Higgs physics: we expect to reach 95%CL sensitivity to a 160GeV Higgs with the combined Tevatron data. I will talk about CDF's Higgs->WW analysis, focusing on techniques, set it in the context of CDF's low-mass Higgs searches, and give an outlook for the next few years.
Friday 09 May 2008, 16:00 : Ken Peach (John Adams Institute for Accelerator Science -- University of Oxford and Royal Holloway University of London)
A new accelerator for advanced research and cancer therapy
Although Fixed-Field Alternating Gradient Accelerators were invented in the 1950s, they have never made any significant impact, the technology being superseded by the synchrotron. However, interest has recently been revived, particularly in Japan, where "proof of principle" proton FFAGs have been built. More recently, a new concept - the "non-scaling FFAG" - has been advanced, which offers the prospect of developing relatively compact, high acceleration rate accelerators for a variety of purposes, from neutrino factories and muon acceleration to cancer therapy. However, there are formidable technical challenges to be overcome, including resonance crossing. We have recently been awarded funding in the UK to construct a demonstrator non-scaling FFAG at the Daresbury laboratory (EMMA, the Electron Model with Many Applications), and to design a prototype machine for proton and carbon ion cancer therapy (PAMELA, the Particle Accelerator for MEdicaL Applications). I will describe some of the motivations for developing this new type of accelerator, and discuss the status of the EMMA and PAMELA projects.
Friday 02 May 2008, 16:00 : Terry Sloan (Lancaster University)
Cosmic Rays and Global Warming
In this seminar I will explore the proposed link between changes in cloud cover and the changes in the rate of ionization in the atmosphere produced by the effects of solar magnetic activity on cosmic rays. I will briefly review the mechanism by which this could cause global warming,comparing it with the more conventional view of the cause arising from the increased the concentration of greenhouse gases. I will go on to describe our searches for evidence to corroborate the cosmic ray cloud cover link.
Friday 18 April 2008, 16:00 : Fabio Maltoni (Université catholique de Louvain)
The ttbar invariant mass as a window on new physics
I explore in detail the physics potential of a measurement of the ttbar invariant mass distribution at the Tevatron and the LHC. First, the accuracy of the best available predictions for this observable are considered with the result that in the low invariant mass region the shape is very well predicted and could be even used to perform a top mass measurement. Second, I study the effects of an heavy s-channel resonance on the ttbar invariant mass distribution, in a model independent way and outline a simple three-step analysis towards a discovery.
Friday 14 March 2008, 16:00 : Fedor Simkovic (Comenius University Bratislava)
Double Beta Decay: History, Present and Future
The properties of the neutrinos have been the most important issues in particle physics, astrophysics and cosmology. After neutrino oscillations discovery, search for neutrinoless double beta decay (0v-decay) represents the new frontiers of neutrino physics, allowing in principle to fix the neutrino mass scale, the neutrino nature (the Dirac or Majorana particles) and possible CP violation effects. Many next generation 0v-decay experiments are in preparation or under consideration. In this presentation the development in the field of nuclear double beta decay is reviewed. A connection of the 0v-decay to neutrino oscillations and other lepton number violating processes is established. The light and sterile neutrino exchange mechanisms as well as R-parity breaking mechanisms of the 0v-decay are analyzed. The problem of the reliable determination of the 0v-decay nuclear matrix elements is addressed. The possibility of boson neutrino and partially boson neutrino is studied in light of the 2v-decay data. The perspectives of the field of the double beta decay are outlined.
Friday 07 March 2008, 16:00 : Dr. Tim Namsoo
A prelude to some ZEUS results to be shown at DIS08
The DIS08 conference is fast approaching and is to be hosted by UCL and Oxford University, in the UCL campus in the week starting the 7th April. The ZEUS collaboration, that operated a general purpose detector on the HERA electron-proton collider in Hamburg, and to which the UCL HEP group belongs, are preparing to send a number of results to DIS08. The number and range of these results makes covering them all in a single seminar unfeasible so instead, a subset of results, in which the speaker is involved, will be discussed. These topics will include the measurement of the longitudinal proton structure function, the energy dependence of the photoproduction cross section, studies involving the multiplicity and momentum spectra of charged hadrons in jets and three- and four-jet cross sections in photoproduction. The talk will not include final results on all of these topics but instead will motivate and discuss various aspects of the measurements at a level not possible during the conference itself due to stricter time constraints.
Friday 29 February 2008, 16:00 : Dr. Chris Parkes
The LHCb Vertex Locator and the LHCb upgrade - or finishing the bicycle wheel spokes and turning it into a vespa.
LHCb is the dedicated heavy flavour physics experiment at the LHC, and is currently being commissioned. Arguably the single most critical element of the detector is the silicon vertex locator. The unique design has 40 micron pitch silicon strip detectors operated in vacuum only 8mm from the LHC beam and retracted and reinserted between each fill. It is the only LHC detector yet to have seeen beam data - the partially assembled final system having been tested in the CERN SPS. The design, testing, alignment and performance of this detector are discussed. The discovery potential of the experiment, or the measurement of the characteristics of new physics discovered during the first phase, can be significantly enhanced through the upgrade of the experiment to run at higher luminsoity. The plans for the LHCb upgrade are presented.
Friday 22 February 2008, 16:00 : Dr. Steve Boyd (Warwick)
T2K - The Next Generation.
It is now a well established fact that the weak and the mass eigenstates of neutrinos are not identical. As a consequence we can observe transitions from one neutrino species to another. Over the past decade neutrino experiments have measured most of the parameters describing neutrino oscillations with ever increasing accuracy. One parameter so far eludes measurement, the so-called mixing angle theta_13. T2K is one of the next generations experiments. While it aims to improve the knowledge of some of the other parameters, its main objective is the measurement of theta_13, which is responsible for muon to electron neutrino transitions. This angle may hold the key to study CP violating effects in the lepton sector of the SM. The seminar will summarise the current status of neutrino oscillation studies, and will outline the future contributions of the T2K Experiment.
Friday 08 February 2008, 16:00 : Dr. Lisa Falk-Harris
The Double Chooz Reactor theta_13 Experiment
The Double Chooz reactor experiment is the next step for neutrino oscillation measurements: It will measure, or limit, the last undetermined PMNS mixing angle, theta_13, down to an order of magnitude below the current limit. Deploying two identical detectors, one near the reactor cores and one at a distance of 1.05 km, will permit us to control the systematics to the level required to reach a sensitivity of sin^2 2theta_13 ~0.03. I will give a brief summary of our present knowledge of the neutrino mixing parameters, and a motivation for new reactor experiments. I will then describe the design and status of the Double Chooz experiment and of its detector components. Finally, I will discuss systematics, expected sensitivity and the schedule for installation and data-taking.
Friday 01 February 2008, 16:00 : Dr James Libby (Oxford)
The precise determination of the unitarity triangle angle gamma: LHCb and CLEOc
The LHCb experiment will be introduced. The precise determination of gamma is a cornerstone of the LHCb physics programme; the motivation for this will be presented. Decays of the type B+ -> DK+, where the D is a D0 or a D0bar decaying to the same final state, allow a theoretically clean measurement of gamma. The prospects with LHCb for modes where the D decays to 2, 3 and 4 bodies are discussed. This discussions introduces the importance of measurements of the strong parameters of the relevent D-decays to allow precise determination of gamma and the crucial role played by quantum correlated e+e- -> psi(3770) ->D0D0bar data currently being collected by CLEOc. The CLEOc measurements essential to a precise determination of gamma are presented.
Friday 25 January 2008, 16:00 : Prof. Stephen Watts (Manchester)
Data Mining and Visualisation
There are more ways to represent ones data than by using graphs, histograms and scatterplots. There are many new techniques which exploit computer technology that enable one to visualise and explore multidimensional data. The seminar will explain these ideas, show how they are related to data mining, and apply them to particle physics data analysis.
Thursday 24 January 2008, 16:00 : Prof. Peter Rowson (Stanford Linear Accelerator)
A Novel Approach to Neutrinoless Double Beta Decay - EXO.
The "Enriched Xenon Observatory", or EXO, is an ongoing experimental and R&D program to employ a new technique for the search for neutrinoless double-beta decay. The decay and detection medium is xenon enriched in the 136 isotope. A phase I experiment known as EXO200, using 200 kg of xenon enriched to 80% xenon 136, is presently being installed at the WIPP underground facility in Carlsbad, New Mexico, and is scheduled to begin data taking at the end of the coming summer. In a two year run, this liquid xenon TPC experiment will observe and study two-neutrino double beta decay in xenon for the first time, and should exceed presently available sensitivity for the neutrinoless mode. In parallel, an R&D program is underway for a future multi-tonne detector employing a laser-fluoresence based daughter-nucleus tagging scheme that should be able to reach sensitivities to the effective neutrino mass in neutrinoless double beta-decay down to the 10 meV range.
Friday 18 January 2008, 16:00 : Prof. Bob Nichol (Portsmouth)
Seeing Dark Energy
Nearly 10 years ago, astronomers discovered the expansion of the Universe was accelerating driven by a mysterious quantity now know as "Dark Energy". Explaining the existence and properties of dark energy is a major challenge for physicists. In this seminar, I will review the evidence for dark energy (including my own research) and our present understanding of dark energy. I will conclude the talk with an overview of future dark energy research which will dominate cosmology for the next decade.
Friday 30 November 2007, 16:00 : Borut Kersevan (University of Ljubljana, Jozef Stefan Institute)
AcerMC Monte Carlo Generator and Heavy Flavor Matching
I will present the functionality of the AcerMC Monte Carlo event generator with the emphasis of newly developed procedure for matching hard processes with parton shower approximations in the presence of heavy quarks in the initial state.
Wednesday 14 November 2007, 16:00 : Jan Michael Rost
Department Colluquium (Massey Theatre?)
Friday 09 November 2007, 16:00 : Prof. Jenny Thomas
Lepton-Photon Neutrino Results
The most exciting results at this years Lepton-Photon came from the neutrino field. I will review the talks giving special attention to Accelerator neutrino Experiments.
Friday 02 November 2007, 16:00 : Dr. Chris Hill (Bristol)
Green HEP: Doing Particle Physics with Beam Switched OFF
There are numerous scenarios of physics beyond the Standard Model (e.g. split supersymmetry) which predict the production of a heavy quasi-stable particles in 14 TeV proton-proton collisions. If these particles are charged, they will lose energy via ionisation as they traverse the experimental apparatus. Consequently, if these particles are not produced with too much initial kinetic energy and their lifetime is long enough, they will come to rest in the detector. These "stopped" particles will subsequently decay at some later time, perhaps after the beam has been switched off. I will review the theoretical motivations which suggest that such particles could be copiously produced (and stopped) at the LHC. I will also discuss the status of the experimental effort to search for such particles using the CMS detector.
Friday 19 October 2007, 16:00 : Prof. Alan Watson (Leeds)
Recent measurements on ultra-high energy cosmic rays
Knowledge about cosmic rays above 10^19 eV has greatly improved through the successful operation of the Auger Observatory in Argentina where more events above 10^18 eV have now been recorded than from the sum of all previous efforts. I will outline the reasons for interest in this field, describe the Observatory and review our latest results on the mass composition, the energy spectrum and the anisotropy of what are the highest energy particles in Nature.
Wednesday 17 October 2007, 16:00 : Don Eigler
The Small Frontier (W.H. Bragg Lecture)
For more than twenty years, the scanning tunneling microscope has given us a kind of virtual presence in the world of atoms. This wonderful instrument not only allows us to "see" the atomic and electronic landscape, but we also can use it to build structures of our own design with individual atoms as the building blocks. In this presentation I will describe how the microscope works and give some examples of how we use it to broaden our knowledge of the physical properties of nanometer-scale structures. I will show examples of our efforts to explore ways in which future computation might be performed using atomic-scale components.
Friday 05 October 2007, 16:00 : Prof. Jon Butterworth
Hot (non-neutrino) Results from the EPS Conference
Friday 22 June 2007, 16:00 : Sebastian Boeser (UCL)
Towards acoustic detection of ultra-high energy neutrinos
Detection of neutrinos with energies at the far end of the cosmic ray spectrum promises valuable insight not only on cosmological questions, but also for high-energy particle physics. However, the tiny neutrino cross-section and the small flux require target volumes in the order of 100 km^3 - and thus the development of new detection techniques. Among a variety of different target media and interaction signatures, registration of acoustic waves in the South Polar ice cap from the neutrino-induced cascades seems to be promising. The development of a setup to verify the predicted absorption length of ~10km as well as to measure the sound velocity profile and background noise is presented.
Friday 01 June 2007, 16:00 : Tim Londergan (Indiana University)
How do we interpret the NuTeV result?
Friday 25 May 2007, 16:00 : Tim Gershon (Warwick)
Super Flavour Factory
Recent studies have highlighted the essential role of flavour physics in searching for and understanding physics beyond the Standard Model. At the same time, advances in accelerator technology have made realistic the possibility of an e+e- machine reaching unprecedented luminosities above 1036/cm2/s1. I will summarise the opportunities presented by the SuperB project, as described in the recent Conceptual Design Report.
Friday 18 May 2007, 16:00 : Jeppe R. Andersen (Cambridge)
Hard Multi-Jet Predictions from High Energy Factorisation
The dynamics of QCD (and other field theories) simplifies greatly in the so-called perturbative "high energy limit", characterised by large centre of mass energy and fixed (perturbative) transverse scales. We will present a framework based on this high energy factorisation of scattering amplitudes, which allows for the prediction of multi(>=2)-jet rates. We will present predictions for e.g. W+jets, H+jets, and pure n-jet events at the LHC and Tevatron. Finally, we will discuss recent efforts towards improving the accuracy of the calculations.
Friday 04 May 2007, 16:00 : Ryan Nichol (UCL)
Ultra-High Energy Neutrino Astronomy in Antarctica
Ultra-high energy neutrino astronomy is a rapidly emerging field at the crossroads of particle physics, astronomy and astrophysics. This talk will address the history and scientific motivation of neutrino astronomy, and discus the detection mechanisms and prospects of current and currently proposed experiments. Particular attention will be paid to the ANITA project, which successfully completed a 35 day flight over Antarctica during the Austral summer of 2006/7.
Friday 20 April 2007, 16:00 : Athanasios Dedes (IPPP Durham)
A Natural Nightmare for the LHC
A very simple singlet-extension of the Standard Model which results in naturally light Dirac neutrino masses may also explain the baryon asymmetry of the Universe. This extension requires breaking of a global symmetry associated with neutrinos at low energies which in turn results in a Nambu-Goldstone boson. In this talk I will discuss how the presence of this particle may completely change our view for Higgs boson searches at the LHC.
Friday 30 March 2007, 16:00 : Student practice talks for the IoP conference (Details to follow)
Friday 23 March 2007, 16:00 : Vladimir Tretyak (Institute for Nuclear Research, Kiev)
Searches for rare decays in nuclear and particle physics in INR
Experiments are reviewed which were performed by the INR (Kiev) group to search for double beta decays of several isotopes, rare alpha and beta decays, and exotic charge non-conserving (CNC) processes like decays of electron, CNC beta decays and disappearance of nucleons. In particular, in the experiment with 116-CdWO_4 crystal scintillators in the Solotvina Underground Laboratory (Ukraine) the two-neutrino double beta decay of 116-Cd was observed (T1/2=2.9e19 yr) and the best limit for neutrinoless double beta decay was set (1.7e23 yr). Fourth-fold forbidden beta decay of 113-Cd was investigated (7.7e15 yr). Rare beta decay of 115-In (4e20 yr) was observed at the first time; probably, it has the lowest value of Q_beta of ~0.5 keV. Two rare alpha decays: of 180-W (1.1e18 yr) and 151-Eu (5e18 yr) were observed at the first time. Limits on exotic decays of electron, nucleons, etc. are mostly the most stringent among known world limits. Experiments were performed either independently or in collaboration with other groups.
Friday 16 March 2007, 16:00 : Dr Peter Richardson (Durham)
Herwig++
After a brief recap of the basic physics of Monte Carlo event generators I will describe the development of the new Herwig++ event generator. I will concentrate on recent progress allowing the simulation of hadron-hadron events and physics improvements. I will conclude with our plans for the future.
Sunday 11 March 2007, 16:00 : Morgan Wascko (Imperial)
Results from the MiniBoone neutrino experiment
Results from the MiniBoone neutrino experiment, which is investigating the anomalous LSND neutrino oscillation result.
Friday 09 March 2007, 16:00 : Prof. Toshimitsu Yamazaki (University of Tokyo and Japan Academy)
Quasi-stable exotic atoms, molecules and nuclei composed of antiproton, pion and kaon
In recent years, unexpectedly long-lived exotic atoms, molecules and nuclei with constituents of antiprotons, negative pion and antikaon have been discovered. The following topics will be covered in this talk: 1) Metastable antiprotonic helium composed of an antiproton, a helium nucleus and an electron as a unique interface between the particle and the antiparticle worlds. High-precision laser spectroscopy for a CPT test and quantum tunneling effect in chemical reactions. 2) Pionic nuclei as a probe for chiral symmetry restoration in nuclear medium. 3) Kaonic nuclear systems as anomalously dense bound states mediated by antikaons. Connection with kaon condensed matter and stars.
Wednesday 07 March 2007, 16:00 : Jeff Forshaw
Does there have to be a Higgs boson?
The Large Hadron Collider (LHC) at CERN will soon turn on and it is widely expected to discover the Higgs boson: the particle responsible for the origin of mass and the missing piece in the Standard Model of particle physics. However, there is no guarantee that nature will be so obliging. This talk will explain why we expect to see a Higgs, why it may not be there and why we are so confident that in any event something new should show itself at the LHC.
Friday 02 March 2007, 16:00 : Dr Un-ki Yang (Manchester)
Precise Measurements of Top Mass at CDF ( slides )
Tuesday 27 February 2007, 14:00 : Dr Olga Mena (Universita "La Sapienza") — Pearson LT in Pearson building
Landscape & Strings in vacqua
I will talk about the high energy neutrino detection capabilities of the long-STRING-Cherenkov Icecube neutrino in ACQUA (ice), sorrounded by the beautiful Antarctica LANDSCAPE. We explore the matter-induced oscillation effects on emitted high energy neutrino fluxes, using the energy dependent ratio of electron and tau induced showers to muon tracks, in the upcoming Icecube neutrino telescope. Although the energy of supernova neutrinos lies far below the threshold for track reconstruction in long-STRING detectors a la Icecube, a supernova neutrino burst could be recorded with a signal-to-noise ratio of ~400. The black hole at the center of the galaxy is a powerful lens for supernova neutrinos. In the very special circumstance of a supernova near the extended line of sight from Earth to the galactic center, lensing could dramatically enhance the neutrino flux at Earth and stretch the neutrino pulse. The Icecube neutrino observatory could be sensitive to both effects.
Friday 23 February 2007, 16:00 : Will Venters
A social study of the development and use of Grid for the LHC
Pegasus is a joint research project between the LSE's Information Systems Group and UCL HEP Group to explore the development and use of Grid infrastructure for the LHC. The project is funded by the EPSRC programme: "Usability challenges from e-science" and draws on the LSE group's focus on how information and communication technology influences, and is influenced, by the social context in which it is developed and used as well as by its technical characteristics. Through qualitative research the project is exploring how the variety of people involved in developing and using GridPP infrastructure collaborate. This seminar will introduce the project, its research methods and its expected contribution. We will also outline some of our initial observations from recent research at CERN, and discuss our future research plans.
Friday 16 February 2007, 16:00 : Kostas Fountas (Imperial)
The CMS Trigger
Friday 02 February 2007, 16:00 : Dr Katherine George (QMUL)
It's not just about B's. Latest Results from BABAR
Friday 19 January 2007, 16:00 : Prof. Alan Watson (Leeds)
Progress in the search for the origin of the highest energy cosmic rays
I will explain why there is interest in the highest energy cosmic rays and describe the Pierre Auger Observatory, now the premier instrument available for their study. The latest results on the energy spectrum, mass composition and arrival direction distribution will be discussed and compared with those from other groups. Some speculations about the origin of ultra high energy cosmic rays will be made.
Monday 15 January 2007, 16:00 : Doug Gingrich (University of Alberta and TRIUMF)
Very High-Energy Gamma-Ray Astronomy with STACEE
The study of galactic and extragalactic objects using very high-energy gamma rays is an evolving field of science. New gamma-ray telescopes are allowing us to proceed from an era of simply detecting objects to an era of precision measurements of energy spectra and time profiles. Surprisingly, there is still an unexplored energy region of the electromagnetic spectrum: 10 GeV to 250 GeV. The Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE) is reaching into this unexplored energy region to observe pulsars, supernova remnants, active galactic nuclei and gamma-ray bursts. My talk will focus on the interest of active galactic nuclei in the unexplored energy region and observations of blazars using STACEE.
Friday 12 January 2007, 16:00 : Silvia Capelli (University of Milano)
CUORICINO and CUORE: bolometric experiments for Double Beta Decay research
The positive results obtained in the last few years in neutrino oscillation experiments have stimulated great interest in Neutrinoless Double Beta Decay (DBD0n) research. Cuoricino is a running 40.7 kg TeO2 bolometric experiment dedicated to the search of DBD0n of 130Te atoms.Due to the very low expeted rate for the searched decay, a extremely low level of radioactive background is mandatory, mainly in veiw of the future large mass experiment CUORE, aimed to reach a sensitivity for the neutrino Mayorana mass in the range predicted for an inverse hierarchy scheme for neutrino masses. Last results of CUORICINO and the R&D performed in order toreduce the background level to the wanted sensitivity for CUORE will be presented.
Friday 15 December 2006, 16:00 : Dr Bostjan Golob (Ljubljana)
Topics in charm hadron studies at Belle
The Belle detector at the KEK-B e^+e^- B-factory proves to be an excellent experimental environment for a wide variety of measurements. Besides the study of the CP violation in the system of B mesons, various measurements of the properties of charmed hadrons are performed. A short overview of D^0 meson mixing searches, measurements of charmonium-like resonances and properties of the recently observed charm baryons will be given.
Friday 01 December 2006, 16:00 : Dr Steve Fitzgerald (Culham)
Fusion energy and nanoscale materials science
I will give a brief overview of the nuclear fusion research at UKAEA Culham, and then discuss some recent work on the modelling of materials for use in the ferocious environment of a fusion reactor.
Friday 24 November 2006, 15:30–17:30 : David and Tegid Fest
Speakers include Albrecht Wagner, George Kalmus and David Wark. Refreshments will be provided.
Friday 17 November 2006, 16:00 : Dr Cinzia DaVia (Brunel)
3D silicon sensors
The LHC upgrades will demand the innermost layers of the tracking detectors to withstand fluences of about 10^16 n/cm^2 with improved spatial and time resolutions. Forward physics trackers will also require to reduce as much as possible the insensitive area at the detector'e edge. Active edge 3D sensors, where the p and n type electrodes penetrate through the entire silicon bulk thickness, have proven to match these requirements. Results will be reported of their response to charged particles using radioactive sources and particle beams with LHC-compatible readout electronics. The 3D fast time response studied using 0.13 um readout electronics and their signal efficiency after an exposure to reactor neutrons equivalent to ~1.4x10^16 high energy protons per sq. cm. will be discussed.
Friday 03 November 2006, 16:00 : Dr Muge Unel (Oxford)
Highlights of PhyStat
The PHYSTAT05 conference held last year in Oxford concentrated on current statistical issues in analyzing data in particle physics, astrophysics and cosmology, as a continuation of the popular PHYSTAT series at CERN (2000), Fermilab (2000), Durham (2002) and Stanford (2003). In-depth discussions on topical issues were presented by leading statisticians and researchers in their relevant fields and the latest, state-of-the-art techniques and facilities were introduced and discussed. I will summarize the conference with an attempt to highlight most of the popular and important topics which are becoming more relevant as the start-up of the LHC experiments approaches.
Friday 20 October 2006, 16:00 : Dr Zornitza Daraktchieva (UCL)
Results from the MUNU experiment on neutrino-electron scattering
The MUNU detector was designed to study electron antineutrino - electron elastic scattering at low energy. The central tracking detector is a Time Projection Chamber filled with CF4 gas, surrounded by an anti-Compton detector. In this talk I will present the final analysis of the data recorded at 3 bar and 1 bar pressure. From the 3-bar data a new upper limit on the neutrino magnetic moment of 9x10^-11 at 90 % C.L. is derived. At 1-bar a Cs-137 photopeak is reconstructed by measuring both the energy and direction of the Compton electrons in the TPC.
Wednesday 18 October 2006 : IoP HEPP meeting at UCL
LHC - The first year
The programme and registration details can be found here
Monday 16 October 2006, 16:00 : Are Raklev (CERN, Bergen) — E7
Search for Charged Metastable SUSY Particles at the LHC
Friday 06 October 2006, 16:00 : Dr Mark Lancaster (UCL)
Hot results from the summer conferences
Wednesday 21 June 2006, 16:00 : Joey Huston (MSU)
Lessons from the Tevatron and Standard Model Benchmarks for the LHC
I will discuss results and tools from the Fermilab Tevatron and theapplication of these tools towards the understanding of the Standard Model at the LHC during the early running.
Wednesday 14 June 2006, 17:30 : Prof. Jonathan Butterworth (UCL) — Harry Massey amphitheatre
High energy collisions and fundamental physics; or why the proton is like the tardis
Friday 19 May 2006, 16:00 : Patricia Vahle (UCL)
Preliminary Results Of An Accelerator Based Search For Muon-Neutrino Disappearance By The MINOS Experiment
Friday 28 April 2006, 16:00 : Rick Jesik (Imperial)
Beautifully Strange Physics at DZero
The Tevatron collider experiments have a proven track record for making significant B physics measurements. The hadronic environment provides for the production of B species not readily accessible to the B-factories. One of these states, the Bs meson, is of particular interest. I will report on studies of this meson by the DZero experiment using 1 fb-1 of data. Measurements of the Bs lifetime and lifetime difference, and the first direct upper and lower bounds on the Bs mixing frequency (Delta ms) will be presented. Our measurement of the CP violation parameter in B0 mixing and decay will also be shown.
Friday 07 April 2006, 16:00 : Bino Maiheu (UCL)
Hadronization at HERMES
In the HERMES experiment, situated at the HERA storage ring in Hamburg, 27.5 GeV positrons or electrons are scattered off a fixed gaseous, polarised proton target. Data taking started in 1995 and since then HERMES has produced quite some interesting results about the nucleon's internal spin structure. To disentangle the individual contributions from the different quark flavours to the total spin 1/2 of the proton, the understanding of the production of hadrons through fragmentation is very important. Hadrons in HERMES are identified using a Ring Imaging Cherenkov (RICH) detector which is capable to disentangle with high efficiency the different hadron kinds over a momentum range of 2 to 15 GeV/c. By extracting multiplicity distributions we can study the production of hadrons more closely and moreover compare to previously published results at higher energy. Quite some effort was put into obtaining model independent results using an unfolding method to correct for experimental and QED radiative smearing as well as geometrical detector acceptance and efficiency. This talk will highlight some topics of the HERMES physics program with emphasis on the extraction of multiplicity distributions.
Friday 03 March 2006, 16:00 : Erkcan Ozcan (UCL)
Radiative Penguin Decays at B Factories
For two decades, radiative penguin decays have played a central role in the search for the "new physics" of the day: top-quark mass, extended TC, anomalous trilinear gauge couplings, fouth generation of quarks and so on. Today, at the brink of the LHC experiments, many phenomenology articles on beyond-the-SM physics, and essentially almost all of the SUSY related ones, again refer to constraints from the radiative penguin decays. In the last few years, the excellent luminosity provided by the B Factories have enabled high-precision measurements of these rare decays. This talk aims to give an overview of the experimental techniques used and current limits obtained, focusing particularly on the inclusive decay b->s\gamma, the most commonly encountered member of the radiative-penguin family in the SUSY literature.
Monday 27 February 2006, 16:00 : Carlos Frenk (University of Durham)
Cosmology confronts some of the most fundamental questions in the whole of science. How and when did our universe begin? What is it made of? How did it acquire its current appearance? There has been enormous progress in the past few years towards answering these questions. For example, recent observations have established that our universe contains an unexpected mix of components that include not only ordinary atoms, but also exotic dark matter and a new form of energy called dark energy. Gigantic surveys of galaxies like one recently completed using the Anglo-Australian Telescope in Siding Spring, New South Wales, tell us how the universe is structured. Large supercomputer simulations recreate the evolution of the universe and provide the means to relate processes occuring near the beginning of the Universe with the structures seen today. A coherent picture of cosmic evolution, going back to about a micro-second after the Big Bang, is beginning to emerge. However, fundamental issues, like the nature of the dark energy, remain unresolved. These will require understanding of what went on at even earlier times.
Friday 03 February 2006, 16:00 : Ilija Bizjak (UCL)
Measurement of |Vub| at Belle
The measurement of the Cabibbo-Kobayashi-Maskawa matrix element |Vub| is an important counterpart to the measurement of the CP violation parameter sin(2 phi1) in testing the Kobayashi-Maskawa mechanism of CP violation. Recent measurements managed to obtain |Vub| with a precision that is better than 10% and have shown that the theoretical and experimental methods employed can be used to further improve the measurement. I will present the current status of inclusive and exclusive measurements of |Vub| at Belle.
Tuesday 10 January 2006, 16:00 : Maury Goodman (Argonne National Laboratory) — in A19
Reactor experiments for neutrino studies
In their simplest form, neutrino oscillations are described by three mixing angles and two differences of mass squared. From solar and atmospheric neutrino experiments, four of the five numbers are approximately known. The fifth value, known as theta-13, is only constrained by an upper limit on its value, which was set by the reactor neutrino experiment CHOOZ. Ideas to measure or further limit this parameter are being pursused by "off-axis" accelerator neutrino experiments, and also by improving reactor neutrino disappearance experiments. Six such reactor experiments are being pursued: Double Chooz, Braidwood, Daya Bay, KASKA, RENO and Angra. The Double Chooz experiment, in a region of France along the Belgian border, will probably be the first one in operation. The prospects and status of this experiment will be described.
Wednesday 07 December 2005, 16:00 : James Stirling (IPPP, Durham)
Quantum Chromodynamics and High-Energy Colliders: Fundamental Physics from Non-fundamental Particles
Friday 02 December 2005, 16:00 : John Ellis (CERN)
How to look for supersymmetry?
Supersymmetry is still the most promising option for physics beyond the Standard Model, and there are good chances that it could be seen at the LHC. The experimental signature most frequently considered is missing energy, but an interesting alternative is offered by models with a metastable charged particle decaying into gravitino dark matter.
Friday 25 November 2005, 16:00 : Charles Loomis (LAL Orsay)
Challenges of Analysis for Grid Computing
The LCG/EGEE production service is the largest operating grid infrastructure in the world. It is currently being used by both HEP and non-HEP applications for large scale productions. However, users wanting to run analyses on the grid will bring higher expectations and more difficult requirements with them. The presentation summarizes the current state of the grid and specific areas which will need to be improved to support analysis on grid infrastructures.
Wednesday 23 November 2005, 16:00 : Jens Als-Nielsen (Niels Bohr Institute, Copenhagen)
X-Ray synchrotron radiation - glimpses from the past and of the future
It is well known that synchrotron X-ray science - or maybe more precisely science based on X-ray synchrotron radiation (SR) - has undergone a revolution within the last couple of decades. The brilliance, the figure of merit often used for SR sources, exhibits a growth rate exceeding that of Moore~Rs law for semiconductors, and this development will continue with new large-scale projects such as the free electron X-ray laser. In this talk I shall, however, emphasize another aspect of X-ray science which will have a more direct impact on our daily lives, and that is the proliferation to universities, hospitals and companies of affordable, laser based SR sources equipped with novel X-ray optics. Amongst other benefits, this combination will allow the routine clinical use of phase-contrast imaging rather than conventional absorption-contrast imaging.
Friday 11 November 2005, 16:00 : Silvia Pascoli
Determining neutrino masses
In recent years strong evidence of neutrino oscillations has been found. This implies that neutrinos are massive particles. At present, we have measured the solar and atmospheric mass squared differences but the overall neutrino mass scale and the type of hierarchy are still unknown. This information is crucial for our understanding of the origin of neutrino masses. I will review the present knowledge of neutrino parameters. I will discuss the strategies for determining the absolute values of neutrino masses (absolute mass scale and type of hierarchy). In particular, I will focus on neutrinoless double beta decay and long baseline experiments and, briefly, on direct mass searches. I will finally comment on the complementarity of these different approaches.
Friday 04 November 2005, 16:00 : Oliver Rosten (Southampton)
Computing in SU(N) Yang-Mills without Fixing the Gauge
A manifestly gauge invariant Exact Renormalisation Group is constructed for SU(N) Yang-Mills, in a form convenient for computation at any loop order. A diagrammatic calculus is developed which facilitates a calculation of the two-loop beta function, for the first time without fixing the gauge or specifying the details of the regularisation scheme.
Wednesday 26 October 2005, 16:00 : Steve Jones (UCL)
Did Adam meet Eve? - the view from the genes
Friday 14 October 2005, 16:00 : Alan Barr (UCL)
SUSY or extra dimensions? Can we find out with ATLAS?
Two of the most widely talked-about extensions to the Standard Model are supersymmetry and extra spatial dimensions. If applicable at the TeV-scale, both could be discovered at the LHC. However the signals from some extra dimensional models share many of the properties of supersymmetric events. I discuss some recent work which shows that ATLAS has the potential to distinguish between such models.
Friday 07 October 2005, 16:00 : Hans Drevermann (CERN)
Data oriented projections in the ATLAS event display ATLANTIS
Based on simulated data of the two inner 3D tracking chambers of ATLAS, it will be shown that special projections of the data allow to check and understand complicated ATLAS interactions. These projections are optimized to suit human perception. The detection of the interactions and the cleaning of high luminosity data sets containing many noisy channels will be demonstrated.
Friday 17 June 2005, 16:00 : Georg Weiglein (Durham)
Higgs Physics and Supersymmetry at Present and Future Colliders
Precision tests of the electroweak Standard Model (SM) and its minimal supersymmetric extension (MSSM) are analysed, and indirect constraints on the Higgs-boson mass in the SM and the scale of supersymmetry are discussed. In the MSSM the mass of the lightest Higgs boson is not a free parameter as in the SM, and a firm upper bound can be established. The phenomenology of Higgs physics in the MSSM can differ very significantly from the SM case. The phenomenology of Higgs physics in the MSSM at the Tevatron, the LHC and the ILC is discussed. The possible interplay between the LHC and ILC in analysing the mechanism of electroweak symmetry breaking and the underlying structure of Supersymmetric models is investigated.
Friday 03 June 2005, 16:00 : Edwige Tournefier (Annecy)
Gravitational waves and the VIRGO detector
Gravitational waves are predicted by general relativity but have not yet been experimentally observed. A new generation of interferometric experiments aimed at their detection, such as Virgo, is now being commissioned. After an introduction on gravitationnal waves I will give an overview of the world programme before focusing on the Virgo detector and its commissioning.
Tuesday 31 May 2005, 16:00 : Wit Busza (MIT)
Surprises from RHIC
Friday 20 May 2005, 16:00 : Claire Gwenlan (Oxford)
QCD analyses of HERA data: determination of proton PDFs
Friday 13 May 2005, 16:00 : Lucio Cerrito (Oxford)
Top Quark Physics at the Tevatron
The observation of top quarks is still an exclusive privilege of only two experiments operating at the Tevatron Collider. However, never forming bound states, rapidly decaying and with unmeasurably small rare decays the top may appear like a rather uninteresting specie. Or is it ? I will review the latest measurements in top physics at the Tevatron, with focus on the CDF experiment. It emerges a rich programme that is currently testing the Standard Model at the highest energy ever reached in laboratory.
Friday 06 May 2005, 16:00 : Simon Dean (UCL)
Once in a lifetime: Investigating Z -> tau tau -> e mu + neutrinos at D0
The sum of signed impact parameters in a two-track system is proposed as a tool for discriminating channels with lifetime in the final state. Z ->tau tau -> e mu is shown to be a realistic process for demonstrating this effect and a signal sample is selected from the D0 RunII dataset.
Friday 22 April 2005, 16:00 : Emily Nurse (UCL)
Electroweak physics at the Tevatron
Run II of the Tevatron is now well underway and results are starting to come out thick and fast. I will present the recent electroweak results from CDF and Dzero, going through one analysis in detail to give some insight into how precision measurements are done at hadron colliders.
Friday 15 April 2005, 16:00 : Philip Burrows (QMUL)
The Linear Collider: microscope on physics at the TeV scale
I will discuss the power of the Linear Collider in terms of precision measurements and discovery potential for new physics at the TeV scale. I will discuss the joint potential of the Linear Collider and the LHC. I will review briefly the status of the International Linear Collider project and the UK contributions.
Monday 04 April 2005, 16:00 : Chris Quigg (FermiLab)
The Double Simplex; envisioning particles and interactions
Tuesday 01 March 2005, 16:00 : Glen Marshall
Recent results from the TWIST experiment
The TRIUMF Weak Interaction Symmetry Test (TWIST) has recently completed its first physics analyses. The results represent a significant increase in precision for two of the four Michel parameters describing the energy and angle distributions of positrons from polarized positive muon decay. This is the first step toward our eventual goal of improving upon previous determinations for three of the parameters by at least an order of magnitude, as a test the Standard Model in the purely leptonic decay interaction. TWIST uses a polarized muon beam stopping at the center of a spectrometer consisting of a low mass, high precision array of planar drift chambers in a two tesla solenoidal field. The talk will focus on the operation of the device and the methods which are used to extract the decay parameters in a reliable way. Systematic uncertainties are especially important, as they will limit the final results.
Friday 25 February 2005, 16:00 : Yoshi Uchida
The KamLAND experiment
Friday 18 February 2005, 16:00 : Hans Kraus (Oxford)
The CRESST Dark Matter Search
Dark Matter is one of the key issues in the field of particle-astrophysics. We understand many phenomena in great detail, but yet, we do not know what the bulk of the Universe is made of. Dark Matter and Dark Energy appear to dominate normal matter by a large factor. I summarize the astrophysics evidence we have for the existence of Dark Matter; explain how particle physics provides a good candidate for Dark Matter particles and talk about the experimental challenges one has to master in attempting to detect WIMP Dark Matter. I review the general requirements every Dark Matter experiment has to satisfy and summarize some of the physics carried out within underground laboratories. The focus of the presentation is on novel cryogenic detectors that CRESST uses to search for WIMPs. These detectors are scintillating low-temperature calorimeters operating in the milli-kelvin temperature range and providing event type recognition to distinguish between a potential signal and backgrounds. The presentation concludes with a summary on how far the world-wide search for Dark Matter has progressed.
Friday 04 February 2005, 16:00 : Steward Boogert (UCL)
Physics of/at a linear e+ e- collider
Friday 28 January 2005, 16:00 : Philip Harris (Sussex)
The neutron EDM experiment
Friday 03 December 2004, 16:00 : Jon Butterworth (UCL)
Implications of HERA measurements for the LHC
Friday 26 November 2004, 16:00 : Ofer Lahav (UCL)
Neutrinos and Cosmology
Friday 19 November 2004, 16:00 : Bill Murray (RAL)
Results from MuScat
Ionization cooling is foreseen for the muon collider and neutrino factory, and is to be tested in action in the Mice experiment at RAL. The cooling effect of dedx losses is offset by multiple scattering, which is difficult to calculate and has not been measured at the appropriate muon momenta. The MuScat experiment fixes that deficiency. The results of the MuScat experiment are presented, along with details of the performance.
Friday 15 October 2004, 16:00 : David Miller (UCL)
Physics at the Future Linear Collider
Friday 08 October 2004, 16:00 : Nikos Konstantinidis (UCL)
The Higgs Search: Past, present and future
Almost four years after the last data from LEP, (preparation for) the search for the wholy grail of particle physics continues. Starting with a quick review of LEP's legendary contribution to the Higgs searches, I will discuss (a) the more recent progress in precision electroweak measurements and the information they provide for the SM Higgs; and (b) the expectations in the coming years from the Tevatron and the LHC.
Friday 04 June 2004, 16:00 : David Evans (Birmingham)
"ALICE and the Physics of Quark Matter"
QCD predicts that hadronic matter, under extreme conditions of temperature and pressure, will undergo a phase transition into a deconfined soup of quarks and gluons called a Quark-Gluon Plasma (QGP) The University would have been in such a state some 10^-6 to 10^-5 seconds after the Big Bang. Colliding lead ions at LHC energies will produce such a QGP and the ALICE experiment will study QCD in detail under these extreme conditions.
"The MINOS experiment: Just 7 months away from the first beam event!"
MINOS is a first generation long baseline neutrino oscillation experiment designed to study the oscillation parameter space hinted at by the results of atmospheric neutrino experiments. In order to achieve this goal we plan to use two magnetized iron calorimeters with solid scintillator as active material: a 5.4 kt one in Soudan mine in Minnesota, 735 km away from an intense, low energy neutrino beam source at Fermilab, and a smaller 1 kt one just 1 km downstream of the neutrino source. Neutrino oscillation measurements will be carried on by comparing the neutrino event rates and spectra at the two detectors. In this talk, I will briefly summarize the current experimental evidence for neutrino oscillations and describe the current status of the MINOS experiment which is now very close to the end of its construction phase. I will also try to illustrate what challenges we are going to face, after the beam starts up, using highlights from my recent work in event reconstruction, event classification and neutrino interaction model tuning/validation.
Friday 21 May 2004, 16:00 : Simon Dean (Manchester)
"Once in a Lifetime: Investigating Z -> tau tau -> e mu (+neutrinos) at D0"
The sum of signed impact parameters in a two-track system is proposed as a tool for discriminating channels with lifetime in the final state. Z -> tau tau -> e mu is shown to be a realistic process for studying this effect and the current best method of signal selection is presented
Friday 27 February 2004, 16:00 : Todd Huffman, Oxford
"Why B Physics is Interesting, (with a CDF focus)"
We all are convinced that the Standard Model of particle interactions is not the last word in the theory of the fundamental forces of nature. There are many searches proposed and on-going to try to add data to the speculations of what lies beyond the Standard Model. I show that the study of B mesons is actually much more fascinating than the simple measurements of lifetimes and branching ratios. These particles probe some very basic questions about the nature of matter in the Universe and are already beginning to constrain the physics beyond the standard model. Using recent results from the CDF experiment at the Fermilab Tevatron, I show how the search for physics at the TeV scale can take place at the GeV scale and why CDF is, at the moment, unique in it's capability for that search.
Friday 06 February 2004, 16:00 : Frank Close, Oxford
"Pentaquarks and tetraquarks: the end of the constituent quark model?"
Friday 16 January 2004, 16:00 : Giacomo Polesello, Pavia/CERN
"Discovery physics with the ATLAS detector at the LHC"
We give an overview of the potential of the ATLAS detector for the discovery of new particles at the LHC, based on the expected detector performance and on the proposed analysis strategies. A special emphasis is given to the signatures predicted by supersymmetric models.
Friday 21 November 2003, 16:00 : Peter Richardson (CERN/Durham) *TBC*
"Parton Shower Monte Carlos: The State of the Art"
Friday 07 November 2003, 16:00 : Ben Allanach (LAPTH)
"R-parity Violating SUSY" (including SUSY primer)
Friday 24 October 2003, 16:00 : Rob Edgecock (Rutherford Appleton Laboratory)
"Neutrino Factory: Physics and Prospects"
Friday 10 October 2003, 16:00 : Subir Sarkar, Oxford University
The world according to WMAP: Does cosmology now have a Standard Model?
Recent precision observations of anisotropies in the relic 2.7 K radiation by NASA's Wilkinson Microwave Anisotropy Probe are supposed to have determined the values of cosmological parameters to a few per cent, and in particular, to have established the presence of a cosmological constant. We will discuss the assumptions that underlie these results and draw attention to alternative cosmological models which also appear to be consistent with the data.
Friday 06 June 2003, 16:00 : Dr. Ruben Saakyan (UCL)
"Neutrinoless Double Beta Decay & NEMO"
Friday 30 May 2003, 16:00 : Joao Seco (ICR) — ***POSTPONED UNTIL 20/6/2003***
"Simulations for Intensity Modulated Radiation Treatment (IMRT)"
Friday 23 May 2003, 16:00 : Dr. Nigel Smith (RAL) — ***POSTPONED UNTIL 13/6/2003***
"The search for WIMPs in the UK"
Friday 14 March 2003, 16:00 : Professor Tim Sumner (Imperial College London)
"The observation of gravitational waves from space using LISA"
Friday 07 March 2003, 16:00 : Ankush Mitra (Oxford University)
"LICAS: The Linear Collider Alignment and Survey Project"
Friday 14 February 2003, 16:00 : Silvia Miglioranzi (University College London)
"Algorithms Designed to Measure b-quark Production at the ZEUS Experiment at HERA"
Friday 31 January 2003, 16:00 : Chris Damerell (Rutherford Appleton Laboratory)
"Feedback from Vertex 2002 Conference"
Monday 20 January 2003, 16:00 : Professor Karl van Bibber (Lawrence Livermore National Laboratory)
"A Large-Scale Search for Dark-Matter Axions"
Friday 29 November 2002, 16:00 : Stewart Boogert (University College London) — POSTPONED UNTIL 13/12/2002
"Introduction to Linear Collider Physics"
Friday 22 November 2002, 16:00 : Dr. Jon Butterworth (University College London)
"HERA physics as a preparation for LHC"
Friday 08 November 2002, 16:00 : Professor Tom Ferbel (University of Rochester & Imperial College)
"New Measurement of the Mass of the Top Quark"
Friday 11 October 2002, 16:00 : Dr. Lee Thompson (University of Sheffield)
"ANTARES - a neutrino telescope in the Mediterranean" ... or ... "Diving for WIMPs - neutrino astrophysics in the Mediterranean"
Thursday 05 September 2002, 16:00 : Professor Paul Singer (Technion - Israel Institute of Technology)
"Rare Charm Decays and the Search for New Physics"
Friday 14 June 2002, 16:00 : Greg Landsberg (Brown University)
"Black Hole Production at Future Colliders and Cosmic Rays"
Friday 10 May 2002, 16:00 : Steve Biller (Oxford)
"SNO: Latest Results"
Friday 22 March 2002, 16:00 : Dr.Cristina Lazzeroni (Cambridge)
"CP Violation Results from NA48"
Friday 15 March 2002, 16:00 : Richard Hall-Wilton (UCL)
"Charm Production in DIS at HERA"
Friday 08 March 2002, 16:00 : Stefano Moretti (CERN-TH and Durham-IPPP)
"Improving the discovery potential of charged Higgs bosons at Tevatron and the LHC"
Friday 01 February 2002, 16:00 : Ed McKigney (Imperial College London)
"MuScat and MICE: Steps toward a Neutrino Factory"
Friday 31 August 2001, 16:00 : Ernest Ma (University of Califormia, Riverside)
Verifiable Origin of Neutrino Masses at the TeV Scale and the Muon Anomalous Magnetic Moment.
Tuesday 08 May 2001, 15:00 : Uri Karhson (Weizmann/UCL)
Heavy Quark Production at HERA
Spreadbury Lecture
Friday 08 December 2000, 16:00 : Ivan Reid (UCL)
Muons in Sulphur and Selenium
Friday 24 November 2000, 16:00 : Vitaly Kudryavtsev (Sheffield)
Dark Matter searches and neutrino astrophysics.
Friday 17 November 2000, 16:00 : David Miller (UCL) (In A19)
Has LEP found the Higgs?
Friday 20 October 2000, 16:00 : Angela Wyatt (Manchester).
Rapidity Gaps Between Jets at HERA
Friday 13 October 2000, 16:00 : Murray Moinester (Tel Aviv).
Colour Transparency and Exclusive Meson Production at COMPASS
Wednesday 23 August 2000, 14:00 : Akos Csilling (UCL)
Photon Structure and More
Don't miss your chance to meet our other OPAL postdoc!
Friday 07 April 2000, 16:00 : Dave Waters (Oxford) (TBC)
Measuring the W at the Tevatron
Monday 20 March 2000, 17:15 : Alan Watson, University of Leeds — The Elizabeth Spreadbury Lecture
The Quest for the Origin of the Highest Energy Cosmic Rays
Cruciform Lecture Theatre No.1. Tea, squash biscuits etc in South Cloisters at 4.45
Friday 17 March 2000, 16:00 : Mark Hindmarsh (Sussex)
The Dynamics of Phase Transitions in the Early Universe
Friday 03 March 2000, 16:00 : Pedro Teixeira-Dias (Glasgow)
Higgs searches at LEP
Friday 11 February 2000, 16:00 : Paul Newman (Birmingham) (TBC)
Diffractive DIS at HERA.
Friday 21 January 2000, 16:00 : Dave Wark (Oxford/Sussex/RAL)
Status of the Sudbury Neutrino Observatory
Friday 10 December 1999, 16:00 : Brian Cox, Manchester
Hard Colour-Singlet Exchange at HERA and Tevatron
Note; Rearranged Date!
Friday 26 November 1999, 16:00 : Nigel Glover, Durham
Prompt Photon production at LEP and Tevatron
Thursday 25 November 1999, 16:00 : Jeff Forshaw, Manchester
Is there a Higgs Boson?
2pm. Note unusual day!
Friday 12 November 1999, 16:00 : Roger Jones, Lancaster
Quark and Gluon Jets and Jet Shapes
Friday 05 November 1999, 16:00 : Antonio Soares, UCL
Medical Application of a Gamma Camera.
Friday 21 May 1999, 16:00 : Prof. Douglas Ross, Southampton.
Electric Dipole Moments of the Electron and Neutron as a Window on New Physics
Tuesday 30 March 1999, 14:00 : Dr. Burton Richter, SLAC — Darwin Theatre
The Future of High Energy Physics; a Personal View
Open lecture for the UK particle physics community - Sponsored by PPARC.
Wednesday 24 March 1999, 16:00 : Prof. Stanislaw Jadach, CERN
How precise are MC calculations for WW final states?
Friday 19 March 1999, 16:00 : Dr. Jason McFall, Bristol
Status of Babar
Friday 26 February 1999, 16:00 : Dr. Phil Harris, Sussex
The Search for the Electric Dipole Moment of the Neutron
The discovery of an electric dipole moment of the neutron would have dramatic consequences for particle physics beyond the Standard Model; it may even help us to understand why the universe contains more matter than antimatter. The latest EDM experiment, now running in Grenoble, applies magnetic resonance techniques to stored ultracold neutrons to obtain an extraordinarily high sensitivity, while a newly-developed magnetometer based upon spin-polarised atomic mercury allows unprecedentedly low systematic uncertainties. This talk will cover the motivation behind and the technique of the measurement; the latest results will be presented, together with some ideas for the possible future evolution of the experiment.
Friday 12 February 1999, 16:00 : Dr. Neville Harnew, Oxford
LHCB - Detector and Physics Challenge
Friday 05 February 1999, 16:00 : Matthew Wing, McGill University/UCL
Semileptonic Decays in ZEUS and Open Beauty at HERA
Friday 29 January 1999, 16:00 : Dr. Witek Krasny, Paris/Oxford
Nuclear Options at HERA
The HERA ep collider turned out, rather unexpectedly, to be the best machine for studies of strong interaction physics in the most intriguing domain of transition between soft and hard interactions of real and virtual photons at large and small distances. This unique microscope provide means to look at quarks and gluons in a very broad range of "precisely tunable space-time volumes" and, if nuclear option is realized, using "tunable magnification" of the color forces. What are the perspectives of nuclear program at HERA? Is a curiosity driven research program for eA scattering feasible? I shall try to answer these and other questions in my seminar.
Friday 15 January 1999, 16:00 : Dr. John Hassard, ICSTM.
What Can Particle Physics do for DNA Sequencing?
The HEP community in the UK has suffered a large drop in real funding in the last ten years. Paradoxically, our research prospects in HEP may never have been brighter. However, there has been a remarkable drop in applicants to physics - at all levels - partly as a consequence of a perceived lack of future for the subject as a discipline in its own right. Few would dispute the long-term importance of pure research at a cultural and societal level; many consider an emphasis on short-term spin-offs to be irrelevant or even counterproductive. I will illustrate the short-term utility of physics in general and HEP in particular by discussing our biotechnology and biomedicine programmes and making the connection to pure research. My central thesis is that if we take care of the short term, the long term takes care of itself. The key is in making the connection.
Monday 14 December 1998, 16:00 : Carsten Hensel (Hamburg University)
Beam induced Background at a Detector at a 500 GeV Electron-Positron Linear Collider
Wednesday 25 November 1998, 16:00 : Aude Gehrmann-De Ridder (Karlsruhe University)
Photon structure at LEP and Linear Collider
Wednesday 07 October 1998, 13:00 :
IoP Half day meeting on future physics.
Wednesday 30 September 1998, 16:00 : Jason Ward (Glasgow)
W Physics Results from ALEPH
Tuesday 15 September 1998, 16:00 : Thomas Hadig (Aachen)
Measuring the Gluon Density in the Proton at H1
Monday 20 July 1998, 16:00 : Greg Landsberg (Brown)
Search for Exotics at D0
Wednesday 03 June 1998, 16:00 : David Miller (RAL) (no relation)
Heavy Quarks from Gluon Splitting at LEP
Wednesday 20 May 1998, 16:00 : Marco Stratmann (Durham)
Physics at a Polarised HERA
Wednesday 13 May 1998, 16:00 : Paul Dervan (UCL)
Silicon Vertex Detectors
Wednesday 06 May 1998, 16:00 : Ben Allanach (RAL)
Increased Predictivity in Supersymmetry
Wednesday 29 April 1998, 16:00 : Chris Llewellyn-Smith (CERN)
Colloquium - The LHC Program
Wednesday 22 April 1998, 16:00 : David Miller (UCL)
Lepton collider prospects
Friday 03 April 1998, 16:00 : Rajendran Raja (FNAL)
Top quark Physics results from the Tevatron
Wednesday 18 March 1998, 16:00 : Martin McDermott (Manchester)
Diffractive Heavy Quarkonium production
Monday 16 March 1998, 16:00 : Malcolm Longair (Cambridge)
The Enigma of the Cosmological Constant, The Elizabeth Spreadbury Lecture
Wednesday 11 March 1998, 16:00 :
IoP HEPP Group Half Day at UCL on Neutrino Physics
Wednesday 04 March 1998, 16:00 : Michael Kramer (RAL)
Current status of Quarkonia Physics
Wednesday 18 February 1998, 16:00 : Paul Dervan (UCL)
The SLD vertex detector upgrade.
Monday 16 February 1998, 16:00 : Martin Rees (at STS)
Our Universe and Others
Wednesday 28 January 1998, 16:00 : Simon George (RHBNC)
The ATLAS SCT Trigger
Wednesday 21 January 1998, 16:00 : Prof. H. M. Chan (RAL)
Are post-GZK air showers due to strongly interacting neutrinos?
Joint with Astronomy group, in the Massey Theatre
Wednesday 14 January 1998, 16:00 : Prof. D. H. Davis
Wednesday 10 December 1997, 16:00 : Paul Dauncey (RAL)
Measuring the CKM angle gamma at BaBar
Tuesday 09 December 1997, 16:00 : Neville Harnew (Oxford)
Observation of ring patterns with a pixellated single-photon detector
The proposed LHC-B experiment at CERN will use a novel photon detection device called the hybrid Photon Detector (HPD) to recognise rings of Cherenkov light produced by high energy particles in matter. Electrons from a photocathode surface in the HPD are accelerated and detected in a silicon pad detctor. NB. This is not a Bloomsbury seminar, but one of a series organised but CAIS/Sira UCL postgrad centre. It will be at 3pm in the M.Res seminar room, 66-72 Gower St.
Wednesday 03 December 1997, 16:00 : Phil Burrows (SLAC)
Testing the Standard Model using Polarised Electrons, a Micro-vertex Detector and Particle Identification: the SLD Experiment at SLAC
Wednesday 19 November 1997, 16:00 : Prof. Ian Percival, QMW
Quantum technology and quantum foundations (Joint with Molecular and Atomic Physics, will be held in A1)
Wednesday 12 November 1997, 16:00 : Prof. Basil Hiley, Birkbeck
Alternative Quantum Mechanical Interpretations: Do they really help? (Joint with Molecular and Atomic Physics, will be held in A1)
Wednesday 05 November 1997, 16:00 : Stan Wojcicki (Stanford U.)
Brookhaven Rare Kaon Decay Experiment
Wednesday 22 October 1997, 16:00 : Herbi Dreiner (RAL)
R-parity violation
Wednesday 15 October 1997, 16:00 : Vakhtang Kartvelishvili (Manchester)
SVD Approach to Data Unfolding
Distributions measured in high energy physics experiments are usually distorted and/or transformed by various detector effects. A regularization method for unfolding these distributions is re-formulated in terms of the Singular Value Decomposition (SVD) of the response matrix. A relatively simple, yet quite efficient unfolding procedure is explained in detail. The concise linear algorithm results in a straightforward implementation with full error propagation, including the complete covariance matrix and its inverse. Several improvements upon widely used procedures are proposed, and recommendations are given how to simplify the task by the proper choice of the matrix. Ways of determining the optimal value of the regularization parameter are suggested and discussed, and several examples illustrating the use of the method are presented.
Thursday 02 October 1997, 16:00 : Mike Seymour (RAL) THURSDAY!!
The Internal Structure of QCD Jets
Wednesday 24 September 1997, 16:00 : Jan Lauber (UCL)
Summary of Jerusalem conference (EPS97)
Wednesday 04 June 1997, 16:00 : Mark Hayes (Bristol University)
Hard diffractive scattering in photo-production at HERA
Wednesday 28 May 1997, 16:00 : Tony Rooke (University College London)
Summary of the Photon 97 conference
Wednesday 21 May 1997, 16:00 : Peter F. Smith (RAL)
Measurement of the neutrino mass from supernova
Wednesday 14 May 1997, 16:00 : Allan Skillman (University College London)
Determination of the Trilinear Gauge Couplings in WW events at LEP II at a center of mass energy of 172 GeV
Wednesday 26 March 1997, 16:00 : John Thompson (RAL)
ALEPH LEP 2 Results
Wednesday 19 March 1997, 16:00 : Claude Bourrely (Birkbeck)(UCL)
Theory of Deep inelastic scattering
Wednesday 12 March 1997, 16:00 : Lynne Orr (University of Rochester)
Gluon Radiation in Top Quark Production and Decay
The strong force causes quarks to radiate gluons with a large probability. Because these gluons appear in experiments as jets of hadrons which are typically indistinguishable from jets due to quarks, making sense of these experiments requires understanding gluon radiation. This is particularly important for top physics because uncertainties in future top measurements will be dominated by systematic effects associated with gluon radiation. Gluons can be radiated during both top production and decay; both processes must be taken into account. In this seminar I discuss gluon radiation in top quark production and decay at present and future colliders (both hadron and electron-positron colliders), and some implications of the results.
Tuesday 25 February 1997, 16:00 : Jon Butterworth (UCL)
HERA and the Leptoquark — Is the Standard Model dead ?
Results presented last week by the H1 and ZEUS experiments at DESY, Hamburg (and submitted to journals this week) have caused something of a stir. Interest focusses on comparisons between positron-proton scattering data and the predictions of the 'Standard Model' of particle physics. Anomalies are seen in the data when the four-momentum transfer is high (Q^2 > 15000 GeV^2) and the momentum fraction of the struck quark in the proton is around x = 0.5. After a simple outline of the Standard Model and of the experiments is given, the significance and implications of these anomalies will be examined.
Wednesday 19 February 1997, 16:00 : Boris Ruskov (Oxford)
From QED to QCD — from similarities to differences
The purpose of this lecture is to give an elementary introduction to basic ideas of the non-perturbative QCD. The universal gauge-theory description of QCD and QED will be given. Considering both models in parallel, we will explain the reason for similarities between them in the weak-coupling regime and crucial differences in the strong-coupling (non-perturbative) region. We will discuss some ideas on how these properties can be observed.
Wednesday 12 February 1997, 16:00 : No Seminar
OPAL UK Meeting
Wednesday 05 February 1997, 16:00 : Roger Barlow (Manchester)
Why physics lectures
Wednesday 29 January 1997, 16:00 : Jon Flynn (Southampton)
Some recent results from Lattice QCD
I review some recent lattice QCD results for quantities of phenomenological interest. After a brief consumer guide I will survey recent results for some or all of: B-meson decay constant and mixing parameter, K-meson mixing parameter, strong coupling constant, light quark masses and the lightest scalar glueball.
Wednesday 22 January 1997, 16:00 : Hugh Gallagher (Oxford)
New Results on Atmospheric Neutrino Oscillations from Soudan 2
Wednesday 27 November 1996, 16:00 : Dr C. Damerell (RAL)
The international e+e- linear collider programme: physics prospects and recent design developements
The future e+e- linear collider was one of the options studied in detail at the recent Snowmass Workshop 'New Directions for High Energy Physics'. This talk (which includes material presented in the closing parallel session) summarises recent progress on the accelerator and detector design, and contrasts the physics prospects with other options discussed during the workshop.
Wednesday 20 November 1996, 16:00 : Prof. I Percival (QMW)
Quantum mechanics, cat-fleas and gravit
By analogy with the use of Brownian motion to detect flucuations on the atomic scale, it is shown that modern matter interferometry experiments might detect fluctuations of space-time on a Planck scale, despite the small values of the Planck length and time.
Wednesday 13 November 1996, 16:00 : Dr B. Webber (Cavendish Laboratory)
"Hadronisation" corrections to QCD observables
Predictions in perturbative QCD refer to final states consisting of quarks and gluons, rather than the hadrons actually observed in experiments. It has therefore been customary to apply `hadronization'' corrections, based on Monte Carlo models, to the theory before comparing with experiment. Although this procedure seems to fit the data quite well, the corrections applied have not been well justified theoretically. Empirically, they behave like inverse powers of a large momentum scale of the process, for example the centre-of-mass energy in e+e- annihilation or the quark mass in heavy quark processes. Recent theoretical work has suggested that such power-suppressed corrections can arise from divergences of the perturbation series at high orders, which are called `renormalons''. This talk will explore the insights into hadronization corrections that can be obtained from the renormalon approach.
Wednesday 06 November 1996, 16:00 : Dr C. Sutton (Oxford)
Wednesday 30 October 1996, 16:00 : Dr O Teryaev (Dubna and Birkbeck)
QCD single spin asymmetries in the RHIC and HERA-N programmes
Wednesday 23 October 1996, 16:00 : Dr J. McGovern (Manchester)
Chiral symmetry in nuclei
Partial restoration in nuclear matter of the chiral symmetry of QCD is discussed together with some of its possible signals. Estimates of corrections to the leading, linear dependence of the quark condensate are found to be small, implying a significant reduction of that condensate in matter. The importance of the pion cloud for the scalar quark density of a single nucleon indicates a close connection between chiral symmetry restoration and the attractive two-pion exchange interaction between nucleons. This force is sufficiently long-ranged that nucleons in nuclear matter will feel a significant degree of symmetry restoration despite the strong correlations between them. Expected consequences of this include reductions in hadron masses and decay constants. Various signals of these effects are discussed, in particular the enhancement of the axial charge of a nucleon in matter.
Wednesday 16 October 1996, 16:00 : Prof E. Leader (Birkbeck)
Report on the 12th Int. Symp. on High Energy Spin Physics, Amsterdam
Wednesday 09 October 1996, 16:00 : Prof D. Miller (UCL) and Dr J. Lauber (UCL)
Report on the INternational Conference on High Energy Physics, Warsaw
Wednesday 05 June 1996, 16:00 : Dr Dhiman Chakraborty (State University of New York at Stony Brook)
The current status of top physics at the Tevatron
The Tevatron accelerator at Fermilab has recently concluded a run in the collider mode before switching to the fixed-target mode of operation for the next 3 years. Each of the two collider experiments, CDF and D0, has collected over 100 pb**-1 worth of data which is about twice what the last batch of publications were based on. The first results from this full data sample have just started to come out. In the arena of top physics, both experiments have revised their calculations of production cross-section and the mass of top quarks with reduced statistical and systematic errors. These as well as some new results from more difficult final states and a breif outline of future prospects will be presented.
Wednesday 29 May 1996, 16:00 : Dr Mike Charlton (University College)
AntiHydrogen
Antihydrogen was recently observed at high energies at CERN. We briefly review this experiment, but suggest that the primary motivations for studying this object, which include CPT and WEP tests, can only be addressed by its controlled production at very low energies. Techniques which will allow this are described, including the capture and cooling of antiprotons to temperatures below 20K and the accumulation of dense positron plasmas.
Wednesday 22 May 1996, 16:00 : Nick Allen (Brunel University)
Electroweak Measurements at SLD
Wednesday 15 May 1996, 16:00 :Dr George Lafferty (Manchester)
Inclusive particle production in hadronic Z decay
An overview will be given of the physics results from the study of inclusive particle production in multihadronic Z decays at LEP, the primary aim of which is to reach an understanding of the non-perturbative hadronisation process in QCD. Topics covered will include: aspects of quark and gluon fragmentation; local parton-hadron duality; QCD-based models of parton hadronisation; two-particle correlations; and spin effects in fragmetation.
Wednesday 08 May 1996, 16:00 : Barry Macevoy (Imperial College)
Defect kinetics in silicon detector material
A numerical kinetics model has been used to investigate the evolution of complex defects in high resistivity silicon detector material during fast neutron irradiation to levels expected at the CERN LHC. The complex V_2O is identified as a candidate for a deep-level acceptor state which gives rise to experimentally observed changes in the effective doping concentration. The importance of the initial oxygen impurity concentration in determining the radiation tolerance of the detectors is demonstrated. The characteristics of devices heavily irradiated with Colbalt 60 photons are modelled satisfactorily by using a semiconductor simulation in conjunction with the kinetics model. It is postulated that inter-defect transitions between divacancy states in the terminal damage clusters are responsible for apparent discrepancies in the modelling of data from neutron-irradiated devices. This mechanism (if correct) may have important consequences for te prospects of "defect-engineering" a radiation hard device.
Tuesday 12 December 1995, 16:00 : Dr Greg Heath (Bristol)
Hunting the Higgs Boson - In a Haystack
Particle physics experiments produce enormous raw data rates. Millions of events per second must be filtered on-line to find a few dozen of interest. Filtering processors will be described and the strategies used on them, for current experiments and for the future Large Hadron Collider at CERN.
Wednesday 29 November 1995, 16:00 : Dr Jeff Forshaw (Manchester)
Report on the Beijing Lepton-Photon Conference
Wednesday 22 November 1995, 16:00 : Dr Roger Phillips (DRAL)
Speculations about the KARMEN Anomaly
The KARMEN experiment, studying neutrinos from the pi+ decay chain, has found an anomaly in the time-dependence that suggests a new unstable particle x with m(x)=33.9 MeV, produced via pi -> mu+x . Could this be a new neutrino or what? Pros, cons, ifs and buts will be reviewed.
Wednesday 15 November 1995, 16:00 : Dr Subir Sarkar (Oxford)
"No-crisis" for Big Bang Nucleosynthesis
The synthesis of the light elements at the end of the `first three minutes' provides the most detailed probe of physical conditions in the early universe and has proved very useful in constraining new physics, viz. the existence of new particles and forces. Recently it was claimed that the observationally inferred abundances of deuterium, helium and lithium are in fact inconsistent with even the Standard Model, thus undermining this programme. I will review the situation and argue that there is no such crisis.
Wednesday 08 November 1995, 16:00 : Prof John Edgington (QMW)
Neutrino Oscillations - Weighing the Evidence
Do neutrinos possess mass? The Standard Model says no, without giving a reason. So far there is no direct and sustainable experimental evidence for mass, despite occasional hints to the contrary. To some, non-zero mass is more "natural" and cosmologists seeking missing mass have supported this theological view with enthusiasm. Massive neutrinos will have wave functions which are combinations of separate flavour eigenstates; thus production of neutrinos of one flavour and their detection as another flavour is evidence for non-zero mass. Experimental searches for neutrino oscillations will be reviewed, concentrating on the KARMEN experiment at Rutherford Appleton Laboratory and comparing it with the LSND experiment at Los Alamos National Laboratory. Some problems of drawing conclusions will be discussed, including the relevance of a Bayesian approach to data handling.
Wednesday 01 November 1995, 16:00 : Dr. D. B. Stamenov (INR, Bulgarian Academy of Sciences)
Constraints on "Fixed Point" QCD from the CCFR data on DIS
Wednesday 25 October 1995, 16:00 : Dr Robert Thorne (DRAL)
Report on the European Physical Society HEP Meeting, Brussels
I will attempt to summarize some of the most important and interesting developments reported at the Meeting. This is clearly a personal choice, but the topics to be discussed include: developments in perturbative QCD, particularly small x physics, high pT jets, and higher twist corrections; particle-astrophysics and the solar neutrino problem; precision measurements (particularly R_b and R_c) and calculations in the standard model, and implications for new physics; and duality in supersymmetric theories.
Wednesday 11 October 1995, 16:00 : Prof John Dainton (Liverpool)
The Partonic Structure of Diffraction
Recent new measurements of the deep-inelastic scattering of electrons on protons have revealed a new layer in our understanding of the structure of the proton. At the highest possible interaction energies, which are available only at the HERA ep collider in DESY in Hamburg, new measurements have revealed a contribution which is due to the way protons interact with other hadrons. The first measurements of the short distance structure of this contribution reveal that it is partonic, and furthermore that it may be understood in terms of Quantum Chromodynamics (QCD). For the first time, deep-inelastic lepton scattering experiments are able to probe one of the oldest conundrums in physics, namely the way nucleons interact with other nucleons (often generically referred to as diffraction), and to provide measurements which elucidate an understanding in terms of QCD. The new results which lead to these conclusions are presented and discussed.
Wednesday 14 June 1995, 16:00 : Jason Ward
Measurement of the Photon Structure Function at OPAL
An introduction to how photon structure is studied at an electron-positron collider is given, with motivation for such a study. The new contributions that LEP can make to the measurement of the photon structure function are discussed and the OPAL measurements are presented. These measurements are compared with theory. The photon structure function measurements that we expect from future LEP-II running are also discussed.
Wednesday 31 May 1995, 16:00 : Prof. Roger Cashmore (Oxford)
The Top Quark - Discovery and Implications
The evidence for the recent discovery of the top quark at Fermilab will be reviewed and the implications for other areas of particle physics discussed. Further progress in top quark physics will be made with upgrades to the Tevatron and the construction of the LHC while a high energy linear e+/e- collider offers intriguing possibilities. These aspects will be developed and put in perspective.
Wednesday 24 May 1995, 16:00 : Dr Alexei Yung (St. Petersburg / Swansea)
Why Topological Models are Relevant to Physics
An elementary introduction to what is topological field theory is given. The problem how topological models could serve physics is discussed. In particular, the idea that the physical theory could correspond to the broken phase of the topological theory is explained. The example of such breakdown phenomenon in the two dimensional topological sigma model is presented.
Wednesday 17 May 1995, 16:00 : Dr Paul Harrison (Queen Mary)
A Solution to the Solar and Atmospheric Neutrino Problem
The physics of solar and atmospheric neutrinos is reviewed, and the evidence of anomolous detection rates is summarised. Vacuum neutrino oscillations are discussed and an especially simple and elegant form for the lepton mixing matrix is proposed, based on a cyclic permutation symmetry among the generations. Predictions are compared with experiment, and an excellent fit is obtained, the data requiring a hierarchical spectrum of mass-squared differences for the neutrinos. Implications for future experiments are discussed.
Wednesday 10 May 1995, 16:00 : Dr Brian Foster (Bristol)
The BaBar Experiment
I will motivate the study of CP violation in the B system by the need to understand the baryone asymmetry of the Universe. I will discuss the possible mechanisms of CP violation and various formalisms in which to understand and express them. I will discuss the requirements for an experiment to study CP violation in the B system and then show how these are being realised in the design of the BaBar experiment at PEP-II
Wednesday 03 May 1995, 16:00 : Dr Alfred Goldhaber (Stony Brook / Cambridge)
An Open Universe from Inflation
A model is described in which inflation occurs, with many of the features customarily associated with inflation, but the universe today may still exhibit appreciable negative curvature, e.g. Omega = 0.2. This is achieved without extremely fine tuning if the inflation proceeds in two stages. The first, or old inflation, stage, is one with a local minimum in the vacuum energy associated with the inflaton field, and implies a de Sitter universe. At some point a bubble forms through a a quantum transition, with new or slow-roll inflation proceeding inside. This bubble evolves into our visible universe, and the slow roll must involve a change in the magnitude of the inflaton field of the same order as the Planck scale. Tuning in the initial value (delta Phi)/Phi around 0.01 to 0.1 would be sufficient to give agreement with current observations. The model has been presented in a recent paper by M. Bucher, A.S. Goldhaber, and N. Turok, and further developed in work by Bucher and Turok.
Wednesday 15 March 1995, 16:00 : Dr Apostolos Pilaftsis (DRAL)
How Left-Right Asymmetry, Tau Polarization and Lepton Universality Constrain Unified Theories at the Z Peak
We suggest the use of a universality-breaking observable based on lepton asymmetries at the Z peak, which can efficiently constrain the parameter space of unified theories. The new observable is complementary to the common lepton-universality quantity relying on partial width differences and depends critically on the chirality of a possible non-universal Z-boson coupling to like- flavour leptons. The LEP potential of probing universality violation is discussed in representative low-energy extensions of the Standard Model (SM) that may be motivated by supersymmetric grand unified theories, such as the SM with left-handed and/or right-handed neutral isosinglets, the left-right symmetric model, and the minimal supersymmetric SM.
Wednesday 08 March 1995, 16:00 : Dr Mike Green (RHBNC)
The Search for the Higgs Boson at LEP I and LEP II
One of the most important outstanding issues for the standard model is the origin of mass. The most popular mechanism for generating mass requires the existence of one or more Higgs bosons with a mass below about 1 TeV. Prior to the operation of LEP there was very little opportunity to search for these and only a very small mass range close to zero was excluded. In the first few years of LEP running the whole mass range below 60 GeV has been excluded for the minimum standard model Higgs. At LEP II the sensitivity will be extended up to at least 80 GeV and hopefully higher. Direct searches above this mass will depend on LHC. However virtual effects in electroweak processes and a direct measurement of the top mass can help pin the Higgs down to a restricted mass region
Wednesday 22 February 1995, 16:00 : Dr Werner Vogelsang (DRAL)
Prompt Photon Production at Hera
We first review the status of prompt photon production in hadronic collisions, focussing in particular on the situation at high-energy ppbar colliders, where we show how the longstanding discrepancy between data and theory at small pt/S is removed by using steep small-x parton distributrions and also by taking into account the fragmentation contribution in next-to-leading order. Then we turn to prompt photon production at HERA and study its sensitivity to the parton, in particular the gluon, content of the photon in the framework of a complete next-to-leading order calculation. Special attention is paid to the issue of isolation constraints imposed on the cross section by experiment.
Wednesday 15 February 1995, 16:00 : Dr Tim Greenshaw (Liverpool)
Recent Results from H1
Wednesday 25th January 1995, 16:00 : Prof David Miller
The Next Collider(s) after LHC
The LHC must come first, but then what? There is important physics which other colliders can do but which LHC will find difficult or impossible. Technical and physics questions will be reviewed.
Wednesday 18 January 1995, 16:00 : Dr Neville Harnew (Oxford)
Exotic Physics Searches at Hera
During the 1993 running period of the HERA e-p collider, the ZEUS and H1 detectors each recorded approximately 550 inverse picobarns of integrated luminosity. I report on searches from both experiments for particles beyond the Standard Model, in which HERA is able to explore an entirely new kinematic range. Topics considered include the hunt for direct, indirect and flavour-changing leptoquarks, predicted in extensions to the Standard Model. I will also review the status of the search for excited electrons, neutrinos and quarks, all of which are predicted in composite models. Finally, I will discuss the prospects for the discovery of SUSY at HERA, both in the R-parity conserving and R-parity violating production and decay modes.
Wednesday 07 December 1994, 16:00 : Dr Allan Skillman
Inclusive Strange Vector and Tensor Meson Production in Hadronic Z Decays
Measurements have been made in the OPAL experiment at LEP of the inclusive production of strange vector phi(1020) and K*(892) mesons, and the neutral tensor meson K*(1430). The measurements for the vector states update previously published results based on lower statistics, while the K*(1430) rate represents the first measurement of a strange tensor state in Z0 decay. Both the overall production rates, and normalised differential cross sections for the vector states, have been compared to JETSET and HERWIG predictions. The peak positions in the xi = ln(1/xp) distributions have been measured and found to be consistent with measurements of other hadron states.
Wednesday 30 November 1994, 16:00 : Dr Jeff Forshaw (DRAL)
Diffractive Vector Meson Production at Large Momentum Transfer
HERA is able to observe diffractive production of vector mesons. Their observation at large momentum transfer will provide important information regarding the nature of the pomeron in QCD
Wednesday 23 November 1994, 16:00 : Dr Nick Brook (Glasgow)
The Hadronic Final State in Deep Inelastic Scattering at Zeus
The general characteristics of the hadronic final state in deep inelastic, neutral current electron-proton interactions at ZEUS are investigated for Q2 > 10 GeV2. The general properties of events with a large rapidity gap are investigated and compared to events without a rapidity gap. The kinematic properties of the jets in 2-jet events are measured and compared to NLO calculations. Fragmentation effects are investigated in the current fragmentation region. The data are used to study QCD coherence effects in DIS and to compare with corresponding e+e- data in order to test the universality of quark fragmentation.
Wednesday 02 November 1994, 16:00 : Prof Elliot Leader
The Present Status of Polarised Deep Inelastic Scattering
The unexpected results of the EMC experiment in 1987, using a longitudinally polarised lepton beam on a longitudinally polarised hydrogen target, with its suggestion of a "spin crisis in the parton model" catalysed an enormous amount of theoretical and experimental work. What that was regarded as a trivial extension of the unpolarised case is now seen to be a rich field of its own, full of subtleties. Several new experiments reported results during 1993/4. The first data on neutrons appeared and led initially to claims that the fundamental Bjorken sum rule was violated. The seminar will cover both the theory and phenomenology .
Friday 28 October 1994, 16:00 : Dr Lev Chekhtman (CERN/Novosibirsk)
Microstrip Gas Chambers - Possible Applications for X-ray Detection
Existing low-dose digital X-ray systems have much worse spatial resolution than photographic film/screen techniques. Microstrip gas chambers offer resolutions of around 100 microns and may have larger area than solid-state strip or CCD detectors.
Wednesday 26 October 1994, 16:00 : Dr Robert Bingham (DRAL)
Particle Acceleration in Plasmas using High Powered Lasers
The generation of relativistic plasma waves in low density plasmas is important in the quest for producing ultra-high acceleration gradients for accelerators. At present two methods are being pursued vigorously to achieve ultra-high acceleration gradients. The first is the beat wave mechanism which uses conventional long pulses (>100 ps) modest intensity lasers (I ~ 10E14 Watts/cm2 - 10E16 Watts/cm2) and the second uses the new breed of compact high brightness lasers (< 1 ps) and intensities > 10E18 Watts/cm2. With the development of these compact short pulse high brightness lasers new areas of study for laser matter interactions is opening up. In the ultra-high intensity regime laser plasma interactions are highly nonlinear and relativistic leading to new phenomenon such as plasma wakefield excitation for particle acceleration, relativistic self-focusing, remote guiding of laser beams, harmonic generation and photon acceleration.
Friday 21 October 1994, 16:00 : Dr John Hassard (IC)
Diamond Detectors: Towards the Frontiers of Technology
Diamond films are strictly insulators, but they are excellent photoconductors (and conductors of heat). As detectors for ionising particles they may be especially interesting for high-radiation environments in nuclear medicine and at future particle colliders.
Wednesday 12 October 1994, 16:00 : Prof Tegid Jones, Prof David Miller, Dr Mark Thomson
Glasgow Conference Review
Tegid, David and Mark will be telling us about the highlights of that sunny week in July, north of the border.
Website designed by Simon Jolly and Fern Pannell
|
CommonCrawl
|
Beta and Frequency of Data
Why are the betas of individual securities essentially the same whether we use daily or weekly data when calculating?
$\begingroup$ This would only be true in population, the actual estimates would differ with probability 1 in an almost surely sense (since estimator distribution is over reals). $\endgroup$ – user2763361 Apr 12 '14 at 12:02
$\begingroup$ I see that you explicitly mentioned daily and weekly time scales, but more generally this is not the case because of microstructure contamination and issues like the Epp's Effect. This is particularly relevant for high frequencies. $\endgroup$ – Jacob M. Morley Apr 15 '14 at 6:24
Suppose you have $$X\equiv\left(x_{1},\: x_{2}\right) $$ where $x_{1}$ are the daily log returns of the security and $x_{2}$ are the daily log returns of the market. Assume further that $X$ is iid multivariate normal $$X\sim N\left(\mu,\Sigma\right) $$ People frequently calculate beta as $$\beta_{1,2}\equiv\frac{\Sigma_{1,2}}{\Sigma_{2,2}} $$ If you convert $X$ from a daily series to a weekly series, you could say that the weekly variables are just the sum of the daily variables. Due to the properties of a normal distribution, this means you could write $$X_{weekly}\sim N\left(5\mu,5\Sigma\right) $$ This implies a weekly beta of $$\beta_{1,2}^{weekly}\equiv\frac{5\Sigma_{1,2}}{5\Sigma_{2,2}}=\frac{\Sigma_{1,2}}{\Sigma_{2,2}}$$ or that the beta is the same as the daily version.
There are a few wrinkles to this argument. First, returns may not be iid normally distributed, which would mean that the covariance of the weekly data may not be proportional to the covariance of the daily data. Second, the beta that really matters to an investor is the forward-looking beta on arithmetic returns. That beta is more complicated to calculate since it involves the conversion between the log returns and the arithmetic returns.
$\begingroup$ It is only true that the beta would be unchanged under particular assumptions about the underlying process and what beta you're using. $\endgroup$ – John Apr 11 '14 at 18:34
$\begingroup$ There could be many different betas (e.g., Fama-French, APT), but I'm referring more to how to calculate beta. See papers.ssrn.com/sol3/papers.cfm?abstract_id=1619923 In reality, returns are not normally distributed, why should I assume they are? $\endgroup$ – John Apr 11 '14 at 18:44
Not the answer you're looking for? Browse other questions tagged beta or ask your own question.
is beta of a portfolio always meaningful?
How to account for jumps in intraday data when calculating beta?
what is a reasonable beta in CAPM?
Beta and the Assumption of IID Returns
What do you do with low r-squared when calculating high-frequency beta
Historical beta: Beta estimation for which time horizon?
Creating Factor mimicking portfolio returns
How to verify if beta "works" for hedging?
Calculating beta to market
Calculating beta when holding market portfolio
|
CommonCrawl
|
Impact of vaccine supplies and delays on optimal control of the COVID-19 pandemic: mapping interventions for the Philippines
Carlo Delfin S. Estadilla ORCID: orcid.org/0000-0003-3444-02421,
Joshua Uyheng2,
Elvira P. de Lara-Tuprio1,
Timothy Robin Teng1,
Jay Michael R. Macalalag3 &
Maria Regina Justina E. Estuar4
Infectious Diseases of Poverty volume 10, Article number: 107 (2021) Cite this article
17k Accesses
Around the world, controlling the COVID-19 pandemic requires national coordination of multiple intervention strategies. As vaccinations are globally introduced into the repertoire of available interventions, it is important to consider how changes in the local supply of vaccines, including delays in administration, may be addressed through existing policy levers. This study aims to identify the optimal level of interventions for COVID-19 from 2021 to 2022 in the Philippines, which as a developing country is particularly vulnerable to shifting assumptions around vaccine availability. Furthermore, we explore optimal strategies in scenarios featuring delays in vaccine administration, expansions of vaccine supply, and limited combinations of interventions.
Embedding our work within the local policy landscape, we apply optimal control theory to the compartmental model of COVID-19 used by the Philippine government's pandemic surveillance platform and introduce four controls: (a) precautionary measures like community quarantines, (b) detection of asymptomatic cases, (c) detection of symptomatic cases, and (d) vaccinations. The model is fitted to local data using an L-BFGS minimization procedure. Optimality conditions are identified using Pontryagin's minimum principle and numerically solved using the forward–backward sweep method.
Simulation results indicate that early and effective implementation of both precautionary measures and symptomatic case detection is vital for averting the most infections at an efficient cost, resulting in \(>99\%\) reduction of infections compared to the no-control scenario. Expanding vaccine administration capacity to 440,000 full immunizations daily will reduce the overall cost of optimal strategy by \(25\%\), while allowing for a faster relaxation of more resource-intensive interventions. Furthermore, delays in vaccine administration require compensatory increases in the remaining policy levers to maintain a minimal number of infections. For example, delaying the vaccines by 180 days (6 months) will result in an \(18\%\) increase in the cost of the optimal strategy.
We conclude with practical insights regarding policy priorities particularly attuned to the Philippine context, but also applicable more broadly in similar resource-constrained settings. We emphasize three key takeaways of (a) sustaining efficient case detection, isolation, and treatment strategies; (b) expanding not only vaccine supply but also the capacity to administer them, and; (c) timeliness and consistency in adopting policy measures.
Graphic Abstract
In the year since the emergence of the global coronavirus disease 2019 (COVID-19) pandemic, national policies have had to decisively manage diverse issues of resource availability, institutional capacity, and collective behavioral change [1,2,3]. Striking the right balance of multiple strategies at the right time has been vital for implementing successful pandemic responses [4, 5]. Mathematical modelling has helped scientists and policymakers incorporate emergent discoveries about COVID-19 with existing knowledge to design effective interventions [6, 7].
In early 2021, the global introduction of vaccination as a viable counter to the disease prompts new analytical efforts. Regional inequalities introduce challenges to the global vaccine supply chain which may disrupt a straightforward vaccine rollout for a significant proportion of various national populations [8, 9]. Important questions emerge with respect to how governments may adequately adjust existing policies available for pandemic control in relation to multiple scenarios.
This paper undertakes an optimal control study of policies to control the COVID-19 outbreak in the Philippines, a developing country that may be particularly vulnerable to experiencing challenges to vaccine rollouts. Amidst large-scale preparations for the evaluation, selection, and distribution of vaccines, ongoing policies to respond to the pandemic continue to inform the Philippine government's strategies for pandemic management [10]. Questions around their optimal implementation are particularly salient for developing countries that face heavier burdens from both the pandemic and overly restrictive quarantine measures [11]. This study therefore asks: How should the Philippine government implement existing strategies of community quarantine and case detection in conjunction with the introduction of vaccine rollouts?
Mathematical modelling for forecasting COVID-19 outbreaks
Since the beginning of the COVID-19 pandemic, the academic literature has witnessed a vast surge of modelling studies. Existing reviews highlight the importance of compartmental models of COVID-19, in connection with other models based on time series forecasting and machine learning [12, 13]. Compartmental models mathematically encode known and emerging information about the transmission dynamics of the disease and have been locally applied across numerous contexts around the world, including major sites of COVID-19 transmission like China, India, Brazil, the United States, and the United Kingdom [14,15,16,17,18].
Mathematical modelling efforts have been beneficial for forecasting and intervention assessment [19,20,21]. For instance, in the United Kingdom, a stochastic, age-structured transmission model was used to quantify the costs and mortalities of unmitigated outbreaks without interventions, highlighting the need for sustaining combined control efforts [22]. In another example, an age-structured model with social contact matrices was used to compare the impacts of different reopening strategies on the relative reduction of cases in different regions in China [23].
Optimal control theory for modelling pandemic response
In this work, we utilize optimal control theory to model effective pandemic response. Optimal control theory refers to a field of study that deals with finding optimal solutions to a problem expressed in the form of a nonlinear dynamical system [24, 25]. This helps identify efficient methods of achieving desired outcomes, such as cost-effective infection control [15, 26].
Numerous studies have implemented optimal control theory toward similar end goals. In the absence of vaccines, most early studies focused on non-pharmaceutical interventions, including various combinations of rapid testing, contact tracing, and awareness campaigns [27,28,29,30,31]. Ullah and colleagues sought to disentangle the impacts of quarantine and case detection rates on exposed, critical, and hospitalized COVID-19 patients [32]. Other research modelled the effect of limited total testing resources, through the addition of an isoperimetric constraint to the optimal control problem [33].
Eventually, however, newer research was further able to consider the impacts of eventual vaccine availability. In an age-structured model, Bonnans and Gianatti studied minimization of the death toll, cost of confinement, and hospitalization peaks discussing a possible extension of their model when a vaccine becomes available [34]. Libotte and colleagues likewise explored programs for vaccine administration within a multi-objective setup, determining a set of Pareto optimal strategies that would minimize infections while also minimizing the number of vaccines needed [35].
Responding to COVID-19 in the Philippines
The present work specifically investigates the dynamics of COVID-19 in the Philippines and aims to identify optimal strategies for efficiently controlling infections. We draw on existing modelling efforts by the national pandemic surveillance system [36] to derive realistic parameters which match existing epidemic trends and available intervention strategies in the country [37]. By deploying models informed by local parameters of the disease, we therefore aim to provide both theoretically optimal and contextually practical recommendations for policymakers [38].
In the Philippines, non-pharmaceutical interventions have primarily included phased community quarantines and mandated wearing of face masks [39]. Enhancements to the capacity of the health system to efficiently detect asymptomatic and symptomatic infectious individuals have also been key [40, 41]. In early 2021, imminent vaccine rollouts posed a salient new factor for pandemic control. We therefore sought to design optimal strategies for their distribution, and consider appropriate responses to potential obstacles which may arise in resource-constrained settings.
Aims of the current study
Burgeoning scholarship points to rich global knowledge of the effectiveness and efficiency of various policy tools against the pandemic. However, both nationally specific impacts of the pandemic and the limitations faced by intervening bodies highlight the importance of grounding optimal control analysis in the local context.
In this view, the present work therefore aims to achieve the following goals. First, we frame pandemic interventions with vaccinations as an optimal control problem to identify scenarios for effective pandemic control. Second, we explore various vaccination scenarios featuring both delays and expansions of vaccine administration. This enables a future-oriented analysis of how local policymakers may compensate for unforeseen developments in the global supply chain. Finally, we perform a systematic ablation analysis, whereby we restrict various combinations of available controls to model more limited control scenarios.
Model formulation
To capture the local dynamics of COVID-19 transmission in the country, we form a model that utilizes the local incidence data from the Department of Health-Epidemiology Bureau (DOH-EB) [42]. The COVID-19 model utilizes six compartments to subdivide the population: susceptible (S), exposed (E), infectious but asymptomatic \((I_a)\), infectious and symptomatic \((I_s)\), confirmed (C), and removed (R). These compartments are governed by the epidemic flow as illustrated in Fig. 1. Compartment S consists of individuals who have not been infected with COVID-19 but may contract the disease once exposed to the virus. Compartment E consists of individuals who have been infected but are still within the latency stage of the disease. These individuals will eventually become infectious, and categorized into two compartments depending on the presence (\(I_s\)) or absence (\(I_a\)) of symptoms. Once detected, these infectious individuals will move to compartment C, where they will be included among the active cases. Individuals in this compartment are assumed to be isolated, and hence not capable of infecting the susceptible population, and receiving treatment while in isolation. Lastly, compartment R consists of individuals who have acquired immunity from the disease. We assume that those who have recovered from the disease acquire permanent immunity and therefore move to compartment R.
Diagram of the compartmental model. Disease progression from the susceptible (S) compartment through the exposed (E), asymptomatic \((I_a)\) and symptomatic \((I_s)\) infectious, confirmed (C), and removed (R) compartments
The movements of individuals toward and out of the different compartments are governed by several parameters of the model. The transmission rate \(\beta\) is a function of the disease transmission rate \(\beta _0\), based on an assumed basic reproduction number \(R_0\), and a reduction factor \((1 - \lambda )\). The parameter \(\lambda\) reflects the effect of community quarantine imposed by the government, as well as the degree of compliance to minimum health standards, which includes practicing proper hygiene, social distancing, and wearing protective face coverings. Moreover, the parameter \(\psi\) accounts for the infectiousness of asymptomatic individuals relative to those who have symptoms. The rates of transfer to the two infectious compartments, \(\alpha _a\) for asymptomatic and \(\alpha _s\) for symptomatic, are both dependent on the incubation period \(\tau\) of the virus.
Other parameters in the model include the constant recruitment rate A into the S compartment, which is driven by new births in the population. To account for deaths by natural causes, a constant rate of \(\mu\) per unit time is applied to all compartments in the model. In addition, deaths due to the disease are included through the parameters \(\epsilon _I\) and \(\epsilon _T\), affecting the infectious symptomatic and confirmed compartments, respectively.
By taking into account the above assumptions, a mathematical model is developed, which can be described by the following system of six ordinary differential equations:
$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{dS}{dt} = A - \beta S \dfrac{\psi I_a + I_s}{N} - \mu S, S(0)\ge 0, \\ \dfrac{dE}{dt} = \beta S \dfrac{\psi I_a + I_s}{N} - (\alpha _a + \alpha _s + \mu )E, E(0)\ge 0, \\ \dfrac{dI_a}{dt} = \alpha _a E - (\mu +\omega +\delta _a+\theta ) I_a, I_a(0)\ge 0, \\ \dfrac{dI_s}{dt} = \alpha _s E+\omega I_a-(\mu +\epsilon _I+\delta _s)I_s, I_s(0)\ge 0, \\ \dfrac{dC}{dt} = \delta _a I_a+ \delta _s I_s - (\mu +\epsilon _T+r) C, C(0)\ge 0, \\ \dfrac{dR}{dt} = \theta I_a + r C - \mu R, R(0)\ge 0, \\ \end{array}\right. } \end{aligned}$$
where \(\beta = \beta _0 (1-\lambda )\), \(\alpha _a = \frac{c}{\tau }\), \(\alpha _s = \frac{1-c}{\tau }\), \(N = S + E + I_a + I_s + C + R\). The functions S, E, \(I_a\), \(I_s\), C, and R are differentiable real-valued functions on \({\mathbb {R}}\). Moreover, all parameters are nonnegative constants.
Parameter values
The values of various parameters were determined using several sources and methods. Local COVID-19 data [42] were used in calculating detection rate (\(\delta _a,\delta _s\)), post-detection recovery rate (r), and death rate of COVID-19 cases (\(\epsilon _I\), \(\epsilon _T\)). The recruitment rate (A) and natural death rate (\(\mu\)) are calculated from population data [43,44,45]. For the other parameters which cannot be computed directly from data, we rely on references that estimate their values. The basic reproduction number of COVID-19 (\(R_0\)) and relative infectiousness of asymptomatic cases (\(\psi\)) are based on estimates by the US Centers for Disease Control [46]. The incubation rate (\(\tau\)) and symptomatic transition (\(\omega\)) are obtained from reports by the World Health Organization [47, 48]. The proportion of asymptomatic cases (c) is from Mizumoto et al. [49] that studied the COVID-19 outbreak at the Diamond Princess cruise ship.
We fit the model output to data by employing a curve-fitting algorithm to estimate the value of the transmission reduction rate (\(\lambda\)), the initial values for the exposed (E(0)), infectious asymptomatic (\(I_a(0)\)), and infectious symptomatic (\(I_s(0)\)). In particular, the constrained L-BFGS optimization procedure [50] was utilized to minimize the sum of squared errors between the model output and the empirical time series. The parameter \(\lambda\) is fitted on a per-month basis starting from March 2020, to coincide with the changes in the disease dynamics and the corresponding transmission reduction policies implemented by the government that tends to be updated monthly [51]. The output is a vector of best-fit transmission reduction parameters \([\lambda _1,\lambda _2,...,\lambda _n]\) where n is the number of months since March 2020. This fitting procedure is utilized to produce forecasts for the Philippine COVID-19 epidemic [36]. Following the above parametrization, our model fits well to the Philippine data for cumulative cases of COVID-19 (Fig. 2). Table 1 summarizes the parameter values used in the model.
Fit of model output to data following L-BFGS optimization procedure
Table 1 Summary of parameter values for the COVID-19 model
Model with optimal control
We explore four control strategies to mitigate the COVID-19 epidemic-precautionary measures, detection of asymptomatic cases, detection of symptomatic cases, and vaccination. The definitions of each of these controls and how they are incorporated in the model are given below:
Precautionary measures (\(u_1(t)\)) refer to government-led efforts to inhibit possible contacts between susceptible and infectious individuals by regulating public gatherings, closing schools, suspending office work, enforcing adherence to health protocols such as social distancing, mask-wearing, hand-washing, etc. This control affects the transmission rate \(\beta\) and is incorporated in the model as a factor (\(1-u_1(t)\)), replacing (\(1 - \lambda\)). The value of \(u_1(t)\) represents the effort of precaution at time t. A value of 0 indicates that no precautionary measure is being practiced, and a value of 1 indicates full effort on precaution prohibiting any form of infection.
Detection of asymptomatic cases (\(u_2(t)\)) entails identifying and isolating infectious individuals who do not have symptoms of COVID-19. This may be done through laboratory tests such as reverse transcription-polymerase chain reaction (RT-PCR) to determine whether an individual is infectious or not. A positive case is taken to be immediately followed by isolation at home or in a dedicated quarantine facility to prevent transmission. It is assumed, therefore, that after an individual is confirmed to have COVID-19, s/he is not able to infect susceptible individuals. We incorporate this control to the model by replacing \(\delta _a\) with a time-varying control function \(u_2(t)\). The value of \(u_2(t)\) represents the effort of testing and isolation at a given time t. A value of 0 indicates the absence of testing and isolating, and a value of 1 indicates testing and isolating all infectious asymptomatic individuals on a given unit of time.
Detection of symptomatic cases (\(u_3(t)\)) follows the same definition as the detection of asymptomatic cases but applied to individuals that exhibit symptoms of COVID-19. We replace \(\delta _s\) by \(u_3(t)\) to incorporate this control to the model. Similarly, a value of 0 of this control indicates the absence of effort to test and isolate symptomatic individuals while a value of 1 indicates full testing and detection of all symptomatic individuals on a given unit of time.
Vaccination (\(u_4(t)\)) refers to the full inoculation of susceptible individuals for them to acquire protection against COVID-19 infection or protection against a severe case of the disease. We assume in this paper that vaccines give protection against infection, that is, an individual who is fully vaccinated gets immunity to COVID-19 over the period considered. Multiple vaccines with varying effectiveness rates have been identified for use against COVID-19 such as those developed and manufactured by Pfizer-BioNTech, Moderna, Sinovac, etc. We consider the average effectiveness rate of the vaccines weighted by the usage, denoted by \(\sigma\), where \(0\le \sigma \le 1\). To incorporate vaccination in the model, we add a rate of transfer from compartment S to R equal to \(\sigma u_4(t)\). The value of \(u_4(t)\) represents the effort of vaccination for the susceptible population. A value of 0 represents no vaccination efforts while a value of 1 represents vaccination of all susceptible individuals on a single unit of time.
Our goal is to identify the optimal strategy for limiting the spread of SARS-CoV-2 in a population using minimal cost of controls. In this study, the optimal control problem minimizes the number of asymptomatic (\(I_a\)) and symptomatic individuals (\(I_s\)) and the control costs. The controls are expressed in quadratic forms to incorporate nonlinear costs for the implementation of each control and to ensure the convexity of the cost function. This is a common form of an objective functional in optimal control problems [25, 52]. The objective functional is represented by:
$$\begin{aligned} J(\vec {u}) = \int _{t_0}^{t_f} \bigg (I_a(t) + I_s(t) + w_1 u_1^2(t) + w_2 u_2^2(t) + w_3 u_3^2(t) + w_4 u_4^2(t)\bigg ) \,dt, \end{aligned}$$
where \(t_0\) and \(t_f\) represent January 1, 2021 and December 31, 2022 respectively, reflecting a 2-year period. The parameters \(w_i, i=1,2,3,4,\) account for the relative costs of implementing controls \(u_i\). They represent the weights of corresponding terms in the integrand and their importance in the optimal control problem.
We aim to identify \(u_i^*(t), i=1,2,3,4,\) such that
$$\begin{aligned} J\left( u_1^*, u_2^*, u_3^*,u_4^*\right) = \min _{{\mathscr {U}}}J\left( u_1, u_2, u_3,u_4\right) , \end{aligned}$$
where for Lebesgue integrable \(u_i\),
$$\begin{aligned} {\mathscr {U}} = \left\{ \left( u_1, u_2, u_3,u_4\right) | u_{i}^{\mathrm{{min}}} \le u_i(t)\le u_{i}^{\mathrm{{max}}}, t_0\le t \le t_f \right\} . \end{aligned}$$
Here, \(u_{i}^{\mathrm{{min}}}\) and \(u_{i}^{\mathrm{{max}}}\) are the lower and upper bounds of the control \(u_i\), representing minimum and maximum implementation efforts.
The constraints of the optimal control problem are given by:
$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{dS}{dt} = A - (\beta _0(1- u_1)) S \dfrac{\psi I_a + I_s}{N} - \sigma u_4 S - \mu S, S(0)\ge 0, \\ \dfrac{dE}{dt} = \beta _0(1- u_1) S \dfrac{\psi I_a + I_s}{N} - (\alpha _a + \alpha _s + \mu )E, E(0)\ge 0, \\ \dfrac{dI_a}{dt} = \alpha _a E - (\mu +\omega +u_2+\theta ) I_a, I_a(0)\ge 0, \\ \dfrac{dI_s}{dt} = \alpha _s E+\omega I_a-(\mu +\epsilon _I+u_3)I_s, I_s(0)\ge 0, \\ \dfrac{dC}{dt} = u_2 I_a+ u_3 I_s - (\mu +\epsilon _T+r) C, C(0)\ge 0, \\ \dfrac{dR}{dt} = \theta I_a + r C + \sigma u_4 S - \mu R, R(0)\ge 0, \\ \end{array}\right. } \end{aligned}$$
$$\begin{aligned} N = S + E + I_a + I_s + C + R. \end{aligned}$$
The existence of the optimal solution can be shown using standard results in optimal control theory [25, 52]. The necessary convexity of the integrand of the objective functional, positive definiteness of system (4), and the linear dependence of the state differential equations to the controls are satisfied in our model.
We apply Pontryagin's minimum principle [24] to determine the necessary conditions using the optimality system for our problem (see Additional file 1: Appendix). This system is a two-point boundary problem with initial conditions for the state variables and terminal conditions for the adjoint variables. The solutions are solved numerically using a Runge–Kutta fourth-order scheme. The state variables are solved forward in time while the adjoint variables are solved backwards, referred to as Forward–Backward Sweep Method [25]. We update the controls using a convex combination of the latest and previous values. This process is iterated until the updates in the control values are very small or less than the machine epsilon.
The initial state values are computed using a combination of data and model fitting. We relied on model fitting to data to get the values for \(E(0),I_a(0)\) and \(I_s(0)\) [36]. The initial value for confirmed cases (C(0)) is based on data from the Department of Health [42]. The initial number of removed individuals (R(0)) is assumed to be higher than the detected recoveries on January 1, 2021, to include recoveries from undetected asymptomatic cases. We estimate that this is equal to 450,000, consistent with the output of our model [36]. Lastly, the initial susceptible population is estimated to be equal to the whole population minus the assumed values for the other compartments. Table 3 lists the initial state values used in our simulations.
Table 2 Lower (\(u_{i}^{\mathrm{{min}}}\)) and upper (\(u_{i}^{\mathrm{{max}}}\)) bounds for control strategies representing available effort in the Philippines
Table 3 Initial values for model states
We fix \(u_{i}^{\mathrm{min}}=0\) for \(i\in \{1,2,3,4\}\) while upper bounds of the controls are varied to reflect the realistic maximum efforts that can be achieved with each control. Results from model fitting to data (see Table 1) show that the highest value for precaution is 0.85. Direct computations from epidemiological data provided by the Department of Health [42] show that the minimum monthly average duration of detection, from symptom onset to confirmation of test results, may take 5 days. We take the inverse of this duration as our upper bound for both detection controls, hence \(u_{2}^{\mathrm{{max}}} = u_{3}^{\mathrm{{max}}} = 0.2\). Lastly, the upper bound for vaccination (\(u_4\)) is based on government proclamations [10]. The boundaries for the control values are summarized in Table 2.
The weight parameters \(w_i, i=1,2,3,4,\) are adjusted to balance the terms in the integrand of the cost function (2). These parameters reflect the total costs and payoffs of implementing each control strategy including the cost of the products used (test kits, vaccines, masks, etc.), operational costs (personnel salary, rent, procurement, refrigeration units, etc.), opportunity costs for the economy due to lockdowns, and so forth. To determine the values of the weight parameters, we consider the fact that the upper bounds for the controls already reflect the realistic and achievable efforts that the Philippine government can exert, given the historical and prospective cost and availability of each control. Recall that the upper bounds for precaution, detection of asymptomatic cases, and detection of symptomatic cases are based on data, and the upper bound for the vaccination control is based on government targets. Based on this, we assume that a lower \(u_i^{max}\) signifies a relatively higher implementation cost for the i-th control as implementing the control beyond this upper bound is not readily available to the government. Given this, we rescale the terms in the cost function (2) by equating the weight parameters to the inverse of the maximum allowable effort for each control (\(w_1=1/0.85, w_2=w_3=1/0.2, w_4 = 1/0.002\)). This improves the balance of the terms in the cost function and reduces the bias to implement controls that have lower upper bounds.
Limited information is available as of writing to estimate the vaccine effectiveness parameter (\(\sigma\)) for COVID-19. Moreover, various vaccines with different effectiveness rates will be deployed in the Philippines as they become available [10], making it more difficult to give a realistic estimate of this parameter. The best alternative is to equate this to the pooled effectiveness of vaccines for a similar disease such as influenza, which is at \(\sigma =0.7\) [53].
Optimal control strategies for COVID-19 in the Philippines
Solving the stated optimal control problem in Eqs. 2–4 generates the optimal levels of precaution (\(u_1\)), asymptomatic detection (\(u_2\)), symptomatic detection (\(u_3\)), and vaccination (\(u_4\)) over the 2-year period from January 1, 2021 to December 31, 2022 (Fig. 3). Notably, the maximum feasible vaccination rate must be sustained throughout the entire 2-year period. Symptomatic detection must likewise maintain a high value close to the maximum feasible value, lowering slightly in the first 6 months of 2021. Asymptomatic detection and precaution must likewise be implemented at their maximum respective values early in 2021. But asymptomatic detection may be eased to nearly zero by the second quarter of 2021, while precaution eases a bit by the second half of 2021, then is gradually reduced throughout the remainder of the 2 years under consideration.
Optimal control strategy for the COVID-19 epidemic in the Philippines
We also simulate a no-control scenario by setting the controls to 0 throughout the 2-year period. A dramatic difference is observed between the with-control and without-control conditions (Fig. 4). Without controls, a peak number of 100 million infectious individuals is achieved within the first quarter of 2021. Meanwhile, with the optimal implementation of all controls, the total number of running infections is driven down quickly in early 2021, without ever breaching the 10,000-mark. After the full-throttle implementation of all controls in early 2021, sustained efforts at detecting symptomatic individuals and proactively vaccinating susceptible populations may thus be sufficient to prevent the infected population from rising. So long as the latter strategies are maintained, the majority of the population may slowly ease stringent distancing rules and fewer resources need to be urgently allocated to asymptomatic detection.
Total infectious individuals (\(I_a + I_s\)) with full implementation of the optimal control strategy
Policy impacts of vaccine delays
With the bottleneck in the global supply of vaccines, it is of chief concern when a country can start vaccinating its population. It is not far-fetched for countries to experience delays in vaccination which in turn, would have an effect on policy. Here, we look into the impact of vaccine delay on the optimal control strategy. To achieve this, we add the following constraint to the optimal control problem 2–4:
$$\begin{aligned} u_4(t)=0, \quad t\in [t_0,t_d], \quad t_0\le t_d \le t_f, \end{aligned}$$
where \(t_d\) is the vaccine delay in days. We first solve for the optimal control profiles given vaccine delays of 180, 360, and 540 days (Fig. 5). Results reveal that increased efforts on the other controls become necessary given longer delays in vaccination. Primarily, precautionary measures should compensate when vaccines are delayed. Detection of symptomatic infectious individuals should also be strengthened for mitigation if vaccine rollout is slowed down.
To further evaluate the effect of vaccine delay, we compute the cost of the optimal strategy in each scenario. The cost of the control strategy (\({\mathscr {C}}\)) is defined as the integral of the last four terms in the cost function of the optimal control problem over the time period, specifically:
$$\begin{aligned} {\mathscr {C}} = \int _{t_0}^{t_f} \bigg (w_1 u_1^2(t) + w_2 u_2^2(t) + w_3 u_3^2(t) + w_4 u_4^2(t)\bigg ) dt. \end{aligned}$$
We observe that the optimal strategy in the no-delay scenario has the least cost and will also result in the least number of total infections. Given this, we decided to compare the cost of the optimal strategy in the scenarios with vaccine delay relative to the no-delay scenario. Specifically, given vaccine delays of 30k days, where \(k\in \{0,1,2,3,...,24\}\), we examine the resulting relative cost and total infections of the optimal control strategy (Fig. 6).
Delay of vaccine availability increases effort for other controls in the optimal strategy. \(u_1\)-Precautionary measures, \(u_2\)-detection of asymptomatic cases, \(u_3\)-detection of symptomatic cases, \(u_4\)-vaccination
Effects of delays in initiation of vaccination strategies on the relative cost of overall strategy (left) and resulting total infections over the 2-year period (right)
Based on our simulations, longer delays in vaccination result in higher relative costs of the optimal strategy. For example, delaying the vaccines by 180 days will result in an \(18\%\) increase in cost, and delaying the vaccine by 360 days will increase the cost by \(32\%\), due to the compensation of the other controls. We also observe that delaying the vaccine by 60 days or more will only increase the total infections in the optimal strategy by a relatively small amount (\(<1000\) additional infections). These two findings suggest that while the number of infections can still be effectively managed when vaccines are delayed, vaccine delay may pose more deleterious effects on the economy than on the overall health status of the population, even if the optimal strategy is implemented and new cases are minimized.
Policy impacts of expanding vaccine supply
Recall that the upper bound for the vaccination control (\(u_{4}^{\mathrm{{max}}}\)) was fixed based on the vaccination plan by the local government. Note however that the actual vaccine capacity is unknown and is dependent on negotiations and supply. Here, we want to look into whether increasing vaccine supply will have a significant effect on the optimal control strategy. To discern this relationship, we modify the value of \(u_{4}^{\mathrm{{max}}}\) to double and triple the initial value. The value of the weight parameter \(w_4\) is equal to \(1/u_{4}^{\mathrm{{max}}}\) in each scenario.
We solve for the optimal control profiles and the resulting number of vaccinations if \(u_{4}^{\mathrm{{max}}}=0.002,0.004\) or 0.006 (Figs. 7, 8). We observe that increasing the vaccine capacity will have a significant impact on the optimal control strategy. For the three scenarios considered, the maximum vaccination effort must be utilized for almost the entire period, but vaccination effort is eased earlier if the vaccine capacity is larger. Another important consequence of increasing vaccine capacity is the earlier relaxation of the other controls, mainly precautionary measures and detection of symptomatic cases.
Increasing the upper bound for vaccination control may allow for shorter and lighter precautionary measures and testing. \(u_1\)-Precautionary measures, \(u_2\)-detection of asymptomatic cases, \(u_3\)-detection of symptomatic cases, \(u_4\)-vaccination
Comparing the relative cost and the resulting total infections reveals that increasing the vaccine capacity by double or triple the initial amount will reduce the cost of the optimal strategy (Table 4). We observe a 25% cost reduction when vaccine supply is doubled, and 37% cost reduction when vaccine supply is tripled, coupled with a slight reduction in the total number of infections. This reinforces the proposition that dedicating more resources to vaccinations is more favorable in the long run owing to the reduced efforts necessary for implementing the other interventions.
Table 4 Total infections and relative cost for optimal control strategy when upper bound for vaccination (\(u_{4}^{\mathrm{{max}}}\)) is increased
Managing cost and impacts of pandemic control strategies
Finally, to integratively consider the dynamics of all interventions, we analyzed the results of a control setup featuring all interventions in conjunction with ablated control scenarios featuring various subsets of the proposed controls. We compared outcomes for optimized single control, dual control, and triple control strategies to simulate scenarios when the other controls are not available, as well as to shed light on their contributions to controlling the epidemic, and highlight the significance of implementing all four in concert. To do this, controls that are not being implemented in each scenario are fixed at 0. Full details on various control profiles are available in Additional file 1: Appendix.
The optimal number of complete vaccinations (daily and cumulative) for different values of upper bound for vaccination
A menu of control strategies visualized according to infections averted (x-coordinate) and cost (y-coordinate). Points are sized by infections averted, such that larger points symbolize control strategies that avert more infections. Points are also colored by cost, such that bluer points incur lower costs, and redder points incur higher costs. \(u_1\)-Precautionary measures, \(u_2\)-detection of asymptomatic cases, \(u_3\)-detection of symptomatic cases, \(u_4\)-vaccination
We compare all possible combinations of the controls in terms of both their cost and the infections averted relative to the no-control scenario (Fig. 9). Intuitively, an ideal scenario entails low cost and high infections averted. Strikingly, the control scenarios appear to cluster into three major categories. First are low cost, low impact strategies, which do not entail high costs, but also do not effectively curb infections. These correspond to intervention programs that do not mobilize sufficient resources to address the health crisis and subsequently do not achieve the desired impact. We observe here that this cluster of intervention combinations primarily exclude precautionary measures like community quarantines, meaning various scenarios implementing only vaccinations and case detection strategies. This entails that, even if these strategies might be less costly-especially from an economic perspective-than prolonged lockdowns, they may not be sufficient on their own to control outbreak trajectories. Especially in the early months of 2021, it will be vital for local governments to limit unnecessary contact between individuals, and enforce such procedures reliably and consistently. Otherwise, even maximally implemented case detection and vaccination strategies will not be able to protect a significant proportion of the population from infection.
The second category represents the worst-case scenario: high cost, low impact strategies. These indicate attempts by governing entities to invest resources in public health interventions, which ultimately still do not effectively control outbreaks. This therefore presents a severe misuse of resources without achieving desired outcomes. Note here that these intervention combinations primarily exclude efficient detection of symptomatic cases. This means that without efficient detection of symptomatic cases—even at full implementation of precautionary quarantine measures, vaccinations, and overall high costs to the economy at large-few infections will be averted.
The final category represents the most favorable category of interventions. Here, medium cost, high impact strategies pertain to scenarios involving some investment of resources, directed towards the most efficient policy levers. This results in an effective minimization of infections, thereby constituting well-targeted policy decisions that achieve the objective of controlling the pandemic. Now we see that both precautionary measures and efficient symptomatic case detection are vital to achieving this set of outcomes. Even if these interventions do introduce higher costs, they can effectively quell outbreak trajectories early on. Moreover, in these setups, their implementation is even given an allowance for relaxation over time if executed effectively and consistently in the early months. This therefore reduces costs from a broader perspective, as overall fewer infections arise nationally, and fewer resources are demanded to address them.
In this study, we studied optimal strategies for controlling the spread of COVID-19 in the Philippines. We considered existing policy interventions as well as the introduction of vaccination rollouts to quell outbreak trajectories. Dramatic differences were detected in simulated infections depending on which controls were prioritized. In particular, we observed the importance of early and effective implementation of precautionary measures like community quarantines, coupled with efficient detection of symptomatic cases. Furthermore, we found that even if vaccinations alone do not constitute an efficient response to the pandemic, expanding vaccine supply relaxes the need for these more resource-intensive interventions. Meanwhile, although less than ideal, delays in vaccine administration may also be compensated through the remaining policy levers.
These insights bear particular consequences for policies in developing countries like the Philippines [1, 2]. Here, we highlight three key takeaways. First, more than a year into the pandemic, it remains crucial to sustain efficient case detection, isolation, and treatment strategies, particularly for symptomatic cases. In the Philippines, where long-term states of community lockdown have prevailed as the government's response to short-term fluctuations in COVID-19 cases [39], our findings suggest that an optimal, cost-effective strategy would actually entail relaxations to such measures—but only under the condition that symptomatic case detection is properly implemented. Hence, improving the capacity of the local health system to identify, process, and manage these cases efficiently should be a top priority beyond cyclically adjusting quarantine levels [54].
Second, policymakers need to consider how to expand not just vaccine supply, but also the capacity to administer them. This includes both logistical concerns regarding the strategic use of facilities to vaccinate individuals, inform the public regarding vaccine availability and eligibility, as well as reducing vaccine hesitancy through culturally sensitive health promotion programs that strengthen public trust [3, 55, 56]. Only when such health communication objectives are accomplished and collective behavioral change is initiated can the vaccination strategies posited in this work be made feasible. Otherwise, even procuring sufficient supplies of vaccines will not achieve its intended effects to stop local outbreaks. Initial efforts along these lines are underway by the Philippine National Vaccine Operations Center, for which our model results strongly reaffirm the urgency.
Third, timeliness and consistency must be emphasized in adopting policy measures [4, 5]. Across all favorable scenarios simulated, high levels of key interventions were needed in the early months of 2021, with relaxations projected only mid-2021 or in 2022. Systems for detecting existing cases, while preventing new ones, are needed for vaccinations to meaningfully impact outbreak trajectories and reduce overall costs—especially as these systems need to be robustly sustained in the event of potential delays in acquiring sufficient vaccines for the entire population. This ensures that even if developing countries like the Philippines do not hold sway over the global supply chain of vaccines, the pandemic may still be kept under control through means over which local policymakers do wield authority.
It is worth noting that the conclusions of this work rest on several assumptions. These assumptions constrain the interpretation of our findings but likewise point to promising avenues for future work [6, 7]. A number of limitations pertain to the realism of our model. For instance, we assume a total population that is unaffected by immigration and emigration flows. This can be a source of confounding given that, despite additional precautions in travel protocols, relevant susceptible, exposed, and infectious populations, in reality, include individuals who may leave or arrive within national borders. Furthermore, ordinary differential equation models of diseases, such as the one utilized in this paper, assume homogeneous mixing of individuals in the population. That is, our model assumes that a susceptible individual has a uniform chance of being infected by any infectious individual, regardless of their geographic proximity. However, in an archipelago such as the Philippines, this assumption of free-mixing does not necessarily hold due to different patterns of movement within the country and the tendency of the outbreaks to be concentrated within more urban areas. In relation to this heterogeneity, we further hypothesize that a multi-region approach to mitigation such as in [57] will further lower the total cost of the optimal control strategies. Hence, though we do not include such migratory flows and heterogeneity in our analysis, extensions may valuably consider these factors as well.
We additionally assume that vaccines work by transitioning individuals to a removed compartment. However, it is not the case that all vaccines guarantee 100% immunity for all individuals. Additionally, for parsimony, our model does not incorporate a number of wide-ranging issues which remain pressing to address, yet are beyond the scope of this work. These include: prioritized vaccinations of various segments of the population, lesser-known dynamics of reinfection with COVID-19, the variability in the economic cost of the controls, the impact of emerging variants of the pathogen, documented distinctions between being protected from symptoms while being able to transmit the virus to others, or the practical circumstances of administering vaccines requiring multiple doses [58, 59]. With regard to this latter limitation, we specifically do not model potential logistical impediments in vaccine scheduling or temporary states of partial protection resulting from initial doses [60]. Such considerations therefore place caveats on the implications of our results, further highlighting the need for robust investment in these strategies when translating them into real-world policies. These factors may likewise be modelled with greater precision in succeeding work as growing knowledge continues to accumulate in line with close monitoring by the scientific community [61].
This study applies optimal control theory to an epidemiological model to calculate the optimal efforts required for precautionary measures, asymptomatic case detection, symptomatic case detection, and vaccination to mitigate the impact of the COVID-19 pandemic. Using parameter values suitable to the Philippines, we show that precautionary measures and symptomatic case detection are essential interventions to minimize infections at an efficient cost. Furthermore, relaxation of measures is feasible after an early and maximal implementation of all controls. Our results also highlight that increasing vaccination capacity and timely acquisition of vaccines are key to reducing the total implementation cost, leading to earlier relaxation of the other non-pharmaceutical interventions in the optimal strategy. This work provides a quantitative reference for drafting policies designed to control the pandemic in the most efficient manner.
The data analysed for this study are available in the Department of Health COVID-19 tracker https://www.doh.gov.ph/covid19tracker.
Ahmed F, Ahmed N, Pissarides C, Stiglitz J. Why inequality could spread COVID-19. Lancet Public Health. 2020;5(5):e240.
Chiriboga D, Garay J, Buss P, Madrigal RS, Rispel LC. Health inequity during the COVID-19 pandemic: a cry for ethical global leadership. Lancet (Lond, Engl). 2020;395(10238):1690.
Van Bavel JJ, Baicker K, Boggio PS, Capraro V, Cichocka A, Cikara M, et al. Using social and behavioural science to support COVID-19 pandemic response. Nat Hum Behav. 2020;4(5):460–71.
Haug N, Geyrhofer L, Londei A, Dervic E, Desvars-Larrive A, Loreto V, et al. Ranking the effectiveness of worldwide COVID-19 government interventions. Nat Hum Behav. 2020;4(12):1303–12.
Ruktanonchai NW, Floyd J, Lai S, Ruktanonchai CW, Sadilek A, Rente-Lourenco P, et al. Assessing the impact of coordinated COVID-19 exit strategies across Europe. Science. 2020;369(6510):1465–70.
Becker AD, Grantz KH, Hegde ST, Bérubé S, Cummings DA, Wesolowski A. Development and dissemination of infectious disease dynamic transmission models during the COVID-19 pandemic: what can we learn from other pathogens and how can we move forward? Lancet Digit Health. 2020;3:e41–50.
Vespignani A, Tian H, Dye C, Lloyd-Smith JO, Eggo RM, Shrestha M, et al. Modelling COVID-19. Nat Rev Phys. 2020;2(6):279–81.
Nelson R. COVID-19 disrupts vaccine delivery. Lancet Infect Dis. 2020;20(5):546.
Usher AD. COVID-19 vaccines for all? Lancet. 2020;395(10240):1822–3.
Department of Health. The Philippine National Deployment and vaccination plan for COVID-19 vaccines. 2021. https://doh.gov.ph/node/27220. Accessed 16 Feb 2021.
Loayza NV. Costs and trade-offs in the fight against the COVID-19 pandemic: a developing country perspective. Washington, DC: World Bank; 2020.
Mohamadou Y, Halidou A, Kapen PT. A review of mathematical modeling, artificial intelligence and datasets used in the study, prediction and management of COVID-19. Appl Intell. 2020;50(11):3913–25.
Rahimi I, Chen F, Gandomi AH. A review on COVID-19 forecasting models. Neural Comput Appl. 2021. https://doi.org/10.1007/s00521-020-05626-8.
IHME COVID-19 Forecasting Team. Modeling COVID-19 scenarios for the United States. Nat Med. 2020;27(1):94–105.
Lin F, Muthuraman K, Lawley M. An optimal control theory approach to non-pharmaceutical interventions. BMC Infect Dis. 2010;10(1):1–13.
Neto OP, Kennedy DM, Reis JC, Wang Y, Brizzi ACB, Zambrano GJ, et al. Mathematical model of COVID-19 intervention scenarios for São Paulo-Brazil. Nat Commun. 2021;12(1):1–13.
Samui P, Mondal J, Khajanchi S. A mathematical model for COVID-19 transmission dynamics with a case study of India. Chaos Solitons Fract. 2020;140:110173.
Jia J, Ding J, Liu S, Liao G, Li J, Duan B, et al. Modeling the control of COVID-19: impact of policy interventions and meteorological factors. Electron J Differ Eq. 2020;23(23):1–24.
Chatterjee R, Bajwa S, Dwivedi D, Kanji R, Ahammed M, Shaw R. COVID-19 risk assessment tool: dual application of risk communication and risk governance. Prog Disaster Sci. 2020;7:100109.
Pluchino A, Biondo A, Giuffrida N, Inturri G, Latora V, Le Moli R, et al. A novel methodology for epidemic risk assessment of COVID-19 outbreak. Sci Rep. 2021;11(1):1–20.
Sangiorgio V, Parisi F. A multicriteria approach for risk assessment of Covid-19 in urban district lockdown. Saf Sci. 2020;130:104862.
Davies NG, Kucharski AJ, Eggo RM, Gimma A, Edmunds WJ, Jombart T, et al. Effects of non-pharmaceutical interventions on COVID-19 cases, deaths, and demand for hospital services in the UK: a modelling study. Lancet Public Health. 2020;5(7):e375–85.
Liu Y, Gu Z, Xia S, Shi B, Zhou XN, Shi Y, et al. What are the underlying transmission patterns of COVID-19 outbreak? An age-specific social contact characterization. EClinicalMedicine. 2020;22:100354.
Pontryagin LS, Boltyanskii VG, Gamkrelize RV, Mishchenko EF. The mathematical theory of optimal processes. New York: Wiley; 1962.
Lenhart S, Workman JT. Optimal control applied to biological models. New York: Chapman and Hall/CRC; 2007.
Rowthorn RE, Laxminarayan R, Gilligan CA. Optimal control of epidemics in metapopulations. J R Soc Interface. 2009;6(41):1135–44.
Tsay C, Lejarza F, Stadtherr MA, Baldea M. Modeling, state estimation, and optimal control for the US COVID-19 outbreak. Sci Rep. 2020;10(1):10711.
Perkins TA, España G. Optimal control of the COVID-19 pandemic with non-pharmaceutical interventions. Bull Math Biol. 2020;82(9):118.
Sasmita NR, Ikhwan M, Suyanto S, Chongsuvivatwong V. Optimal control on a mathematical model to pattern the progression of coronavirus disease 2019 (COVID-19) in Indonesia. Glob Health Res Policy. 2020;5(1):38.
Madubueze CE, Dachollom S, Onwubuya IO. Controlling the spread of COVID-19: optimal control analysis. Comput Math Methods Med. 2020;2020:1–14.
Obsu LL, Balcha SF. Optimal control strategies for the transmission risk of COVID-19. J Biol Dyn. 2020;14(1):590–607.
Ullah S, Khan MA. Modeling the impact of non-pharmaceutical interventions on the dynamics of novel coronavirus with optimal control analysis with a case study. Chaos Solitons Fract. 2020;139:110075.
Ndii MZ, Adi YA. Modelling the transmission dynamics of COVID-19 under limited resources. Commun Math Biol Neurosci. 2020.
Bonnans JF, Gianatti J. Optimal control techniques based on infection age for the study of the COVID-19 epidemic. Math Model Nat Phenom. 2020;15:48.
Libotte GB, Lobato FS, Platt GM, Neto AJS. Determination of an optimal control strategy for vaccine administration in COVID-19 pandemic treatment. Comput Methods Programs Biomed. 2020;196:105664.
FASSSTER. COVID-19 Philippines LGU monitoring platform. 2020. https://fassster.ehealth.ph/covid19/. Accessed 29 May 2021.
Estuar MRJE, Uyheng J, De Leon M, Benito DJ, De Lara-Tuprio E, Estadilla C, et al. Science and public service during a pandemic: reflections from the scientists of the Philippine Government's COVID-19 surveillance platform. Philipp Stud Hist Ethnogr Viewp. 2020;68(3):493–504.
Uyheng J, Pulmano CE, Estuar MRJ. Deploying system dynamics models for disease surveillance in the Philippines. In: International conference on social computing, behavioral-cultural modeling and prediction and behavior representation in modeling and simulation. Springer; 2020. p. 35–44.
Vallejo BM Jr, Ong RAC. Policy responses and government science advice for the COVID 19 pandemic in the Philippines: January to April 2020. Prog Disaster Sci. 2020;7:100115.
Buhat CAH, Duero JCC, Felix EFO, Rabajante JF, Mamplata JB. Optimal allocation of COVID-19 test kits among accredited testing centers in the Philippines. J Healthc Inform Res. 2021;5(1):54–69.
Caldwell JM, de Lara-Tuprio E, Teng TRY, Estuar MRJE, Sarmiento RF, Abayawardana M, et al. Understanding COVID-19 dynamics and the effects of interventions in the Philippines: a mathematical modelling study. medRxiv. 2021;p. 2021–01.
Department of Health-Epidemiology Bureau. COVID-19 Tracker Philippines. 2020. https://www.doh.gov.ph/covid19tracker. Accessed 8 May 2020.
macrotrends. Philippines birth rate 1950–2020. 2020. https://www.macrotrends.net/countries/PHL/philippines/birth-rate. Accessed 8 May 2020.
macrotrends. Philippines life expectancy 1950–2020. 2020. https://www.macrotrends.net/countries/PHL/philippines/life-expectancy. Accessed 8 May 2020.
Philippine Statistics Authority. Census of population and housing. 2020. https://psa.gov.ph/population-and-housing. Accessed 8 May 2020.
US Centers for Disease and Control. COVID-19 pandemic planning scenarios. 2021. https://www.cdc.gov/coronavirus/2019-ncov/hcp/planning-scenarios.html. Accessed 20 Mar 2021.
World Health Organization. Coronavirus disease 2019 situation report-73. 2020. https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200402-sitrep-73-covid-19.pdf. Accessed 8 May 2020.
World Health Organization. Report of the WHO-China Joint Mission on Coronavirus Disease 2019 (COVID-19). World Health Organization; 2020. https://www.who.int/docs/default-source/coronaviruse/who-china-joint-mission-on-covid-19-final-report.PDF.
Mizumoto K, Kagaya K, Zarebski A, Chowell G. Estimating the asymptomatic proportion of coronavirus disease 2019 (COVID-19) cases on board the Diamond Princess cruise ship, Yokohama, Japan, 2020. Eurosurveillance. 2020;25(10):2000180.
Varadhan R, et al. Numerical optimization in R: Beyond optim. J Stat Softw. 2014;60(1):1–3.
Official Gazette. Inter-agency task force for the management of emerging infectious diseases resolutions. 2020. https://www.officialgazette.gov.ph/section/laws/other-issuances/inter-agency-task-force-for-the-management-of-emerging-infectious-diseases-resolutions/. Accessed 15 Nov 2020.
Fleming W. Deterministic and stochastic optimal control. New York: Springer; 1975.
Osterholm MT, Kelley NS, Sommer A, Belongia EA. Efficacy and effectiveness of influenza vaccines: a systematic review and meta-analysis. Lancet Infect Dis. 2012;12(1):36–44.
World Health Organization. COVID-19 strategy update; 2020. https://www.who.int/emergencies/diseases/novel-coronavirus-2019/strategies-and-plans. Accessed 8 May 2020.
Dror AA, Eisenbach N, Taiber S, Morozov NG, Mizrachi M, Zigron A, et al. Vaccine hesitancy: the next challenge in the fight against COVID-19. Eur J Epidemiol. 2020;35(8):775–9.
Volpp KG, Loewenstein G, Buttenheim AM. Behaviorally informed strategies for a national COVID-19 vaccine promotion program. JAMA. 2021;325(2):125–6.
Carli R, Cavone G, Epicoco N, Scarabaggio P, Dotoli M. Model predictive control to mitigate the COVID-19 outbreak in a multi-region scenario. Annu Rev Control. 2020;50:373–93.
Logunov DY, Dolzhikova IV, Shcheblyakov DV, Tukhvatulin AI, Zubkova OV, Dzharullaeva AS, et al. Safety and efficacy of an rAd26 and rAd5 vector-based heterologous prime-boost COVID-19 vaccine: an interim analysis of a randomised controlled phase 3 trial in Russia. Lancet. 2021;397(10275):671–81.
Malkov E. Simulation of coronavirus disease 2019 (COVID-19) scenarios with possibility of reinfection. Chaos Solitons Fract. 2020;139:110296.
Saad-Roy CM, Morris SE, Metcalf CJE, Mina MJ, Baker RE, Farrar J, et al. Epidemiological and evolutionary considerations of SARS-CoV-2 vaccine dosing regimes. Science. 2021;372:363–70.
del Rio C, Malani P. COVID-19 in 2021-continuing uncertainty. JAMA. 2021;325:1389–90.
CDSE would like to thank Aurelio A. de los Reyes V, Dr. rer. nat. for his invaluable feedback on the initial design and findings of the study. All authors would like to thank the anonymous reviewers for their constructive comments.
This work was supported by the Department of Science and Technology-Philippine Council for Health Research and Development (DOST-PCHRD) and the United Nations Development Programme (UNDP) Pintig Lab through the FASSSTER project. This work is also supported by the Rizal Library Open Access Journal Publication Grant of the Ateneo de Manila University.
Department of Mathematics, Ateneo de Manila University, Katipunan Ave., Brgy. Loyola Heights, 1102, Quezon City, Philippines
Carlo Delfin S. Estadilla, Elvira P. de Lara-Tuprio & Timothy Robin Teng
Department of Psychology, Ateneo de Manila University, Quezon City, Philippines
Joshua Uyheng
Department of Mathematics, Caraga State University, Butuan City, Philippines
Jay Michael R. Macalalag
Department of Information Systems and Computer Science, Ateneo de Manila University, Quezon City, Philippines
Maria Regina Justina E. Estuar
Carlo Delfin S. Estadilla
Elvira P. de Lara-Tuprio
Timothy Robin Teng
CDSE contributed to the design of the work and coding of the optimal control algorithms. CDSE and JU contributed to the presentation of the results. EDLT, CDSE, JMRM, JU, and TRT contributed to the design of the model. All authors contributed to the drafting of the work. All authors read and approved the final manuscript.
Correspondence to Carlo Delfin S. Estadilla.
Additional file 1: Appendix S1.
The Appendix is included as a separate file as part of this manuscript submission and contains supplementary information on: (1) Optimality conditions, (2) Control profiles for (ablated) single, dual, and triple control scenarios.
Estadilla, C.D.S., Uyheng, J., de Lara-Tuprio, E.P. et al. Impact of vaccine supplies and delays on optimal control of the COVID-19 pandemic: mapping interventions for the Philippines. Infect Dis Poverty 10, 107 (2021). https://doi.org/10.1186/s40249-021-00886-5
Optimization and equity of COVID-19 vaccination
|
CommonCrawl
|
Optical fiber technology and its applications
Luo Bowen
PhysicsProject / Lab Report
\documentclass[a4paper]{article}
\usepackage{geometry}
\usepackage{floatrow}
\usepackage{layout}
\usepackage{amssymb}
\geometry{margin=1in}
\usepackage{authblk}
\title{\textbf{\huge{Optical fiber technology and its applications}}}
\author{\textbf\large{Luo Bowen}}
%\affiliation{The College, University of Chicago, Chicago, Illinois 60637, USA}
\affil{\textbf{Sichuan University, Chengdu, China}}
% Abstract
\begin{abstract}
Optical fibers use the principle that light travels through fine glass fibers by total reflection. In 1870, John Tyndall made the first use of total internal reflection to transmit light. In front of an audience at the royal society of art in London, he conducted an experiment in which light from the surface of a bucket of water could be transmitted through a small hole in the side of the bucket. He cut a small hole in the side of a large barrel and filled it with water. Shining a strong light on the water in the bucket, he turned off the light in the room, and saw that the light coming into the water flowed out with the water from the holes, and fell to the surface with the curved water, creating a light beam on the ground. Tyndall's discovery inspired scientists to use very fine filaments of glass, called fibre-optic fibres, as "wires" for light, and since then amazing progress has been made. Today's fiberglass has become a viable means of optical transmission in optical communications. Today, I will introduce to you this advanced science, the principle and application of optical fiber.Optical fiber communication has many advantages, such as large transmission capacity, small loss, light weight, good confidentiality and so on. If communication is like a road, then the wider the frequency band of the communication line, the more information it is allowed to transmit, and the larger the information capacity. Optical fiber has many applications in many fields, such as communication, computer LAN, and power system monitoring. With the progress of the society and the increasing demand for information, the transmission capacity of digital system is constantly increasing, and people urgently need to establish a unified worldwide communication network. In this situation, many shortcomings of the existing PDH are gradually exposed, mainly in North America, Western Europe and Asia, the three digital systems are not compatible with each other, there is no unified standard optical interface in the world, making the establishment of international telecommunications network and operation, management and maintenance are very complex and difficult.Optical fiber communication has many advantages, such as large transmission capacity, small loss, light weight, good confidentiality and so on.When the Angle of the incident light reaches or exceeds a certain Angle, the refracted light will disappear and all the incident light will be reflected back, which is the total reflection of light. Different substances have different refractive angles to light of the same wavelength, that is, different substances have different refractive indexes, and the same substances have different refractive angles to light of different wavelength. Fiber optic communication is based on the above principles.
\end{abstract}\maketitle
% \tableofcontents
%% Main body starts here
% Description of Section 1
\label{sec:Intro}
% Introduction
% What is the background and the context?
Light is an electromagnetic wave visible to the naked eye. In science, light is sometimes defined as all electromagnetic waves. Light consists of an elementary particle called a photon. It has the characteristics of particle and wave, or wave-particle duality. Optical fiber propagates through optical fiber through total reflection, which is a special situation of reflection. Light has both reflection and refraction phenomena. The reflection of light is like this: the light is blocked in a plane and the incident Angle and reflection Angle are the same. The refraction of light is the deflection of light as it passes through a medium of different densities. The energy loss of light in the refraction process is much greater than that of light in the reflection process. Light can transmit both information and energy, which is a very advanced and feasible technology in today's society. However, how to convey the information and the quality of the information is a more meaningful question. So, people invented optical fibers to transmit light, and we can read information from light. I will do an experiment to test the quality of the information that I come up with. In the experiment, we can see that the optical fiber clearly propagates the video information. We show here the quality of the information transmitted using optical fiber. It can be seen that the information transmitted by optical fiber is very reliable, free from interference, and of great practical value. So we can use fiber optics to transmit information somewhere else, to connect to the Internet. We hope that in the experiment, we can see the image on the display and the image we see on the camera as accurately as possible, and receive electromagnetic radiation from the surrounding wires, and also present a stable image. When the optical fiber moves, we also hope that the image displayed on the display can be stable output, we can see the clear image.
Optical fiber is widely used in communication, computer LAN, power system monitoring and other fields. With the progress of the society and the information demand growing, increasing number system of transmission capacity, there is an urgent need to build a unified global communications network. In this case, many existing PDH gradually exposed shortcomings, mainly concentrated in North America, Western Europe and Asia three digital system is not compatible with each other, there is no unified standard optical interface, make the establishment and operation of international telecommunication network, is a very complex and difficult to manage and maintain. Optical fiber communication has a large capacity of transmission, loss of small, light weight, good secrecy, etc.There are four kinds of loss in optical fiber, namely absorption loss, scattering loss, irregular loss and bending loss.
% Section 2\\
\section{What's light}
\label{sec:Sec2}
Light is an electromagnetic wave (visible spectrum) that can be seen (accepted) by the naked eye. In science, light is sometimes defined as all electromagnetic waves. Light consists of an elementary particle called a photon. It has the characteristics of particle and wave, or wave-particle duality. The object that is giving off light is called illuminant, "be" this condition must have, illuminant can be natural or man-made. Physics refers to the electromagnetic wave can emit a certain wavelength range (including visible and ultraviolet, infrared, X-ray and other invisible light) objects.
Light can travel in a transparent medium such as vacuum, air and water. The speed of light in a vacuum is the fastest known speed in the universe. Light can travel 299792458m in a vacuum for 1s. That is to say, the fastest speed of light is $c=2.99792000\times 10^{8} m/s$ in vacuum. In every other medium the velocity is lower than in a vacuum. But when it's in the air, then the speed of light is approximately $2.99792000\times 10^{8} m/s$. In our calculation, the speed of light in vacuum or air is set as $c=2.99792000\times 10^{8} m/s m/s$.(fastest, limit speed) light speed in water is much smaller than that in vacuum, which is approximately 3/4 of the speed of light in vacuum. Light travels faster in glass than in a vacuum, about two-thirds the speed of light in a vacuum. If a human being were to orbit the earth at the speed of light, it would be able to orbit the earth 7.5 times in 1 second. If a 1000km/h racing car runs continuously, it will take 17 years to cover the distance from the sun to the earth.\cite{cJ. Geisler}Light has wave-particle duality. We can think of light as either an electromagnetic wave with a high frequency or as a particle, a quantum of light, or simply a photon.
Light bounces off water, glass, and many other surfaces. The normal is a line perpendicular to the mirror; The incident Angle is the Angle between the incident ray and the normal; The Angle of reflection is the Angle between the reflected ray and the normal. In reflection, in the same plane we can notice that the reflected light, incident light and normal stay together. Reflected light and incident light are separated on both sides of the normal line;\cite{cJ. Geisler} The Angle of reflection is equal to the Angle of incidence. This is the law of light reflection. If you shine light on a mirror in the opposite direction of the reflected light, then it will be reflected back in the opposite direction of the incident light. This shows that in reflection the optical path is reversible. There are two kinds of reflection in physics: specular and diffuse. Specular reflection occurs on a very smooth surface (such as a mirror). Two parallel rays can be reflected on a reflecting object and remain parallel. Uneven surfaces, such as white paper, reflect light in all directions. This reflection is called diffuse reflection. Most reflections are diffuse.
The phenomenon of light splitting into monochromatic light is called light dispersion. Newton first observed the dispersion of light through a prism in 1666, splitting white light into bands of color (spectra). The dispersion phenomenon indicates that the refractive index $n$ (or propagation velocity $v=c/n$) of light in the medium varies with the frequency of light.\cite{cJ. Geisler} Light dispersion can be achieved by prism, diffraction grating, interferometer, etc.
The phenomenon of light splitting into monochromatic light to form a spectrum is called light dispersion. White light is a compound color light composed of red, orange, yellow, green, blue, indigo, purple and other colors. Red, orange, yellow, green and other colors are called monochromatic light. Dispersion can be achieved by using a prism or grating as a "dispersion system" instrument. After the polychromatic light enters the prism, because it has the different refractive index to each frequency light, each color light propagation direction has the different degree deflection, therefore when leaves the prism separately disperses, forms the spectrum.\\
The phenomenon in which the refractive index of a medium varies with the frequency of light waves or the wavelength of a vacuum. When the dichromatic light refracts at the dielectric interface, the dielectric has different refractive index to the light of different wavelength, and the light of different colors are separated from each other due to the different refractive Angle. In 1672, Newton used a prism to break down sunlight into bands of colored light.\cite{Optical Fibre} This was the first experiment with dispersion. The dispersion rule is usually described by the relation between the refractive index $n$ or dispersion rate $\dfrac{\mathrm{dn} }{\mathrm{d} \lambda }$ lambda and wavelength lambda of the medium. The dispersion of any medium can be divided into normal dispersion and abnormal dispersion. Let a beam of white light shine on the glass prism. After the light is refracted by the prism, it forms a color band on the white screen on the other side of the prism. Its color arrangement is red near the top corner of the prism, purple near the bottom end, and orange, yellow, green, blue and indigo in the middle. Each color in the spectrum can not be separated from other color light, called monochromatic light. Light that is a mixture of monochromatic light is called dichromatic light. In nature, the light from the sun, incandescent electric lamps and fluorescent lamps are compound color light. When light hits an object, part of it is reflected and part of it is absorbed. The light that passes through determines the color of the transparent object, and the light that reflects determines the color of the opaque object. Different objects reflect, absorb, and penetrate different colors differently, and therefore present different colors. For example, a yellow light shines on a blue object, and that object is black. Because blue objects can only reflect blue light, and not yellow light, so the absorption of yellow light, you can only see the black. But if it's white, it reflects all the colors.
% Section 3
\section{The refraction and reflection of light and optical fiber}
When light rays from one medium slant into another, the direction of propagation deflection, this phenomenon is called light refraction. The Angle between the refracted ray and the normal is called the refractive Angle. If the density of the incoming medium is greater than that of the original medium, the refractive Angle is less than the incident Angle. Otherwise, if less than, the refractive Angle is greater than the incident Angle. If the incident Angle is 0 and the refraction Angle is 0, it is part of the reflection.\cite{Optical Fibre} However, light refraction is also generated in the same heterogeneous medium. Theoretically, it can be emitted from one direction without generating refraction. However, since the boundary cannot be distinguished and there are usually several layers instead of planes, refraction will be generated no matter what. For example, seeing the bottom of a calm lake from the shore is a first refraction, but seeing a mirage is a second refraction. Convex and concave lenses are the two most common types of lenses because of the first type of refraction. In refraction, the optical path is reversible.
The law of refraction of light is one of the fundamental laws of geometrical optics. It is the law that determines the relation between the incident ray and the refracted ray during the refraction process of light. It was proposed by Snell in 1621. When light radiates from one medium to the smooth interface of another medium, part of the light is reflected by the interface, and the other part of light is refracted through the interface by the other medium. The refracted light follows the law of refraction which goes as follows: the refracted light is in the same plane as the incident light and the normal line, and the refracted light and the incident light are on both sides of the normal line respectively. The sine of the incident Angle is proportional to the sine of the refraction Angle, which is $\dfrac{sina}{sinb}= n_{12}$.Where $n_{12}$ is a constant called the relative refractive index of the second medium to the first medium.\cite{Optical Fibre}
The law of light reflection was proposed by French civil engineer and physicist Fresnel. He discovered the relationship between viewpoint Angle and reflection/refraction. Therefore, the reflection of light is also known as Fresnel reflection. If you walk by the pool and look at the water under your feet, you can tell that the water is so transparent which makes the reflection is very visible the way you look through. Then when I look at the pool in a distant, I would probably see the pool is not transparent, but the reflection of the lake is kind of obvious. There is a name for this phenomenon which is Fresnel effect. We can see all substances except metals have different degrees of "Fresnel effect". Reflected light and incident light and reflected light's law refers to the normal on the same plane. The reflected light and on both sides of the normal incident light. The Angle of reflection is the same as the Angle of incidence. It can be summarized as: "three lines coplanar, two lines, two Angle is equal to". Light is reversible. In the phenomenon of light reflection, the path of light is equal. In the special case of vertical incidence, the Angle of incidence and the Angle of reflection are all zero.
As shown below, a ray of light a first reaches the interface from medium 1 at time t.
From the triangle ABC and ADC, we can get:
$sin\theta _{1}= \dfrac{BC}{AC}= \dfrac{v_{1}\Delta t}{AC}$
$sin\theta _{2}= \dfrac{AD}{AC}= \dfrac{v_{2}\Delta t}{AC}$
$\dfrac{sin\theta 1}{sin\theta 2}= \dfrac{v_{1}}{v_{2}}= \dfrac{n_{2}}{n_{1}}= n_{12}$
$\dfrac{sin\theta 1}{sin\theta 2}= = n_{12}$
\cite{Tarja Volotinen}
\begin{figure}[h]
\includegraphics[width=0.55\textwidth]{4.png}
\caption{the refraction of light}
% \caption{the refraction of light}
There are two kinds of reflection: specular and diffuse.\\
1. Specular reflection: parallel light rays are reflected from the interface and then shot out in a certain direction, and the reflected light rays can only be received in a certain direction (the reflecting surface is a smooth plane).
2. Diffuse reflection: parallel light is reflected from the interface in different directions, that is, reflected light can be received in different directions (the reflective surface is a rough plane or a curved surface).
But both specular reflection and diffuse reflection follow the law of light reflection; Diffuse reflection is an irregular reflection caused by uneven surface. It means that there are some arcs or sharp lines on the uneven surface. Suppose a ray of light hits the surface and makes its tangent line as the reflection ray.
Note: light path is reversible in light reflection.
\begin{figure}
\caption{the reflection of light}
Due to the refraction and total reflection of the air, there will be a "mirage" in the air. When the sea is calm, standing on the beach, we can sometimes see high-rise buildings, streets and mountains overlapping in the distance. \cite{cJ. Geisler} There is a reason for this. When the atmosphere is relatively calm, with the increase of temperature, then the density of the air decreases too. The air temperature on the sea surface is lower than that in the air, and the refractive index of the air in the lower layer is higher than that in the upper layer. We can roughly divide the atmosphere in the air into many horizontal air layers, as shown in the lower layer which has a higher refractive index. When the light from a distant scene shoots into the air, it is constantly refracted, and the incident Angle of the upper layer with a lower refractive index becomes larger and larger. When the incident Angle of the light reaches a critical Angle, the phenomenon of total reflection will occur. The light will be high in the air through the air refraction gradually return to the next higher refractive index layer. An observer near the ground can observe an image created by light from the air, which is a mirage. When light propagates to the interface of two kinds of media, reflection and refraction usually take place at the same time.
%\begin{figure}
% \includegraphics[width=0.75\textwidth]{6.png}
% % \caption{}
%\end{figure}
\section{The optical fiber}
If certain conditions are met, the light will no longer be refracted, but will all return to the original media, which is called total reflection. Total reflection is a special phenomenon of refraction of light. It can only occur when the incident Angle is greater than or equal to the critical Angle and the light shoots from the optically dense medium to the optically sparsely medium. Before the occurrence of total reflection, with the increase of the incident Angle, both the refractive Angle and the reflected Angle increase, but the refractive Angle increases rapidly. Under the condition that the intensity of the incident light is certain, the refracted light becomes weaker and weaker, and the reflected light becomes stronger and stronger. When the full emission occurs, the refracted light disappears, and the intensity of the reflected light is the same as the intensity of the incident light. That's why fiber optics are used today.
It has been found that light travels along a stream of fine wine ejected from the barrel. It was also found that light can travel along curved glass rods. Since the density of medium such as water is greater than that of surrounding substances (such as air), that is, light shoots from water to air. On the surface, the light seems to bend in water. People made a transparency later which is very high, like a spider silk degree of glass -- glass fiber, when the light is at a right Angle into the glass fiber, light along the winding glass fiber. It is called an optical fiber because it can be used to transmit light.
As we can see, optical fiber is usually used for some very long-distance information transmission because the loss of electricity in the wire is much higher than that of light in optical fiber. The terms fiber and cable are often confused. Most optical fibers are always hidden by some layers of protective structures which can protect them from being harmed. The covered cables are called fiber optic cables. The cover layer of the fiber can protect the fiber from the damage of the surroundings. The superfine fiber is wrapped in a plastic sheath, can be bent without fracture. The fiber is composed of two layers of different refractive index of glass composition. The inner layer is the inner core of light, with a diameter of several microns to dozens of microns, and the outer layer has a diameter of 0.1 -- 0.2mm. The refractive index of the inner glass is 0.01 higher than that of the outer glass. According to the principle of total reflection and refraction of light, when the critical angle light to the inner core and the outer interface angle is greater than the total reflection when the light does not penetrate the interface reflection. In the light of different substances in spreading at different rates, it is between the material interface are refraction and reflection. In addition, the refracted light angle as the angle of incident light change. When the incident angle reaches or exceeds a certain angle, the light will be refracted disappear, all incident light will be reflected back, this is a light reflection. Different substances refract light of the same wavelength at different angles (that is, different substances have different refractive index), and the same substance refracts light of different wavelength at different angles. Is based on the principle of optical fiber communication. Incident light is not transmitted entirely by optical fibers to the end of the optical fibers, but only in a certain angle range can the incident light be transmitted. This is called the angle of fiber numerical aperture. Large numerical aperture of optical fibers is beneficial to fiber docking. The numerical aperture of optical fibers produced by different manufacturers is different.\cite{Optical Fibre}
\caption{What is optical fibre}
\section{Principle of optical fiber transmission}
Definition of optical fiber communication: the communication mode that takes light is the one who takes information with it as transmission medium. To understand how fiber-optic cables work, imagine an infinite length of straw or a tube. From the start, you make up a long tube in your head. Then we assume the inner wall of it is covered by full reflector. Then, suppose you look through the pipes. From the other side, a few kilometers away, one of your friends light a match to shine it into the tube. Because that inside tube is a total reflector, the light from the flashlight will be reflected back and forth across the tube (even though the tube may be twisted), and as a result, you will see light at the other end. If your friend turns his flashlight on and off in Morse code, he can communicate with you through the channel. This is the basic principle of fiber optic cable.
It is possible to make a cable from the pipe that covers the inside of the reflector, but this cable can be very thick and it is difficult to cover the inside of the pipe with a full reflector. So real fiber optic cables are made of glass. And the glass is so pure for which it is very long, but you can see the light still then. Glass is drawn into very fine strands that are as thick as a human hair. Then, cover the bread with two layers of plastic over the glass.
By wrapping the glass in plastic, a mirror is created around the glass. This reflector produces a total internal reflection, just like a total reflector covered inside a tube. You can experience this reflection in a dark room with a flashlight and Windows. If you shine a flashlight at the window at an Angle of 90 degrees, the light will pass directly to the other side of the tube. But when someone aims the light at a very small Angle, the straw glass is to be a mirror. You can notice that the light reflected from the window onto the wall of the house. It is in this small angle inside the fiber transmission light is reflected, thus completely preserved inside a fiber. Telephone calls can travel via optical fibre cables, people convert the analog voice signal to digital voice signal.\cite{Optical Fibre}
\caption{Principle of optical fiber transmission}
\section{Functions of optical fiber}
There are many types of fiber, depending on the purpose, the required function and performance also vary.
1. The refractive index distribution of silica fiber core and cladding is controlled by doping amount. Quartz series optical fibers have been widely used in cable TV and communication systems due to their low power consumption and wide bandwidth characteristics. Quartz glass fiber has the advantage of low loss. When the wavelength is 1.0-1.7 m, the loss is only 1 dB/km, and the lowest is 0.2 dB/km when the wavelength is 1.55 m.\cite{Optical Fibre}
2. The operating wavelengths of the quartz series of optical fibers developed for optical communications are limited to 2 m, although they are used for shorter transmission distances. Therefore, it can work in the field of longer infrared wavelength. The infrared fiber is named as developed fiber. And the infrared fiber is usually applied for light energy transmission.\cite{Optical Fibre}
3. Composite fiber is a kind of multi-component glass fiber made by properly mixing oxides.The characteristic of multi-component glass is that the softening point of multi-component glass is lower than that of quartz glass and the refractive index of core and cladding is very varied. And this kind of fiber is usually applied in the iatrical of optical fiber endoscopy.\cite{Optical Fibre}
4. Plastic Clad Fiber USES high-purity silica glass as the core, while plastics such as silica gel with slightly lower refractive index than quartz as the cladding step type Fiber. Compared with quartz fiber, it has the characteristics of coarse core and high numerical aperture. Therefore, easy to combine with LED light source, the loss is small.\cite{Analogue optical fibre communications}
5. This is an optical fiber with plastic core and cladding.It's rare but truly exists. Early products were mainly used for decoration and illumination as well as near-light optical communication. The main raw materials are plexiglass, polystyrene and polycarbonate. The loss is restricted by the inherent c-h binding structure of plastics, which can reach dozens of dB per km in general. In order to reduce the loss is developing the application fluoresso series plastics. With a core diameter of 1000 m, plastic fiber is 100 times bigger than the single one, easy to connect. As we can see, along with the progress of broadband, the development of multimode plastic fiber as a graded refractive index has been paid attention to by the society. Recently, it is applied quickly in the automobile internal LAN, and it may be applied in the family LAN in the future.\cite{Optical Fibre}
6.Hollow fiber is the formation of cylindrical space, optical fiber for the transmission of light. The hollow fiber is always applied to transport energy.\cite{Optical Fibre}
\caption{Function of optical fibre}
\section{loss of optical fiber transmission}
In July 1966, Dr. Charles kao and Dr. Hockham, British Chinese of the British standard telecommunications research institute, pointed out according to the dielectric waveguide theory that the the fiber loss is not inherent and high,due to impurities in fiber.In order to realize optical fiber communication, it is important to reduce the loss of optical fiber as much as possible. \cite{Optical Fibre}
There are four kinds of loss in optical fiber, namely absorption loss, scattering loss, irregular loss and bending loss.
A. Optical fiber absorption loss is caused by the absorption of light energy by optical fiber materials and impurities. They consume light energy in optical fiber in the form of heat energy, which is an important loss in optical fiber. 1. Material intrinsic absorption loss, which is caused by the intrinsic absorption of material. It has two bands, one in the near infrared region of 8 to 12 m, where the band's intrinsic absorption is due to vibration. The natural absorption band of another substance in the ultraviolet band, when the absorption is strong, its tail will be dragged to 0.7 to 1.1 m band. 2. Impurity absorption, mainly optical fiber materials containing iron, copper, chromium plasma. The higher the metal ion content, the greater the loss, as long as the strict control of the metal ion content. 3. Atomic defect absorption refers to the loss caused by atomic defects in the manufacturing process of optical fiber when glass is subjected to thermal excitation or strong radiation.\cite{Analogue optical fibre communications}
B. The scattering loss of optical fiber is caused by the coupling or leakage of optical power out of the fiber core due to the micro-fluctuation of atomic density in the fiber material component or the structural defect of fiber waveguide. It is due to the inhomogeneity of the atoms or molecules of the material and the structure of the material. The refractive index of the material produces microscopic inhomogeneity which results in the scattering of transmitted light waves. This scattering is inherent in the material and can not be eliminated. Rayleigh scattering is inversely proportional to the fourth power of the wavelength, and the loss caused by it can be calculated with the formula underneath:\\
$\alpha _{SR}\approx \dfrac{A}{\lambda _{0}^{4}}\left ( 1+B\Delta _{0} \right )$\cite{cJ. Geisler}\\
Where A and B are constants related to quartz and reference materials.
C. Structural irregularity loss is the part of loss caused by tiny structural fluctuations at the core cladding interface and uneven waveguide structure inside the fiber. When the structure of the fiber is irregular, the mode transformation will take place, and some of the transmitted energy will be shot out of the fiber core and become radiation mode, so that the loss will be increased. This loss can be reduced by improving manufacturing techniques.
D. Bending loss is the loss caused by the bending of optical fiber axis. Any deviation of the optical fiber axis visible to the naked eye from a straight line is called bending or macro bending. Fiber bending will cause the coupling between the modes in the fiber. When the energy of the propagation mode is coupled into the radiation mode or leakage mode, the bending loss will be generated. The loss increases exponentially as the radius of curvature decreases.
In the practical application of optical fiber, bending is inevitable. In order to maintain the same phase at different points on the isofacial plane, the phase velocity of optical wave needs to satisfy the following equation:\\
$\dfrac{V_{px}}{V_{pl}}=\dfrac{R+x}{R}$\cite{cJ. Geisler}\\
In the formula, R is the bending radius of the fiber, x is the distance from any point outside the fiber core to the fiber axis, and V is the phase velocity between x point and the fiber axis.\\
$x=\dfrac{(V_{px}-V_{pl})R}{V_{pl}}$\cite{cJ. Geisler}
\section{Image fiber transmission test}
Video signal transmission volume is growing, especially cable television, the need to send dozens of TV signals to thousands of households.In this experiment, I will adopt the method of direct modulation of analog signal to carry out the fiber transmission of video signal. The system mainly observes the fiber transmission of video signal and tests the performance of the fiber transmission analog signal. The experiment is essentially an experiment of transmitting analog signals by optical fiber.
The experimental principle is as follows: the small camera generates video signal (analog signal), which is sent to the optical transmitter through analog modulation. After the fiber transmission, the optical receiver detects the video signal and outputs it to the television receiver. Then test the optical fiber transmission video signal effect and characteristics to understand the optical fiber transmission characteristics of television signals. The better the image effect, the better the performance of optical fiber transmission. Before the transmission of video signal by optical fiber, the sine wave simulation transmission is adjusted to make the peak value of sine wave and the peak value of sine wave of waveform generator the same, indicating that the waveform can be transmitted normally, which is the best transmission effect of video signal.
The experimental procedure is as follows:\\
1. Connect the camera video output end with the luminous video module input end T131, and then connect T111 with a wire. The video input end of the TV set is connected with the output end T133 of the optical reception module video, and then T34 is connected to T21.\\
2. Mount 850nm optical transmitter HFBR-1414t and optical receiver HFBR-2416t, connect 1310nmT and 1310nmR with st-st fiber jumper, and form 850nm optical transmission system.\\
3. Turn the dial code switch BMI to the simulated state, and adjust the driving current of the optical generator with W112 to make the current less than 30mA.\\\cite{Analogue optical fibre communications}
4. Connect the dc power supply of the light emitting module (K10), the camera power supply and the TV power supply.\\
5. Adjust the potentiometer W111, W112 and W121 to achieve the best fiber video transmission effect.
\includegraphics[width=0.55\textwidth]{11.png}
\caption{The fiber transmission test}
\section{Conclusions}
\label{conclusions}
We can see that in the experiment, the image effect is clear, the sensitivity is high, and it is not disturbed by electromagnetic noise. So optical fiber transmission image performance is better, and our optical fiber is small, light, long life, low price. If communication is like a road, then the wider the frequency band of the communication line, the more information it is allowed to transmit, and the larger the information capacity. Optical fiber has many applications in many fields, such as communication, computer LAN, and power system monitoring. With the progress of the society and the increasing demand for information, the transmission capacity of digital system is constantly increasing, and people urgently need to establish a unified worldwide communication network. In this situation, many shortcomings of the existing PDH are gradually exposed, mainly in North America, Western Europe and Asia, the three digital systems are not compatible with each other, there is no unified standard optical interface in the world, making the establishment of international telecommunications network and operation, management and maintenance are very complex and difficult. Millions of kilometres of fiber-optic cables have been produced since the mid-1970s, when fiber-optic communications became a reality. Ships like the global sentinel built a glass highway around the world. So far, the ships have laid some 6.44 million kilometers of fiber under the ocean floor, enough to cross the Atlantic ocean a thousand times. In addition, the continents laid 483 million kilometers of intricate optical fiber, forming what people say is a glass communication necklace.
Today, the global fibre-optic network is not only responsible for the increasing volume of network traffic, but also for the transmission of information for the world's business and banking industry, as well as connecting the world's telephone. If a Chinese had called a friend living in London more than a decade ago, the call would have been transmitted via satellite. Today, the call can be converted into a laser pulse sent over an optical fiber by simply reaching the nearest telephone system relay station. Many countries in the world are now being brought closer together by undersea cables than would have been possible even a few decades ago, let alone hundreds or thousands of years ago. Optical fiber has changed the world, the distance has died, is like this. \cite{Analogue optical fibre communications}
\color{black}
\noindent\rule[0.25\baselineskip]{\textwidth}{1pt}
% Bibliography
% \small
%\bibliographystyle{unsrt}
\begin{thebibliography}{10}
\bibitem{Tarja Volotinen}
Tarja Volotinen and Willem Griffioen
\newblock Reliability of optical fibres and components.
\newblock {\em Reliability of optical fibres and components}, 1999.
\bibitem{cJ. Geisler}
cJ. Geisler,Beaven and J.P.Boutruche.
\newblock
\newblock {\em Optical fibres}, 1986.
\bibitem{Optical Fibre}
Kao;Charles K; Institution of Eletrical Engineers.
\newblock Optical Fibre
\newblock {\em Optical Fibre}, 1998
\bibitem{Analogue optical fibre communications}
Wilson;Brett; Ghassemloooy; Zabih; Darwazeh; Izzat
\newblock Analogue optical fibre communications
\newblock {\em Analogue optical fibre communications}, 1995.
\end{thebibliography}
|
CommonCrawl
|
Kirchhoff's Law
Problem (IIT JEE 2006): In a dark room with ambient temperature $T_0$, a black body is kept at temperature $T.$ Keeping the temperature of the black body constant (at $T$), sun rays are allowed to fall on the black body through a hole in the roof of the dark room. Assuming that there is no change in the ambient temperature of the room, which of the following statement(s) is (are) correct?
The quantity of radiation absorbed by the black body in unit time will increase.
Since emissivity $=$ absorptivity, hence the quantity of radiation emitted by black body in unit time will increase.
Black body radiates more energy in unit time in the visible spectrum.
The reflected energy in unit time by the black body remains same.
Solution: When sun rays fall on a black body, a portion of incident photons are absorbed and the remaining are reflected. So, the net quantity of radiations absorbed by the black body in unit time increases. At the same time, since emissivity and absorptivity are equal, the quantity of radiations emitted by the black body in unit time also increases. The wavelength $\lambda_\text{max}$ at which maximum radiation takes place depends on black body temperature $T$ and is given by Wien's displacement law $\lambda_\text{max}T=b$. Thus, A and B are correct.
Problem (IIT JEE 2003): The temperature ($T$) \emph{versus} time ($t$) graphs of two bodies X and Y with equal surface areas are shown in the figure. If the emissivity and the absorptivity of X and Y are $E_x$, $E_y$ and $a_x$, $a_y$, respectively, then,
$E_x > E_y$ and $a_x < a_y$
$E_x < E_y$ and $a_x > a_y$
$E_x > E_y$ and $a_x > a_y$
$E_x < E_y$ and $a_x < a_y$
Solution: The rate of cooling for $x$ is greater than that of $y$. The cooling occurs due to heat loss by radiation. By Stefan's law, the rate of heat loss is, \begin{align} \mathrm{d}Q/\mathrm{d}t=\sigma EA(T^4-T_0^4),\nonumber \end{align} where $E$ is emissivity. Hence, $E_x > E_y$. Since good emitters are good absorbers, the absorptivity $a_x > a_y$.
|
CommonCrawl
|
The non-abelian chiral anomaly and one-loop diagrams higher than the triangle one
Suppose chiral fermions $\psi$ interacting with gauge fields $A_{\mu,L/R}$. With $P_{L/R} \equiv \frac{1\mp\gamma_{5}}{2}$ and $t_{a,L/R}$ denoting the generators, the corresponding action reads $$ S = \int d^{4}x\bar{\psi}i\gamma_{\mu}D^{\mu}\psi, \quad D_{\mu} = \partial_{\mu} - iA_{\mu,L}^{a}t_{a,L}P_{L} - i\gamma_{5}A_{\mu,R}^{a}t_{a,R}P_{R} $$ To check the presence of the anomaly $\text{A}(x)$ in the conservation law for the current $$ J^{\mu}_{L/R,c} \equiv \bar{\psi}\gamma^{\mu}\gamma_{5}t_{c}\psi, $$ we have to calculate the VEV of its covariant divergence: $$ \tag 1 \langle (D_{\mu}J^{\mu}_{L/R}(x))_{a}\rangle_{A_{L/R}} \equiv \langle \partial_{\mu}J^{\mu}_{L/R,a} +f_{abc}^{L/R}A^{L/R}_{\mu,b}J^{\mu}_{L/R,c}\rangle_{A_{L/R}} \equiv \text{A}^{L/R}_{a}(x), $$ where $f_{abc}$ is the structure constant.
Let's study the one-loop contributions (other contributions do not exist, as was established by Adler and Bardeen) in $(1)$. In general, we have to study triangle diagrams, box diagrams, pentagon diagrams and so on, arising from the quantum effective action $\Gamma$. From dimensional analysis of corresponding integrals we conclude that the three-point vertex $$ \Gamma_{\mu\nu\alpha}^{abc}(x,y,z) \equiv \frac{\delta^{3} \Gamma}{\delta A^{\mu}_{a}(x)\delta A^{\nu}_{b}(y)\delta A^{\alpha}_{c}(z)}, $$ which generates the triangle diagram, is linearly divergent, the four-point vertex $\Gamma_{\mu\nu\alpha\beta}^{abcd}(x,y,z,t)$ is logarithmically divergent, the five-point vertex $\Gamma_{\mu\nu\alpha\beta\gamma}^{abcde}(x,y,z,t,p)$ is convergent, and so on.
Unlike the abelian case, where the only triangle diagram makes the contribution in the anomaly, here more diagrams contribute. Precisely, we know that non-zero anomaly in triangle diagram requires non-zero coefficient $$ D_{abc}^{L/R} \equiv \text{tr}[t_{a},\{t_{b},t_{c}\}]_{L/R} $$ The box diagram (with the requirement of the Bose symmetry) is proportional to $$ \tag 2 D_{abcd}^{L/R} \equiv \text{tr}[t_{a}\{t_{b},[t_{c},t_{d}]\}] = if_{cde}D^{L/R}_{abe}, $$ while the pentagon diagram - to $$ \tag 3 D_{abcde}^{L/R} \equiv \text{tr}[t_{a}t_{[b}t_{c}t_{d}t_{e]}] \sim f_{r[bc}f_{de]s}D^{L/R}_{ars} $$ Therefore, it seems that they also make the contribution in the anomaly. Moreover, from the Wess-Zumino consistency conditions we see that at leas the box diagram makes the contribution in the non-abelian anomaly.
I have two questions because of this.
1) The chiral anomaly arises in the result of impossibility of defining the local (in terms of momenta) action functional generating the counterterm which cancels the gauge invariance breaking corrections in n-point vertices. The triangle diagram is linearly divergent, and because of Bose symmetry it can be shown that the only non-local action can generate the anomaly in the limit of small momenta. In this spirit, we can cancel the box and pentagon diagrams (which diverge linearly) by adding the local counterterms (precisely, the linear shift of the momentum of integration doesn't cause the appearance of anomalous surface terms), so I don't understand why they make the contribution in the anomaly $(1)$.
2) If there is the reason why they can't be cancelled by adding the counterterm, what about hexagonal diagrams and so on? Why do they vanish? Because of something like Jacobi identity for structure constants?
An edit
It seems that the answer is that the following diagrams make the contribution in the anomaly $(1)$, but not because of $(2)$, $(3)$ (the latter just shows that box and pentagon diagrams anomalous contribution vanishes if there is no triangle anomaly). The reason to make the contribution is in the structure of the anomalous Ward identities.
Suppose we're dealing with consistent anomaly. Then we have (I've omitted the subscript $L/R$), by the definition, $$ -\text{A}_{a}(x) = \delta_{\epsilon_{a}(x)}\Gamma \equiv \partial_{\mu}^{x}\frac{\delta\Gamma}{\delta A_{\mu,a}(x)} + f_{abc}A_{\mu,b}(x)\frac{\delta \Gamma}{\delta A_{\mu,c}(x)} $$ The Ward identities for the $n$-point vertex are obtained by taking $n-1$ functional derivatives with respect to $A_{\mu_{i},a_{i}}$ and setting $A_{\mu_{i},a_{i}}$ to zero. It can be shown that the Ward identities for the derivative of the 4-point vertex (which is logarithmically divergent) contain 3-vectex functions which are anomalous. Therefore we see that the 4-point vertex also contribute to the anomaly (not by itself, since it is only logarithmically divergent, but through linearly diverging 3-point vertex).
What's about the 5-point vertex? The Ward identities for its derivative contain only the 4-point function, so in first sight it seems that it doesn't make the contribution in the anomaly. However, this is not true in particular cases. Indeed, if one of currents $\text{J}_{\mu}^{a}$ running in the loop is the global one, we can preserve the gauge invariance by pumping the anomaly to the $\text{J}^{a}_{\mu}$ conservation law. This is realized in particular by changing the 4-point vertex (not its derivative!) by the anomalous polynomial. Therefore the Ward identity for 5-point vertex becomes anomalous. However, even in this case this vertex may give no contribution in the anomaly (there is situation when the global current is the abelian one); in this case the $A^{4}$ term in the anomaly vanishes identically due to the group arguments.
This also illustrates why there are no anomalous contribution from the derivative of hexagonal diagrams, and higher.
quantum-field-theory renormalization quantum-anomalies chirality 1pi-effective-action
Name YYY
Name YYYName YYY
$\begingroup$ Hi! Sorry for bothering in an old post, but I have a question I think you are able to answer: how does the minus sign composing the commutator inside the trace in the box/pentagon diagrams show up? I understand how the anticommutator does, but cannot see why a minus sign would appear. Can you help me? Thank you! $\endgroup$ – GaloisFan Apr 9 at 4:34
The higher polygonal diagrams do not contribute to the anomaly. If you examine the actual computation of the higher polygonal diagrams, you will find that they, due to their lower/non-existent divergence, cancel out. This would also happen for the triangle diagram (one can make the diagram formally cancel by shifiting the loop variable), and this is only forbidden because you cannot find a renormalization scheme that respects that symmetry.
By power counting, the diagram with the highest divergence and therefore the only one contributing to the anomaly is the $k+1$-gon in $2k$ dimensions.
ACuriousMind♦ACuriousMind
Not the answer you're looking for? Browse other questions tagged quantum-field-theory renormalization quantum-anomalies chirality 1pi-effective-action or ask your own question.
Why can i replace a gauge field by the current it couples to in the calculation of a greens function?
Anomaly, Ward identity
Scalar Yukawa theory
From gauge anomaly to chiral anomaly
Index theorem and UV and IR face of chiral anomaly
Locality of Wess-Zumino terms and Goldstone bosons
Interpretation of the chiral anomaly a-la Alvarez-Gaume
Anomaly of the $\text{U}(1)$-$\text{SU}(2)$-$\text{SU}(3)$ triangle diagram
|
CommonCrawl
|
Solution index
About Fridge Physics
igcse-edexcel-physics-solution Archives - Page 2 of 3 - Fridge Physics
igcse-edexcel-physics-solution
Resistance is an electrical quantity that measures how a device or material reduces the electrical current flow through it.
What is Resistance?
Resistance is an electrical quantity that measures how the device or material reduces the electrical current flow through it. Resistance is measured in ohms (Ω).If we make a comparison of resistance to water flow in pipes, the resistance is greater when the pipe is thinner, so the water flow is decreased It slows down which also happens to the flow of electricity.
Resistance equation
To calculate Resistance we write the equation like this.
$V = { \mathit I \, \mathit R} $
Resistance demo
In this tutorial you will learn how to calculate the resistance in an electrical circuit.
Chilled practice question
Calculate the resistance of a bulb supplied with 8 V and a current flow of 2 A.
Frozen practice question
Calculate the current in a circuit which has a resistance of 16 Ω and a potential difference of 8 V.
Science in context
Resistance reduces the flow of electricity.
Millie's Master Methods
Millie's Magic Triangle
The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie's…
Calculation Master Method
Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation…
The Fridge Physics Store
Teacher Fast Feedback
Feedback to students in seconds – Voice to label thermal bluetooth technology…
Get Fridge Physics Merch
Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!…
Click here to get merched up!
The power of an appliance is the energy that is transferred per second. Electric power is the rate, per unit time at which electrical energy is transferred by an electric circuit.
What is Electrical power?
The power of an appliance is the energy that is transferred per second. Electric power is the rate, per unit time, at which electrical energy is transferred by an electric circuit. The unit for power is the watt, which is the transfer of energy at the rate of one joule per second. Electric power can be produced by electric generators and batteries.
Electrical power equation
To calculate Electrical power we use this equation.
$ {\mathit P \, \text = \mathit V \mathit I}$
Electrical power demo
In this tutorial you will learn how to calculate the electrical power of an electrical appliance.
Calculate the power in a circuit when a p.d of 18 V and a current of 4 A is measured.
Find the current flowing through an appliance which has a power rating of 14 KW and a p.d of 230 V.
The power of an appliance is the energy that is transferred per second
Density is a measure of how compact the particles are in a substance. Density is defined as the mass per unit volume.
What is Density
Density is a measure of how compact the particles are in a substance. Density is defined as the mass per unit volume. It is mathematically defined as mass divided by volume: ρ = m/V. The density(ρ) of a substance is the total mass (m) of that substance divided by the total volume (V) Of the substance.
Density equation
To calculate Density we use this equation.
${\rho} = {\text{m} \over\text{V}}$
Density demo
In this tutorial you will learn how to calculate the density of different substances.
A block has a mass of 20 Kg and a volume of 0.25 m3. Calculate its density.
A barrel has a mass of 2500 g and a density of 2 Kg/m3. Calculate the barrels volume.
Density is a measure of how compact the particles are in a substance.
The radioactivity of a sample decreases over time. Half life is a measurement of this decrease.
What is Half life?
The radioactivity of a sample decreases over time. Half life is a measurement in this decrease.The half–life of a radioactive substance is a constant for each radioactive material. It measures the time it takes for a known amount of the substance to become reduced by half due to radioactive decay, and therefore, the emission of radiation.
Half life demo
In this tutorial you will learn how to calculate the half life of a radioactive material.
The activity of a radio-isotope is 1536 cpm. What is its activity after five half lives?
A radio-isotopes activity falls from 1880 cpm to 235 cpm in 72 minutes. Calculate it's half life.
The radioactivity of a sample decreases over time. Half life is a measurement of this decrease
Lifting an object in a gravitational field transfers energy into the objects gravitational energy store. Gravitational potential energy is the energy an object has due to its height above Earth.
What is Gravitational potential energy?
Lifting an object in a gravitational field transfers energy into the objects gravitational energy store. Gravitational potential energy is the energy an object has due to its height above Earth. The equation for gravitational potential energy is GPE = mgh, where m is the mass in kilograms, g is the gravitational field strength (9.8 N/Kg on Earth), and h is the height above the ground in meters.
Gravitational potential energy equation
To calculate Gravitational potential energy we use this equation.
$E_p = mgh$
Gravitational potential energy demo
In this tutorial you will learn how to calculate the energy stored in an elevated object.
Copy out the question and attempt to calculate the answer before watching the solution. Write down the equation and show all of your working, remember to add the units to your answer, this routine will guarantee you maximum marks in an exam. Mark your solution and correct if needed.
A barrel is lifted onto a shelf 3.5 m from the ground. The barrel has a mass of 22 Kg. Calculate the energy in its G.P.E store.
A ski lift transfers 11 KJ of energy into a mans G.P.E energy store. The man has a mass of 55 Kg, calculate the height he was elevated.
Lifting an object in a gravitational field transfers energy into the objects gravitational energy store.
The efficiency of a device is the proportion of input energy that is converted to useful energy.
What is Efficiency?
The efficiency of a device is the proportion of input energy that is converted to useful energy. Efficiency is a measure of how much work or energy is conserved in an energy transfer, work or energy can be lost, for example as wasted heat energy. The efficiency is the useful energy output, divided by the total energy input, and can be given as a decimal always less than 1 or a percentage. No machine is 100% efficient.
Efficiency equation
To calculate Efficiency we use this equation.
$efficiency = {\text{useful energy output} \over\text{total energy input}}$
Efficiency demo
In this tutorial you will learn how to calculate how efficient a device is at transferring energy from one form to another.
Calculate the efficiency of a light bulb with a total input energy of 500 J. The bulb emits 200 J of light energy and 300 J of heat energy.
A tumble drier is 80% efficient. Its useful energy is 45 KJ what is the total input energy in Joules.
The efficiency of a device is the proportion of the total input energy that is converted to useful energy.
Energy Transformed
The potential difference between two points is the energy transferred per unit charge. An electrical circuit is an energy transformation device.
What is Energy transformed?
The potential difference between two points is the energy transferred per unit charge. An electrical circuit is an energy transformation device. Energy is provided to the circuit by an electrochemical cell, battery, generator or another electrical energy source. Energy is delivered by the circuit. The rate at which this energy transformation takes place has a great relevance to the design of an electrical circuit for useful functions.
Energy transformed equation
To calculate Energy transformed we use this equation.
Energy = power x time
$ {\mathit E \, \text = \mathit P \mathit t}$
Energy = charge flow x potential difference
$ {\mathit E \, \text = \mathit Q \mathit V}$
Energy transformed demo
In this tutorial you will learn how to calculate the energy transformed through an electrical circuit.
Calculate the energy transformed if a circuit is supplied with 6 V and 32 C of charge flows.
Calculate the energy transformed when a current of 4 A flows with a p.d of 230 V for 18 s.
The potential difference between two points is the energy transferred per unit charge
Forces and Work Done
The unit for work done is the joule (J), or Newton meter (N-m). One joule is equal to the amount of work that is done when 1 N of force moves an object over a distance of 1 m.
In this tutorial you will learn how to calculate the work done to move an object through a known distance.
Note-Some mobile devices may require you to tap full screen during playback to view video content.
The formula for this equation is written like this:
w = f x d
How much work has been done if a car is pushed along the road 550 cm with a force of 400 N ?
75 J of work is done pulling a trolley 150 cm. Calculate the force applied.
Work can be calculated with the equation: Work = Force × Distance. The unit for work is the joule (J), or Newton • meter (N • m). One joule is equal to the amount of work that is done when 1 N of force moves an object over a distance of 1 m.
All moving objects have momentum. Forces can cause changes in momentum. The total momentum in a collision or explosion is conserved and stays the same. Car safety features absorb energy involved in a crash they slow down the collision thus reducing the force of impact.
In this tutorial you will learn how to calculate the momentum in a moving object.
The equation is written like this:
$P = { \text m\; \text x \; \text v}$
Calculate the momentum of a 1200 Kg vehicle travelling at 5 m/s.
A girl has a mass of 60 Kg. How fast is she moving if her momentum is 240 Kgm/s ?
Change in Momentum
In this tutorial you will learn how to calculate the change in momentum and the affect it has on impact forces.
$F = { mv \unicode{x2013} mu \over \text{t}}$
Calculate the force acting if there is a change in momentum from zero to 168 Kgm/s for 7 s.
A 50 kg gymnast is thrown in the air and travels at 5 m/s. The contact time to throw the gymnast is 0.5 seconds. Calculate the force acting on the gymnast.
.All moving objects have momentum. Forces can cause changes in momentum. The total momentum in a collision or explosion is conserved and stays the same. Car safety features absorb energy involved in a crash they slow down the collision thus reducing the force of impact.
FRIDGEPHYSICS LTD
Company registration no. 12539977. Registered in England and Wales.
Website Terms & Conditions Of Supply
|
CommonCrawl
|
Matrix Rank Calculator
Created by Maciej Kowalski, PhD candidate
Reviewed by Anna Szczepanek, PhD and Jack Bowater
What is a matrix?
Definition: the rank of a matrix
How to find the rank of a matrix?
Example: using the matrix rank calculator
Welcome to the matrix rank calculator, where you'll have the opportunity to learn how to find the rank of a matrix and what that number means. In short, it is one of the basic values that we assign to any matrix, but, as opposed to the determinant, the array doesn't have to be square. The idea of matrix rank in linear algebra is connected with linear independence of vectors. In particular, a full rank matrix is an array whose rows are all linearly independent, and such objects are of particular interest to mathematicians.
You know how, when you get bored, you stare at the paint patterns on the ceiling and your mind begins to wonder and, before you know it, you've come up with a whole new system of categorizing the style of paint chips? Mathematicians are no different. Maths all started when one of them was sent (by their spouse, no doubt) to fetch some turkey for dinner and a couple of apples for the kids. One turkey, two apples. Easy enough, those are simple numbers. They come quite naturally, so we call them just that - natural numbers.
But groceries cost money, so buying all of that made a small dent in the household budget: −$10-\text{\textdollar}10−$10. This new negative number lumped in with the natural numbers to make up the so-called integers. But that's not all! The kids are fussy enough, so the apples had to be cut into halves and then into quarters. Those new values were called fractions and we grouped with what we had so far to form the rational numbers.
And then there came that Pythagoras guy from across the yard with his theorem which introduced some ugly new numbers that he called square roots. What is more, he declared that π\piπ, used in circle calculations, is also one and called the whole lot the real numbers. But that must have been the end of it, right? Surely there can't be anything more, can there?
Well, oddly enough, mathematics didn't end there. While Isaac Newton was bored enough to invent calculus, some other mathematicians figured out even more numbers and called them complex numbers. Although both are quite interesting and extremely useful, that's not why we're here.
A matrix is an array of elements (usually numbers) that has a set number of rows and columns. An example of a matrix would be
A=(3−1021−1)\scriptsize A=\begin{pmatrix} 3&-1\\ 0&2\\ 1&-1 \end{pmatrix}A=(301−12−1)
Moreover, we say that a matrix has cells, or boxes, into which we write the elements of our array. For example, matrix AAA above has the value 222 in the cell that is in the second row and the second column. The starting point here is 1-cell matrices, which are, for all intents and purposes, the same thing as real numbers.
As you can see, matrices came to be when a scientist decided that he needed to write a few numbers concisely and operate with the whole lot as a single object. As such, they are extremely useful when dealing with:
Systems of equations, as you can discover at Omni's Cramer's rule calculator;
Vectors and vector spaces;
3-dimensional geometry (e.g., the dot product and the cross product);
Eigenvalues and eigenvectors; and
Graph theory and discrete mathematics.
When we operate with regular numbers, we usually want to add them, take their fractions, and so on. More or less, the same is possible with matrices, but it tends to get messy. Adding and subtracting arrays is simple enough, we taught you to do it in the blink of an eye at our matrix addition calculator, but when we move on to multiplication, it gets tricky, trust us. Not to mention division, which here is not an operation in itself, but rather a multiplication by the inverse, which sometimes doesn't even exist. And let's not forget about the cofactor matrix and adjoint matrix...
Fortunately, all of the above is a matter for a different time and a different calculator. We're here to see how to find the rank of a matrix, and that's what we'll focus on now.
Rank in linear algebra is a number that we assign to any matrix. It is the maximal number of linearly independent rows of the matrix. Equivalently, though it's not at all obvious at first glance, it is also the maximal number of linearly independent columns. But what does all this fancy language really mean?
The definition comes from looking at a matrix row by row (or column by column). As such, we can think of our array with nnn rows as nnn separate lines of numbers. Such objects, i.e., matrices with one row, are called vectors, and they are elements of so-called vector spaces. For example, the numerical axis, the Cartesian plane, and 3-dimensional space are all examples of vector spaces.
We say that vectors v⃗1\vec{v}_1v1, vecv2vec{v}_2vecv2, v⃗3\vec{v}_3v3, ..., v⃗n\vec{v}_nvn are linearly independent if the equation
a1×v⃗1+a2×v⃗2+a3×v⃗3+...+an×v⃗n=0 \scriptsize a_1\times\vec{v}_1+a_2\times\vec{v}_2+a_3\times\vec{v}_3+...+a_n\times\vec{v}_n = 0 a1×v1+a2×v2+a3×v3+...+an×vn=0
where a1a_1a1, a2a_2a2, a3a_3a3, ..., ana_nan are some real numbers, is true if and only if a1=a2+a3=...=an=0a_1=a_2+a_3=...=a_n=0a1=a2+a3=...=an=0. Equivalently, at least one of the vectors is the sum of the other ones (with some multiplicities).
🔎 For an in-depth analysis of linear dependence and independence, we invite you to visit the linear independence calculator!
Just to paint a picture, when we are on the real plane (vectors are just pairs of real numbers), then two linearly independent vectors will span the whole plane (we say that we have a full rank matrix in this case). This means that any point, i.e., any pair of real numbers, can be represented as a linear sum of the two vectors (sum of the two with some multiplicities). However, if they are linearly dependent, then this will not be possible, and the pair will only span a line instead. Since, in this case, we only have two objects, it will mean that one is a multiple of the other.
Rank in linear algebra is a tool that keeps track of linear independence, what vector space we're in, and the vector space's dimension. So, now that we know what to use it for, let's see how to find the rank of a matrix.
There are several ways to figure out the rank of a given matrix. Arguably, the simplest one is Gaussian elimination, or its slightly modified version, Gauss-Jordan elimination. They rely on so-called elementary row operations to modify the matrix into its (reduced) row echelon form — the form you can discover at Omni's (reduced) row echelon form calculator). From there, we can easily read out the rank of the matrix.
The operations are:
Exchanging two rows of the matrix;
Multiplying a row by a non-zero constant; and
Adding to a row a non-zero multiple of a different row.
The key property here is that, although the above operations change our matrix, they don't change its rank. In other words, learning how to find the rank of a matrix boils down to learning the Gauss (or Gauss-Jordan) algorithm.
Say that your matrix is:
(a1a2b1b2c1c2)\scriptsize\begin{pmatrix} a_1&a_2\\ b_1&b_2\\ c_1&c_2\\ \end{pmatrix}(a1b1c1a2b2c2)
Then, provided that a1a_1a1 is not zero, the first step of the Gaussian elimination will transform the matrix into something in the form
(a1a20s20t2)\scriptsize\begin{pmatrix} a_1&a_2\\ 0&s_2\\ 0&t_2 \end{pmatrix}(a100a2s2t2)
with some real numbers s2s_2s2 and t2t_2t2. Then, as long as s2s_2s2 is not zero, the second step will give the matrix
(a1a20s200)\scriptsize\begin{pmatrix} a_1&a_2\\ 0&s_2\\ 0&0 \end{pmatrix}(a100a2s20)
Now we need to observe that the bottom row represents the zero vector (it has 000's in every cell), which is linearly dependent with any vector. Therefore, the rank of our matrix will simply be the number of non-zero rows of the array we obtained, which in this case is 222.
In particular, observe that, whatever we'd done, we couldn't have obtained the third row non-zero since every consecutive number in that row was eliminated by one of the rows above. This means that a 3×23 \times 23×2 can never be a full rank matrix, and further translates to the following general rule: if AAA is a matrix of size n×mn \times mn×m, then
rank(A)≤min(n,m)\scriptsize\mathrm{rank}(A)\leq \mathrm{min}(n,m) rank(A)≤min(n,m)
Phew, that was quite some time pondering over theory. How about we move onto a numerical example and see the matrix rank calculator in action?
Suppose that you're on a date in a fancy restaurant, and your partner challenges you into a matrix rank calculating competition. Apparently, it's a new viral TikTok challenge, so what can you do?
You ask a gentleman struggling with a steak on the table next to you for an example of a matrix. Obviously, he happily obliges.
A=(02−11012−13114)\scriptsize A=\begin{pmatrix} 0&2&-1\\ 1&0&1\\ 2&-1&3\\ 1&1&4 \end{pmatrix}A=⎝⎛012120−11−1134⎠⎞
Ready? Set? Go!
Luckily enough, you know the Omni Calculator website inside out and visit the matrix rank calculator straight away. To make it work in your favor, we first need to tell the calculator what we're dealing with. It's a matrix of size 4×34 \times 34×3, so we input 444 under the number of rows, and 333 under the number of columns. This will show us a symbolic example of a matrix similar to ours. We just need to give it the correct numbers.
According to the picture, the first row has elements a1a_1a1, a2a_2a2, and a3a_3a3, so we look back at our array and put its first row under these symbols:
a1=0a2=2a3=−1\scriptsize\begin{split} a_1&=0\\ a_2&=2\\ a_3&=-1 \end{split}a1a2a3=0=2=−1
Similarly, we input the other three rows:
b1=1b2=0b3=1\scriptsize\begin{split} b_1&=1\\ b_2&=0\\ b_3&=1 \end{split}b1b2b3=1=0=1
c1=2c2=−1c3=3\scriptsize\begin{split} c_1&=2\\ c_2&=-1\\ c_3&=3 \end{split}c1c2c3=2=−1=3
d1=1d2=1d3=4\scriptsize\begin{split} d_1&=1\\ d_2&=1\\ d_3&=4 \end{split}d1d2d3=1=1=4
Once we input the last number, the matrix rank calculator will spit out the rank of our matrix. Unfortunately, just as it was about to do so, your date makes you put the phone down and points out that it'll be more fun to see how much time it takes to do it without any fancy tools. Oh well, it looks like we'll have to calculate it by hand, after all.
First of all, we see that the first element of the first row is 000. We don't like zeros - we can't use them in Gauss-Jordan elimination to get rid of the other numbers in that column. So why don't we exchange the first row with the second?
(10102−12−13114)\scriptsize\begin{pmatrix} 1&0&1\\ 0&2&-1\\ 2 &-1&3\\ 1 &1&4 \end{pmatrix}⎝⎛102102−111−134⎠⎞
Now that's more like it! With this, we can take care of the 222 and the 111 in the bottom two rows. To do this, we add a suitable multiple of the first one to these rows, so that we'll obtain zeros in the whole of the first column, apart from the first row. Since we have 111 to work with and 2+(−2)×1=02 + (-2)\times1 = 02+(−2)×1=0 and 1+(−1)×1=01 + (-1)\times1 = 01+(−1)×1=0, we add a (−2)(-2)(−2) multiple of the first row to the third one, and a (−1)(-1)(−1) multiple to the fourth. Observe that we don't have to do anything with the second row since we already have 000 there.
(10102−12+(−2)×1−1+(−2)×03+(−2)∗11+(−1)×11+(−1)×04+(−1)∗1)=(10102−10−11013)\scriptsize \begin{split} &\begin{pmatrix} 1&0&1\\ 0&2&-1\\ 2 + (-2)\times1&-1 + (-2)\times0&3 + (-2)*1\\ 1 + (-1)\times1&1 + (-1)\times0&4 + (-1)*1 \end{pmatrix}=\\ &\begin{pmatrix} 1&0&1\\ 0&2&-1\\ 0&-1&1\\ 0&1&3 \end{pmatrix} \end{split}⎝⎛102+(−2)×11+(−1)×102−1+(−2)×01+(−1)×01−13+(−2)∗14+(−1)∗1⎠⎞=⎝⎛100002−111−113⎠⎞
Alright, we've lost quite some time trying to use the matrix rank calculator before, so we need to speed up a little.
We move on to the second column. We'd like to use the 222 in the second row to eliminate the −1-1−1 and the 111 from the two bottom rows. Just as before, we add a suitable multiple of the second row: this time, it'll be 0.50.50.5 for the third row, and −0.5-0.5−0.5 for the last.
(10102−10−1+0.5×21+0.5×(−1)01+(−0.5)×23+(0.5)×(−1))=(10102−1000.5002.5)\scriptsize \begin{split} &\begin{pmatrix} 1&0&1\\ 0&2&-1\\ 0&-1 + 0.5\times2&1 + 0.5\times(-1)\\ 0&1 + (-0.5)\times2&3 + (0.5)\times(-1) \end{pmatrix}=\\ &\begin{pmatrix} 1&0&1\\ 0&2&-1\\ 0&0&0.5\\ 0&0&2.5 \end{pmatrix} \end{split}⎝⎛100002−1+0.5×21+(−0.5)×21−11+0.5×(−1)3+(0.5)×(−1)⎠⎞=⎝⎛100002001−10.52.5⎠⎞
You see your partner nervously scribbling on her piece of paper, and the gentleman on the table next to you is cheering you on. Time for the last step.
Now we'd like to get rid of the 2.52.52.5 in the fourth row using the 0.50.50.5 from the third one. We add a multiple of (−5)(-5)(−5) to obtain:
(10102−1000.5002.5+(−5)×0.5)=(10102−1000.5000)\scriptsize \begin{split} &\begin{pmatrix} 1&0&1\\ 0&2&-1\\ 0&0&0.5\\ 0&0&2.5 + (-5)\times0.5 \end{pmatrix}=\\ &\begin{pmatrix} 1&0&1\\ 0&2&-1\\ 0&0&0.5\\ 0&0&0 \end{pmatrix} \end{split}⎝⎛100002001−10.52.5+(−5)×0.5⎠⎞=⎝⎛100002001−10.50⎠⎞
The matrix has three non-zero rows, which means that rank(A)=3\mathrm{rank}(A) = 3rank(A)=3. You look triumphantly at your date and declare yourself the winner. The gentleman next to you is clapping, and you decide to celebrate it with a slice of chocolate cake. That much fun deserves a nice dessert and a good tip, don't you think?
Maciej Kowalski, PhD candidate
Number of columns
⌈ a1 a2 ⌉
| b1 b2 |
⌊ c1 c2 ⌋
First row
a₁
a₂
Second row
b₁
b₂
c₁
c₂
⌈ ⌉
rank | | = 0
⌊ ⌋
Check out 35 similar linear algebra calculators 🔢
Adjoint matrixCharacteristic polynomialCholesky decomposition… 32 more
BMR - Harris-Benedict equation
Harris-Benedict calculator uses one of the three most popular BMR formulas. Knowing your BMR (basal metabolic weight) may help you make important decisions about your diet and lifestyle.
Harris-Benedict Calculator (Total Daily Energy Expenditure)
Cofunction
The cofunction calculator is here to find the cofunction of the trigonometric function you choose for a given angle between 0 and 90 degrees.
Cofunction Calculator
Find the area, perimeter, or length of sides for a heptagon using our heptagon calculator.
Heptagon Calculator
Ideal egg boiling
Quantum physicist's take on boiling the perfect egg. Includes times for quarter and half-boiled eggs.
Ideal Egg Boiling Calculator
Other Math calculators
|
CommonCrawl
|
Passive sensing around the corner using spatial coherence
M. Batarseh1,
S. Sukhov1,
Z. Shen1,
H. Gemar1,
R. Rezvani1 &
A. Dogariu1
Nature Communications volume 9, Article number: 3629 (2018) Cite this article
Imaging and sensing
When direct vision is obstructed, detecting an object usually involves either using mirrors or actively controlling some of the properties of light used for illumination. In our paradigm, we show that a highly scattering wall can transfer certain statistical properties of light, which, in turn, can assist in detecting objects even in non-line-of-sight conditions. We experimentally demonstrate that the transformation of spatial coherence during the reflection of light from a diffusing wall can be used to retrieve geometric information about objects hidden around a corner and assess their location. This sensing approach is completely passive, assumes no control over the source of light, and relies solely on natural broadband illumination.
Imaging systems map spatially the distribution of light across an object onto a distant observation plane for further recording and processing. Of course, when objects are too distant or too small to be satisfactorily described by an imaging system, only unresolved sensing is available for estimating physical properties of the object. Whether the object is actively illuminated in a controlled manner, or it is self-luminous, or it is subject to some passive ambient lighting, the imaging procedure is typically constrained by the need for direct view to the object1.
In non-line-of-sight conditions, an ideal "specular" reflector such as a mirror preserves most of the light properties, including the wavefront, and the imaging procedure is similar to the direct line-of-sight case. Decreasing the mirror's specularity hinders this capability. A shattered mirror alters the directionality of reflected light and, as a result, only a distorted version of the image can be transferred as illustrated in Fig. 1. The blur can be mitigated if the disturbance can be quantified. Unfortunately, because of the random nature of surface scattering, there are no simple deterministic approaches like ray tracing or conventional diffraction theories to describe the relationship between the incident and reflected optical fields. The situation is further complicated if the light is redirected by a diffusing wall when the interaction is not limited to the surface of the random medium but it extends throughout its volume. In these conditions, recovering the incident wavefront is challenging. The complicated process can be described in terms of the associated transfer matrix, which can be found by controlling the properties of radiation before and after the scattering medium2,3,4,5,6,7.
Different non-line-of-sight sensing conditions. a A perfect reflector permits imaging around the corner. b A broken mirror alters the optical wavefront and impedes forming a clear image. c A random medium will alter the reflection even more due to both surface and volume scattering contributions
Nonetheless, some these limitations can be alleviated by an active control of the illumination source. For instance, one can employ time-of-flight approaches to gate the time necessary for light emerging from a controllable source to first reach an object and then a detector capable of discriminating the transient time8,9. Imaging angularly small targets hidden around a corner is also possible when using additional measurements performed on reference objects10 or when the scene is illuminated with temporally coherent light11,12,13,14. Sometimes, when an object is diffusively illuminated by a laser and its reflection generates a nonuniform intensity distribution across the scattering wall, detecting the evolution of this intensity allows tracking the object's movement15,16.
Unfortunately, the sensing conditions are significantly more restrictive when one does not have access to the source of illumination. If the object does not generate intensity variations that can be measured, one cannot reconstruct an image in the conventional intensity-based sense1. However, even in this rather limiting situation, the object itself acts as the primary (if self-luminous) or the secondary source of partially coherent radiation and relevant information about the object is carried by the statistical properties of the radiated field. The remaining practical question is: do these field properties survive the interaction with scattering obstructions?
In this paper, we demonstrate that spatial correlations of the electromagnetic field can be transferred between the incident and reflected fields in spite of the random nature of interaction with a multiple-scattering medium. Specifically, we show that scattering from randomly inhomogeneous media does not completely destroy the spatial coherence of radiation. This means that a multiple-scattering wall can act as a "broken mirror" for spatial coherence and its distortions can be partially mitigated. We demonstrate that this effect permits retrieving information about the size and shape and allows determining the location of an object even in non-line-of-sight situations.
Spatial coherence transfer in reflection off diffusive wall
We consider the situation where radiation from an incoherent source (target) reflects off a scattering surface, e.g. a painted wall, and propagates further until it reaches a detector, which can measure its spatial coherence function (SCF) \({\mathrm{\Gamma }}\left( {{\mathbf{r}},{\mathbf{s}}} \right) = \left\langle {E\left( {{\mathbf{r}} + ({{\mathbf{s}}}/{2}}) \right)E^ \ast \left( {{\mathbf{r}} - ({{\mathbf{s}}}/{2}}) \right)} \right\rangle\). Here, E(r) is the electric field at the location r and s is the distance between the points for which the field similarity is being measured (shear).
It is well known how coherence evolves in free-space propagation17. Thus, certain information about the source can always be extracted by measuring the coherence of the light at distant locations18. However, upon reflection from a scattering medium, it is expected that SCF is affected in a way that may complicate this reconstruction procedure. Let us examine the general situation of partially coherent light incident onto a scattering medium as shown in Fig. 2a. Intuitively, one can anticipate that the coherence degrades due to the additional randomization of light and the information about the source of light deteriorates. To mitigate the influence of this interaction, one needs to understand how the coherence properties transform during reflection.
Anisotropic transfer of spatial coherence. a Schematic representation of the field reflected from a diffusive wall and its SCF assessed for in-plane s|| and out-of-plane s⊥ shears. b, d Angular distributions of specific intensity corresponding to 60° and 80° angle of incidence, respectively. c, e Corresponding degrees of the spatial coherence. The incident light is fully coherent spatially and the coherence function of the output is evaluated next to the surface. Parameters of the scattering medium are indicated in the Methods. The mean slope of surface roughness of the simulated medium is σ = 0.07 rad
The transformation of SCF in reflection is well understood only for homogeneous, plane−parallel interfaces19. Earlier studies also addressed, to a certain degree, the phenomenology of coherence degradation but only in transmission through inhomogeneous media20,21. Recently, we developed a Monte Carlo technique that permits estimating the transformation of SCF in multiple-scattering media22. This method uses the directions u = (uT, uz) and weights of the "photons" leaving the random medium to evaluate the specific intensity of the scattered field IS(r, u) from which the SCF can be evaluated through a Wigner transform23:
$${\mathrm{\Gamma }}\left( {{\mathbf{r}},{\mathbf{s}}} \right) = {\int} {I_{\mathrm{S}}} \left( {{\mathbf{r}},{\mathbf{u}}} \right)\frac{{{\mathrm{exp}}({\mathrm{i}}k\,{\mathbf{s}}{\mathbf{u}}_{\mathrm{T}})}}{{\left| {u_z} \right|}}{\mathrm{d}}^2u_{\mathrm{T}},$$
where k is the wavenumber. The partially coherent beam propagates along the z-axis and uT is the projection of vector u onto a plane perpendicular to z. To treat the reflection from realistic scattering media, we augmented this method with a proper description of the surface roughness (see Methods and Supplementary Note 1). Monte Carlo simulations show that light reflected from inhomogeneous media can be effectively described as the superposition of a multiple-scattering component originating in the bulk and the single scattering at the surface (Supplementary Note 1). We found that for typical painted walls the volume scattering randomizes significantly the set of directions u corresponding to the incident field and, according to Eq. (1), the coherence information carried by this component is severely altered or even destroyed. However, the inherent single scattering at the surface of any diffusive wall leads to a much smaller randomization of the field, as we will show later.
Energetically, the volume scattering overwhelms the surface one. In the total energy balance, the contribution of surface scattering is only 4% for normal incidence and increases for larger angle of incidence (see Supplementary Note 1). Nevertheless, close to the specular direction, the specific intensity IS(r, u) corresponding to surface scattering can be quite high in the case of relatively smooth surfaces as illustrated in Fig. 2b, d. As can be seen, the single scattering contributions lie on top of a much broadly spread background corresponding to the volume scattering but this could be effectively isolated by restricting the angular range of a measurement, i.e. the field of view.
The coherence function is obtained from the specific intensity using Eq. (1) and, as can be seen in Fig. 2c, e, its extent is rather limited spatially. But, most interestingly, the coherence degradation process is not isotropic. We find that, perpendicular to the scattering plane, the spatial coherence Γ(s⊥) survives much better than for in-plane s| shears. In fact, this difference between the two corresponding coherence lengths, \(l_{\mathrm c}^ \bot\) and \(l_{\mathrm c}^\parallel\), increases with the angle of incidence, which is an effect closely related to the "glitter path" phenomenon: the elongated reflection of a low Sun or Moon on the water's surface. In this case, the angular spread of wavevectors is determined by the angle of incidence θ and the properties of the rough surface24,25. From the Wigner transformation in Eq. (1), one can then infer the coherence length \(l_{\mathrm c}^ \bot \propto \left( {\sigma \,\,{\mathrm{cos}}\,\,\theta } \right)^{ - 1}\).
We analyze this effect in detail using both Monte Carlo simulations and the complex-valued SCF measurements using the Dual Phase Sagnac Interferometer (DuPSaI) procedure detailed in Supplementary Note 3 26. A typical example of measured SCF for reflection from a diffusive wall (estimated transport mean free path 0.9 µm) is presented in Fig. 3a showing a significant difference between in-plane and off-plane shears. Moreover, in Fig. 3b one can clearly see the monotonic behavior of \(l_{\mathrm c}^ \bot\) over a significant range of angles of incidence θ. The fact that, in certain conditions, the spatial coherence survives in spite of the medium's diffusiveness can be used to recover information about the source even in non-line-of-sight circumstances.
"Glitter path" effect in reflection from random media. a Experimental values of the normalized spatial coherence for in-plane s|| and off-plane s⊥ shear corresponding to 80° incidence angle. b Experimental and simulated values of the off-plane coherence length \(l_{\mathrm c}^ \bot\) as a function of the angle of incidence θ. Both the source and detection system are located 1 m away from the multiple-scattering wall. The procedure of measurement is detailed in Supplementary Note 3. The solid line represents the Monte Carlo fit to the experimental data from which the average slope of the surface roughness was estimated to be σ = 0.07 rad. The dashed line is the corresponding analytical expression \(l_{\mathrm c}^ \bot \propto \left( {\sigma \cos \theta } \right)^{ - 1}\). The coherence length (half-width at half-maximum of SCF) of the field incident on the wall is 132 µm. The error bars represent the standard deviation of four independent measurements of coherence length
Using spatial coherence to estimate the distance to target
An analytical description for the transformation of the complex SCF in reflection was derived in Supplementary Note 2. The coherence function Γ(r, s; z) of the reflected field is essentially the product of the free-space coherence function Γ0(r, s; z) propagating the total distance z = z1 + z2 and an apodizing function ΓA(s), which depends on both the distance from the object to the wall z1 and the distance from the wall to the DuPSaI z2. The phase of the measured complex SCF from reflection coincides with the phase of SCF of a light field propagating in free space over the same distance. For light propagating in free space, the angular position of an incoherent source is encoded in the phase of complex coherence function27. The phase of SCF in the observation plane, ψ = (k/z)s⊥y, depends on the total distance z to the object, the shearing s⊥ and the displacement y of the detector with respect to the optical axis (as shown in more detail in Supplementary Note 5). Thus, to extract the absolute distance to the source, one can perform measurements of the complex coherence function at several locations and then triangulate to find the object location. The procedure is somewhat similar to the binocular disparity (parallax) concept, i.e. the positional difference between the two projections of a given point in space, and is similar to the way in which the location of nearby stars is determined in astronomy28.
As a result, the distance to the object can be obtained from multiple phase measurements of the reflected SCF at different positions y as schematically depicted in Fig. 4a. In our demonstration, the incoherent source was created by illuminating a rough object (7.5 cm square) with broadband light emitted from an LED with 30 nm bandwidth and a central wavelength of 525 nm. Light propagated z1 distance, bounced off a rough scattering wall covered with a thick layer of white paint, and the complex coherence function of the reflected field was measured at a distance z2 away as shown in Fig. 4a. The phase of SCF was evaluated in the direction s⊥ that minimizes the coherence degradation. Multiple measurements were performed by displacing the detector up to 4 cm away from the specular direction. The measured phase map is shown in Fig. 4b. By linearly fitting the phase map with the expression ψ = (k/z)s⊥y along shear s⊥ for a known displacement y, the total distance can be recovered. In this example, the SCF phase obtained for a displacement y of 3 cm (dotted line in Fig. 4b) and 4 cm was sufficient to recover the 180 cm total distance z to the object with a precision better than 2%.
Distance recovery from coherence measurements. a The object-to-wall distance is z1 = 80cm, the angle of incidence is θ = 80°, and the complex spatial coherence function of the scattered field is measured at z2 = 100 cm from the diffusing wall. The coherence detector (DuPSaI) was translated up to 4 cm from the optical axis as indicated. b Two-dimensional phase map of the measured coherence function corresponding to different transversal position of DuPSaI, the phase measurement that was used to recover the total distance to the target is represented by the dotted line
Using spatial coherence to evaluate target size and shape
The apodization effect of the diffusing wall mentioned before can be numerically evaluated from known properties of the scattering wall or it can be obtained directly by measuring the SCF in conditions similar to the one illustrated in Fig. 3. Of course, this apodizing function should be properly scaled \({\mathrm{\Gamma }}_{\mathrm{A}}\left( {{\boldsymbol{s}};z_1} \right) = {\mathrm{\Gamma }}_{\mathrm{A}}\left( {\alpha {\boldsymbol{s}};z_1^\prime } \right)\) according to the overall distance to the target. The scaling factor \(\alpha = ({{z_1}}/({{z_1 + z_2}}))(({{z_1^\prime + z_2}})/{{z_1^\prime }})\)can be estimated in advance from the phase measurements as shown before and it depends on the distance z2 from the wall to the DuPSaI and two different distances z1 and \(z_1^\prime\) between the source and the wall. The entire procedure is detailed in Supplementary Note 4.
By measuring Γ(r, s), we were able to recover the unperturbed SCF, Γ0(r, s), by dividing the coherence function reflected from the wall by the apodizing function, \({\mathrm{\Gamma }}_0\left( {{\mathbf{r}},{\boldsymbol{s}}} \right) = {\mathrm{\Gamma }}\left( {{\mathbf{r}},{\boldsymbol{s}}} \right){\mathrm{\Gamma }}_{\mathrm{A}}^{ - 1}({\boldsymbol{s}})\). Of course, the quality of this reconstruction depends on both ΓA(s) and the level of inherent noise in an experiment. Nevertheless, in practical applications the recovery procedure is essentially influenced only by the width of ΓA(s), which is represented by the extent of \(l_{\mathrm c}^ \bot\) shown in Fig. 3b.
From this, effectively unperturbed SCF, the one-dimensional projection of the intensity distribution across the target can then be found through a Fourier transformation
$$I_s\left( {u_y} \right) = \frac{k}{{2\pi }}{\int} {{\mathrm{\Gamma }}_0} \left( {s_y} \right){\mathrm{exp}}\left( { - {\mathrm{i}}ks_yu_y} \right){\mathrm{d}}s_y$$
as follows from van Cittert−Zernike theorem29. In general, the procedure is valid along any direction of shear and the entire intensity distribution across the source could be recovered. The scattering from the diffusing wall however affects the coherence information differently along different directions as shown in Fig. 3. In the following we use the out-of-plane s⊥ shearing direction where the spatial coherence is least affected. For objects which are uniformly illuminated, the reconstructed intensity distribution provides geometric information about the object and its angular dimension as demonstrated in Fig. 5. Furthermore, using the known distance z estimated from the SCF phase, the angular dimensions can be directly converted to absolute values.
Shape recovery from coherence measurements. a, b The intensity distribution across the DuPSaI field of view corresponding to the square and equilateral triangle objects, respectively. c, d Plots of real and imaginary components of SCF measured for the square and equilateral triangle objects, respectively. The imaginary component is color coded and superposed on the 3D representation of the real part of SCF. e, f Variations of real and imaginary SCF components at y = 0. The corresponding apodizing function ΓA(s) is also indicated by dashed lines. g, h The 1D projection of the intensity distributions recovered from SCF measurements (solid lines) together with the actual intensity profiles evaluated across the targets (dotted lines)
Two examples of shape reconstruction are illustrated in Fig. 5. The two targets are equal-area objects, one that is symmetric along the shear direction (square) and one that is not (triangle). We emphasize that in our conditions of operation there are no discernable intensity variations across the field of view as can be seen in Fig. 5a, b. Therefore, in this far-field setting, traditional imaging approaches fail and the targets are unresolved. The complex SCFs, however, are quite different as seen in 5c, d. Notably, the difference in the object symmetry reveals itself in the imaginary parts of the measured SCFs shown in Fig. 5e,f26. Moreover, the one-dimensional projection of the intensity distributions along the direction of shear are recovered rather well, which allows to differentiate the shape of the objects as seen in Fig. 5g, h. Within the current field of view of our shearing-based experimental setup, the Pearson coefficient evaluated with respect to the expected intensity profile is 0.93 and 0.89 for the square and triangle, respectively.
Traditional optical imaging requires either straight-line access to the object or a specific arrangement of specular reflectors that create a wrapped version of unobstructed imaging. Non-line-of-sight sensing can also be achieved but only by purposely controlling some of the properties of light during the measurement process. In this Letter, we have shown that information about a non-line-of-sight object can be obtained completely passively without using mirrors and without any access to the source of natural light. For this, we exploit a higher-dimensionality degree of freedom of the optical field. We have shown that the spatial coherence properties of light are not completely destroyed upon reflection from a scattering medium especially for shears perpendicular to the plane of incidence ("glitter path" effect). Moreover, the effect of incoherent volume scattering can be effectively suppressed in practice by limiting the field-of-view of the detection instrument. This proves that, in certain conditions of incidence, a diffuse reflector can act as a "broken mirror" for the complex coherence function of light, which can still permit recovering relevant information about the object.
The recovery procedure was validated using measurements along the out-plane direction where the coherence information is best preserved. Extensions of this method for two-dimensional shape recovery are possible using a plurality of four-dimensional SCF measurements within the available space. Additional information about the scene such as the statistical properties of the illuminating radiation can be recovered from higher-order coherence measurements that go beyond field−field correlations. Finally, in the present demonstration we used an incoherent reflector as our object. However, the approach can be easily extended to absorbing targets by invoking the Babinet's complementarity principle30.
We have considered circumstances when the light, whether produced by the object or originating from another source, reaches the detector only after intermediate scattering from a diffusive wall. This generic setting where the direct vision is impeded is typical for numerous sensing applications ranging from medicine to defense.
For the Monte Carlo simulations of volume scattering, we used typical parameters of white paints: TiO2 particles with a diameter 200 nm, refractive index 2.6763, and a fractional volume 10% distributed in a matrix with refractive index 1.5. The thickness of the simulated layer is 0.6 mm. We found that the Kirchhoff approximation for the description of surface roughness and a Gaussian distribution of the local slopes31 allows both a simple Monte Carlo implementation and a satisfactory description of experimental results. The mean surface slope was determined by matching the outcome of the Monte Carlo simulation to the measured SCF of reflected light for different angles of incidence ranging from 50° to 80°. From the small value of the slope variance (70 mrad) obtained from the fitting one can conclude that for these materials the shadowing effects are insignificant32.
Multiply scattering wall
For the reflection experiments reported here we used a diffusing reflector consisting of a large area drywall painted with commercial white paint (BEHR Premium Plus Ultra Pure White Eggschell Zero VOC interior paint).
SCF measurements
The complex SCF was measured using a fully automated wavefront shearing interferometer. The instrument combines a Sagnac interferometer integrated with a telescopic imaging system and permits determining the real and imaginary part of the complex SCF from only two measurements, thus the name Dual Phase Sagnac Interferometer (DuPSaI)26.
For the coherence measurements, the light source was a high-power LED with bandwidth of 30 nm centered at 525 nm and commercial diffuser (Thorlabs, Solis-525C, 600Grit) with a diameter of 3 mm. The in-plane and off-plane coherence measurements were performed by rotating the Dove prism inside the DuPSaI detector.
The incoherent objects consisted of a rough metallic painted square and an equilateral triangle having the same area of 22.86 cm2. The objects were placed at 80 cm from the diffusive wall, which, in turn, was positioned at 1 m from the input aperture of the DuPSaI. The objects were illuminated from the same spatially incoherent source produced by the high-power LED with a diameter of 2 inch and a 600Grit diffuser.
The data that support the findings of this study are available from the corresponding author upon reasonable request
Mait, J. N., Euliss, G. W. & Athale, R. A. Computational imaging. Adv. Opt. Photonics 10, 409–483 (2018).
ADS Article Google Scholar
Vellekoop, I. M. & Mosk, A. Focusing coherent light through opaque strongly scattering media. Opt. Lett. 32, 2309–2311 (2007).
ADS Article PubMed CAS Google Scholar
Popoff, S., Lerosey, G., Fink, M., Boccara, A. C. & Gigan, S. Image transmission through an opaque material. Nat. Commun. 1, 81 (2010).
Kohlgraf-Owens, T. & Dogariu, A. Finding the field transfer matrix of scattering media. Opt. Express 16, 13225–13232 (2008).
ADS Article PubMed Google Scholar
Popoff, S. et al. Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media. Phys. Rev. Lett. 104, 100601 (2010).
He, H., Guan, Y. & Zhou, J. Image restoration through thin turbid layers by correlation with a known object. Opt. Express 21, 12539–12545 (2013).
Xu, X., Liu, H. & Wang, L. V. Time-reversed ultrasonically encoded optical focusing into scattering media. Nat. Photonics 5, 154–157 (2011).
ADS Article PubMed PubMed Central CAS Google Scholar
Velten, A. et al. Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging. Nat. Commun. 3, 745 (2012).
O'Toole, M., Lindell, D. B. & Wetzstein, G. Confocal non-line-of-sight imaging based on the light-cone transform. Nature 555, 338 (2018).
Xu, X. et al. Imaging objects through scattering layers and around corners by retrieval of the scattered point spread function. Opt. Express 25, 32829–32840 (2017).
Katz, O., Small, E. & Silberberg, Y. Looking around corners and through thin turbid layers in real time with scattered incoherent light. Nat. Photonics 6, 549–553 (2012).
ADS Article CAS Google Scholar
Katz, O., Heidmann, P., Fink, M. & Gigan, S. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photonics 8, 784–790 (2014).
Edrei, E. & Scarcelli, G. Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect. Optica 3, 71–74 (2016).
Singh, A. K., Naik, D. N., Pedrini, G., Takeda, M. & Osten, W. Looking through a diffuser and around an opaque surface: a holographic approach. Opt. Express 22, 7694–7701 (2014).
Klein, J., Peters, C., Martín, J., Laurenzis, M. & Hullin, M. B. Tracking objects outside the line of sight using 2D intensity images. Sci. Rep. 6, 32491 (2016).
Gariepy, G., Tonolini, F., Henderson, R., Leach, J. & Faccio, D. Detection and tracking of moving objects hidden from view. Nat. Photonics 10, 23–26 (2016).
Mandel, L. & Wolf, E. Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, UK, 1995).
Beckus, A., Tamasan, A., Dogariu, A., Abouraddy, A. F. & Atia, G. K. Spatial coherence of fields from generalized sources in the Fresnel regime. J. Opt Soc. Am. A 34, 2213–2221 (2017).
Wang, W., Simon, R. & Wolf, E. Changes in the coherence and spectral properties of partially coherent light reflected from a dielectric slab. J. Opt. Soc. Am. A 9, 287–297 (1992).
Cheng, C.-C. & Raymer, M. Long-range saturation of spatial decoherence in wave-field transport in random multiple-scattering media. Phys. Rev. Lett. 82, 4807 (1999).
Cheng, C.-C. & Raymer, M. Propagation of transverse optical coherence in random multiple-scattering media. Phys. Rev. A 62, 023811 (2000).
Shen, Z., Sukhou, S., & Dogariu, A. Monte Carlo method to model optical coherence propagation in random media. J. Opt. Soc. Am. A 34, 2189–2193 (2017).
Pierrat, R., Elaloufi, R., Greffet, J.-J. & Carminati, R. Spatial coherence in strongly scattering media. J. Opt. Soc. Am. A 22, 2329–2337 (2005).
Adam, J. A. A Mathematical Nature Walk, Vol. 137 (Princeton University Press, Princeton, NJ, 2011).
Lynch, D. K., Dearborn, D. S. & Lock, J. A. Glitter and glints on water. Appl. Opt. 50, F39–F49 (2011).
Naraghi, R. R. et al. Wide-field interferometric measurement of a nonstationary complex coherence function. Opt. Lett. 42, 4929–4932 (2017).
Goodman, J. W. Statistical Optics (John Wiley & Sons, Hoboken, NJ, 2015).
Freedman, R., Geller, R. & Kaufmann, W. J. Universe: The Solar System (Macmillan, London, 2010).
Goodman, J. Introduction to Fourier Optics (McGraw Hill, New York, NY, 2008).
Sukhov, S. et al. Babinet's principle for mutual intensity. Opt. Lett. 42, 3980–3983 (2017).
Ogilvy, J. Wave scattering from rough surfaces. Rep. Progress. Phys. 50, 1553–1608 (1987).
ADS MathSciNet Article Google Scholar
Smith, B. Geometrical shadowing of a random rough surface. IEEE Trans. Antennas Propag. 15, 668–671 (1967).
The authors thank A. Tamasan, G. Atia, and A. Abouraddy for helpful discussions. This work was partially funded by Defense Advanced Research Projects Agency (DARPA) (HR0011-16-C-0029).
CREOL, The College of Optics and Photonics, University of Central Florida, Orlando, FL, 32816, USA
M. Batarseh, S. Sukhov, Z. Shen, H. Gemar, R. Rezvani & A. Dogariu
M. Batarseh
S. Sukhov
Z. Shen
H. Gemar
R. Rezvani
A. Dogariu
M.B., H.G., R.R. and Z.S., performed the experiments and contributed materials and analysis tools. All authors contributed to designing the experiments, data analysis, and writing the manuscript.
Correspondence to A. Dogariu.
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Batarseh, M., Sukhov, S., Shen, Z. et al. Passive sensing around the corner using spatial coherence. Nat Commun 9, 3629 (2018). https://doi.org/10.1038/s41467-018-05985-w
DOI: https://doi.org/10.1038/s41467-018-05985-w
Keyhole Imaging:Non-Line-of-Sight Imaging and Tracking of Moving Objects Along a Single Optical Path
Christopher A. Metzler
, David B. Lindell
& Gordon Wetzstein
IEEE Transactions on Computational Imaging (2021)
Vector wave simulation of active imaging through random media
Zhean Shen
& Aristide Dogariu
Journal of the Optical Society of America A (2020)
Measuring Complex Degree of Coherence of Random Light Fields with Generalized Hanbury Brown–Twiss Experiment
Zhaofeng Huang
, Yahong Chen
, Fei Wang
, Sergey A. Ponomarenko
& Yangjian Cai
Physical Review Applied (2020)
Deep-inverse correlography: towards real-time high-resolution non-line-of-sight imaging
, Felix Heide
, Prasana Rangarajan
, Muralidhar Madabhushi Balaji
, Aparna Viswanath
, Ashok Veeraraghavan
& Richard G. Baraniuk
Optica (2020)
Imaging around corners in the mid-infrared using speckle correlations
Shawn Divitt
, Dennis F. Gardner
& Abbie T. Watnik
Optics Express (2020)
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.
Editors' Highlights
Top Articles of 2019
Nature Communications ISSN 2041-1723 (online)
|
CommonCrawl
|
Experimental evaluation of the WiMAX downlink physical layer in high-mobility scenarios
Pedro Suárez-Casal1,
José Rodríguez-Piñeiro1,
José A García-Naya1 &
Luis Castedo1
The experimental evaluation of the WiMAX downlink physical layer in high-mobility scenarios is extremely difficult to carry out because it requires the realization of measurement campaigns with expensive fast-moving vehicles. In this work, however, we succeeded in doing such an experimental evaluation with a low-mobility vehicle. The key idea is the enlargement of the symbol period prior to its transmission over the air. Such enlargement reduces the frequency spacing between the orthogonal frequency division multiplexing (OFDM) subcarriers in WiMAX transmissions and hence induces significant inter-carrier interference (ICI) on the received signals. The performance impact of such ICI in terms of error vector magnitude (EVM) and throughput is analyzed under different conditions like the use of multiple antennas, the placement of the receive antennas, or the accuracy of the channel information to adapt the transmission rate.
Existing mobile communication networks are primarily designed for low user speeds below 15 km/h. Nowadays, however, there is an increased number of wireless terminals mounted on high-speed vehicles such as cars, trains, buses, subways, or airplanes. High-mobility communication networks (HMCN) aim at interconnecting such terminals to convey information not only for human users but also for machines to send all sorts of command, control, and safety information.
WiMAX is a communication standard suitable for the provision of wireless broadband connectivity. WiMAX is a term coined by the WiMAX Forum to promote the interoperability between the IEEE 802.16 family of wireless communication standards. WiMAX is the first commercially available and deployed technology for delivering mobile fourth generation (4G) services. IEEE 802.16 standardization activities started in 1999 and were the first ones to address broadband for wireless metropolitan area networks. Although long-term evolution (LTE) is currently being more widely used by 4G mobile network operators, specially in Europe and the United States, there is still a significant amount of network operators based on WiMAX, specially in Asia [1]. More specifically, WiMAX is being used in high-mobility scenarios like the metropolitan transportation system of Japan [1]. In addition, the WiMAX Forum has recently instituted an Aviation Working Group within its organizational structure to collaborate on the adaptation of WiMAX to the specific needs of the aviation community [2].
The physical layer (PHY) of the WiMAX radio interface uses orthogonal frequency division multiplexing (OFDM) as the modulation scheme. OFDM is particularly suitable to carry data over broadband frequency-selective channels because it allows for a low-cost channel equalization. OFDM waveforms, on the contrary, are rather sensitive when transmitting over time-selective channels such as those encountered in high-mobility scenarios. Time-selectivity causes the OFDM subcarriers to loose their orthogonality property and produces inter-carrier interference (ICI). Time-selectivity is particularly harmful when considering complex scenarios with multiple antennas and/or multiple users.
Different techniques have been proposed in the literature to mitigate the effects of ICI. More specifically, issues like pilot pattern design [3], window design [4], channel estimation [5], and channel equalization have been thoroughly studied.
Basis expansion models (BEMs) have been extensively used to capture the double selectivity of the wireless channel and to implement cost-effective channel estimation and equalization schemes [6]. The rationale under BEM is the efficient representation of either the impulse or the frequency response of a channel by means of a linear combination of some basis vectors. Different BEMs have been proposed to be used under doubly-selective channels, such as the complex exponential BEM (CE-BEM) [7], polynomial BEM (P-BEM) [8,9], discrete prolate spheroidal BEM (DPS-BEM) [10], and Karhunen-Loève BEM (KL-BEM) [11,12]. Regarding pilot patterns for OFDM under ICI conditions, the Kronecker delta model, where guard subcarriers are allocated around pilot subcarriers, has been proposed as an optimal design to minimize channel estimation errors [13,14]. A detailed review of the algorithms and techniques to combat double-selectivity of the wireless channel can be found in [15].
Communication standards such as WiMAX or LTE define the pilot structure to be used by the elements in the network. Such pilot structures typically do not correspond to those assumed in many ICI estimation and equalization techniques proposed in the literature. In this work, we focus on specific algorithms which do not depend on such assumptions [12,16,17]. These methods typically estimate the ICI from a set of consecutive frequency response estimations and exploit time variations per subcarrier to estimate its spreading.
Experimental performance evaluations of 4G technologies in high-mobility situations are scarce in the literature due to the huge difficulties of carrying out experiments in such scenarios. In this work, we follow an approach where the time-selective wireless channels of high-mobility situations are recreated from experiments carried out at low speeds and hence are more cost-efficient to implement [18]. The approach consists of time-interpolating OFDM symbols prior to its transmission over-the-air (OTA), which leads to a reduction of the bandwidth of the whole OFDM signal. This produces OFDM waveforms which convey exactly the same information as the original one but with a reduced subcarrier spacing, hence artificially increasing the waveform sensitivity to ICI. At reception, and prior to its demodulation, the interpolation operation is inverted via a simple decimation operation. The resulting OFDM symbols are affected by ICI similarly as if they were transmitted over a high-mobility wireless channel. In fact, interpolating the original signal by a factor I will affect the transmitted signal similar to what would happen if it were transmitted at I times the original speed.
In this paper, we extend preliminary results obtained using this technique and already reported in [18,19]. The main contributions of this paper are as follows:
Evaluation of ICI estimation and cancellation algorithms with experimental measurements of multiple-input multiple-output (MIMO) transmissions under time-selective conditions. These results are compared to simulations with comparable model parameters.
Experimental study of the impact of ICI and feedback delay on performance metrics such as throughput.
Improved measurement methodology with respect to that used in previous works ([18,19]). Instead of carrying out a dedicated measurement for each emulated speed value, the signals corresponding to all emulated speeds (time interpolation factors) are transmitted sequentially right after each other, thus making possible their latter comparison under similar conditions. Additionally, two receivers are used simultaneously, and therefore outdoor-to-outdoor as well as outdoor-to-indoor measurements are recorded under similar conditions.
This work focuses on OFDM waveforms based on the WiMAX physical layer. Howerver, note that the experimental methodology also applies to other OFDM-based transmissions, such as those used in LTE. Although the results are obtained for a particular standard, this work provide hints on the behavior of other OFDM-based transmissions on high-mobility conditions and on the performance of the ICI cancellation techniques in different scenarios.
Signal model
We consider the transmission of MIMO-OFDM symbols synthesized according to the PHY layer specifications of the IEEE 802.16e (Mobile WiMAX) standard. We assume OFDM symbols with N subcarriers and a cyclic prefix of N g samples are transmitted. In total, each OFDM symbol occupies N t =N+N g samples. OFDM symbols are grouped in frames of K consecutive symbols, including a symbol reserved as a preamble. We also assume multiple antennas at transmission and reception, i.e., MIMO-OFDM transmissions.The number of transmit and receive antennas is M T and M R , respectively. OFDM symbols are spatially multiplexed over the M T transmit antennas except the preamble which is transmitted only through the first antenna.
Let \({\mathbf {s}}_{k}^{(m)} \in \mathbb {C}^{N \times 1}\), k=1,…,K, m=1,…,M T be the column vector that represents the N complex-valued information symbols transmitted in the kth OFDM symbol over the mth antenna. Such vectors contain data, pilot, and guard symbols. Similarly, \({\mathbf {x}}_{k}^{(m)} \in \mathbb {C}^{N_{t} \times 1}\) contains the N t samples corresponding to the kth OFDM symbol transmitted over the mth antenna.
Elaborating the signal model of a MIMO-OFDM system, the discrete-time representation of the transmitted MIMO-OFDM symbols is:
$$ {\mathbf{x}}_{k} = \left({\mathbf{I}}_{M_{T}} \otimes {\mathbf{G}}_{1}{\mathbf{F}}^{H}\right) {\mathbf{s}}_{k}, \quad k=1,\ldots, K, $$
where \({\mathbf {s}}_{k} = \left [ {{\mathbf {s}}_{k}^{(1)}}^{T},{{\mathbf {s}}_{k}^{(2)}}^{T},\cdots,{{\mathbf {s}}_{k}^{(M_{T})}}^{T} \right ]^{T}\) is the N M T ×1 column vector containing the information, pilot, and guard symbols transmitted in the kth MIMO-OFDM symbol; F is the standard N×N discrete Fourier transform (DFT) matrix; G 1 is a N t ×N matrix which appends the N g samples of the cyclic prefix; ⊗ denotes the Kronecker product; \({\mathbf {I}}_{M_{T}}\) is the M T ×M T identity matrix; and \({\mathbf {x}}_{k} = \left [ {{\mathbf {x}}_{k}^{(1)}}^{T},{{\mathbf {x}}_{k}^{(2)}}^{T},\cdots,{{\mathbf {x}}_{k}^{(M_{T})}}^{T} \right ]^{T}\) is the N t M T ×1 column vector with the samples of the kth MIMO-OFDM symbol. We assume the samples in \({\mathbf {x}}_{k}^{(m)}\) are serially transmitted over the mth antenna at a sampling rate F s =1/T s where T s is the sampling period.
The information symbol vectors are constructed as \({{\mathbf {s}}_{k}^{(m)}={\mathbf {P}}_{k}^{(m)}{\mathbf {p}}_{k}^{(m)}+{\mathbf {D}}_{k}^{(m)}{\mathbf {d}}_{k}^{(m)}}\) where \({\mathbf {p}}_{k}^{(m)}\) is a P×1 vector containing the pilot symbols in the kth OFDM symbol transmitted over the mth transmit antenna, whereas \({\mathbf {p}}_{k}^{(m)}\) is the N×P matrix that defines the positions of the pilots in such a symbol. Similarly, \({\mathbf {d}}_{k}^{(m)}\) is the D×1 vector containing the data symbols, and \({\mathbf {d}}_{k}^{(m)}\) is the N×D matrix that defines their positions. Data symbols are the output of a quadrature amplitude modulation (QAM) constellation mapper whose inputs are the channel encoded source bits. Note that P+D<N, with N−(P+D) being the number of guard subcarriers. Matrices \({\mathbf {p}}_{k}^{(m)}\) and \({\mathbf {d}}_{k}^{(m)}\) consist of ones and zeros only and are designed so that data and pilots are assigned to different subcarriers. According to the Mobile WiMAX standard, pilot subcarriers allocated in the mth antenna are set to 0 in all other antennas.
We next assume that the previous OFDM waveforms are transmitted over a time-varying MIMO channel. Such a channel is represented by:
$$ {\fontsize{8.8pt}{9.6pt}\selectfont{\begin{aligned} {}h^{i,j}(t,\tau) \,=\, \sum_{l=1}^{L} h^{i,j}_{l}(t)\delta\left(\!\tau - \tau^{i,j}_{l}\!\right)\!, i=1, \ldots, M_{R} \ \text{and}\ j = 1, \ldots, M_{T}, \end{aligned}}} $$
where h i,j(t,τ) is the channel impulse response between the jth transmit antenna and ith receive antenna, while \(h^{i,j}_{l}(t)\) and \(\tau ^{i,j}_{l}\) are the lth path gain and delay of h i,j(t,τ), respectively. Elaborating on the discrete-time signal model of a MIMO-OFDM system, the MIMO channel during the transmission of the kth OFDM symbol is represented by the block matrix:
$$ {\mathbf{H}}_{k} = \left[ \begin{array}{cccc} {\mathbf{H}}_{k}^{1,1} & {\mathbf{H}}_{k}^{1,2} & \cdots & {\mathbf{H}}_{k}^{1,M_{T}} \\ {\mathbf{H}}_{k}^{2,1} & {\mathbf{H}}_{k}^{2,2} & \cdots & {\mathbf{H}}_{k}^{2,M_{T}} \\ \cdots & \cdots & \cdots & \cdots \\ {\mathbf{H}}_{k}^{M_{R},1} & {\mathbf{H}}_{k}^{M_{R},2} & \cdots & {\mathbf{H}}_{k}^{M_{R},M_{T}} \\ \end{array} \right], $$
where \({\mathbf {H}}_{k}^{i,j} \in \mathbb {C}^{N_{t} \times N_{t}}\) are matrices representing the channel impulse response between the (i,j) antenna pair whose entries are:
$$ {}{\mathbf{H}}^{i,j}_{k}(r,s) = h^{i,j}(((k-1)N_{t} + N_{g} + r - 1)T_{s}, \text{mod}(r-s,N)T_{s}). $$
Note that \({\mathbf {H}}_{k}^{i,m} {\mathbf {x}}_{k}^{(m)}\) represents the time-convolution between the signal transmitted over the mth antenna and h i,m(t,τ).
Having in mind the previous channel representation, the discrete-time received signal is given by:
$$ {\mathbf{r}}_{k} = \left({\mathbf{I}}_{M_{R}} \otimes {\mathbf{F}}{\mathbf{G}}_{2}\right) {\mathbf{H}}_{k} {\mathbf{x}}_{k} + {\mathbf{w}}_{k} = {\mathbf{G}}_{k}{\mathbf{s}}_{k}+{\mathbf{w}}_{k}, $$
where G 2 is a N×N t matrix which represents the removal of the cyclic prefix, G k is a block matrix containing the channel frequency response between each antenna pair, and \({\mathbf {w}}_{k} \in {\mathbf {C}}^{N_{t} \times 1}\) is a vector of independent complex-valued Gaussian-distributed random variables with variance \({\sigma ^{2}_{w}}\). When the channel is time invariant, all submatrices in G k are diagonal. However, when the channel is time-variant, entries different than zero appear outside their main diagonals, hence introducing ICI between the transmitted subcarriers. In such case, Equation 5 is rewritten as:
$$ {\mathbf{r}}_{k} = \bar{{\mathbf{G}}}_{k}{\mathbf{s}}_{k}+{\mathbf{z}}_{k}+{\mathbf{w}}_{k}, $$
where \(\bar {{\mathbf {G}}}_{k}\) is a block matrix with the main diagonals of the submatrices of G k and \({{\mathbf {z}}_{k} = ({\mathbf {G}}_{k}-\bar {{\mathbf {G}}}_{k}){\mathbf {s}}_{k}}\) represents the ICI in the received signal. Note that in multi-antenna systems, ICI occurs not only among subcarriers but also among different transmit antennas.
When transmitting MIMO-OFDM symbols over time-varying channels, the amount of ICI relates to the normalized Doppler spread given by D n =f d T, where f d is the maximum Doppler frequency and T=N T s is the duration of an OFDM symbol excluding the cyclic prefix. According to our previous proposal [18], it is possible to adjust the parameter T by time interpolation by a factor I, yielding an OFDM symbol duration T I=I T s N. Therefore, the normalized Doppler spread impacting the time-interpolated OFDM signal is:
$$ {D^{I}_{n}}=f_{d} T^{I} = f_{d} I T_{s} N = \frac{T_{s} N I f_{c} v}{c} = \frac{T_{s} N f_{c}}{c} v^{I}, $$
with f c the carrier frequency, c the speed of light, and v I=I v the emulated speed as a result of an actual measurement speed v and a time-interpolation factor I. Consequently, enlarging the symbol length T I allows for the emulation of a velocity v I, which is I times higher than the actual speed of the receiver, namely v.
Figure 1 shows the graphical representation of a measurement setup designed according to the previous premises. Interpolating the MIMO-OFDM waveforms prior to its transmission allows for emulating high-velocity conditions while conducting low-velocity experiments. Due to the interpolation step, signals over the air will suffer from severe ICI degradation, although the maximum Doppler frequency is low.
Block diagram of the outdoor-to-outdoor measurement setup. The transmit antennas are placed outdoors, and the receive antennas are installed outside the car on its roof. The interpolator at the transmitter side and the decimator at the receiver side enable the recreation of high-speed conditions during low-speed experiments. The corresponding outdoor-to-indoor setup is basically the same. The only difference is that receiving antennas are placed inside the car instead of outside.
It should be noticed that interpolation does not allow for a perfect recreation of high-mobility channels because the signals over-the-air have a reduced bandwidth and are less sensitive to the channel frequency selectivity.
Nevertheless, note that in this work, we are mostly interested in conducting experiments to test the performance of ICI cancellation methods in WiMAX receivers over real-world channels rather than channel equalization methods which can be tested in static experiments. Time interpolation does not reproduce the exact conditions of high mobility scenarios but provide a cost-efficient approximation to them.
Receiver structure
Figure 2 shows the block diagram of the receiver structure utilized along this work. The samples captured with the hardware equipment are first decimated to undo the interpolation applied at the transmitter to induce the ICI. The decimated samples are input to an OFDM receiver that performs frame detection, and time and fractional carrier frequency offset (CFO) estimation before the DFT; and channel estimation, ICI cancellation, data subcarriers equalization, and channel decoding after the DFT. Figure 2 shows the general structure of the receiver. The same OFDM receiver is used to obtain the experimental results and the simulation results.
Block diagram of the receiver structure with decimation and the OFDM receiver with Inter-Carrier Interference (ICI) cancellation. The OFDM receiver is the same for the experimental and the simulation results.
The ensuing subsections present a more detailed description of each processing block represented in Figure 2.
Frame detection and synchronization
Frame detection is carried out using the correlation properties of the preamble symbol. WiMAX defines a preamble symbol with pilot subcarriers generated from a pseudo-noise sequence modulated as binary phase-shift keying (BPSK), with a spacing of two guard subcarriers between them. This structure leads to a threefold repetition in the time domain, which can be exploited for frame detection and time and fractional carrier frequency offset (CFO) estimation. Integer frequency shifts are corrected after the DFT by performing a cross-correlation between the differential sequences of the transmitted and received preamble pilot sequences [20,21].
ICI estimation
The frequency response matrix G k in the received MIMO-OFDM signal model given by Equation 5 needs to be estimated. We start determining a noisy frequency response estimation as a least squares (LS) estimation on the pilot subcarriers, affected by the interferences z k and w k , followed by a linear minimum mean squared error (LMMSE) interpolation of the channel coefficients on the data subcarriers. Recall that pilot subcarriers are transmitted over disjoint sets of subcarriers at different antennas. Assuming spatially uncorrelated channels, this estimation can be done independently for each transmit-receive antenna pair. For simplicity reasons, antenna indices will be dropped in the following expressions.
The previous frequency response estimations can be expressed as:
$$ \hat{{\mathbf{g}}}_{k} ={\mathbf{A}}\text{diag}\left\{ {\mathbf{p}}_{k} \right\}^{-1}{\mathbf{P}}_{k}^{T}{\mathbf{r}}_{k}, $$
where \({\mathbf {A}} = {\mathbf {C}}_{hh_{p}}({\mathbf {C}}_{h_{p}h_{p}} + {\sigma ^{2}_{I}}{\mathbf {I}}_{P})^{-1}\) is a N×P matrix with the LMMSE interpolation coefficients, \({\mathbf {C}}_{hh_{p}} = {\mathbf {P}}_{k}^{T}\mathrm {E}\{ {\mathbf {g}}_{k} {\mathbf {g}}_{k}^{T} \}\) and \({\mathbf {C}}_{h_{p}h_{p}}={\mathbf {P}}_{k}^{T}\mathrm {E}\{{\mathbf {g}}_{k} {\mathbf {g}}_{k}^{T}\}{\mathbf {P}}_{k}\) are submatrices of the covariance matrix of the channel frequency response, and \(\hat {{\mathbf {g}}}_{k}\) is a N×1 vector with the estimated channel coefficients per subcarrier. The second-order statistics of the channel are estimated by using all the received pilot subcarriers in a frame. Also, the power \({\sigma ^{2}_{z}}\) of the ICI arising from z k in Equation 6 is estimated according to [22], and it is considered as part of \({\sigma ^{2}_{I}} = {\sigma ^{2}_{w}}+{\sigma ^{2}_{z}}\). As explained below, these frequency response estimations play a fundamental role in the estimation of the ICI.
Indeed, the ICI frequency response matrix G k can be estimated by expressing the subcarrier spreading in terms of a BEM represented by a R N t ×Q matrix \({\mathbf {B}}=[{\mathbf {B}}_{1}^{T}, {\mathbf {B}}_{2}^{T}, \cdots, {\mathbf {B}}_{R}^{T}]^{T}\), where the dimensions of B r are N t ×Q, with r=1,…,R. Matrix B represents the basis vector for the time evolution of the channel during R consecutive OFDM symbols, with Q the order of the BEM. The BEM allows for expressing the frequency response matrix G k in terms of a matrix C of size Q×N. Using BEM, the estimate of the rth symbol in the group can be written as:
$$ \hat{{\mathbf{G}}}_{r} = \sum_{q = 1}^{Q} {\mathbf{F}}{\mathbf{G}}_{2} \text{diag}\left\{{\mathbf{b}}_{r,q}\right\} {\mathbf{G}}_{1}{\mathbf{F}}^{H} \text{diag}\left\{\hat{{\mathbf{c}}}_{q}\right\}, $$
where b r,q is the qth column of B r , G 1 and G 2 are the matrices which append and remove the cyclic prefix in Equation 5, and \(\hat {{\mathbf {c}}}_{q}\) is a N×1 vector with the columns of the estimate \(\hat {{\mathbf {C}}}^{T}\).
We next define the N×R matrix \(\hat {{\mathbf {J}}}_{r} = \left [ \hat {{\mathbf {g}}}_{r},\hat {{\mathbf {g}}}_{r+1},\cdots,\hat {{\mathbf {g}}}_{r+R-1} \right ]\), with r∈[1,K−R+1]. Such a matrix enables us to estimate the matrix C of BEM coefficients by means of the following LS regression [17]:
$$ \hat{{\mathbf{C}}}_{r} = \left({\mathbf{K}}^{H} {\mathbf{K}}\right)^{-1}{\mathbf{K}}^{H} \hat{{\mathbf{J}}}_{r}^{T}, $$
where K is a R×Q matrix whose columns are obtained from B as:
$$ {\fontsize{8.7pt}{9.6pt}\selectfont{\begin{aligned} {}{\mathbf{k}}_{q}\! =\! \left[b_{q}\left(N_{g}+N/2\right), b_{q}\left(N_{t}+N/2\right), \cdots, b_{q}\left((N/2)\,+\,(R\,-\,1)N_{t}\right)\right]\!. \end{aligned}}} $$
Finally, the estimates \(\hat {{\mathbf {c}}}_{q}\) in Equation 9 are obtained from the columns of \(\hat {{\mathbf {C}}}_{r}^{T}\), which provide the coefficients for the channel matrices G r of the group.
The results in this work are all obtained by using the discrete prolate spheroidal BEM (DPS-BEM) [10]. Such a BEM is built on the Slepian sequences arising from time sequences whose energy is localized on a given frequency interval. For the purpose of ICI estimation, the frequency interval is the one corresponding to the Doppler spectrum, whose domain is bounded by the maximum Doppler frequency. Recall that the use of DPS-BEMs is equivalent to assuming a time-selective channel with a flat Doppler spectrum. Consequently, to determine the specific DPS-BEM appropriate for a given scenario, it is necessary to know the mobile speed, and more specifically, the emulated speed after interpolation [10].
ICI cancellation and equalization
Once the full frequency response matrices for all antenna pairs are obtained, both ICI cancellation and equalization of the received signal are done to obtain estimates of the information symbols s k . In the literature, both block interference cancellation (BIC) and sequential interference cancellation (SIC) schemes have been proposed [15,23]. In this work, a LMMSE SIC receiver of seven taps is used to remove the ICI and equalize the channel. Such a method has been chosen due to its good trade-off between computational cost and performance.
The ICI estimation algorithm described in the previous subsection works on groups of OFDM symbols. Nevertheless, it should be noticed that the estimation error inside the group is not uniform along the OFDM symbols. Lower errors are achieved in the central symbols, and this is taken into account by the equalizer implemented in our receiver. ICI cancellation and channel equalization is carried out according to the following steps:
Obtain the matrices \(\hat {{\mathbf {C}}}_{r}\) for the K−R+1 groups in a frame, as explained in the previous subsection.
Remove ICI from each OFDM symbol as follows:
For symbols k∈[1,R/2], ICI is suppressed with the coefficients obtained from \(\hat {{\mathbf {C}}}_{1}\).
For symbols k∈[R/2+1,K−R/2], ICI is suppressed with the coefficients obtained from \(\hat {{\mathbf {C}}}_{k-R/2+1}\).
For symbols k∈[K−R/2+1,K], ICI is suppressed with the coefficients obtained from \(\hat {{\mathbf {C}}}_{K-R+1}\).
Obtain a new frequency response estimate from the ICI-reduced received signal and return to step 1.
As can be seen, except for the first and last symbols of the frame, ICI is estimated and cancelled for the central OFDM symbol of each set. The final equalization of the ICI-reduced signal is done by zero forcing. Detected information symbols are demapped and sent to a Viterbi decoder to obtain the information bits.
The testbed employed for the experimental evaluation in this work is an upgrade of that employed in the measurement campaigns described in [18,19] which, at the same time, has evolved from the one described in [24]. The testbed consists of three USRP B210 boards [25] (see Figures 3 and 4) built from the AD9361 chip [26] by Analog Devices, which supports a continuous frequency coverage from 70 MHz to 6 GHz; full-duplex MIMO operation with up to 56 MHz of bandwidth; USB 3.0 connectivity; on-chip 12 bit analog-to-digital converters (ADC) and digital-to-analog converters (DAC) up to 61.44 M sample/s; automatic gain control; and configurable transmit and receive gain values.
Base station downlink transmitter is placed outdoors. Located on the second floor of the CITIC building located in the Campus de Elviña at the University of A Coruña. Note that only one of the two vertically polarized antennas is used for 1×1 transmissions, while both cross polarized antennas are used for the 2×2 MIMO transmissions.
Mobile receivers mounted on a car. Two antennas corresponding to the outdoor receiver are placed on the roof of the car, while another two antennas for the second receiver are inside the car, between the two front seats.
A single board is used in continuous transmit-only mode for implementing the base station transmitter for the downlink. The base station is equipped with two Mini-Circuits TVA-11-422 high-power amplifiers [27], two Interline SECTOR IS-G14-F2425-A120-V vertically polarized transmit antennas [28], and an Ubiquity AM-2G15-120 cross polarized transmit antenna [29]. Notice that a single vertically polarized antenna was used for the single-input single-output (SISO) transmissions, while the cross polarized antenna was employed for measuring the 2×2 MIMO ones.
The remaining two USRP B210 boards are used for implementing two different mobile receivers, both mounted on a car. The first one is connected to a couple of eRize ERZA24O-09MBR omnidirectional 9 dBi-gain antennas placed outdoors, on the roof the car. The second mobile receiver is connected to another two omnidirectional antennas placed inside the car, between the two front seats. Using both mobile receivers allows us to capture, at the same time, the signals transmitted by the base station to both outdoor and indoor receivers.
With respect to the software, we use a multi-thread receiver implemented in C++ with Boost and using the Ettus USRP Hardware Driver (UHD) software. The main thread of the receiver is responsible for retrieving the samples coming from the USRP through the USB 3.0 bus and store them in a set of buffers in the main memory of the host laptop. The second thread reads the samples from such buffers and saves them persistently in a dedicated solid-state drive. Finally, there is a low-priority thread for logging important information for documenting the measurement campaign. On the other hand, the transmitter is a single-thread process since the same signals are cyclically transmitted over-the-air. Therefore, the signals to be radiated are first stored in a temporary buffer and next transmitted in a loop to the USRP through the USB 3.0. The rest of the software was implemented in MATLAB.
For the measurements, we use the WiMAX profile corresponding to 7 MHz of bandwidth at a sampling rate of F b =8 M sample/s, N=1,024 subcarriers, and a cyclic prefix of 1/8 (N g =128). Table 1 summarizes the parameter values of this WiMAX profile. At the base station, once the OFDM signals were generated, they are time interpolated with the interpolation factors I∈{4,12,20}, hence producing signals with bandwidth ranging from 1.75 MHz for I=4 to 350 kHz for I=20. Next, the signals corresponding to each interpolation factor I are scaled by a factor \(\sqrt {I}\) in order that signals corresponding to all interpolation factors be transmitted with the same energy. The three time-interpolated frames are then frequency-shifted 1.2 MHz to avoid the DC leakage at both the transmitter and the receiver. Hence, the DC subcarrier is frequency shifted 1.2 MHz to avoid the distortion caused by the DC leakage to the subcarriers around the DC, since the subcarrier spacing is reduced and we are dealing with direct-conversion transceivers. In order to facilitate the signal-to-noise ratio (SNR) estimation at the receiver, after transmitting the three time-interpolated frames, the transmitter is kept silent during a small time lapse which allows for estimating the noise variance at the receiver. Finally, all generated signals are stored in a file in the disk of the base station transmitter.
Table 1 Parameter values of the WiMAX profile used in the experiments
Once the transmit signals have been generated, the base station is notified and the over-the-air transmission process starts. First, the signals are read from the corresponding source file and transferred to the USRP, where they are again interpolated in the FPGA before reaching the (DAC). Note that this interpolation stage is needed for adapting the signal sampling rate to the sampling frequency of the DAC, thus not affecting the signal bandwidth. Next, the signals are up-converted to the central frequency f c =2.6 GHz, pre-amplified inside the USRP (configured with a gain value of 60 dB out of 89.5 dB), amplified by the two Mini-Circuits TVA-11-422 high-power amplifiers (one per transmit antenna) at their maximum gain of 40 dB, and finally radiated by the antennas (we use the vertically polarized transmit antennas for SISO transmissions, while the cross polarized ones are employed for 2×2 MIMO ones). The mean transmit power value measured at each antenna input is +17.5 dBm when a single transmit antenna is used. In case both transmit antennas are employed, the transmit gain is reduced to 3 dB per antenna to keep the total transmit power equal despite of the number of transmit antennas.
Figure 3 shows the base station placed outdoors, on the second floor of the CITIC building located in the Campus de Elviña at the University of A Coruña. It also shows the power amplifiers, the antennas (vertically or cross polarized), the USRP B210, and the laptop running the software for the base station. Looking at the picture of the cross-polarized antennas in Figure 3, one can also see part of the road traversed by the car during the measurements.
The two receivers are also built around the USRP B210 and the UHD with the software installed on two laptops, one per receiver. Notice that during the measurements, the acquired signals are persistently stored in dedicated solid state drives attached to laptop receivers, but they are not processed using the WiMAX receiver at that moment. As shown in Figure 4, the two outdoor receive antennas are sticked on the roof of the car used for measuring and they are connected directly to the USRP B210, which is powered by its corresponding laptop. The receive gain of the USRP is set to 35 dB (out of 73 dB), ensuring a linear operation. The second receiver is completely installed inside the car used for the measurements, with the antennas between the two front seats of the car. We use another laptop for running the receive software and for persistently storing the signals acquired by the indoor receiver. Unlike the outdoor receiver, the receive gain of the indoor receiver is set to 45 dB to better accommodate the amplitude of the acquired signals to the range of the ADC, while ensuring that the receive signal is not distorted by the amplifiers.
Figure 5 shows the measurement scenario, including the position of the base station and the path followed by the mobile receivers in the car. From the starting to the end points, there is a distance of 210 m, which is traversed by the car in 25.2 s at a constant speed of 30 km/h.
Measurement scenario at the Campus de Elviña, A Coruña. The measurement trajectory as well as the location of the base station are specified.
Physical layer configuration
OFDM symbol structure of the transmitted signal follows the downlink of mobile WiMAX standard regarding subcarrier allocation, symbol mapping, and channel encoding. The frames consists of K=25 symbols, with the first one reserved for the preamble signal transmitted only by the first antenna. The other symbols carry the information of six bursts with the modulation and channel coding profiles: 4-QAM 1/2, 4-QAM 3/4, 16-QAM 1/2, 16-QAM 3/4, 64-QAM 1/2, and 64-QAM 3/4. Each burst spans during four consecutive OFDM symbols along all the available data subcarriers. The permutation matrices \({\mathbf {p}}_{k}^{(m)}\) and \({\mathbf {d}}_{k}^{(m)}\) are generated according to the partial usage of subcarriers (PUSC) zone with the corresponding modifications for MIMO as specified in the standard and pilot subcarriers on \({\mathbf {p}}_{k}^{(m)}\) generated as a boosted BPSK sequence obtained by mapping the output of a linear feedback shift register. Channel coding is performed by the convolutional encoder defined in the standard, which features tail-biting to terminate coding blocks.
Regarding the receiver, a DPS-BEM with Q=5 is used for ICI estimation, given its good properties to estimate ICI assuming that only the maximum Doppler frequency is known.
In order to ensure similar channel characteristics for all the interpolation factors in each measurement, a super-frame consisting of four different parts is built. The first part is a small time period during which the base station is kept silent to allow for noise variance estimation at the receiver as it was already explained. The remaining three parts are built from the same mobile WiMAX frame consisting of the six burst plus the preamble that was described above. Consequently, the second part of the super-frame corresponds to the aforementioned mobile WiMAX frame interpolated in time by the factor I=4. Next, the third part is the same as the second one but using I=12, while the fourth part employs I=20. Finally, the super-frame is cyclically transmitted from the base station to both mobile receivers, one placed outside the car and the other installed inside.
This section presents the results obtained from the measurements described above. We also carried out simulations to support the experimental results. Simulations were designed to create scenarios comparable to those obtained from the measurements. Parameters estimated from the real-data captured are the mean SNR of each frame, and the K-factor of the wireless channel. The mean SNR is estimated taking as a reference the noise variance estimated during the silence preceding each frame; and the K-factor is estimated and averaged for all frames. Regarding simulations, the estimated mean SNRs for each frame in the real scenario are applied to the simulated transmissions, all with the same average K-factor. Finally, full characterization of the channel frequency responses or spatial covariance matrices have not been obtained, so simplified assumptions have been made while conducting the simulations. Basically, we assumed a frequency-flat spatially uncorrelated MIMO channel. The frequency-flat assumption arises from the fact that for the highest interpolation factors, the bandwidth of the transmitted signals will be rather narrow, and therefore the frequency selectivity observed by the receiver would be negligible. As for the MIMO channel, the coefficients for each pair of antennas are drawn from independent Rice distributions, although not identically distributed, picking random complex mean values for each MIMO channel matrix entry.
Error vector magnitude and SNR
Error vector magnitude (EVM) is the first figure of merit considered in this work. EVM is measured as the mean squared error between the received constellation after equalization and the transmitted constellation, normalized by the transmitted constellation power, i.e.:
$$ \text{EVM}_{F} = \frac{\displaystyle\sum_{k=1}^{K}\sum_{n=1}^{N} \Vert\hat{{\mathbf{s}}}_{k}(n) - {\mathbf{s}}_{k}(n)\Vert^{2}}{\displaystyle\sum_{k=1}^{K}\sum_{n=1}^{N} \Vert{\mathbf{s}}_{k}(n)\Vert^{2}}, $$
being EVM F the mean EVM for the Fth frame. Note that the original transmitted symbols are used to normalize the (EVM). Notice that if only symbol estimates were available, EVM measures would be less accurate.
Figures 6, 7, and 8 show the EVM and SNR results versus distance for SISO transmissions with outdoor antennas for interpolation factors I=4, I=12, and I=20, respectively. The car moved at the actual speed of 30 km/h. The previous factors correspond to emulated speeds of 120, 360, and 600 km/h, respectively. To clarify the plots, the course of the car has been split in 20 equally long sectors, and the median EVM is determined for each sector. Also, as a reference, the median SNR for each sector is shown. Figures 6, 7, and 8 clearly show the performance gains obtained when ICI cancellation is performed at reception. The figures also show that comparable gains are obtained with simulation and real measurements, hence validating the performance of the ICI estimation and cancellation techniques utilized. Figure 6 shows a constant offset between simulations and measurements. Recall that simulation results were obtained assuming a frequency-flat channel. The soundness of this assumption weakens for low interpolation factors such as I=4, so a significant difference between simulation and experimental results is to be expected. Conversely, better agreement between experiments and simulations is observed in Figures 7 and 8 which correspond to higher interpolation factors (I=12 and I=20, respectively). In such cases, the frequency-flat channel response assumption is tighter.
EVM and SNR for 1×1 and v I=120 km/h. The receive antennas are placed outside the car.
Figures 9, 10, and 11 show the results for the same measurements as before but with the mobile receiver inside the car. The most important difference compared to the previous figures is the lower SNR of this scenario, where median values range between 0 and 10 dBs, a large difference compared to the 30 to 35 dB achieved outside due to the strong attenuation introduced by the car structure. As a consequence, negligible performance gains are obtained with receivers that perform ICI cancellation because the noise at the receiver prevails over the interference of the neighbor subcarriers.
EVM and SNR for 1×1 and v I=120 km/h. The receive antennas are placed inside the car.
Figures 12, 13, and 14 show the results corresponding to 2×2 MIMO measurements with the receiving antennas placed outside the car for I=4, I=12, and I=20, respectively. Contrary to the SISO case, ICI cancellation does not produce significant performance gains when I=4. ICI cancellation produces significant performance gains for I=12 and I=20. Again, this is because the received SNR is significantly smaller and the Gaussian noise dominates over the ICI. Also, although spatially uncorrelated channels have been assumed in simulations, the real scenario will show some spatial correlation between antennas. Such correlation depends on the distance between antennas, and also on the distance between the transmitter and the receiver, an effect more evident when strong line-of-sight components are present [30]. This spatial correlation introduces interference between antennas, which impairs the gain that could be obtained by the ICI cancellation technique.
In order to evaluate the impact of the ICI cancellation methods on the final user quality of experience (QoE), throughput estimations have been carried out using the information conveyed in the bursts. Two types of throughput estimation have been considered. The first one assumes that the transmitter knows, for each frame, the optimal modulation and coding profile to be used. This case was modeled by estimating the bit error ratio (BER) of the six received bursts per frame and assuming that the whole frame was transmitted with the profile which carries more information bits without suffering any bit error. The second method assumes that the transmitter estimates the most suitable burst profile, and consequently only transmits the corresponding profile during all the burst. Errors are measured in this case assuming only the burst picked by the transmitter is available, regardless whether this burst has errors or not. This method is intended to determine the impact of the channel fast time-variations on this type of decisions.
To allow the transmitter to take these decisions, a frame by frame EVM metric is obtained in a similar way as the WiMAX standard proposes, i.e.:
$$ \text{EVM}_{F} = (1-\alpha) \text{EVM}_{F-1} + \alpha\text{EVM}_{F}, $$
where α is a forgetting factor. Along this work, we assume α=1/4. This EVM value is used to pick a modulation profile and burst according to Table 2.
Table 2 EVM to burst-profile mapping for throughput estimations
Results for the 1×1 setup with the antennas placed outside the car are shown in Figures 15, 16, and 17 for interpolation factors I=4, I=12, and I=20, respectively. When I=4, the performance gain when using the receiver with ICI cancellation is not significant because the throughput already saturates at the maximum value 15 Mbit/s without cancelling the ICI (see Figure 15). On the contrary, significant performance gains are obtained for I=12 and I=20 when the ICI is cancelled. Nevertheless, note the large throughput degradation when the profile is determined from delayed EVM measurements rather than from the actual ones. In channels with low coherence times, estimations of the EVM according to Equation 13 are less reliable, thus increasing the probability of picking up an incorrect burst profile. This effect is more evident for the highest interpolation factor I=20, where at some time instants even the ICI-cancelled signal provides lower throughput.
Throughput for 1×1 and v I=120 km/h. The receive antennas are placed outside the car.
Figures 18, 19, and 20 show the results for the same measurement campaign as before but with the antennas placed inside the car. The most important effect is that the SNR is now much lower and the receiver is unable to detect any burst without errors during the first part of the measurement trajectory. The larger SNRs observed in the last sector of the measurement course allow for correctly decoding information bits but leading to throughput values far from those observed from the antennas placed outside the car. Also, the throughput gains obtained after ICI cancellation are quite low.
Throughput for 1×1 and v I=120 km/h. The receive antennas are placed inside the car.
Corresponding results for the 2×2 MIMO setup are shown in Figures 21, 22, and 23. The first evident feature is the larger achievable throughput, since two streams are being transmitted in each frame. As with EVM, low gains when I=4 are appreciated, but peak gains of 2 Mbit/s are observed when I=12 and I=20.
A more meaningful way to represent the mean throughputs obtained during all measurement campaigns is the one shown in Tables 3, 4, and 5. These throughputs show that in general, some gain is obtained by using receivers that cancel the ICI and how the throughput worsens with the interpolation factor. It is also important to note that in some scenarios at low-emulated speeds, negligible gains, or even less performance is obtained with these methods. In general, a remarkable difference is observed between the ideal throughput and the realistic measure which considers the EVM metric to pick the burst profile. This is due to the inability of the EVM estimator to track the fast time-variations of the channel. Also, even though in the previous figures at some instants the ICI-cancelled throughput was lower, it can be seen that in average the ICI cancellation techniques provide a better performance. Finally, in the case of MIMO 2×2, although in terms of EVM almost no difference is appreciated when I=4, in terms of throughput an average gain of approximately 0.50 Mbit/s has been obtained. And when I=12 and I=20, the average gains grow up to 1.8 and 1.4 Mbit/s approximately.
Table 3 Mean throughput in Mbit/s for the 1×1 scenario when the receive antennas are placed outside the car
Table 4 Mean throughput in Mbit/s for the 1×1 scenario when the receive antenna is placed inside the car
Table 5 Mean throughput in Mbit/s for 2×2 scenario when the receive antennas are placed outside the car
We have experimentally evaluated the performance of the WiMAX downlink physical layer in high-mobility scenarios. Both SISO and MIMO transmissions were considered, as well as placing the receive antennas outside and inside the car used for the experiments. We focused on the ICI caused by channel time variations in such scenarios. Cost-effective measurement campaigns were carried out where high-mobility scenarios are emulated with a vehicle moving at a low speed. The key idea is the enlargement of the symbol period prior to its transmission over the air. Such enlargement reduces the frequency spacing between the OFDM subcarriers in WiMAX transmissions and hence induces high ICI values on the received signals. Experiments illustrate the performance of WiMAX receivers with and without ICI cancellation in terms of EVM and throughput. ICI cancellation produces significant performance gains mainly when the received SNR is high. Otherwise, thermal noise dominates over the ICI, and gains are not so much appreciated. Furthermore, the spacing of subcarriers in this standard is sufficient to provide a robust behavior for moderate mobility environments, and the gain observed after ICI cancellation is not very significant for speeds up to 80 km/h. Note that the profile tested in this work is the WiMAX profile with the lower subcarrier spacing, so it is foreseeable that other profiles present better performance under these scenarios.
UQ WiMAX gaining the wireless broadband market (2013). http://www.wimaxforum.org/LiteratureRetrieve.aspx?ID=177978.
M Ehammer, E Pschernig, T Graupl, in IEEE/AIAA 30th Digital Avionics Systems Conference (DASC). AeroMACS - an airport communications system (IEEESeattle, USA, 2011), pp. 4–114116. doi:10.1109/DASC.2011.6095903.
I Barhumi, G Leus, M Moonen, Optimal training design for MIMO OFDM systems in mobile wireless channels. IEEE Trans. Signal Process. 51(6), 1615–1624 (2003). doi:10.1109/TSP.2003.811243.
S Das, P Schniter, Max-SINR ISI/ICI-shaping multicarrier communication over the doubly dispersive channel. IEEE Trans. Signal Process. 55(12), 5782–5795 (2007). doi:10.1109/TSP.2007.901660.
MathSciNet
Z Tang, RC Cannizzaro, G Leus, P Banelli, Pilot-assisted time-varying channel estimation for OFDM systems. IEEE Trans. Signal Process. 55(5), 2226–2238 (2007). doi:10.1109/TSP.2007.893198.
GB Giannakis, C Tepedelenlioglu, Basis expansion models and diversity techniques for blind identification and equalization of time-varying channels. Proc. IEEE. 86(10), 1969–1986 (1998). doi:10.1109/5.720248.
G Leus, in Proc. of EUSIPCO. On the estimation of rapidly time-varying channels (EURASIPVienna, Austria, 2004), pp. 2227–2230.
DK Borah, BT Hart, Frequency-selective fading channel estimation with a polynomial time-varying channel model. IEEE Trans. Commun. 47(6), 862–873 (1999). doi:10.1109/26.771343.
H Hijazi, L Ros, Polynomial estimation of time-varying multipath gains with intercarrier interference mitigation in OFDM systems. IEEE Trans. Vehicular Technol. 58(1), 140–151 (2009). doi:10.1109/TVT.2008.923653.
T Zemen, CF Mecklenbräuker, Time-variant channel estimation using discrete prolate spheroidal sequences. IEEE Trans. Signal Process. 53(9), 3597–3607 (2005). doi:10.1109/TSP.2005.853104.
KAD Teo, S Ohno, in IEEE Global Telecommunications Conference, 2005. GLOBECOM '05, vol. 6. Optimal MMSE finite parameter model for doubly-selective channels (IEEESt. Louis, USA, 2005), pp. 3503–3507. doi:10.1109/GLOCOM.2005.1578424.
P Suárez-Casal, JA García-Naya, L Castedo, M Rupp, in IEEE 14th Workshop on Signal Processing Advances in Wireless Communications. SPAWC '13. KLT-based estimation of rapidly time-varying channels in MIMO-OFDM systems (IEEEDarmstadt, Germany, 2013).
A Stamoulis, SN Diggavi, N Al-Dhahir, Intercarrier interference in MIMO OFDM. IEEE Trans. Signal Process. 50(10), 2451–2464 (2002). doi:10.1109/TSP.2002.803347.
X Ma, GB Giannakis, S Ohno, Optimal training for block transmissions over doubly selective wireless fading channels. IEEE Trans. Signal Process. 51(5), 1351–1366 (2003). doi:10.1109/TSP.2003.810304.
F Hlawatsch, G Matz, Wireless Communications Over Rapidly Time-Varying Channels (Academic Press, USA, 2011).
H Hijazi, L Ros, G Jourdain, in Proceedings European Wireless Conference. OFDM channel parameters estimation used for ICI reduction in time-varying multipath channels (IEEEParis, France, 2007).
M Meidlinger, M Simko, Q Wang, M Rupp, in IEEE International Conference on Communications 2013. ICC 2013. Channel estimators for LTE-A downlink fast fading channels (IEEEStuttgart, Germany, 2013).
J Rodríguez-Piñeiro, P Suárez-Casal, JA García-Naya, L Castedo, C Briso-Rodríguez, JI Alonso-Montes, in IEEE 8th Sensor Array and Multichannel Signal Processing Workshop (SAM 2014). Experimental validation of ICI-aware OFDM receivers under time-varying conditions (IEEEA Coruña, Spain, 2014).
PS Casal, J Rodríguez-Piñeiro, JA García-Naya, L Castedo, in The Eleventh International Symposium on Wireless Communication Systems (ISWCS). Experimental assessment of WiMAX transmissions under highly time-varying channels (IEEEBarcelona, Spain, 2014).
W Nam, YH Lee, Preamble-based cell identification for cellular OFDM systems. IEEE Trans. Wireless Commun. 7(12), 5263–5267 (2008). doi:10.1109/T-WC.2008.071426.
A Carro-Lagoa, P Suarez-Casal, J Garcia-Naya, P Fraga-Lamas, L Castedo, A Morales-Mendez, Design and implementation of an OFDMA-TDD physical layer for WiMAX applications. EURASIP J. Wireless Commun. Netw. 2013(1), 243 (2013). doi:10.1186/1687-1499-2013-243.
Y Li, LJ Cimini, Bounds on the interchannel interference of OFDM in time-varying impairments. IEEE Trans. Commun. 49(3), 401–404 (2001). doi:10.1109/26.911445.
X Cai, GB Giannakis, Bounding performance and suppressing intercarrier interference in wireless mobile OFDM. IEEE Trans. Commun. 51(12), 2047–2056 (2003). doi:10.1109/TCOMM.2003.820752.
J Rodríguez-Piñeiro, JA Garcia-Naya, A Carro-Lagoa, L Castedo, in Proc. of 16th Euromicro Conference on Digital System Design. A testbed for evaluating LTE in high-speed trains (IEEESantander, Spain, 2013).
Ettus USRP B210 (2013). https://www.ettus.com/product/details/UB210-KIT.
Analog Devices AD9361 RFIC (2013). http://www.analog.com/en/rfif-components/rfif-transceivers/ad9361/products/product.html.
Mini-Circuits high-power amplifier TVA-11-422 (2013). http://www.minicircuits.com/pdfs/TVA-11-422.pdf.
Interline SECTOR IS-G14-F2425-A120-V vertically polarized antenna (2014). http://interline.pl/antennas/SECTOR-V120-14dBi-120deg-2.4-2.5GHz.
Ubiquity AM-2G15-120 cross polarized antenna (2013). http://dl.ubnt.com/datasheets/airmaxsector/airMAX_Sector_Antennas_DS.pdf.
M Matthaiou, DI Laurenson, C-X Wang, in IEEE Wireless Communications and Networking Conference, 2008. WCNC 2008. Capacity study of vehicle-to-roadside MIMO channels with a line-of-sight component (IEEELas Vegas, USA, 2008), pp. 775–779. doi:10.1109/WCNC.2008.142.
The authors thank Ismael Rozas Ramallal for his support in developing and testing the testbed and conducting the measurement campaigns. This work was supported in part by Xunta de Galicia, MINECO of Spain, and by FEDER funds of the E.U. under Grant 2012/287, Grant IPT-2011-1034-370000, and Grant TEC2013-47141-C4-1-R.
Department of Electronics and Systems, University of A Coruña, Facultade de Informática, Campus de Elviña, A Coruña, 15071, Spain
Pedro Suárez-Casal
, José Rodríguez-Piñeiro
, José A García-Naya
& Luis Castedo
Search for Pedro Suárez-Casal in:
Search for José Rodríguez-Piñeiro in:
Search for José A García-Naya in:
Search for Luis Castedo in:
Correspondence to Pedro Suárez-Casal.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0), which permits use, duplication, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Suárez-Casal, P., Rodríguez-Piñeiro, J., García-Naya, J.A. et al. Experimental evaluation of the WiMAX downlink physical layer in high-mobility scenarios. J Wireless Com Network 2015, 109 (2015) doi:10.1186/s13638-015-0339-9
Experimental Evaluation in Wireless Communications
|
CommonCrawl
|
Only show content I have access to (24)
Chapters (2)
Last 3 months (1)
Last 12 months (5)
Last 3 years (11)
Over 3 years (41)
Physics and Astronomy (20)
Materials Research (16)
Statistics and Probability (5)
MRS Online Proceedings Library Archive (15)
Microscopy and Microanalysis (7)
Epidemiology & Infection (5)
Canadian Journal of Neurological Sciences (3)
The Journal of Laryngology & Otology (3)
European Psychiatry (2)
Publications of the Astronomical Society of Australia (2)
Breast Cancer Online (1)
Bulletin of Entomological Research (1)
International Psychogeriatrics (1)
Journal of Clinical and Translational Science (1)
Journal of Materials Research (1)
Materials Research Society Internet Journal of Nitride Semiconductor Research (1)
Proceedings of the International Astronomical Union (1)
Proceedings of the Nutrition Society (1)
Symposium - International Astronomical Union (1)
The British Journal of Psychiatry (1)
The European Physical Journal - Applied Physics (1)
Cambridge University Press (2)
Materials Research Society (17)
AMA Mexican Society of Microscopy MMS (3)
Canadian Neurological Sciences Federation (3)
Royal College of Speech and Language Therapists (3)
European Psychiatric Association (2)
International Astronomical Union (2)
MSC - Microscopical Society of Canada (2)
Animal consortium (1)
Bioethics at Cambridge (1)
JLO (1984) Ltd (1)
MAS - Microbeam Analysis Society (1)
MiMi / EMAS - European Microbeam Analysis Society (1)
Nutrition Society (1)
The Royal College of Psychiatrists (1)
British Mycological Society Symposia (1)
WALLABY Pilot Survey: Public release of HI kinematic models for more than 100 galaxies from phase 1 of ASKAP pilot observations
N. Deg, K. Spekkens, T. Westmeier, T. N. Reynolds, P. Venkataraman, S. Goliath, A. X. Shen, R. Halloran, A. Bosma, B Catinella, W. J. G. de Blok, H. Dénes, E. M. DiTeodoro, A. Elagali, B.-Q. For, C Howlett, G. I. G. Józsa, P. Kamphuis, D. Kleiner, B Koribalski, K. Lee-Waddell, F. Lelli, X. Lin, C. Murugeshan, S. Oh, J. Rhee, T. C. Scott, L. Staveley-Smith, J. M. van der Hulst, L. Verdes-Montenegro, J. Wang, O. I. Wong
Journal: Publications of the Astronomical Society of Australia / Volume 39 / 2022
Published online by Cambridge University Press: 15 November 2022, e059
We present the Widefield ASKAP L-band Legacy All-sky Blind surveY (WALLABY) Pilot Phase I Hi kinematic models. This first data release consists of Hi observations of three fields in the direction of the Hydra and Norma clusters, and the NGC 4636 galaxy group. In this paper, we describe how we generate and publicly release flat-disk tilted-ring kinematic models for 109/592 unique Hi detections in these fields. The modelling method adopted here—which we call the WALLABY Kinematic Analysis Proto-Pipeline (WKAPP) and for which the corresponding scripts are also publicly available—consists of combining results from the homogeneous application of the FAT and 3DBarolo algorithms to the subset of 209 detections with sufficient resolution and $S/N$ in order to generate optimised model parameters and uncertainties. The 109 models presented here tend to be gas rich detections resolved by at least 3–4 synthesised beams across their major axes, but there is no obvious environmental bias in the modelling. The data release described here is the first step towards the derivation of similar products for thousands of spatially resolved WALLABY detections via a dedicated kinematic pipeline. Such a large publicly available and homogeneously analysed dataset will be a powerful legacy product that that will enable a wide range of scientific studies.
Identifying Possible Two-Level-System Sources in Superconducting Qubit with Advanced Electron Microscopy
Lin Zhou, Jin-Su Oh, Xiaotian Fang, Tae-Hoon Kim, Matt Kramer, A. Romanenko, S. Posen, A. Grassellino, Cameron J. Kopas, Mark Field, Jayss Marshall, Joshua Y. Mutus, Hilal Cansizoglu, Matthew Reagor
Journal: Microscopy and Microanalysis / Volume 28 / Issue S1 / August 2022
Published online by Cambridge University Press: 22 July 2022, pp. 1716-1717
213 Extrapulmonary Gas Exchange Through Peritoneal Perfluorocarbon Perfusion
JCTS 2022 Abstract Collection
Joshua L. Leibowitz, Warren Naselsky, Mahsa Doosthosseini, Kevin Aroom, Aakash Shah, Gregory J. Bittle, Jin-Oh Hahn, Hosam K. Fathy, Joseph S. Friedberg
Journal: Journal of Clinical and Translational Science / Volume 6 / Issue s1 / April 2022
Published online by Cambridge University Press: 19 April 2022, p. 34
OBJECTIVES/GOALS: For patients suffering from respiratory failure there are limited options to support gas exchange aside from mechanical ventilation. Our goal is to design, investigate, and refine a novel device for extrapulmonary gas exchange via peritoneal perfusion with perfluorocarbons (PFC) in an animal model. METHODS/STUDY POPULATION: Hypoxic respiratory failure will be modeled using 50 kg swine mechanically ventilated with subatmospheric (10-12%) oxygen. Through a midline laparotomy, two cannulas, one for inflow and one for outflow, will be placed into the peritoneal space. After abdominal closure, the cannulas will be connected to a device capable of draining, oxygenating, regulating temperature, filtering, and pumping perfluorodecalin at a rate of 3-4 liters per minute. During induced hypoxia, the physiologic response to PFC circulation through the peritoneal space will be monitored with invasive (e.g. arterial and venous blood gases) and non-invasive measurements (e.g. pulse oximetry). RESULTS/ANTICIPATED RESULTS: We anticipate that the initiation of oxygenated perfluorocarbons perfusion through the peritoneal space during induced hypoxia will create an increase in hemoglobin oxygen saturation and partial pressure of oxygen in arterial blood. As we expect gas exchange to be occurring in the microvascular beds of the peritoneal membrane, we expect to observe an increase in the venous blood oxygen content sampled from the inferior vena cava. Using other invasive hemodynamic measures (e.g. cardiac output) and blood samples taken from multiple venous sites, a quantifiable rate of oxygen delivery will be calculable. DISCUSSION/SIGNIFICANCE: Peritoneal perfluorocarbon perfusion, if able to deliver significant amounts of oxygen, would provide a potentially lifesaving therapy for patients in respiratory failure who are unable to be supported with mechanical ventilation alone, and are not candidates for extracorporeal membrane oxygenation.
Characteristics, management and outcome of a large necrotising otitis externa case series: need for standardised case definition
S H Hodgson, V J Sinclair, J Arwyn-Jones, K Oh, K Nucken, M Perenyei, V Sivapathasingam, P Martinez-Devesa, S T Pendlebury, J D Ramsden, P C Matthews, P Pretorius, M I Andersson
Journal: The Journal of Laryngology & Otology / Volume 136 / Issue 7 / July 2022
Published online by Cambridge University Press: 19 January 2022, pp. 604-610
Print publication: July 2022
Necrotising otitis externa is a severe ear infection for which there are no established diagnostic or treatment guidelines.
This study described clinical characteristics, management and outcomes for patients managed as necrotising otitis externa cases at a UK tertiary referral centre.
A total of 58 (63 per cent) patients were classified as definite necrotising otitis externa cases, 31 (34 per cent) as probable cases and 3 (3 per cent) as possible cases. Median duration of intravenous and oral antimicrobial therapy was 6.0 weeks (0.49–44.9 weeks). Six per cent of patients relapsed a median of 16.4 weeks (interquartile range, 23–121) after stopping antimicrobials. Twenty-eight per cent of cases had complex disease. These patients were older (p = 0.042), had a longer duration of symptoms prior to imaging (p < 0.0001) and higher C-reactive protein at diagnosis (p = 0.005). Despite longer courses of intravenous antimicrobials (23 vs 14 days; p = 0.032), complex cases were more likely to relapse (p = 0.016).
A standardised case-definition of necrotising otitis externa is needed to optimise diagnosis, management and research.
P.048 International MAGNIMS-CMSC-NAIMS consensus recommendations on the use of standardized MRI in MS
A Traboulsee, M Wattjes, O Ciccarelli, D Reich, B Banwell, N de Stefano, C Enzinger, F Fazekas, M Filippi, J Frederiksen, C Gasperini, Y Hacohen, L Kappos, DK Li, K Mankad, X Montalban, S Newsome, J Oh, J Palace, M Rocca, J Sastre-Garriga, M Tintore, H Vrenken, T Yours, F Barkhof, A Rovira
Journal: Canadian Journal of Neurological Sciences / Volume 48 / Issue s3 / November 2021
Published online by Cambridge University Press: 05 January 2022, pp. S32-S33
Background: Standardized magnetic resonance imaging (MRI) guidelines published in 2015 by the Europoean MAGNIMS group and in 2016 by the CMSC are important for the diagnosis and monitoring of patients with multiple sclerosis (MS) and for the appropriate use of MRI in routine clinical practice. Methods: Two panels of experts convened to update existing guidelines for a standardized MRI protocol. The MAGNIMS panel convened in Graz, Austria in April 2019. The CMSC NAIMS panel met separately and independently in Newark, USA in October 2019. Subsequently, the MAGNIMS, NAIMS, and CMSC working groups combined their efforts to reach an international consensus Results: The revised guidelines on MRI in MS merges recommendations from MAGNIMS, CMSC, and NAIMS to improve the use of MRI for diagnosis, prognosis and monitoring of individuals with MS. 3D acquisitions are emphasized for optimal comparison over time. Core brain sequences include a 3D-T2wFLAIR for lesion identification and monitoring treatment effectiveness. Gadolinium-based contrast is recommended for diagnostic studies and judicious use for routine monitoring of MS patients. DWI sequences are recommended for PML safety monitoring. Conclusions: The international consensus guidelines strive for global acceptance of a useful and usable standard of care for patients with MS.
The MAGPI survey: Science goals, design, observing strategy, early results and theoretical framework
C. Foster, J. T. Mendel, C. D. P. Lagos, E. Wisnioski, T. Yuan, F. D'Eugenio, T. M. Barone, K. E. Harborne, S. P. Vaughan, F. Schulze, R.-S. Remus, A. Gupta, F. Collacchioni, D. J. Khim, P. Taylor, R. Bassett, S. M. Croom, R. M. McDermid, A. Poci, A. J. Battisti, J. Bland-Hawthorn, S. Bellstedt, M. Colless, L. J. M. Davies, C. Derkenne, S. Driver, A. Ferré-Mateu, D. B. Fisher, E. Gjergo, E. J. Johnston, A. Khalid, C. Kobayashi, S. Oh, Y. Peng, A. S. G. Robotham, P. Sharda, S. M. Sweet, E. N. Taylor, K.-V. H. Tran, J. W. Trayford, J. van de Sande, S. K. Yi, L. Zanisi
Published online by Cambridge University Press: 26 July 2021, e031
We present an overview of the Middle Ages Galaxy Properties with Integral Field Spectroscopy (MAGPI) survey, a Large Program on the European Southern Observatory Very Large Telescope. MAGPI is designed to study the physical drivers of galaxy transformation at a lookback time of 3–4 Gyr, during which the dynamical, morphological, and chemical properties of galaxies are predicted to evolve significantly. The survey uses new medium-deep adaptive optics aided Multi-Unit Spectroscopic Explorer (MUSE) observations of fields selected from the Galaxy and Mass Assembly (GAMA) survey, providing a wealth of publicly available ancillary multi-wavelength data. With these data, MAGPI will map the kinematic and chemical properties of stars and ionised gas for a sample of 60 massive ( ${>}7 \times 10^{10} {\mathrm{M}}_\odot$ ) central galaxies at $0.25 < z <0.35$ in a representative range of environments (isolated, groups and clusters). The spatial resolution delivered by MUSE with Ground Layer Adaptive Optics ( $0.6-0.8$ arcsec FWHM) will facilitate a direct comparison with Integral Field Spectroscopy surveys of the nearby Universe, such as SAMI and MaNGA, and at higher redshifts using adaptive optics, for example, SINS. In addition to the primary (central) galaxy sample, MAGPI will deliver resolved and unresolved spectra for as many as 150 satellite galaxies at $0.25 < z <0.35$ , as well as hundreds of emission-line sources at $z < 6$ . This paper outlines the science goals, survey design, and observing strategy of MAGPI. We also present a first look at the MAGPI data, and the theoretical framework to which MAGPI data will be compared using the current generation of cosmological hydrodynamical simulations including EAGLE, Magneticum, HORIZON-AGN, and Illustris-TNG. Our results show that cosmological hydrodynamical simulations make discrepant predictions in the spatially resolved properties of galaxies at $z\approx 0.3$ . MAGPI observations will place new constraints and allow for tangible improvements in galaxy formation theory.
Seasonal variation of patulous Eustachian tube diagnoses using climatic and national health insurance data
S Lee, S-W Choi, J Kim, H M Lee, S-J Oh, S-K Kong
Journal: The Journal of Laryngology & Otology / Volume 135 / Issue 8 / August 2021
Published online by Cambridge University Press: 09 July 2021, pp. 695-701
This study aimed to analyse if there were any associations between patulous Eustachian tube occurrence and climatic factors and seasonality.
The correlation between the monthly average number of patients diagnosed with patulous Eustachian tube and climatic factors in Seoul, Korea, from January 2010 to December 2016, was statistically analysed using national data sets.
The relative risk for patulous Eustachian tube occurrence according to season was significantly higher in summer and autumn, and lower in winter than in spring (relative risk (95 per cent confidence interval): 1.334 (1.267–1.404), 1.219 (1.157–1.285) and 0.889 (0.840–0.941) for summer, autumn and winter, respectively). Temperature, atmospheric pressure and relative humidity had a moderate positive (r = 0.648), negative (r = –0.601) and positive (r = 0.492) correlation with the number of patulous Eustachian tube cases, respectively.
The number of patulous Eustachian tube cases was highest in summer and increased in proportion to changes in temperature and humidity, which could be due to physiological changes caused by climatic factors or diet trends.
Older adult psychopathology: international comparisons of self-reports, collateral reports, and cross-informant agreement
L.A. Rescorla, M.Y. Ivanova, T.M. Achenbach, Vera Almeida, Meltem Anafarta-Sendag, Ieva Bite, J. Carlos Caldas, John William Capps, Yi-Chuen Chen, Paola Colombo, Margareth da Silva Oliveira, Anca Dobrean, Nese Erol, Alessandra Frigerio, Yasuko Funabiki, Reda Gedutienė, Halldór S. Guðmundsson, Min Quan Heo, Young Ah Kim, Tih-Shih Lee, Manuela Leite, Jianghong Liu, Jasminka Markovic, Monika Misiec, Marcus Müller, Kyung Ja Oh, Verónica Portillo-Reyes, Wolfgang Retz, Sandra B. Sebre, Shupeng Shi, Sigurveig H. Sigurðardóttir, Roma Šimulionienė, Elvisa Sokoli, Dragana Milijasevic, Ewa Zasępa
Journal: International Psychogeriatrics / Volume 34 / Issue 5 / May 2022
Published online by Cambridge University Press: 04 September 2020, pp. 467-478
To conduct international comparisons of self-reports, collateral reports, and cross-informant agreement regarding older adult psychopathology.
We compared self-ratings of problems (e.g. I cry a lot) and personal strengths (e.g. I like to help others) for 10,686 adults aged 60–102 years from 19 societies and collateral ratings for 7,065 of these adults from 12 societies.
Data were obtained via the Older Adult Self-Report (OASR) and the Older Adult Behavior Checklist (OABCL; Achenbach et al., 2004).
Cronbach's alphas were .76 (OASR) and .80 (OABCL) averaged across societies. Across societies, 27 of the 30 problem items with the highest mean ratings and 28 of the 30 items with the lowest mean ratings were the same on the OASR and the OABCL. Q correlations between the means of the 0–1–2 ratings for the 113 problem items averaged across all pairs of societies yielded means of .77 (OASR) and .78 (OABCL). For the OASR and OABCL, respectively, analyses of variance (ANOVAs) yielded effect sizes (ESs) for society of 15% and 18% for Total Problems and 42% and 31% for Personal Strengths, respectively. For 5,584 cross-informant dyads in 12 societies, cross-informant correlations averaged across societies were .68 for Total Problems and .58 for Personal Strengths. Mixed-model ANOVAs yielded large effects for society on both Total Problems (ES = 17%) and Personal Strengths (ES = 36%).
The OASR and OABCL are efficient, low-cost, easily administered mental health assessments that can be used internationally to screen for many problems and strengths.
Comparison of patulous Eustachian tube patients with and without a concave defect in the anterolateral wall of the tubal valve
S-W Choi, J-H Park, S Lee, S-J Oh, S-K Kong
Journal: The Journal of Laryngology & Otology / Volume 134 / Issue 6 / June 2020
Print publication: June 2020
Patulous Eustachian tube appears to be caused by a concave defect in the anterolateral wall of the tubal valve of the Eustachian tube. This study aimed to compare the clinical features of patulous Eustachian tube patients with or without a defect in the anterolateral wall of the tubal valve.
Sixty-six patients with a patulous Eustachian tube completed a questionnaire, which was evaluated alongside endoscopic findings of the tympanic membrane, nasal cavity and Eustachian tube orifice.
Females were more frequently diagnosed with a patulous Eustachian tube, but the valve defect was more common in males (p = 0.007). The ratio of patulous Eustachian tube patients with or without defects in the anterolateral wall of the tubal valve was 1.6:1. Weight loss in the previous six months and being refractory to conservative management were significantly associated with the defect (p = 0.035 and 0.037, respectively). Symptom severity was significantly higher in patients with the defect.
Patulous Eustachian tube patients without a defect in the anterolateral wall of the tubal valve can be non-surgically treated more often than those with the defect. Identification of the defect could assist in making treatment decisions for patulous Eustachian tube patients.
2763 – Psychiatric Symptoms of Internet Game Addiction in the Child and Adolescent Psychiatric Clinic
E.-J. Oh, S.-Y. Bhang, J.-H. Ahn, S.-H. Choi, S.-W. Choi, H.-K. Lee
Journal: European Psychiatry / Volume 28 / Issue S1 / 2013
Published online by Cambridge University Press: 15 April 2020, p. 1
The prevalence of internet game use among children and adolescents has been increased in the recent years.
Internet addiction has been found to cause various psychiatric symptoms and psychological problems. Internet addiction has been found to cause various psychiatric symptoms and psychological problems.
The aim of this study was to examine the association between problematic internet game use and psychiatric symptoms in a sample of the Child and Adolescent Psychiatric Clinic, Ulsan University Hospital.
We analyzed data from 447 subjects who first visit the Child and Adolescent Psychiatric Clinic of the Ulsan University Hospital. The level of Internet addiction was categorized as either high-risk (≥108; group 3), potential risk (95 to 107; group 2), or no risk (≤94, group 1) based on the total score. Data were analyzed using SPSS version 17.0 and one-way ANOVA and multiple logistic regression method were used.
Thirteen adolescents met the criteria for high risk group of internet game addiction. in the high risk group, 10 were male and 3 were female adolescents. There was an mean difference among group 3 (high risk)< 1 (no risk),2 (potential risk) in AHI ; whereas group 3 (high risk)>1 (no risk), 2 (potential risk) in BDI, BAI, inattention, hyperactivity/impulsivity and K-ARS score. with multiple logistic regression analysis, K-scale was significantly related with male sex, BDI, ARShyperactivity/ impulsivity score.
We conclude that having male sex, happiness and depressive symptoms is associated with the risk of developing internet use disorders.
U-shaped relationship between depression and body mass index in the Korean adults
J.-H. Lee, S.K. Park, J.-H. Ryoo, C.-M. Oh, J.-M. Choi, R.S. McIntyre, R.B. Mansur, H. Kim, S. Hales, J.Y. Jung
Journal: European Psychiatry / Volume 45 / September 2017
Published online by Cambridge University Press: 23 March 2020, pp. 72-80
Although a number of studies have examined the relationship between depression and obesity, it is still insufficient to establish the specific pattern of relationship between depression and body mass index (BMI) categories. Thus, this study was aimed to investigate the relationship between depression and BMI categories.
A cross-sectional study was conducted for a cohort of 159,390 Korean based on Kangbuk Samsung Health Study (KSHS). Study participants were classified into 5 groups by Asian-specific cut-off of BMI (18.5, 23, 25 and 30 kg/m2). The presence of depression was determined by Center for Epidemiologic Studies-Depression scales (CES-D) = 16 and = 25. The adjusted odd ratios (ORs) for depression were evaluated by multiple logistic regression analysis, in which independent variable was 5 categories of BMI and dependent variable was depression. Subgroup analysis was conducted by gender and age.
When normal group was set as a reference, the adjusted ORs for depression formed U-shaped pattern of relationship with BMI categories [underweight: 1.31 (1.14–1.50), overweight: 0.94 (0.85–1.04), obese group: 1.01 (0.91–1.12), severe obese group: 1.28 (1.05–1.54)]. This pattern of relationship was more prominent in female and young age group than male and elderly subgroup. BMI level with the lowest likelihood of depression was 18.5 kg/m2 to 25 kg/m2 in women and 23 kg/m2 to 25 kg/m2 in men.
There was a U-shaped relationship between depression and BMI categories. This finding suggests that both underweight and severe obesity are associated with the increased risk for depression.
18 Peroxiredoxin1 is a therapeutic target in group-3 medulloblastoma
BV. Sajesh, OH. Ngoc, R. Omar, H. Fediuk, L. Li, S. Alrushaid, W. Wang, J. Pu, HD. Sun, T. Siahaan, T. Werbowetski-Ogilvie, M. Wölfl, M. Remke, V. Ramaswamy, M. Taylor, C. Eberhart, M. Symons, R. Ruggieri, DW Miller, MI Vanan
Journal: Canadian Journal of Neurological Sciences / Volume 45 / Issue S3 / June 2018
Published online by Cambridge University Press: 27 July 2018, p. S16
Group-3 medulloblastoma (MBL) is highly resistant to radiation (IR) and chemotherapy and has the worst prognosis. Hence, there is an urgent need to elucidate targets that sensitize these tumors to chemotherapy and IR. Employing standard assays for viability and sensitization to IR, we identified PRDX1 as a therapeutic target in Group-3 MBL. Specifically, targeting PRDX1 by RNAi or inhibition by Adenanthin led to specific killing and sensitization to IR of Group-3 MBL cells. We rescued sensitization of Daoy and UW228 cells by hypermorphic expression of PRDX1. PRDX1 knockdown caused oxidative DNA damage and induced apoptosis. We correlated PRDX1 expression to patient outcomes in a validated MBL tumor-microarray. Whole genome sequencing identified pathways/genes that were dysregulated with PRDX1 inhibition or silencing. Our in vivo studies in mice employing flank/orthotopic tumors from patient derived xenografts/Group-3 MBL cells confirmed in vitro observations. Animals with tumors in which PRDX1 was targeted by RNAi or Adenanthin (using mini osmotic pumps) showed decreased tumor burden and increased survival when compared to controls. Since, Adenanthin does not cross the blood brain barrier (BBB) we used HAV6 peptide to transiently disrupt the BBB and deliver Adenanthin to the tumor. Immunohistochemistry confirmed that targeting PRDX1 resulted in increased oxidative DNA damage, apoptosis and decreased proliferation. In summary, we have validated PRDX1 as a therapeutic target in group-3 MBL, identified Adenanthin as a potent chemical inhibitor of PRDX1 and confirmed the role of HAV peptide (in the transient modulation of BBB permeability) in an orthotopic model of group-3 MBL.
Millimeter VLBI observations of Sgr A* with KaVA and KVN
G.-Y. Zhao, M. Kino, I.-J. Cho, K. Akiyama, B. W. Sohn, T. Jung, J. C. Algaba, K. Hada, Y. Hagiwara, J. Hodgson, M. Honma, N. Kawaguchi, S. Koyama, J. A. Lee, T. Lee, K. Niinuma, J. Oh, J.-H. Park, H. Ro, S. Sawada-Satoh, F. Tazaki, S. Trippe, K. Wajima, H. Yoo
Journal: Proceedings of the International Astronomical Union / Volume 11 / Issue S322 / July 2016
Published online by Cambridge University Press: 09 February 2017, pp. 56-63
We present recent observation results of Sgr A* at millimeter obtained with VLBI arrays in Korea and Japan.
7 mm monitoring of Sgr A* is part of our AGN large project. The results at 7 epochs during 2013-2014, including high resolution maps, flux density and two-dimensional size measurements are presented. The source shows no significant variation in flux and structure related to the G2 encounter in 2014. According to recent MHD simulations by kawashima et al., flux and magnetic field energy can be expected to increase several years after the encounter; We will keep our monitoring in order to test this prediction.
Astrometric observations of Sgr A* were performed in 2015 at 7 and 3.5 millimeter simultaneously. Source-frequency phase referencing was applied and a combined "core-shift" of Sgr A* and a nearby calibrator was measured. Future observations and analysis are necessary to determine the core-shift in each source.
Canadian Expert Panel Recommendations for MRI Use in MS Diagnosis and Monitoring
CJNS Editor's Choice Articles
Anthony Traboulsee, Laurent Létourneau-Guillon, Mark Steven Freedman, Paul W. O'Connor, Aditya Bharatha, Santanu Chakraborty, J. Marc Girard, Fabrizio Giuliani, John T. Lysack, James J. Marriott, Luanne M. Metz, Sarah A. Morrow, Jiwon Oh, Manas Sharma, Robert A. Vandorpe, Talia Alexandra Vertinsky, Vikram S. Wadhwa, Sarah von Riedemann, David K.B. Li
Journal: Canadian Journal of Neurological Sciences / Volume 42 / Issue 3 / May 2015
Published online by Cambridge University Press: 21 April 2015, pp. 159-167
Background: A definitive diagnosis of multiple sclerosis (MS), as distinct from a clinically isolated syndrome, requires one of two conditions: a second clinical attack or particular magnetic resonance imaging (MRI) findings as defined by the McDonald criteria. MRI is also important after a diagnosis is made as a means of monitoring subclinical disease activity. While a standardized protocol for diagnostic and follow-up MRI has been developed by the Consortium of Multiple Sclerosis Centres, acceptance and implementation in Canada have been suboptimal. Methods: To improve diagnosis, monitoring, and management of a clinically isolated syndrome and MS, a Canadian expert panel created consensus recommendations about the appropriate application of the 2010 McDonald criteria in routine practice, strategies to improve adherence to the standardized Consortium of Multiple Sclerosis Centres MRI protocol, and methods for ensuring effective communication among health care practitioners, in particular referring physicians, neurologists, and radiologists. Results: This article presents eight consensus statements developed by the expert panel, along with the rationale underlying the recommendations and commentaries on how to prioritize resource use within the Canadian healthcare system. Conclusions: The expert panel calls on neurologists and radiologists in Canada to incorporate the McDonald criteria, the Consortium of Multiple Sclerosis Centres MRI protocol, and other guidance given in this consensus presentation into their practices. By improving communication and general awareness of best practices for MRI use in MS diagnosis and monitoring, we can improve patient care across Canada by providing timely diagnosis, informed management decisions, and better continuity of care.
Alcohol consumption and persistent infection of high-risk human papillomavirus
H. Y. OH, M. K. KIM, S. SEO, D. O. LEE, Y. K. CHUNG, M. C. LIM, J. KIM, C. W. LEE, S. PARK
Journal: Epidemiology & Infection / Volume 143 / Issue 7 / May 2015
Published online by Cambridge University Press: 04 September 2014, pp. 1442-1450
Alcohol consumption is a possible co-factor of high-risk human papillomavirus (HR-HPV) persistence, a major step in cervical carcinogenesis, but the association between alcohol and continuous HPV infection remains unclear. This prospective study identified the association between alcohol consumption and HR-HPV persistence. Overall, 9230 women who underwent screening during 2002–2011 at the National Cancer Center, Korea were analysed in multivariate logistic regression. Current drinkers [odds ratio (OR) 2·49, 95% confidence interval (CI) 1·32–4·71] and drinkers for ⩾5 years (OR 2·33, 95% CI 1·17–4·63) had a higher risk of 2-year HR-HPV persistence (HPV positivity for 3 consecutive years) than non-drinkers and drinkers for <5 years, respectively (vs. HPV negativity for 3 consecutive years). A high drinking frequency (⩾twice/week) and a high beer intake (⩾3 glasses/occasion) had higher risks of 1-year (OR 1·80, 95% CI 1·01–3·36) HPV positivity for 2 consecutive years) and 2-year HR-HPV persistence (OR 3·62, 95% CI 1·35–9·75) than non-drinkers. Of the HPV-positive subjects enrolled, drinking habit (OR 2·68, 95% CI 1·10–6·51) and high consumption of beer or soju (⩾2 glasses/occasion; OR 2·90, 95% CI 1·06–7·98) increased the risk of 2-year consecutive or alternate HR-HPV positivity (vs. consecutive HPV negativity). These findings suggest that alcohol consumption might increase the risk of cervical HR-HPV persistence in Korean women.
By Melisa Akan, Elisabeth Bacon, Rosemary Bradley, Alan S. Brown, Sarah Buchanan, Ana Buján, Anne M. Cleary, Katie Croft Caderao, Fernando Díaz, Anastasia Efklides, David Facal, Santiago Galdo-Álvarez, J. Richard Hanley, Trevor A. Harley, Marie Izaute, Fredrik U. Jönsson, Onésimo Juncos-Rabadán, Dilay Zeynep Karadoller, Kimberly R. Klein, Asher Koriat, Mónica Lindín, Siobhan B. G. MacAndrew, Janet Metcalfe, Chris J. A. Moulin, Ravit Nussinson, Justin D. Oh-Lee, Hajime Otani, Arturo X. Pereiro, Bennett L. Schwartz, Celine Souchay, Shelly R. Staley, Richard J. Stevenson
Edited by Bennett L. Schwartz, Florida International University, Alan S. Brown, Southern Methodist University, Texas
Book: Tip-of-the-Tongue States and Related Phenomena
Print publication: 16 June 2014, pp vii-viii
Outbreak of enterotoxigenic Escherichia coli O169 enteritis in schoolchildren associated with consumption of kimchi, Republic of Korea, 2012‡
S. H. CHO, J. KIM, K. -H. OH, J. K. HU, J. SEO, S. S. OH, M. J. HUR, Y. -H. CHOI, S. K. YOUN, G. T. CHUNG, Y. J. CHOE
Journal: Epidemiology & Infection / Volume 142 / Issue 3 / March 2014
Enterotoxigenic Escherichia coli (ETEC) is now recognized as a common cause of foodborne outbreaks. This study aimed to describe the first ETEC O169 outbreak identified in Korea. In this outbreak, we identified 1642 cases from seven schools. Retrospective cohort studies were performed in two schools; and case-control studies were conducted in five schools. In two schools, radish kimchi was associated with illness; and in five other schools, radish or cabbage kimchi was found to have a higher risk among food items. Adjusted relative risk of kimchi was 5·87–7·21 in schools that underwent cohort studies; and adjusted odds ratio was 4·52–12·37 in schools that underwent case-control studies. ETEC O169 was isolated from 230 affected students, and was indistinguishable from the isolates detected from the kimchi product distributed by company X, a food company that produced and distributed kimchi to all seven schools. In this outbreak, we found that the risk of a kimchi-borne outbreak of ETEC O169 infection is present in Korea. We recommend continued monitoring regarding food safety in Korea, and strengthening surveillance regarding ETEC O169 infection through implementation of active laboratory surveillance to confirm its infection.
Effective Utilization of STEM Imaging Capability in FIB for Physical Failure Analysis on 20nm & 14nm Transistor Nodes in Semiconductor Wafer Foundries
W. Zhao, D. Nedeau, S. Kodali, J. Huang, C.-K. Oh, S.-K. Lim, R. Rai, Z.-H. Mai, J. Lam
Extended abstract of a paper presented at Microscopy and Microanalysis 2013 in Indianapolis, Indiana, USA, August 4 – August 8, 2013.
An Effective Approach to Extract Cross-Sectional Information from Top-Down SEM for 20nm & 14nm Transistor Nodes in Semiconductor Wafer-Foundries
W. Zhao, Y. Wei, C.-K. Oh, S. Kodali, T. Schaeffer, S.-K. Lim, R. Rai, Z.-H. Mai, J. Lam
Published online by Cambridge University Press: 09 October 2013, pp. 1122-1123
Influence of ion beam damage by FIB on the RESET amorphous volume observation in phase change random access memory device
J. Oh, Y. Jang, S. Jeon, T. Lee, W. Kim, H. Kim, C. Kim
|
CommonCrawl
|
Arithmetic symplectic geometry via mirror symmetry?
Homological mirror symmetry in the classical setting relates the bounded derived category of coherent sheaves on a Calabi-Yau manifold to the split-closure of the derived Fukaya category of the mirror Calabi-Yau.
In the paper 'Arithmetic mirror symmetry for the 2-torus', authors construct a $\mathbb{Z}$-linear equivalence between exact Fukaya category of a punctured torus and the category of perfect complexes of coherent sheaves on the central fiber of Tate curve.
Intuitively, this statement is only a half of the HMS conjecture since we also would like to have an equivalence between the 'Fukaya category of the central fiber of Tate curve' and the bounded derived category of coherent sheaves on the punctured elliptic curve.
The question is: is it possible to somehow define the notion of Fukaya category for the central fiber of the Tate curve (which is a curve in $\mathbb{P}^2(\mathbb{Z})$, not a smooth manifold)? If it is possible, can we construct an equivalence to the derived bounded category of coherent sheaves on the punctured elliptic curve?
ag.algebraic-geometry arithmetic-geometry sg.symplectic-geometry
Yes. Recently Auroux (jointly with Efimov and Katzarkov) has proposed a definition of the Fukaya category for trivalent configurations of rational curves. If $\Sigma_g$ is a genus $g$ Riemann surface with $g\geq2$, then its mirror is a trivalent configuration of $3g-3$ rational curves meeting in $2g-2$ triple points. In the case when $g=1$, a nodal curve is obviously not a trivalent configuration. However, one can replace it with a nodal curve with an affine line attached at the node, which then enables one to make sense of its Fukaya category. The objects of this version of Fukaya category are embedded graphs with trivalent vertices at the triple points, and morphisms are linear combinations of intersection points as in usual Floer theory. Note that since the punctured elliptic curve $E^\circ:=E\setminus\{pt\}$ is not compact, there are two versions of derived categories of coherent sheaves, namely one can consider either the usual $D^b\mathit{Coh}(E^\circ)$ or its compactly supported version $D^b\mathit{Coh}_\mathit{cpt}(E^\circ)$. On the mirror side, this determines whether one needs to do wrapping on the affine line component or not.
YHBKJYHBKJ
How far can one get with the Gross-Siebert program?
Noncommutative Fukaya category?
Concerning the homological mirror symmetry conjecture
Derived categories of arithmetic schemes?
Relation between mirror symmetry, homological mirror symmetry, and SYZ conjecture
Multiple mirrors phenomenon from SYZ and HMS perspective
Mirror symmetry for singular Lagrangian torus fibrations
|
CommonCrawl
|
Functional and effective whole brain connectivity using magnetoencephalography to identify monozygotic twin pairs
Heritability and interindividual variability of regional structure-function coupling
Zijin Gu, Keith Wakefield Jamison, … Amy Kuceyeski
CHIMGEN: a Chinese imaging genetics cohort to enhance cross-ethnic and cross-geographic brain research
Qiang Xu, Lining Guo, … for the CHIMGEN Consortium
Using multiple short epochs optimises the stability of infant EEG connectivity parameters
Rianne Haartsen, Bauke van der Velde, … Chantal Kemner
Diversity of meso-scale architecture in human and non-human connectomes
Richard F. Betzel, John D. Medaglia & Danielle S. Bassett
Structural and functional brain scans from the cross-sectional Southwest University adult lifespan dataset
Dongtao Wei, Kaixiang Zhuang, … Jiang Qiu
Chinese adult brain atlas with functional and white matter parcellation
Jingwen Zhu & Anqi Qiu
BrainSpace: a toolbox for the analysis of macroscale gradients in neuroimaging and connectomics datasets
Reinder Vos de Wael, Oualid Benkarim, … Boris C. Bernhardt
Changes in electrophysiological static and dynamic human brain functional architecture from childhood to late adulthood
N. Coquelet, V. Wens, … X. De Tiège
Age-related changes of whole-brain dynamics in spontaneous neuronal coactivations
Guofa Shou, Han Yuan, … Lei Ding
M. Demuru1,
A. A. Gouw1,2,
A. Hillebrand2,
C. J. Stam2,
B. W. van Dijk2,
P. Scheltens1,
B. M. Tijms1,
E. Konijnenberg1,
M. ten Kate1,
A. den Braber1,3,
D. J. A. Smit3,4,
D. I. Boomsma3 &
P. J. Visser ORCID: orcid.org/0000-0001-8008-97271
Scientific Reports volume 7, Article number: 9685 (2017) Cite this article
Resting-state functional connectivity patterns are highly stable over time within subjects. This suggests that such 'functional fingerprints' may have strong genetic component. We investigated whether the functional (FC) or effective (EC) connectivity patterns of one monozygotic twin could be used to identify the co-twin among a larger sample and determined the overlap in functional fingerprints within monozygotic (MZ) twin pairs using resting state magnetoencephalography (MEG). We included 32 cognitively normal MZ twin pairs from the Netherlands Twin Register who participate in the EMIF-AD preclinAD study (average age 68 years). Combining EC information across multiple frequency bands we obtained an identification rate over 75%. Since MZ twin pairs are genetically identical these results suggest a high genetic contribution to MEG-based EC patterns, leading to large similarities in brain connectivity patterns between two individuals even after 60 years of life or more.
Inter-individual variability in functional brain connectivity has been associated with inter-individual differences in measures of cognitive functioning1, gender2, ageing3, 4 and the presence of brain pathology5. Despite the observation that resting-state networks (RSNs) have a topographic core that is homogeneous between individuals6,7,8,9, recent papers have shown that it is possible to reliably identify single-subjects based on their functional connectivity patterns as measured with functional magnetic resonance imaging (fMRI)10, 11. Therefore, these patterns can be regarded as 'functional connectivity fingerprints' (FCFs) or connectivity profiles. In this study, we considered four different ways of defining connectivity: three undirected (functional) and one directed (effective). Functional and effective connectivity capture two different aspects of interaction between time-series. The former evaluates the statistical interdependency between two time-series without giving any information about the influence of one time-series on the other, whereas effective connectivity captures the influence of one signal on the other12.
We investigated whether the functional (FC) or effective (EC) connectivity patterns of one monozygotic (MZ) twin could be used to identify the co-twin among a sample MZ twin pairs. MZ twins arise from a single fertilized egg and share all their genetic material, i.e. have the same genetic background13. We therefore aimed to test the similarity on the FCFs and effective connectivity fingerprints (ECFs) using magnetoencephalography (MEG).
Previous studies in twins support the genetic influence on whole-brain summary statistics such as the average functional connectivity across all brain regions or measures of functional brain network topology as assessed by electroencephalography (EEG), MEG and fMRI14,15,16,17,18,19,20,21 with estimates of heritability varying between 42 and 72%. Importantly, twin studies analyzing the contribution of genetics, shared environment, and unique environment to functional connectivity by comparing overlap in connectivity within MZ and dizygotic (DZ) twin pairs concluded that the concordance between twins was mainly due to genetic factors and not the shared environment14,15,16. This indicates that resemblance in brain connectivity within MZ twins can be attributed to genetics, rather than a shared environment.
Here, we investigated the resemblance between MZ twin pairs in functional and effective connectivity, i.e. going beyond whole-brain summary statistics14,15,16,17,18,19,20,21,22. If genes indeed have a major effect on connectivity profiles, then MZ twin pairs should be identifiable from these profiles. Furthermore, to better understand the genetic effect on the subject-specific connectivity profiles, we extended our analysis by removing the connectivity patterns that are shared between individuals ('functional or effective connectivity backbone'), and attempted to identify MZ twin pairs on the basis of only their unique functional or effective connectivity patterns. The functional or effective connectivity backbone most likely underlies highly conserved functions23, whereas the residual functional or effective connectivity patterns express the inter-individual variability and the influence of genetic and shared environmental factors on this latter is still unknown.
We hypothesized that source-level MEG FCFs and ECFs, computed from resting-state data, enable reliable identification of MZ twin pairs and can therefore be regarded as fingerprints. We assessed the discriminative effectiveness of MEG fingerprints computed with three functional connectivity measures (phase lag index24, amplitude envelope correlation25,26,27 with and without signal-leakage correction) and one effective connectivity estimate (directed phase transfer entropy28, 29) in order to capture different coupling modes30: phase relations, amplitude envelope relations, and directed interactions, respectively.
We analyzed data from 32 monozygotic twin pairs (64 subjects in total) from the Netherlands Twin Register (NTR31) who take part in the EMIF-AD preclinAD study (see methods section) on predictors for Alzheimer's disease biomarkers in cognitively healthy elderly subjects. MEG data consisting of 5 minutes no-task eyes-closed resting-state recordings were source-reconstructed to 78 cortical regions (regions of interest, ROIs) of the automated anatomical labeling (AAL) atlas32. Functional (FC) and effective (EC) connectivity was assessed between all pairs of regions with Amplitude Envelope Correlation (AEC), Amplitude Envelope Correlation with leakage correction (AEC-c), Phase Lag Index (PLI) and directed Phase Transfer Entropy (dPTE) in 5 frequency bands (delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and lower gamma (30–48 Hz)). For every subject we obtained an average FC or EC matrix for every FC or EC measure in every frequency band, resulting in FCFs and a ECF for every frequency band (see Fig. 1a).
Example of functional connectivity fingerprints. For every subject, and for every frequency band, a FC matrix was computed using a FC measure (a). Every matrix contains ranked values for visualization purposes. From every FC matrix the lower triangular entries, consisting of the pair-wise connectivity between all possible combinations of ROIs (i.e. \((\frac{78\ast 77}{2})=3003\)), were extracted. These entries correspond to the FCF for a frequency band. An example of a FCF in the delta band is shown in the bottom plot in (a) where the y-axis shows FC values and the x-axis are the ROI pairs. A FCF was computed for every frequency band and these were pooled together (delta FCF in blue, theta FCF in magenta, alpha FCF in green, beta FCF in red and gamma FCF profile in yellow) to obtain a single global FCF (b), which was used for the identification analysis. An example of global FCFs for two different twin pairs ((A-A′) and (B-B′)) is shown in (c). Visual comparison of the FCFs suggest that within a twin pair FCFs are more similar than between unrelated subjects (see green ellipsoids in (c)). The same strategy was used to obtain ECFs.
In order to combine the information from different bands, for every FC and EC measure we pooled the FCF or ECF of the different frequency bands in one global FCF or ECF (see Fig. 1b), which was used for the identification analysis. The similarities between FCFs or ECFs were quantified with Spearman's correlation coefficients that were subsequently converted into distance scores. The identification analysis consisted of an iterative process, in which at every step a subject was selected and his or her FCF or ECF was compared with the FCFs or ECFs of all other subjects. If the lowest distance score was obtained for the subject's co-twin we considered it a successful outcome (a match), otherwise a miss. The rate of successful identification across all individuals was calculated as the ratio between the number of successful outcomes and the total number of subjects. Statistical significance of the observed success rate was assessed by means of permutation testing33.
Conventionally, statistical inference of functional connectivity patterns are performed at group level, which has shown reliable connectivity patterns6, 9, 25, 26, 34, 35. However, within these patterns it is possible to identify at least two components: a common pattern that is shared among subjects (low variance across subjects) and unique patterns (high variance across subjects) representing the individual connectivity patterns that contribute to the inter-subject variability36, 37. We applied singular value decomposition (SVD) to the individual FCFs and ECFs (independently for every FC or EC measure) in order to disentangle these two components and repeated the identification analysis using the pooled FCFs and ECFs from which the shared pattern was removed.
In order to understand the relative contribution of each frequency band to the identification performances we repeated the identification analyses using the FCFs or ECFs of each band separately, again with and without the shared pattern.
Subject characteristics
Thirty-two monozygotic twin pairs (21 female pairs) with a mean age of 68.13 (±7.93 standard deviation) years participated in the study. All participants were cognitively normal and scored 25 or higher on the Mini Mental State Examination (mean and standard deviation 28.84 ± 1.16). The mean duration of education was 15.08 years (±4.43 standard deviation). Mean education score was 5.16 (±0.95 standard deviation) based on the Dutch Classification System by Verhage (1964) consisting of a 7-point scale, ranging from primary education not finished (1) to master degree (7).
Identification analyses using FCFs or ECFs
Identification success rates obtained combining the information from all frequency bands are shown in Table 1 for both the original data and after removal of the common pattern. For every measure, the highest significant success rate was obtained after removal of the common pattern: dPTE success rate was over 75% (\(49/64=76.6 \% ,\,p=0.001)\), AEC was higher than 50% (\(34/64=53.1 \% ,\,p=0.001)\), AEC-c was close to 40% (\(24/64=37.5 \% ,\,p=0.001)\) and PLI was around 9% (\(6/64=9.4 \% ,\,p=0.002)\). Success rates obtained with the common pattern included were lower, namely: dPTE (\(37/64=57.8 \% ,\,p=0.001)\), AEC (\(23/64=35.9 \% ,\,p=0.001)\) and AEC-c (\(15/64=23.4 \% ,\,p=0.001)\), with the PLI result not being significant. Figure 2 shows distance score histograms for MZ twin pair and genetically unrelated subjects for the case where the common pattern was removed. Note that for the dPTE the score distributions for twin pairs and unrelated pairs (identification rate 76.6%,) were further apart compared to the distributions obtained when using other FC measure metrics (see Figure S1 in the supplementary information file for all the fingerprint comparisons).
Table 1 Twin identification success rate using the global FCF or ECF based on different measures.
Distance score histograms for MZ twin pairs and for genetically unrelated subjects. For every connectivity measure (AEC, AEC-c, PLI and dPTE) the distance score distributions are displayed. These distance scores were computed using the global FCFs or ECFs after the removal of the shared pattern. Note the differences in scales for the x-axes. Also note that score distributions obtained with dPTE were further apart compared to the distributions obtained when using other connectivity metrics.
For every FC and EC measure, twin pair identification was also performed using the FCF or ECF for each individual frequency band in order to assess the discriminative power of every frequency band alone. Results are shown in Table 2 both for the original data and after the removal of the common pattern. Identification success rates varied across FC and EC measures and frequency bands: with the original data the highest significant success rate, although moderate, was obtained for dPTE in the alpha band (\(23/64=35.9 \% ,\,p=0.001)\). Success rates higher than 25% were observed for dPTE in the beta (\(17/64=26.6 \% ,\,p=0.001)\) and theta (\(16/64=25.0 \% ,\,p=0.001)\) bands, and for AEC without correction in the beta band (\(17/64=26.6 \% ,\,p=0.001)\).
Table 2 Twin identification success rates for different FC or EC measures in individual frequency bands: Amplitude Envelope Correlation without (AEC) and with correction (AEC-c), directed Phase Transfer Entropy (dPTE) and Phase Lag Index (PLI).
Although we also observed significant identification rates for the corrected AEC (alpha and beta bands), these rates were lower compared to when using AEC without leakage correction. The only significant success rate for PLI was in the alpha band (\(5/64=7.8 \% ,\,p=0.004)\).
Across measures and frequency bands the identification performances generally increased after removal of the common pattern. The best results for most measures were observed in theta, alpha and beta bands. For AEC without correction the success rate improved for all the frequency bands, with the highest value (\(31/64=48.4 \% ,\,p=0.001)\) in the beta band and the lowest value (\(15/64=23.4 \% ,\,p=0.001)\) in the delta band. Likewise, the success rates for AEC with leakage correction, with highest and lowest value in the alpha (\(20/64=31.2 \% ,\,p=0.001)\) and gamma band (4/64 = 6.2%, p = 0.018), respectively. Again, PLI success rates were generally low compared to the other FC and EC measures, with significant, yet low, success rates only in the beta (\(4/64=6.2 \% ,\,p=0.014)\) and gamma (\(6/64=9.4 \% ,\,p=0.002)\) band. The significant success rates for dPTE increased especially for the alpha (\(26/64=40.6 \% ,\,p=0.001)\) and beta (\(24/64=37.5 \% ,\,p=0.001)\) bands.
In summary, the best identification was obtained for the alpha and beta band after removal of the common pattern, especially for the dPTE and uncorrected AEC.
In this study, we investigated the resemblance between twins from 32 monozygotic pairs using MEG FC and EC patterns. We showed that it is possible to identify which MZ twins belong to the same pair from a pool of subjects exploiting the EC patterns. The high success rate obtained (over 75%) indicated that MEG EC patterns act as a functional fingerprint. Despite the observation that resting-state FC patterns are shared between subjects6, 9, 25, 26, 34, 35, we observed, on top of this common pattern, that they also provided reliable information to identify MZ twin pairs from unrelated pairs. These observations indicated that MEG EC patterns are genetic traits.
Although previous studies (see refs 38 and 39 for reviews) have demonstrated high heritability of FC, such analyses were performed on summary whole-brain statistics and not on the full FC or EC patterns and so it was unclear whether this would allow for MZ twin pair identification.
Heritability of FC during a resting-state paradigm has been previously assessed with fMRI in specific sub-networks such as default-mode network19 (h2~40%) and extended to other resting-state networks20. Moreover network topology has also been shown to be heritable18, 21, 22 (h2 42%-60%). Heritability of FC has been observed in EEG studies14,15,16,17, however these studies adopted either a whole-brain statistic (i.e. averaging across all pair-wise connectivity values or an overall network topological measure) or per-electrode statistic (i.e. averaging connectivity per recording site). Here we showed that EC patterns carried sufficient heritable information to allow for an ~75% correct identification.
FC and EC can be measured in different ways, which is likely to influence results. Therefore, we investigated the influence of different FC and EC measures on twin identification. Envelope correlation measures (like AEC and AEC-c) and phase-based measures (like PLI) are related to distinct coupling mechanisms30, characterized by different dynamics and possibly involved in different cognitive processes30. We further extended the study of neuronal synchronization considering causal relationships (effective connectivity) between neuronal populations. Understanding what is the influence that a neuronal population exerts on another one is crucial to decipher cognitive processes12. The use of a directed measure such as dPTE28, 29 allowed the estimation of such causal relationships (in Granger and Wiener terms).
The best identification rates using the pooled FCFs or ECFs (both with the original data and after the removal of the shared pattern, identification rates 57.8% and 76.6% respectively) were obtained with dPTE suggesting that the directionality of interactions provided reliable information for the identification. We speculate that both structural and functional aspects play a role in these results. Recently, the mesoscale structural connectome of the mouse was disclosed40 and a striking finding was the asymmetry in the connectivity profile (i.e. difference in in- and out-going connections). This asymmetry was a key concept exploited by Henriksen colleagues41 to build a growing model that successfully reproduces the mouse connectome with its directed connectivity. We argue that asymmetry of connections may be a fundamental property of mammalian connectome42, 43 even though a comprehensive blue-print about afferent and efferent connections is lacking in humans44. Measures such as dPTE may be more prone to detect the influence of such anatomical asymmetries than undirected measures and this extra information is beneficial for identification. Indeed, structural connections influence FC and EC patterns however they do not coincide. Different complex dynamical phenomena can arise on top of a fixed structural network and there is theoretical evidence45 that effective connectivity12 may result from self-organization of brain rhythm activity. In addition, the advantages of transfer entropy (TE) measures in detecting the complex dynamics has recently been reported46. Estimates of connectivity patterns through the dPTE probably have richer information content that allow outperforming undirected FC measures (i.e. AEC, AEC-c and PLI) for identification.
Each measure has different strengths and weaknesses. Specifically, the activity coming from a neuronal source can be detected by different sensors (field spread), resulting in spurious estimates of FC between sensors. This problem is mitigated, yet not completely solved47, in source-space, where it is commonly referred as signal-leakage48, 49. Interpretation of FC estimates is therefore problematic if metrics are used that do not address this problem. AEC does not correct for such spurious estimates of functional connectivity, while AEC-c, PLI and dPTE do address this problem directly24, 25, 28, 50. Although handling this problem aids interpretation, our results showed that identification rate is generally lower when correcting for field spread. For example, we observed a decrease from an identification rate of 53.1% using AEC without correction to 37.5% after the correction (AEC-c) using the FC profiles without the shared patterns. The same it is true for the original data: identification rate decreased from 35.9% with AEC to 23.4% with AEC-c. Moreover, PLI showed only one significant result with a success rate of 9.4%. Together, these findings for AEC, AEC-c and PLI suggested that zero-lag interactions provide valuable information for identification. The drop in performance for AEC after leakage correction and the poor results for PLI could be caused by ignoring the zero-lag interactions, which might have included true interactions. Conversely, as reported by Colclough and colleagues51, the higher performance for uncorrected connectivity metrics could be related to the fact that these measures may reflect trivial properties arising from the spatial configuration of sources, which hinder interpretability.
Moreover, the decrease in performance for AEC after correction could be related to the signal to noise level: power in orthogonalized time-series is an order of magnitude lower than before orthogonalization50, 51. Furthermore, part of the success of uncorrected AEC could be due to the fact that band power itself is highly heritable52.
In Table 2, it can be observed that uncorrected AEC (after removal of the shared pattern) shows higher (or equal) identification rate compared to dPTE for all frequency bands, however when all bands are combined dPTE outperforms AEC (76,6% vs 53,1%). An explanation for this result may be found in the observation that dPTE contains potentially more independent information in different frequency bands than uncorrected AEC. Figure S2 shows that the connectivity matrices for uncorrected AEC are fairly similar across frequency bands, whereas for the dPTE matrices differ more across the frequency bands. This may influence the overall identification rate when pooling the bands together: in the case of AEC redundant information is pooled together, whereas potentially independent information is pooled when using dPTE, leading to a better identification rate.
The removal of the shared pattern across subjects from the individual FCFs or ECFs improved the identification rates for each connectivity measures. Recently, Hawrylycz23 and colleagues demonstrated that a shared FC pattern across individuals relate to consistent gene expression signatures. Likely, this FC pattern underlies highly conserved functions. Since we aimed to analyze at individual level data, we decided to discard this shared FC or EC pattern across individuals in order to assess the influence of familial factors on the residual FC or EC patterns. It might be argued that the removal of the shared pattern across subjects from individual functional connectivity profiles reduces the generalizability of the results because this transformation is group-dependent. However, the recent literature on the consistency and repeatability of MEG functional connectivity patterns at group-level9, 37, 51 supports the use of this approach in our study. We showed that also the FC and EC patterns that are specific to the individual are strongly influenced by familial factors.
Recently, Finn and colleagues10 reported high identification rates with fMRI fingerprints (~95% and ~99% using whole brain and sub-network fingerprints respectively in resting-state condition) without performing any removal of a shared FC pattern. However, our results support that the removal of this common pattern enhances identification rate. The high identification rate without removal of a shared pattern in Finn's study may be related to two main differences with the present study: first, they matched the same subject while we aimed to identify one subject using the fingerprint of its twin; second, the higher dimension of their fingerprint (35778 entries in the feature vector obtained from 268 ROIs) compared to our fingerprint (15015 entries in the feature vector obtained from 78 ROIs times 5 frequency bands). The effect of the fingerprint size is confirmed by a result reported as well in Finn's study. They recomputed the identification rates using a different parcellation scheme with fewer ROIs (68 ROIs, 2278 entries in the feature vector) and the identification rates dropped (~89% and ~75% using whole brain and sub-network fingerprints respectively in resting-state condition). We recomputed as well the identification rates using a parcellation scheme with a higher resolution (see Tables S3 and S4). By using a higher resolution atlas, the identification rates for both the original data and after the removal of a shared pattern improved or remained stable for every connectivity measure (compared to using the AAL atlas).
Although the identification rate generally improved with the high resolution parcellation scheme, the best identification rate (dPTE after removal of the shared pattern, pooled across frequency bands) did not improve (76.6%). This observation can probably be ascribed to the resolution of the beamformer approach in combination with resting-state MEG data, which represents an upper bound on the independent information that can be reconstructed.
Other than for PLI, using the frequency-specific FCFs independently for the identification analysis gave the best results for the alpha and beta band (both with and without the shared patterns). These results are in accordance with previous EEG studies on heritability15,16,17, 52 and a recent study in which the high reliability of FC in these bands was reported37. In addition, although the whole power spectrum was reported to be heritable52, 53, power in alpha and beta bands exhibited higher values of heritability. We speculate that this could be related to a distinct genetic origin of different brain rhythms54. A recent work by Richiardi et al.55 emphasizes the relationship between FC patterns and genes, showing how genes associated to ion channels and synaptic function control the spatial organization of functional resting-state networks in fMRI.
In our study we could not disentangle genetic from shared environment sources of variance56 because the study included MZ twins only. However, previous research14,15,16 has shown that shared environmental factors are negligible in FC estimates.
Our results are related to an elderly population and so future studies should investigate how these results would generalize to younger individuals.
Another limitation of this study is the small number of subjects, which could have inflated identification rates. Even though the best identification rate (~75%) may seem low compared to other studies10 we would like to emphasize that in our study we matched different subjects, which is a harder problem than identifying the same subject between different observations (i.e. recordings), as was the case in these previous reports. We did not consider different source reconstruction strategies for the solution of the inverse problem. Recently, it has been shown57 that results in source space are affected by different choices during the analysis pipeline (i.e. inverse model, type of connectivity measure and different software implementations). Hence, although our results were obtained using a tried-and-tested analysis pipeline29, 47, 58,59,60,61 that has also been implemented by other groups62, 63 future studies should reproduce our findings using alternative approaches.
Finally, it would be interesting to further investigate to what extent different resting-state networks6,7,8 are related to either the shared pattern across individuals or the residual connectivity patterns. This would help to understand which connectivity patterns are related to conserved functions23 or inter-individual variability. Recently, it was shown that fronto-parietal networks are mostly related to inter-individual variability, and this improved the identification rate compared to a whole brain approach10. However, our main goal was to show, at a global level, if MZ twin identification was feasible using MEG effective and functional connectivity fingerprints. Further studies focusing on the relative influence of different sub-network on the identification rates are desirable.
We conclude that MEG-based effective connectivity patterns can be considered as fingerprints that are highly specific to individuals, under strong genetic influence, and might be good candidates to study the influence of genetic variation on brain functioning and ultimately inter individual differences in behavior and/or psychopathology.
No table of contents entries found
The sample for this study comes from the Netherlands Twin Register and takes part in an ongoing study on heritability of Alzheimer's disease risk factors in cognitively healthy elderly subjects, which is part of the Innovative Medicine Initiative (IMI) European Information Framework for Alzheimer Disease project (EMIF-AD). The aim of the PreclinAD study is to collect 300 cognitively normal elderly participants with ages ranging from 60 to 100 years. Subjects are recruited from the Manchester and Newcastle Aging Study (MNAS) (n = 100) and, as a twin sub-study, from the Netherlands Twin Register (NTR) (n = 200; 100 monozygotic twin pairs).
In this study data from the first 32 NTR monozygotic twin pairs enrolled in the PreclinAD study were analyzed. This subset was used because these subjects had an MEG recording available at the time when analysis was performed.
Inclusion criteria were age 60 years and older, Telephone Interview for Cognitive Status modified (TICS-m) > 2264; Geriatric Depression Scale (GDS) (15 item) < 1165; Consortium to Establish a Registry for Alzheimer's Disease (CERAD) 10 word list immediate and delayed recall (>−1.5 SD of age adjusted normative data)66; Clinical Dementia Rating (CDR) scale of 0 with a score on the memory sub-domain of 067.
Exclusion criteria included neurological and psychiatric diseases such as mild cognitive impairment, brain tumor, brain infection, schizophrenia, bipolar disorder, Parkinson's disease, epilepsy; other systemic illness/co-morbidity (e.g. thyroid disease, uncontrolled diabetes mellitus, cancer), recreational drug use, alcohol consumption (>35 units per week), and use of medication that may influence cognition (e.g. benzodiazepine, lithium carbonate, antipsychotics including atypical agents, antidepressants, or Parkinson's disease medicines).
This study and all procedures were carried out in accordance with a protocol approved by the ethical board of the VUmc (Medische Ethische Toetsingscommissie VUmc, project number 2014.210, approval date 2014-11-27). All subjects provided written informed consent.
Neuropsychological assessment
Subjects underwent extensive neuropsychological testing and questionnaires. The Mini-Mental State Examination was administered to assess cognitive status68.
MRI acquisition
Anatomical whole brain scans were obtained using a 3.0T MR scanner (Philips Achieva). Isotropic structural 3D T1-weighted images were acquired using a sagittal fast field echo sequence (repetition time = 7.9 ms, echo time = 4.5 ms, flip angle = 8°, 1 mm × 1 mm × 1 mm voxels).
MEG acquisition
MEG data were recorded using a 306-channel whole-head MEG system (Elekta Neuromag Oy, Helsinki, Finland) while participants were in supine position inside a magnetically shielded room (Vacuumschmelze, Hanau, Germany). MEG recordings were performed before the MRI scan. Magnetic fields were recorded at a sample frequency of 1250 Hz, with an anti-aliasing filter of 410 Hz and a high-pass filter of 0.1 Hz. The protocol consisted of 5 minutes in a eyes-closed resting-state condition (i.e. not performing any task), followed by 2 minutes in an eyes-open condition, and then again 5 minutes in an eyes-closed condition. Only the first 5 minutes of eyes-closed data were used for the analysis.
The head position relative to the MEG sensors was recorded continuously using the signals from five head-localization coils. The head-localization coil positions were digitized, as well as the outline of the participants scalp (~500 points), using a 3D digitizer (Fastrak, Polhemus, Colchester, VT, USA).
Channels that were malfunctioning during the recording, for example due to excessive noise, were identified by visual inspection of the data, and removed (median = 9, range 2–13) before applying the temporal extension of Signal Space Separation (tSSS) in MaxFilter software (Elekta Neuromag Oy, version 2.2.15)69,70,71. The tSSS filter was used to remove artifacts that SSS without temporal extension would fail to discard, typically from noise sources near the head, using a subspace correlation limit of 0.9 and a sliding window of 10 seconds.
The digitized scalp surfaces of all subjects were co-registered to their structural MRIs using a surface-matching procedure, with an estimated resulting accuracy of 4 mm72. A single sphere was fitted to the outline of the scalp as obtained from the co-registered MRI, which was used as a volume conductor model for the beamformer approach described below.
An atlas-based beamforming approach47 was used to project the MEG data to 78 regions of interest (ROIs) in source-space, using the AAL atlas32. For a detailed description we refer the reader to (Hillebrand et al. 2016). The broadband (0.5–48 Hz) time-series at sensor level were projected through the normalized broadband beamformer weights for each ROI's centroid in order to obtain a time-series for each ROI. From these time-series, the first 18 epochs each containing 16384 samples (13.10s), were selected73, 74.
These time-series were then downsampled to a sample frequency of 312 Hz (yielding epochs of 4096 samples each) and filtered in classical EEG/MEG frequency bands (delta (0.5–4 Hz), theta (4–8 Hz), alpha (8–13 Hz), beta (13–30 Hz), and lower gamma (30–48 Hz)), using an offline discrete FFT filter that does not distort the phases75.
Functional and effective connectivity
Pairwise FC and EC was estimated between each of the 78 ROIs for each frequency band using three FC and one EC measure listed below. All four measures are based on the computation on the Hilbert transform to obtain the analytic signal which was used estimate the envelope or the instantaneous phase. The measures used to estimate FC and EC are (see supplementary information file for details):
Amplitude Envelope Correlation (AEC), which detects amplitude-based coupling among brain signals. AEC captures interactions between two time-series computing the correlation between their envelopes27.
The leakage-corrected Amplitude Envelope Correlation (AEC-c), where time-series were orthogonalized by means of linear regression analysis before estimating functional connectivity with AEC50. This correction was performed on the band-filtered time series in time domain with the aim to reduce trivial spurious correlations induced by signal leakage. Orthogonalization of two time series can be done in two directions (i.e. given X and Y as time series, X can be regressed out from Y, and Y can be regressed out from X). For every pair of time series we computed the orthogonalization in both directions, and then we averaged the AEC values computed on the orthogonalized time series for the two directions.
Phase Lag Index (PLI)24 is a measure of the asymmetry of the distribution of phase differences between two time series. It reflects the consistency of phase relations between two time series, avoiding zero-lag (mod π) phase coupling and thereby minimizing the influence of spurious correlation induced by leakage.
Phase Transfer Entropy (PTE) is a directional phase-based measure that estimates information flow on the basis of the transfer entropy between the time series of the instantaneous phases28, 76. We used the implementation of dPTE as described in29, which is bounded in the range 0.5 < dPTExy ≤ 1 when information flows preferentially from a time series X to time series Y. However when information flows preferentially toward X from Y, 0 ≤ dPTExy < 0.5. In the case of no preferential direction of information flow, dPTExy = 0.5.
Subsequent analysis steps were performed independently for every FC and EC measure and for all frequency bands.
The calculation of FC and EC measures resulted in 18 (one for each epoch) FC or EC matrices for each subject, which were then averaged per subject. No threshold was applied to the connectivity matrices.
For each subject, a FCF and ECF was extracted from the average FC or EC matrix by vectorizing its lower triangular part not including the diagonal. This vector therefore has 3003 entries \((\frac{78\ast 77}{2})\), which represent the FC or EC values between every pair of ROIs. Usually a directed connectivity matrix will not be symmetrical; however, due to the way dPTE is defined the two triangular parts are trivially related: the dPTE for a pair of regions is computed by normalizing the PTE values by the sum of the PTE values in both directions (\(dPTE=\frac{PT{E}_{xy}}{{{\rm{PTE}}}_{{\rm{xy}}}+{{\rm{PTE}}}_{{\rm{yx}}}}\), see supplementary information for details), which forces the upper and lower triangular part of the dPTE matrix to add up to one \((dPT{E}_{xy}=1-dPT{E}_{yx})\). Hence even for the dPTE matrix we considered only the lower triangular part like for the other FC matrices.
We combined for every subject the FCFs or ECFs for the individual frequency bands (independently for every functional connectivity measure). This resulted in a global FCFs or ECFs across bands (15015 entries = 3003 × 5 frequency bands). The global FCF or ECF was then used for the identification analysis using either the FCFs (or ECFs) with or without the shared pattern, and statistical significance was assessed using permutation testing.
Twin pair identification using functional or effective connectivity
Spearman's rank correlation was used as a similarity measure to compare two FCFs or ECFs. In order to deal with negative correlations, similarity scores were transformed into distance scores using the following formula (eq. 1):
$$distance\,score=1-\frac{\begin{array}{c}(correlation\,coefficient+1)\end{array}}{2}$$
The identification analysis consisted of an iterative process, in which at every step a subject was selected and its FCF or ECF was compared with the other subjects' FCFs or ECFs, yielding 63 distance scores. If the lowest distance score was obtained for the subject's co-twin we considered it a successful outcome (a match), otherwise a miss. We repeated this process for all 64 subjects and we calculated the success rate as the ratio between the number matches and total number of subjects.
In order to assess statistical significance of the observed success rate we used permutation testing. At each iteration we first randomly redefined the relatedness between subjects (i.e. subjects were randomly assigned to be twin pairs), next the identification analysis was performed and the success rate computed. This procedure was repeated 1000 times to build a permuted success rate distribution, and the observed success rate was compared against this distribution to determine the p-value.
Singular Value Decomposition
Twin pair identification performances might improve by removing the contribution of the connectivity pattern shared among FC or EC profiles. To this end, all connectivity profiles were pooled in a matrix (M 15015×64) and the singular value decomposition (SVD) of this matrix was computed (eq. 2):
$${{\bf{M}}}_{15015\times 64}={{\bf{U}}}_{15015\times 64}\,{{\bf{S}}}_{64\times 64}\,{{{\bf{V}}}^{{\rm{T}}}}_{64\times 64}$$
where U is the matrix containing the left singular vectors, S contains the singular values, V is the matrix containing the right singular vectors, and T is the matrix transpose. The left singular vector is associated with the largest singular value and represents the dominant common pattern shared among the connectivity profiles. Projecting back the matrix of the connectivity profiles without the contribution of the largest singular value and its corresponding left and right singular vectors yielded the connectivity profiles without the shared pattern.
The computation of the dPTE was performed using Brainwave software (BrainWave, version 0.9.150.6; home.kpn.nl/stam7883/brainwave.html), all the other analyses were performed using in-house MATLAB scripts (MATLAB Release 2012a, The MathWorks, Inc., Natick, Massachusetts, United States) and an additional MATLAB plotting script from http://www.mathworks.com/matlabcentral/fileexchange/ (tight_subplot.m).
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Rosenberg, M. D. et al. A neuromarker of sustained attention from whole-brain functional connectivity. Nature Publishing Group 19, 165–171 (2016).
Gong, G., He, Y. & Evans, A. C. Brain Connectivity: Gender Makes a Difference. Neuroscientist 17, 575–591 (2011).
Meunier, D., Achard, S., Morcom, A. & Bullmore, E. Age-related changes in modular organization of human brain functional networks. Neuroimage 44, 715–723 (2009).
Boersma, M. et al. Growing trees in child brains: graph theoretical analysis of electroencephalography-derived minimum spanning tree in 5- and 7-year-old children reflects brain maturation. Brain Connect 3, 50–60 (2013).
Stam, C. J. Modern network science of neurological disorders. Nat. Rev. Neurosci., doi:10.1038/nrn3801 (2014).
Damoiseaux, J. S. et al. Consistent resting-state networks across healthy subjects. PNAS 103, 13848–13853 (2006).
Fox, M. D. & Raichle, M. E. Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nat. Rev. Neurosci. 8, 700–711 (2007).
Rosazza, C. & Minati, L. Resting-state brain networks: literature review and clinical applications. Neurol Sci 32, 773–785 (2011).
de Pasquale, F. et al. Temporal dynamics of spontaneous MEG activity in brain networks. PNAS 107, 6040–6045 (2010).
Article ADS PubMed PubMed Central Google Scholar
Finn, E. S. et al. Functional connectome fingerprinting: identifying individuals using patterns of brain connectivity. Nature Publishing Group 18, 1664–1671 (2015).
Miranda-Dominguez, O. et al. Connectotyping: model based fingerprinting of the functional connectome. PLoS ONE 9, e111048 (2014).
Friston, K. J. Functional and effective connectivity: a review. Brain Connect 1, 13–36 (2011).
Boomsma, D., Busjahn, A. & Peltonen, L. Classical twin studies and beyond. Nature Reviews Genetics 3, 872–882 (2002).
Posthuma, D. et al. Genetic components of functional connectivity in the brain: The heritability of synchronization likelihood. Hum Brain Mapp 26, 191–198 (2005).
Smit, D. J. A., Stam, C. J., Posthuma, D., Boomsma, D. I. & De Geus, E. J. C. Heritability of 'small-world' networks in the brain: A graph theoretical analysis of resting-state EEG functional connectivity. Hum Brain Mapp 29, 1368–1378 (2008).
Smit, D. J. A. et al. Endophenotypes in a Dynamically Connected Brain. Behav Genet 40, 167–177 (2010).
Schutte, N. M. et al. Heritability of Resting State EEG Functional Connectivity Patterns. Twin Res Hum Genet 16, 962–969 (2013).
Sinclair, B. et al. Heritability of the network architecture of intrinsic brain functional connectivity. Neuroimage 121, 243–252 (2015).
Glahn, D. C. et al. Genetic control over the resting brain. Proc. Natl. Acad. Sci. USA 107, 1223–1228 (2010).
Fu, Y. et al. Genetic influences on resting-state functional networks: A twin study. Hum Brain Mapp 36, 3959–3972 (2015).
Fornito, A. et al. Genetic influences on cost-efficient organization of human cortical functional networks. J. Neurosci. 31, 3261–3270 (2011).
van den Heuvel, M. P. et al. Genetic control of functional brain network efficiency in children. Eur Neuropsychopharmacol 23, 19–23 (2013).
Hawrylycz, M. et al. Canonical genetic signatures of the adult human brain. Nat Neurosci 18, 1832–1844 (2015).
Stam, C. J., Nolte, G. & Daffertshofer, A. Phase lag index: Assessment of functional connectivity from multi channel EEG and MEG with diminished bias from common sources. Hum Brain Mapp 28, 1178–1193 (2007).
Hipp, J. F., Hawellek, D. J., Corbetta, M., Siegel, M. & Engel, A. K. Large-scale cortical correlation structure of spontaneous oscillatory activity. Nat Neurosci 15, 884–890 (2012).
de Pasquale, F. et al. A Cortical Core for Dynamic Integration of Functional Networks in the Resting Human Brain. Neuron 74, 753–764 (2012).
Bruns, A., Eckhorn, R., Jokeit, H. & Ebner, A. Amplitude envelope correlation detects coupling among incoherent brain signals. Neuroreport 11, 1509–1514 (2000).
Lobier, M., Siebenhuhner, F., Palva, S. & Palva, J. M. Phase transfer entropy: A novel phase-based measure for directed connectivity in networks coupled by oscillatory interactions. Neuroimage 85, 853–872 (2014).
Hillebrand, A. et al. Direction of information flow in large-scale resting-state networks is frequency-dependent. Proc. Natl. Acad. Sci. USA 113, 3867–3872 (2016).
Engel, A. K., Gerloff, C., Hilgetag, C. C. & Nolte, G. Intrinsic Coupling Modes: Multiscale Interactions in Ongoing Brain Activity. Neuron 80, 867–886 (2013).
Boomsma, D. I. et al. Netherlands Twin Register: A focus on longitudinal research. Twin Res 5, 401–406 (2002).
Tzourio-Mazoyer, N. et al. Automated Anatomical Labeling of Activations in SPM Using a Macroscopic Anatomical Parcellation of the MNI MRI Single-Subject Brain. Neuroimage 15, 273–289 (2002).
Nichols, T. E. & Holmes, A. P. Nonparametric permutation tests for functional neuroimaging: A primer with examples. Hum Brain Mapp 15, 1–25 (2002).
Fox, M. D. et al. The human brain is intrinsically organized into dynamic, anticorrelated functional networks. PNAS 102, 9673–9678 (2005).
Brookes, M. J. et al. Investigating the electrophysiological basis of resting state networks using magnetoencephalography. PNAS 108, 16783–16788 (2011).
Mueller, S. et al. Individual Variability in Functional Connectivity Architecture of the Human Brain. Neuron 77, 586–595 (2013).
Wens, V. et al. Inter- and Intra-Subject Variability of Neuromagnetic Resting State Networks. Brain Topogr 27, 620–634 (2014).
Thompson, P. M., Ge, T., Glahn, D. C., Jahanshad, N. & Nichols, T. E. Genetics of the connectome. Neuroimage 80, 475–488 (2013).
Jansen, A. G., Mous, S. E., White, T., Posthuma, D. & Polderman, T. J. C. What Twin Studies Tell Us About the Heritability of Brain Development, Morphology, and Function: A Review. Neuropsychol Rev 25, 27–46 (2015).
Oh, S. W. et al. A mesoscale connectome of the mouse brain. Nature 508, 207–214 (2014).
Henriksen, S., Pang, R. & Wronkiewicz, M. A simple generative model of the mouse mesoscale connectome. Elife 5 (2016).
Passingham, R. E., Stephan, K. E. & Kotter, R. The anatomical basis of functional localization in the cortex. Nat. Rev. Neurosci. 3, 606–616 (2002).
Van Essen, D. C. Cartography and Connectomes. Neuron 80, 775–790 (2013).
Jbabdi, S. & Johansen-Berg, H. Tractography: Where Do We Go from Here? Brain Connect 1, 169–183 (2011).
Battaglia, D., Witt, A., Wolf, F. & Geisel, T. Dynamic Effective Connectivity of Inter-Areal Brain Circuits. PLoS Comput Biol 8, e1002438 (2012).
Wang, H. E. et al. A systematic framework for functional connectivity measures. Front Neurosci 8 (2014).
Hillebrand, A., Barnes, G. R., Bosboom, J. L., Berendse, H. W. & Stam, C. J. Frequency-dependent functional connectivity within resting-state networks: an atlas-based MEG beamformer solution. Neuroimage 59, 3909–3921 (2012).
Palva, S. & Palva, J. M. Discovering oscillatory interaction networks with M/EEG: challenges and breakthroughs. Trends Cogn. Sci. (Regul. Ed.) 16, 219–230 (2012).
Schoffelen, J.-M. & Gross, J. Source connectivity analysis with MEG and EEG. Hum Brain Mapp 30, 1857–1865 (2009).
Brookes, M. J., Woolrich, M. W. & Barnes, G. R. Measuring functional connectivity in MEG: A multivariate approach insensitive to linear source leakage. Neuroimage 63, 910–920 (2012).
Colclough, G. L. et al. How reliable are MEG resting-state connectivity metrics? Neuroimage, doi:10.1016/j.neuroimage.2016.05.070 (2016).
Smit, D. J. A., Posthuma, D., Boomsma, D. I. & De Geus, E. Heritability of background EEG across the power spectrum. Psychophysiology 42, 691–697 (2005).
Ent, D. V., van Soelen, I. & Stam, C. J. Strong resemblance in the amplitude of oscillatory brain activity in monozygotic twins is not caused by 'trivial' similarities in the composition of the skull – van't Ent - 2008 - Human Brain Mapping - Wiley Online Library. Human brain …, doi:10.1002/hbm.20656/pdf (2009).
Buzsaki, G., Logothetis, N. & Singer, W. Scaling Brain Size, Keeping Timing: Evolutionary Preservation of Brain Rhythms. Neuron 80, 751–764 (2013).
Poulsen, C. J., Tabor, C. & White, J. D. CLIMATE CHANGE. Long-term climate forcing by atmospheric oxygen concentrations. Science 348, 1238–1241 (2015).
Falconer, D. S. & Mackay, T. Introduction to quantitative genetics. (Pearson, 1996).
Mahjoory, K. et al. Consistency of EEG source localization and connectivity estimates. Neuroimage 152, 590–601 (2017).
Olde Dubbelink, K. T. E. et al. Disrupted brain network topology in Parkinson's disease: a longitudinal magnetoencephalography study. Brain 137, 197–207 (2014).
Olde Dubbelink, K. T. E. et al. Predicting dementia in Parkinson disease by combining neurophysiologic and cognitive markers. Neurology 82, 263–270 (2014).
Tewarie, P. et al. Disruption of structural and functional networks in long-standing multiple sclerosis. Hum Brain Mapp 35, 5946–5961 (2014).
Yu, M., Engels, M., Hillebrand, A. & Van Straaten, E. Selective impairment of hippocampus and posterior hub areas in Alzheimer's disease: an MEG-based multiplex network study. Brain 140, 1466–1485 (2017).
Tewarie, P. et al. Integrating cross-frequency and within band functional networks in resting-state MEG - A multi-layer network approach. Neuroimage 142, 324–336 (2016).
Hunt, B. A. E. et al. Relationships between cortical myeloarchitecture and electrophysiological networks. PNAS 113, 13510–13515 (2016).
de Jager, C. A., Budge, M. M. & Clarke, R. Utility of TICS-M for the assessment of cognitive function in older adults. Int J Geriatr Psychiatry 18, 318–324 (2003).
Yesavage, J. A. et al. Development and validation of a geriatric depression screening scale: A preliminary report. Journal of Psychiatric Research 17, 37–49 (1982).
Morris, J. C. et al. The Consortium to Establish a Registry for Alzheimer's Disease (CERAD). Part I. Clinical and neuropsychological assessment of Alzheimer's disease. Neurology 39, 1159–1165 (1989).
Morris, J. C. The Clinical Dementia Rating (CDR): current version and scoring rules. Neurology 43, 2412–2414 (1993).
Folstein, M. F., Robins, L. N. & Helzer, J. E. THe mini-mental state examination. Arch Gen Psychiatry 40, 812 (1983).
Taulu, S. & Hari, R. Removal of magnetoencephalographic artifacts with temporal signal-space separation: demonstration with single-trial auditory-evoked responses. Hum Brain Mapp 30, 1524–1534 (2009).
Taulu, S. & Simola, J. Spatiotemporal signal space separation method for rejecting nearby interference in MEG measurements. Phys Med Biol 51, 1759–1768 (2006).
Taulu, S., Kajola, M. & Simola, J. Suppression of interference and artifacts by the Signal Space Separation Method. Brain Topogr 16, 269–275 (2004).
Whalen, C., Maclin, E. L., Fabiani, M. & Gratton, G. Validation of a method for coregistering scalp recording locations with 3D structural MR images. Hum Brain Mapp 29, 1288–1301 (2008).
Fraschini, M. et al. The effect of epoch length on estimated EEG functional connectivity and brain network organisation. J Neural Eng 13, 036015 (2016).
van Diessen, E. et al. Opportunities and methodological challenges in EEG and MEG resting state functional brain network research. Clin Neurophysiol 0 (2014).
Delorme, A. & Makeig, S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 134, 9–21 (2004).
Palus, M. & Vejmelka, M. Directionality of coupling from bivariate time series: How to avoid false causalities and missed connections. Phys Rev E Stat Nonlin Soft Matter Phys 75 (2007).
The research leading to these results has received support from the Innovative Medicines Initiative Joint Undertaking under grant agreement n° 115372, resources of which are composed of financial contribution from the European Union's Seventh Framework Programme (FP7/2007–2013) and EFPIA companies' in kind contribution. We thank Linda Douw for assistance with the higher resolution functional atlas.
Alzheimer Center and Department of Neurology, Neuroscience Campus Amsterdam, VU University Medical Center, Amsterdam, The Netherlands
M. Demuru, A. A. Gouw, P. Scheltens, B. M. Tijms, E. Konijnenberg, M. ten Kate, A. den Braber & P. J. Visser
Department of Clinical Neurophysiology and Magnetoencephalography Center, VU University Medical Center, Amsterdam, The Netherlands
A. A. Gouw, A. Hillebrand, C. J. Stam & B. W. van Dijk
Department of Biological Psychology, VU University Amsterdam, Amsterdam, The Netherlands
A. den Braber, D. J. A. Smit & D. I. Boomsma
Department of Psychiatry, Academic Medical Center, Amsterdam, The Netherlands
D. J. A. Smit
M. Demuru
A. A. Gouw
A. Hillebrand
C. J. Stam
B. W. van Dijk
P. Scheltens
B. M. Tijms
E. Konijnenberg
M. ten Kate
A. den Braber
D. I. Boomsma
P. J. Visser
M.D., A.A.G., A.H., C.J.S., B.W.D. conceptualized the study. M.D. performed pre-processing and the analyses. E.K., M.T.K., A.D.B. provided part of the materials. C.J.S., D.I.B., A.H., A.A.G., B.M.T., D.J.A.S., P.S., P.J.V. provided support and guidance with data interpretation. M.D. wrote the manuscript, with contributions and comments from all other authors.
Correspondence to M. Demuru.
Dr. P.J. Visser serves as an advisory board member of Eli-Lilly and is consultant for Janssen Pharmaceutica. He receives/received research grants from Biogen and GE Healthcare, European Commission 6th and 7th Framework programme, the Innovative Medicines Initiative (IMI), European Union Joint Programme – Neurodegenerative Disease Research (JPND), and Zon-Mw.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Demuru, M., Gouw, A.A., Hillebrand, A. et al. Functional and effective whole brain connectivity using magnetoencephalography to identify monozygotic twin pairs. Sci Rep 7, 9685 (2017). https://doi.org/10.1038/s41598-017-10235-y
DOI: https://doi.org/10.1038/s41598-017-10235-y
Sensitive and reproducible MEG resting-state metrics of functional connectivity in Alzheimer's disease
Deborah N. Schoonhoven
Casper T. Briels
Alida A. Gouw
Alzheimer's Research & Therapy (2022)
Brainprints: identifying individuals from magnetoencephalograms
Shenghao Wu
Aaditya Ramdas
Leila Wehbe
Communications Biology (2022)
Reliability and Individual Specificity of EEG Microstate Characteristics
Jiayi Liu
Jing Xu
Jia-Hong Gao
Brain Topography (2020)
The EMIF-AD PreclinAD study: study design and baseline cohort overview
Elles Konijnenberg
Stephen F. Carter
Pieter Jelle Visser
|
CommonCrawl
|
Why are there uneven bright areas in this photo of black hole?
In the recently released photo of a black hole shown above, which was created by using data from EHT, why is the lower region brighter than the one above? Is it because of the rotation of the accretion disk? Also what is the orientation of the accretion disk? Are we looking at it head on?
black-hole event-horizon-telescope
Kushal BhuyanKushal Bhuyan
$\begingroup$ Great question! I'd just seen this video but you beat me to it :-) $\endgroup$ – uhoh Apr 11 '19 at 2:33
$\begingroup$ Related astronomy.stackexchange.com/questions/30332/… $\endgroup$ – ProfRob Apr 11 '19 at 6:51
$\begingroup$ related? physics.stackexchange.com/questions/471753/… $\endgroup$ – user17915 Apr 11 '19 at 9:24
$\begingroup$ Another really helpful video youtube.com/watch?v=zUyH3XhpLTo&t=346s $\endgroup$ – josh Apr 12 '19 at 10:35
No, you aren't seeing the shape of the accretion disk. Although its plane is almost that of the picture it is far larger and fainter than the ring that is seen. The reason for this asymmetry is almost entirely due to Doppler beaming and boosting of radiation arising in matter travelling at relativistic speeds very close to the black hole. This in turn is almost entirely controlled by the orientation of the black hole spin. The black hole sweeps up material and magnetic fields almost irrespective of the orientation of any accretion disk.
The pictures below from the fifth event horizon telescope paper makes things clear.
The black arrow indicates the direction of black hole spin. The blue arrow indicates the initial rotation of the accretion flow. The jet of M87 is more or less East-West (projected onto the page), but the right hand side is pointing towards the Earth. It is assumed that the spin vector of the black hole is aligned (or anti-aligned) with this.
The two left hand plots show agreement with the observations. What they have in common is that the black hole spin vector is mostly into the page (anti-aligned with the jet). Gas is forced to rotate in the same way and results in projected relativistic motion towards us south of the black hole and away from us north of the black hole. Doppler boosting and beaming does the rest.
As the paper says:
the location of the peak flux in the ring is controlled by the black hole spin: it always lies roughly 90 degrees counterclockwise from the projection of the spin vector on the sky.
$\begingroup$ Your answer is really helpful and makes it easier to start reading the papers, thanks! Possibly answerable(?): What defines the plane of an accretion disk around a black hole? $\endgroup$ – uhoh Apr 11 '19 at 6:56
There's some recent information which is worthy of an update to the answer (despite the difficulty of typing MathJax on my phone). I've quoted minimally as I wouldn't have improved upon what these scientists have published. Previous edits remain beneath this addition.
In the paper "Measurement of the spin of the M87 black hole from its observed twisted light" (Apr 16 2019) by Fabrizio Tamburini, Bo Thidé, and Massimo Della Valle, they explain on page 2:
... The imaging techniques applied to this data set reveal the presence of an asymmetric ring with clockwise rotation and a "crescent" geometric structure that exhibits a clear central brightness depression. This indicates a source dominated by lensed emission surrounding the black hole shadow.
From the analysis of the two data sets we obtain the asymmetry parameters $q_1$ = 1.417 for epoch 1 and $q_2$ = 1.369 for epoch 2. They give an averaged asymmetry in the spiral spectrum of $\bar{q}$ = 1.393±0.024 in agreement with that of our numerical simulations, $q_{num}$ = 1.375, of partially incoherent light emitted by the Einstein ring of a Kerr black hole with $\boldsymbol{a} \textbf{~}\!$ 0.9±0.1, corresponding to a rotational energy $^{[10]}$ of $\textbf{10}{^\textbf{64}}\!$ erg, which is comparable to the energy radiated by the brightest quasars (~ 500 trillion $\odot$) over a Gyr (billion year) timescale, and inclination $i$ = 17° between the approaching jet and the line of sight, with the angular momenta of the accretion flow and of the black hole anti-aligned, showing clockwise rotation as described in Ref. 5.
This result is in good agreement with results from the analysis of the fiducial pipeline images of amplitude and phase plots for 11 April, 2017 of DIFMAP with $q$ = 1.401, EHT $q$ = 1.361 and SMILI, $q$ = 1.319, $^{[6]}$ giving for that day an averaged value $\bar{q}$ = 1.360 that deviates of 0.09 from epoch 2 value estimated with TIE and $q$ > 0 confirm the clockwise rotation. The spiral spectra are reported in Fig. 2.
Then one determines the rotation parameter $a$ by comparing those obtained by a linear interpolation with the asymmetry parameter $q$ of various models, as reported in the numerical example of Table I for different values of inclination and rotation parameters $i$ and $q$. The results are depicted in Fig. 1.
[1] Fabrizio Tamburini, Bo Thidé, Gabriel Molina-Terriza, and Gabriele Anzolin, "Twisting of light around rotating black holes," Nature Phys. 7, 195–197 (2011).
[4] EHT Collaboration et al., "Imaging the central supermassive black hole," Astrophys. J. Lett. 875, L4(52) (2019), First M87 Event Horizon Telescope Results IV.
[5] EHT Collaboration et al., "Physical origin of the asymmetric ring," Astrophys. J. Lett. 875, L5(31) (2019), First M87 Event Horizon Telescope Results V.
[6] EHT Collaboration et al., "The shadow and mass of the central black hole," Astrophys. J. Lett. 875, L6(44) (2019), First M87 Event Horizon Telescope Results VI.
[10] Demetrios Christodoulou and Remo Ruffini, "Reversible transformations of a charged black hole," Phys. Rev. D 4, 3552–3555 (1971).
[29] Bin Chen, Ronald Kantowski, Xinyu Dai, Eddie Baron, and Prasad Maddumage, "Algorithms and programs for strong gravitational lensing in Kerr space-time including polarization," Astrophys. J. Suppl. Ser. 218, 4 (2015).
Figure 1. Experimental results. Field components along the observer's direction and spiral spectra obtained with the TIE method for epoch 1 and epoch 2. The asymmetry between the $m$ = 1 and $m$ = −1 components in both of the spiral spectra reveals the rotation of the black hole in M87. It also indicates that the electromagnetic vortex reconstructed from the TIE analysis of the EM field intensities extracted from the brightness temperature in a finite frequency bandwidth has components along the propagation direction to the observer that are compatible with twisted lensing of a black hole with $a$ = 0.9±0.1 rotating clockwise with the spin pointing away from Earth and an Einstein ring with a gravitational radius $R_g$ = 5, as indicated by an EHT analysis dominated by incoherent emission. For all days, the diameters of the ring features span the narrow range 38–44 µ-arcseconds and the observed peak brightness temperature of the ring is $T$ ∼ 6×10$^9$ K.$^{[6]}$ The other components ($x$ and $y$) of the EM field derived from TIE equations do not show a predominant OAM component. This is expected $^{[1]}$.
Figure 2. Results from DIFMAP, EHT and SMILI data analyses and of numerical simulations from KERTAP. The first three insets show the experimental spiral spectra obtained from the three fiducial pipeline images for 11 April 2017 from SMILI, EHT imaging, and DIFMAP $^{[4]}$. They represent the visibility amplitude and phase as a function of the vector baseline. In all the datasets the asymmetry parameter, the ratio between the $m$ = 1 and $m$ = −1 peaks in the spiral spectra, is $q$ > 1 indicating clockwise rotation: the black hole is found to have its spin pointing away from Earth and an inclination between the approaching jet and the line of sight of $i$ = 17° (equivalent to a similar geometry with an inclination $i$ = 163°, but where the angular momentum of the accretion flow and that of the BH are anti-aligned) (left). Fourth inset: spiral spectrum of the numerical simulations with KERTAP $^{[29]}$ obtained from the normalized intensity and phase of the $z$ component of the radiation field emitted from a spatially resolved image of the black hole accretion disc dominated by thermalized emission with Γ = 2. The coherence χ of the radiation emission is characterized by the ratio between the $m$ = 0 and $m$ = 1 peaks in the spiral spectra. The lower the χ, the higher the coherence in the emission. The experimental spiral spectra of SMILI, EHT imaging, and DIFMAP show higher coherence in the radiation emission (χ$_\text{SMILI}$ = 1.198, χ$_\text{EHT}$ = 1.798) and (χ$_\text{DIFMAP}$ = 1.107) with respect to the simulated model of a simple thermalized accretion disk with power spectrum Γ = 2 (χ$_\text{KERTAP}$ = 5.029) and with respect to that obtained in the TIE reconstruction of the wavefront (χ$_\text{ep1}$ = 13.745 and χ$_\text{ep2}$ = 14.649) in Fig.1. Even if the asymmetry $q$ is well preserved, the TIE method can be improved by consecutive data acquisitions of the wavefront, separated by a much shorter time interval than one day and might therefore provide better information on the source emission.
That paper contains considerable additional information and illustrations well worth reviewing. Thank you Jack R. Woods for the link which led me to the above information.
Previous edit:
In the paper: "First M87 Event Horizon Telescope Results. V. Physical Origin of the Asymmetric Ring", (Apr 10 2019), by The Event Horizon Telescope Collaboration, Kazunori Akiyama, Antxon Alberdi, Walter Alef, Keiichi Asada, Rebecca Azulay, Anne-Kathrin Baczko, David Ball, Mislav Baloković, John Barrett, et al., in one of several papers recently published they explain:
(4) The ring is brighter in the south than the north. This can be explained by a combination of motion in the source and Doppler beaming. As a simple example we consider a luminous, optically thin ring rotating with speed v and an angular momentum vector inclined at a viewing angle i > 0° to the line of sight. Then the approaching side of the ring is Doppler boosted, and the receding side is Doppler dimmed, producing a surface brightness contrast of order unity if v is relativistic. The approaching side of the large-scale jet in M87 is oriented west–northwest (position angle $\mathrm{PA}\approx 288^\circ ;$ in Paper VI this is called ${\mathrm{PA}}_{\mathrm{FJ}}$), or to the right and slightly up in the image.
Figure 5 from that paper is included in Rob Jeffries answer.
The conclusion that they reach, in part, is:
"... The results of this comparison are consistent with the hypothesis that the compact 1.3 mm emission in M87 arises within a few ${r}_{{\rm{g}}}$ of a Kerr black hole, and that the ring-like structure of the image is generated by strong gravitational lensing and Doppler beaming. The models predict that the asymmetry of the image depends on the sense of black hole spin. If this interpretation is accurate, then the spin vector of the black hole in M87 points away from Earth (the black hole spins clockwise on the sky). The models also predict that there is a strong energy flux directed away from the poles of the black hole, and that this energy flux is electromagnetically dominated. If the models are correct, then the central engine for the M87 jet is powered by the electromagnetic extraction of free energy associated with black hole spin via the Blandford–Znajek process.".
First Draft:
The article: "Ergoregion instability of exotic compact objects: electromagnetic and gravitational perturbations and the role of absorption", (Feb 15 2019), by Elisa Maggio, Vitor Cardoso, Sam R. Dolan, and Paolo Pani explains that this is due to rotational superradiance on page 10:
"... the instability can be understood in terms of waves trapped within the photon-sphere barrier and amplified by superradiant scattering$^{[43]}$
[43] R. Brito, V. Cardoso, and P. Pani, Lect. Notes Phys. 906, pp.1 (2015), arXiv:1501.06570.
In the article "Superradiance", (above) while considerably longer, maybe much more approachable. On page 38 where they explain the Penrose Process they offer a diagram which probably makes the understanding of this easier:
"Figure 7: Pictorial view of the original Penrose processes. A particle with energy E$_0$ decays inside the ergosphere into two particles, one with negative energy E$_2$ < 0 which falls into the BH, while the second particle escapes to infinity with an energy higher than the original particle, E$_1$ > E$_0$.".
From page 41:
"Figure 8: The carousel analogy of the Penrose process. A body falls nearly from rest into a rotating cylinder, whose surface is sprayed with glue. At the surface the body is forced to co-rotate with the cylinder (analog therefore of the BH ergosphere, the surface beyond which no observer can remain stationary with respect to infinity). The negative energy states of the ergoregion are played by the potential energy associated with the sticky surface. If now half the object (in reddish) is detached from the first half (yellowish), it will reach infinity with more (kinetic) energy than it had initially, extracting rotational energy out of the system.".
A further more complicated model, believed to be beyond what was asked, from page 46:
"Figure 9: Pictorial view of the different collisional Penrose processes. Left: initial particleswith ingoing radial momentum (p$^r _1$ < 0 and p$^r_2$ < 0). Particle 3 has initial ingoing radial momentum, but eventually finds a turning point and escapes to infinity. The maximum efficiency for this was shown to be quite modest η ∼ 1.5 $^{[168, 169, 170, 171]}$. Right: initial particles with p$^r_1$ > 0 and p$^r_2$ < 0. In this case particle 1 must have p$^r_1$ > 0 inside the ergosphere. For this process the efficiency can be unbound for extremal BHs $^{[172, 173]}$.
[168] T. Piran and J. Shaham, "Upper Bounds on Collisional Penrose Processes Near Rotating Black Hole Horizons," Phys.Rev. D16 (1977) 1615–1635.
[169] T. Harada, H. Nemoto, and U. Miyamoto, "Upper limits of particle emission from high-energy collision and reaction near a maximally rotating Kerr black hole," Phys.Rev. D86 (2012) 024027, arXiv:1205.7088 [gr-qc].
[170] M. Bejger, T. Piran, M. Abramowicz, and F. Hakanson, "Collisional Penrose process near the horizon of extreme Kerr black holes," Phys.Rev.Lett. 109 (2012) 121101, arXiv:1205.4350 [astro-ph.HE].
[171] O. Zaslavskii, "On energetics of particle collisions near black holes: BSW effect versus Penrose process," Phys.Rev. D86 (2012) 084030, arXiv:1205.4410 [gr-qc].
[172] J. D. Schnittman, "A revised upper limit to energy extraction from a Kerr black hole," arXiv:1410.6446 [astro-ph.HE].
[173] E. Berti, R. Brito, and V. Cardoso, "Ultra-high-energy debris from the collisional Penrose process," arXiv:1410.8534 [gr-qc].
There is a summary on page 170 (nowhere near the end of the paper) which explains:
"In gravitational theories, superradiance is intimately connected to tidal acceleration, even at Newtonian level. Relativistic gravitational theories predict the existence of BHs, gravitational vacuum solutions whose event horizon behaves as a one-way viscous membrane. This allows superradiance to occur in BH spacetimes, and to extract energy from vacuum even at the classical level. When semiclassical effects are taken into account, superradiance occurs also in static configurations, as in the case of Hawking radiation from a Schwarzschild BH.
The efficiency of superradiant scattering of GWs by a spinning (Kerr) BH can be larger than 100% and this phenomenon is deeply connected to other important mechanisms associated to spinning compact objects, such as the Penrose process, the ergoregion instability, the Blandford-Znajek effect, and the CFS instability. Rotational superradiance might be challenging to observe in the laboratory, but its BH counterpart is associated with a number of interesting effects and instabilities, which may leave an observational imprint. We have presented a unified treatment of BH superradiant phenomena including charged BHs, higher dimensions, nonasymptotically flat spacetimes, analog models of gravity and theories beyond GR.".
RobRob
$\begingroup$ A paper (Tamburini et al/ 04/18/19) with more information about the M87 black hole spin is shown in this You Tube video youtube.com/watch?v=0osP65BRoYk. The video presenter explains that the black hole spin is about 90%c in a clockwise direction from our viewpoint and is independent of the accretion disk rotation. $\endgroup$ – Jack R. Woods Apr 28 '19 at 18:58
$\begingroup$ @JackR.Woods Thank you very much for that useful link. I've updated the answer and credited you with providing the source. Indeed the enormous energy of the BH's spin exceeds any effect of the accretion disk, that paper also provides specifics about the rotation and it's orientation. $\endgroup$ – Rob Apr 29 '19 at 4:41
I believe we are seeing one of the effects of the accretion disk rotating at very high speeds. This is called relativistic beaming, and it occurs because particles (in this case matter in the accretion disk) that are travelling at relativistic speeds (say, upwards of .2c), tend to preferentially emit their radiation in a cone towards the direction of motion.
This suggests that the matter at the bottom of the picture (the brightest blobs) are travelling towards us, and the darker parts are travelling away. Since the black hole tends to warp light around itself, I'm not sure from the photo of the orientation of the accretion disk.
Jim421616Jim421616
$\begingroup$ It is what I guessed. That the bottom bright part moves (rotates) towards earth. But they said the rotation is clockwise, a sentence that let alone does not tell me much. I'll go also through the other answer or the papers. But perhaps you have further details. $\endgroup$ – Alchimista Apr 11 '19 at 7:13
Not the answer you're looking for? Browse other questions tagged black-hole event-horizon-telescope or ask your own question.
Why is there no color shift on the photo of the M87 black hole?
What defines the plane of an accretion disk around a black hole?
What is the orientation of the M87 black hole image relative to the jet?
Maximum spin rate of a black hole?
Shouldn't we not be able to see some black holes?
Why did the Event Horizon Telescope take so long to take a photo of a black hole?
Is this the best non-radio image of whatever's at the center of M87? How was it taken?
What part of the EM spectrum was used in the black hole image?
Why don't we see the gas behind the black hole?
If a black hole does not emit light, how can one take a picture of the black hole itself?
Would Hubble Space Telescope improve black hole image observed by EHT if it joined array of telesopes?
From whence is the Event Horizon Telescope black hole data available for amateur reconstruction?
|
CommonCrawl
|
Impact of random safety analyses on structure, process and outcome indicators: multicentre study
María Bodí1,2,3,
Iban Oliva ORCID: orcid.org/0000-0002-2579-39461,
Maria Cruz Martín4,
Maria Carmen Gilavert1,
Carlos Muñoz4,
Montserrat Olona2,5 &
Gonzalo Sirgo1,2
To assess the impact of a real-time random safety tool on structure, process and outcome indicators.
Prospective study conducted over a period of 12 months in two adult patient intensive care units. Safety rounds were conducted three days a week ascertaining 37 safety measures (grouped into 10 blocks). In each round, 50% of the patients and 50% of the measures were randomized. The impact of this safety tool was analysed on indicators of structure (safety culture, healthcare protocols), process (improvement proportion related to tool application, IPR) and outcome (mortality, average stay, rate of catheter-related bacteraemias and rate of ventilator-associated pneumonia, VAP).
A total of 1214 patient-days were analysed. Structure indicators: the use of the safety tool was associated with an increase in the safety climate and the creation/modification of healthcare protocols (sedation/analgesia and weaning). Process indicators: Twelve of the 37 measures had an IPR > 10%; six showed a progressive decrease in the IPR over the study period. Nursing workloads and patient severity on the day of analysis were independently associated with a higher IPR in half of the blocks of variables. Outcome indicators: A significant decrease in the rate of VAP was observed.
The real-time random safety tool improved the care process and adherence to clinical practice guidelines and was associated with an improvement in structure, process and outcome indicators.
The application of evidence-based medicine is of major concern in intensive care medicine today [1]. Errors in health care may occur due to an unintended act or by omission. Those resulting from the former are more visible and therefore more easily detectable. Errors of omission are more insidious and more difficult to identify and include the failure to ensure that patients receive recommended medical care as supported by high-quality clinical research evidence [2], which occurs paradoxically in more severe patients [3]. For example, the lack of adherence to clinical practice guidelines may be due to the lack of knowledge about them and the presence of barriers that prevent their use such as a lack of time, a lack of resources, organizational aspects or even resistance to changing work habits.
To analyse and prevent patient safety-related incidents, reactive or proactive tools are used. These are complementary to each other. Checklists have been proposed as a simple and useful proactive method to prevent errors of commission and omission in critically ill patients [4, 5]. The complex reality in which they need to be implemented requires an approach that includes more than eliminating barriers and supporting facilitating factors. Implementation leaders must facilitate team learning to foster the mutual understanding of perspectives and motivations and the realignment of routines [6].
Among the various proactive methods, random safety audits [7] facilitate the interaction between the responsible team and the professional who verifies the safety measures, and have the potential to reduce future errors through the identification of system failures that contribute to gaps in quality and safety. This tool promotes the changes in accordance with the application of scientific evidence, feedback with the team, and providing and strengthening knowledge [8]. Weiss et al. [9] showed that checklists of safety measures guided by an observer (prompter) decreased mortality and average length of stay in an intensive care unit (ICU) compared to those carried out through self-verification.
Our group has developed and validated a tool—the real-time random safety audits (in Spanish: Análisis Aleatorios de Seguridad en Tiempo Real, AASTRE)—and found it to be effective in detecting and remedying errors of omission in real time, thereby improving adherence to guidelines [10] and proving to be most useful in situations of high care load and in more severe patients [11].
Thus, this multicentre study aims to investigate the usefulness of the AASTRE by measuring their effect on structure, process and outcome indicators.
Study design and participating centres
This is a prospective study involving two university hospitals over a 1-year period (January–December 2013). Table 1 shows the characteristics of the two centres and the most relevant initiatives implemented in terms of patient safety.
Table 1 Characteristics of the centres and safety-related initiatives
Methodology for the implementation of the AASTRE
Design and description of the checklist
The checklist, as previously validated [10], consists of 37 safety measures grouped into ten blocks of different areas of care: mechanical ventilation, haemodynamics, renal function and continuous renal replacement techniques (CRRT), sedation and analgesia, treatment (two blocks), nutrition, techniques and tests, nursing care and structure. AASTRE are standardly performed three days per week (including weeks with weekday holidays and holiday periods), with 50% of the safety measures and 50% of the ICU patients randomized on each day of analysis. Each safety measure has a specific definition, assessment criteria and a specific methodology for verification. All patients admitted to the ICU are eligible for AASTRE to be performed on them. However, for each selected patient, only those measures for which they meet the assessment criteria will be evaluated [11].
Role and training of prompters
The safety audits are always carried out immediately after the ICU daily clinical round and require the participation of a prompter and the healthcare professionals directly responsible for patient care (senior attending physician, residents and nurses). The prompter is one of the two senior attending physicians of each ICU (not directly caring for the patient) who has received the education and training required by the study and who is responsible for verifying and/or promoting the safety measures. At all centres, training sessions were held on the theoretical aspects and the methodology used in the AASTRE. In addition, all prompters were trained online in the goals of the study and in the use of the tool. Moreover, practical training was also required, carrying out at least three safety rounds prior to the start of the study.
Many of the measures included in the checklist are routinely carried out by healthcare professionals during the ICU daily clinical round. The purpose of the safety audits is to verify that they have indeed been carried out. If this were not the case (error of omission), the prompter reminds the healthcare professionals that they should be carried out. In this framework, the possible responses during the audits are: (1) "Yes"—when the measure analysed had been taken/performed on the ICU daily round; (2) "Yes, after AASTRE"—when the safety audit was used to detect an error of omission that has been corrected; (3) "No"—when the measure analysed could not be changed despite the audit; (4) "Not applicable"—when the patient did not meet the assessment criteria. The checklist and the responses of the evaluations are entered into a web platform (http://www.aastre.es). Safety audits were performed with a tablet at the bedside to facilitate implementation.
Definition of variables and indicators
Number of patient-days was the number of patients assessed in the total number of days on which safety audits were carried out in the two hospitals.
Structure indicators
Perception of safety culture (in Hospital 2): We used a previously validated questionnaire [12] based on the Safety Climate Survey (SCS) and the Safety Attitude Questionnaire-ICU model (SAQ-ICU). It analysed six dimensions: teamwork climate, safety climate, perceptions of management, job satisfaction, working conditions, and stress recognition. The questionnaire on the perception of safety culture was administered to medical, nursing and ancillary staff. Three evaluation periods were considered: 1) initial period: the month prior to the start of the study; 2) intermediate period: month 6 of the study; 3) final period: the month after the end of the study.
The execution or updating of protocols and/or procedures promoted by the AASTRE was investigated.
The proportion of changes in the care process carried out as a result of verification was considered. IPR-AASTRE (improvement proportion related to the AASTRE) were calculated globally (IPR-AASTRE-G), for each safety measure (IPR-AASTRE), and for each block of variables (IPR-AASTRE-B), according to the following formulas:
$$ {\text{IPR}} - {\text{AASTRE}} = \frac{{{\text{number of occasions on which the AASTRE changed clinical practice }}\left( {\text{{"yes}}, {\text{after the AASTRE"}}} \right)}}{{{\text{number of occasions on which the measure was selected }} - {\text{number of occasions on which the measure was not applicable}}}} \times 100 $$
$$ {\text{IPR}} - {\text{AASTRE}} - {\text{B}} = \frac{\text{sum of the number of occasions on which the AASTRE changed clinical practice in each block}}{{{\text{number of occasions on which the measure was selected in each block}} - {\text{number of occasions on which the measure was not applicable in each block}}}} \times 100 $$
IPR-AASTRE-B helped simplify the assessment of the impact of other variables on utility. These variables are: type of patient (medical, surgical, neurocritical and trauma), staffing ratio [PNR; patient:nurse ratio (≤2:1 vs. >2:1) and PPR; patient:physician ratio (≤2:1, 2–3:1, >3:1)], the Sequential Organ Failure Assessment (SOFA) score and length of stay (length of stay at the time of safety audits (<7, 7–14, >14 days).
Outcome indicators
The impact by the AASTRE on ICU mortality, average stay and rates of central venous catheter-related bacteraemia (CRB) and ventilator-associated pneumonia (VAP) using standardized definitions [13, 14] was investigated. The clinical definition of VAP requires patients to fulfil one radiographic, one systemic, and two pulmonary criteria. Radiographic criteria include new or progressive infiltrates, consolidation and cavitation. Systemic criteria include fever, abnormal white blood cell count and altered mental status. Pulmonary criteria include purulent sputum, new or worsening cough or dyspnoea or tachypnea, rales or bronchial breath sounds, and worsening gas exchange. CRB is defined in a patient with a central venous catheter with at least one positive blood culture (two blood culture if common skin contaminant organism) obtained from a peripheral vein, clinical manifestations of infections (i.e. fever, chills and/or hypotension), and no apparent source for the bloodstream infection except the catheter. One of the following should be present: a positive semi-quantitative (>15 CFU per catheter segment) or quantitative (>102 CFU per catheter segment) catheter culture, whereby the same organism (species) is isolated from a catheter segment and a peripheral blood culture; simultaneous quantitative blood cultures with a ratio of >3:1 CFU/ml of blood (catheter vs. peripheral blood); differential time to positivity (growth in a culture of blood obtained through a catheter hub is detected by an automated blood culture system at least 2 h earlier than a culture of simultaneously drawn peripheral blood of equal volume). The information relative to VAP and CRB was collected prospectively at both centres participating in the study and the previous year, using identical diagnostic criteria.
For descriptive analysis, we used absolute numbers (N) and relative frequency (percentage) for categorical variables; the mean and standard deviation for continuous variables. Chi-square tests and linear trend Chi-square tests were used for categorical variables and Student's t test for continuous variables in univariate analysis. For multivariate analysis, multiple logistic regression, fixed model and likelihood ratio method analyses were performed to ascertain the impact of different variables on the IPR-AASTRE-B and with the aim of adjusting for possible confounding effects. The results were expressed as odds ratio and their 95% confidence interval (CI).
We used direct standardization by APACHE 2012 (<15, 15–25, >25) to evaluate mortality change and incidence density ratio (IDR) 2013 vs 2012 and CI to evaluate CRB and VAP incidence changes. The acceptable level of statistical significance was set at p ≤ 0.05. All data analyses were performed using the SPSS version 15 statistical package (SPSS Inc., Chicago, IL).
During the study period, AASTRE were carried out on 1214 patient-days. Table 2 shows the distribution of the type of patients evaluated globally, and in each hospital, the workloads (of nursing staff and physicians), the seriousness of the patients measured using the SOFA and patients' length of stay on the day the safety rounds were conducted. Most patients were medical (47.0%), with a PNR ≤2:1 (57.0%), a PPR 2–3:1(62.1%), SOFA <4 (56.9%) and an average stay <7 days (42.5%). It should be noted that the distribution of the types of patients evaluated is different in the two hospitals of the study. In Hospital 1, there was a predominance of medical patients (57.6%), followed by surgical patients (32.7%). In Hospital 2, although the evaluation of medical patients predominated (41.3%), followed by surgical patients (31.6%), there was a significantly higher percentage of assessments of neurosurgical (13.4%) and trauma patients (13.7%). The nursing workload was higher in Hospital 1, where in most cases each nurse takes care of more than two patients. With regard to the physicians' workload, it was significantly higher in Hospital 1. It is in this centre that most frequently a physician treats more than three patients (36%). In terms of patient severity on the day of the administration of the AASTRE, the only differences found were in the SOFA subgroup <4 (more prevalent in Hospital 1, 67.1%) and in the SOFA subgroup 4–7 (more prevalent in Hospital 2, 32.7%). Finally, in respect of length of stay in the ICU on the day of the AASTRE, in Hospital 1 there was a significant predominance of patients whose length of stay was less than seven days (54%), the rest of the periods considered were significantly more prevalent in Hospital 2 except the period of ≥21 days, which was virtually identical in both hospitals.
Table 2 Distribution of the type and severity of patient disease/condition, staffing ratios and length of stay on the day of evaluation
Perception of safety culture: The response rate to the perception of safety culture questionnaire that had been administered to 71 professionals was 94.4% (in the initial period), 66.6% (in the intermediate period) and 70.4% (in the final period). A progressive increase was observed in positive responses in the Safety Climate item throughout the study period (p < 0.0001) in the safety culture perception survey. No significant changes were observed in the other items (Table 3).
Table 3 Safety culture survey results in Hospital 2
Implementation or updating of protocols and/or procedures: The use of the AASTRE was associated with changes in sedation/analgesia and weaning protocols at both hospitals. It is also noteworthy that in the two hospitals of the study, the use of AASTRE motivated the creation of a new procedure of the prescription and review of monitoring and mechanical ventilation (MV) alarms.
The overall IPR-AASTRE-G were 6.7%. Table 4 shows the distribution of patients evaluated for each measure and improvement proportion related to the AASTRE (IPR-AASTRE), and their evolution throughout the study period. Twelve of the 37 measures (32.4%) had IPR-AASTRE >10%. Some are included in the bundles to prevent VAP (evaluation of the level of sedation and pain in the sedated patient, semi-recumbent position) and CRB (daily assessment of catheter needs); others in the good medical practice guidelines (verification of alveolar pressure in patients with acute respiratory failure, assessment of acute renal failure and artificial nutrition) and in the basic safety measures (appropriate treatment prescription, review of MV or monitor alarms, patient identification).
Table 4 Distribution of evaluated patients for every measure and improvement proportion related to the AASTRE (IPR-AASTRE)
Only six steps (verification of MV or monitor alarms, proper administration of the prescribed treatment, assessment of acute renal failure and the risk of developing pressure ulcers and prevention of thromboembolic disease) showed a progressive decrease in the IPR-AASTRE throughout the study period. In addition, in one measure ("daily assessment by parenteral nutrition team"), a significant increase was seen in IPR-AASTRE as it was assessed during the different four-month periods.
Table 5 shows the impact of the independent variables selected (type of patient, staffing ratio, severity and length of stay) in the IPR-AASTRE-B. The high PNR was associated with a higher IPR-AASTRE in the MV and haemodynamics blocks. The SOFA was associated independently with a higher IPR-AASTRE-B in four blocks. Finally, the length of stay was significantly inversely associated with the IPR-AASTRE-B of the techniques and tests and treatment blocks.
Table 5 Variables related to the utility of the AASTRE (multivariate analysis)
The use of the AASTRE was associated with a significant decline in the VAP rate. No significant impact on average stay, mortality and CRB rate was observed (Table 6).
Table 6 Outcome indicators
Checklists have been proposed as tools to ensure that essential components of care are not omitted [15]. However, this is the first multicentre study to analyse the impact of real-time random safety audits on quality indicators in the critical patient. An improvement is seen in indicators of structure (safety climate, clinical protocols and healthcare procedures), process (better adherence to good clinical practice guidelines) and outcome (decline in the rate of VAP) [16]. These data support a way to improve health care for the critical patient by means of the AASTRE tool whose use is feasible as shown in the pilot study published previously by our group [10].
The use of the AASTRE was associated with an improvement in the safety climate. An association has been described between a better safety climate and outcome [17], average stay [18] or adverse events [19]. Although other authors have not demonstrated that checklists improve communication and teamwork [20], the impact of the AASTRE on the safety climate could be the result of improved communication in clinical practice, as described in other tools [21].
The guidelines require local adaptation via local protocols to enable their effective, safe and efficient use [22, 23]. The AASTRE were associated with the need to renew sedation/analgesia and weaning protocols. This occurs as a natural consequence when verifying the safety measures using AASTRE reflexively and at the bedside. This highlights the need to update local protocols in accordance with the latest sources of scientific knowledge. Difficulties for adherence have been described [24, 25] in both protocols. The AASTRE allow evaluating adherence to protocols and can promote their regular updating. The AASTRE have improved safety in relation to monitoring and mechanical ventilation alarms through the creation of specific protocols.
While the beneficial effect of the introduction of protocols in clinical practice has been discussed [26, 27], most studies acknowledge that they are useful although more patently in hands of inexperienced healthcare providers or suboptimal work environments. In such malfunctioning environments, they help but may hinder the performance and progress of the health professional, reducing their autonomy [28]. In fact, in a study published recently, no effect of the protocols was observed in the outcome [29]. Therefore, AASTRE promote much needed professional autonomy but invites reflection as to decisions at the bedside. And this reflection leads to the updating of protocols even though a direct improvement in the results is not guaranteed.
Process indicators: IPR-AASTRE
Health care requires many more scientifically sound process measures than are currently available. The AASTRE are process indicators since they evaluate the degree of adherence to scientific evidence [30]. They allow measuring the gap between the indication of therapies that have proven effective within human clinical research and the real safe and effective use of these therapies in routine clinical practice.
The failure to ensure that patients receive recommended medical care is supported by high-quality clinical research evidence. This type of safety and quality problem can be effectively addressed with knowledge translation tools [23].
The fact that 12 of the 37 measures considered (32.4%) had IPR-AASTRE >10% shows the ability of the AASTRE to modify essential aspects of clinical practice and improve adherence to evidence-based guidelines, a priority in health care [11, 23, 31]. Hopefully, through organizational learning, this effect could be maintained over time [32]. In this regard, some authors [33] have described the ability of checklists to maintain adherence to good clinical practice guidelines achieving close to 100% compliance for semi-recumbent position or suitable sedation. However, in our study, these measures scored IPR-AASTRE of 21.7 and 11.6%, indicating that if the intervention (AASTRE) had not been implemented, the measure would not have been carried out in a large percentage of patients. Also, the evaluation of another essential measure as is the assessment of catheter needs, in our experience, was corrected in 16.8% of evaluations. The fact that IPR-AASTRE utility is maintained over time may be related to the complexity of ICU clinical activity. In this regard, the AASTRE act as a tool that redirects healthcare activity towards essential aspects of care, regardless of the environmental situation. However, in our study six measures showed a significant decrease along the four-month periods analysed, indicating that this tool can also help systematize healthcare and organizational learning. Nevertheless, in the case of measures with a gradually ascending IPR-AASTRE, it should be verified (as occurs in this study with the measure "daily assessment by parenteral nutrition team") that the lack of adherence to the recommendations can be accounted for by causes from outside the work team implementing AASTRE (a problem of communication with other agents involved in treating the patient, as might occur in this case with professionals of the Pharmacy Service who are responsible for monitoring hospital parenteral nutrition, for example).
The AASTRE have proven to be more effective in more serious patients, in the early days of admission and in increased workload environments. These findings are consistent with the data published by our group previously [11]. Without interrupting the work flow, aspects of severe patient care are recalled and their definitive inclusion into treatment is left to the discretion of the senior physician responsible of the patient, according to the indication:risk ratio.
The concept of care bundling and its efficacy in improving clinical outcomes are also supported in the literature [34]. Our results show a significant decrease in VAP influencing three of the recommendations established to prevent this type of adverse event (assessment of sedation level, semi-recumbent position and prophylaxis of deep venous thrombosis). Dubose et al. [35] described this effect at a trauma ICU through a checklist of VAP bundle measures. However, in our study, the AASTRE had no impact on mortality and CRB. Probably, to demonstrate an impact on the rate of CRB requires influencing other aspects such as catheter insertion and maintenance [36].
In the critical patient, no study has managed to associate the use of safety checklists with a decrease in mortality [15]. Recently, in a study of Brazilian ICUs [37], the introduction of daily checklists, goal setting and clinician prompting did not decrease in-hospital mortality or other clinical outcomes. Despite being a study with a considerable sample size, some organizational and methodological aspects could render the results unreproducible outside that environment. For example, the health system is not comparable to the European one as regards cultural aspects and the organization of work teams. Moreover, standardized mortality is high and the number of patients recruited in each ICU was relatively low. In addition, important methodological aspects such as the period of analysis (just 4 months in that study), the definition of the measures, the eligibility of the patients and training in the use of the tools are different in the two studies. Nevertheless, the most distinguishing factor between the studies is the role of the prompter. According to the authors of the Brazilian study, the feedback of the clinician with the prompter was carried out later in the day. In our study, this is one of the keys of our methodology, the prompter interacts at the bedside during healthcare activity, immediately after the daily clinical rounds, acting as a catalyser of the transfer of knowledge, thus improving adherence to scientific proof. In any case, we are aware that a single intervention, albeit cross-cutting, never has a definitive impact on patient prognosis. Moreover, using mortality as an outcome measure requires larger samples and risk adjustment for fair comparison among providers and organizations [38].
There are limitations to this study. (1) Only two ICUs have participated. Moreover, their participation in the design of the AASTRE tool, the experience gained by the research team from the pilot study and the development of the culture of the continuous improvement of the quality of care that is underlying in the participating centres may mean that it is not possible to extrapolate the results to other ICUs. (2) The Hawthorne effect, a performance gain resulting from the knowledge of being observed, is difficult to distinguish from those resulting from the intervention. (3) The perception of safety culture was investigated only at one centre. (4) Sample size was not initially calculated to investigate the impact of the AASTRE on mortality or nosocomial infection rates. (5) The study design does not include a control group, since the random selection of the patients evaluated in the safety audits does not allow this. (6) Having demographic data of the patient populations attended to during the study period, of the quantitative evaluation of the Nursing workload and of the incidence of adverse events may have helped establish more precise analysis of the data and of the impact of AASTRE (Additional file 1).
In conclusion, our results suggest that the AASTRE were associated with improved structure, process and outcome indicators. In addition, this tool allows simultaneously translating medical evidence to clinical practice, reducing errors of omission, and also allows assessing quality through process indicators.
AASTRE:
Análisis Aleatorios de Seguridad en Tiempo Real
CRRT:
continuous renal replacement techniques
SCS:
Safety Climate Survey
SAQ-ICU:
Safety Attitude Questionnaire-ICU model
PNR:
patient:nurse ratio
PPR:
patient:physician ratio
NS:
no significant differences
IPR-AASTRE:
improvement proportion related to the AASTRE
IPR-AASTRE-G:
improvement proportion related to the AASTRE globally
IPR-AASTRE-B:
improvement proportion related to the AASTRE for each block of variables
MV:
CRB:
central venous catheter-related bacteraemia
VAP:
ventilator-associated pneumonia (VAP)
central venous catheter
Curtis JR, Cook DJ, Wall RJ, Angus DC, Bion J, Kacmarek R, et al. Intensive care unit quality improvement: a how-to guide for the interdisciplinary team. Crit Care Med. 2006;34:211–8.
Cabana MD, Rand CS, Powe NR, Wu AW, Wilson MH, Abboud PA, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–65.
Ilan R, Fowler RA, Geerts R, Pinto R, Sibbald WJ, Martin CM. Knowledge translation in critical care: factors associated with prescription of commonly recommended best practices for critically ill patients. Crit Care Med. 2007;35:1696–702.
Ursprung R, Gray JE, Edwards WH, Horbar JD, Nickerson J, Plsek P, et al. Real time patient safety audits: improving safety every day. Qual Saf Health Care. 2005;14:284–9.
Byrnes MC, Schuerer DJE, Schallom ME, Sona CS, Mazuski JE, Taylor BE, et al. Implementation of a mandatory checklist of protocols and objectives improves compliance with a wide range of evidence-based intensive care unit practices. Crit Care Med. 2009;37:2775–81.
Bergs J, Lambrechts F, Simons P, Vlayen A, Marneffe W, Hellings J, et al. Barriers and facilitators related to the implementation of surgical safety checklist: a systematic review of the qualitative evidence. BMJ Qual Saf. 2015;24:776–86.
Lee L, Girish S, Van den Berg E, Leaf A. Random safety audits in the neonatal unit. Arch Dis Child Fetal Neonatal Ed. 2009;94:F116–9.
Leape LL, Berwick DM, Bates DW. What practices will most improve safety? Evidence-based medicine meets patient safety. JAMA. 2002;288:501–7.
Weiss CH, Moazed F, McEvoy CA, Singer BD, Szleifer I, Amaral LA, et al. Prompting physician to address a daily checklist and process of care and clinical outcomes. Am J Respir Crit Care Med. 2011;184:680–6.
Sirgo Rodríguez G, Olona Cabases M, Martin Delgado MC, Esteban Reboll F, Pobo Peris A, Bodí Saera M, ART-SACC Study Experts. Audits in real time for safety in critical care: definition and pilot study. Med Intensiva. 2014;38:472–86.
Bodí M, Olona M, Martín MC, Alceaga R, Rodríguez JC, Corral E, et al. Feasibility and utility of the use of real time random safety audits in adult ICU patients: a multicentre study. Intensive Care Med. 2015;41:1089–98.
Gutiérrez-Cía I, de Cos PM, Juan AY, Obón-Azuara B, Alonso-Ovies Á, Martin-Delgado MC, et al. Perception of safety culture in Spanish intensive care units. Med Clin. 2010;135(Suppl 1):37–44.
Mermel LA, Allon M, Bouza E, Graven DE, Flynn P, O'Grady NP, et al. Clinical practice guidelines for the diagnosis and management of intravascular catheter-related infections: 2009 update by the Infectious Diseases Society of America. Clin Infect Dis. 2009;49:1–45.
Klompas M, Kleinman K, Khan Y, Evans RS, Lloyd JF, Stevenson K, et al. Rapid and reproducible surveillance for ventilator-associated pneumonia. Clin Infect Dis. 2012;54:370–7.
Thomassen Ø, Storesund A, Søfteland E, Brattebø G. The effects of safety checklist in medicine: a systematic review. Acta Anaesthesiol Scand. 2014;58:5–18.
Mittman BS. Creating the evidence base for quality improvement collaborative. Ann Intern Med. 2004;140:897–901.
Singer S, Gaba D, Geppert J, Sinaiko AD, Howard SK, Park KC. The culture of safety: results of an organization-wide survey in 15 California hospitals. Qual Saf Health Care. 2003;12:112–8.
Huang DT, Clermont G, Kong L, Weissfeld LA, Sexton JB, Rowan KM, et al. Intensive care unit safety culture and outcomes: a US multicenter study. Int J Qual Health Care. 2010;22:151–61.
Valentin A, Schiffinger M, Steyrer J, Huber C, Strunk G. Safety climate reduces medication and dislodgement errors in routine intensive care practice. Intensive Care Med. 2013;39:391–8.
Böhmer AB, Kindermann P, Schwanke U, Bellendir M, Tinschmann T, Schmidt C, et al. Long-term effects of a perioperative safety checklist from the viewpoint of personnel. Acta Anaesthesiol Scand. 2013;57:150–7.
Randmaa M, Mårtensson G, Leo Swenne CL, Engström M. SBAR improves communication and safety climate and decreases incident reports due to communication errors in an anaesthetic clinic: a prospective intervention study. BMJ Open. 2014;4:e004268.
Roffey P, Thangathurai D. Increased use of protocols in ICU settings. Intensive Care Med. 2011;37:1402.
Needham DM. Patient safety, quality of care, and knowledge translation in the intensive care unit. Respir Care. 2010;55:922–8.
Sneyers B, Laterre PF, Perreault MM, Wouters D, Spinewine A. Current practices and barriers impairing physicians' and nurses'adherence to analgo-sedation recommendations in the intensive care unit–a national survey. Crit Care. 2014;18:655.
Rose L, Dainty KN, Jordan J, et al. Weaning from mechanical ventilation: a scoping review of qualitative studies. Am J Crit Care. 2014;23:e54–70.
Soares M, Bozza FA, Angus DC, Japiassú AM, Viana WN, Costa R, et al. Organizational characteristics, outcomes, and resource use in 78 Brazilian intensive care units: the ORCHESTRA study. Intensive Care Med. 2015;41:2149–60.
Isherwood P. Response to: Protocols: help for improvement but beware of regression to the mean and mediocrity. Intensive Care Med. 2016;42:631.
Girbes AR, Robert R, Marik PE. Protocols: help for improvement but beware of regression to the mean and mediocrity. Intensive Care Med. 2015;41:2218–20.
Sevransky JE, Checkley W, Herrera P, Pickering BW, Barr J, Brown SM, United States Critical Illness and Injury Trials Group-Critical Illness Outcomes Study Investigators, et al. Protocols and hospital mortality in critically ill patients: the USA Critical Illness and Injury Trials Group Critical Illness Outcomes Study. Crit Care Med. 2015;43:2076–84.
Pronovost P, Holzmueller CG, Needham DM, Sexton JB, Miller M, Berenholtz S, et al. How will we know patients are safer? An organization-wide approach to measuring and improving safety. Crit Care Med. 2006;34:1988–95.
Kiyoshi-Teo H, Cabana MD, Froelicher ES, Blegen MA. Adherence to institution-specific ventilator-associated pneumonia prevention guidelines. Am J Crit Care. 2014;23:201–14.
Wadhwani V, Shillingford A, Penford G, Thomson MA. Random safety audits for improving standards in the neonatal unit. Arch Dis Child Fetal Neonatal Ed. 2011;96:Fa49.
Teixeira PGR, Inaba K, DuBose J, Melo N, Bass M, Belzberg H, et al. Measurable outcomes of quality improvement using a daily quality rounds checklist: two-year prospective analysis of sustainability in a surgical intensive care unit. J Trauma Acute Care Surg. 2013;75:717–21.
Resar R, Pronovost P, Haraden C, Simmonds T, Rainey T, Nolan T. Using a bundle approach to improve ventilator care processes and reduce ventilator-associated pneumonia. Jt Comm J Qual Patient Saf. 2005;31:243–8.
Dubose J, Teixeira PG, Inaba K, Lam L, Talving P, Putty B, et al. Measurable outcomes of quality improvement using a daily quality rounds checklist: one-year analysis in a trauma intensive care unit with sustained ventilator-associated pneumonia reduction. J Trauma. 2010;69:855–60.
Hsu YJ, Marsteller JA. Influence of the comprehensive Unit-based Safety Program in ICUs: evidence from the keystone ICU Project. Am J Med Qual. 2016;31:349–57.
Writing Group for the CHECKLIST-ICU Investigators and the Brazilian Research in Intensive Care Network (BRICNet), Cavalcanti AB, Bozza FA, Machado FR, Salluh JI, Campagnucci VP, Vendramim P, et al. Effect of a quality improvement intervention with daily round checklists, goal setting, and clinician prompting on mortality of critically ill patients: a randomized clinical trial. JAMA. 2016;315:1480–90.
Weled BJ, Adzhigirey LA, Hodgman TM, Brilli RJ, Spevetz A, Kline AM, et al. Critical care delivery: the importance of process of care and ICU structure to improved outcomes: an update from the American College of critical care medicine task force on models of critical care. Crit Care Med. 2015;43:1520–5.
All authors contributed to study conception and design, data analysis, and drafting the manuscript. MOC contributed to data analysis and statistical analysis. All authors read and approved the final manuscript.
The authors added the database of the present study as supplementary material.
The study was approved by the Ethics and Clinical Research Committee of each investigating centre. It was deemed unnecessary to obtain informed consent.
This study was supported by Grants from the Fondo de Investigación Sanitaria (Institute of Health Carlos III from Spain, FIS Grants, Project PI11/02311) and from Fundación Ricardo Barri Casanovas. FEDER.2014 SGR 926.
Intensive Care Unit, Hospital Universitario Joan XXIII, Tarragona, Spain
María Bodí, Iban Oliva, Maria Carmen Gilavert & Gonzalo Sirgo
Instituto de Investigación Sanitaria Pere Virgili, Rovira i Virgili University, Tarragona, Spain
María Bodí, Montserrat Olona & Gonzalo Sirgo
Centro de Investigación Biomédica en Red de Enfermedades Respiratorias (CIBERES), Instituto de Salud Carlos III, Madrid, Spain
María Bodí
Intensive Care Unit, Hospital Universitario de Torrejón, Torrejón de Ardoz, Madrid, Spain
Maria Cruz Martín & Carlos Muñoz
Department of Preventive Medicine, Hospital Universitario Joan XXIII, Tarragona, Spain
Montserrat Olona
Iban Oliva
Maria Cruz Martín
Maria Carmen Gilavert
Carlos Muñoz
Gonzalo Sirgo
Correspondence to Iban Oliva.
Database of AASTRE study.
Bodí, M., Oliva, I., Martín, M.C. et al. Impact of random safety analyses on structure, process and outcome indicators: multicentre study. Ann. Intensive Care 7, 23 (2017). https://doi.org/10.1186/s13613-017-0245-x
DOI: https://doi.org/10.1186/s13613-017-0245-x
Critical patients
Real-time safety audits
Quality indicators
|
CommonCrawl
|
Ryan Test (6)
Scale-dependent alignment, tumbling and stretching of slender rods in isotropic turbulence
Nimish Pujara, Greg A. Voth, Evan A. Variano
Journal: Journal of Fluid Mechanics / Volume 860 / 10 February 2019
We examine the dynamics of slender, rigid rods in direct numerical simulation of isotropic turbulence. The focus is on the statistics of three quantities and how they vary as rod length increases from the dissipation range to the inertial range. These quantities are (i) the steady-state rod alignment with respect to the perceived velocity gradients in the surrounding flow, (ii) the rate of rod reorientation (tumbling) and (iii) the rate at which the rod end points move apart (stretching). Under the approximations of slender-body theory, the rod inertia is neglected and rods are modelled as passive particles in the flow that do not affect the fluid velocity field. We find that the average rod alignment changes qualitatively as rod length increases from the dissipation range to the inertial range. While rods in the dissipation range align most strongly with fluid vorticity, rods in the inertial range align most strongly with the most extensional eigenvector of the perceived strain-rate tensor. For rods in the inertial range, we find that the variance of rod stretching and the variance of rod tumbling both scale as $l^{-4/3}$ , where $l$ is the rod length. However, when rod dynamics are compared to two-point fluid velocity statistics (structure functions), we see non-monotonic behaviour in the variance of rod tumbling due to the influence of small-scale fluid motions. Additionally, we find that the skewness of rod stretching does not show scale invariance in the inertial range, in contrast to the skewness of longitudinal fluid velocity increments as predicted by Kolmogorov's $4/5$ law. Finally, we examine the power-law scaling exponents of higher-order moments of rod tumbling and rod stretching for rods with lengths in the inertial range and find that they show anomalous scaling. We compare these scaling exponents to predictions from Kolmogorov's refined similarity hypotheses.
Rotations of small, inertialess triaxial ellipsoids in isotropic turbulence
Nimish Pujara, Evan A. Variano
Journal: Journal of Fluid Mechanics / Volume 821 / 25 June 2017
Print publication: 25 June 2017
The statistics of rotational motion of small, inertialess triaxial ellipsoids are computed along Lagrangian trajectories extracted from direct numerical simulations of homogeneous isotropic turbulence. The total particle angular velocity and its components along the three principal axes of the particle are considered, expanding on the results of Chevillard & Meneveau (J. Fluid Mech., vol. 737, 2013, pp. 571–596) who showed results of the rotation rate of the particle's principal axes. The variance of the particle angular velocity, referred to as the particle enstrophy, is found to increase as particles become elongated, regardless of whether they are axisymmetric. This trend is explained by considering the contributions of vorticity and strain rate to particle rotation. It is found that the majority of particle enstrophy is due to fluid vorticity. Strain-rate-induced rotations, which are sensitive to shape, are mostly cancelled by strain–vorticity interactions. The remainder of the strain-rate-induced rotations are responsible for weak variations in particle enstrophy. For particles of all shapes, the majority of the enstrophy is in rotations about the longest axis, which is due to alignment between the longest axis and fluid vorticity. The integral time scale for particle angular velocities about different axes reveals that rotations are most persistent about the longest axis, but that a full revolution is rare.
Rotational kinematics of large cylindrical particles in turbulence
Ankur D. Bordoloi, Evan Variano
Journal: Journal of Fluid Mechanics / Volume 815 / 25 March 2017
The rotational kinematics of inertial cylinders in homogeneous isotropic turbulence is investigated via laboratory experiments. The effects of particle size and shape on rotation statistics are measured for near-neutrally buoyant particles whose sizes are within the inertial subrange of turbulence. To examine the effects of particle size, three right-circular cylinders (aspect ratio $\unicode[STIX]{x1D706}=1$ ) are considered, with size $d_{eq}=16\unicode[STIX]{x1D702}$ , $27\unicode[STIX]{x1D702}$ and $67\unicode[STIX]{x1D702}$ . Here, $d_{eq}$ is the diameter of a sphere whose volume is equal to that of the particle and $\unicode[STIX]{x1D702}$ is the Kolmogorov length scale. Results show that the variance of the particle rotation rate follows a $-4/3$ power-law scaling with respect to $d_{eq}$ . To examine the effect of particle shape, two cylinders with identical volumes and different aspect ratios ( $\unicode[STIX]{x1D706}=1$ and $\unicode[STIX]{x1D706}=4$ ) are measured. Their motion also scales with $d_{eq}$ regardless of shape. Simultaneous measurements of orientation and rotation for $\unicode[STIX]{x1D706}=4$ particles allows a decomposition of rotation along the primary axes of each particle. This analysis shows that there is no preference for rotation about a particle's symmetry axis, unlike the preference displayed by sub-Kolmogorov-scale particles in previous studies.
Turbulent transport of a high-Schmidt-number scalar near an air–water interface
Evan A. Variano, Edwin A. Cowen
Journal: Journal of Fluid Mechanics / Volume 731 / 25 September 2013
We measure solute transport near a turbulent air–water interface at which there is zero mean shear. The interface is stirred by high-Reynolds-number homogeneous isotropic turbulence generated far below the surface, and solute transport into the water is driven by an imposed concentration gradient. The air–water interface is held at a constant concentration much higher than that in the bulk of the water by maintaining pure ${\mathrm{CO} }_{2} $ gas above a water tank that has been initially purged of dissolved ${\mathrm{CO} }_{2} $ . We measure velocity and concentration fluctuations below the air–water interface, from the viscous sublayer to the middle of the 'source region' where the effects of the surface are first felt. Our laboratory measurement technique uses quantitative imaging to collect simultaneous concentration and velocity fields, which are measured at a resolution that reveals the dynamics in the turbulent inertial subrange. Two-point statistics reveal the spatial structure of velocity and concentration fluctuations, and are examined as a function of depth beneath the air–water interface. There is a clear dominance of large scales at all depths for all quantities, but the relative importance of scales changes markedly with proximity to the interface. Quadrant analysis of the turbulent scalar flux shows a four-way balance of flux components far from the interface, which near the interface evolves into a two-way balance between motions that are raising and lowering parcels of low-concentration fluid.
Shape effects on turbulent modulation by large nearly neutrally buoyant particles
Gabriele Bellani, Margaret L. Byron, Audric G. Collignon, Colin R. Meyer, Evan A. Variano
Journal: Journal of Fluid Mechanics / Volume 712 / 10 December 2012
Published online by Cambridge University Press: 27 September 2012, pp. 41-60
We investigate dilute suspensions of Taylor-microscale-sized particles in homogeneous isotropic turbulence. In particular, we focus on the effect of particle shape on particle–fluid interaction. We conduct laboratory experiments using a novel experimental technique to simultaneously measure the kinematics of fluid and particle phases. This uses transparent particles having the same refractive index as water, whose motion we track via embedded optical tracers. We compare the turbulent statistics of a single-phase flow to the turbulent statistics of the fluid phase in a particle–laden suspension. Two suspensions are compared, one in which the particles are spheres and the other in which they are prolate ellipsoids with aspect ratio 2. We find that spherical particles at volume fraction ${\phi }_{v} = 0. 14\hspace{0.167em} \% $ reduce the turbulent kinetic energy (TKE) by 15 % relative to the single-phase flow. At the same volume fraction (and slightly smaller total surface area), ellipsoidal particles have a much smaller effect: they reduce the TKE by 3 % relative to the single-phase flow. Spectral analysis shows the details of TKE reduction and redistribution across spatial scales: spherical particles remove energy from large scales and reinsert it at small scales, while ellipsoids remove relatively less TKE from large scales and reinsert relatively more at small scales. Shape effects are far less evident in the statistics of particle rotation, which are very similar for ellipsoids and spheres. Comparing these with fluid enstrophy statistics, we find that particle rotation is dominated by velocity gradients on scales much larger than the particle characteristic length scales.
A random-jet-stirred turbulence tank
Published online by Cambridge University Press: 14 May 2008, pp. 1-32
We report measurements of the flow above a planar array of synthetic jets, firing upwards in a spatiotemporally random pattern to create turbulence at an air–water interface. The flow generated by this randomly actuated synthetic jet array (RASJA) is turbulent, with a large Reynolds number and a weak secondary (mean) flow. The turbulence is homogeneous over a large region and has similar isotropy characteristics to those of grid turbulence. These properties make the RASJA an ideal facility for studying the behaviour of turbulence at boundaries, which we do by measuring one-point statistics approaching the air–water interface (via particle image velocimetry). We explore the effects of different spatiotemporally random driving patterns, highlighting design conditions relevant to all randomly forced facilities. We find that the number of jets firing at a given instant, and the distribution of the duration for which each jet fires, greatly affect the resulting flow. We identify and study the driving pattern that is optimal given our tank geometry. In this optimal configuration, the flow is statistically highly repeatable and rapidly reaches steady state. With increasing distance from the jets, there is a jet merging region followed by a planar homogeneous region with a power-law decay of turbulent kinetic energy. In this homogeneous region, we find a Reynolds number of 314 based on the Taylor microscale. We measure all components of mean flow velocity to be less than 10% of the turbulent velocity fluctuation magnitude. The tank width includes roughly 10 integral length scales, and because wall effects persist for one to two integral length scales, there is sizable core region in which turbulent flow is unaffected by the walls. We determine the dissipation rate of turbulent kinetic energy via three methods, the most robust using the velocity structure function. Having a precise value of dissipation and low mean flow allows us to measure the empirical constant in an existing model of the Eulerian velocity power spectrum. This model provides a method for determining the dissipation rate from velocity time series recorded at a single point, even when Taylor's frozen turbulence hypothesis does not hold. Because the jet array offers a high degree of flow control, we can quantify the effects of the mean flow in stirred tanks by intentionally forcing a mean flow and varying its strength. We demonstrate this technique with measurements of gas transfer across the free surface, and find a threshold below which mean flow no longer contributes significantly to the gas transfer velocity.
|
CommonCrawl
|
Is intersection of regular language and context free language is "always" context free language
I have read that intersection of regular language and context-free language is always context-free. Most of the places an standard example has been used to prove this, e.g., \begin{align*} L_1 &= L(a^*b^*)\\ L_2 &= \{a^nb^n\mid n\geq 0\} \quad\text{(which is context free)}\\ L_1\cap L_2 &= \{a^nb^n\mid n\geq 0\}\,. \end{align*} But my question is what if the regular language is finite, such as \begin{align*} L_1 &= \{ab\}\\ L_2 &= \{a^nb^n\mid n\geq 0\}\\ L_1\cap L_2 = \{ab\}\,. \end{align*} Here intersection comes out to be finite and the language generated by intersection of both language is also finite which is regular (I know it's also context-free but regular is a stronger answer here).
What mistake am I making in understanding the concept?
formal-languages regular-languages context-free closure-properties
Raphael♦
Shaji Thorn BlueShaji Thorn Blue
$\begingroup$ cs.stackexchange.com/questions/18642/… $\endgroup$ – D.W.♦ Jun 22 '16 at 7:04
$\begingroup$ "Most of the places an standard example has been used to prove this" -- citation needed. A single example can't prove a claim such as this. $\endgroup$ – Raphael♦ Jun 22 '16 at 8:08
$\begingroup$ You already give yourself the answer, so I don't understand what you are asking. $\endgroup$ – Raphael♦ Jun 22 '16 at 8:09
The claim is that the intersection of a regular language and a context-free language is context-free. You've intersected a regular language ($\{ab\}$) and a context-free language ($\{a^nb^n\mid n\geq 0\}$) and the result was a context-free language ($\{a,b\}$). Sure, that language is also regular but every regular language is also context-free. The statement isn't that the intersection of a regular language and a context-free language is a non-regular context-free language.
Analogously, the child of two mammals is a mammal. You've taken two mammals and you're saying, "Why is their child a cat? It's supposed to be a mammal."
David RicherbyDavid Richerby
Your mistake is in interpreting the meaning of the statement
The intersection of a regular language and a context-free language is a context-free language.
This statement means that if $R$ is regular and $L$ is context-free then $R \cap L$ is context-free. It doesn't state that $R \cap L$ is not regular; this is just not part of the claim.
Your first example shows that $R \cap L$ could be not regular, but it could also be regular, as your other example shows.
Yuval FilmusYuval Filmus
$\begingroup$ I don't think this is the issue. Sure, the claim doesn't say that $R\cap L$ is regular. But I think the misunderstanding is that the asker is interpreting "The intersection is context-free" to mean "The intersection is context-free and not regular." $\endgroup$ – David Richerby Jun 22 '16 at 7:47
$\begingroup$ Thanks Yuval finally able to understand the meaning of the concept. $\endgroup$ – Shaji Thorn Blue Jun 22 '16 at 17:38
Not the answer you're looking for? Browse other questions tagged formal-languages regular-languages context-free closure-properties or ask your own question.
Intersection of context free with regular languages
Deciding whether a Language is Context-free/Regular/Non Context-Free
Give a grammar to show whether a language is regular or context-free
Context free language and the complement of it
Can an intersection of two context-free languages be an undecidable language?
Is $\{ 0^p1^q0^r \mid p \neq r \}$ context-free?
|
CommonCrawl
|
Additive Differential of XOR
I'm studying about Additive differential of XOR
I saw two papers that are "H.Lipmaa et al., On the Additive Differential Probability of Exclusive-or" and "V.Velichkov et al., The Additive Differential Probability of ARX"
In this two paper, Limaa et al. and Velichkov et al. compute the probability of additive differential of xor in similar way.
But there is one difference between them.
That is matrix $A_i$ is different.
For example $A_{000}$ in first paper is $A_{000}=\begin{pmatrix} 4& 0& 0& 1& 0& 1& 1& 0 \\0& 0& 0& 1& 0& 1& 0& 0 \\ \vdots & & & \cdots & \cdots & & & \vdots \\ 0& 0& 0& 0& 0& 0& 0& 0 \end{pmatrix}$
and $A_{000}$ in second paper is $A_{000}=\begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\0& 1& 0& 0& 0& 0& 0 & 0 \\ \vdots & & & \cdots & \cdots & & & \vdots \\ 0& 0& 0& 0& 0& 0& 0& 1 \end{pmatrix}$
I hope someone can tell me why there exists a difference.
modular-arithmetic differential-analysis
Ella Rose♦
jyjjyj
$\begingroup$ I find the answer... See the table 2. in second paper. $\endgroup$ – jyj Mar 8 at 8:15
$\begingroup$ If you found the answer, please be kind enough to share it. $\endgroup$ – DannyNiu Mar 8 at 9:02
$\begingroup$ In second paper(V.Velichkov et al., The Additive Differential Probability of ARX), the table 2's order is not same to orginal order. For example, S[i] =0 when (s1[i],s2[i],s3[i])=(0,0,-1). So, there exist a difference $\endgroup$ – jyj Mar 8 at 11:44
$\begingroup$ @jyj Please post it as an answer to your question, rather than sharing it in the comments. Comments should not be used to answer questions. $\endgroup$ – Ella Rose♦ Mar 8 at 15:19
The OP has provided the answer in a comment.
"In the second paper(V.Velichkov et al., The Additive Differential Probability of ARX), the table 2's order is not same as the original order.
For example, $S[i] =0$ when $(s_1[i],s_2[i],s_3[i])=(0,0,-1)$. So, there exists a difference.
kodlu
Not the answer you're looking for? Browse other questions tagged modular-arithmetic differential-analysis or ask your own question.
S-box with differential uniformity = 2
Differential Fault Analysis of AES
Differential Cryptanalysis
Differential trails in FEAL-8
Differential or Truncated Differential Attack On Small Cipher
What is a differential trail?
Additive differential probability
Searching for a differential characteristic (differential cryptanalysis)
Recover all keys on differential cryptanalysis
Implementation of differential cryptanalysis
|
CommonCrawl
|
How to compute Riemann-Stieltjes / Lebesgue(-Stieltjes) integral?
The definitions do not seem easy to me for computation. For example, Lebesgue(-Stieltjes) integral is a measure theory concept, involving construction for from step function, simple functions, nonnegative function till general function.
I was wondering, in practice, what common ways for computing Lebesgue(-Stieltjes) integral are?
Is it most desirable, when possible, to convert Lebesgue(-Stieltjes) integral to Riemann(-Stieltjes) integral, and Riemann-Stieltjes) integral to Riemann integral, and then apply the methods learned from calculus to compute the equivalent Riemann integral?
What about the cases when the equivalence/conversion is not possible? Is definition the only way to compute Riemann-Stieltjes or Lebesgue(-Stieltjes) integrals?
My questions come from a previous reply by Gortaur
Usually only Lebesgue (Lebesgue-Stieltjes) integrals are used in the probability theory. On the other hand to calculate them you can use an equivalence of Lebesgue-Stieltjes and Riemann-Stieltjes integrals (provided necessary conditions).
real-analysis integration measure-theory
TimTim
$\begingroup$ @Tim: You may want to check out Frank Burk's "A Garden of Integrals". It describes the different types and how they relate to one another. In particular, the Lebesgue Integral has an "FTC", and the Lebesgue-Stieltjes Integral can, in many instances, be evaluated much like the Riemann Stieltjes integral can (for example, when $g$ is differentiable). $\endgroup$
– Arturo Magidin
$\begingroup$ @Arturo: Thanks! I will. By "FTC", you mean Fundamental Theorems of Calculus? Is this en.wikipedia.org/wiki/Differentiation_of_integrals the "FTC" for Lebesgue Integral? Or/and something else? The link does not look like to make the computation easier. $\endgroup$
– Tim
$\begingroup$ @Tim: I mean the first part of the FTC, which for Riemann Integrals says that if $F'(x) = f(x)$ on $[a,b]$, $f(x)$ continuous, then $\int_a^b f(x)\,dx = F(b)-F(a)$. There are similar theorems for the Lebesgue Integral. The book's in my office, but I'll try to post the theorems tomorrow. $\endgroup$
$\begingroup$ You can view most of the book from google. The only condition is that $F$ is absolutely continuous on $[a,b]$. $\endgroup$
– GWu
$\begingroup$ @Arturo: Thanks! (1) That's nice. I will have the access to the book too, and perhaps just knowing the the numbering of the theorems will be fine and you don't need to type much. (2) Besides FTC, are there other usual ways to calculate the integrals? Are these all possible, integration by parts, by substitution, ... $\endgroup$
Even with the Riemann Integral, we do not usually use the definition (as a limit of Riemann sums, or by verifying that the limit of the upper sums and the lower sums both exist and are equal) to compute integrals. Instead, we use the Fundamental Theorem of Calculus, or theorems about convergence. The following are taken from Frank E. Burk's A Garden of Integrals, which I recommend. One can use these theorems to compute integrals without having to go down all the way to the definition (when they are applicable).
Theorem (Theorem 3.8.1 in AGoI; Convergence for Riemann Integrable Functions) If $\{f_k\}$ is a sequence of Riemann integrable functions converging uniformly to the function $f$ on $[a,b]$, then $f$ is Riemann integrable on $[a,b]$ and $$R\int_a^b f(x)\,dx = \lim_{k\to\infty}R\int_a^b f_k(x)\,dx$$
(where "$R\int_a^b f(x)\,dx$" means "the Riemann integral of $f(x)$").
Theorem (Theorem 3.7.1 in AGoI; Fundamental Theorem of Calculus for the Riemann Integral) If $F$ is a differentiable function on $[a,b]$, and $F'$ is bounded and continuous almost everywhere on $[a,b]$, then:
$F'$ is Riemann-integrable on $[a,b]$, and
$\displaystyle R\int_a^x F'(t)\,dt = F(x) - F(a)$ for each $x\in [a,b]$.
Likewise, for Riemann-Stieltjes, we don't usually go by the definition; instead we try, as far as possible, to use theorems that tell us how to evaluate them. For example:
Theorem (Theorem 4.3.1 in AGoI) Suppose $f$ is continuous and $\phi$ is differentiable, with $\phi'$ being Riemann integrable on $[a,b]$. Then the Riemann-Stieltjes integral of $f$ with respect to $\phi$ exists, and $$\text{R-S}\int_a^b f(x)d\phi(x) = R\int_a^b f(x)\phi'(x)\,dx$$ where $\text{R-S}\int_a^bf(x)d\phi(x)$ is the Riemann-Stieltjes integral of $f$ with respect to $d\phi(x)$.
Theorem (Theorem 4.3.2 in AGoI) Suppose $f$ and $\phi$ are bounded functions with no common discontinuities on the interval $[a,b]$, and that the Riemann-Stieltjes integral of $f$ with respect to $\phi$ exists. Then the Riemann-Stieltjes integral of $\phi$ with respect to $f$ exists, and $$\text{R-S}\int_a^b \phi(x)df(x) = f(b)\phi(b) - f(a)\phi(a) - \text{R-S}\int_a^bf(x)d\phi(x).$$
Theorem. (Theorem 4.4.1 in AGoI; FTC for Riemann-Stieltjes Integrals) If $f$ is continuous on $[a,b]$ and $\phi$ is monotone increasing on $[a,b]$, then $$\displaystyle \text{R-S}\int_a^b f(x)d\phi(x)$$ exists. Defining a function $F$ on $[a,b]$ by $$F(x) =\text{R-S}\int_a^x f(t)d\phi(t),$$ then
$F$ is continuous at any point where $\phi$ is continuous; and
$F$ is differentiable at each point where $\phi$ is differentiable (almost everywhere), and at such points $F'=f\phi'$.
Theorem. (Theorem 4.6.1 in AGoI; Convergence Theorem for the Riemann-Stieltjes integral.) Suppose $\{f_k\}$ is a sequence of continuous functions converging uniformly to $f$ on $[a,b]$ and that $\phi$ is monotone increasing on $[a,b]$. Then
The Riemann-Stieltjes integral of $f_k$ with respect to $\phi$ exists for all $k$; and
The Riemann-Stieltjes integral of $f$ with respect to $\phi$ exists; and
$\displaystyle \text{R-S}\int_a^b f(x)d\phi(x) = \lim_{k\to\infty} \text{R-S}\int_a^b f_k(x)d\phi(x)$.
One reason why one often restricts the Riemann-Stieltjes integral to $\phi$ of bounded variation is that every function of bounded variation is the difference of two monotone increasing functions, so we can apply theorems like the above when $\phi$ is of bounded variation.
For the Lebesgue integral, there are a lot of "convergence" theorems: theorems that relate the integral of a limit of functions with the limit of the integrals; these are very useful to compute integrals. Among them:
Theorem (Theorem 6.3.2 in AGoI) If $\{f_k\}$ is a monotone increasing sequence of nonnegative measurable functions converging pointwise to the function $f$ on $[a,b]$, then the Lebesgue integral of $f$ exists and $$L\int_a^b fd\mu = \lim_{k\to\infty} L\int_a^b f_kd\mu.$$
Theorem (Lebesgue's Dominated Convergence Theorem; Theorem 6.3.3 in AGoI) Suppose $\{f_k\}$ is a sequence of Lebesgue integrable functions ($f_k$ measurable and $L\int_a^b|f_k|d\mu\lt\infty$ for all $k$) converging pointwise almost everywhere to $f$ on $[a,b]$. Let $g$ be a Lebesgue integrable function such that $|f_k|\leq g$ on $[a,b]$ for all $k$. Then $f$ is Lebesgue integrable on $[a,b]$ and $$L\int_a^b fd\mu = \lim_{k\to\infty} L\int_a^b f_kd\mu.$$
Theorem (Theorem 6.4.2 in AGoI) If $F$ is a differentiable function, and the derivative $F'$ is bounded on the interval $[a,b]$, then $F'$ is Lebesgue integrable on $[a,b]$ and $$L\int_a^x F'd\mu = F(x) - F(a)$$ for all $x$ in $[a,b]$.
Theorem (Theorem 6.4.3 in AGoI) If $F$ is absolutely continuous on $[a,b]$, then $F'$ is Lebesgue integrable and $$L\int_a^x F'd\mu = F(x) - F(a),\qquad\text{for }x\text{ in }[a,b].$$
Theorem (Theorem 6.4.4 in AGoI) If $f$ is continuous and $\phi$ is absolutely continuous on an interval $[a,b]$, then the Riemann-Stieltjes integral of $f$ with respect to $\phi$ is the Lebesgue integral of $f\phi'$ on $[a,b]$: $$\text{R-S}\int_a^b f(x)d\phi(x) = L\int_a^b f\phi'd\mu.$$
For Lebesgue-Stieltjes Integrals, you also have an FTC:
Theorem. (Theorem 7.7.1 in AGoI; FTC for Lebesgue-Stieltjes Integrals) If $g$ is a Lebesgue measurable function on $R$, $f$ is a nonnegative Lebesgue integrable function on $\mathbb{R}$, and $F(x) = L\int_{-\infty}^xd\mu$, then
$F$ is bounded, monotone increasing, absolutely continuous, and differentiable almost everywhere with $F' = f$ almost everywhere;
There is a Lebesgue-Stieltjes measure $\mu_f$ so that, for any Lebesgue measurable set $E$, $\mu_f(E) = L\int_E fd\mu$, and $\mu_f$ is absolutely continuous with respect to Lebesgue measure.
$\displaystyle \text{L-S}\int_{\mathbb{R}} gd\mu_f = L\int_{\mathbb{R}}gfd\mu = L\int_{\mathbb{R}} gF'd\mu$.
The Henstock-Kurzweil integral likewise has monotone convergence theorems (if $\{f_k\}$ is a monotone sequence of H-K integrable functions that converge pointwise to $f$, then $f$ is H-K integrable if and only if the integrals of the $f_k$ are bounded, and in that case the integral of the limit equals the limit of the integrals); a dominated convergence theorem (very similar to Lebesgue's dominated convergence); an FTC that says that if $F$ is differentiable on $[a,b]$, then $F'$ is H-K integrable and $$\text{H-K}\int_a^x F'(t)dt = F(x) - F(a);$$ (this holds if $F$ is continuous on $[a,b]$ and has at most countably many exceptional points on $[a,b]$ as well); and a "2nd FTC" theorem.
Arturo MagidinArturo Magidin
$\begingroup$ Thanks! That is really helpful! $\endgroup$
$\begingroup$ @Tim: You're welcome. The point is that, just as with limits, and derivatives, and Riemann integrals, we seldom want to go back all the way to the definition in order to compute integrals; instead, we want to settle some basic instances, and then apply a bunch of theorems that tell us how to compute the results for more complicated functions built up from simpler ones (when possible). There may (and are) some instances when one does need to go all the way back to the definition, but we usually don't. $\endgroup$
$\begingroup$ Thanks! Generally, when talking about Lebesgue integral (in narrow sense), Lebesgue-Stieltjes integral or Riemann-stieltijes integral, are the integrands all real-valued Borel measurable functions defined on $\mathbb{R}$, not on $\mathbb{R}^n$, nor on other more general measure space? I know in Rudin's real and complex analysis, Lebesgue integral has a wider sense and applies to real-valued integrands defined on general measure space. $\endgroup$
$\begingroup$ @Tim: The "standard" are functions that are Lebesgue measurable (more general than Borel measurable). We can do it with values on any measure space, though $\mathbb{R}$ and $\mathbb{R}^n$ (with the Lebesgue product measure on the latter) are the most common. I took a course in which we did the entire development of Lebesgue integration with Banach-space-valued functions, and there was absolutely no difficulty in generalizing from $\mathbb{R}$ to that situation. $\endgroup$
The answer depends on what do you mean by practice. In the most common situations the integral of a function $f$ can be usually found by comparing the $f$ to the functions $g$ for which we already know what integral looks like. For the large class of commonly used functions all the notions of the integrals coincide, so it is not necessary to convert from one integral to another, when you have already calculated one variant.
If for the certain function only one notion of integral applies, the comparison principle still applies. The classical example is the Dirichlet function for which Riemman integral does not exist. On the other hand this is zero almost everywhere, and Lebesgues integral is the same for functions which differ on the set of Lebesgue measure zero. So we get the Lebesgue integral of Dirichlet function is the Lebesgue integral of zero. For zero function Rieman and Lebesgue integrals coincide so we can calculate which ever is easier.
Another trick is to construct the sequence of functions with known integrals which converge to the function of interest. Then integral is usually the limit of the corresponding integrals. This of course does not apply for all types of convergence and functions.
To sum up, the definition is not the only way to calculate the integral. The definition is used to calculate the integrals of the most simple functions, the rest is usually calculated by manipulating the function or integral to already known solution.
mpiktasmpiktas
It will be better if you will provide the area (or the problem) which leads you to the calculation of this integrals. From the computational point of view there are two "types" of integrals which lead you to correspondent two general methods of its computation. They are dependent on the distribution $Q$. Let us consider the case of $\mathbb{R}$.
The first type is an integral of an absolutely continuous distributions $Q$ - i.e. of such that $Q(dx) = h(x)\,dx$ for function $h$ which is a density function. These integrals often are calculated like a Riemann integrals (using correspondent methods).
All other 1-dimensional integrals for the computations can be reduced to the previous case. For the cumulative distribution function $g$ (which always exists) you can write $$ dg(x) = h(x)dx + \sum\limits_{i=1}^n p_i\delta(x-x_i) $$ where $\delta(x)$ is a Dirac function.
Then for the continuous function $f$ $$ \int\limits_{\mathbb{R}}f(x)\,dg(x) = \int\limits_{-\infty}^\infty f(x)h(x)\,dx +\sum\limits_{i=1}^n p_if(x_i). $$
This also will be the case if $f$ has countably many discontinuities which do not coincide with the sequence $(x_i)$ of massive points.
IlyaIlya
$\begingroup$ Thanks! The area can be probability theory, but can be in complex analysis and manifold. I was wondering what types of integral are used in complex analysis, such as for Fourier transformation? Is integral over manifold called integral of differential forms? Are the integrals from these two areas special cases of Lebesgue integral over a general measure space? $\endgroup$
$\begingroup$ I think that integration of a differential from deals more with a Riemann integral than with the Lebesgue one. The idea is the following - the Riemann construction uses partition of the domain of the function (which then easily extended on the limit case with non-compact sets). On the other hand, to define the Lebesgue integral one usually deals with the partition of the codomain and measures of pre-images. The Dirichlet function is a good example where partition of the codomain is easy, on the other hand, the Riemann integral is undefined. $\endgroup$
– Ilya
Not the answer you're looking for? Browse other questions tagged real-analysis integration measure-theory or ask your own question.
Fundamental Theorem of Calculus
What is the insight behind the Lebesgue integral?
Is expectation Riemann-/Lebesgue–Stieltjes integral?
Lebesgue integral or Lebesgue–Stieltjes integration?
Generalizing Green's Theorem
What if limit of integral is not equal to integral of limit?
How to integrate a function with respect to a Lebesgue-Stieltjes measure?
How is Riemann–Stieltjes integral a special case of Lebesgue–Stieltjes integral?
Is there difference between existence of integral and integrability
What is the point of Riemann-Stieltjes integration?
Riemann Stieltjes integral definition and implications
For non-negative functions, are Riemann-Stieltjes and Lebesgue integrals equivalent?
Understanding Stieltjes-Riemann
Confusion about Stieltjes integrals: Improper-Riemann, Lebesgue, and Generalized Riemann
dominated convergence theorem for Riemann-Stieltjes integral
|
CommonCrawl
|
Generic Scene Node Composer
About Getting Started Examples Gallery Documentation Launch
AudioProcessing.Compute.Volume.Limiter
This node implements an audio limiter. Limiters are very useful to avoid clipping of a signal when the signal's amplitude leaves its allowed range between -1 and 1. For example, clipping is a typical problem when multiple audio sources are added together to generate a mix. Trivial solutions to avoid clipping would be either to clamp the signal to the range [-1,1], which might introduce strong distortion, or to scale down the amplitude of the complete signal linearly, which reduce the perceived loudness.
In contrast, a dynamic range limiter avoids clipping by a non-linear scaling that effects only large amplitudes above a certain threshold and leaves smaller amplitudes intact. Thereby, the limiter allows to control how smoothly the scaling factor (gain) is changing over time.
The images below shows typical characteristics of the applied non-linear scaling function. The left curve is call soft-knee characteristic due to its soft transition in the area around the threshold. The right one is call hard-knee characteristic.
Because loudness is perceived linear in logarithmic units, the scaling function is applied after conversion to the logarithmic decibel (dB) scale. The characteristic maps an input value on the x-axis (in dB) to a value on the y-axis (in dB). In this example, the threshold is -10 dB. Therefore, all input values above -10 db are mapped to -10 dB after the characteristic is applied.
Input Slots
The limiter node has the follow input slots:
Threshold (default: -6 dB): The threshold in dB at which limiting starts.
Knee width (default: 0 dB): The width of the soft-knee in dB. A value of 0 dB, generates a hard-knee characteristic.
Attack time (default: 0 sec): Attack time in seconds. This parameter controls how fast the limiter is responding to a signal value above the threshold. A value of 0 seconds means that the limiter react immediately and the output is guaranteed to stay below the threshold. Because of this property, limiters with an attack time of 0 seconds are also called "brickwall" limiters. A longer attach time means that the gain is changed more slowly. This has the advantage of a less abrupt gain change, but values above the threshold can occur.
Release time (default: 0.5 sec): Release time in seconds. This parameter controls how fast the limiter releases the scaling once the signal drops below the threshold.
Sample rate (default: 44100 samples/sec): Sample rate of the input audio signal.
Apply make-up gain (default: true): As the limiter reduces the amplitude, this option allows to automatically compensate for this loss.
Stream (default: false): Enable this Boolean parameter if the limiter is used as a node in a streaming audio pipeline. This is important because with this option enabled, the gain value is maintained from one processed frame to the next.
Debug (default: Final Gain): selects which debug signal is displayed at the debug output (Input dB, Applied Characteristic, Gain, Smoothed Gain, Make-up Gain, Final Gain, Characteristic).
Implementation Details
The implementation of the dynamic range limiter follows the feed-forward design from Josh Reiss's tutorial, which proposes a 6-step side-chain that computes the final gain factor for each input sample. The following figure illustrates the internal signals after each side-chain step for a test input signal.
Step 1: Convert to dB
The conversion to decibel (dB) scale is computed by:
$x_{\mathrm{dB}} = 20 \log_{10}|x|$,
where $x$ is a sample of the input signal. As can be seen in the figure above, an amplitude in range [-1,1] will result in a $x_{dB}$ less than or equal to zero, whereas the problematic values outside this range, which would cause clipping, have a $x_{dB}$ larger than zero.
Step 2: Apply Characteristic
In this step the static characteristic is applied. The user can select a threshold $T$ and a knee width $W$. For a hard-knee characteristic (with $W \le 0$), the formula is:
$x_{c} = \begin{cases} x_{\mathrm{dB}} & \,\,:\,\, x_{\mathrm{dB}} < T\\ T & \,\,:\,\, x_{\mathrm{dB}} \ge T \\ \end{cases}$
and for a soft knee characteristic with knee width $W$:
$x_{c} = \begin{cases} x_{\mathrm{dB}} & \,\,:\,\, x_{\mathrm{dB}} < (T - W/2)\\ x_{\mathrm{dB}} - \frac{ \left(x_{\mathrm{dB}} - T + W/2\right)^2}{2 \,W } & \,\,:\,\, (T - W/2) \le x_{\mathrm{dB}} \le (T + W/2) \\ T & \,\,:\,\, x_{\mathrm{dB}} > (T +W/2).\\ \end{cases}$
Thereby, the threshold $T$ and the knee width $W$ are specified in dB. For example, in the figure above a hard knee characteristic ($W=0\, \mathrm{dB}$) with a threshold of $T = -10\,\mathrm{dB}$ is applied.
Step 3: Compute Gain
The gain that should be applied to the input signal is computed by the difference between the signal after the applied characteristic and the input signal in dB:
$x_{g} = x_{c} - x_{\mathrm{dB}} $
Step 4: Smooth Gain
In this step the gain is smoothed over time using a single pole recursive low-pass filter. Two different filter coefficents $\alpha$ and $\beta$ are applied, where the filter coefficent $\alpha$ is controlled by the attack time and $\beta$ is controlled by the release time.
$x_{s}[n] = \begin{cases} \alpha \, x_{s}[n-1] + (1.0 - \alpha) \,x_{g}[n] & \,\,:\,\, x_{g}[n] < x_{s}[n-1]\\ \beta \, x_{s}[n-1] + (1.0 - \beta) \, x_{g}[n] & \,\,:\,\, x_{g}[n] \ge x_{s}[n-1]\\ \end{cases}$
For the example in the figure above, the attack time is selected to be 0 seconds, resulting in an $\alpha$ of zero, and therefore no smoothing is done while the required gain $x_{g}$ continiously gets lower at the beginning of the signal. A limiter with an attack time of 0 seconds is called "brickwall" limiter because it will always directly apply the required negative gain if the signal goes above the threshold. The release time is chosen as 0.2 seconds and, therefore, we can observe smoothing of the gain once the required gain $x_{g}$ gets higher.
Step 5: Make-up Gain
Typically, because input amplitudes above the threshold are suppressed, the output signal has a reduced loudness. To compensate for this loss, additional make-up gain can be applied. The amount of make-up gain can be determined by computing how much loss would be realized by the characteristic for an amplitude of 1.0, corresponding to 0 dB.
$x_{m} = x_{s} - \mbox{characteristic}(0 \,\mbox{dB})$
In the example in the figure above, the threshold is $T = -10\,\mathrm{dB}$ and, thus, the gain is raised by $10\,\mathrm{dB}$ for the complete signal.
Step 6: Convert to Linear
This last step converts the gain from decibel (dB) scale back to the linear domain. The inverse function to the one in step 1 must be applied, which is given by:
$g = 10^{\frac{x_{m}}{20}}$
The resulting final gain $g$ is the output of the side-chain and is multiplied with the current sample of the input signal.
About • Gallery • Terms of Use • Privacy Policy • Contact
Copyright © 2014, Thorsten Thormählen, All rights reserved
|
CommonCrawl
|
An easy tool to assess ventilation in health facilities as part of air-borne transmission prevention: a cross-sectional survey from Uganda
Miranda Brouwer ORCID: orcid.org/0000-0001-8260-46491,
Achilles Katamba2,
Elly Tebasoboke Katabira2 &
Frank van Leth3
No guidelines exist on assessing ventilation through air changes per hour (ACH) using a vaneometer. The objective of the study was to evaluate the position and frequency for measuring air velocity using a vaneometer to assess ventilation with ACH; and to assess influence of ambient temperature and weather on ACH.
Cross-sectional survey in six urban health facilities in Kampala, Uganda. Measurements consisted of taking air velocity on nine separate moments in five positions in each opening of the TB clinic, laboratory, outpatient consultation and outpatient waiting room using a vaneometer. We assessed in addition the ventilation with the "20% rule", and compared this estimation with the ventilation in ACH assessed using the vaneometer.
A total of 189 measurements showed no influence on air velocity of the position and moment of the measurement. No significant influence existed of ambient temperature and a small but significant influence of sunny weather. Ventilation was adequate in 17/24 (71%) of all measurements. Using the "20% rule", ventilation was adequate in 50% of rooms assessed. Agreement between both methods existed in 13/23 (56%) of the rooms assessed.
Most rooms had adequate ventilation when assessed using a vaneometer for measuring air velocity. A single vaneometer measurement of air velocity is adequate to assess ventilation in this setting. These findings provide practical input for clear guidelines on assessing ventilation using a vaneometer. Assessing ventilation with a vaneometer differs substantially from applying the "20% rule".
Tuberculosis (TB) is an airborne disease of which transmission occurs through infectious droplets in the air originating mostly from coughing people. This makes health care facilities high-risk areas for TB transmission because coughing patients, including those with (undiagnosed) TB, gather there when seeking care. Therefore, by the nature of their work, health care workers have an increased exposure to TB, and a higher risk of TB disease compared to the general population [1]. To reduce the risk of TB transmission in health care facilities, the World Health Organization (WHO) recommends a set of TB infection prevention and control measures [2]. These measures include the use of ventilation systems. In existing health care facilities maximizing natural ventilation takes priority before considering other ventilation systems.
Evaluation of the adequacy of ventilation is through assessment of the number of air changes per hour (ACH) [2]. This is the number of times per hour that air from outside the room replaces the air in the room. International guidelines recommend at least 12 ACH for airborne precaution rooms [2, 3], and at least 6-12 ACH for laboratories performing low risk investigations such as smear microscopy [4]. If individual health care workers or health care facilities had a simple tool to assess ventilation in their workrooms, it may encourage them to maximize natural ventilation. If adequate ventilation is not possible, they could use additional measures to reduce the airborne transmission risk.
The most used reference tests for measuring actual ACH are tracer gases or carbon dioxide dilution [5,6,7], as described by Menzies et al. [8] These techniques require equipment that has limited availability in resource-constrained settings. Other techniques have been used, such as asking health care workers about ventilation in their consultation rooms without quantitative assessment [9], or the open openings' surface to floor surface ratio to assess ventilation, the "20% rule" [10], as recommended in the Ugandan TB infection control guidelines [11]. Ventilation is considered adequate if the surface of open openings is more than 20% of the floor surface. These methods are easy to use but have not been validated against an adequate reference method.
The document on implementation of the WHO infection prevention and control policy suggests a relative simple tool, a vaneometer, to assess ventilation [12]. The vaneometer is developed for industry to measure air velocity. This air velocity together with the volume of the room and the surface of openings through which air enters the room, provide the inputs to calculate the ACH.
Unfortunately, there is no operational guidance nor experiences from published studies on how to measure air velocity using the vaneometer, precluding the answers to some basic questions such as (1) Is a single air velocity measurement sufficient, and (2) Is the position in the opening relevant for the air velocity measurement? For widespread implementation of ventilation assessments in especially resource limited settings it would be of great help if a single measurement of air velocity would suffice, which is the primary research question for the current study. Assessment of ventilation does need trained staff, and if a single measurement were sufficient, staff could perform more assessments and cover more facilities in less time. A secondary question is how the ACH assessment with vaneometer compares to the assessment of the open openings' surface to floor surface ratio method.
In six purposefully chosen urban health care facilities in Kampala, we conducted ventilation assessments in the TB clinic, the laboratory, an outpatient department (OPD) consultation room, and in the OPD waiting area. Data collectors took nine rounds of separate air velocity measurements for each opening using a vaneometer: three times a day on three consecutive working days. At each of these time points, they took the measurements at five positions in each opening in the room: in the center of the opening and in the middle of each of the sides of the opening. They kept the vaneometer for a few seconds at each position and then read the air velocity. The measurements were taken with openings open or closed as in routine working conditions.
In addition, they measured the height and width of all openings to calculate the surface of the openings, as well as width, length and height of the rooms to calculate the volume of the rooms. They recorded information on ambient temperature (degrees centigrade) and weather conditions (cloudy, rainy, sunny, windy or a combination of these) at the time of the measurement. The recording of open or closed state of the openings as in routine working conditions occurred on the first day only.
The data collectors used an android phone with pre-installed structured data capture forms using Open Data Kit Collect (version 1.4.2.). The forms were uploaded using Open Data Kit Aggregate to a server from which databases in the form of comma separated files were downloaded. The data collectors received training on the use of the vaneometer and had prior experience conducting such assessments. They used a DwyerTM vaneometer M480 with a vane (Dwyer Instruments, Inc., Michigan City, USA) to measure air velocity in meters per second. The selection for this type of vaneometer was based on the price (USD 35,75 at the time of the study) and the experience that researchers and data collectors had with this type. The air velocity lower detection limit of this device is 25 ft per minute or 0.13 m per second (manufacturer instructions leaflet).
The data files were imported into STATA version 12 (StataCorp, College Station, Texas, USA). To assess the appropriateness of the use of the vaneometer, we estimated the effect of the position of the measurement and the round of the measurement of air velocity at a specific opening. We used a hierarchical model that incorporated a fixed effect for the round, and a random effect for the position of the measurement. The fixed effect of round denotes how much the mean overall velocity changes on average for each round. The random effect allows the mean overall velocity to differ by opening. Its estimate is a standard deviation and consists of two parts: a between-estimate and a within-estimate. The between-estimate gives the standard deviation of the different mean overall velocities at each position. A small between-random effect indicates that there is not much variation in overall mean velocity between the different positions. The within-estimate gives us the standard error of the actual measurements as is similar to the residuals in every statistical model. The difference in magnitude of these parts of a random effect tells us where the variation in velocity measurement comes from.
As input for the model, we used only measured air velocities that had an inward direction and were not equal to zero. The estimated air velocities for each opening provided the input for the formula of ACH
$$ ACH=3600\;\mathrm{s}\; x\frac{\left( average\kern0.5em estimated\kern0.5em air\kern0.5em velocity\kern0.5em \left( m/ s\right)\right. x\left( area\ all\ openings\ with\ incoming\ air\ (m2)\right)}{volume\ of\ the\ room\ (m3)} $$
$$ \mathrm{s}=\mathrm{seconds},\mathrm{m}/\mathrm{s}=\mathrm{meter}\ \mathrm{per}\ \mathrm{second},\mathrm{m}2=\mathrm{square}\ \mathrm{meter},\mathrm{m}3=\mathrm{cubic}\ \mathrm{meter} $$
If the air velocity in an opening was not inward for all five positions, the area of the opening with inward air contributed proportionally to the ACH calculation. For example, if the direction of the airflow was inward in three of the five positions and outward in the remaining two positions, 60% of the total area of the opening contributed to the ACH calculation. We classified ventilation as inadequate if the ACH was below 6, as potentially adequate between 6 and 12, and as adequate if above 12 [2,3,4].
To assess the effect of weather, we collapsed the possible categories into two (sunny / not sunny) to obtain groups of similar sizes. Given the distribution of temperature, we grouped the data as below 25 degrees or 25 degrees and over.
We calculated the open openings' surface to floor surface ratio with R statistics [13].
The "20% rule" uses the formula
$$ ventilation=\frac{sum\; of\ the\ surface\ of\; all\; open\ open ings}{surface\ of\ the\ floor\ of\ the\ room} x\;100\% $$
The assessment of ventilation using the minimum ACH value calculated with the measured air velocity was then compared to the assessment of the ventilation with the "20% rule".
The Research and Ethics Committee of Makarere University and the Uganda National Council for Science and Technology in Kampala approved the Ugandan study.
Data collection took place from May to July 2014. In the six facilities, the data collectors took 189 measurements, i.e. measuring the air velocity at each opening in the room, out of the expected 216 (six facilities, four rooms, and nine rounds of measurements: three times a day on 3 days). Two TB clinics were tents, which were completely open structures with a roof and poles only. In one of them we took measurements on 1 day only, in the other TB tent no measurements at all. In one facility, we managed only 2 days of measurements. In one room in two facilities we did not manage three rounds of measurements in a day because the rooms were in use. In total, there were 3955 air velocity measurements, of which 278 (7%) were zero.
The effects of the hierarchical model are reported in Table 1. The average fixed effect of the round on the measured air velocity at a specific opening was small in relation to the mean overall air velocity at that opening, even if this effect was statistically significant. The between part of the random effect of the position of the measurement was in most instance almost non-existent, and always much lower than the within random effect. These results indicate that a single measurement at an arbitrary position of the opening would give a valid indication of the air velocity at that opening. Using these measurements in the calculation of the ACH would provide a valid assessment of the ventilation in the room.
Table 1 Effects of the hierarchical model
Table 2 presents the classification of ACH based on the modeled air velocity and the 20% rule. In 17 of the 23 (74%) rooms, all rounds of measurements conducted resulted in adequate ventilation. In one room, only one round resulted in inadequate ventilation, while all other rounds in the same room resulted in potentially adequate or adequate ventilation. The other six rooms had a combination of potentially adequate or adequate ventilation.
Table 2 Ventilation status in four areas in six urban health care faculties in Uganda per round of measurement and per day
The modeled air velocity did not vary significantly with the ambient temperature (p = 0.259). In sunny weather the air velocity was higher compared to non-sunny weather (p = 0.003), though the difference in the mean estimated air velocity in both weather conditions was rather small (0.07 m/s), meaning that under different weather conditions the air velocity may change. Another single measurement would be needed to assess the ACH under the different weather conditions.
The ventilation in the routine working situation with the "20% rule" showed that 12 of the 24 rooms assessed had a ratio of more than 20%, which is considered adequate ventilation (Table 2). Agreement between the two methods existed in 13/23 (56%) of the rooms if we combine the potentially adequate and adequate ventilation categories of the ACH method into one category of adequate ventilation. In one room we did not have air velocity measurements and therefore could not make the comparison. In Fig. 1 we show the two methods in a scatterplot where the left upper quadrant and the right lower quadrant show assessments where the methods did not agree.
Scatter plot of minimum ACH and the 20% rule in each room. Black squares show roooms where both methods did not agree (■) and black dots show rooms where both methods did agree on the ACH assessment (•). The value of the minimum ACH in the TB room in facility 3 was excluded from the scatterplot because of its high value (479) it distorted the plot
Our results suggest that in this setting a single air velocity measurement at all openings in a room using a vaneometer is sufficient to assess ventilation in that room through the calculation of ACH. Ventilation assessed with the vaneometer was classified as adequate in most of the rounds. These findings do not compare well with the "20% rule" because both methods agreed in only 56%.
The weather condition had a rather small effect on the measured air velocity and may be due to a difference in the temperature gradient between in- and outside temperature. Because we did not measure outside temperature we cannot verify this. However, the effect was rather small on the estimated air velocity and will probably not affect the ACH. Though different weather conditions may affect the opening of windows and doors compared to the routine working situation, which would affect ACH. Therefore, we recommend assessment of ACH under various weather conditions to verify the ventilation in this different condition.
The finding of (potentially) adequate ventilation in more than 94% (177/189) of the rounds was surprising. We did expect poorer ventilation based on other studies from Africa reporting less than 50% of rooms adequately ventilated, though with a different assessment method [10, 14]. The "20% rule" ventilation assessment of 50% adequately ventilated rooms agreed with another study from Uganda [10]. Deciding on the most appropriate assessment of ventilation systems would require a validation study using for example tracer gases.
We used 12 ACH as cut-off for adequate ventilation. This cut-off recommendation applies to mechanically ventilated airborne precaution rooms [2]. The recommended cut-off for laboratories is 6-12 ACH [4]. No clear recommendations on ACH exist for the other rooms such as TB clinics, OPD consultation and waiting rooms, or wards. In a systematic review, Li et al. did not find evidence for a recommended quantification of ventilation requirements [15]. A study in Canada found an association between general or non-isolation rooms having less than 2 ACH and the conversion of the tuberculin skin test in health care workers [16]. The study did not find an association between skin test conversion and inadequately ventilated isolation rooms for which at the time of the study the cut-off was 6 ACH. If a lower cut-off of more than 6 ACH instead of more than 12 ACH would be acceptable to define adequate ventilation, only one room in one round in Uganda would have inadequate ventilation.
Natural ventilation has been shown to achieve higher ACH than mechanical ventilation [5, 7]. The disadvantage of natural ventilation is its variability in both velocity and direction [17]. However, given the costs of mechanical ventilation systems and the need to maintain these systems, and the weak evidence available for specific recommendations regarding the quantification of ventilations requirements, natural ventilation seems the way forward for resource limited settings. Our study shows that in Uganda natural ventilation provides adequate ventilation in at least 50% ("20% rule") or 71% (vaneometer) of the facilities and rooms assessed.
Our method is easy and simple to use and provides a rough estimate of the ACH. It will give health care workers an idea whether their place of work is probably safe with regard to ventilation as prevention for air-borne transmission. However, if the assessment needs to be precise because of working with high risk patients such as patients with MDR-TB, then a rough estimate is insufficient.
Health facilities would need practical guidelines to assess ventilation using the vaneometer in their rooms. Based on our findings, not validated by a reference method, we suggest that such practical guidelines could include at least the following items:
A single measurement of air velocity at each opening using a vaneometer and measurements of openings and rooms provides adequate input for the ACH calculation;
If ACH is above 12 the ventilation is deemed adequate;
If the ACH is between 6 and 12, several measurements of air velocity provide insight into the variability of ventilation; if persistently between 6 and 12, opening more openings will probably increase ventilation;
Because of a potential effect of the weather, assessment of the ACH under different weather conditions is necessary;
If opening of more openings is not possible, or the ACH is below 6, then health facility management should consider improving health care worker safety through additional measures for infection prevention and control; and.
Training and support for ventilation assessments: infection control officers could conduct the assessments after a practical training on how to measure air velocity and how to calculate ACH.
Additional measures to reduce the TB transmission risk in rooms with inadequate ventilation assume that all administrative controls are in place [2]. Additional measures include positioning of health care workers such that they would not inhale potentially infected air, and fans to direct airflow out of the room. Construction adaptation such as addition windows to allow cross-ventilation or latticed walls, seem most effective, though not easily implemented [18]. Each situation with inadequate ventilation would need individual assessment on how to improve ventilation in the particular circumstances of that situation. Should all these measures be insufficient to contain the transmission risk health care workers may need to wear particulate respirators. To do that effectively, they need clear instructions on how and when to use these and how to handle the respirators in-between use should the respirators be used more than once [19].
This method of ACH calculation assumes perfect mixing of air in the entire room. This may not happen in rooms that have obstacles such as partition walls or patient screens. Imperfect mixing means that some areas in the room are better ventilated than other areas.
A further limitation to this study is that, in common with many resource-constrained settings, we lacked the resources to validate the vaneometer against a reference test for ACH assessment using trace gases [20] or carbon dioxide dilution [5, 7]. Such validation is urgently needed. Until such research is done, our findings should be interpreted cautiously.
We did not measure outside wind speed, which has been shown to influence ACH [7]. Therefore future research should also measure ambient, outside wind speed and test the extent to which this influences vaneometer assessment of natural ventilation ACH. For example, on still days, with little wind, airflow through room openings may be too low to measure with the vaneometer, possibly causing ACH to be under-estimated.
Although the manufacturer instructions for the vaneometer states accuracy to ±10% of the full scale, the reading of vaneometer is not straightforward because of the constant movement of the vane. However, the data collectors were trained and experienced in taking the readings as such minimizing reading variability. This inter-reader variability potentially results in different assessments of the ventilation in a room, and becomes especially important when the resulting ventilation is below 6 ACH. We therefore recommend taking more than one air velocity measurement if the resulting ACH is between 6 and 12.
In addition, the lower detection limit of the vaneometer device may assess ACH insufficiently in situations with low air velocity.
We assessed the area of the open openings only on the first day, which limits the comparison between the ACH assessment with the "20%-rule". Data collection took place on three consecutive working days, which may have resulted in the same openings being open or closed during all measurements, it would have been better to assess this at reach round of measurements.
Our study does not capture the complexity of ventilation that is influenced by many factors such as in- and outside temperature and surrounding structures. This was on purpose because we wanted to assess ventilation with simple to use tools and methodology which can be used in the many health facilities in settings with limited resources where more complicated ventilation assessment methods are not widely available. Also, the technical expertise to do such assessment is not or limited available. Our proposed method is easy to implement after a short training and provides a reasonable assessment of the ventilation status. Though we consider a single measurement sufficient for assessing ventilation, we do acknowledge that this method needs further validation. This method is probably of less value in situations where good infection control is highly important such as places where patients with MDR-TB receive treatment. However, it can provide an initial assessment that informs policy makers for further requirements.
It seems possible to assess ventilation in rooms in health care facilities using a vaneometer taking a single measurement of air velocity at each opening in the rooms. Further studies need to validate our findings and identify simple to use and implement methods to assess ventilation in the many health facilities in limited resources settings with a potentially high prevalence of airborne transmitted diseases such as TB. Such studies would provide further valuable input for guideline development on how to assess ventilation in health care facilities. These studies would also need to assess the usefulness and place of the "20% rule". An application for mobile phone to facilitate the ACH calculation and one for using the "20% rule" would simplify the assessment even further.
ACH:
Air Changes per Hour
OPD:
Outpatient Department
TB:
Joshi R, Reingold AL, Menzies D, Pai M. Tuberculosis among health-care workers in low- and middle-income countries: a systematic review. PLoS Med. 2006;3:e494.
World Health Organization. Policy on TB infection control in health-care facilities, congregate settings and households. 2009.
Centers for Disease Control (CDC). Guidelines for Preventing the Transmission of Mycobacterium tuberculosis in Health-Care Settings, 2005 [Internet]. 2005 [cited 2013 Mar 27]. Available from: http://www.cdc.gov/mmwr/preview/mmwrhtml/rr5417a1.htm
World Health Organization. Tuberculosis laboratory Biosafety manual. WHO/HTM/TB/2012.11. 2012.
Jiamjarasrangsi W, Bualert S, Chongthaleong A, Chaindamporn A, Udomsantisuk N, Euasamarnjit W. Inadequate ventilation for nosocomial tuberculosis prevention in public hospitals in Central Thailand. Int J Tuberc Lung Dis. 2009;13:454–9.
Hubad B, Lapanje A. Inadequate hospital ventilation system increases the risk of nosocomial mycobacterium tuberculosis. J Hosp Infect. 2012;80:88–91.
Escombe AR, Oeser CC, Gilman RH, Navincopa M, Ticona E, Pan W, et al. Natural ventilation for the prevention of airborne contagion. PLoS Med. 2007;4:e68.
Menzies R, Schwartzman K, Loo V, Pasztor J. Measuring ventilation of patient care areas in hospitals. Description of a new protocol. Am J Respir Crit Care Med. 1995;152:1992–9.
Javed S, Zaboli M, Zehra A, Shah N. Assessment of the protective measures taken in preventing nosocomial transmission of pulmonary tuberculosis among health-care workers. East J Med. 2013;17:115–8.
Buregyeya E, Nuwaha F, Verver S, Criel B, Colebunders R, Wanyenze R, et al. Implementation of tuberculosis infection control in health facilities in Mukono and Wakiso districts, Uganda. BMC Infect Dis. 2013;13:360.
Ministry of Health of The Republic of Uganda. Uganda National Guidelines for Tuberculosis Infection Control in Health Care Facilities, Congregate Settings and Households. [Internet]. Available from: http://www.who.int/hiv/pub/guidelines/uganda_hiv_tb.pdf
TBCTA. Implementing the WHO Policy on TB Infection Control [Internet]. 2010. Available from: http://www.tbcare1.org/publications/toolbox/tools/ic/TB_IC_Implementation_Framework.pdf
R Core Team (2014). R: A language and environment for statistical computing. [Internet]. R Foundation for Statistical Computing, Vienna, Austria.; Available from: http://www.R-project.org/.
Naidoo S, Seevnarain K, Nordstrom DL. Tuberculosis infection control in primary health clinics in eThekwini, KwaZulu-Natal, South Africa. Int J Tuberc Lung Dis Off J Int Union Tuberc Lung Dis. 2012;16:1600–4.
Li Y, Leung GM, Tang JW, Yang X, Chao CYH, Lin JZ, et al. Role of ventilation in airborne transmission of infectious agents in the built environment - a multidisciplinary systematic review. Indoor Air. 2007;17:2–18.
Menzies D, Fanning A, Yuan L, FitzGerald JM. Hospital ventilation and risk for tuberculous infection in canadian health care workers. Canadian collaborative Group in Nosocomial Transmission of TB. Ann Intern Med. 2000;133:779–89.
World Health Organization. Natural ventilation for infection control in health-care settings. [Internet]. 2009 [cited 2013 Apr 10]. Available from: http://www.who.int/water_sanitation_health/publications/natural_ventilation/en/index.html
Taylor JG, Yates TA, Mthethwa M, Tanser F, Abubakar I, Altamirano H. Measuring ventilation and modelling M. Tuberculosis transmission in indoor congregate settings, rural KwaZulu-Natal. Int. J Tuberc Lung Dis. 2016;20:1155–61.
Brouwer M, Coelho E. Das Dores Mosse C, van Leth F. Implementation of tuberculosis infection prevention and control in Mozambican health care facilities. Int J Tuberc Lung Dis. 2015;19:44–9.
Sherman MH. Tracer-gas techniques for measuring ventilation in a single zone. Build Environ. 1990;25:365–74.
The authors wish to thank Annet Nakaweesa and Michael Mukiibi for collecting the data. We also thank the participating health care facilities for their cooperation.
The first author used private resources to fund the study.
The datasets generated during and/or analysed during the current study are not publicly available due to the participating health facilities not being aware of public data sharing but are available from the corresponding author on reasonable request.
Conceived and designed the experiments: MB AK ETK FvL. Performed the experiments: MB AK ETK. Analyzed the data: MB AK FvL. Contributed reagents/materials/analysis tools: MB AK FvL. Wrote the paper: MB AK ETK FvL. All authors read and approved the final manuscript.
The authors declare they have no competing interest.
The Research and Ethics Committee of Makarere University and the Uganda National Council for Science and Technology in Kampala approved the study. The Kampala health authorities approved participation of the six health facilities.
PHTB Consult, Lovensestraat 79, 5014, DN, Tilburg, The Netherlands
Miranda Brouwer
Department of Medicine, School of Medicine, Makerere University, College of Health Sciences, P.O. Box 21696, Kampala, Uganda
Achilles Katamba & Elly Tebasoboke Katabira
Amsterdam Institute of Global Health and Development, Pietersbergweg 17, 1100, DE, Amsterdam, The Netherlands
Frank van Leth
Achilles Katamba
Elly Tebasoboke Katabira
Correspondence to Miranda Brouwer.
Brouwer, M., Katamba, A., Katabira, E.T. et al. An easy tool to assess ventilation in health facilities as part of air-borne transmission prevention: a cross-sectional survey from Uganda. BMC Infect Dis 17, 325 (2017). https://doi.org/10.1186/s12879-017-2425-6
|
CommonCrawl
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.