Equivalence of Two Lorentz Groups
How can I prove that $O(3;1)$ and $O(1;3)$ are the same group?
group-theory lie-algebras
omehoque
$\begingroup$ Just for clarification: do you mean $SO(3,1)$ and $SO(1,3)$ instead of $O(3,1)$ and $O(1,3)$? $\endgroup$ – Hunter Feb 8 '14 at 12:59
$\begingroup$ Not necessarily. $\endgroup$ – omehoque Feb 8 '14 at 13:03
The matrices $M$ in $O(3;1)$ and $O(1;3)$ are defined by the condition $$ M G M^T = G $$ for $$ G=G_{3,1} ={\rm diag} (1,1,1,-1)\text{ and } G=G_{1,3} = {\rm diag} (1,-1,-1,-1)$$ respectively. I use the convention where the first argument counts the number of $+1$'s in the metric tensor and the second one counts the $-1$'s that follow.
But these two groups only differ by a permutation of the entries.
First, note that it doesn't matter whether we have a "mostly plus" or "mostly minus" metric. If you change the overall sign of the metric via $G\to -G$, $MGM^T = G$ will remain valid.
Second, the two groups only differ by having the "different signature coordinate" at the beginning or at the end. But it may be permuted around. If $M$ obeys the condition defining $O(1,3)$, $MG_{1,3}M^T =G_{1,3}$, you may define $$ M' = P M P^{-1} $$ where $P$ is the cyclic permutation matrix $$ P = \pmatrix{0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 1&0&0&0}$$ and it is easy to see that $M'$ will obey $$ M' G_{3,1} M^{\prime T} = G_{3,1} $$ simply because $$ M' G_{3,1} M^{\prime T} = PMP^{-1} G_{3,1} P^{-1T} M^T P^T $$ but $P^{-1}=P^T$ and, crucially, $$ P^{-1} G_{3,1} P = -G_{1,3} $$ So all the $P$'s combine or cancel, the remaining conjugation $-PG_{1,3}P^{-1} = G_{3,1}$ restores the metric, and one gets the result.
One should try to get through every step here but the reason why the groups are isomorphic is really trivial: they are the groups of isometries of the "same" spacetime, one that has three spacelike and one timelike dimension, and they only differ by the convention how we order the coordinates and whether we use a mostly-plus or mostly-minus metric. But such changes of the notation don't change the underlying "physics" so the groups of symmetries of the "same" object parameterized differently must be isomorphic.
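For readers who want a quick numerical sanity check of the conjugation argument above, the short Python/NumPy snippet below (added here for illustration; it is not part of the original answer) verifies that $P^{-1}G_{3,1}P = -G_{1,3}$ and that conjugating a sample boost in $O(1,3)$ by $P$ yields a matrix preserving $G_{3,1}$.

```python
import numpy as np

G31 = np.diag([1.0, 1.0, 1.0, -1.0])    # signature (+,+,+,-)
G13 = np.diag([1.0, -1.0, -1.0, -1.0])  # signature (+,-,-,-)
P = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)  # cyclic permutation, P^{-1} = P^T

# P^{-1} G_{3,1} P = -G_{1,3}
assert np.allclose(P.T @ G31 @ P, -G13)

# A sample boost M in O(1,3): M G_{1,3} M^T = G_{1,3}
b = 0.6
g = 1.0 / np.sqrt(1.0 - b**2)
M = np.eye(4)
M[0, 0] = M[1, 1] = g
M[0, 1] = M[1, 0] = b * g
assert np.allclose(M @ G13 @ M.T, G13)

# The conjugated matrix M' = P M P^{-1} preserves G_{3,1}
Mp = P @ M @ P.T
assert np.allclose(Mp @ G31 @ Mp.T, G31)
```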
Luboš Motl
$\begingroup$ Alhamdulillah! Great! $\endgroup$ – omehoque Feb 8 '14 at 13:19
Define the Lie group
$$\tag{1} O(p,q)~:=~ \{\Lambda\in {\rm Mat}_{n\times n}(\mathbb{R}) ~|~\Lambda^T\eta\Lambda= \eta \} $$ of pseudo-orthogonal real matrices $\Lambda$ for the metric $$\tag{2} \eta_{\mu\nu}~=~{\rm diag} (\underbrace{+1,\ldots,+1}_{p~\text{times}},\underbrace{-1,\ldots, -1}_{q~\text{times}}), \qquad n~=~p+q.$$
The groups $O(p,q)=O(q,p)$ are equal since the overall sign of the metric $\eta_{\mu\nu}\to -\eta_{\mu\nu}$ does not matter in the definition $$\tag{3}\Lambda^T\eta\Lambda~=~ \eta. $$
Qmechanic
Faculty Lunches
CMI Seminars
Mathematics of Information
Seminar (CS286)
4:00-5:00 pm in Annenberg 314
(except where otherwise noted)
Ben Lee Volk
Algebraic Complexity Theory: Lower bounds and Derandomization (1st of 2 parts)
Algebraic Complexity theory is a subfield of complexity theory, studying computational problems of algebraic flavor such as multiplying matrices, computing the determinant, and more generally, computing polynomials over a field, where the complexity is measured by the number of algebraic operations needed to compute it.
Two basic open problems in the area are (1) proving superpolynomial circuit lower bounds for explicit polynomials (which is the algebraic analog to the P vs. NP problem), and (2) designing deterministic efficient algorithms for the Polynomial Identity Testing problem, that asks to determine, given a circuit computing a polynomial, whether it computes the zero polynomial. Studying these questions often leads to beautiful mathematical questions of combinatorial, algebraic and geometric nature.
In this talk, we will discuss several old and new results in the area, explore the mysterious, bidirectional and intricate connections between the two problems above, and give some open problems.
Algebraic Complexity Theory: Lower bounds and Derandomization (2nd of 2 parts)
Alessandro Zocca
Hard-core interactions on graphs
In this talk I will introduce the hard-core model with Metropolis transition probabilities on finite graphs and its multi-state generalization. Motivated by the study of random-access network performance, I will focus on the low-temperature regime and study the asymptotic behavior of these interacting particle systems by looking at first hitting times between stable states and mixing times. In particular, I will show how the order-of-magnitude of these hitting and mixing times depends on the underlying graph structure and derive precise asymptotics for various regular lattices. These results have been obtained by extending the so-called "pathwise approach" developed in the statistical physics literature to study metastability phenomena, and they yield a rigorous understanding of the root cause of the poor delay performance of random-access networks.
Alessandro Achille (UCLA)
Complexity, Noise, and Emergent Properties of Learning Deep Representations
I will show that Information Theoretic quantities control and describe a large part of the training process of Deep Neural Networks, and can be used to explain how properties, such as invariance to nuisance variability and disentanglement of semantic factors, emerge naturally in the learned representation. The resulting theory has connections with several fields ranging from algorithmic complexity to variational inference. This framework not only predicts the asymptotic behavior of deep networks, but also shows that the initial learning transient has a large irreversible effect on the outcome of the training, which gives rise to critical learning periods akin to those observed in biological systems. This urges us to study the complex, and so far neglected, initial phase of learning.
Omer Tamuz
Combining and Comparing Experiments
A classical statistical model of hypothesis testing is the Blackwell Le Cam experiment, in which an observer makes inferences regarding the veracity of an hypothesis by observing a random outcome whose distribution depends on whether or not the hypothesis holds. We address a number of natural, classical questions: when does one experiment have more information than another? How can we quantify the amount of information an experiment holds? Our results include an answer to an old question of Blackwell, as well as a novel axiomatization of Kullback-Leibler divergence.
Joint with Xiaosheng Mu, Luciano Pomatto and Philipp Strack
Rasmus Kyng (Harvard)
Approximate Gaussian Elimination for Laplacians
We show how to perform sparse approximate Gaussian elimination for Laplacian matrices. We present a simple, nearly linear time algorithm that approximates a Laplacian matrix by a sparse LU factorization. We compute this factorization by subsampling standard Gaussian elimination. This gives the simplest known nearly linear time solver for Laplacian equations. The crux of our proof is the use of matrix martingales to analyze the algorithm. Finally, we will take a look at ongoing efforts to implement robust and fast Laplacian solvers based on these ideas.
Based on joint work with Sushant Sachdeva and Daniel Spielman.
Jan Hazla (MIT)
Reasoning in Bayesian Opinion Exchange Networks is Computationally Hard
Bayesian models of opinion exchange are extensively studied in economics, dating back to the work of Aumann on the agreement theorem. An important class of such models features agents arranged on a network (representing, e.g., social interactions), with the network structure determining which agents communicate with each other. It is often argued that the Bayesian computations needed by agents in such models are difficult, but prior to our work there were no rigorous arguments for such hardness.
We consider a well-studied model where fully rational agents receive private signals indicative of an unknown state of the world. Then, they repeatedly announce the state of the world they consider most likely to their neighbors, at the same time updating their beliefs based on their neighbors' announcements.
I will discuss our complexity-theoretic results establishing hardness of agents' computations in this model. Specifically, we show that these computations are NP-hard and extend this result to PSPACE-hardness. We show hardness not only for exact computations, but also that it is computationally difficult even to approximate the rational opinion in any meaningful way.
Joint work with Ali Jadbabaie, Elchanan Mossel and Amin Rahimian.
Gautam Kamath
Privately Learning High-Dimensional Distributions
We present novel, computationally efficient, and differentially private algorithms for two fundamental high-dimensional learning problems: learning a multivariate Gaussian in R^d and learning a product distribution in {0,1}^d in total variation distance. The sample complexity of our algorithms nearly matches the sample complexity of the optimal non-private learners for these tasks in a wide range of parameters. Thus, our results show that privacy comes essentially for free for these problems, providing a counterpoint to the many negative results showing that privacy is often costly in high dimensions. Our algorithms introduce a novel technical approach to reducing the sensitivity of the estimation procedure that we call recursive private preconditioning, which may find additional applications.
Based on joint work with Jerry Li, Vikrant Singhal, and Jonathan Ullman.
CMI Special Lunch Seminar
★ 12 noon ★
★ Room 213 ★
Brent Waters (UT Austin)
Cryptographic Code Obfuscation
A cryptographic code obfuscator takes as input the description of a program P and outputs a program P' that is functionally equivalent to the original, but should hide the inner workings of the original program to the maximal extent possible. Until very recently no general purpose obfuscators existed that were based on cryptographically hard problems; however, in 2013 researchers proposed the first candidate obfuscator for "indistinguishability obfuscation". Since then there has been a tremendous interest in the subject from the cryptography community.
In this talk I will first introduce the concept and define indistinguishability obfuscation. Then I will show techniques for building cryptographic applications from it. Finally, I will conclude with discussing the challenging open problems in the area.
Miles Lopes (UC Davis)
Two New Bootstrap Methods for High-Dimensional and Large-Scale Data
Bootstrap methods are among the most broadly applicable tools for statistical inference and uncertainty quantification. Although these methods have an extensive literature, much remains to be understood about their applicability in modern settings, where observations are high-dimensional, or where the quantity of data outstrips computational resources. In this talk, I will present a couple of new bootstrap methods that are tailored to these settings. First, I will discuss the topic of "spectral statistics" arising from high-dimensional sample covariance matrices, and describe a method for approximating the distributions of such statistics. Second, in the context of large-scale data, I will discuss a more unconventional application of the bootstrap -- dealing with the tradeoff between accuracy and computational cost for randomized numerical linear algebra. This will include joint work from a paper with Alexander Aue and Andrew Blandino; https://arxiv.org/abs/1709.08251 (to appear at Biometrika), and a paper with Michael Mahoney, and Shusen Wang; https://arxiv.org/abs/1708.01945 (to appear at JMLR).
★ NOTE: will be held in
Hall Auditorium (135 Gates Thomas) ★
Gitta Kutyniok (TU Berlin)
Approximation Theory meets Deep Learning
Despite the outstanding success of deep neural networks in real-world applications, most of the related research is empirically driven and a mathematical foundation is almost completely missing. One central task of a neural network is to approximate a function, which for instance encodes a classification task. In this talk, we will be concerned with the question, how well a function can be approximated by a neural network with sparse connectivity. Using methods from approximation theory and applied harmonic analysis, we will derive a fundamental lower bound on the sparsity of a neural network. By explicitly constructing neural networks based on certain representation systems, so-called $\alpha$-shearlets, we will then demonstrate that this lower bound can in fact be attained. Finally, we present numerical experiments, which surprisingly show that already the standard backpropagation algorithm generates deep neural networks obeying those optimal approximation rates.
Nisheeth Vishnoi (Yale)
Towards Controlling Bias in AI Systems
Powerful AI systems, which are driven by machine learning tools, are increasingly controlling various aspects of modern society: from social interactions (e.g., Facebook, Twitter, Google, YouTube), economics (e.g., Uber, Airbnb, Banking), learning (e.g., Wikipedia, MOOCs), to governance (Judgements, Policing, Voting). These systems have a tremendous potential to change our lives for the better, but, via the ability to mimic and nudge human behavior, they also have the potential to be discriminatory, reinforce societal prejudices, and polarize opinions. Indeed, recent studies have demonstrated that these systems can be quite brittle and generally lack the required qualities to be deployed in various human-centric/societal contexts. The reason is that considerations such as fairness, explainability, accountability, etc. have largely been an afterthought in the development of such AI systems.
In this talk, I will outline our efforts towards incorporating some of the above-mentioned issues in a principled manner for core machine learning tasks such as classification, data summarization, ranking, personalization, and online advertisement. Our work leads to new algorithms that have the ability to control and alleviate bias in their outputs, come with provable guarantees, and often have a low "price of fairness".
Based on several joint works with Elisa Celis.
Alistair Sinclair (UC Berkeley)
The Lee-Yang Theorem and the Ising Model
The celebrated Lee-Yang Theorem of the 1950s says that the zeros of the partition function of the ferromagnetic Ising model (viewed as a polynomial in the field parameter) lie on the unit circle in the complex plane. In this talk I will discuss a recent revival of interest in this result, inspired by computational questions. I will discuss three developments. First, I will explain how a generalization of the Lee-Yang theorem to rule out repeated zeros leads to hardness results for computing averages of observables associated with the Ising model. Then I will show how to combine the theorem with recent technology of Barvinok and others to obtain the first polynomial time deterministic approximation algorithms for the partition function. And finally I will discuss the relationship of the theorem to decay of correlations. The talk will be self-contained.
This is joint work with Jingcheng Liu and Piyush Srivastava.
Adam Smith (Boston University)
The Structure of Optimal Private Tests for Simple Hypotheses
Hypothesis testing plays a central role in statistical inference, and is used in many settings where privacy concerns are paramount. This work answers a basic question about testing simple hypotheses subject to a stability, or "privacy", constraint: given two distributions P and Q, and a privacy level e, how many i.i.d. samples are needed to distinguish P from Q such that changing one of the samples changes the test's acceptance probability by at most e. What sort of tests have optimal sample complexity? We characterize this sample complexity up to constant factors in terms of the structure of P and Q and the privacy level e, and show that this sample complexity is achieved by a certain randomized and clamped variant of the log-likelihood ratio test. Our result is an analogue of the classical Neyman-Pearson lemma in the setting of private hypothesis testing. Our analysis also sheds light on classical hypothesis testing, giving a new interpretation of the Hellinger distance between distributions.
Joint work with Clément Canonne, Gautam Kamath, Audra McMillan, and Jonathan Ullman. To appear at STOC 2019. Available as https://arxiv.org/abs/1811.11148
Vijay Vazirani (UCI)
Matching is as Easy as the Decision Problem, in the NC Model
Is matching in NC, i.e., is there a deterministic fast parallel algorithm for it? This has been an outstanding open question in TCS for over three decades, ever since the discovery of Random NC matching algorithms. Over the last five years, the TCS community has launched a relentless attack on this question, leading to the discovery of numerous powerful ideas. We give what appears to be the culmination of this line of work: An NC algorithm for finding a minimum weight perfect matching in a general graph with polynomially bounded edge weights, provided it is given an oracle for the decision problem. Consequently, for settling the main open problem, it suffices to obtain an NC algorithm for the decision problem.
We believe this new fact has qualitatively changed the nature of this open problem.
Our result builds on the work of Anari and Vazirani (2018), which used planarity of the input graph critically; in fact, in three different ways. Our main challenge was to adapt these steps to general graphs by appropriately trading planarity with the use of the decision oracle. The latter was made possible by the use of several of the ideas discovered over the last five years.
The difficulty of obtaining an NC perfect matching algorithm led researchers to study matching vis-a-vis clever relaxations of the class NC. In this vein, Goldwasser and Grossman (2015) gave a pseudo-deterministic RNC algorithm for finding a perfect matching in a bipartite graph, i.e., an RNC algorithm with the additional requirement that on the same graph, it should return the same (i.e., unique) perfect matching for almost all choices of random bits. A corollary of our reduction is an analogous algorithm for general graphs.
This talk is fully self-contained.
Based on the following joint paper with Nima Anari: https://arxiv.org/pdf/1901.10387.pdf
CMI Faculty Lunches
Venkat Chandrasekaran
Newton Polytopes and Relative Entropy Optimization
We discuss how relative entropy is uniquely suited to optimization problems involving sparse polynomials. Our results connect to the work of Descartes (the rule of signs) and Khovanskii (the theory of fewnomials). The Newton polytope associated to a polynomial plays a central role in our development. This is joint work with Riley Murray and Adam Wierman.
Equitable Voting Rules
Joint with Laurent Bartholdi, Wade Hann-Caruthers, Maya Josyula and Leeat Yariv.
Given n voters and two candidates A and B, a voting rule is a map from {A,B,abstain}^n to {A,B,tie}. An important example is majority, which has a desirable symmetry property: permuting the voters leaves the result unchanged. We study a weakening of this symmetry assumption, which allows for a far richer set of rules that still treat voters equally. We show that these rules can have very small---but not too small---winning coalitions (i.e., a set of voters that can control the election outcome).
I will discuss the relation to various known results and open problems in the theory of finite groups. No previous knowledge required.
Peter Schröder
Shape from Metric
Joint work with Albert Chern, Felix Knöppel and Ulrich Pinkall (all of TU Berlin Math)
We study the isometric immersion problem for orientable surface triangle meshes endowed with only a metric: given the combinatorics of the mesh together with edge lengths, approximate an isometric immersion into R^3.
To address this challenge we develop a discrete theory for surface immersions into R^3. It precisely characterizes a discrete immersion, up to subdivision and small perturbations. In particular our discrete theory correctly represents the topology of the space of immersions, i.e., the regular homotopy classes which represent its connected components. Our approach relies on unit quaternions to represent triangle orientations and to encode, in their parallel transport, the topology of the immersion. In unison with this theory we develop a computational apparatus based on a variational principle. Minimizing a non-linear Dirichlet energy optimally finds extrinsic geometry for the given intrinsic geometry and ensures low metric approximation error.
We demonstrate our algorithm with a number of applications from mathematical visualization and art directed isometric shape deformation, which mimics the behavior of thin materials with high membrane stiffness.
Adam Wierman
Transparency and Control in Platforms & Networked Markets
Platforms have emerged as a powerful economic force, driving both traditional markets, like the electricity market, and emerging markets, like the sharing economy. The power of platforms comes from their ability to tame the complexities of networked marketplaces -- marketplaces where there is not a single centralized market, but instead a network of interconnected markets loosely defined by a graph of feasible exchanges. Despite the power and prominence of platforms, the workings of platforms are often guarded secrets. Further, many competing platforms make very different design choices, but little is understood about the impact of these differing choices. In this talk, I will overview recent work from our group that focuses on reverse engineering the design of platforms and understanding the consequences of design choices underlying transparency in modern platforms. I will use electricity markets and ridesharing services as motivating examples throughout.
Babak Hassibi
Deep Learning and the Blessing of Dimensionality
Stochastic descent methods have recently gained tremendous popularity as the workhorse for deep learning. So much so that it is now widely recognized that the success of deep networks is not only due to their special deep architecture, but also due to the behavior of the stochastic descent methods used, which plays a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD)--originally developed for quadratic loss and linear models in the context of H-infinity control in the 1990's--and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. These minimax properties can be used to explain the convergence and implicit-regularization of stochastic descent methods in highly over-parametrized settings, exemplified by training a deep neural network. This observation gives some insight into why deep networks exhibit such powerful generalization abilities. It is also a further example of what is increasingly referred to as the "blessing of dimensionality". We also show how different variations of the algorithms can lead to different generalization performances and note some very counter-intuitive phenomena.
February 2014, 34(2): 803-820. doi: 10.3934/dcds.2014.34.803
Local Well-posedness and Persistence Property for the Generalized Novikov Equation
Yongye Zhao 1, , Yongsheng Li 1, and Wei Yan 2,
Department of Mathematics, South China University of Technology, Guangzhou, Guangdong 510640, China
College of Mathematics and Information Science, Henan Normal University, Xinxiang, Henan 453007, China
Received December 2012 Revised February 2013 Published August 2013
In this paper, we study the generalized Novikov equation which describes the motion of shallow water waves. By using the Littlewood-Paley decomposition and nonhomogeneous Besov spaces, we prove that the Cauchy problem for the generalized Novikov equation is locally well-posed in Besov space $B_{p,r}^{s}$ with $1\leq p, r\leq +\infty$ and $s>{\rm max}\{1+\frac{1}{p},\frac{3}{2}\}$. We also show the persistence property of the strong solutions which implies that the solution decays at infinity in the spatial variable provided that the initial function does.
Keywords: Littlewood-Paley decomposition, Besov spaces, Novikov equation, Cauchy problem, persistence property.
Mathematics Subject Classification: Primary: 35Q53, 35G25; Secondary: 35B30, 35A3.
Citation: Yongye Zhao, Yongsheng Li, Wei Yan. Local Well-posedness and Persistence Property for the Generalized Novikov Equation. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2) : 803-820. doi: 10.3934/dcds.2014.34.803
Reinforcement Learning Basics
- Course ppt by UIUC
- Udacity Classes: TD(1) Rule, TD(1) Examples Part 1, 2, TD(0) Rule, TD(lambda), Summary
MDP Formulation
An agent interacts with its environment via perception and action. In each cycle,
- The agent receives its current state (s),
- The agent chooses an action (a) to be applied to itself,
- The action leads non-deterministically to a new state (s') via interaction with the environment,
- The state transition is typically associated with a scalar reward (r).
The job of RL is to find a policy (π) that maps states (s) to actions (a) and maximizes a certain long-term reward. From these characterizations of the agent, it is clear that the MDP formulation is a suitable mathematical model (a minimal data-structure sketch of this tuple is given right after the list below):
- \(S\): finite set of states
- \(s_0\): initial state \( s_0 \in S \)
- \(A\): finite set of actions
- \(P\): transition function \( P(s′|s, a) \) giving the probability of reaching \(s′\) by executing \(a\) in state \(s\)
- \(r\): instantaneous reward function \(r(s' | s, a) \), or \(r(s)\) if reward is only state dependent.
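As a concrete illustration of this tuple, the following minimal sketch encodes the MDP components as plain Python data structures. The tiny two-state example, and all names and numbers in it, are hypothetical and only fix the notation used in the sketches that follow.

```python
S = ["s0", "s1"]          # finite set of states
A = ["stay", "move"]      # finite set of actions
s0 = "s0"                 # initial state

# P[(s, a)] maps each successor state s' to the probability P(s'|s, a)
P = {
    ("s0", "stay"): {"s0": 0.9, "s1": 0.1},
    ("s0", "move"): {"s0": 0.2, "s1": 0.8},
    ("s1", "stay"): {"s1": 1.0},
    ("s1", "move"): {"s0": 0.7, "s1": 0.3},
}

# r[(s, a)] is the expected instantaneous reward r(s, a)
r = {
    ("s0", "stay"): 0.0, ("s0", "move"): -0.1,
    ("s1", "stay"): 1.0, ("s1", "move"): 0.0,
}

gamma = 0.95              # discount factor for the infinite-horizon model
```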
Long-term Rewards
Unlike the instantaneous reward (r), which only captures the reward linked to a specific state or state transition, a long-term reward (R) aggregates rewards over a temporal horizon (T). It characterizes how the agent should take the future into account in deciding the action it takes now:
Finite-horizon Rewards:
\(R = E(\sum_{t=0}^{T} r_t) \)
Interpretation 1: effectively receding-horizon control, in which the agent always takes the T-step optimal action. Interpretation 2: on the first step the agent takes the T-step optimal action, on the next step a (T-1)-step optimal action, and so on, until it finally takes the 1-step optimal action. This is effectively a formulation of differential dynamic programming (DDP).
Average Reward:
\(R = \lim_{T \to \infty} E\!\left(\frac{1}{T}\sum_{t=0}^{T-1} r_t\right)\)
A policy that optimizes this criterion is known as a gain-optimal policy. Because the reward earned early in the agent's life is overshadowed by the long-run average performance, a refined criterion is bias optimality: a bias-optimal policy maximizes the long-run average reward and breaks ties in favor of the larger initial extra reward. The average-reward model can be seen as the limiting case of the infinite-horizon discounted model as the discount factor approaches 1.
Delayed Reward:
Sometimes the agent has to take a long sequence of actions, receiving insignificant reward until finally arriving at a state with high reward, as in goal-oriented probabilistic planning. Under this setup, learning which actions are desirable based on reward that can occur arbitrarily far in the future is a challenging task.
Discounted Infinite-horizon Reward:
\(R = E(\sum_{t=0}^{\infty} {\gamma}^{t} \cdot r_t)\)
This reward model behaves much like the finite-horizon one, but it is mathematically more tractable. In the field of optimal control, for example with the discrete LQR policy, computing the finite-horizon solution involves solving:
Finite-horizon Dynamic Riccati Equation: $$A^T P_k A -(A^T P_k B + N) (R+B^T P_k B)^{-1}(B^T P_k A + N^T) + Q = P_{k-1}$$
While a simpler stationary Riccati Equation is solved in computing the infinite-horizon discrete LQR policy policy,
Infinite-horizon Stationary Riccati Equation: $$A^T P A - A^T P B (R + B^T P B)^{-1}B^T P A + Q = P$$
In fact, it can be shown that for the discounted infinite-horizon model, there exists an optimal deterministic stationary policy [1].
Definition of Optimal Policy w/ Known Model
By "known model", we mean one has complete knowledge of the state transition function \( P(s′|s, a) \) and the instantaneous reward function \(r(s' | s, a) \). Then the dynamic programming techniques [1] can be used to lay out the foundation of mathematical optimality representation.
Without loss of generality, we use the discounted infinite-horizon reward model. Given any policy \(\pi\), we can calculate the value function governed by this policy, $$V_{\pi}(s) = E\Big(\sum_{t=0}^{\infty} \gamma^t r_t \;\Big|\; s_0 = s, \pi\Big), $$ and the optimal value function is the best achievable over all policies, \(V^*(s) = \underset{\pi}{\operatorname{max}}\, V_{\pi}(s)\).
This unique optimal value function can equivalently be defined, for each state \(s\), as the maximum over actions of the instantaneous reward plus the expected discounted value of the next state: $$V^*(s) = \underset{a}{\operatorname{max}} [r(s, a) + \gamma \sum_{s' \in S} P(s'|s,a) V^*(s')] $$ The definition of the optimal policy is tightly linked to that of the optimal value function; it simply takes the "argmax" instead of the "max": $$\pi^*(s) = \underset{a}{\operatorname{argmax}} [r(s, a) + \gamma \sum_{s' \in S} P(s'|s,a) V^*(s')] $$ Sometimes we are interested not only in the optimal value function or the optimal policy (which are indexed by a state \(s\) alone), but also in the mapping from a (state, action) pair to an optimal value, known as the Q-value. The Q-value is simply the expression being maximized in the definitions above: $$Q^*(s, a) = r(s, a) + \gamma \sum_{s' \in S} P(s'|s,a) V^*(s') $$
Algorithms w/ Known Model
Policy Iteration Algorithms (Model Known):
The policy iteration algorithm starts with a policy \(\pi\). Instead of interleaving the update of policy and value at each state (as value iteration does), it first solves for the value function \(V_{\pi}\) of the policy over the entire state space \(S\), then evaluates each state \(s\) to see whether an improvement can be made by choosing a different action. If so, it updates to a new policy \(\pi'\):
Let \(\pi = \pi'\)
Compute the value function of policy \(\pi\) by solving the equation $$V_{\pi}(s) = r(s, \pi(s)) + \gamma \sum_{s' \in S} [P(s'|s, \pi(s)) V_{\pi}(s')]$$
Improve the policy at each state: $$\pi'(s) = \underset{a}{\operatorname{argmax}}\{ r(s, a) + \gamma \sum_{s' \in S} [P(s' | s, a) V_{\pi}(s')] \}$$ There are at most \(|A|^{|S|}\) distinct policies, and the sequence of policies is guaranteed to improve (or terminate) at each step. Therefore, in the worst case the number of policy iteration steps is exponential in the number of states. (A minimal numerical sketch of the procedure is given below.)
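The following is a minimal sketch of tabular policy iteration, assuming NumPy and the dictionary-based MDP encoding sketched earlier (S, A, P, r, gamma); it is illustrative rather than an authoritative implementation.

```python
import numpy as np

def policy_iteration(S, A, P, r, gamma):
    """Exact tabular policy iteration: evaluate V_pi, then improve greedily."""
    idx = {s: i for i, s in enumerate(S)}
    pi = {s: A[0] for s in S}                      # arbitrary initial policy
    while True:
        # Policy evaluation: solve the linear system (I - gamma * P_pi) V = r_pi
        P_pi = np.zeros((len(S), len(S)))
        r_pi = np.zeros(len(S))
        for s in S:
            r_pi[idx[s]] = r[(s, pi[s])]
            for s2, p in P[(s, pi[s])].items():
                P_pi[idx[s], idx[s2]] = p
        V = np.linalg.solve(np.eye(len(S)) - gamma * P_pi, r_pi)

        # Policy improvement: greedy action with respect to V_pi at every state
        new_pi = {}
        for s in S:
            q = {a: r[(s, a)] + gamma * sum(p * V[idx[s2]]
                                            for s2, p in P[(s, a)].items())
                 for a in A}
            new_pi[s] = max(q, key=q.get)
        if new_pi == pi:                           # no change: policy is optimal
            return pi, V
        pi = new_pi
```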
Value Iteration Algorithms (Model Known):
Greedy Full-Backup
Given a finite-element state set \(S\) and finite action set \(A\), update the value at the (\(k+1\))-th iteration:
- Loop for \(s \in S\)
- Loop for \(a \in A\)
- Calculate \(Q^{k}(s, a) = r(s, a) + \gamma \sum_{s' \in S} P(s'|s,a) V^{k}(s')\) for each \(a\)
- Update value by taking the maximum over all \(a\): \(V^{k+1}(s) = \underset{a}{\operatorname{max}} Q^{k}(s, a)\)
The value iteration algorithm guarantees convergence to the correct \(V^*\) value. There is also a Bellman residual bound, which states that: if the maximum difference between two successive value functions is less than \(\epsilon\), then the value of the greedy policy differs from the value function of the optimal policy by no more than \(\frac{2\epsilon \gamma}{(1-\gamma)}\)at any state.
This type of value iteration is also called "greedy full backup algorithm" since:
(1) greedy: the policy is obtained by choosing, in every state, the action that maximizes the estimated discounted reward, using the current estimate of the value function;
(2) full backup: each update makes use of information from all possible successor states.
(A compact implementation sketch of this greedy full-backup iteration is given below.)
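Below is a minimal pure-Python sketch of this greedy full-backup iteration, again using the dictionary-based MDP encoding from the earlier sketch; the stopping rule is based on the Bellman residual just discussed.

```python
def value_iteration(S, A, P, r, gamma, eps=1e-6):
    """Greedy full-backup value iteration with a Bellman-residual stopping rule."""
    V = {s: 0.0 for s in S}
    while True:
        V_new = {}
        for s in S:
            # Q^k(s, a) = r(s, a) + gamma * sum_{s'} P(s'|s, a) * V^k(s')
            q = {a: r[(s, a)] + gamma * sum(p * V[s2]
                                            for s2, p in P[(s, a)].items())
                 for a in A}
            V_new[s] = max(q.values())          # full backup: maximize over actions
        residual = max(abs(V_new[s] - V[s]) for s in S)
        V = V_new
        if residual < eps:
            break
    # Greedy policy extracted from the converged value estimates
    pi = {s: max(A, key=lambda a: r[(s, a)] + gamma *
                 sum(p * V[s2] for s2, p in P[(s, a)].items()))
          for s in S}
    return V, pi
```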
Q-Learning
Other than the "full-backup" value iteration algorithm, there exists another class called "sample-backup" algorithm, a.k.a., Q-learning, which is critical to the operation of the model-free methods. Rather than using the successor states, Q-learning tries to evaluate the greedy policy while following a more exploratory scheme, and update Q-value as follows: $$Q^{k+1}(s, a) = Q^{k}(s, a) + \alpha \{ r + \gamma \underset{a'}{\operatorname{max}} [Q(s', a') - Q(s, a)] \}$$ with the following conditions met:
(1) \(a\) and \(s\) is updated infinitely often.
(2) \(s'\) is sampled from the distribution \(P(s'|s,a)\) when model is known, or, more often, execute & observe in the environment in model-free approaches.
(3) \(r\) is sampled with mean \(r(s,a)\) and bounded variance.
(4) The learning rate \(\alpha\) decreases slowly.
"Q-learning is exploration insensitive: that is, that the Q values will converge to the optimal values, independent of how the agent behaves while the data is being collected (as long as all state-action pairs are tried often enough).
This means that, although the exploration-exploitation issue must be addressed in Q-learning, the details of the exploration strategy will not affect the convergence of the learning algorithm.
For these reasons, Q-learning is the most popular and seems to be the most effective model-free algorithm for learning from delayed reinforcement. It does not, however, address any of the issues involved in generalizing over large state and/or action spaces. In addition, it may converge quite slowly to a good policy." [2]
On-Policy vs. Off-Policy Value Iteration Algorithms
Q-learning Value Update: $$Q^{k+1}(s, a) = Q^{k}(s, a) + \alpha [ r + \gamma \underset{a'}{\operatorname{max}} Q(s', a') - Q(s, a) ]$$
Q-learning is an off-policy learner:
- It learns the value of the optimal policy independently of the agent's actions.
- It estimates the return (total discounted future reward) for state-action pairs assuming a greedy policy is followed, despite the fact that the agent is not actually following a greedy policy.
State-action-reward-state-action (SARSA) Algorithm (a short sketch contrasting the two update rules follows the list below): $$Q^{k+1}(s, a) = Q^{k}(s, a) + \alpha [ r + \gamma Q(s', a') - Q(s, a) ]$$
SARSA is an on-policy learner:
- It learns the value of the policy being carried out by the agent including the exploration steps.
- It estimates the return for state-action pairs assuming the current policy continues to be followed.
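To make the off-policy/on-policy distinction concrete, here is a minimal sketch of the two tabular one-step updates, with Q stored as a dictionary initialized to zero for every state-action pair; it is purely illustrative.

```python
import random

def q_learning_step(Q, s, a, r_t, s_next, alpha, gamma, actions):
    """Off-policy update: backs up the greedy value max_{a'} Q(s', a')."""
    target = r_t + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def sarsa_step(Q, s, a, r_t, s_next, a_next, alpha, gamma):
    """On-policy update: backs up the value of the action actually taken next."""
    target = r_t + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def epsilon_greedy(Q, s, actions, eps=0.1):
    """Exploration scheme that both learners can follow while collecting data."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```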
Algorithms w/ Unknown Model
Model-free Algorithms:
Reinforcement learning is primarily concerned with how to obtain the optimal policy when the model is not known in advance. Model-free approaches learn a controller without learning a model: the agent interacts with its environment directly to obtain information which, by means of an appropriate algorithm, can be processed to produce an optimal policy.
Temporal credit assignment: "how do we know whether the action just taken is a good one, when it might have far-reaching effects? One strategy is to wait until the end and reward the actions taken if the result was good and punish them if the result was bad. In ongoing tasks, it is difficult to know what the end is, and this might require a great deal of memory. Instead, temporal difference & (model-free version) Q-learning algorithms use insights from value iteration to adjust the estimated value of a state based on the immediate reward and the estimated value of the next state." [2]
TD(1)
Episode \(T\)
For all states \(s\): eligibility \(e(s) = 0\); at the start of the episode, \(V_T(s) = V_{T-1}(s)\)
In this episode, starting from an initial state, perform a forward simulation to obtain a sequence of state transitions; at each time \(t\), \(s_{t-1} \to s_{t} \) with reward \(r_t\).
$$e(s_{t-1}) = e(s_{t-1}) + 1$$
For all states \(s\), $$V_T(s) = V_T(s) + \alpha_{T} [r_t + \gamma V_{T-1}(s_t) - V_{T-1}(s_{t-1})]e(s)$$ $$e(s) = \gamma e(s)$$
TD(0)
For all states \(s\), at the start of the episode, \(V_T(s) = V_{T-1}(s)\)
For the single state \(s = s_{t-1}\): $$V_T(s) = V_T(s) + \alpha_T [r_t + \gamma V_{T}(s_t) - V_{T}(s_{t-1})]$$
TD(Lambda)
For all states \(s\), $$V_T(s) = V_T(s) + \alpha_{T} [r_t + \gamma V_{T-1}(s_t) - V_{T-1}(s_{t-1})]e(s)$$ $$e(s) = \lambda \gamma e(s)$$
Everything is identical to TD(1), except for the additional factor lambda in the last eligibility-decay step. A compact sketch of the episodic TD(\(\lambda\)) update is given below.
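The sketch below implements the episodic TD(\(\lambda\)) rule as written above, with accumulating eligibility traces; setting \(\lambda = 1\) recovers the TD(1) procedure. The function signature and the data layout of the episode are assumptions made for illustration.

```python
from collections import defaultdict

def td_lambda_episode(V_prev, alpha, gamma, lam, episode):
    """One episode of TD(lambda) with accumulating eligibility traces.

    `episode` is the observed list of transitions (s_prev, r_t, s_t) from a
    forward simulation; V_prev holds the value estimates of the previous episode.
    """
    V = dict(V_prev)                 # at episode start, V_T(s) = V_{T-1}(s)
    e = defaultdict(float)           # eligibilities start at e(s) = 0
    for s_prev, r_t, s_t in episode:
        e[s_prev] += 1.0             # e(s_{t-1}) <- e(s_{t-1}) + 1
        delta = r_t + gamma * V_prev[s_t] - V_prev[s_prev]
        for s in V:                  # every state is adjusted in proportion to e(s)
            V[s] += alpha * delta * e[s]
            e[s] *= lam * gamma      # decay traces; lam = 1 recovers TD(1)
    return V
```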
Model-based Algorithms:
Model-based methods aim to remedy the main disadvantages of model-free approaches: inefficiency in gathering and using data, and slow convergence. However, this class of methods is arguably less important here, since many of the problems we face have very complex models. It may be practically easier to use model-free approaches supported by deep learning algorithms, such as actor-critic.
[1] Richard Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.
[2] L. P. Kaelbling, M. L. Littman and A. W. Moore. Reinforcement Learning: A Survey. Journal of Artificial Intelligence Research, 4 (1996), 237-285.
Finding the last two digits of an expression
If we need to find the last two digits of an expression, we only need to consider the last two digits of the base. We treat four cases separately, according to the units digit of the base.
Case 1: Numbers whose units digit is $1$.
These numbers are in the format of $...abc{1^{...xyz}}$
Unit digit of this expression is always 1 as the base ends with 1.
For the tens-place digit, we multiply the tens digit of the base by the units digit of the power and take the units digit of that product: $ a\fbox{b}c^{xy\fbox{z}}$
Find the last two digits of ${2341^{369}}$
${2341^{369}}$ = $23\fbox{4}1^{36\fbox{9}}$ = $61$
($\because $ units digit is 1, and $9\times4 = 36; \,$ take 6 as tens place)
Case 2: Numbers whose units digits is $5$.
The last two digits are always 25 or 75.
Let the given number is $..ab{5^{xyz}}$ = $a\fbox{b}5^{xy\fbox{z}}$
If the product of the units digit of the power (i.e., z) and the digit to the left of the 5 in the base (i.e., b) is even, then the last two digits of the expression are 25; if the product is odd, then they are 75.
Last two digits of ${2345^{369}}$
= ${23\fbox45^{36\fbox9}}$
the product $4 \times 9 = 36$ which is even.
So last two digits are 25.
Similarly, when the corresponding product is odd, for example $7 \times 1 = 7$, the last two digits are 75.
Case 3: Numbers whose units digits are 3, 7, 9.
We first convert a units digit of 3, 7 or 9 into a units digit of 1 by a small modification. From the units-digit table, 3, 7 and 9 give a units digit of 1 when raised to the powers 4, 4 and 2 respectively.
${2343^{4747}}$ = ${43^{4747}}$
($\because $ as we are concerned with only last two digits only)
\begin{array}{rll}
{43^{4747}} &= {\left( {43^4} \right)^{1186}} \times {43^3}\\[4pt]
&= {\left( {43^2 \times 43^2} \right)^{1186}} \times {43^2}\times 43\\[4pt]
&= {\left( {49}\times{49} \right)^{1186}} \times {49} \times 43 & (\because 43^2=1849)\\[4pt]
&= 01^{1186} \times 07 & (\because 49^2 = 2401;\;\; 49 \times 43= 2107)\\[4pt]
&= 01 \times 07\\[4pt]
&= 07
\end{array}
Case 4: Numbers whose units digits is 2, 4, 6.
First, we should memorize these two rules: ${2^{10}}$ raised to an odd power always gives last two digits 24, while raised to an even power it always gives last two digits 76:
${\left( {{2^{10}}} \right)^{odd}} = 24$
${\left( {{2^{10}}} \right)^{even}} = 76$
Find the last two digits of ${48^{199}}$
\begin{array}{rll}
{48^{199}} &= {\left( {2^4 \times 3} \right)^{199}} \\[4pt]
&= {\left( {2^4} \right)^{199}} \times {3^{199}}\\[4pt]
&= {2^{796}} \times {3^{199}} \\[4pt]
&= {\left( {2^{10}} \right)^{79}} \times {2^6} \times {\left( {3^4} \right)^{49}} \times {3^3}\\[4pt]
&= {\left( {24} \right)^{79}} \times 64 \times {\left( {81} \right)^{49}} \times 27\\[4pt]
&= 24 \times 64 \times 21 \times 27 & (\because 24^{odd} = 24; \;\; 8 \times 9 = 72 \Rightarrow 81^{49} \to 21) \\[4pt]
&= 36 \times 67 \\[4pt]
&= 12
\end{array}
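As a quick sanity check, the last two digits of $n^k$ are simply $n^k \bmod 100$, which Python's built-in pow computes directly with modular exponentiation; the snippet below (added for illustration) confirms the worked examples above.

```python
print(pow(2341, 369, 100))   # 61  (Case 1)
print(pow(2345, 369, 100))   # 25  (Case 2)
print(pow(2343, 4747, 100))  # 7   (Case 3: i.e., last two digits 07)
print(pow(48, 199, 100))     # 12  (Case 4)
```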
A preliminary study on improving the recognition of esophageal speech using a hybrid system based on statistical voice conversion
Othman Lachhab1,
Joseph Di Martino2,
Elhassane Ibn Elhaj3 &
Ahmed Hammouch1
In this paper, we propose a hybrid system based on a modified statistical GMM voice conversion algorithm for improving the recognition of esophageal speech. This hybrid system aims to compensate for the distorted information present in the esophageal acoustic features by using a voice conversion method. The esophageal speech is converted into a "target" laryngeal speech using an iterative statistical estimation of a transformation function. We did not apply a speech synthesizer for reconstructing the converted speech signal, given that the converted Mel cepstral vectors are used directly as input to our speech recognition system. Furthermore, the feature vectors are linearly transformed by the HLDA (heteroscedastic linear discriminant analysis) method to reduce their dimension, projecting them into a smaller space with good discriminative properties. The experimental results demonstrate that our proposed system improves the phone recognition accuracy by an absolute 3.40 % when compared with the phone recognition accuracy obtained with neither HLDA nor voice conversion.
A total laryngectomy is a surgical procedure which consists in a complete removal of the larynx, for the treatment of a cancer for example. Thus, the patient loses the vocal cords that allowed him/her to produce a laryngeal voice. After surgery, some patients may give up any attempt at oral communication because of the physical and mental upheaval caused by the surgical act. Indeed, the anatomical changes temporarily deprive the patient of his/her voice. Only the whispered voice allows communication in postoperative life. An alternative speaking rehabilitation method allows him/her to get a new voice called esophageal speech (ES), generated without vocal folds. The air from the lungs, original source of all human speech, no longer passes through the cavities of the phonatory apparatus. It is released directly from the stomach through the esophagus. The features of esophageal speech, such as the envelope of the waveform and the spectral components, differ from the features extracted from natural speech. Furthermore, esophageal speech is characterized by specific noises and low intelligibility; the fundamental frequency of this voice is less stable than that of the laryngeal voice. All these aspects result in a hoarse, creaky and unnatural voice, difficult to understand.
Currently, researchers mostly concentrate on the recognition and evaluation of alaryngeal speech, in fields such as laryngology and biomedical applications of speech technology (Pravena et al. 2012; Dibazar et al. 2006). The evaluation of esophageal speech by perceptual judgments is one of the most widely used methods in clinical practice. It consists in following the postoperative vocal evolution and the efficiency of reeducation. The major drawbacks of this approach are its lack of reliability, as well as the difficulty of establishing a jury of expert listeners. Given the limitations of this perceptual analysis, a more objective assessment protocol becomes a necessity. Nowadays, instrumental analysis (Wuyts et al. 2000; Yu et al. 2001) aims to provide a solution based on acoustic and aerodynamic measurements of speech sounds. Recently, in (Lachhab et al. 2014), we proposed a new objective technique to assess esophageal speech. The originality of this approach lies in the use of an automatic speech recognition system to extract the phonetic information of pathological voice signals.
In this paper, we propose a new hybrid system based on statistical voice conversion for improving the recognition of esophageal speech. This enhancing system combines a voice conversion algorithm that transforms esophageal speech into a "target" laryngeal speech, with an automatic speech recognition system based on HMMFootnote 1/GMMFootnote 2 models. This approach aims to correct and extract the lexical information contained in esophageal speech. Our hybrid system does not apply a speech synthesizer to reconstruct the converted speech signal, because the automatic speech recognition system only requires converted Mel cepstral features as input data. The discriminant information of the converted acoustic vectors is increased by the HLDA (heteroscedastic linear discriminant analysis) transformation in order to improve system performance.
This paper is organized as follows: "Previous and current research on enhancing pathological speech" details previous and current works on enhancing pathological voice. The corpora used for voice conversion and the HLDA transformation method are described in "The FPSD corpus" and "The HLDA transformation" respectively. In "The hybrid system for enhancing esophageal speech", the proposed hybrid system for improving the recognition of esophageal speech is discussed. In "Experiments and results", we present the experiments and the obtained results. Finally, a conclusion and a list of possible future works are provided in "Conclusion and future works".
Previous and current research on enhancing pathological speech
The esophageal speech is characterized by high noise perturbation, low intelligibility and an unstable fundamental frequency. All these characteristics, when compared with those of laryngeal speech, produce a hoarse, creaky and unnatural voice, difficult to understand. For this reason, several approaches have been proposed to improve the quality and intelligibility of alaryngeal speech. One such method, described in (Qi et al. 1995), consists in resynthesizing tracheoesophageal (TE) speech using a simulated glottal waveform and a smoothed F0. A similar approach (del Pozo and Young 2006) uses a synthetic glottal waveform and a jitter and shimmer reduction model to reduce the breathiness and harshness of original TE speech. Other authors have proposed signal processing based speech prostheses, such as Mixed-Excitation Linear Prediction (MELP) (Türkmen and Karsligil 2008), which consists in synthesizing normal speech from whispered voice by using pitch estimation and formant structure modification on voiced phonemes; the unvoiced phonemes in this study remain unmodified. However, this technique is unsuited to real-time operation. Another example has been reported by (Sharifzadeh et al. 2010), with Code-Excited Linear Prediction (CELP) used to produce more natural characteristics by reconstructing the missing pitch elements from whispered speech. However, it is still difficult to mechanically generate realistic excitation signals similar to those naturally generated by vocal fold vibrations. Other attempts at enhancing pathological speech based on modifications of its acoustic features have been proposed, such as formant synthesis (Matui et al. 1999), background noise reduction based on auditory masking (Liu et al. 2006), approximation of the vocal tract using LPC (Garcia et al. 2002, 2005) and comb filtering (Hisada and Sawada 2002), and denoising of electrolarynx (EL) speech by a combined spectral subtraction and root cepstral subtraction procedure (Cole et al. 1997). This subtractive-type method is limited and lacks accuracy in the estimation of the background noise. In (Mantilla-Caeiros et al. 2010), the proposed esophageal speech enhancement system aims to replace voiced segments of alaryngeal speech, selected by pattern recognition techniques, with corresponding segments of normal speech; the silence and unvoiced segments remain unchanged. Another work, reported in (del Pozo and Young 2008), consists in repairing TE phone durations with those predicted by regression trees built from normal data.
Recently, a statistical approach for enhancing alaryngeal speech based on voice conversion has been proposed in (Doi et al. 2014). This technique consists in converting the alaryngeal speech so that it is perceived as pronounced by a target speaker with a laryngeal voice. In (Tanaka et al. 2014), a new hybrid method for alaryngeal speech enhancement, based on noise reduction by spectral subtraction (Boll 1979) and using statistical voice conversion for predicting the excitation parameters, was developed. These two recent approaches aim to improve the estimation of the acoustic features in order to reconstruct an enhanced signal with better intelligibility. However, the conversion process used in these methods is quite complex and can generate errors in parameter estimation, and thus produce unnatural synthesized sounds due to the lack of realistic excitation signals related to the converted spectral parameters. Consequently, in practice it is difficult for them to compensate for the differences existing between the alaryngeal acoustic parameters and those of laryngeal speech.
To overcome this drawback, we propose a new hybrid system for improving the recognition of esophageal speech based on a simple voice conversion algorithm. In this conversion process, an iterative statistical estimation of a transformation function is used. This estimation method is computationally inexpensive when compared to the classical EM (Werghi et al. 2010). On the other hand, we do not use a synthesizer for reconstructing the converted speech signal, because our hybrid system integrates a speech recognition system in order to extract the phonetic information directly from converted MFCC*Footnote 3 vectors.
The FPSD corpus
We chose to develop our esophageal speech recognition system with our own database. This French database, entitled FPSD (French Pathological Speech Database), was created to simplify the training of phonetic models for esophageal speech recognition systems. This corpus contains 480 audio files saved in wav format, accompanied by their orthographic transcription files. The sentences are pronounced by a single laryngectomee speaker. We organized the data into five categories:
C1. Sentences with one-syllable words.
C2. Sentences with words of one and two syllables.
C3. Sentences with words of three syllables.
C4. Sentences with falling intonation.
C5. Sentences with rising intonation.
Table 1 SAMPA transcription of the standard French phones
It is necessary to have a fairly large training corpus in order to handle the intra-speaker variability: the larger the training data, the better the obtained performance. We divided our corpus into two subsets: one for training and the other for testing. The training subset contains 425 sentences and the test subset contains 55 sentences. The structure of our FPSD corpus is similar to that of the TIMIT corpus (Garofolo et al. 1993). For each sentence we have the French text stored in a file (.txt), the audio signal recorded in the (.wav) format and sampled at 16 kHz with 16 bits per sample on a single input channel, a file (.wrd) containing the word transcription and a file (.phn) containing the manual segmentation into phonemes. For this manual segmentation we used the PraatFootnote 4 software, which allows transcription, annotation and analysis of the acoustic data. This software also allows viewing spectrograms and computing prosodic parameters such as intensity and fundamental frequency, as well as other parameters such as energy and formants. Indeed, although it is difficult to assess the quality of a phonetic segmentation, there is a broad consensus on the fact that manual segmentation is more accurate than automatic segmentation. The phonetic labeling of the sentences was carried out with SAMPAFootnote 5 (Speech Assessment Methods Phonetic Alphabet) characters. This labeling method offers the advantage of using only simple ASCII characters: with SAMPA, at most two characters are needed to represent a phoneme. There exists another standard phonetic transcription method called the International Phonetic Alphabet (IPA). Unfortunately, in the IPA each phoneme is represented by a symbol that may not be available on a computer keyboard. Table 1 shows the list of the 36 French phonetic labels used in our FPSD database, with the IPA correspondence and examples.
The HLDA transformation
The goal of the HLDA method (Kumar and Andreou 1998) is to transform the original data into a reduced-dimension space while preserving the discriminant information and de-correlating the different classes (phonemes). The n-dimensional feature vectors are projected into a space of dimension \(p \le n\). Mathematically, this reduction can be expressed by applying the following linear transformation:
$$\begin{aligned} Y=\Theta X=\left[ \begin{array}{c} \Theta _{p} X_{n} \\ \Theta _{n-p} X_{n} \end{array} \right] = \left[ \begin{array}{c} Y_{p} \\ Y_{n-p} \end{array} \right] \end{aligned}$$
where \(\Theta _{p}\) represents the first p rows of the transformation matrix and \(\Theta _{n-p}\) represents the remaining \(n-p\) rows. To obtain the transformed vectors \(Y_{p}\), we multiply the transformation matrix \(\Theta _{p}\) of dimension \((p\times n)\) by the input vector \(X_{n}\). Heteroscedastic LDA (HLDA) is an extension of LDA (Haeb-Umbach and Ney 1998). LDA assumes that the mean, and not the variance, is the discriminating factor, because the class distributions are modeled as Gaussians with different means and a common covariance (homoscedasticity). Because of this assumption, LDA may provide unsatisfactory performance when the class distributions are heteroscedastic (unequal variances or covariances). HLDA has been proposed to overcome this limitation by handling heteroscedasticity. Each class is modeled as a normal distribution of the \(x_{i}\) training vectors.
$$\begin{aligned} p(x_{i})=\frac{|\Theta |}{\sqrt{(2\pi )^{n}|\Sigma _{c(i)}|}} \exp \left(-\frac{1}{2}(\Theta x_{i}-\mu _{c(i)})^{T}\Sigma _{c(i)}^{-1} (\Theta x_{i}-\mu _{c(i)})\right) \end{aligned}$$
where \(\mu _{c(i)}\) and \(\Sigma _{c(i)}\) represent the mean vector and covariance matrix of class c(i) respectively. The objective is to find the transformation \(\Theta\) that maximizes the log-likelihood of the data:
$$\begin{aligned} \tilde{\Theta }=\arg \max _\Theta \sum _{\forall i}\log p(x_i) \end{aligned}$$
The efficient iterative algorithm based on generalized Expectation Maximization (EM) proposed in (Gales 1999; Burget 2004) is used in our experiments to simplify the estimation of the matrix \(\Theta\).
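As an illustrative sketch (not the generalized EM procedure actually used; the class means and covariances here are simply the sample statistics of the transformed data), the HLDA log-likelihood objective above can be evaluated for a candidate square transformation \(\Theta\) as follows:

```python
import numpy as np

def hlda_log_likelihood(theta, X, labels):
    """Evaluate the HLDA objective for a square transform theta.

    X      : (N, n) array of feature vectors
    labels : (N,) array of class (phone) indices
    theta  : (n, n) candidate transformation matrix
    """
    Z = X @ theta.T                         # transformed vectors, theta * x_i
    n = X.shape[1]
    log_det = np.linalg.slogdet(theta)[1]   # log |theta|
    ll = 0.0
    for c in np.unique(labels):
        Zc = Z[labels == c]
        mu = Zc.mean(axis=0)
        sigma = np.cov(Zc, rowvar=False) + 1e-6 * np.eye(n)   # regularised class covariance
        diff = Zc - mu
        inv = np.linalg.inv(sigma)
        quad = np.einsum('ij,jk,ik->i', diff, inv, diff)      # per-vector Mahalanobis term
        ll += np.sum(log_det
                     - 0.5 * n * np.log(2 * np.pi)
                     - 0.5 * np.linalg.slogdet(sigma)[1]
                     - 0.5 * quad)
    return ll

# Once an optimal theta has been estimated, its first p rows play the role of
# theta_p and project each n-dimensional vector x down to y_p = theta[:p] @ x.
```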
The hybrid system for enhancing esophageal speech
In this section, the theory and implementation of the hybrid system for esophageal speech enhancement are described in detail. A block diagram of the proposed system is shown in Fig. 1.
Features extraction
The speech signals of the source and target speakers undergo a parameterization phase. The objective of this phase is to extract MFCC (Davis and Mermelstein 1980) cepstral vectors. In this processing, the speech signal is sampled at 16 kHz with a pre-emphasis of 0.97. A Hamming window of 25 ms, shifted every 10 ms, is used for obtaining the short-time sections from which the cepstral coefficients are extracted. The first 12 cepstral coefficients (c1–c12), obtained from a bank of 26 filters on a Mel frequency scale, are retained. The logarithm of the frame energy, normalized over the entire sentence, is added to the 12 cepstral coefficients in order to form a vector of 13 static coefficients (12 MFCC + E).
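As an illustrative sketch of this parameterization step (assuming the python_speech_features package and a placeholder file name; the sentence-level energy normalization described above is omitted here):

```python
import numpy as np
from scipy.io import wavfile
from python_speech_features import mfcc

rate, signal = wavfile.read('sentence.wav')            # placeholder: one 16 kHz, 16-bit utterance
feats = mfcc(signal, samplerate=rate,
             winlen=0.025, winstep=0.01,               # 25 ms window shifted every 10 ms
             numcep=13, nfilt=26, preemph=0.97,        # 13 cepstra, 26 Mel filters, 0.97 pre-emphasis
             winfunc=np.hamming,                       # Hamming analysis window
             appendEnergy=True)                        # log frame energy in place of c0
print(feats.shape)                                     # (num_frames, 13) static vectors
```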
Statistical voice conversion
The voice conversion process can be decomposed into two steps: training and transformation. During the training step, a parameterization phase (feature extraction) is applied on two parallel corpora (source and target voices) containing sentences with the same phonetic content. The extracted cepstral vectors are used for determining an optimal conversion function that transforms the source vectors into target ones while minimizing the mean square error between the converted and target vectors. The second step is the transformation, in which the system uses the previously learned conversion function for transforming the source speech signals so that they are perceived as pronounced by the target speaker.
The purpose of voice conversion is to convert the characteristics of a sound signal from a source speaker into the characteristics of a target speaker. In this paper, we consider the GMM-based method described by Stylianou et al. (1998) and improved by Kain and Macon (1998), Toda et al. (2007) and then by Werghi et al. (2010). Werghi's algorithm has been used in this study as our basic voice conversion procedure.
Block diagram of the proposed hybrid system for improving the recognition of esophageal speech
Training process: The X (source) sentences are first normalized in order to have the same length in samples as their corresponding Y (target) normal-voice sentences (this process is realized with the free Unix "sox" software) and are then aligned by the Dynamic Time Warping (DTW) algorithm. This latter phase consists in mapping the source vectors onto the target vectors in order to create a huge mapping list. The corresponding vectors are then concatenated into a single joint vector \(z=[x \ y]^T\) before classification. These extended vectors are classified using the "k-means" vector quantization algorithm (Kanungo et al. 2000) in order to determine the initial GMM parameters. The joint probability of vector z is given by:
$$\begin{aligned} p(z)=\sum _{i=1}^G\alpha _i\mathcal {N}_i(z,\mu _i,\Sigma _i) \end{aligned}$$
$$\begin{aligned} \Sigma _i=\left[ \begin{array}{cc} \Sigma _{i}^{xx} &{} \Sigma _{i}^{xy} \\ \Sigma _{i}^{yx} &{} \Sigma _{i}^{yy} \end{array} \right] \ \ \ and \ \ \ \mu _i=\left[ \begin{array}{c} \mu _{i}^{x} \\ \mu _{i}^{y} \end{array} \right] \end{aligned}$$
where \(\mathcal {N}(\cdot ,\mu ,\Sigma )\) denotes a Gaussian distribution with mean vector \(\mu\) and covariance matrix \(\Sigma\), and \(\alpha\) is the mixture weight. This combination is used to model a joint GMM that depends on the source and target parameters. We obtain all the parameters at once: the source and target mean vectors \((\mu ^x,\mu ^y)\), the source and target covariance matrices \((\Sigma ^{xx},\Sigma ^{yy})\) and the cross-covariance matrices \((\Sigma ^{xy},\Sigma ^{yx})\) for each class i. The parameters are estimated by the iterative algorithm ISE2D (Iterative Statistical Estimation Directly from Data) described in (Werghi et al. 2010). The conversion function F(x) is then defined as the expectation E[y / x]:
$$\begin{aligned}&F(x)=E[y/x]=\sum _{i=1}^Gp(x/i)(\mu _i^y+\Sigma _i^{yx}{(\Sigma _i^{xx})}^{-1} (x-\mu _i^x))\end{aligned}$$
$$\begin{aligned}&p(x/i)=\frac{\alpha _i\mathcal {N}(x,\mu _i^x,\Sigma _{i}^{xx})}{\sum _{j=1}^G\alpha _j\mathcal {N}(x,\mu _j^x,\Sigma _j^{xx})} \end{aligned}$$
where p(x/i) represents the posterior probability that x is generated by the i th component and G is the number of Gaussians. The ISE2D method is computationally less expensive and gives better results than the classical EM method. This approach consists in estimating the GMM parameters directly from data by statistical computations shown below:
The weight \(\alpha _i\) of each normal distribution is estimated as the ratio between \(N_{s,i}\) the number of source vectors of class i and \(N_s\) the total number of source vectors.
$$\begin{aligned} \alpha _i=\frac{N_{s,i}}{N_s} \end{aligned}$$
The mean source vector \(\mu ^x\) and mean target vector \(\mu ^y\) are computed as follows:
$$\begin{aligned} \mu _i^x=\frac{\sum _{k=1}^{N_{s,i}}x_i^k}{N_{s,i}} \ \ \ and \ \ \ \mu _i^y=\frac{\sum _{k=1}^{N_{t,i}}y_i^k}{N_{t,i}} \end{aligned}$$
where \(x^k\) , \(y^k\) and \(N_{t,i}\) represent the \(kth\) source vector, the \(kth\) target vector and the number of target vectors of class i.
Conversion process: Once the GMM parameters are calculated, the previously estimated conversion function is applied to all the source vectors of the FPSD database in order to produce the converted 12 MFCC*+E*Footnote 6 vectors \(\hat{y}_k\):
$$\begin{aligned} \hat{y}_k=F(x_k) \end{aligned}$$
(k represents the vector number)
We do not use a synthesizer to reconstruct the speech signal. The converted vectors are used directly as input data of our speech recognition system.
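As an illustrative sketch, the conversion function F(x) and the posterior p(x/i) defined above can be applied to one source vector as follows (assuming the per-class parameters have already been estimated by ISE2D):

```python
import numpy as np
from scipy.stats import multivariate_normal

def convert(x, alphas, mu_x, mu_y, sigma_xx, sigma_yx):
    """Map one static source vector x to its converted counterpart F(x).

    alphas   : (G,) mixture weights
    mu_x     : (G, d) source means,  mu_y : (G, d) target means
    sigma_xx : (G, d, d) source covariances
    sigma_yx : (G, d, d) cross covariances (target x source)
    """
    G = len(alphas)
    # Posterior probability of each class given x
    lik = np.array([alphas[i] * multivariate_normal.pdf(x, mu_x[i], sigma_xx[i])
                    for i in range(G)])
    post = lik / lik.sum()
    # Weighted sum of per-class linear regressions, i.e. F(x)
    y = np.zeros_like(x, dtype=float)
    for i in range(G):
        y += post[i] * (mu_y[i] + sigma_yx[i] @ np.linalg.solve(sigma_xx[i], x - mu_x[i]))
    return y
```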
Adding derivatives and reducing the dimensionality by HLDA
We have implemented the same algorithm used in HTK for calculating the three derivatives. Let C(t) be the cepstral coefficients of the converted frame at time t; the corresponding delta coefficients \(\Delta C(t)\) are calculated over an analysis window of five frames (\(N_{\Delta }= 2\)) using the following formula:
$$\begin{aligned} \Delta C(t)= \frac{\sum _{i=1}^{N_{\Delta }}i(C_{t+i}-C_{t-i})}{2\sum _{i=1}^{N_{\Delta }}i^2} \end{aligned}$$
The same formula (10) is applied to the delta coefficients to obtain the acceleration \((\Delta \Delta )\) coefficients. Similarly, the third differential coefficients are computed by applying Eq. 10 to the acceleration \((\Delta \Delta )\) coefficients. The derivatives of the energy are calculated in the same way. As mentioned above in "Statistical voice conversion", the conversion is applied to the 13 static MFCC coefficients \((12 MFCC + E)\). The differential coefficients of order 1, 2 and 3, called dynamic coefficients (\(\Delta\), \(\Delta \Delta\) and \(\Delta \Delta \Delta\)), are calculated from the converted static coefficients and concatenated in the same space in order to increase the number of coefficients to \(d=52\). In order to improve the discriminant information and reduce the space dimensionality, the HLDA transformation matrix has been estimated using the method described in "The HLDA transformation". The new converted discriminant vectors contain 39 coefficients, which is the reference dimensionality used in most Automatic Speech Recognition (ASR) systems.
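An illustrative NumPy sketch of this delta regression (with \(N_{\Delta }= 2\); boundary frames are handled here by simple replication, which is an assumption about edge handling):

```python
import numpy as np

def deltas(C, N=2):
    """Delta coefficients of a (num_frames, num_coeffs) matrix C,
    following the regression formula above with a +/-N frame window."""
    padded = np.vstack([C[:1]] * N + [C] + [C[-1:]] * N)   # replicate edge frames
    denom = 2 * sum(i * i for i in range(1, N + 1))
    d = np.zeros_like(C, dtype=float)
    for t in range(C.shape[0]):
        for i in range(1, N + 1):
            d[t] += i * (padded[t + N + i] - padded[t + N - i])
    return d / denom

# Static -> 52-dim vectors: [C, delta, delta-delta, delta-delta-delta]
# feats52 = np.hstack([C, deltas(C), deltas(deltas(C)), deltas(deltas(deltas(C)))])
```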
The training of esophageal speech recognition system
Our esophageal speech recognition system is based on a statistical approach integrating the acoustic and language levels in one decision process. These levels are represented by Hidden Markov Models (HMM). The 36 phones described in "The FPSD corpus" (see Table 1) are all modeled by left-to-right HMMs (see Fig. 2) with five states each (but only three of them can emit observations). The training of the acoustic models consists in estimating the mean vectors and covariance matrices of a set of weighted Gaussians. These parameters allow the computation of the probability densities that constitute the likelihood values associated with the emission of an observation by a state of an HMM. Furthermore, the discrete probabilities associated with the transitions between the different states of the HMM are estimated. The converted discriminant vectors belonging to the training part of our FPSD database are used to estimate the optimal parameters \(\{A, \pi _i, B\}\).
Topology of the context-independent phonetic HMM
\(\pi _i\): An initial state probability.
\(A = a_{ij}\): The probability of transition from state i to state j (A is a transition probability matrix).
\(B = b_i(o_t)\): the matrix containing the probability of emitting the observation \(o_t\) in state i.
The output distribution \(b_i(o_t)\) for observing \(o_t\) in state i is generated by a Gaussian Mixture Model (GMM) and more precisely by a mixture of multivariate Gaussian distribution probabilities \(\mathcal {N}(o_t, \mu _{ik}, \Sigma _{ik})\) of mean vector \(\mu _{ik}\) and covariance matrix \(\Sigma _{ik}\):
$$\begin{aligned}&b_{i}(o_{t})= \sum _{k=1}^{n_{i}} \frac{c_{ik}}{\sqrt{{(2\pi )}^{d}|{\Sigma }_{ik}|}} \exp \left(-\frac{1}{2}(o_{t}-\mu _{ik})^{T}\Sigma ^{-1}_{ik}(o_{t}-\mu _{ik})\right)\nonumber \\&\left(\mathrm{with} \sum _{k=1}^{n_i}c_{ik}=1\right) \end{aligned}$$
where \(n_i\) represents the number of Gaussians in state i, \(o_t\) corresponds to an observation o at time t and \(c_{ik}\) represents the mixture weight for the k th Gaussian in state i. The recognition system is implemented using the HTK platform (Young et al. 2006). The HMM parameters are estimated based on the maximum likelihood estimation (MLE) criterion (Rabiner 1989). The obtained models are improved by increasing the number of Gaussians used to estimate the probability of emission of an observation in a state. The choice of the optimal number of Gaussians is a delicate issue, generally guided by the amount of training data. In our case, we limited this number to 16 Gaussians per state.
Phone recognition
Phone decoding is the heart of speech recognition systems. Its goal is to find the most likely state sequence corresponding to the observed parameters in a composite model, and to deduce the corresponding acoustic units. This task is performed using the Viterbi decoding algorithm applied to the converted test vectors with the optimal parameters \(\{A,\pi _i, B\}\) already estimated. In parallel with this decoding, a bigram language model is estimated on the training part of our FPSD database to improve the decoding. The bigram model can be represented by a two-dimensional table giving the probability of occurrence of two successive phonemes. In this study the bigram model has been trained using only the 425 training sentences, with HTK modules. The inclusion of this model yields approximately a 10 % gain in accuracy. Our language model could of course be enriched with textual content from large French databases in order to improve the performance of our system.
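As an illustrative sketch, such a phone bigram table can be estimated from the training transcriptions as follows (the .phn layout is assumed to be TIMIT-like with the phone label in the third column, the directory path is a placeholder, and add-one smoothing is used here, which is not necessarily the exact HTK recipe):

```python
import glob
from collections import Counter, defaultdict

bigrams = defaultdict(Counter)
phones = set()

for path in glob.glob('train/*.phn'):            # placeholder corpus layout
    with open(path) as f:
        seq = ['<s>'] + [line.split()[2] for line in f if line.strip()] + ['</s>']
    phones.update(seq)
    for prev, cur in zip(seq, seq[1:]):
        bigrams[prev][cur] += 1

def bigram_prob(prev, cur):
    """P(cur | prev) with add-one smoothing."""
    counts = bigrams[prev]
    return (counts[cur] + 1) / (sum(counts.values()) + len(phones))
```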
Experiments and results
In order to convert esophageal speech into "normal speech", we recorded 50 esophageal and laryngeal sentences uttered respectively by a French male laryngectomee (the same one who participated in the creation of the FPSD database) and a French male speaker with a non-pathological voice. These new recordings do not belong to the FPSD database; they were uttered in order to determine the statistical conversion function. During the first iteration of training, the DTW alignment is applied to the source vectors x and target vectors y containing 13 static coefficients. From the second iteration, the DTW alignment is realized between the converted static vectors \(\hat{y}\) and the target vectors y in order to refine the mapping list. The conversion function is estimated using 64 classes. For evaluating our hybrid system we performed three experiments at the phone recognition level (the conversion procedure described previously does not change). In the first experiment, we computed the derivatives of order 1 and 2 from the converted static vectors using the same HTK regression formula. The purpose of this experiment is to recover dynamic information and obtain vectors of dimension \(39 \ (12\) \(MFCC^*, E^* ; 12\) \(\Delta MFCC^*, \Delta E^*; 12\) \(\Delta \Delta MFCC^*, \Delta \Delta E^*)\), which is the reference dimensionality in most ASR systems. In experiment 2, another derivative (\(\Delta \Delta \Delta\)) is added and concatenated in the vector space in order to increase the number of coefficients to \(d = 52 \ (12\) \(MFCC^*, E^*; 12\) \(\Delta MFCC^*, \Delta E^*; 12\) \(\Delta \Delta MFCC^*, \Delta \Delta E^*; 12\) \(\Delta \Delta \Delta MFCC^*, \Delta \Delta \Delta E^*)\). In experiment 3, the space of 52 coefficients used in experiment 2 is reduced to 39 coefficients using the HLDA \((52\rightarrow 39)\) transformation, in order to improve the discriminant information and reduce the space dimensionality.
The phone accuracy and correct rates, calculated by Eq. 12, are used to evaluate our esophageal speech recognition system, where N represents the total number of labels in the test utterances. The Substitution (S), Insertion (I) and Deletion (D) errors are computed by a DTW alignment between the correct phone strings and the recognized phone strings.
$$\begin{aligned} Accuracy=\frac{N-(S+D+I)}{N}; \ Correct=\frac{N-(S+D)}{N} \end{aligned}$$
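As an illustrative sketch, S, D and I can be obtained from a minimum edit-distance alignment between the reference and recognized phone strings, from which both rates follow (the penalty weights and the example SAMPA strings are illustrative, not the exact HTK settings):

```python
def align_counts(ref, hyp):
    """Return (S, D, I) from a minimum edit-distance alignment of two phone lists."""
    n, m = len(ref), len(hyp)
    # cost[i][j] = (total_errors, S, D, I) for ref[:i] vs hyp[:j]
    cost = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = (0, 0, 0, 0)
    for i in range(1, n + 1):
        cost[i][0] = (i, 0, i, 0)                      # deletions only
    for j in range(1, m + 1):
        cost[0][j] = (j, 0, 0, j)                      # insertions only
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            candidates = [
                (cost[i - 1][j - 1][0] + sub, cost[i - 1][j - 1][1] + sub,
                 cost[i - 1][j - 1][2], cost[i - 1][j - 1][3]),        # match / substitution
                (cost[i - 1][j][0] + 1, cost[i - 1][j][1],
                 cost[i - 1][j][2] + 1, cost[i - 1][j][3]),            # deletion
                (cost[i][j - 1][0] + 1, cost[i][j - 1][1],
                 cost[i][j - 1][2], cost[i][j - 1][3] + 1),            # insertion
            ]
            cost[i][j] = min(candidates)
    _, S, D, I = cost[n][m]
    return S, D, I

ref = ['b', 'o~', 'Z', 'u', 'R']                       # "bonjour" in SAMPA (illustrative)
hyp = ['b', 'o~', 'u', 'R', 'R']
S, D, I = align_counts(ref, hyp)
N = len(ref)
print('Accuracy = %.2f  Correct = %.2f' % ((N - S - D - I) / N, (N - S - D) / N))
```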
Table 2 shows the results of the three experiments described above on the converted MFCC* vectors of the Test part of our own FPSD database containing 55 sentences.
An additional evaluation with the same experiments has been performed on our phone recognition system using the original FPSD database (without vector conversion). We also realized these experiments on the laryngeal voice TIMIT database (Garofolo et al. 1993) with the same 39 phonetic classes as described by Lee and Hon (1989).
Table 2 Influence of the number of differential coefficients with the HLDA transformation on phone recognition rates on the converted \(MFCC^*\) vectors of the Test part of FPSD database
Table 3 Influence of the number of differential coefficients with the HLDA transformation on phone recognition rates on the Test part of the original FPSD database (without vector conversion)
Table 4 Influence of the number of differential coefficients with the HLDA transformation on phone recognition rates on the core test of the TIMIT database
Tables 3 and 4 present the accuracy and correct rates for the three experiments described above, respectively on the Test part of the original FPSD database (without vector conversion) and on the Core Test of the TIMIT database. From the results of experiment 3 (in Table 2) we can observe that the proposed hybrid system provides an improvement in phone recognition accuracy, with an absolute increase of 3.40 %. Although this increase may seem modest, it is essential to point out that this is mainly due to the great complexity of the task undertaken. The performance gain obtained establishes that the HLDA and voice conversion techniques can improve the discriminative properties of the cepstral frames used, and therefore the recognition rates. We therefore think this article opens the way to further progress on this very important topic, the recognition of pathological voice.
Conclusion and future works
In this paper, we have presented our hybrid system for improving the recognition of esophageal speech. This system is based on a simplified statistical GMM voice conversion that projects the esophageal frames into a clean laryngeal speech space. We do not use a speech synthesizer to reconstruct the converted speech signals, because the converted Mel cepstral vectors are used directly as input of the phone recognition system. We also projected the converted MFCC* vectors by the HLDA transformation into a smaller space in order to improve their discriminative properties. The obtained results demonstrate that the proposed hybrid system can improve the recognition of esophageal speech. Concerning future work, we are interested in realizing a portable device that would recognize ES speech and synthesize the recognized text using a text-to-speech synthesizer. Such a device would allow laryngectomees easier oral communication with other people. However, the ES speech recognition system should then be able to restore a greater part of the phonetic information (speech-to-text). For this reason, we intend to extend our FPSD corpus in order to make possible the use of context-dependent HMM models (triphones). Moreover, we plan to replace our simple voice conversion method by Toda's algorithm [maximum likelihood estimation of spectral parameter trajectory considering global variance (GV), Toda et al. 2007] in order to improve the voice conversion process and consequently the accuracy of ES speech recognition.
Hidden Markov Model.
Gaussian Mixture Model.
Represents the converted MFCC vectors.
http://www.praat.org.
http://www.phon.ucl.ac.uk/home/sampa/index.html.
Represents the converted logarithm energy.
Boll SF (1979) Suppression of acoustic noise in speech using spectral subtraction. Acoust Speech Signal Process IEEE Trans 27(2):113–120
Burget L (2004) Combination of speech features using smoothed heteroscedastic linear discriminant analysis. In 8th International Conference on Spoken Language Processing. Sunjin Printing Co, Jeju island, pp 2549–2552. http://www.fit.vutbr.cz/research/pubs/index.php?id=7486
Cole D, Sridharan S, Geva M (1997) Application of noise reduction techniques for alaryngeal speech enhancement. In: Proc. IEEE TENCON-97, vol 2, pp 491–494. doi:10.1109/TENCON.1997.648252
Davis S, Mermelstein P (1980) Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences. IEEE Trans Acoust Speech Signal Process 28(4):357–366. doi:10.1109/TASSP.1980.1163420
del Pozo A, Young S (2006) Continuous tracheoesophageal speech repair. In: Proc EUSIPCO. IEEE, Florence, Italy, pp 1–5
del Pozo A, Young S (2008) Repairing tracheoesophageal speech duration. In: Proc Speech Prosody, pp 187–190
Dibazar AA, Berger TW, Narayanan S (2006) Pathological voice assessment. Engineering in Medicine and Biology Society EMBS '06 28th Annual International Conference of the IEEE, NY, USA, pp 1669–1673. doi:10.1109/IEMBS.2006.259835
Doi D, Toda T, Nakamura K, Saruwatari H, Shikano K (2014) Alaryngeal speech enhancement based on one-to-many eigenvoice conversion. IEEE Trans Audio Speech Lang 22(1):172–183
Gales MJF (1999) Semi-tied covariance matrices for hidden markov models. IEEE Trans Speech Audio Process 7(3):272–281
Garcia B, Vicente J (2002) Time-spectral technique for esophageal speech regeneration. In: 11th EUSIPCO (European Signal Processing Conference). IEEE, Toulouse, France, pp 113–116
Garcia B, Vicente J, Ruiz I, Alonso A, Loyo E (2005) Esophageal voices: Glottal flow restoration. In: Proc ICASSP, Philadelphia, PA, USA, vol 4, pp 141–144. doi:10.1109/ICASSP.2005.1415965
Garofolo JS, Lamel LF, Fisher WM, Fiscus JG, Pallett D, Dahlgren NL (1993) The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus CDROM. NTIS order number PB91-100354
Haeb-Umbach R, Ney H (1998) Linear discriminant analysis for improved large vocabulary continuous speech recognition. In: Proc ICASSP, pp 13–16
Hisada A, Sawada H (2002) Real-time clarification of esophageal speech using a comb filter. In: International Conference on Disability, Virtual Reality and Associated Technologies. The University of Reading, Hungary, pp 39–46
Kain A, Macon M (1998) Spectral voice conversion for text-to-speech synthesis. In: Proc ICASSP. IEEE, Seattle, WA, USA, vol 1, pp 285–288. doi:10.1109/ICASSP.1998.674423
Kanungo T, Mount D, Netanyahu N, Piatko C, Silverman R, Wu A (2000) An efficient k-means clustering algorithm: analysis and implementation. IEEE Trans Pattern Anal Mach Intell 23(7):881–892
Kumar N, Andreou A (1998) Heteroscedastic discriminant analysis and reduced rank hmms for improved speech recognition. Speech Commun 26(4):283–297
Lachhab O, Martino JD, Elhaj EI, Hammouch A (2014) Improving the recognition of pathological voice using the discriminant HLDA transformation. In third IEEE International Colloquium in Information Science and Technology (CIST). IEEE, Tetouan, MOROCCO, pp 370–373. doi:10.1109/CIST.2014.7016648
Lee KF, Hon HW (1989) Speaker-independent phone recognition using hidden markov models. Acoust Speech Signal Process IEEE Trans 37(11):1641–1648
Liu H, Zhao Q, Wan M, Wang S (2006) Enhancement of electrolarynx speech based on auditory masking. Biomed Eng IEEE Trans 53(5):865–874
Mantilla-Caeiros A, Nakano-Miyatake M, Perez-Meana H (2010) A pattern recognition based esophageal speech enhancement system. J Appl Res Technol 8(1):56–71
Matui K, Hara N, Kobayashi N, Hirose H (1999) Enhancement of esophageal speech using formant synthesis. Proc ICASSP 1:1831–1834
Pravena D, Dhivya S, Durga Devi A (2012) Pathological voice recognition for vocal fold disease. Int J Comput Appl 47(13):31–37
Qi Y, Weinberg B, Bi N (1995) Enhancement of female esophageal and tracheoesophageal speech. Acoust Soc Am 98(5):2461–2465
Rabiner LR (1989) Tutorial on hidden markov models and selected applications in speech recognition. Proc IEEE 77(2):257–278
Sharifzadeh HR, McLoughlin IV, Ahmadi F (2010) Reconstruction of normal sounding speech for laryngectomy patients through a modified CELP codec. Biomed Eng IEEE Trans 57(10):2448–2458
Stylianou Y, Cappé O, Moulines E (1998) Continuous probabilistic transform for voice conversion. IEEE Proc Speech Audio Process 6(2):131–142
Tanaka K, Toda T, Neubig G, Sakti S, Nakamura S (2014) A hybrid approach to electrolaryngeal speech enhancement based on noise reduction and statistical excitation generation. IEICE Trans Inform Syst 97(6):1429–1437
Toda T, Black W, Tokuda K (2007) Voice conversion based on maximum-likelihood estimation of spectral parameter trajectory. IEEE Trans Audio Speech Lang Process 15(8):2222–2235
Türkmen H, Karsligil M (2008) Reconstruction of dysphonic speech by melp. Lect Notes Comput Sci 5197:767–774
Werghi A, Martino JD, Jebara SB (2010) On the use of an iterative estimation of continuous probabilistic transforms for voice conversion. In: Proceedings of the 5th International Symposium on Image/Video Communication over fixed and Mobile Networks (ISIVC). IEEE, Rabat, MOROCCO, pp 1–4. doi:10.1109/ISVC.2010.5656149
Wuyts L, De Bodt MS, Molenberghs G, Remacle M, Heylen L, Millet B, Van Lierde K, Raes J, Van de Heyning PH (2000) The dysphonia severity index : an objective measure of vocal quality based on a multiparameter approach. J Speech Lang Hear Res 43(3):796–809
Young S, Kershaw D, Odell J, Ollason D, Valtchev V, Woodland P (2006) The HTK Book Revised for HTK Version 3.4. Cambridge University Engineering Department, Cambridge. http://htk.eng.cam.ac.uk/docs/docs.shtml
Yu P, Ouakine M, Revis J, Giovanni A (2001) Objective voice analysis for dysphonic patients: a multiparametric protocol including acoustic and aerodynamic measurements. J Voice 15(4):529–542
OL and JDM conceived and designed the study with the help of EIE and AH who proposed the original hybrid system used. All the experiments have been realized by OL. OL and JDM drafted the initial manuscript and all the authors significantly contributed to its revision. All authors read and approved the final manuscript.
The authors would like to thank the University Mohammed 5 for having partly supported this study.
LRGE Laboratory, ENSET, Mohammed 5 University, Madinat Al Irfane, Rabat, Morocco
Othman Lachhab & Ahmed Hammouch
LORIA, B.P. 239, Vandœuvre-lès-Nancy, 54506, France
Joseph Di Martino
INPT, Madinat Al Irfane, Rabat, Morocco
Elhassane Ibn Elhaj
Othman Lachhab
Ahmed Hammouch
Correspondence to Othman Lachhab.
Lachhab, O., Di Martino, J., Elhaj, E.I. et al. A preliminary study on improving the recognition of esophageal speech using a hybrid system based on statistical voice conversion. SpringerPlus 4, 644 (2015). https://doi.org/10.1186/s40064-015-1428-2
Speech enhancement
Esophageal speech assessment
Voice conversion
Pathological voices
Automatic speech recognition (ASR) | CommonCrawl |
NIPS Conference Reviews
This account posts selections from reviews about papers on http://papers.nips.cc/. Parts of the reviews that are summaries are used here.
papers.nips.cc
Disentangling factors of variation in deep representation using adversarial training
Mathieu, Michaël and Zhao, Junbo Jake and Sprechmann, Pablo and Ramesh, Aditya and LeCun, Yann
Neural Information Processing Systems Conference - 2016 via Local Bibsonomy
The authors present a new generative model that learns to disentangle the factors of variation of the data. The authors claim that the proposed model is quite robust to the amount of supervision. This is achieved by combining two of the most successful generative models: VAE and GAN. The model is able to resolve analogies in a consistent way on several datasets with minimal parameter/architecture tuning.
This paper presents a way to learn latent codes for data, that captures both the information relevant for a given classification task, as well as the remaining irrelevant factors of variation (rather than discarding the latter as a classification model would). This is done by combining a VAE-style generative model, and adversarial training. This model proves capable of disentangling style and content in images (without explicit supervision for style information), and proves useful for analogy resolution.
This paper introduces a generative model for learning to disentangle hidden factors of variation. The disentangling separates the code into two, where one is claimed to be the code that descries factors relevant to solving a specific task, and the other describing the remaining factors. Experimental results show that the proposed method is promising.
The authors combine the state-of-the-art methods VAE and GAN to generate images with two complementary codes: one relevant and one irrelevant. The major contribution of the paper is the development of a training procedure that exploits triplets of images (two sharing the relevant code, one not sharing it) to regularize the encoder-decoder architecture and avoid trivial solutions. The results are qualitatively good and comparable to previous articles using more sources of supervision.
The paper seeks to explore the variations among samples which separate multiple classes, using auto-encoders and decoders. Specifically, the authors propose combining generative adversarial networks and variational auto-encoders. The idea mimics the game play between two opponents, where one attempts to fool the other into believing a synthetic sample is in fact a natural sample. The paper proposes an iterative training procedure where the generative model is first trained on a number of samples while keeping the weights of the adversary constant, and the adversary is then trained while keeping the generative model weights constant. The paper performs experiments on generation of instances between classes, retrieval of instances belonging to a given class, and interpolation of instances between two classes. The experiments were performed on MNIST, a set of 2D character animation sprites, and the 2D NORB toy image dataset.
Improved Techniques for Training GANs
Salimans, Tim and Goodfellow, Ian J. and Zaremba, Wojciech and Cheung, Vicki and Radford, Alec and Chen, Xi
arXiv e-Print archive - 2016 via Local Bibsonomy
The authors provide a bag of tricks for training GANs in the image domain. Using these, they achieve very strong semi-supervised results on SVHN, MNIST, and CIFAR.
The authors then train the improved model on several image datasets, evaluate it on different tasks (semi-supervised learning and generative capabilities), and achieve state-of-the-art results.
This paper investigates several techniques to stabilize GAN training and encourage convergence. Although lacking theoretical justification, the proposed heuristic techniques give better-looking samples. In addition to human judgement, the paper proposes a new metric called the Inception score, obtained by applying a pre-trained deep classification network to the generated samples. By introducing free labels with the generated samples as a new category, the paper proposes experiments using GANs in the semi-supervised learning setting, which achieve SOTA semi-supervised performance on several benchmark datasets (MNIST, CIFAR-10, and SVHN).
An Online Sequence-to-Sequence Model Using Partial Conditioning
Jaitly, Navdeep and Le, Quoc V. and Vinyals, Oriol and Sutskever, Ilya and Sussillo, David and Bengio, Samy
The paper proposes a "neural transducer" model for sequence-to-sequence tasks that operates in a left-to-right and on-line fashion. In other words, the model produces output as the input is received instead of waiting until the full input is received like most sequence-to-sequence models do. Key ideas used to make the model work include a recurrent attention mechanism, the use of an end-of-block symbol in the output alphabet to indicate when the transducer should move to the next input block, and approximate algorithms based on dynamic programming and beam search for training and inference with the transducer model. Experiments on the TIMIT speech task show that the model works well and explore some of the design parameters of the model.
Like similar models of this type, the input is processed by an encoder and a decoder produces an output sequence using the information provided by the encoder and conditioned on its own previous predictions. The method is evaluated on a toy problem and the TIMIT phoneme recognition task. The authors also propose some smaller ideas like two different attention mechanism variations.
The map from block input to output is governed by a standard sequence-to-sequence model with additional state carried over from the previous block. Alignment of the two sequences is approximated by a dynamic program using a greedy local search heuristic. Experimental results are presented for phone recognition on TIMIT.
The encoder is a multi-layer LSTM RNN. The decoder is an RNN model conditioned on weighted sums of the last layer of the encoder and its previous output. The weighting schemes (attention) vary and can be conditioned on the hidden states or also on previous attention vectors. The decoder model produces a sequence of symbols until it outputs a special end character "e" and is moved to the next block; other mechanisms were explored as well (no end-of-block symbol, and separately predicting the end of a block given the attention vector). It is then fed the weighted sum of the next block of encoder states. The resulting sequence of symbols determines an alignment of the target symbols over the blocks of inputs, where each block may be assigned a variable number of characters. The system is trained by fixing an alignment that approximately resembles the best alignment. Finding this approximately best alignment is akin to a beam search with a beam size of M (line 169), but with a restricted set of symbols conditional on the last symbol in a particular hypothesis (since the target sequence is known). Alignments are computed less frequently than model updates (typically every 100 to 300 sequences). For inference, an unconstrained beam-search procedure is performed with a threshold on sequence length and beam size.
Professor Forcing: A New Algorithm for Training Recurrent Networks
Alex Lamb and Anirudh Goyal and Ying Zhang and Saizheng Zhang and Aaron Courville and Yoshua Bengio
Keywords: stat.ML, cs.LG
Abstract: The Teacher Forcing algorithm trains recurrent networks by supplying observed sequence values as inputs during training and using the network's own one-step-ahead predictions to do multi-step sampling. We introduce the Professor Forcing algorithm, which uses adversarial domain adaptation to encourage the dynamics of the recurrent network to be the same when training the network and when sampling from the network over multiple time steps. We apply Professor Forcing to language modeling, vocal synthesis on raw waveforms, handwriting generation, and image generation. Empirically we find that Professor Forcing acts as a regularizer, improving test likelihood on character level Penn Treebank and sequential MNIST. We also find that the model qualitatively improves samples, especially when sampling for a large number of time steps. This is supported by human evaluation of sample quality. Trade-offs between Professor Forcing and Scheduled Sampling are discussed. We produce T-SNEs showing that Professor Forcing successfully makes the dynamics of the network during training and sampling more similar.
Authors present a method similar to teacher forcing that uses generative adversarial networks to guide training on sequential tasks.
This work describes a novel algorithm to ensure the dynamics of an LSTM during inference follows that during training. The motivating example is sampling for a long number of steps at test time while only training on shorter sequences at training time. Experimental results are shown on PTB language modelling, MNIST, handwriting generation and music synthesis.
The paper is similar to Generative Adversarial Networks (GAN): in addition to a normal sequence-model loss function, the parameters try to "fool" a classifier. That classifier is trying to distinguish sequences generated by the sequence model from real data. A few objectives are proposed in section 2.2. The key difference from GAN is the B in equations 1-4: B is a function that outputs some statistics of the model, such as the hidden state of the RNN, whereas GAN rather tries to discriminate the actual output sequences.
This paper proposes a method for training recurrent neural networks (RNN) in the framework of adversarial training. Since RNNs can be used to generate sequential data, the goal is to optimize the network parameters in such a way that the generated samples are hard to distinguish from real data. This is particularly interesting for RNNs as the classical training criterion only involves the prediction of the next symbol in the sequence. Given a sequence of symbols $x_1, ..., x_t$, the model is trained so as to output $y_t$ as close to $x_{t+1}$ as possible. Training that way does not provide models that are robust during generation, as a mistake at time t potentially makes the prediction at time $t+k$ totally unreliable. This idea is somewhat similar to the idea of computing a sentence-wide loss in the context of encode-decoder translation models. The loss can only be computed after a complete sequence has been generated.
Can Active Memory Replace Attention?
Kaiser, Lukasz and Bengio, Samy
The authors propose to replace the notion of 'attention' in neural architectures with the notion of 'active memory' where rather than focusing on a single part of the memory one would operate on the whole of it in parallel.
This paper introduces an extension to neural GPUs for machine translation. I found the experimental analysis section lacking, both in comparisons to state-of-the-art MT techniques and in thorough evaluation of the proposed method.
This paper proposes active memory, which is a memory mechanism that operates all the part in parallel. The active memory was compared to attention mechanism and it is shown that the active memory is more effective for long sentence translation than the attention mechanism in English-French translation.
This paper proposes two new models for modeling sequential data in the sequence-to-sequence framework. The first is called the Markovian Neural GPU and the second is called the Extended Neural GPU. Both models are extensions of the Neural GPU model (Kaiser and Sutskever, 2016), but unlike the Neural GPU, the proposed models do not model the outputs independently but instead connect the output token distributions recursively. The paper provides empirical evidence on a machine translation task showing that the two proposed models perform better than the Neural GPU model and that the Extended Neural GPU performs on par with a GRU-based encoder-decoder model with attention.
On Multiplicative Integration with Recurrent Neural Networks
Wu, Yuhuai and Zhang, Saizheng and Zhang, Ying and Bengio, Yoshua and Salakhutdinov, Ruslan
This paper has a simple premise: that the, say, LSTM cell works better with multiplicative updates (equation 2) rather than additive ones (equation 1). This multiplicative update is used in lieu of the additive one in various places in the LSTM recurrence equations (the exact formulation is in the supplementary material). A slightly hand-wavy argument is made in favour of the multiplicative update, on the grounds of superior gradient flow (section 2.2). Mainly however, the authors make a rather thorough empirical investigation which shows remarkably good performance of their new architectures on a range of real problems. Figure 1(a) is nice, showing an apparent greater information flow (as defined by a particular gradient) through time for the new scheme, as well as faster convergence and less saturated hidden unit activations. Overall, the experimental results appear thorough and convincing, although I am not a specialist in this area.
This model presents a multiplicative alternative (with an additive component) to the additive update which happens at the core of various RNNs (Simple RNNs, GRUs, LSTMs). The multiplicative component, without introducing a significant change in the number of parameters, yields better gradient passing properties which enable the learning of better models, as shown in experiments.
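To make the contrast concrete, a tiny NumPy sketch of the two building blocks being compared (my own illustration of the simplest multiplicative-integration form; the paper's full formulation adds further gating and bias terms not shown here):

```python
import numpy as np

def additive_block(W, U, x, h, b):
    # conventional RNN pre-activation: phi(W x + U h + b)
    return np.tanh(W @ x + U @ h + b)

def multiplicative_block(W, U, x, h, b):
    # multiplicative integration: phi((W x) * (U h) + b), elementwise product
    return np.tanh((W @ x) * (U @ h) + b)
```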
Architectural Complexity Measures of Recurrent Neural Networks
Zhang, Saizheng and Wu, Yuhuai and Che, Tong and Lin, Zhouhan and Memisevic, Roland and Salakhutdinov, Ruslan and Bengio, Yoshua
This paper proposes several definitions of measures of complexity of a recurrent neural network. They measure 1) recurrent depth (degree of multi-layeredness, as a function of time, of recursive connections), 2) feedforward depth (degree of multi-layeredness of input -> output connections) and 3) recurrent skip coefficient (degree of directness, like the inverse of multi-layeredness, of connections). In addition to the actual definitions, there are two main contributions: - The authors show that the measures (which are limits as the number of time steps -> infinity) are well defined. - The authors correlate the measures with empirical performance in various ways, showing that all measures of depth can lead to improved performance.
This paper provides 3 measures of complexity for RNNs. They then show experimentally that these complexity measures are meaningful, in the sense that increasingly complexity seems to correlated with better performance.
The authors first present a rigorous graph-theoretic framework that describes the connecting architectures of RNNs in general, with which the authors easily explain how an RNN can be unfolded. The authors then go on to propose three architectural complexity measures of RNNs, namely the recurrent depth, the feedforward depth and the recurrent skip coefficient. Experiments on various tasks show the importance of certain measures on certain tasks, which indicates that these three complexity measures might be good guidelines when designing a recurrent neural network for a given task.
Reward Augmented Maximum Likelihood for Neural Structured Prediction
Norouzi, Mohammad and Bengio, Samy and Chen, Zhifeng and Jaitly, Navdeep and Schuster, Mike and Wu, Yonghui and Schuurmans, Dale
The proposed approach consists in corrupting the training targets with a noise derived from the task reward while doing maximum likelihood training. This simple but specific smoothing of the target distribution allows to significantly boost the performance of neural structured output prediction, as showcased on TIMIT phone recognition and machine translation tasks. The link between this approach and RL-based expected reward maximization is also made clear by the paper.
Prior work has chosen either maximum likelihood learning, which is relatively tractable but assumes a log likelihood loss, or reinforcement learning, which can be performed for a task-specific loss function but requires sampling many predictions to estimate gradients. The proposed objective bridges the gap with "reward-augmented maximum likelihood," which is similar to maximum likelihood but estimates the expected loss with samples that are drawn in proportion to their distance from the ground truth. Empirical results show good improvements with LSTM-based predictors on speech recognition and machine translation benchmarks relative to maximum likelihood training.
This work is inspired by recent advances in reinforcement learning and likelihood learning. The authors suggest learning parameters so as to minimize the KL divergence between CRFs and a probability model that is proportional to the reward function (which the authors call the payoff distribution, see Equation 4). The authors suggest an optimization algorithm for the KL-divergence minimization that depends on sampling from the payoff distribution.
Current methods to learn a model for structured prediction include max margin optimisation and reinforcement learning. However, the max margin approach only optimises a bound on the true reward, and requires loss augmented inference to obtain gradients, which can be expensive. On the other hand, reinforcement learning does not make use of available supervision, and can therefore struggle when the reward is sparse, and furthermore the gradients can have high variance. The paper proposes a novel approach to learning for problems that involve structured prediction. They relate their approach to simple maximum likelihood (ML) learning and reinforcement learning (RL): ML optimises the KL divergence of a delta distribution relative to the model distribution, and RL optimises the KL divergence of the model distribution relative to the exponentiated reward distribution. They propose reward-augmented maximum likelihood learning, which optimises the KL divergence of the exponentiated reward distribution relative to the model distribution. Compared to RL, the arguments of the KL divergence are swapped. Compared to ML, the delta distribution is generalised to the exponentiated reward distribution. Training is cheap in RML learning. It is only necessary to sample from the output set according to the exponentiated reward distribution. All experiments are performed in speech recognition and machine translation, where the structure over the output set is defined by the edit distance. An improvement is demonstrated over simple ML.
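A tiny NumPy sketch of the exponentiated-reward ("payoff") distribution discussed here, using negative edit distance as the reward (my own illustration; the temperature value and the enumeration of candidates are placeholders, not the paper's exact sampling scheme):

```python
import numpy as np

def edit_distance(a, b):
    """Plain Levenshtein distance between two token sequences."""
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

def payoff_distribution(candidates, target, tau=0.9):
    """q(y | y*) proportional to exp(reward / tau), reward = -edit distance."""
    rewards = np.array([-edit_distance(y, target) for y in candidates], dtype=float)
    w = np.exp(rewards / tau)
    return w / w.sum()

# Reward-augmented ML loss over an (enumerated or sampled) candidate set:
# loss = -sum(q_i * log_p_model(y_i | x)) over candidates y_i
```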
Swapout: Learning an ensemble of deep architectures
Saurabh Singh and Derek Hoiem and David Forsyth
Keywords: cs.CV, cs.LG, cs.NE
Abstract: We describe Swapout, a new stochastic training method, that outperforms ResNets of identical network structure yielding impressive results on CIFAR-10 and CIFAR-100. Swapout samples from a rich set of architectures including dropout, stochastic depth and residual architectures as special cases. When viewed as a regularization method swapout not only inhibits co-adaptation of units in a layer, similar to dropout, but also across network layers. We conjecture that swapout achieves strong regularization by implicitly tying the parameters across layers. When viewed as an ensemble training method, it samples a much richer set of architectures than existing methods such as dropout or stochastic depth. We propose a parameterization that reveals connections to exiting architectures and suggests a much richer set of architectures to be explored. We show that our formulation suggests an efficient training method and validate our conclusions on CIFAR-10 and CIFAR-100 matching state of the art accuracy. Remarkably, our 32 layer wider model performs similar to a 1001 layer ResNet model.
Swapout is a method that stochastically selects forward propagation in a neural network from a palette of choices: drop, identity, feedforward, residual. Achieves best results on CIFAR-10,100 that I'm aware of.
This paper examines a stochastic training method for deep architectures that is formulated in such a way that the method generalizes dropout and stochastic depth techniques. The paper studies a stochastic formulation for layer outputs which could be formulated as $Y =\Theta_1 \odot X+ \Theta_2 \odot F(X)$ where $\Theta_1$ and $\Theta_2$ are tensors of i.i.d. Bernoulli random variables. This allows layers to either: be dropped $(Y=0)$, act a feedforward layer $Y=F(X)$, be skipped $Y=X$, or behave like a residual network $Y=X+F(X)$. The paper provides some well reasoned conjectures as to why "both dropout and swapout networks interact poorly with batch normalization if one uses deterministic inference", while also providing some nice experiments on the importance of the choice of the form of stochastic training schedules and the number of samples required to obtain estimates that make sampling useful. The approach is able to yield performance improvement over comparable models if the key and critical details of the stochastic training schedule and a sufficient number of samples are used.
This paper proposes a generalization of some stochastic regularization techniques for effectively training deep networks with skip connections (i.e. dropout, stochastic depth, ResNets.) Like stochastic depth, swapout allows for connections that randomly skip layers, which has been shown to give improved performance--perhaps due to shorter paths to the loss layer and the resulting implicit ensemble over architectures with differing depth. However, like dropout, swapout is independently applied to each unit in a layer allowing for a richer space of sampled architectures. Since accurate expectation approximations are not easily attainable due to the skip connections, the authors propose stochastic inference (in which multiple forward passes are averaged during inference) instead of deterministic inference. To evaluate its effectiveness, the authors evaluate swapout on the CIFAR dataset, showing improvements over various baselines.
Deep ADMM-Net for Compressive Sensing MRI
Yang, Yan and Sun, Jian and Li, Huibin and Xu, Zongben
The paper addresses the problem of compressive sensing MRI (CS-MRI) by proposing a "deep unfolding" approach (cf. http://arxiv.org/abs/1409.2574) with a sparsity-based data prior and inference via ADMM. All layers of the proposed ADMM-Net are based on a generalization of ADMM inference steps and are discriminatively trained to minimize a reconstruction error. In contrast to other methods for CS-MRI, the proposed approach offers both high reconstruction quality and fast run-time.
The basic idea is to convert the conventional optimization-based CS reconstruction algorithm into a fixed neural network learned with the back-propagation algorithm. Specifically, the ADMM-based CS reconstruction is approximated with a deep neural network. Experimental results show that the approximated neural network outperforms several existing CS-MRI algorithms with less computational time.
The ADMM algorithm has proven to be useful for solving problems with differentiable and non-differentiable terms, and therefore has a clear link with compressed sensing. Experiments show some gain in performance with respect to the state of the art, especially in terms of computational cost at test time.
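For context, the kind of iteration that gets unrolled into layers looks roughly like the following. This is a generic ADMM sketch for an $\ell_1$-regularized least-squares problem, not the paper's actual CS-MRI operators; rho, lam, and the variable names are illustrative, and ADMM-Net makes such per-iteration quantities learnable.

```python
import numpy as np

def admm_l1(A, b, lam=0.1, rho=1.0, n_iter=50):
    """Minimize 0.5*||Ax - b||^2 + lam*||z||_1  subject to  x = z.
    Each iteration below is the kind of step a 'deep unfolding' network
    turns into a layer with learnable parameters."""
    m, n = A.shape
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA = A.T @ A
    Atb = A.T @ b
    inv = np.linalg.inv(AtA + rho * np.eye(n))   # precomputed for the x-update
    for _ in range(n_iter):
        x = inv @ (Atb + rho * (z - u))          # quadratic sub-problem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                            # dual (multiplier) update
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60); x_true[:5] = rng.normal(size=5)
b = A @ x_true + 0.01 * rng.normal(size=30)
x_hat = admm_l1(A, b)
```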
Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much
He, Bryan D. and Sa, Christopher De and Mitliagkas, Ioannis and Ré, Christopher
A study of how scan orders influence mixing time in Gibbs sampling.
This paper is interested in comparing the mixing rates of Gibbs sampling using either systematic scan or random updates. There are two basic contributions. First, Section 2 gives a set of cases where systematic scan is polynomially faster than random updates; together with a previously known case where it can be slower, this contradicts a conjecture that the speeds of systematic and random updates are similar. Second, Theorem 1 gives a set of mild conditions under which the mixing times of systematic scan and random updates are not "too" different (roughly within squares of each other).
First, following a recent paper by Roberts and Rosenthal, the authors construct several examples which do not satisfy the commonly held belief that systematic scan is never more than a constant factor slower and a log factor faster than random scan. The authors then provide Theorem 1, which gives weaker bounds that they nevertheless verify under some conditions. The theorem compares random scan to a lazy version of systematic scan and obtains bounds in terms of various other quantities, such as the minimum probability or the minimum holding probability.
MCMC is at the heart of many applications of modern machine learning and statistics. It is thus important to understand its computational and theoretical performance under various conditions. The present paper focuses on comparing systematic-scan Gibbs sampling with random-scan Gibbs. It does so first through the construction of several examples which challenge the dominant intuitions about mixing times, and then by developing theoretical bounds which are much wider than previously conjectured.
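To make the comparison concrete, here is a toy numpy sketch of systematic-scan versus random-scan Gibbs on a two-variable binary distribution. The joint table and all names are illustrative and unrelated to the paper's constructions; both schemes should recover the same stationary distribution, and the paper's question is how fast they get there.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy joint over two binary variables (rows: x0, cols: x1).
joint = np.array([[0.40, 0.10],
                  [0.05, 0.45]])

def gibbs(n_steps, systematic=True):
    x = [0, 0]
    samples = []
    for t in range(n_steps):
        # Systematic scan cycles 0,1,0,1,...; random scan picks a coordinate uniformly.
        i = t % 2 if systematic else rng.integers(2)
        other = x[1 - i]
        # Conditional p(x_i | x_other) read off the joint table.
        probs = joint[:, other] if i == 0 else joint[other, :]
        probs = probs / probs.sum()
        x[i] = rng.choice(2, p=probs)
        samples.append(tuple(x))
    return samples

for mode in (True, False):
    s = gibbs(20000, systematic=mode)
    freq = np.zeros((2, 2))
    for a, b in s[1000:]:
        freq[a, b] += 1
    print("systematic" if mode else "random", freq / freq.sum())
```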
Exploring Models and Data for Image Question Answering
Ren, Mengye and Kiros, Ryan and Zemel, Richard S.
This paper addresses the task of image-based Q&A on 2 axes: comparison of different models on 2 datasets and creation of a new dataset based on existing captions.
The paper is addressing an important and interesting new topic which has seen recent surge of interest (Malinowski2014, Malinowski2015, Antol2015, Gao2015, etc.). The paper is technically sound, well-written, and well-organized. They achieve good results on both datasets and the baselines are useful to understand important ablations. The new dataset is also much larger than previous work, allowing training of stronger models, esp. deep NN ones.
However, there are several weaknesses: their main model is not very different from existing work on image-Q&A (Malinowski2015, who also had a VIS+LSTM style model, though they additionally trained the CNN and RNN jointly and decoded with RNNs to produce longer answers), and it achieves similar performance (except that adding bidirectionality and 2-way image input helps). Also, as the authors themselves discuss, the dataset in its current form, synthetically created from captions, is a good start but is quite conservative and limited: the answers are single words, and the transformation rules are only designed for certain simple syntactic cases.
It is exploration work and will benefit a lot from a bit more progress in terms of new models and a slightly broader dataset (at least with answers up to 2-3 words).
Regarding new models, e.g., attention-based models are very relevant and intuitive here (and the paper would be much more complete with this), since these models should learn to focus on the right area of the image to answer the given question and it would be very interesting to analyze the results of whether this focusing happens correctly.
Before attention models, since 2-way image input helped (actually, it would be good to ablate 2-way versus bidirectionality in the 2-VIS+BLSTM model), it would be good to also show the model version that feeds the image vector at every time step of the question.
Also, it would be useful to have a nearest neighbor baseline as in Devlin et al., 2015, given their discussion of COCO's properties. Here too, one could imagine copying answers of training questions, for cases where the captions are very similar.
Regarding a broader-scope dataset, the issue with the current approach is that it stays too close to the captioning task. A major motivation for moving to image-Q&A is to move away from single, vague (non-specific), generic, one-event-focused captions towards a more complex and detailed understanding of, and reasoning over, the image; this does not happen with the paper's current dataset creation approach, which will also not encourage thinking about very different models for image-Q&A, since the best captioning models will continue to work well here. Also, having 2-3 word answers would capture more realistic and more diverse scenarios; and though it is true that evaluation is harder, one can start with existing metrics like BLEU, METEOR, CIDEr, and human eval. Since these will not be full sentences but just 2-3 word phrases, such existing metrics will be much more robust and stable already.
The task of image-Q&A is very recent with only a couple of prior and concurrent work, and the dataset creation procedure, despite its limitations (discussed above) is novel. The models are mostly not novel, being very similar to Malinowski2015, but the authors add bidirectionality and 2-way image input (but then Malinowski2015 was jointly training the CNN and RNN, and also decoding with RNNs to produce longer answers).
As discussed above, the paper shows useful results and ablations on the important, recent task of image-Q&A, based on 2 datasets -- an existing small dataset and a new large dataset; however, the second, new dataset is synthetically created by rule-transforming captions and restricted to single-word answers, thus keeping the impact of the dataset limited, because it keeps the task too similar to the generic captioning task and because there is no generation of answers or prediction of multi-word answers.
Winner-Take-All Autoencoders
Makhzani, Alireza and Frey, Brendan J.
The paper proposes a novel way to train a sparse autoencoder where the hidden unit sparsity is governed by a winner-take-all kind of selection scheme. This is a convincing way to achieve a sparse autoencoder, while the paper could have included some more details about their training strategy and the complexity of the algorithm.
The authors present a fully connected auto-encoder with a new sparsity constraint called the lifetime sparsity. For each hidden unit across the mini-batch, they rank the activation values, keeping only the top-k% for reconstruction. The approach is appealing because they don't need to find a hard threshold and it makes sure every hidden unit/filter is updated (no dead filters because their activation was below the threshold).
Their encoder is a deep stack of ReLU layers and the decoder is shallow and linear (note that non-symmetric auto-encoders usually lead to worse results). They also show how to apply the idea to RBMs. The effect of sparsity is very noticeable in the images depicting the filters.
They extend this auto-encoder in a convolutional/deconvolutional framework, making it possible to train on larger images than MNIST or TFD. They add a spatial sparsity, keeping the top activation per feature map for the reconstruction and combine it with the lifetime sparsity presented before.
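A small numpy sketch of the lifetime-sparsity step described above: for each hidden unit, only the top activations across the mini-batch are kept for reconstruction. The sparsity rate, array shapes, and names are illustrative, and the convolutional spatial-sparsity variant is not shown.

```python
import numpy as np

def lifetime_sparsity(h, rate=0.05):
    """h: (batch, n_hidden) activations. For each hidden unit (column),
    keep only the top `rate` fraction of activations across the batch
    and zero out the rest."""
    batch = h.shape[0]
    k = max(1, int(round(rate * batch)))
    out = np.zeros_like(h)
    top = np.argsort(h, axis=0)[-k:, :]     # indices of the k largest per column
    cols = np.arange(h.shape[1])
    out[top, cols] = h[top, cols]
    return out

rng = np.random.default_rng(0)
h = np.maximum(0.0, rng.normal(size=(100, 16)))   # ReLU-style activations
h_sparse = lifetime_sparsity(h, rate=0.05)        # 5 winners per hidden unit
```

Because the winners are chosen per unit rather than per example, every hidden unit gets gradient signal in every mini-batch, which matches the "no dead filters" argument above.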
The proposed approach builds on a mechanism close to that of the k-sparse autoencoders proposed by Makhzani et al. [14]. The authors extend the idea from [14] to build winner-take-all autoencoders (and RBMs) that enforce both spatial and lifetime regularization by keeping only a percentage (the largest) of the activations. The lifetime sparsity allows overcoming problems that could arise with k-sparse autoencoders. The authors next propose to embed their modeling framework in convolutional neural nets to deal with larger images than, e.g., those of MNIST.
End-To-End Memory Networks
Sukhbaatar, Sainbayar and Szlam, Arthur and Weston, Jason and Fergus, Rob
This paper presents an end-to-end version of memory networks (Weston et al., 2015) in which the model doesn't train on the intermediate 'supporting facts' strong supervision indicating which input sentences are the best memory accesses, making it much more realistic. They also have multiple hops (computational steps) per output symbol. The tasks are Q&A and language modeling, and the model achieves strong results.
The paper is a useful extension of memNN because it removes the strong, unrealistic supervision requirement and still performs pretty competitively. The architecture is defined pretty cleanly and simply. The related work section is quite well-written, detailing the various similarities and differences with multiple streams of related work. The discussion about the model's connection to RNNs is also useful.
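A rough numpy sketch of a single memory hop as described above: soft attention of the embedded query over embedded memories, followed by a weighted read. The bag-of-words input representation, the shapes, and all variable names are assumptions for illustration, not the authors' exact parameterization.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def memory_hop(q, memories, A, B, C):
    """One end-to-end memory hop (no supervision of which memory to read).
    q: query as bag-of-words counts, memories: (n_mem, vocab) bag-of-words,
    A, B, C: embedding matrices of shape (vocab, d)."""
    u = q @ B                      # embedded query, shape (d,)
    m = memories @ A               # input memory embeddings, shape (n_mem, d)
    c = memories @ C               # output memory embeddings, shape (n_mem, d)
    p = softmax(m @ u)             # soft attention over memories
    o = p @ c                      # read vector
    return u + o                   # fed to the next hop or to the answer softmax

rng = np.random.default_rng(0)
vocab, d, n_mem = 50, 20, 6
A, B, C = (rng.normal(scale=0.1, size=(vocab, d)) for _ in range(3))
memories = rng.integers(0, 2, size=(n_mem, vocab)).astype(float)
q = rng.integers(0, 2, size=vocab).astype(float)
print(memory_hop(q, memories, A, B, C).shape)
```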
StopWasting My Gradients: Practical SVRG
Harikandeh, Reza and Ahmed, Mohamed Osama and Virani, Alim and Schmidt, Mark and Konecný, Jakub and Sallinen, Scott
This paper extends the stochastic optimization algorithm SVRG proposed in recent years. The modifications mainly include: a convergence analysis of SVRG with a corrupted full gradient; mixing SGD and SVRG iterations; a mini-batching strategy; and the use of support vectors, etc. For each modification, the authors give clear proofs and achieve linear convergence under smoothness and strong-convexity assumptions. However, the paper's novelty is limited: the improvement in convergence rate is not obvious and the proof outline is very similar to that of the original SVRG. Key problems such as support for non-strongly convex losses remain unsolved.
This paper starts with a key proposition showing that SVRG does not require a very accurate approximation of the full gradient of the objective function. The authors use this proposition to derive a batching SVRG algorithm with the same convergence rate as the original SVRG. Then, the authors propose a mixed stochastic gradient/SVRG approach and give a convergence proof for such a scheme. As a different route to speeding things up, the authors propose a speed-up technique for the Huberized hinge-loss support vector machine.
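A minimal numpy sketch of the SVRG update under discussion, with an option to compute the snapshot gradient on a random subset of the data in the spirit of the batching variant. This is an illustration on a simple least-squares objective with illustrative step sizes, not the authors' implementation.

```python
import numpy as np

def svrg(A, b, step=0.01, n_outer=20, n_inner=None, batch_frac=1.0, rng=None):
    """SVRG for 0.5/n * ||Ax - b||^2. With batch_frac < 1 the snapshot gradient
    is only approximated on a random subset, as in 'batching' SVRG."""
    rng = rng or np.random.default_rng(0)
    n, d = A.shape
    n_inner = n_inner or n
    x = np.zeros(d)
    for _ in range(n_outer):
        x_snap = x.copy()
        idx = rng.choice(n, size=max(1, int(batch_frac * n)), replace=False)
        g_snap = A[idx].T @ (A[idx] @ x_snap - b[idx]) / len(idx)  # (approximate) full gradient
        for _ in range(n_inner):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])            # stochastic gradient at the iterate
            gi_snap = A[i] * (A[i] @ x_snap - b[i])  # same sample at the snapshot
            x = x - step * (gi - gi_snap + g_snap)   # variance-reduced step
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(500, 10)); x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=500)
print(np.linalg.norm(svrg(A, b) - x_true))
```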
Spatial Transformer Networks
Jaderberg, Max and Simonyan, Karen and Zisserman, Andrew and Kavukcuoglu, Koray
This paper presents a novel layer that can be used in convolutional neural networks. A spatial transformer layer computes re-sampling points of the signal based on another neural network. The suggested transformations include scaling, cropping, rotations and non-rigid deformations, whose parameters are trained end-to-end with the rest of the model. The resulting re-sampling grid is then used to create a new representation of the underlying signal through bilinear or nearest-neighbor interpolation. This has interesting implications: the network can learn to co-locate objects in a set of images that all contain the same object, the transformation parameters localize the attention area explicitly, and fine data resolution is restricted to areas important for the task. Furthermore, the model improves over the previous state-of-the-art on a number of tasks.
The layer has a small neural network that regresses the parameters of a parametric transformation (e.g., affine); then there is a module that applies the transformation to a regular grid, and a third more or less "reads off" the values at the transformed positions and maps them to a regular grid, thereby warping the image or the previous layer's output. Gradients for back-propagation are derived for a few cases. The results are mostly of the classic deep learning variety, including MNIST and SVHN, but there is also the fine-grained birds dataset. The networks with spatial transformers seem to lead to improved results in all cases.
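A numpy sketch of the grid-generation and bilinear-sampling steps described above, assuming an affine transformation acting on normalized coordinates. In the actual layer these operations are differentiable and gradients flow through both the sampling and the transformation parameters; here only the forward pass is shown, and all names and values are illustrative.

```python
import numpy as np

def affine_grid(theta, H, W):
    """theta: 2x3 affine matrix acting on normalized coords in [-1, 1]."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H), np.linspace(-1, 1, W), indexing="ij")
    grid = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])   # 3 x HW homogeneous coords
    src = theta @ grid                                          # 2 x HW source coords
    return src[0].reshape(H, W), src[1].reshape(H, W)

def bilinear_sample(img, sx, sy):
    H, W = img.shape
    # Map normalized source coords back to pixel indices.
    x = (sx + 1) * (W - 1) / 2.0
    y = (sy + 1) * (H - 1) / 2.0
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0
    return ((1 - wx) * (1 - wy) * img[y0, x0] + wx * (1 - wy) * img[y0, x1]
            + (1 - wx) * wy * img[y1, x0] + wx * wy * img[y1, x1])

img = np.arange(64, dtype=float).reshape(8, 8)
theta = np.array([[0.5, 0.0, 0.0],    # zoom in by 2x around the centre
                  [0.0, 0.5, 0.0]])
sx, sy = affine_grid(theta, 8, 8)
out = bilinear_sample(img, sx, sy)
```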
Inverse Reinforcement Learning with Locally Consistent Reward Functions
Nguyen, Quoc Phong and Low, Kian Hsiang and Jaillet, Patrick
This paper addresses the problem of inverse reinforcement learning when the agent can change its objective during the recording of trajectories. This results in transitions between several reward functions, each of which explains the observed agent's trajectory only locally. Transition probabilities between reward functions are unknown. The authors propose a cascade of EM and Viterbi algorithms to discover the reward functions and the segments on which they are valid.
Their algorithm maximizes the log-likelihood of the expert's demonstrated trajectories with respect to parameters consisting of the initial distributions over states and rewards, the local rewards, and the transition function between rewards. To do so, they use the expectation-maximisation (EM) method. Then, via the Viterbi algorithm, they are able to partition the trajectories into segments with locally consistent rewards.
Strengths of the paper:
1. The authors leverage existing and classical methods from the machine learning and optimization fields, such as EM, Viterbi, value iteration and gradient ascent, in order to build their algorithm. This will allow the community to easily reproduce their results.
2. The experiments are conducted on synthetic and real-world data. They compare their method to MLIRL, which does not use locally consistent rewards and which is the canonical choice to compare to, as their algorithm is a generalization of MLIRL. The results presented show the superiority of their method over MLIRL.
3. The idea presented by the authors is original as far as I know.
Weaknesses of the paper:
1. The paper is very dense (the figures are incorporated in the text), which makes it difficult to read.
2. The proposed algorithm needs knowledge of the dynamics and of the number of rewards. As future work, the authors plan to extend their algorithm to an unknown number of rewards; however, they do not mention removing the requirement of known dynamics. Could the authors comment on that, since some IRL algorithms do not need perfect knowledge of the dynamics?
3. The method needs to iteratively solve MDPs when learning the reward functions. For each theta in the gradient ascent an MDP needs to be solved. Is this prohibitive for huge MDPs? Is there a way to avoid that step? The action-value function Q is defined via a softmax operator in order to obtain a differentiable policy; does this allow the MDP to be solved more efficiently?
4. The authors use gradient ascent in the EM method; could they comment on the concavity of their criterion?
5. In the experiments (gridworlds), the number of features for the states is very small, and thus it is understandable that a reward which is linear in the features will perform badly. Do the authors consider comparing their method to an IRL method where the number of features defining the states is larger? This is my main problem with the experiments: the features used are not expressive enough to warrant a classical IRL method, which can explain why MLIRL performs badly and why its performance does not improve when the number of expert trajectories grows.
6. The performance is measured by the average log-likelihood of the expert's demonstrated trajectories, which is the criterion maximized by the algorithm. I think that a more pertinent measure would be the value function of the policy produced by optimizing the reward obtained by the algorithm. Could the authors comment on that and explain why their performance metric is more appropriate?
Teaching Machines to Read and Comprehend
Hermann, Karl Moritz and Kociský, Tomás and Grefenstette, Edward and Espeholt, Lasse and Kay, Will and Suleyman, Mustafa and Blunsom, Phil
This paper deals with the formal question of machine reading. It proposes a novel methodology for automatically building datasets for evaluating machine reading models. To do so, the authors leverage news resources that come with summaries to generate a large number of questions about articles by replacing their named entities. Furthermore, an attention-enhanced LSTM-inspired reading model is proposed and evaluated. The paper is well written and clear; the originality seems to lie in two aspects. First, an original methodology of question-answering dataset creation, where context-query-answer triples are automatically extracted from news feeds. Such a proposition can be considered important because it opens the way for large-scale model learning and evaluation. The second contribution is the addition of an attention mechanism to an LSTM reading model. The empirical results seem to show relevant improvements with respect to an up-to-date list of machine reading models.
Given the lack of an appropriate dataset, the authors provide a new dataset scraped from CNN and Daily Mail, using both the full text and the abstract summaries/bullet points. The dataset was then anonymised (i.e. entity names removed). Next the authors present two novel deep long short-term memory models which perform well on the Cloze query task.
Bandits with Unobserved Confounders: A Causal Approach
Bareinboim, Elias and Forney, Andrew and Pearl, Judea
The paper "Bandits with unobs. confounders: a causal approach" addresses the problem of bandit learning. It is assumed that in the observational setting, the player's decision is influenced by some unobserved context. If we randomize the player's decision, however, this intention is lost. The key idea is now that, using the available data from both scenarios, one can infer whether one should overrule the player's intention. Ultimately, this leads to the following strategy: observe the player's intention and then decide whether he should act accordingly or pull the other arm.
The authors show that current MAB algorithms actually attempt to maximize rewards according to the experimental distribution, which is not optimal in the confounding case, and they propose to make use of the effect of the treatment on the treated (ETT), i.e., comparing the average payouts obtained by players for going in favor of or against their intuition. To me, the paper is interesting because it addresses the confounding issue in MAB and proposes a way to estimate some properties of the confounder (related to the casino's payout strategy in the given example) based on ETT.
At first glance, one might think that the blinking light on the slot machines (B) and the drunkenness of the patron (D) could be either modified or observed in lines 153-159, where we read about a hypothetical attempt to optimize reward using traditional Thompson sampling. If those factors were observable or subject to intervention -- and I'd think they would be, in reality -- then it would be straightforward to do better than the 30% reward rate that's given. The paper eventually makes it clear that both of these variables are unobserved and unalterable. It would help if this were explicit early in the example, or if the cover story were modified to make this aspect more intuitive.
Multi-Task Bayesian Optimization
Swersky, Kevin and Snoek, Jasper and Adams, Ryan Prescott
This paper presents a multi-task Bayesian optimization approach to hyper-parameter setting in machine learning models. In particular, it leverages previous work on multi-task GP learning with decomposable covariance functions and Bayesian optimization of expensive cost functions. Previous work has shown that decomposable covariance functions can be useful in multi-task regression problems (e.g. \cite{conf/nips/BonillaCW07}) and that Bayesian optimization based on response-surfaces can also be useful for hyper-parameter tuning of machine learning algorithms \cite{conf/nips/SnoekLA12} \cite{conf/icml/BergstraYC13}.
The paper combines the decomposable covariance assumption \cite{conf/nips/BonillaCW07} and Bayesian optimization based on expected improvement \cite{journals/jgo/Jones01} and entropy search \cite{conf/icml/BergstraYC13} to show empirically that it is possible to:
1. Transfer optimization knowledge across related problems, addressing e.g. the cold-start problem
2. Optimize an aggregate of different objective functions with applications to speeding-up cross validation
3. Use information from a smaller problem to help optimize a bigger problem faster
Positive experimental results are shown on synthetic data (Branin-Hoo function), optimizing logistic regression hyper-parameters and optimizing hyper-parameters of online LDA on real data.
Predicting Parameters in Deep Learning
Denil, Misha and Shakibi, Babak and Dinh, Laurent and Ranzato, Marc'Aurelio and de Freitas, Nando
Motivated by recent attempts to learn very large networks, this work proposes an approach for reducing the number of free parameters in neural-network-type architectures. The method is based on the intuition that there is typically strong redundancy in the learned parameters (for instance, the first-layer filters of NNs applied to images are smooth): the authors suggest learning only a subset of the parameter values and then predicting the remaining ones through some form of interpolation. The proposed approach is evaluated for several architectures (MLP, convolutional NN, reconstruction-ICA) and different vision datasets (MNIST, CIFAR, STL-10). The results suggest that in general it is sufficient to learn fewer than 50% of the parameters without any loss in performance (significantly fewer parameters seem sufficient for MNIST).
The method is relatively simple: The authors assume a low-rank decomposition of the weight matrix and then further fix one of the two matrices using prior knowledge about the data (e.g., in the vision case, exploiting the fact that nearby pixels - and weights - tend to be correlated). This can be interpreted as predicting the "unobserved" parameters from the subset of learned filter weights via kernel ridge regression, where the kernel captures prior knowledge about the topology / "smoothness" of the weights. For the situation when such prior knowledge is not available the authors describe a way to learn a suitable kernel from data.
The idea of reducing the number of parameters in NN-like architectures through connectivity constraints is of course not novel in itself, and the authors provide a pretty good discussion of related work in section 5. Their method is very closely related to the idea of factorizing weight matrices as is, for instance, commonly done for 3-way RBMs (e.g. ref [22] in the paper), but also occasionally for standard RBMs (e.g. [R1], missing in the paper). The present paper differs from these in that the authors propose to exploit prior knowledge to constrain one of the matrices. As also discussed by the authors, the approach can further be interpreted as a particular type of pooling -- a strategy commonly employed in convolutional neural networks. Another view of the proposed approach is that the filters are represented as a linear combination of basis functions (in the paper, the particular form of the basis functions is determined by the choice of kernel). Such representations have been explored in various forms and to various ends in the computer vision and signal processing literature (see e.g. [R2,R3,R4,R5]); [R4,R5], for instance, represent filters in terms of a linear combination of basis functions that reduces the computational complexity of the filtering process.
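The kernel-ridge-regression interpretation mentioned above can be sketched as follows: a subset of filter weights is "learned" (here taken from a synthetic smooth filter) and the remaining weights are predicted with an RBF kernel over pixel coordinates. The kernel choice, lengthscale, and all names are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rbf_kernel(P, Q, lengthscale=1.5):
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def predict_filter(observed_idx, observed_vals, grid, lengthscale=1.5, ridge=1e-3):
    """Predict all filter weights on `grid` from a subset of learned weights
    via kernel ridge regression with a smoothness prior over coordinates."""
    P = grid[observed_idx]                         # coordinates of the learned weights
    K = rbf_kernel(P, P, lengthscale) + ridge * np.eye(len(P))
    alpha = np.linalg.solve(K, observed_vals)      # kernel ridge regression coefficients
    return rbf_kernel(grid, P, lengthscale) @ alpha

# 8x8 filter: "learn" 25% of the weights, predict the rest.
rng = np.random.default_rng(0)
coords = np.array([(i, j) for i in range(8) for j in range(8)], float)
true_filter = np.exp(-((coords[:, 0] - 3.5) ** 2 + (coords[:, 1] - 3.5) ** 2) / 8.0)
obs = rng.choice(64, size=16, replace=False)
pred = predict_filter(obs, true_filter[obs], coords)
print(np.abs(pred - true_filter).mean())
```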
Memory Limited, Streaming PCA
Mitliagkas, Ioannis and Caramanis, Constantine and Jain, Prateek
This paper proposes an approach to one-pass SVD based on a blocked variant of the power method, in which variance is reduced within each block of streaming data, and compares it to exact batch SVD.
Figure 1d is offered as an example where the proposed Algo 1 can scale to data which the authors claim is so large that "traditional batch methods" could not be run and reported. Yet there are many existing well-known SVD methods which are routinely used for even larger data sets than the largest here (sparse 8.2M vectors in 120k dimensions). These include EM-PCA (Roweis 1998) and fast randomized SVD (Halko et al. 2011), both of which the authors cite. Why were these methods (both very simple to implement efficiently even in Matlab, etc.) not reported for this data? Especially necessary to compare against is the randomized SVD, since it too can be done in one pass (see Halko et al.); although that cited paper discusses the tradeoffs in doing multiple passes -- something this paper does not even discuss. The authors say it took "a few hours" for Algo 1 to extract the top 7 components. Methods like the randomized SVD family of Halko et al. scale linearly in those parameters (n=8.2M, d=120k, k=7, and the number of non-zeros of the sparse data) and typically run in less than 1 hour for even larger data sets. So, demonstrating both the speed and accuracy of the proposed Algo 1 compared to the randomized algorithms seems necessary at this point, to establish the practical significance of this proposed approach.
This paper identifies and resolves a basic gap in the design of streaming PCA algorithms. It is shown that a block stochastic streaming version of the power method recovers the dominant rank-k PCA subspace with optimal memory requirements and sample complexity not too worse than batch PCA (which maintains the covariance matrix explicitly), assuming that streaming data is drawn from a natural probabilistic generative model. The paper is excellently written and provides intuitions for the analysis, starting with exact rank 1 and exact rank k case to the general rank k approximation problem. Some empirical analysis is also provided illustrating the approach for PCA on large document-term matrices.
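A minimal sketch of the kind of block stochastic power method analyzed here: each block of streaming samples is used once to form a covariance-times-iterate product, followed by orthonormalization. The spiked-covariance stream and all parameter values are illustrative, and the memory footprint is only the current d-by-k iterate plus one block.

```python
import numpy as np

def block_streaming_pca(stream_blocks, k):
    """stream_blocks: iterable of (B, d) arrays, each seen once. Returns a d x k basis."""
    Q = None
    for X in stream_blocks:
        if Q is None:
            d = X.shape[1]
            Q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(d, k)))
        S = X.T @ (X @ Q) / X.shape[0]   # block empirical covariance applied to Q
        Q, _ = np.linalg.qr(S)           # power-method step + orthonormalization
    return Q

# Synthetic spiked-covariance stream.
rng = np.random.default_rng(1)
d, k = 50, 3
U, _ = np.linalg.qr(rng.normal(size=(d, k)))
blocks = (U @ (2.0 * rng.normal(size=(k, 2000))) + 0.5 * rng.normal(size=(d, 2000))
          for _ in range(20))
Q = block_streaming_pca((b.T for b in blocks), k)
print(np.linalg.norm(U.T @ Q))  # close to sqrt(k) when the subspace is recovered
```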
Analyzing the Harmonic Structure in Graph-Based Learning
Wu, Xiao-Ming and Li, Zhenguo and Chang, Shih-Fu
The authors introduce a functional called the "harmonic loss" and show that (a) it characterizes smoothness in the sense that functions with small harmonic loss change little across large cuts (to be precise, the cut has to be a level set separator) (b) several algorithms for learning on graphs implicitly try to find functions that minimize the harmonic loss, subject to some constraints.
The "harmonic loss" they define is essentially the (signed) divergence $\nabla f$ of the function across the cut, so it's not surprising that it should be closely related to smoothness. In classical vector calculus one would take the inner product of this divergence with itself and use the identity
$\langle \nabla f, \nabla f \rangle = \langle f, \nabla^2 f \rangle$
to argue that functions with small variation, i.e., small $| \nabla f |^2$ almost everywhere can be found by solving the Laplace equation. On graphs, modulo some tweaking with edge weights, essentially the same holds, leading to minimizing the quadratic form $ f^\top L f$, which is at the heart of all spectral methods. So in this sense, I am not surprised.
Alternatively, one can minimize the integral of $| \nabla f |$, which is the total variation, and leads to a different type of regularization ($l1$ rather than $l2$ is one way to put it). The "harmonic loss" introduced in this paper is essentially this total variation, except there is no absolute value sign. Among all this fairly standard stuff, the interesting thing about the paper is that for the purpose of analyzing algorithms one can get away with only considering this divergence across cuts that separate level sets of $f$, and in that case all the gradients point in the same direction so one can drop the absolute value sign. This is nice because the "harmonic loss" becomes linear and a bunch of things about it are very easy to prove. At least this is my interpretation of what the paper is about.
Distributed representations of words and phrases and their compositionality
Mikolov, Tomas and Sutskever, Ilya and Chen, Kai and Corrado, Greg S and Dean, Jeff
Advances in Neural Information Processing Systems - 2013
Keywords: thema:deepwalk, language, modelling, representation
The paper discusses a number of extensions to the Skip-gram model previously proposed by Mikolov et al. (citation [7] in the paper), which learns linear word embeddings that are particularly useful for analogical-reasoning-type tasks. The extensions proposed (namely, negative sampling and sub-sampling of high-frequency words) enable extremely fast training of the model on large-scale datasets. This also results in significantly improved performance as compared to previously proposed techniques based on neural networks. The authors also provide a method for training phrase-level embeddings by slightly tweaking the original training algorithm.
This paper proposes 3 improvements for the skip-gram model, which allows for learning embeddings for words. The first improvement is subsampling of frequent words, the second is the use of a simplified version of noise contrastive estimation (NCE), and finally they propose a method to learn idiomatic phrase embeddings. In all three cases the improvements are somewhat ad hoc. In practice, both the subsampling and the negative samples help to improve generalization substantially on an analogical reasoning task. The paper reviews related work and furthers the interesting topic of additive compositionality in embeddings.
The article does not propose any explanation as to why negative sampling produces better results than NCE, which it is supposed to loosely approximate. In fact it doesn't explain why, beyond the obvious generalization gain, the negative sampling scheme should be preferred to NCE, since they achieve similar speeds.
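A small numpy sketch of one skip-gram negative-sampling update for a single (center, context) pair, following the usual formulation of the objective. The embedding sizes, learning rate, and the way negatives are drawn here are illustrative simplifications; the paper draws negatives from a smoothed unigram distribution and also subsamples frequent words.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgns_step(W_in, W_out, center, context, neg_ids, lr=0.025):
    """One skip-gram negative-sampling update.
    W_in, W_out: (vocab, dim) input/output embedding tables, updated in place."""
    v = W_in[center]
    grad_v = np.zeros_like(v)
    for idx, label in [(context, 1.0)] + [(n, 0.0) for n in neg_ids]:
        u = W_out[idx]
        g = sigmoid(v @ u) - label        # gradient factor of -log sigma(+/- u.v)
        grad_v += g * u
        W_out[idx] -= lr * g * v
    W_in[center] -= lr * grad_v

rng = np.random.default_rng(0)
vocab, dim = 1000, 50
W_in = rng.normal(scale=0.1, size=(vocab, dim))
W_out = np.zeros((vocab, dim))
# In the paper, negatives come from a unigram distribution raised to the 3/4 power;
# uniform sampling is used here purely for illustration.
sgns_step(W_in, W_out, center=3, context=17, neg_ids=rng.integers(0, vocab, size=5))
```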
The Fast Convergence of Incremental PCA
Balsubramani, Akshay and Dasgupta, Sanjoy and Freund, Yoav
This paper proves fast convergence rates for Oja's well-known incremental algorithm for PCA. The proof uses a novel technique to describe the progress of the algorithm, by breaking it into several "epochs"; this is necessary because the PCA problem is not convex, and has saddle points. The proof also uses some ideas from the study of stochastic gradient descent algorithms for strongly convex functions. The theoretical bounds give some insight into the practical performance of Oja's algorithm, and its sensitivity to different parameter settings.
They prove an $\tilde{O}(1/n)$ finite-sample rate of convergence for estimating the leading eigenvector of the covariance matrix. Their results suggest the best learning rate for incremental PCA. Also, their analysis provides insights into the relationship with SGD on strongly convex functions.
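Oja's incremental update, the algorithm analyzed here, fits in a few lines; the sketch below uses a c/t learning-rate schedule of the kind discussed in such analyses, with a synthetic spiked stream and illustrative constants.

```python
import numpy as np

def oja(stream, d, c=1.0):
    """One-pass estimate of the top eigenvector of E[x x^T] via Oja's rule."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=d); w /= np.linalg.norm(w)
    for t, x in enumerate(stream, start=1):
        w = w + (c / t) * x * (x @ w)      # Oja step with learning rate c/t
        w /= np.linalg.norm(w)             # project back onto the unit sphere
    return w

rng = np.random.default_rng(1)
d = 30
v = np.zeros(d); v[0] = 1.0                # true top direction
stream = (2.0 * rng.normal() * v + 0.3 * rng.normal(size=d) for _ in range(20000))
w = oja(stream, d, c=2.0)
print(abs(w @ v))                          # close to 1 when the direction is recovered
```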
Matrix factorization with binary components
Slawski, Martin and Hein, Matthias and Lutsik, Pavlo
This paper discusses a new approach to binary matrix factorization that is motivated by recent developments in non-negative matrix factorization. The goal of the paper is to present an algorithm for finding a factorization of a matrix in the form $D = T A$ where the entries of $T$ are in $\{0,1\}$. Such a model has wide applicability and is of interest to the ML community. The algorithm has provable recovery guarantees in the case of noiseless observations. A modified algorithm is applied to the noisy setting; however, the authors do not establish recovery guarantees.
The paper presents an algorithm for low-rank matrix factorization with the constraint that one of the factors is binary. The paper has several novel contributions for this problem. The algorithm guarantees the exact solution with a time complexity of $O(mr2^r+mnr)$, whereas a previous approach (E. Meeds et al., NIPS 2007) uses an MCMC algorithm and so cannot guarantee global convergence. Under additional assumptions on the binary factor matrix $T$, the uniqueness of $T$ is proved, which means that each data point has a unique representation in terms of the columns of $T$. Using the Littlewood-Offord lemma, the paper computes a theoretical speed-up factor for their heuristic for reducing the set of candidate binary vectors.
Learning to Pass Expectation Propagation Messages
Heess, Nicolas and Tarlow, Daniel and Winn, John
Advances in Neural Information Processing Systems 26 - 2013
Keywords: ep
This paper proposes learning expectation propagation (EP) message update operators from data, which would enable fast and efficient approximate inference in situations where computing these operators is otherwise intractable.
This paper attacks the problem of computing the intractable low dimensional statistics in EP message passing by training a neural network. Training data is obtained using importance sampling and assuming that we know the forward model. The paper appears technically correct, honest about shortcomings, provides an original approach to a known challenge within EP and nicely illustrates the developed method in a number of well-chosen examples.
The authors propose a method for learning a mapping from input messages to the output message in the context of expectation propagation. The method can be thought of as a sort of "compilation" step, where there is a one-time cost of closely approximating the true output messages using importance sampling, after which a neural network is trained to reproduce the output messages in the context of future inference queries.
Robust Low Rank Kernel Embeddings of Multivariate Distributions
Song, Le and Dai, Bo
The authors present a robust low-rank kernel embedding related to higher-order tensors and latent variable models. In general the work is interesting and promising. It provides synergies between machine learning, kernel methods, tensors and latent variable models.
The RKHS embedding of a joint probability distribution between two variables involves the notion of covariance operators. For joint distributions over multiple variables, a tensor operator is needed. The paper defines these objects together with appropriate inner product, norms and reshaping operations on them. The paper then notes that in the presence of latent variables where the conditional dependence structure is a tree, these operators are low-rank when reshaped along the edges connecting latent variables. A low-rank decomposition of the embedding is then proposed that can be implemented on Gram matrices. Empirical results on density estimation tasks are impressive.
Fast Algorithms for Gaussian Noise Invariant Independent Component Analysis
Voss, James R. and Rademacher, Luis and Belkin, Mikhail
This paper presents a fast ICA algorithm that works best under Gaussian noise. This is demonstrated with components simulated from different univariate distributions and variable Gaussian noise.
The writing is clear. The paper is incremental in the sense that it builds on ideas from (Belkin et al., 2013) but focuses on speeding up and improving their cumulant-based approach.
This is achieved via
1. a Hessian expansion of the cumulant-tensor-based quasi-orthogonalization.
2. gradient-based iterations that preserve quasi-orthogonalization of the latent factors (noised case) as well as whitening in the noiseless case.
This paper proposes a cumulant-based independent component analysis (ICA) algorithm for source separation in the presence of additive Gaussian noise. The algorithm is somewhat incremental, building upon Refs. [2] and [3], but appears technically correct, with experimental results confirming the claims made. The algorithms used for benchmarking assume no additive noise but are, like InfoMax, often quite robust to the addition of noise.
Multi-Prediction Deep Boltzmann Machines
Goodfellow, Ian J. and Mirza, Mehdi and Courville, Aaron C. and Bengio, Yoshua
The paper presents a method for learning layers of representation and for completing arbitrary missing subsets of inputs and labels in a single procedure, unlike some other methods such as deep Boltzmann machines (DBMs). It is a recurrent net following the same operations as a DBM, with the goal of predicting a subset of inputs from its complement. Parts of the paper are badly written, especially the model explanation and the multi-inference section; nevertheless the paper should be published, and I hope the authors will rewrite those parts.
Deep Boltzmann Machines (DBMs) are usually initialized by greedily training a stack of RBMs, and then fine-tuning the overall model using persistent contrastive divergence (PCD). To perform classification, one typically provides the mean-field features to a separate classifier (e.g. an MLP) which is trained discriminatively. Therefore the overall process is somewhat ad hoc, consisting of L + 2 models (where L is the number of hidden layers), each with its own objective. This paper presents a holistic training procedure for DBMs which has a single training stage (where both input and output variables are predicted), producing models which can classify directly as well as efficiently performing other tasks such as imputing missing inputs. The main technical contribution is the mechanism by which training is performed: a way of training DBMs which uses the mean field equations for the DBM to induce recurrent nets that are trained to solve different inference tasks (essentially predicting different subsets of observed variables).
Stochastic Ratio Matching of RBMs for Sparse High-Dimensional Inputs
Dauphin, Yann and Bengio, Yoshua
The paper uses a subsampling-based method to speed up ratio matching training of RBMs on high-dimensional sparse binary data. The proposed approach is a simple adaptation of the method proposed by Dauphin et al. (2011) for denoising autoencoders.
This paper develops an algorithm that can successfully train RBMs on very high dimensional but sparse input data, such as often arises in NLP problems. The algorithm adapts a previous method developed for denoising autoencoders for use with RBMs. The authors present extensive experimental results verifying that their method learns a good generative model; provides unbiased gradient estimates; attains a two order of magnitude speed up on large sparse problems relative to the standard implementation; and yields state of the art performance on a number of NLP tasks. They also document the curious result that using a biased version of their estimator in fact leads to better performance on the classification tasks they tested.
Fast Convergence of Regularized Learning in Games
Syrgkanis, Vasilis and Agarwal, Alekh and Luo, Haipeng and Schapire, Robert E.
The authors perform theoretical analysis about faster convergence with multi-player normal-form games by generalizing techniques for two-player zero-sum games. They also perform empirical evaluation by using the 4-bidder simultaneous auction game.
The paper is concerned with two problems:
1. How does the social welfare of players using regret minimization algorithms compare to the optimal welfare?
2. Can one obtain better regret bounds when all players use a regret minimization algorithm?
The paper deals with bounds on regret minimization algorithms in games. The usual regret bound for these algorithms is $O(\sqrt{T})$. However, this assumes that the learner faces a completely adversarial opponent. It is natural to assume instead that in a game everyone will play a regret minimization algorithm, and the question is whether one can obtain better rates in this scenario. The authors show that regret in $O(T^{1/4})$ is achievable for general games.
Competitive Distribution Estimation: Why is Good-Turing Good
Orlitsky, Alon and Suresh, Ananda Theertha
The paper gives justification for the widespread use of the Good-Turing estimator for discrete distribution estimation through minimax regret analysis with two comparator classes. The paper obtains competitive regret bounds that lead to a more accurate characterization of the performance of the Good-Turing estimators and in some cases are much better than the best known risk bounds. The comparator classes considered are estimators with knowledge of the distribution up to permutation, and estimators with full knowledge of the distribution but with the constraint that they must assign the same probability mass to symbols appearing with the same frequencies.
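For reference, a minimal sketch of the classical (unsmoothed) Good-Turing estimator that the paper analyzes; practical implementations smooth the counts-of-counts $N_r$, which is omitted here, and the toy sample below is purely illustrative.

```python
from collections import Counter

def good_turing(sample):
    """Classical (unsmoothed) Good-Turing: a symbol seen r times is assigned
    probability (r + 1) * N_{r+1} / (N_r * n), where N_r is the number of
    distinct symbols seen exactly r times and n is the sample size.
    The total mass left for unseen symbols is N_1 / n.
    (Practical versions smooth the N_r counts, which is omitted here.)"""
    n = len(sample)
    counts = Counter(sample)                    # r for each observed symbol
    count_of_counts = Counter(counts.values())  # N_r
    probs = {}
    for sym, r in counts.items():
        n_r, n_r1 = count_of_counts[r], count_of_counts.get(r + 1, 0)
        probs[sym] = (r + 1) * n_r1 / (n_r * n)
    missing_mass = count_of_counts.get(1, 0) / n
    return probs, missing_mass

probs, p0 = good_turing(list("abracadabra"))
print(probs, p0)
```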
A* Sampling
Maddison, Chris J. and Tarlow, Daniel and Minka, Tom
This paper introduces a new approach to sampling from continuous probability distributions. The method extends prior work on using a combination of Gumbel perturbations and optimization to the continuous case. This is technically challenging, and they devise several interesting ideas to deal with continuous spaces, e.g. to produce an exponentially large or even infinite number of random variables (one per point of the continuous/discrete space) with the right distribution in an implicit way. Finally, they highlight an interesting connection with adaptive rejection sampling. Some experimental results are provided and show the promise of the approach.
This paper introduces a sampling algorithm based on the Gumbel-max trick and A* search for continuous spaces. The Gumbel-Max trick adds perturbations to an energy function and after applying argmax, results in exact samples from the Gibbs distribution. While this applies to discrete spaces, this paper extends this idea to continuous spaces using the upper bounds on the infinitely many perturbation values.
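The discrete Gumbel-max trick that the paper extends to continuous spaces takes only a few lines; the example below is the standard finite case, not the paper's A* construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_weights, n_samples=1):
    """Exact sampling from p(i) proportional to exp(log_weights[i]):
    add i.i.d. Gumbel(0,1) noise to each log-weight and take the argmax."""
    g = rng.gumbel(size=(n_samples, len(log_weights)))
    return np.argmax(log_weights + g, axis=1)

log_w = np.log(np.array([0.1, 0.2, 0.7]))
samples = gumbel_max_sample(log_w, n_samples=100000)
print(np.bincount(samples) / len(samples))   # approximately [0.1, 0.2, 0.7]
```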
Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS)
Shrivastava, Anshumali and 0001, Ping Li
This paper generalizes the LSH method to account for the (bounded) lengths of the database vectors, so that the LSH tricks for fast approximate nearest neighbor search can exploit the well-known relation between Euclidean distance and dot-product similarity (e.g. as in equation 2) and support MIPS search as well. They give 3 motivating examples where solving MIPS rather than kNN per se is more appropriate and needed. Their algorithm is essentially equation 9 (using equation 7 to compute vector reformulations $Q(q)$ and $P(x)$ of the query and a database element, respectively). This is based on an apparently novel observation (equation 8) that the squared distance from the query becomes a constant minus twice the dot product, up to an error term that vanishes when the parameter m, which controls the norm powers appended to $P(x)$, is sufficiently large (e.g. just 3 is claimed to suffice), with the resulting vectors $Q(q)$ and $P(x)$ being just m entries longer than the original inputs.
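A sketch of the asymmetric transformations described above, following the construction as I understand it: database vectors are scaled to norm at most U < 1 and padded with powers of their norm, queries are normalized and padded with constants 1/2, after which Euclidean nearest neighbour approximately recovers the maximum inner product. The toy check uses a slightly larger m than the m=3 mentioned above so that the effect is exact on random data; all values are illustrative.

```python
import numpy as np

def preprocess_database(X, m=3, U=0.83):
    """P(x): scale all vectors so the largest norm is U < 1, then append
    ||x||^2, ||x||^4, ..., ||x||^(2^m)."""
    scale = U / np.max(np.linalg.norm(X, axis=1))
    Xs = X * scale
    norms = np.linalg.norm(Xs, axis=1, keepdims=True)
    extras = np.concatenate([norms ** (2 ** (i + 1)) for i in range(m)], axis=1)
    return np.concatenate([Xs, extras], axis=1), scale

def preprocess_query(q, m=3):
    """Q(q): normalize the query and append m constants equal to 1/2."""
    qn = q / np.linalg.norm(q)
    return np.concatenate([qn, 0.5 * np.ones(m)])

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
q = rng.normal(size=32)
m = 5                  # a bit larger than the paper's m=3 so the toy check is exact
P, _ = preprocess_database(X, m=m)
Q = preprocess_query(q, m=m)
# Squared lifted distance = const - 2*<q, x> + ||x||^(2^(m+1)), and the last
# term vanishes for large m, so the Euclidean NN is (approximately) the MIPS answer.
print(np.argmin(np.linalg.norm(P - Q, axis=1)), np.argmax(X @ q))
```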
Scalable Influence Estimation in Continuous-Time Diffusion Networks
Du, Nan and Song, Le and Gomez-Rodriguez, Manuel and Zha, Hongyuan
The paper addresses how to estimate and maximize influence in large networks, where influence of node (or set of nodes) A is the expected number of nodes that will eventually adopt a certain idea following the initial adoption by A. The authors develop an algorithm for estimating influence within a given time frame, then use it as the basis of a greedy algorithm to find a given number of nodes to (approximately) maximize influence within the given time frame. They present theoretical bounds and an experimental evaluation of the algorithm.
The authors build on an extensive list of existing work, which is appropriately cited. The most relevant is the work by Gomez-Rodriguez & Scholkopf (2012) \cite{conf/icml/Gomez-RodriguezS12}, which provides an exact analytical solution to the identical formulation of the influence estimation problem. The main innovation in the present paper is a fast randomized algorithm for estimating influence, which is based on the algorithm for estimating neighborhood size by Cohen (1997) \cite{journals/jcss/Cohen97}. This approximation allows more flexibility in modeling the flows through the edges, is substantially faster than the analytical solution, and scales well with network size. Overall, this is a solid paper on an important topic of practical relevance.
Submodular Optimization with Submodular Cover and Submodular Knapsack Constraints
Iyer, Rishabh K. and Bilmes, Jeff A.
The authors introduce two new submodular optimization problems and investigate approximation algorithms for them. The problems are natural generalizations of many previous problems: there is a covering problem ($\min\{ f(X) : g(X) \ge c\}$) and a packing or knapsack problem ($\max\{ g(X) : f(X) \le b\}$), where both f and g are submodular. These generalize well-known, previously studied versions of the problems, which usually assume that f is modular. They show that there is an intimate relationship between the two problems: any polynomial-time bi-criterion algorithm for one problem implies one for the other problem (with similar approximation factors) using a simple reduction. They then present a general iterative framework for solving the two problems by replacing either f or g by tight upper or lower bounds (often modular) at each iteration. These tend to reduce the problem at each iteration to a simpler subproblem for which there are existing algorithms with approximation guarantees. In many cases, they are able to translate these into approximation guarantees for the more general problem. Their approximation bounds are curvature-dependent and highlight the importance of this quantity on the difficulty of the problem. The authors also present a hardness result that matches their best approximation guarantees up to log factors, show that a number of existing approximation algorithms (e.g. greedy ones) for the simpler problem variants can be recovered from their framework by using specific modular bounds, and show experimentally that the simpler algorithm variants may perform as well as the ones with better approximation guarantees in practice.
A memory frontier for complex synapses
Lahiri, Subhaneil and Ganguli, Surya
The paper studies the problem of memory storage with discrete (digital) synapses. Previous work established that memory capacity can be increased by adding a cascade of (latent) states but the optimal state transition dynamics was unknown and the actual dynamics was usually hand-picked using some heuristic rules. In this paper the authors aim to derive the optimal transition dynamics for synaptic cascades. They first derive an upper bound on achievable memory capacity and show that simple models with linear chain structures can approach (achieve) this bound.
The paper applies the theory of ergodic Markov chains in continuous time to the analysis of the memory properties of online learning in synapses with intrinsic states, extending earlier work of Abbott, Fusi and their co-workers.
Optimal Neural Population Codes for High-dimensional Stimulus Variables
Wang, Zhuo and Stocker, Alan A. and Lee, Daniel D.
Finding the objective functions that regions of the nervous system are optimized for is a central question in neuroscience, providing a central computational principle behind neural representation in a given region. One common objective is to maximize the Shannon information that the neural response encodes about the input (infomax). This is supported by some experimental evidence. Another is to minimize the decoding error when the neural population is decoded for a particular variable or variables. This has also been found to have some experimental support. These two objectives are similar in some circumstances, giving similar predictions; in other cases they differ more.
Studies finding model-optimal distributions of neural population tuning that minimize decoding error (L2-min) have mostly considered 1-dimensional stimuli. In this paper the authors extend substantially on this by developing analytical methods for finding the optimal distributions of neural tuning for higher-dimensional stimuli. Their methods apply under certain limited conditions, such as when there is an equal number of neurons and stimulus dimensions (the diffeomorphic case). The authors compare their results to the infomax solution (in most detail for the 2D case), and find fairly similar results in some respects, but with two key differences: the L2-min basis functions are more orthogonal than the infomax ones, and L2-min has discrete solutions rather than the continuum found for infomax. A consequence of these differences is that L2-min representations encode more correlated signals.
Correlations strike back (again): the case of associative memory retrieval
Savin, Cristina and Dayan, Peter and Lengyel, Máté
The paper investigates how correlations among synaptic weights, rather than correlations among neural activity, influence the retrieval performance of auto-associative memory. The authors study two types of well-known learning rules, additive learning rules (e.g., Hebbian learning) and palimpsest learning rules (e.g., cascade learning), and show that synaptic correlations are induced in most cases. They also investigate optimal retrieval dynamics and show that there exists a local version of the dynamics that can be implemented in neural networks (except for an XOR cascade model).
Variational Inference for Mahalanobis Distance Metrics in Gaussian Process Regression
Titsias, Michalis K. and Lázaro-Gredilla, Miguel
In a GP regression model, the process outputs can be integrated over analytically, but this is not so for (a) inputs and (b) kernel hyperparameters. Titsias et al. (2010) showed a very clever way to do (a) with a particular variational technique (the goal was to do density estimation). In this paper, (b) is tackled, which requires some nontrivial extensions of Titsias et al. In particular, they show how to decouple the GP prior from the kernel hyperparameters. This is a simple trick, but very effective for what they want to do. They also treat the large number of kernel hyperparameters with an additional level of ARD and show how the ARD hyperparameters can be solved for analytically, which is nice.
One-shot learning and big data with n=2
Dicker, Lee H. and Foster, Dean P.
This paper studies a linear latent factor model, where one observes "examples" consisting of high-dimensional vectors $x_1, x_2, \ldots \in R^d$, and one wants to predict "labels" consisting of scalars $y_1, y_2, \ldots \in R$. Crucially, one is working in the "one-shot learning" regime, where the number of training examples n is small (say, $n=2$ or $n=10$), while the dimension d is large (say, $d \rightarrow \infty$). This paper considers a well-known method, principal component regression (PCR), and proves some somewhat surprising theoretical results: PCR is inconsistent, but a modified PCR estimator is weakly consistent; the modified estimator is obtained by "expanding" the PCR estimator, which is different from the usual "shrinkage" methods for high-dimensional data.
This paper aims to provide an analysis for principal component regression in the setting where the feature vectors $x$ are themselves noisy. The authors let $x = v + e$ where $e$ is some corruption of the nominal feature vector $v$; and $v = a u$ where $a \sim N(0,\eta^2 \gamma^2 d)$, while the observations are $y = \theta/(\gamma \sqrt{d}) \langle v,u \rangle + \xi$. This formulation is slightly different than the standard one because the design vectors are noisy, which can pose challenges in identifying the linear relationship between $x$ and $y$. Thus, using the top principal components of $x$ is a standard method used in order to help regularize the estimation. The paper is relevant to the ML community. The key message of using a bias-corrected estimate of $y$ is interesting, but not necessarily new. Handling bias in regularized methods is a common problem (cf. Regularization and variable selection via the Elastic Net, Zou and Hastie, 2005). The authors present theoretical analysis to justify their results. I find the paper interesting; however I am not sure if the number of new results and level of insights warrants acceptance.
Summary Statistics for Partitionings and Feature Allocations
Fidaner, Isik Baris and Cemgil, Ali Taylan
The authors propose novel approaches for summarizing the posterior of partitions in infinite mixture models. Often in applications, the posterior of the partition is quite diffuse; thus, the default MAP estimate is unsatisfactory. The proposed approach is based on the cumulative block sizes, which count the number of clusters of size $\ge k$, for $k=1,\dots,n$. They also examine the projected cumulative block sizes, when the partition is projected onto a subset of $\{1,\dots,n\}$. These quantities are summarized by the cumulative occurrence distribution, the per-element information of a set, the entropy, the projected entropy, and the subset occurrence. Finally, they propose using an agglomerative clustering algorithm where the projection entropy is used to measure distances between sets. In illustrations, the posterior of the partition is summarized by the dendrogram produced from the entropy agglomerative algorithm, along with existing summaries such as the posterior histogram of the number of clusters and the pairwise occurrences.
Actor-Critic Algorithms for Risk-Sensitive MDPs
A., Prashanth L. and Ghavamzadeh, Mohammad
The paper addresses the problem of finding a policy with a high expected return and a bounded variance. The paper considers both the discounted and the average reward cases. The authors propose to formulate this problem as a constrained optimization problem, where the gradient of the Lagrangian dual function is estimated from samples. This gradient is composed of the gradient of the expected return and the gradient of the expected squared return. Both gradients need to be estimated in every state. The authors use a linear function approximation to generalize the gradient estimates to states that were not encountered in the samples. The authors use stochastic perturbation to evaluate the gradients in particular states by sampling two trajectories, one with policy parameters theta and another with policy parameters theta+beta, where beta is a perturbation random variable. The policy parameters are updated in an actor-critic scheme. The authors prove that the proposed optimization method converges to a local optimum. Numerical experiments on a traffic-light control problem show that the proposed technique finds a policy with a slightly higher risk than the optimal solution, but with a significantly lower variance.
What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach
Dai, Zhenwen and Exarchakis, Georgios and Lücke, Jörg
This paper presents a generative model for natural image patches which takes into account occlusions and the translation invariance of features. The model consists of a set of masks and a set of features which can be translated throughout the patch. Given a set of translations for the masks and features, the patch is then generated by sampling (conditionally) independent Gaussian noise. An inference framework for the parameters is proposed and is demonstrated on synthetic data with convincing results. Additionally, experiments are run on natural image patches and the method learns a set of masks and features for natural images. When combined together, the resulting receptive fields look mostly like Gabors, but some of them have a globular structure.
Decision Jungles: Compact and Rich Models for Classification
Shotton, Jamie and Sharp, Toby and Kohli, Pushmeet and Nowozin, Sebastian and Winn, John M. and Criminisi, Antonio
This paper revisits the idea of decision DAGs for classification. Unlike a decision tree, a decision DAG is able to merge nodes at each layer, preventing the tree from growing exponentially with depth. This represents an alternative to decision-trees utilizing pruning methods as a means of controlling model size and preventing overfitting. The paper casts learning with this model as an empirical risk minimization problem, where the idea is to learn both the DAG structure along with the split parameters of each node. Two algorithms are presented to learn the structure and parameters in a greedy layer-wise manner using an information-gain based objective. Compared to several baseline approaches using ensembles of fixed-size decision trees, ensembles of decision DAGs seem to provide improved generalization performance for a given model size (as measured by the total number of nodes in the ensemble).
Density estimation from unweighted k-nearest neighbor graphs: a roadmap
von Luxburg, Ulrike and Alamgir, Morteza
A method of estimating a density (up to constants) from an unweighted, directed k nearest neighbor graph is described. It is assumed (more or less) that the density is continuously differentiable, supported on a compact and connected subset of $R^d$ with non-empty interior and a smooth boundary, and is upper- and lower-bounded on its support.
Variational Policy Search via Trajectory Optimization
Levine, Sergey and Koltun, Vladlen
The paper introduces a new approach in which classical policy search is combined with, and improved by, trajectory optimization methods serving as an exploration strategy. An optimization criterion with the goal of finding optimal policy parameters is decomposed with a variational approach. The variational distribution is approximated as a Gaussian distribution, which allows a solution with the iterative LQR algorithm. The overall algorithm uses expectation maximization to iterate between minimizing the KL divergence of the variational decomposition and maximizing the lower bound with respect to the policy parameters.
A simple example of Dirichlet process mixture inconsistency for the number of components
Miller, Jeffrey W. and Harrison, Matthew T.
This paper addresses one simple but potentially very important point: That Dirichlet process mixture models can be inconsistent in the number of mixture components that they infer. This is important because DPs are nowadays widely used in various types of statistical modeling, for example when building clustering type algorithms. This can have real-world implications, for example when clustering breast cancer data with the aim of identifying distinct disease subtypes. Such subtypes are used in clinical practice to inform treatment, so identifying the correct number of clusters (and hence subtypes) has a very important real-world impact.
The paper focuses on proofs concerning two specific cases where the DP turns out to be inconsistent. Both consider the case of the "standard normal DPM", where the likelihood is a univariate normal distribution with unit variance, the mean of which is subject to a normal prior with unit variance. The first proof shows that, if the data are drawn i.i.d. from a zero-mean, unit-variance normal (hence matching the assumed DPM model), $P(T=1 | \text{data})$ does not converge to 1. The second proof takes this further, demonstrating that in fact $P(T=1 | \text{data}) \to 0$.
Training and Analysing Deep Recurrent Neural Networks
Hermans, Michiel and Schrauwen, Benjamin
The authors propose a new deep architecture, which combines the hierarchy of deep learning with time-series modeling known from HMMs or recurrent neural networks. The proposed training algorithm builds the network layer-by-layer using supervised (pre-)training with a next-letter prediction objective. The experiments demonstrate that after training very large networks for about 10 days, the network performance on a Wikipedia dataset published by Hinton et al. improves over previous work. The authors then proceed to analyze and discuss details of how the network approaches its task. For example, long-term dependencies are modeled in higher layers, and correspondences between opening and closing parentheses are modeled as a "pseudo-stable attractor-like state".
Variance Reduction for Stochastic Gradient Optimization
Wang, Chong and Chen, Xi and Smola, Alexander J. and Xing, Eric P.
The authors propose to accelerate the stochastic gradient optimization algorithm by reducing the variance of the noisy gradient estimate by using the 'control variate' trick (a standard variance reduction technique for Monte Carlo simulations, explained in [3] for example). The control variate is a vector which hopefully has high correlation with the noisy gradient but for which the expectation is easier to compute. Standard convergence rates for stochastic gradient optimization depend on the variance of the gradient estimates, and thus a variance reduction technique should yield an acceleration of convergence. The authors give examples of control variates by using Taylor approximations of the gradient estimate for the optimization problem arising in regularized logistic regression as well as for MAP estimation for the latent Dirichlet Allocation (LDA) model. They compare constant step-size SGD with and without variance reduction for logistic regression on the covtype dataset, claiming that the variance reduction allows to use bigger step-sizes without having the problem of high variance and thus yields faster empirical convergence. For LDA, they compare the adaptive step-size version of the stochastic optimization method of [10] with and without variance reduction, showing a faster convergence on the held-out test log-likelihood on three large corpora.
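For readers unfamiliar with the control-variate trick, a minimal Monte Carlo illustration (my own toy example, unrelated to the paper's gradient construction) is:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=100_000)

g = np.exp(x)   # we want E[g(X)] for X ~ Uniform(0, 1); the true value is e - 1
h = x           # control variate with known mean E[h] = 0.5
a = np.cov(g, h)[0, 1] / np.var(h)   # variance-minimizing coefficient (estimated)

plain = g
controlled = g - a * (h - 0.5)       # same mean as g, but lower variance

print("true value      :", np.e - 1)
print("plain estimate  :", plain.mean(), " var:", plain.var())
print("controlled est. :", controlled.mean(), " var:", controlled.var())
```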
Sparse Additive Text Models with Low Rank Background
Shi, Lei
This paper presents a model inspired by the SAGE (Sparse Additive GEnerative) model of Eisenstein et al. The authors use a different approach for modeling the "background" component of the model. SAGE uses the same background model for all; the authors allow different backgrounds for different topics/classification labels/etc., but try to keep the background matrix low rank. To make inference faster when using this low rank constraint, they use a bound on the likelihood function that avoids the log-sum-exp calculations from SAGE. Experimental results are positive for a few different tasks.
Sparse additive models represent sets of distributions over large vocabularies as log-linear combinations of a dense, shared background vector and a sparse, distribution-specific vector. The paper presents a modification that allows distributions to have distinct background vectors, but requires that the matrix of background vectors be low-rank. This method leads to better predictive performance in a labeled classification task and in a mixed-membership LDA-like setting.
Previous work on SAGE introduced a new model for text. It built a lexical distribution by adding deviation components to a fixed background. The model presented in this paper, SAM-LRB, builds on SAGE and claims to improve it by two additions: first, providing a unique background for each class/topic; second, providing an approximation of the log-likelihood so as to yield a faster learning and inference algorithm in comparison to SAGE.
Deep Fisher Networks for Large-Scale Image Classification
Simonyan, Karen and Vedaldi, Andrea and Zisserman, Andrew
The paper proposes a new image representation for recognition based on a stacking of two layers of Fisher vector encoders, with the first layer capturing semi-local information and the second performing sum-pooling aggregation over the entire picture. The approach is inspired by the recent success of deep convolutional networks (CNNs). The key difference is that the architecture proposed in this paper is predominantly hand-designed, with relatively few parameters learned compared to CNNs. This is both the strength and the weakness of the approach, as it leads to much faster training but also slightly lower accuracy compared to fully learned deep networks.
This paper uses Fisher Vectors as inner building blocks in a recognition architecture. The basic Fisher vector module had previously demonstrated superior performance in recognition applications. Here, it is augmented with a discriminative linear projection for dimensionality reduction, and multiscale local pooling, to make it suitable for stacking. Inputs of all layers are jointly used for classification.
Causal Inference on Time Series using Restricted Structural Equation Models
Peters, Jonas and Janzing, Dominik and Schölkopf, Bernhard
This paper considers a class of structural equation models for times series data. The models allow nonlinear instantaneous effects and lagged effects. On the other hand, Granger-causality based methods do not allow instantaneous effects and a linear non-Gaussian method TS-LiNGAM (Hyvarinen et al., ICML2008, JMLR2010) assumes linear effects.
This paper introduces a model and procedure for learning instantaneous and lagged causal relationships among variables in a time series when each causal relationship is either identifiable in the sense of the additive noise model (Hoyer et al. 2009) or exhibits a time structure. The learning procedure finds a causal order by iteratively fitting VAR or GAM models where each variable is a function of all other variables and making the variable with the least dependence the lowest variable in the order. Excess parents are then pruned to produce the summary causal graph (where x->y indicates either an instantaneous or lagged cause up to the order of the VAR or GAM model that is fit). Experiments show that the method outperforms competing methods and returns no results in cases where the model can be identified (rather than wrong results).
More data speeds up training time in learning halfspaces over sparse vectors
Daniely, Amit and Linial, Nati and Shalev-Shwartz, Shai
This paper provides one of the most natural examples of a learning problem for which the problem becomes computationally tractable when given a sufficient amount of data, but is computationally intractable (though still information theoretically tractable) when given a smaller quantity of data. This computational intractability is based on a complexity-theoretic assumption about the hardness of distinguishing satisfiable 3SAT formulas from random ones at a given clause density (more specifically, the 3MAJ variant of the conjecture).
The specific problem considered by the authors is learning halfspaces over 3-sparse vectors. The authors complement their negative results with nearly matching positive results (if one believes a significantly stronger complexity-theoretic conjecture -- that hardness persists even for random formulae whose density is $n^\mu$ over the satisfiability threshold). Sadly, the algorithmic results are described in the Appendix, and are not discussed. It seems like they are essentially modifications of Hazan et al.'s 2012 algorithm, though it would be greatly appreciated if the authors included a high-level discussion of the algorithm. Even if no formal proofs of correctness will fit in the body, a description of the algorithm would be helpful.
Transportability from Multiple Environments with Limited Experiments: Completeness Results
Bareinboim, Elias and Pearl, Judea
Previously it has been shown that do-calculus is a sound inferential machinery for estimating a causal effect from a causal diagram and a set of observations and interventions. This paper further proves that it is not only sound, but also complete, meaning that every valid equality between probabilities defined on a semi-Markovian graph can be obtained through finite applications of the three rules of do-calculus. Moreover, the paper studies mz-transportability, which unifies those previously studied special cases of meta-identifiability. The authors proposed a complete algorithm to determine if a causal effect is mz-transportable, and if it is, outputs a transport formula for estimating the causal effect.
Robust Multimodal Graph Matching: Sparse Coding Meets Graph Matching
Fiori, Marcelo and Sprechmann, Pablo and Vogelstein, Joshua T. and Musé, Pablo and Sapiro, Guillermo
This paper examines the problem of approximate graph matching (isomorphism). Given graphs G, H with p nodes, represented by respective adjacency matrices A, B, find a permutation matrix P that best "matches" AP and PB.
This paper poses the multimodal graph matching problem as a convex optimization problem, and solves it using augmented Lagrangian techniques (viz., ADMM). This is an important problem with applications in several fields. Experimental results on synthetic and multiple real-world datasets demonstrate the effectiveness of the proposed approach.
Modeling Clutter Perception using Parametric Proto-object Partitioning
Yu, Chen-Ping and Hua, Wen-Yu and Samaras, Dimitris and Zelinsky, Gregory J.
This paper proposes an image-based model for visual clutter perception ("a crowded, disorderly state"). For a given image, the model begins by applying an existing superpixel clustering then computing the intensity, colour and orientation histograms of pixels within each superpixel. Boundaries between adjacent superpixels are then retained or merged to create "proto-objects". The novel merging algorithm acts on the Earth Movers Distance (EMD), a measure of the similarity between two histograms. The distribution of histogram distances in each image for each image feature is modeled as a mixture of two Weibull distributions. The crossover point between the two distributions (or a fixed cumulative percentile if a single distribution is preferred by model selection) is used as the threshold point for merging: an edge is labelled ``similar'', and the superpixels merged, if the pair of superpixels exceed the threshold point for all three features. The clutter value for each image is the ratio of the final number of proto-objects to the initial number of superpixels (i.e. 0 = no proto-objects, not cluttered; 1 = all superpixels are proto-objects).
The model is validated by comparing to human clutter rankings of a subset of an existing image database. Human observers rank images from least to most cluttered, then the median ranking for each image is used as the ground truth for clutter perception. The new model correlates more highly with human rankings of clutter than a number of previous clutter perception and image segmentation models (including human object segmentation from a previous study).
PAC-Bayes-Empirical-Bernstein Inequality
Tolstikhin, Ilya O. and Seldin, Yevgeny
This paper derives a new empirical PAC Bayesian bound by combining an existing (non-empirical) PAC Bayesian Bernstein bound (i.e., involving the true variance of the loss values) with a PAC Bayesian analysis of the concentration of the empirical variance around its true value. This new bound has the advantage of being tighter when the empirical variance is small compared to the empirical loss. Experiments on real and empirical data with simple models compare the new bound with the usual empirical PAC Bayesian bound, confirming the advantage.
Point Based Value Iteration with Optimal Belief Compression for Dec-POMDPs
MacDermed, Liam and Isbell, Charles L.
This paper proposes a new method for Dec-POMDP planning that is built out of several components. The first is a new way of solving cooperative Bayesian games using an integer linear program. The second is the transformation of the Dec-POMDP to a belief POMDP in which a "centralized mediator" must select at each timestep the best action for each agent-belief pair. The third is to automate the discovery of optimal belief compression by dividing each timestep into two parts, the first corresponding to the original Dec-POMDP and the second giving each agent a chance to select how its beliefs in that timestep are mapped to a bounded set and thus compressed. The fourth assembles these components together into a point-based value iteration method that solves the resulting belief POMDP using a variant of PERSEUS in which the CBG solver is used to compute maximizations.
Three contributions are made:
* An approach to convert DEC-POMDPs to bounded belief DEC-POMDPs
* An approach to convert bounded belief DEC-POMDPs to POMDPs with exponentially many actions
* An integer linear program to optimize one-step look-ahead policies in POMDPs with exponentially many actions
On Decomposing the Proximal Map
Yu, Yaoliang
The paper deals with an interesting theoretical question concerning the proximity operator. It investigates when the proximity operator of the sum of two convex functions decomposes into the composition of the corresponding proximity operators. The problem is interesting since in applications there is a growing interest in building complex regularizers by adding several simple terms.
The authors pursue a quite complete study. After proving a simple sufficient condition (Theorem 1), they give the main result of the paper (Theorem 4): it is a complete characterization of the property (for a function) of being radial versus the property of being "well-coupled" with positively homogeneous functions (where well-coupled means that the prox of the sum of the couple decomposes into the composition of the two individual prox maps). They also consider the case of polyhedral gauge functions, deriving a sufficient condition which is expressed by means of a cone invariance property. Examples are provided which show several prox-decomposition results, recovering known facts (in a simpler way) but also proving new ones.
The value of the paper is mainly on the theoretical side. It sheds light on the mechanism of composing proximity operators and unifies several particular results that were spread across the literature. The article is well written and technically sound. The only fault I see is that at times it is perhaps not completely rigorous, as I explain in the following.
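One well-known special case covered by this line of results is that the prox of $\lambda_1\|\cdot\|_1 + \lambda_2\|\cdot\|_2$ is soft-thresholding followed by block shrinkage. The numerical check below is my own sketch (the generic solver comparison is only approximate), not code from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def prox_l1(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l2(v, lam):
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= lam else (1.0 - lam / nrm) * v

v = np.array([1.5, -0.2, 0.7, -2.0])
lam1, lam2 = 0.3, 0.5

# Composition: soft-threshold first, then shrink the whole block.
composed = prox_l2(prox_l1(v, lam1), lam2)

# Direct (approximate) minimization of the prox objective for comparison.
obj = lambda x: 0.5 * np.sum((x - v) ** 2) + lam1 * np.abs(x).sum() + lam2 * np.linalg.norm(x)
direct = minimize(obj, np.zeros_like(v), method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-12, "maxiter": 20000}).x

print(composed)
print(direct)   # should agree with `composed` up to solver tolerance
```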
Polar Operators for Structured Sparse Estimation
Zhang, Xinhua and Yu, Yaoliang and Schuurmans, Dale
The authors build their work on top of the generalized conditional gradient (GCG) method for sparse optimization. In particular, GCG methods require computation of the polar operator for the sparse regularization function (an example is the dual norm if the regularization function is an atomic norm). In this work, the authors identify a class of regularization functions, which are based on an underlying subset cost function. The key idea is to 'lift' the regularizer into a higher dimensional space together with some constraints in the higher-dimensional space, where it has the property of 'marginalized modularity' allowing it to be reformulated as a linear program. Finally, the approach is generalized to general proximal objectives. The results demonstrate that the method is able to achieve better objective values in much less CPU time when compared with another polar operator method and accelerated proximal gradient (APG) on group Lasso and path coding problems.
Generalized Random Utility Models with Multiple Types
Soufiani, Hossein Azari and Diao, Hansheng and Lai, Zhenyu and Parkes, David C.
This paper addresses the problem of demand estimation with multiple heterogeneous agent types, specifically, classifying agents and estimating the preferences of each agent type using agents' ranking data over different alternatives. The problem is important since it has great practical value in studying the underlying preference distributions of multiple agents. To tackle the problem, the authors introduce generalized random utility models (GRUM), provide RJMCMC algorithms for parameter estimation in GRUM, and theoretically establish conditions for identifiability of the model. Experimental results on both synthetic and real datasets show the model's effectiveness.
Provable Subspace Clustering: When LRR meets SSC
Wang, Yu-Xiang and Xu, Huan and Leng, Chenlei
This paper proposes a new subspace clustering algorithm called Low Rank Sparse Subspace Clustering (LRSSC) and aims to study the conditions under which it is guaranteed to produce a correct clustering. The correctness is defined in terms of two properties. The self-expressiveness property (SEP) captures whether a data point is expressed as a linear combination of other points in the same subspace. The graph connectivity property (GCP) captures whether the points in one subspace form a connected component of the graph formed by all the data points. The LRSSC algorithm builds on two existing subspace clustering algorithms, SSC and LRR, which have complementary properties. The solution of LRR is guaranteed to satisfy the SEP under the strong assumption of independent subspaces and the GCP under weak assumptions (shown in this paper). On the other hand, the solution of SSC is guaranteed to satisfy the SEP under milder conditions, even with noisy data or data corrupted with outliers, but the solution of SSC need not satisfy the GCP. This paper combines the objective functions of both methods with the hope of obtaining a method that satisfies both SEP and GCP for some range of values of the relative weight between the two objective functions. Theorem 1 derives conditions under which LRSSC satisfies SEP in the deterministic case. These conditions are natural generalizations of existing conditions for SSC. But they are actually weaker than existing conditions. Theorem 2 derives conditions under which LRSSC satisfies SEP in the random case (data drawn at random from randomly drawn subspaces). Overall, it is shown that when the weight of the SSC term is large enough and the ratio of the data dimension to the subspace dimension grows with the log of the number of points, then LRSSC is guaranteed to satisfy SEP with high probability. I say high, because it does not tend to 1. Finally, Proposition 1 and Lemma 4 show that LRR satisfies GCP (presumably almost surely). Experiments support that for a range of the SSC weight, LRSSC works. Additional experiments on model selection show the usefulness of the analysis.
Bayesian optimization explains human active search
Borji, Ali and Itti, Laurent
The authors explore different optimization strategies for 1-D continuous functions and their relationship to how people optimize the functions. They used a wide variety of continuous functions (with one exception): polynomial, exponential, trigonometric, and the Dirac function. They also explore how people interpolate and extrapolate noisy samples from a latent function (which has a long tradition in psychology under the name of function learning) and how people select an additional sample to observe under the task of interpolating or extrapolating. Overall, they found that Gaussian processes do a better job at describing human performance than any of the approx. 20 other tested optimization methods.
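For context, a minimal 1-D Bayesian-optimization loop with a Gaussian process surrogate and expected-improvement acquisition looks roughly like the sketch below; all function choices and constants are illustrative assumptions of mine, not the authors' experimental code:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x          # unknown 1-D function to maximize
grid = np.linspace(0, 3, 300).reshape(-1, 1)

X = np.array([[0.5], [2.5]])                   # initial observations
y = f(X).ravel()

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6).fit(X, y)
    mu, sd = gp.predict(grid, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = grid[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next).item())

print("best x found:", X[np.argmax(y)].item(), "value:", y.max())
```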
Transfer Learning in a Transductive Setting
Rohrbach, Marcus and Ebert, Sandra and Schiele, Bernt
This paper describes how to attack the zero-, one-, or few-shot recognition problem, where we have a fair amount of training data for some classes, but none or very few for some other classes. It does this using three different techniques, all combined in a single framework: using semantically-meaningful mid-layer knowledge (attributes), building a graph on new classes to exploit the manifold structure, and finally by using an attribute-based representation for building the graph structure (rather than low-level features), which improves performance. The method is evaluated on 3 different datasets (Animals with Attributes, ImageNet, and MPII Cooking composites), and shows improved performance on all compared to the state-of-the-art (slightly).
Data-driven Distributionally Robust Polynomial Optimization
Mevissen, Martin and Ragnoli, Emanuele and Yu, Jia Yuan
The authors considered robust optimization for polynomial optimization problems where the uncertainty set is a set of possible distributions of the parameter. Specifically, this set is a ball around a density function estimated from data samples. The authors showed that this distributionally robust optimization formulation can be reduced to a polynomial optimization problem, hence computationally the robust counterpart is of the same hardness as the nominal (non-robust) problem, and can be solved using a tower of SDPs known in the literature. The authors also provide finite-sample guarantees for estimating the uncertainty set from data. Finally, they applied their methods to a water network problem.
Latent Maximum Margin Clustering
Zhou, Guang-Tong and Lan, Tian and Vahdat, Arash and Mori, Greg
This work proposes an extension to the maximum margin clustering (MMC) method that introduces latent variables. The motivation for adding latent variables is that they can model additional data semantics, resulting in better final clusters. The authors introduce a latent MMC (LMMC) objective, state how to optimize it, and then apply it to the task of video clustering. For this task, the latent variables are tag words, and the affinity of a video for a tag is given by a pre-trained binary tag detector. Experiments show that LMMC consistently, and sometimes substantially, beats several reasonable baselines.
Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively
Zhang, Wenhao and Wu, Si
The authors show by approximate analysis of two identical continuous attractor networks (Zhang 1996), reciprocally coupled by Gaussian weights, that such a network can approximately implement the Bayesian posterior solution for cue integration.
Documents as multiple overlapping windows into grids of counts
Perina, Alessandro and Jojic, Nebojsa and Bicego, Manuele and Truski, Andrzej
This paper describes a creative alternative for topic modeling: mixed-membership on a "counting grid." The advantage of this approach seems to be that you can move smoothly across the grid, achieving a high effective number of topics while the spatial smoothing prevents overfitting. The disadvantage seems to be that there are more parameters (grid dimension and size, and window size). A variational inference procedure that is somewhat similar to LDA's is possible, although no speed/complexity comparisons are provided. The spatial nature of the approach has potential advantages for visualization as well.
The Randomized Dependence Coefficient
López-Paz, David and Hennig, Philipp and Schölkopf, Bernhard
The authors propose a non-linear measure of dependence between two random variables. This turns out to be the canonical correlation between random, nonlinear projections of the variables after a copula transformation which renders the marginals of the r.vs invariant to linear transformations.
The paper introduces a new method called RDC to measure the statistical dependence between random variables. It combines a copula transform with a variant of kernel CCA using random projections, resulting in $O(n \log n)$ complexity. Experiments on synthetic and real benchmark data show promising results for feature selection.
The RDC is a non-linear dependency estimator that satisfies Renyi's criteria and exploits the very recent FastFood speedup trick (ICML13) \cite{journals/corr/LeSS14}. This is a straightforward recipe: 1) copularize the data, effectively preserving the dependency structure while ignoring the marginals, 2) sample k non-linear features of each datum (inspired from Bochner's theorem) and 3) solve the regular CCA eigenvalue problem on the resulting paired datasets. Ultimately, RDC feels like a copularised variation of kCCA (misleading as this may sound). Its efficiency is illustrated successfully on a set of classical non-linear bivariate dependency scenarios and 12 real datasets via a forward feature selection procedure.
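A compact rendition of that recipe, as I read it (rank-based copula transform, random sinusoidal features, then CCA), is sketched below; the scaling of the random weights and the toy data are my own choices rather than the paper's exact settings:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.cross_decomposition import CCA

def rdc(x, y, k=20, s=1.0 / 6.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x)

    def features(v):
        u = rankdata(v) / n                       # empirical copula transform
        u = np.column_stack([u, np.ones(n)])      # add a bias column
        w = rng.normal(scale=s, size=(u.shape[1], k))
        return np.sin(u @ w)                      # random nonlinear projections

    fx, fy = features(np.asarray(x)), features(np.asarray(y))
    cca = CCA(n_components=1).fit(fx, fy)
    a, b = cca.transform(fx, fy)
    return abs(np.corrcoef(a.ravel(), b.ravel())[0, 1])

x = np.random.default_rng(1).uniform(-1, 1, 500)
print("linear   :", rdc(x, 2 * x + 0.1))
print("circular :", rdc(np.cos(x * np.pi), np.sin(x * np.pi)))  # nonlinear dependence
```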
Bayesian Active Model Selection with an Application to Automated Audiometry
Gardner, Jacob R. and Malkomes, Gustavo and Garnett, Roman and Weinberger, Kilian Q. and Barbour, Dennis L. and Cunningham, John P.
The authors introduce a new method for actively selecting the model that best fits a dataset. Contrary to active learning, where the next learning point is chosen to get a better estimate of the model hyperparameters, this method selects the next point to better distinguish between a set of models. Similar active model selection techniques exist, but they need to retrain each model for each new data point to evaluate. The strength of the authors' method is that it only requires evaluating the predictive distributions of the models, without retraining.
They propose to apply this method to detect noise-induced hearing loss. The traditional way of screening for NIHL involves testing a wide range of intensities and frequencies, which is time consuming. The authors show that with their method, the number of tests to be run could be drastically decreased, reducing the cost of large-scale screenings for NIHL.
Training Very Deep Networks
Srivastava, Rupesh Kumar and Greff, Klaus and Schmidhuber, Jürgen
Machine learning researchers frequently find that they get better results by adding more and more layers to their neural networks, but the difficulties of initialization and decaying/exploding gradients have been severely limiting. Indeed, the difficulties of getting information to flow through deep neural networks arguably kept them out of widespread use for 30 years. This paper addresses this problem head on and demonstrates one method for training 100 layer nets.
The paper describes an effective method to train very deep neural networks by means of 'information highways', or building direct connections to upper network layers. Although a generalization of prior techniques, such as cross-layer connections, the authors have shown this method to be effective by experimentation. The contributions are quite novel and well supported by experimental evidence.
Particle Gibbs for Infinite Hidden Markov Models
Tripuraneni, Nilesh and Gu, Shixiang and Ge, Hong and Ghahramani, Zoubin
The paper proposes a sampler for iHMMs, which the authors show has improved mixing properties and performs better in posterior inference problems when compared to the existing state-of-the-art sampling methods. An existing Gibbs sampler is turned into a particle Gibbs sampler by using a conditional SMC step to sample the latent sequence of states. The paper uses conjugacy to derive optimal SMC proposals and ancestor sampling to improve the performance of the conditional SMC step. The result is more efficient sampling of the latent states, making the sampler robust to spurious states and yielding faster convergence.
A Bayesian Framework for Modeling Confidence in Perceptual Decision Making
Khalvati, Koosha and Rao, Rajesh P.
The authors model confidence data from two experiments (conducted by others and previously published in the scientific literature) using a POMDP. In both experiments, subjects saw a random-dot kinematogram on each trial and made a binary choice about the dominant motion direction. The first experiment used monkeys as subjects and stimuli had a fixed duration. The second experiment used people as subjects and stimuli continued until a subject made a response. The paper reports that the POMDP model does a good job of fitting the experimental data, both the accuracy data and the confidence data.
Path-SGD: Path-Normalized Optimization in Deep Neural Networks
Neyshabur, Behnam and Salakhutdinov, Ruslan and Srebro, Nathan
Deep rectified neural networks are over-parameterized in the sense that a scaling of the weights in one layer can be compensated for exactly in the subsequent layer. This paper introduces Path-SGD, a simple modification to the SGD update rule whose update is invariant to such rescaling. The method is derived from the proximal form of gradient descent, whereby a constraint term is added which preserves the norm of the "product weight" formed along each path in the network (from input to output node). Path-SGD is thus principled and shown to yield faster convergence for a standard 2-layer rectifier network, across a variety of datasets (MNIST, CIFAR-10, CIFAR-100, SVHN). As the method implicitly regularizes the neural weights, this also translates to better generalization performance on half of the datasets.
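The rescaling invariance at issue is easy to verify directly: multiply the incoming weights of a ReLU unit by c > 0 and divide its outgoing weights by c, and the network computes the same function. A small numpy check (my own illustration) follows:

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

W1 = rng.standard_normal((4, 6))   # input -> hidden
W2 = rng.standard_normal((6, 3))   # hidden -> output
x = rng.standard_normal((10, 4))

c = np.exp(rng.standard_normal(6))    # positive per-unit rescaling factors
W1_scaled = W1 * c                    # scale incoming weights of each hidden unit
W2_scaled = W2 / c[:, None]           # undo the scaling on the outgoing weights

out_a = relu(x @ W1) @ W2
out_b = relu(x @ W1_scaled) @ W2_scaled
print(np.max(np.abs(out_a - out_b)))  # ~1e-15: same function, different weights
```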
At its core, Path-SGD belongs to the family of learning algorithms which aim to be invariant to model reparametrizations. This is the central tenet of Amari's natural gradient (NG) \cite{amari_natural_1998}, whose importance has resurfaced in the area of deep learning. Path-SGD can thus be cast as an approximation to NG which focuses on a particular type of rescaling between neighboring layers. The paper would greatly benefit from such a discussion in my opinion. I also believe NG to be a much more direct way to motivate Path-SGD than the heuristics of max-norm regularization.
DeViSE: A Deep Visual-Semantic Embedding Model
Frome, Andrea and Corrado, Gregory S. and Shlens, Jonathon and Bengio, Samy and Dean, Jeffrey and Ranzato, Marc'Aurelio and Mikolov, Tomas
This computer vision paper uses an unsupervised, neural net based semantic embedding of a Wikipedia text corpus trained using skip-gram coding to enhance the performance of the Krizhevsky et al. deep network \cite{krizhevsky2012imagenet} that won the 2012 ImageNet large scale visual recognition challenge, particularly for zero-shot learning problems (i.e. previously unseen classes with some similarity to previously seen ones). The two networks are trained separately, then the output layer of \cite{krizhevsky2012imagenet} is replaced with a linear mapping to the semantic text representation and re-trained on ImageNet 1k using a dot product loss reminiscent of a structured output SVM one. The text representation is not currently re-trained. The model is tested on ImageNet 1k and 21k. With the semantic embedding output it does not quite manage to reproduce the ImageNet 1k flat-class hit rates of the original softmax-output model, but it does better than the original on hierarchical-class hit rates and on previously unseen classes from ImageNet 21k. For unseen classes, the improvements are modest in absolute terms (albeit somewhat larger in relative ones).
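The ranking loss can be written down compactly; the sketch below uses hypothetical shapes and names of my own choosing and a generic margin-based hinge penalty, so it illustrates the flavor of the objective rather than the released model:

```python
import numpy as np

def hinge_rank_loss(v, M, E, true_idx, margin=0.1):
    """v: visual feature (d_v,), M: learned projection (d_e, d_v),
    E: label embedding matrix (num_labels, d_e), true_idx: correct label index."""
    scores = E @ (M @ v)                       # dot-product similarity to every label
    true_score = scores[true_idx]
    losses = np.maximum(0.0, margin - true_score + scores)
    losses[true_idx] = 0.0                     # do not penalize the true label
    return losses.sum()

rng = np.random.default_rng(0)
d_v, d_e, n_labels = 8, 5, 10
v = rng.standard_normal(d_v)
M = rng.standard_normal((d_e, d_v)) * 0.1
E = rng.standard_normal((n_labels, d_e))
print(hinge_rank_loss(v, M, E, true_idx=3))
```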
It consists of the following steps:
1. Learn an embedding of a large number of words in a Euclidean space.
2. Learn a deep architecture which takes images as input and predicts one of 1,000 object categories. The 1,000 categories are a subset of the 'large number of words' of step (1).
3. Remove the last layer of the visual model -- leaving what is referred to as the 'core' visual model. Replace it by the word embeddings and add a layer to map the core visual model output to the word embeddings.
Generalized Denoising Auto-Encoders as Generative Models
Bengio, Yoshua and Yao, Li and Alain, Guillaume and Vincent, Pascal
This paper continues a recent line of theoretical work that seeks to explain what autoencoders learn about the data-generating distribution. Of practical importance from this work have been ways to sample from autoencoders. Specifically, this paper picks up where \cite{journals/jmlr/AlainB14} left off. That paper was able to show that autoencoders (under a number of conditions) estimate the score (derivative of the log-density) of the data-generating distribution in a way that was proportional to the difference between reconstruction and input. However, it was these conditions that limited this work: it only considered Gaussian corruption, it only applied to continuous inputs, it was proven for only squared error, and was valid only in the limit of small corruption. The current paper connects the autoencoder training procedure to the implicit estimation of the data-generating distribution for arbitrary corruption, arbitrary reconstruction loss, and can handle both discrete and continuous variables for non-infinitesimal corruption noise. Moreover, the paper presents a new training algorithm called "walkback" which estimates the same distribution as the "vanilla" denoising algorithm, but, as experimental evidence suggests, may do so in a more efficient way.
Deep Convolutional Neural Network for Image Deconvolution
Xu, Li and Ren, Jimmy S. J. and Liu, Ce and Jia, Jiaya
This paper presents a method for nonblind deconvolution of blurry images that can also fix artifacts (e.g. compression, clipping) in the input, and is robust to deviations from the input generation model. A convolutional network is used both to deblur and fix artifacts; deblurring is performed using a sequence of horizontal and vertical conv kernels, taking advantage of a high degree of separability in the pseudoinverse blur kernel, which are initialized with a decomposition of the pseudoinverse. A standard compact-kernel convnet is stacked on top, allowing further fixing of artifacts and noise, and trained end-to-end with pairs of blurry and ground truth images.
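The separability being exploited can be illustrated with a plain SVD: a 2-D kernel is approximated by a sum of outer products of 1-D filters. The snippet below is my own toy illustration on a Gaussian kernel (which is exactly rank one), not the paper's pseudoinverse kernels:

```python
import numpy as np

# A 2-D Gaussian blur kernel (exactly rank 1, so one separable term suffices;
# pseudoinverse kernels need a few more terms).
x = np.arange(-7, 8)
g = np.exp(-x ** 2 / (2 * 3.0 ** 2))
kernel = np.outer(g, g) / np.outer(g, g).sum()

U, s, Vt = np.linalg.svd(kernel)
rank = 1
approx = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(rank))

print("approximation error:", np.abs(kernel - approx).max())
# Convolving with `kernel` is then equivalent to convolving with the 1-D column
# filter sqrt(s[0]) * U[:, 0] followed by the 1-D row filter sqrt(s[0]) * Vt[0].
```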
Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation
Denton, Emily L. and Zaremba, Wojciech and Bruna, Joan and LeCun, Yann and Fergus, Rob
The paper addresses the problem of speeding up the evaluation of pre-trained image classification ConvNets. To this end, a number of techniques are proposed, which are based on the tensor representation of the conv. layer weight matrix. Namely, the following techniques are considered (Sect. 3.2-3.5):
1. SVD decomposition of the tensor
2. outer product decomposition of the tensor
3. monochromatic approximation of the first conv. layer - projecting RGB colors to a 1-D space, followed by clustering
4. biclustering tensor approximation - clustering input and output features to split the tensor into a number of sub-tensors, each of which is then separately approximated
5. fine-tuning of approximate models to (partially) recover the lost accuracy
Two-Stream Convolutional Networks for Action Recognition in Videos
Simonyan, Karen and Zisserman, Andrew
This paper proposes a model for solving discriminative tasks with video inputs. The model consists of two convolutional nets. The input to one net is an appearance frame. The input to the second net is a stack of densely computed optical flow features. Each pathway is trained separately to classify its input. The prediction for a video is obtained by taking a (weighted) average of the predictions made by each net.
Communication Efficient Distributed Machine Learning with the Parameter Server
Li, Mu and Andersen, David G. and Smola, Alexander J. and Yu, Kai
This paper presents improvements on a system for large-scale learning known as "parameter server". The parameter server is designed to perform reliable distributed machine learning in large-scale industrial systems (1000's of nodes). The architecture is based on a bipartite graph composed by "servers" and "workers". Workers compute gradients based on subsets of the training instances, while servers aggregate the workers' gradients, update the shared parameter vector and redistribute it to the workers for the next iteration. The architecture is based on asynchronous communication and allows trading-off convergence speed and accuracy through a flexible consistency model. The optimization problem is solved with a modified proximal gradient method, in which only blocks of coordinates are updated at a time. Results are shown in an ad-click prediction dataset with $O(10^{11})$ instances as well as features. Results are presented both in terms of convergence time of the algorithm and average time spent per worker. Both are roughly half of the values for the previous version of the parameter server (version called "B" in the paper). Roughly 1h convergence time using 1000 machines each with 16 cores and 192Gb RAM, 10Gb Ethernet connection (800 workers and 200 servers). Other jobs were concurrently run in the cluster. The authors claim it was not possible to compare against other algorithms since at the scale they are operating there is no other open-source solution. In the supplementary material they do compare their system with shotgun \cite{conf/icml/BradleyKBG11} and obtain faster convergence (4x) and similar value of the objective function at convergence.
Semi-Separable Hamiltonian Monte Carlo for Inference in Bayesian Hierarchical Models
Zhang, Yichuan and Sutton, Charles A.
This paper proposes a way to speed up Hamiltonian Monte Carlo (HMC) \cite{Duane1987216} sampling for hierarchical models. It is similar in spirit to RMHMC, in which the mass matrix varies according to local topology, except that here the mass matrices for each parameter type (parameter or hyperparameter) only depend on their counterpart, which allows an explicit leapfrog integrator to be used to simulate dynamics rather than an implicit integrator requiring fixed-point iteration to convergence for each step. The authors point out that their method goes beyond straightforward Gibbs sampling with HMC within each Gibbs step since their method leaves the counterpart parameter's momentum intact.
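For context, the explicit leapfrog update that such samplers build on is only a few lines; the sketch below is the generic HMC step with an identity mass matrix, my own illustration rather than the semi-separable scheme itself:

```python
import numpy as np

def leapfrog(theta, p, grad_log_prob, step_size, n_steps):
    """Generic leapfrog integration of Hamiltonian dynamics with identity mass."""
    theta, p = theta.copy(), p.copy()
    p += 0.5 * step_size * grad_log_prob(theta)      # half step for momentum
    for _ in range(n_steps - 1):
        theta += step_size * p                        # full step for position
        p += step_size * grad_log_prob(theta)         # full step for momentum
    theta += step_size * p
    p += 0.5 * step_size * grad_log_prob(theta)       # final half step for momentum
    return theta, p

# Toy target: standard normal, so grad log p(theta) = -theta.
theta, p = np.array([1.0]), np.array([0.5])
print(leapfrog(theta, p, lambda t: -t, step_size=0.1, n_steps=20))
```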
Kernel Mean Estimation via Spectral Filtering
Muandet, Krikamol and Sriperumbudur, Bharath K. and Schölkopf, Bernhard
The paper presents a family of kernel mean shrinkage estimators. These estimators generalize the ones proposed in \cite{journals/jmlr/FukumizuSG13} and can incorporate useful domain knowledge through spectral filters. Here is a summary of interesting contributions:
1. Theorem 1 that shows the consistency and admissibility of kmse presented in \cite{journals/jmlr/FukumizuSG13}.
2. The idea of spectral kmse (its use in this unsupervised setting) and similarity of final form with the supervised setting.
3. Theorem 5 that shows consistency of the proposed spectral kmse.
Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets
Joulin, Armand and Mikolov, Tomas
Endowing recurrent neural networks with memory is clearly one of the most important topics in deep learning and crucial for real reasoning. The proposed stack-augmented recurrent nets outperform simple RNNs and LSTM \cite{journals/neco/HochreiterS97} on a series of synthetic problems (learning simple algorithmic patterns). The complexity of the problems is clearly defined, and the behavior of the resulting stack RNN can be well understood and easily analyzed. However, conclusions that rest solely on these synthetic datasets carry some risk: the relevance of the problems to real sequence modeling tasks is uncertain, and the failures of the other models might be greatly reduced by a more extensive hyper-parameter search. For example, in \cite{journals/corr/LeJH15}, a very simple trick lets an RNN work very well on a toy task (an adding problem) that seems to require modeling long-term dependencies.
Probabilistic Line Searches for Stochastic Optimization
Mahsereci, Maren and Hennig, Philipp
The authors propose a probabilistic version of the "line search" procedure that is commonly used as a subroutine in many deterministic optimization algorithms. The new technique can be applied when the evaluations of the objective function and its gradients are corrupted by noise. Therefore, the proposed method can be successfully used in stochastic optimization problems, eliminating the requirement of having to specify a learning rate parameter in this type of problems. The proposed method uses a Gaussian process surrogate model for the objective and its gradients. This allows us to obtain a probabilistic version of the conditions commonly used to terminate line searches in the deterministic scenario. The result is a soft version of those conditions that is used to stop the probabilistic line search process. At each iteration within such process, the next evaluation location is collected by using Bayesian optimization methods. A series of experiments with neural networks on the MNIST and CIFAR10 datasets validate the usefulness of the proposed technique.
Fast and Accurate Inference of Plackett-Luce Models
Maystre, Lucas and Grossglauser, Matthias
This paper proposes a new inference mechanism for the Plackett-Luce model based on the preliminary observation that the ML estimate can be seen as the stationary distribution of a certain Markov chain. In fact, two inference mechanisms are proposed: one is approximate and consistent, the other converges to the ML estimate but is slower. The authors then discuss the application settings (pairwise preferences, partial rankings). Finally, the authors exhibit three sets of experiments. The first one compares the proposed algorithm to other approximate inference mechanisms for the PL model in terms of statistical efficiency. Then on real-world datasets, one experiment compares the empirical performance of the approximate methods and a second the speed of exact methods to reach a certain level of optimality.
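To give the flavor of the stationary-distribution view, here is a simplified sketch in the spirit of such Markov-chain estimators (the chain construction and toy data are mine, not the paper's exact algorithm):

```python
import numpy as np

def markov_chain_scores(wins, eps=1e-3):
    """wins[i, j] = number of comparisons in which item j beat item i.
    Returns scores read off the stationary distribution of a comparison chain."""
    n = wins.shape[0]
    P = (wins + eps) / (wins + wins.T + 2 * eps)   # empirical P(j beats i)
    np.fill_diagonal(P, 0.0)
    P = P / n                                      # scale so each row sums to < 1
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))       # self-loops make rows sum to 1
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return np.abs(pi) / np.abs(pi).sum()

# Toy data: item 2 tends to beat item 1, which tends to beat item 0.
wins = np.array([[0, 6, 8],
                 [2, 0, 7],
                 [1, 3, 0]])
print(markov_chain_scores(wins))   # scores should increase from item 0 to item 2
```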
Color Constancy by Learning to Predict Chromaticity from Luminance
Chakrabarti, Ayan
The algorithm presented here is simple and interesting. Pixel luminance, chrominance, and illumination chrominance are all histogrammed, and then evaluation is simply each pixel's luminance voting on each pixel's true chrominance for each of the "memorized" illuminations. The model can be trained generatively by simply counting pixels in the training set, or can be trained end-to-end for a slight performance boost. This algorithm's simplicity and speed are appealing, and additionally it seems like it may be a useful building block for a more sophisticated spatially-varying illumination model.
Bayesian Manifold Learning: The Locally Linear Latent Variable Model (LL-LVM)
Park, Mijung and Jitkrittum, Wittawat and Qamar, Ahmad and Szabó, Zoltán and Buesing, Lars and Sahani, Maneesh
The paper introduces a probabilistic model for nonlinear manifold discovery. It is based on a generative model with missing variables and requires a variational EM implementation, which is standard but nevertheless technical to derive in this specific context.
Unlocking neural population non-stationarities using hierarchical dynamics models
Park, Mijung and Bohner, Gergo and Macke, Jakob H.
This paper describes using an additional time scale over trials to model (slow) non-stationarities. It adds to the successful PLDS model another gain vector, matching the latent dimensions, that is constant within each trial. Many neuroscientific datasets indeed show such slow drifts, which could very well be captured by such a modeling effort.
On the Pseudo-Dimension of Nearly Optimal Auctions
Morgenstern, Jamie and Roughgarden, Tim
This paper addresses the problem of learning reserve prices that approximately maximize revenue, using sample draws from an unknown distribution over bidder valuations. The authors introduce t-level auctions, in which (roughly speaking) each bidder's bid space is effectively discretized into levels, and the bidder whose bid falls on the highest level wins and pays the lowest value that falls on its lowest level required to win.
The authors bound the number of samples needed to find an approximately revenue-maximizing auction from all auctions in a set C (e.g., from the set of 10-level auctions). They bound the difference in revenue between the revenue-maximizing t-level auction and the optimal auction. Results are presented for single-item auctions but are generalized to matroid settings and single-parameter settings.
Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning
Wu, Jiajun and Yildirim, Ilker and Lim, Joseph J. and Freeman, Bill and Tenenbaum, Joshua B.
The authors introduce a novel approach for inferring hidden physical properties of objects (mass and friction), which also allows the system to make subsequent predictions that depend on these properties. They use a black-box generative model (a physics simulator), to perform sampling-based inference, and leverage a tracking algorithm to transform the data into more suitable latent variables (and reduce its dimensionality) as well as a deep model to improve the sampler. The authors assume priors over the hidden physical properties, and make point estimates of the geometry and velocities of objects using a tracking algorithm, which comprise a full specification of the scene that can be input to a physics engine to generate simulated velocities. These simulated velocities then support inference of the hidden properties within an MCMC sampler: the properties' values are proposed and their consequent simulated velocities are generated, which are then scored against the estimated velocities, similar to ABC. A deep network can be trained as a recognition model, from the inferences of the generative model, and also from the Physics 101 dataset directly. Its predictions of the mass and friction can be used to initialize the MCMC sampler.
Smooth Interactive Submodular Set Cover
He, Bryan D. and Yue, Yisong
This paper considers a generalization of the Interactive Submodular Set Cover (ISSC) problem \cite{conf/icml/GuilloryB10}. In ISSC, the goal is to interactively collect elements until the value of the set of elements, represented by an unknown submodular function, reaches some threshold. In the original ISSC there is a single correct submodular function, which can be revealed using responses to each selected element, and a single desired threshold. This paper proposes to simultaneously require reaching some threshold for all the possible submodular functions. The threshold value is determined as a convex function of a submodular agreement measure between the given function and the responses to all elements. Each element has a cost, and so the goal is to efficiently decide which elements to collect to satisfy the goal at a small cost.
A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements
Zheng, Qinqing and Lafferty, John D.
The paper presents results on recovery of low-rank semidefinite matrices from linear measurements, using nonconvex optimization. The approach is inspired by recent work on phase retrieval, and combines spectral initialization with gradient descent. The connection to phase retrieval comes because measurements which are linear in the semidefinite matrix $X = Z Z'$ are quadratic in the factors $Z$. The paper proves recovery results which imply that correct recovery occurs when the number of measurements m is essentially proportional to n $r^2$, where n is the dimensionality and r is the rank. The convergence analysis is based on a form of restricted strong convexity (restricted because there is an $r(r-1)/2$-dimensional set of equivalent solutions along which the objective is flat). This condition also implies linear convergence of the proposed algorithm.
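A bare-bones version of the factored approach (spectral initialization plus gradient descent on the factor Z) might look like the sketch below; the measurement model, step size, and iteration count are illustrative choices of mine, not the tuned settings analyzed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 30, 2, 600

Z_true = rng.standard_normal((n, r))
X_true = Z_true @ Z_true.T

# Random symmetric linear measurements y_i = <A_i, X_true>.
A = rng.standard_normal((m, n, n))
A = 0.5 * (A + A.transpose(0, 2, 1))
y = np.einsum('mij,ij->m', A, X_true)

# Spectral initialization: top-r eigenpairs of (1/m) * sum_i y_i A_i.
M = np.einsum('m,mij->ij', y, A) / m
vals, vecs = np.linalg.eigh(M)
Z = vecs[:, -r:] * np.sqrt(np.maximum(vals[-r:], 0.0))

# Gradient descent on the factored least-squares objective.
step = 0.2 / np.linalg.norm(M, 2)
for _ in range(300):
    resid = np.einsum('mij,ij->m', A, Z @ Z.T) - y
    grad = 2.0 / m * np.einsum('m,mij,jk->ik', resid, A, Z)
    Z = Z - step * grad

print("relative error:", np.linalg.norm(Z @ Z.T - X_true) / np.linalg.norm(X_true))
```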
The implementation seems awful. When compared to recent implementations, e.g. http://arxiv.org/abs/1408.2467 the performance seems orders of magnitude away from the state of the art -- and being an order of magnitude faster than general-purpose SDP solver on the nuclear norm does not make it any better. The authors should acknowledge that and compare the results with other codes on some established benchmark (e.g. Lenna), so as to show that the price in terms of run-time brings about much better performance in terms of objective function values (SNR, RMSE) -- which is plausible, but far from certain.
Space-Time Local Embeddings
Sun, Ke and Wang, Jun and Kalousis, Alexandros and Marchand-Maillet, Stéphane
The paper presents a data visualisation method based on the concept of space-time. The space-time representation is capable of showing a broader family of proximities than an Euclidean space with the same dimensionality. Based on the KL measure, the authors argue that the lower dimensional representation of the high dimensional data using the space-time local embedding method can keep more information than Euclidean embeddings. I am quite convinced, but there is one question about interpretability of the visualised data in space-time.
Parallel Correlation Clustering on Big Graphs
Pan, Xinghao and Papailiopoulos, Dimitris S. and Oymak, Samet and Recht, Benjamin and Ramchandran, Kannan and Jordan, Michael I.
This work addresses an important special case of the correlation clustering problem: Given as input a graph with edges labeled -1 (disagreement) or +1 (agreement), the goal is to decompose the graph so as to maximize agreement within components. Building on recent work \cite{conf/kdd/BonchiGL14} \cite{conf/kdd/ChierichettiDK14}, this paper contributes two concurrent algorithms, a proof of their approximation ratio, a run-time analysis as well as a set of experiments which demonstrate convincingly the advantage of the proposed algorithms over the state of the art.
Expressing an Image Stream with a Sequence of Natural Sentences
Park, Cesc C. and Kim, Gunhee
The paper attacks the problem of describing a sequence of images from blog-posts with a sequence of consistent sentences. For this the paper proposes to first retrieve the K=5 most similar images and associated sentences from the training set for each query image. The main contribution of the paper lies in defining a way to select the most relevant sentences for the query image sequence, providing a coherent description. For this sentences are first embedded in a vector and then the sequence of sentences is modeled with a bidirectional LSTM. The output of the bi-directional LSTM is first fed through a relu \cite{conf/icml/NairH10} and fully connected layer and then scored with a compatibility score between image and sentence. Additionally a local coherence model \cite{journals/coling/BarzilayL08} is included to enforce the compatibility between sentences.
Planar Ultrametrics for Image Segmentation
Yarkony, Julian and Fowlkes, Charless C.
The paper presents a method to obtain a hierarchical clustering of a planar graph by posing the problem as that of approximating a set of edge weights using an ultrametric. This is accomplished by minimizing the $\ell_2$ norm between the given edge weights and the learnt ultrametric. Learning the ultrametric amounts to estimating a collection of multicuts that satisfies a hierarchical partitioning constraint. An efficient algorithm is presented that solves an approximation based on finding a linear combination of a subset of possible two-way cuts of the graph.
Logarithmic Time Online Multiclass prediction
Choromanska, Anna and Langford, John
This paper proposes a novel online algorithm for constructing a multiclass classifier that enjoys a time complexity logarithmic in the number of classes k. This is done by constructing online a decision tree which locally maximizes an appropriate novel objective function, which measures the quality of a tree according to a combined "balancedness" and "purity" score. A theoretical analysis (of a probably intractable algorithm) is provided via a boosting argument (assuming weak learnability), essentially extending the work of Kearns and Mansour (1996) \cite{conf/stoc/KearnsM96} to the multiclass setup. A concrete algorithm, without guarantees, is given for a relaxed problem (but see below); it is quite simple, natural, and interesting.
Robust Portfolio Optimization
Qiu, Huitong and Han, Fang and Liu, Han and Caffo, Brian
The authors derive an estimator of a "proxy" of the covariance matrix of a stationary stochastic process (in their case asset returns) which is robust to data outliers and does not make assumptions on the tails of the distribution. They show that for elliptical distributions, which include Gaussians, this proxy is consistent with the true covariance matrix up to a scaling factor, and that their proposed estimator of the proxy has bounded error.
Covariance-Controlled Adaptive Langevin Thermostat for Large-Scale Bayesian Sampling
Shang, Xiaocheng and Zhu, Zhanxing and Leimkuhler, Benedict J. and Storkey, Amos J. | CommonCrawl |
4 Conservation of Energy
4–1 What is energy?
In this chapter, we begin our more detailed study of the different aspects of physics, having finished our description of things in general. To illustrate the ideas and the kind of reasoning that might be used in theoretical physics, we shall now examine one of the most basic laws of physics, the conservation of energy.
There is a fact, or if you wish, a law, governing all natural phenomena that are known to date. There is no known exception to this law—it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. (Something like the bishop on a red square, and after a number of moves—details unknown—it is still on some red square. It is a law of this nature.) Since it is an abstract idea, we shall illustrate the meaning of it by an analogy.
Imagine a child, perhaps "Dennis the Menace," who has blocks which are absolutely indestructible, and cannot be divided into pieces. Each is the same as the other. Let us suppose that he has $28$ blocks. His mother puts him with his $28$ blocks into a room at the beginning of the day. At the end of the day, being curious, she counts the blocks very carefully, and discovers a phenomenal law—no matter what he does with the blocks, there are always $28$ remaining! This continues for a number of days, until one day there are only $27$ blocks, but a little investigating shows that there is one under the rug—she must look everywhere to be sure that the number of blocks has not changed. One day, however, the number appears to change—there are only $26$ blocks. Careful investigation indicates that the window was open, and upon looking outside, the other two blocks are found. Another day, careful count indicates that there are $30$ blocks! This causes considerable consternation, until it is realized that Bruce came to visit, bringing his blocks with him, and he left a few at Dennis' house. After she has disposed of the extra blocks, she closes the window, does not let Bruce in, and then everything is going along all right, until one time she counts and finds only $25$ blocks. However, there is a box in the room, a toy box, and the mother goes to open the toy box, but the boy says "No, do not open my toy box," and screams. Mother is not allowed to open the toy box. Being extremely curious, and somewhat ingenious, she invents a scheme! She knows that a block weighs three ounces, so she weighs the box at a time when she sees $28$ blocks, and it weighs $16$ ounces. The next time she wishes to check, she weighs the box again, subtracts sixteen ounces and divides by three. She discovers the following: \begin{equation} \label{Eq:I:4:1} \begin{pmatrix} \text{number of}\\ \text{blocks seen} \end{pmatrix}+ \frac{(\text{weight of box})-\text{$16$ ounces}}{\text{$3$ ounces}}= \text{constant}. \end{equation} There then appear to be some new deviations, but careful study indicates that the dirty water in the bathtub is changing its level. The child is throwing blocks into the water, and she cannot see them because it is so dirty, but she can find out how many blocks are in the water by adding another term to her formula. Since the original height of the water was $6$ inches and each block raises the water a quarter of an inch, this new formula would be: \begin{align} \begin{pmatrix} \text{number of}\\ \text{blocks seen} \end{pmatrix}&+ \frac{(\text{weight of box})-\text{$16$ ounces}} {\text{$3$ ounces}}\notag\\[1ex] \label{Eq:I:4:2} &+\frac{(\text{height of water})-\text{$6$ inches}} {\text{$1/4$ inch}}= \text{constant}. \end{align} In the gradual increase in the complexity of her world, she finds a whole series of terms representing ways of calculating how many blocks are in places where she is not allowed to look.
As a result, she finds a complex formula, a quantity which has to be computed, which always stays the same in her situation.
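Read purely as bookkeeping, the formulas above can be evaluated directly. The following short Python sketch is only an illustration of Eq. (4.2); the function name and the two sample "days" are invented, not part of the lecture.

```python
def blocks_total(blocks_seen, box_weight_oz, water_height_in):
    """Eq. (4.2) as a function: the visible blocks plus the ones inferred
    from the box weight and the bathtub level."""
    in_box = (box_weight_oz - 16) / 3        # each block weighs 3 oz; the empty box is 16 oz
    in_water = (water_height_in - 6) / 0.25  # each block raises the water 1/4 inch
    return blocks_seen + in_box + in_water

# Two different (made-up) days give the same constant, 28:
print(blocks_total(28, 16, 6))   # all blocks visible -> 28.0
print(blocks_total(20, 28, 7))   # 4 hidden in the box, 4 in the bathtub -> 28.0
```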
What is the analogy of this to the conservation of energy? The most remarkable aspect that must be abstracted from this picture is that there are no blocks. Take away the first terms in (4.1) and (4.2) and we find ourselves calculating more or less abstract things. The analogy has the following points. First, when we are calculating the energy, sometimes some of it leaves the system and goes away, or sometimes some comes in. In order to verify the conservation of energy, we must be careful that we have not put any in or taken any out. Second, the energy has a large number of different forms, and there is a formula for each one. These are: gravitational energy, kinetic energy, heat energy, elastic energy, electrical energy, chemical energy, radiant energy, nuclear energy, mass energy. If we total up the formulas for each of these contributions, it will not change except for energy going in and out.
It is important to realize that in physics today, we have no knowledge of what energy is. We do not have a picture that energy comes in little blobs of a definite amount. It is not that way. However, there are formulas for calculating some numerical quantity, and when we add it all together it gives "$28$"—always the same number. It is an abstract thing in that it does not tell us the mechanism or the reasons for the various formulas.
4–2 Gravitational potential energy
Conservation of energy can be understood only if we have the formula for all of its forms. I wish to discuss the formula for gravitational energy near the surface of the Earth, and I wish to derive this formula in a way which has nothing to do with history but is simply a line of reasoning invented for this particular lecture to give you an illustration of the remarkable fact that a great deal about nature can be extracted from a few facts and close reasoning. It is an illustration of the kind of work theoretical physicists become involved in. It is patterned after a most excellent argument by Mr. Carnot on the efficiency of steam engines.1
Consider weight-lifting machines—machines which have the property that they lift one weight by lowering another. Let us also make a hypothesis: that there is no such thing as perpetual motion with these weight-lifting machines. (In fact, that there is no perpetual motion at all is a general statement of the law of conservation of energy.) We must be careful to define perpetual motion. First, let us do it for weight-lifting machines. If, when we have lifted and lowered a lot of weights and restored the machine to the original condition, we find that the net result is to have lifted a weight, then we have a perpetual motion machine because we can use that lifted weight to run something else. That is, provided the machine which lifted the weight is brought back to its exact original condition, and furthermore that it is completely self-contained—that it has not received the energy to lift that weight from some external source—like Bruce's blocks.
Fig. 4–1. Simple weight-lifting machine.
A very simple weight-lifting machine is shown in Fig. 4–1. This machine lifts weights three units "strong." We place three units on one balance pan, and one unit on the other. However, in order to get it actually to work, we must lift a little weight off the left pan. On the other hand, we could lift a one-unit weight by lowering the three-unit weight, if we cheat a little by lifting a little weight off the other pan. Of course, we realize that with any actual lifting machine, we must add a little extra to get it to run. This we disregard, temporarily. Ideal machines, although they do not exist, do not require anything extra. A machine that we actually use can be, in a sense, almost reversible: that is, if it will lift the weight of three by lowering a weight of one, then it will also lift nearly the weight of one the same amount by lowering the weight of three.
We imagine that there are two classes of machines, those that are not reversible, which includes all real machines, and those that are reversible, which of course are actually not attainable no matter how careful we may be in our design of bearings, levers, etc. We suppose, however, that there is such a thing—a reversible machine—which lowers one unit of weight (a pound or any other unit) by one unit of distance, and at the same time lifts a three-unit weight. Call this reversible machine, Machine $A$. Suppose this particular reversible machine lifts the three-unit weight a distance $X$. Then suppose we have another machine, Machine $B$, which is not necessarily reversible, which also lowers a unit weight a unit distance, but which lifts three units a distance $Y$. We can now prove that $Y$ is not higher than $X$; that is, it is impossible to build a machine that will lift a weight any higher than it will be lifted by a reversible machine. Let us see why. Let us suppose that $Y$ were higher than $X$. We take a one-unit weight and lower it one unit height with Machine $B$, and that lifts the three-unit weight up a distance $Y$. Then we could lower the weight from $Y$ to $X$, obtaining free power, and use the reversible Machine $A$, running backwards, to lower the three-unit weight a distance $X$ and lift the one-unit weight by one unit height. This will put the one-unit weight back where it was before, and leave both machines ready to be used again! We would therefore have perpetual motion if $Y$ were higher than $X$, which we assumed was impossible. With those assumptions, we thus deduce that $Y$ is not higher than $X$, so that of all machines that can be designed, the reversible machine is the best.
We can also see that all reversible machines must lift to exactly the same height. Suppose that $B$ were really reversible also. The argument that $Y$ is not higher than $X$ is, of course, just as good as it was before, but we can also make our argument the other way around, using the machines in the opposite order, and prove that $X$ is not higher than $Y$. This, then, is a very remarkable observation because it permits us to analyze the height to which different machines are going to lift something without looking at the interior mechanism. We know at once that if somebody makes an enormously elaborate series of levers that lift three units a certain distance by lowering one unit by one unit distance, and we compare it with a simple lever which does the same thing and is fundamentally reversible, his machine will lift it no higher, but perhaps less high. If his machine is reversible, we also know exactly how high it will lift. To summarize: every reversible machine, no matter how it operates, which drops one pound one foot and lifts a three-pound weight always lifts it the same distance, $X$. This is clearly a universal law of great utility. The next question is, of course, what is $X$?
Fig. 4–2. A reversible machine.
Suppose we have a reversible machine which is going to lift this distance $X$, three for one. We set up three balls in a rack which does not move, as shown in Fig. 4–2. One ball is held on a stage at a distance one foot above the ground. The machine can lift three balls, lowering one by a distance $1$. Now, we have arranged that the platform which holds three balls has a floor and two shelves, exactly spaced at distance $X$, and further, that the rack which holds the balls is spaced at distance $X$, (a). First we roll the balls horizontally from the rack to the shelves, (b), and we suppose that this takes no energy because we do not change the height. The reversible machine then operates: it lowers the single ball to the floor, and it lifts the rack a distance $X$, (c). Now we have ingeniously arranged the rack so that these balls are again even with the platforms. Thus we unload the balls onto the rack, (d); having unloaded the balls, we can restore the machine to its original condition. Now we have three balls on the upper three shelves and one at the bottom. But the strange thing is that, in a certain way of speaking, we have not lifted two of them at all because, after all, there were balls on shelves $2$ and $3$ before. The resulting effect has been to lift one ball a distance $3X$. Now, if $3X$ exceeds one foot, then we can lower the ball to return the machine to the initial condition, (f), and we can run the apparatus again. Therefore $3X$ cannot exceed one foot, for if $3X$ exceeds one foot we can make perpetual motion. Likewise, we can prove that one foot cannot exceed $3X$, by making the whole machine run the opposite way, since it is a reversible machine. Therefore $3X$ is neither greater nor less than a foot, and we discover then, by argument alone, the law that $X=\tfrac{1}{3}$ foot. The generalization is clear: one pound falls a certain distance in operating a reversible machine; then the machine can lift $p$ pounds this distance divided by $p$. Another way of putting the result is that three pounds times the height lifted, which in our problem was $X$, is equal to one pound times the distance lowered, which is one foot in this case. If we take all the weights and multiply them by the heights at which they are now, above the floor, let the machine operate, and then multiply all the weights by all the heights again, there will be no change. (We have to generalize the example where we moved only one weight to the case where when we lower one we lift several different ones—but that is easy.)
We call the sum of the weights times the heights gravitational potential energy—the energy which an object has because of its relationship in space, relative to the earth. The formula for gravitational energy, then, so long as we are not too far from the earth (the force weakens as we go higher) is \begin{equation} \label{Eq:I:4:3} \begin{pmatrix} \text{gravitational}\\ \text{potential energy}\\ \text{for one object} \end{pmatrix}= (\text{weight})\times(\text{height}). \end{equation} It is a very beautiful line of reasoning. The only problem is that perhaps it is not true. (After all, nature does not have to go along with our reasoning.) For example, perhaps perpetual motion is, in fact, possible. Some of the assumptions may be wrong, or we may have made a mistake in reasoning, so it is always necessary to check. It turns out experimentally, in fact, to be true.
The general name of energy which has to do with location relative to something else is called potential energy. In this particular case, of course, we call it gravitational potential energy. If it is a question of electrical forces against which we are working, instead of gravitational forces, if we are "lifting" charges away from other charges with a lot of levers, then the energy content is called electrical potential energy. The general principle is that the change in the energy is the force times the distance that the force is pushed, and that this is a change in energy in general: \begin{equation} \label{Eq:I:4:4} \begin{pmatrix} \text{change in}\\ \text{energy} \end{pmatrix}= (\text{force})\times \begin{pmatrix} \text{distance force}\\ \text{acts through} \end{pmatrix}. \end{equation} We will return to many of these other kinds of energy as we continue the course.
Fig. 4–3. Inclined plane.
The principle of the conservation of energy is very useful for deducing what will happen in a number of circumstances. In high school we learned a lot of laws about pulleys and levers used in different ways. We can now see that these "laws" are all the same thing, and that we did not have to memorize $75$ rules to figure it out. A simple example is a smooth inclined plane which is, happily, a three-four-five triangle (Fig. 4–3). We hang a one-pound weight on the inclined plane with a pulley, and on the other side of the pulley, a weight $W$. We want to know how heavy $W$ must be to balance the one pound on the plane. How can we figure that out? If we say it is just balanced, it is reversible and so can move up and down, and we can consider the following situation. In the initial circumstance, (a), the one pound weight is at the bottom and weight $W$ is at the top. When $W$ has slipped down in a reversible way, (b), we have a one-pound weight at the top and the weight $W$ the slant distance, or five feet, from the plane in which it was before. We lifted the one-pound weight only three feet and we lowered $W$ pounds by five feet. Therefore $W=\tfrac{3}{5}$ of a pound. Note that we deduced this from the conservation of energy, and not from force components. Cleverness, however, is relative. It can be deduced in a way which is even more brilliant, discovered by Stevinus and inscribed on his tombstone.2 Figure 4–4 explains that it has to be $\tfrac{3}{5}$ of a pound, because the chain does not go around. It is evident that the lower part of the chain is balanced by itself, so that the pull of the five weights on one side must balance the pull of three weights on the other, or whatever the ratio of the legs. You see, by looking at this diagram, that $W$ must be $\tfrac{3}{5}$ of a pound. (If you get an epitaph like that on your gravestone, you are doing fine.)
Fig. 4–4. The epitaph of Stevinus.
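The same weights-times-heights accounting can be written as a one-line computation. The sketch below is only an illustration of the inclined-plane balance just described; the function name is mine and the numbers are those of the three-four-five triangle.

```python
def balancing_weight(lifted_weight, lifted_height, lowered_distance):
    """Weight W whose descent through `lowered_distance` keeps the sum of
    (weight x height) unchanged while `lifted_weight` rises `lifted_height`."""
    return lifted_weight * lifted_height / lowered_distance

# One pound rises 3 ft vertically while W slides down 5 ft along the slant.
print(balancing_weight(lifted_weight=1.0, lifted_height=3.0, lowered_distance=5.0))
# -> 0.6, i.e. 3/5 of a pound, as in the Stevinus argument
```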
Let us now illustrate the energy principle with a more complicated problem, the screw jack shown in Fig. 4–5. A handle $20$ inches long is used to turn the screw, which has $10$ threads to the inch. We would like to know how much force would be needed at the handle to lift one ton ($2000$ pounds). If we want to lift the ton one inch, say, then we must turn the handle around ten times. When it goes around once it goes approximately $126$ inches. The handle must thus travel $1260$ inches, and if we used various pulleys, etc., we would be lifting our one ton with an unknown smaller weight $W$ applied to the end of the handle. So we find out that $W$ is about $1.6$ pounds. This is a result of the conservation of energy.
Fig. 4–5. A screw jack.
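As a quick check of the arithmetic for the screw jack, here is a short sketch of the same energy balance; the variable names are mine and the numbers are the ones quoted in the text.

```python
import math

load = 2000.0          # pounds to be lifted (one ton)
lift = 1.0             # inches the load rises
threads_per_inch = 10  # so ten handle turns raise the screw one inch
handle_length = 20.0   # inches

turns = threads_per_inch * lift
handle_travel = turns * 2 * math.pi * handle_length  # ~126 inches per turn, ~1260 in all

# Conservation of energy: W * (distance the handle moves) = load * (height lifted)
W = load * lift / handle_travel
print(round(handle_travel), round(W, 2))  # ~1257 inches, ~1.59 lb ("about 1.6 pounds")
```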
Fig. 4–6. Weighted rod supported on one end.
Take now the somewhat more complicated example shown in Fig. 4–6. A rod or bar, $8$ feet long, is supported at one end. In the middle of the bar is a weight of $60$ pounds, and at a distance of two feet from the support there is a weight of $100$ pounds. How hard do we have to lift the end of the bar in order to keep it balanced, disregarding the weight of the bar? Suppose we put a pulley at one end and hang a weight on the pulley. How big would the weight $W$ have to be in order for it to balance? We imagine that the weight falls any arbitrary distance—to make it easy for ourselves suppose it goes down $4$ inches—how high would the two load weights rise? The center rises $2$ inches, and the point a quarter of the way from the fixed end lifts $1$ inch. Therefore, the principle that the sum of the heights times the weights does not change tells us that the weight $W$ times $4$ inches down, plus $60$ pounds times $2$ inches up, plus $100$ pounds times $1$ inch has to add up to nothing: \begin{equation} \label{Eq:I:4:5} -4W+(2)(60)+(1)(100)=0,\quad W=\text{$55$ lb}. \end{equation} Thus we must have a $55$-pound weight to balance the bar. In this way we can work out the laws of "balance"—the statics of complicated bridge arrangements, and so on. This approach is called the principle of virtual work, because in order to apply this argument we had to imagine that the structure moves a little—even though it is not really moving or even movable. We use the very small imagined motion to apply the principle of conservation of energy.
4–3 Kinetic energy
To illustrate another type of energy we consider a pendulum (Fig. 4–7). If we pull the mass aside and release it, it swings back and forth. In its motion, it loses height in going from either end to the center. Where does the potential energy go? Gravitational energy disappears when it is down at the bottom; nevertheless, it will climb up again. The gravitational energy must have gone into another form. Evidently it is by virtue of its motion that it is able to climb up again, so we have the conversion of gravitational energy into some other form when it reaches the bottom.
Fig. 4–7. Pendulum.
We must get a formula for the energy of motion. Now, recalling our arguments about reversible machines, we can easily see that in the motion at the bottom must be a quantity of energy which permits it to rise a certain height, and which has nothing to do with the machinery by which it comes up or the path by which it comes up. So we have an equivalence formula something like the one we wrote for the child's blocks. We have another form to represent the energy. It is easy to say what it is. The kinetic energy at the bottom equals the weight times the height that it could go, corresponding to its velocity: $\text{K.E.}= WH$. What we need is the formula which tells us the height by some rule that has to do with the motion of objects. If we start something out with a certain velocity, say straight up, it will reach a certain height; we do not know what it is yet, but it depends on the velocity—there is a formula for that. Then to find the formula for kinetic energy for an object moving with velocity $V$, we must calculate the height that it could reach, and multiply by the weight. We shall soon find that we can write it this way: \begin{equation} \label{Eq:I:4:6} \text{K.E.}=WV^2/2g. \end{equation} Of course, the fact that motion has energy has nothing to do with the fact that we are in a gravitational field. It makes no difference where the motion came from. This is a general formula for various velocities. Both (4.3) and (4.6) are approximate formulas, the first because it is incorrect when the heights are great, i.e., when the heights are so high that gravity is weakening; the second, because of the relativistic correction at high speeds. However, when we do finally get the exact formula for the energy, then the law of conservation of energy is correct.
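To attach a number to this formula, the sketch below computes the height an object could climb and the corresponding kinetic energy; the weight and speed chosen are arbitrary examples, not values from the lecture.

```python
g = 32.2  # ft/s^2, acceleration of gravity near the Earth's surface

def height_reached(V):
    """Height an object thrown straight up with speed V could reach (ignoring air resistance)."""
    return V**2 / (2 * g)

def kinetic_energy(W, V):
    """K.E. = W V^2 / 2g: the weight times the height it could climb."""
    return W * height_reached(V)

# A 1-pound ball thrown upward at 20 ft/s could climb about 6.2 ft,
# so its kinetic energy at launch is about 6.2 ft-lb.
print(round(height_reached(20.0), 1), round(kinetic_energy(1.0, 20.0), 1))
```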
4–4 Other forms of energy
We can continue in this way to illustrate the existence of energy in other forms. First, consider elastic energy. If we pull down on a spring, we must do some work, for when we have it down, we can lift weights with it. Therefore in its stretched condition it has a possibility of doing some work. If we were to evaluate the sums of weights times heights, it would not check out—we must add something else to account for the fact that the spring is under tension. Elastic energy is the formula for a spring when it is stretched. How much energy is it? If we let go, the elastic energy, as the spring passes through the equilibrium point, is converted to kinetic energy and it goes back and forth between compressing or stretching the spring and kinetic energy of motion. (There is also some gravitational energy going in and out, but we can do this experiment "sideways" if we like.) It keeps going until the losses—Aha! We have cheated all the way through by putting on little weights to move things or saying that the machines are reversible, or that they go on forever, but we can see that things do stop, eventually. Where is the energy when the spring has finished moving up and down? This brings in another form of energy: heat energy.
Inside a spring or a lever there are crystals which are made up of lots of atoms, and with great care and delicacy in the arrangement of the parts one can try to adjust things so that as something rolls on something else, none of the atoms do any jiggling at all. But one must be very careful. Ordinarily when things roll, there is bumping and jiggling because of the irregularities of the material, and the atoms start to wiggle inside. So we lose track of that energy; we find the atoms are wiggling inside in a random and confused manner after the motion slows down. There is still kinetic energy, all right, but it is not associated with visible motion. What a dream! How do we know there is still kinetic energy? It turns out that with thermometers you can find out that, in fact, the spring or the lever is warmer, and that there is really an increase of kinetic energy by a definite amount. We call this form of energy heat energy, but we know that it is not really a new form, it is just kinetic energy—internal motion. (One of the difficulties with all these experiments with matter that we do on a large scale is that we cannot really demonstrate the conservation of energy and we cannot really make our reversible machines, because every time we move a large clump of stuff, the atoms do not remain absolutely undisturbed, and so a certain amount of random motion goes into the atomic system. We cannot see it, but we can measure it with thermometers, etc.)
There are many other forms of energy, and of course we cannot describe them in any more detail just now. There is electrical energy, which has to do with pushing and pulling by electric charges. There is radiant energy, the energy of light, which we know is a form of electrical energy because light can be represented as wigglings in the electromagnetic field. There is chemical energy, the energy which is released in chemical reactions. Actually, elastic energy is, to a certain extent, like chemical energy, because chemical energy is the energy of the attraction of the atoms, one for the other, and so is elastic energy. Our modern understanding is the following: chemical energy has two parts, kinetic energy of the electrons inside the atoms, so part of it is kinetic, and electrical energy of interaction of the electrons and the protons—the rest of it, therefore, is electrical. Next we come to nuclear energy, the energy which is involved with the arrangement of particles inside the nucleus, and we have formulas for that, but we do not have the fundamental laws. We know that it is not electrical, not gravitational, and not purely kinetic, but we do not know what it is. It seems to be an additional form of energy. Finally, associated with the relativity theory, there is a modification of the laws of kinetic energy, or whatever you wish to call it, so that kinetic energy is combined with another thing called mass energy. An object has energy from its sheer existence. If I have a positron and an electron, standing still doing nothing—never mind gravity, never mind anything—and they come together and disappear, radiant energy will be liberated, in a definite amount, and the amount can be calculated. All we need know is the mass of the object. It does not depend on what it is—we make two things disappear, and we get a certain amount of energy. The formula was first found by Einstein; it is $E=mc^2$.
It is obvious from our discussion that the law of conservation of energy is enormously useful in making analyses, as we have illustrated in a few examples without knowing all the formulas. If we had all the formulas for all kinds of energy, we could analyze how many processes should work without having to go into the details. Therefore conservation laws are very interesting. The question naturally arises as to what other conservation laws there are in physics. There are two other conservation laws which are analogous to the conservation of energy. One is called the conservation of linear momentum. The other is called the conservation of angular momentum. We will find out more about these later. In the last analysis, we do not understand the conservation laws deeply. We do not understand the conservation of energy. We do not understand energy as a certain number of little blobs. You may have heard that photons come out in blobs and that the energy of a photon is Planck's constant times the frequency. That is true, but since the frequency of light can be anything, there is no law that says that energy has to be a certain definite amount. Unlike Dennis' blocks, there can be any amount of energy, at least as presently understood. So we do not understand this energy as counting something at the moment, but just as a mathematical quantity, which is an abstract and rather peculiar circumstance. In quantum mechanics it turns out that the conservation of energy is very closely related to another important property of the world, things do not depend on the absolute time. We can set up an experiment at a given moment and try it out, and then do the same experiment at a later moment, and it will behave in exactly the same way. Whether this is strictly true or not, we do not know. If we assume that it is true, and add the principles of quantum mechanics, then we can deduce the principle of the conservation of energy. It is a rather subtle and interesting thing, and it is not easy to explain. The other conservation laws are also linked together. The conservation of momentum is associated in quantum mechanics with the proposition that it makes no difference where you do the experiment, the results will always be the same. As independence in space has to do with the conservation of momentum, independence of time has to do with the conservation of energy, and finally, if we turn our apparatus, this too makes no difference, and so the invariance of the world to angular orientation is related to the conservation of angular momentum. Besides these, there are three other conservation laws, that are exact so far as we can tell today, which are much simpler to understand because they are in the nature of counting blocks.
The first of the three is the conservation of charge, and that merely means that you count how many positive, minus how many negative electrical charges you have, and the number is never changed. You may get rid of a positive with a negative, but you do not create any net excess of positives over negatives. Two other laws are analogous to this one—one is called the conservation of baryons. There are a number of strange particles, a neutron and a proton are examples, which are called baryons. In any reaction whatever in nature, if we count how many baryons are coming into a process, the number of baryons3 which come out will be exactly the same. There is another law, the conservation of leptons. We can say that the group of particles called leptons are: electron, muon, and neutrino. There is an antielectron which is a positron, that is, a $-1$ lepton. Counting the total number of leptons in a reaction reveals that the number in and out never changes, at least so far as we know at present.
These are the six conservation laws, three of them subtle, involving space and time, and three of them simple, in the sense of counting something.
With regard to the conservation of energy, we should note that available energy is another matter—there is a lot of jiggling around in the atoms of the water of the sea, because the sea has a certain temperature, but it is impossible to get them herded into a definite motion without taking energy from somewhere else. That is, although we know for a fact that energy is conserved, the energy available for human utility is not conserved so easily. The laws which govern how much energy is available are called the laws of thermodynamics and involve a concept called entropy for irreversible thermodynamic processes.
Finally, we remark on the question of where we can get our supplies of energy today. Our supplies of energy are from the sun, rain, coal, uranium, and hydrogen. The sun makes the rain, and the coal also, so that all these are from the sun. Although energy is conserved, nature does not seem to be interested in it; she liberates a lot of energy from the sun, but only one part in two billion falls on the earth. Nature has conservation of energy, but does not really care; she spends a lot of it in all directions. We have already obtained energy from uranium; we can also get energy from hydrogen, but at present only in an explosive and dangerous condition. If it can be controlled in thermonuclear reactions, it turns out that the energy that can be obtained from $10$ quarts of water per second is equal to all of the electrical power generated in the United States. With $150$ gallons of running water a minute, you have enough fuel to supply all the energy which is used in the United States today! Therefore it is up to the physicist to figure out how to liberate us from the need for having energy. It can be done.
Our point here is not so much the result, (4.3), which in fact you may already know, as the possibility of arriving at it by theoretical reasoning. ↩
Stevinus' tombstone has never been found. He used a similar diagram as his trademark. ↩
Counting antibaryons as $-1$ baryon. ↩
Copyright © 1963, 2006, 2013 by the California Institute of Technology, Michael A. Gottlieb and Rudolf Pfeiffer | CommonCrawl |
Definition:Complement
This page lists articles associated with the same title. If an internal link led you here, you may wish to change the link to point directly to the intended article.
Complement may refer to:
Complementary angles: two angles whose measures add up to the measure of a right angle.
Complements of Parallelograms: the two parallelograms that must be added to a pair of parallelograms about the diagonal of a larger parallelogram in order to complete that larger parallelogram.
Relation Theory:
Complement of Relation: for a relation $\mathcal R$, its complement is all those pairs which are not in $\mathcal R$.
Set Theory:
Set Complement or Relative Complement: two related concepts: all the elements of a set which are not in a given subset.
Logic:
Logical Complement: In logic, the negation of a statement.
Lattice Theory:
Complement of Lattice Element: an element of a bounded lattice whose join with a given element is the greatest element and whose meet with it is the smallest element.
Graph Theory:
Complement of Graph: a graph with the same vertex set but whose edge set is all those edges not in that graph.
Probability Theory:
Complementary Events
Linguistic Note
The word complement comes from the idea of complete-ment, it being the thing needed to complete something else.
It is a common mistake to confuse the words complement and compliment. Usually the latter is mistakenly used when the former is meant.
Ar/Cl2 etching of GaAs optomechanical microdisks fabricated with positive electroresist
Rodrigo Benevides,1,2 Michaël Ménard,3 Gustavo S. Wiederhecker,1,2 and Thiago P. Mayer Alegre1,2,*
1Applied Physics Department, Gleb Wataghin Physics Institute, University of Campinas, 13083-859 Campinas, SP, Brazil
2Photonics Research Center, University of Campinas, Campinas 13083-859, SP, Brazil
3Department of Computer Science, Université du Québec à Montréal, Montréal, QC H2X 3Y7, Canada
*Corresponding author: [email protected]
https://doi.org/10.1364/OME.10.000057
A method to fabricate GaAs microcavities using only a soft mask with an electrolithographic pattern in an inductively coupled plasma etching is presented. A careful characterization of the fabrication process pinpointing the main routes for a smooth device sidewall is discussed. Using the final recipe, optomechanical microdisk resonators are fabricated. The results show very high optical quality factors of Qopt > 2 × 105, among the largest already reported for dry-etching devices. The final devices are also shown to present high mechanical quality factors and an optomechanical vacuum coupling constant of g0 = 2π × 13.6 kHz enabling self-sustainable mechanical oscillations for an optical input power above 1 mW.
© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Harnessing the confinement of light with wavelength-scale waveguides and cavities has enabled the realization of table-top scale nonlinear optical phenomena in silicon-compatible photonic chips. Recent examples show the versatility of this integrated photonics approach, such as frequency comb generation [1,2], quantum computation [3–8], low voltage electro-optical modulation [9], and optical to microwave coherent conversion [10]. One key ingredient that may also boost this revolution is the interaction between light and mechanical degrees of freedom, enabling both read-out and actuation of mesoscale mechanical modes in waveguides [11] and cavities [12–15], paving the road towards the sound circuits revolution [16,17]. Throughout the quest for better-suited cavity geometries that can host optical and mechanical waves, microdisk cavities have proven to be a simple and effective choice. Their tight radial confinement of whispering gallery optical waves and rather strong interaction with radial breathing mechanical modes lead to high optomechanical coupling rates [18–20], which are necessary for efficient device-level functionalities [7,21–23]. Other ingredients in the optomechanical enhancement are the cavity or waveguide material properties. Although silicon is widespread in many optomechanical devices [14,24–27] due to its mature fabrication, the interest in III-V materials for optomechanical devices has increased [28–32] due to their unique optical, electronic and mechanical properties. Gallium Arsenide (GaAs), for instance, is advantageous because of its very high photoelastic coefficient [33], which leads to large electrostrictive forces and optomechanical coupling [18,34]. Moreover, optically active layers can be easily grown on GaAs wafers, allowing the fabrication of light-emitting optomechanical devices [35–37].
Despite the tougher challenges in fabricating high optical quality factors in GaAs cavities, in comparison to the more mature fabrication of silicon, both wet and dry chemistry etching have been successfully developed. Wet etching, followed by surface passivation, resulted in record high intrinsic optical quality factors of $6\times 10^{6}$ in GaAs microdisks [38]. Nonetheless, the fabrication of large aspect ratio features and small gaps challenges this route due to the isotropic character of wet etching and its diffusion-limited etching reaction. One alternative is to use inductively coupled plasma reactive ion etching (ICP-RIE) [39–42], which enables independent control of the plasma density and ion acceleration, allowing finer control of side-wall roughness and verticality. The difficulty with this technique is the low resist etch resistance that often demands the use of a hard etching mask – in general, silicon nitride [43]. The need for a hard mask not only complicates the lithography process but also requires a mask removal step.
In this work we report an optimized fabrication process for GaAs-based optomechanical devices that combines standard dry etching chemistry with an electro-resist soft mask, yielding high optical and mechanical quality factors. By avoiding hard masks we ease the GaAs fabrication process and widen its potential exploration in more complex optomechanical cavity designs [26]. A complete characterization of the most important ICP-RIE parameters is included, offering a path for the fabrication of GaAs optical microdevices. By controlling the sidewall roughness and verticality, we produced microdisks with optical quality factor as high as $Q= 2 \times 10^{5}$ – among the highest reported using soft or hard-mask assisted dry etching [32,42–45]. Moreover, the mechanical characterization of the devices shows mechanical modes with quality factors up to $Q_{\textrm {mec}}=760$ and an optomechanical vacuum coupling rate of $g_{0}=2\pi \times 13.6$ kHz.
2. Microfabrication process
The fabrication steps used in this work follow those of a standard top-down microfabrication process. We start with an intrinsic GaAs/Al$_{0.7}$Ga$_{0.3}$As wafer (250 nm/2000 nm). These wafers were grown over a GaAs substrate using molecular beam epitaxy (MBE) (CMC Microsystems, Canada). This technique enables control at the atomic monolayer level [46]. It uses an atomic beam of the materials to be deposited in an ultra-high vacuum atmosphere ($\sim 10^{-11}$ Torr) while the substrate is kept at a temperature ($400-800^{\textrm {o}}$C) that is modest compared to other techniques, which reduces the diffusion of impurity atoms through the deposited layer. Therefore, a high-purity GaAs device layer is obtained, reducing the fabrication imperfections that would otherwise degrade the optical and mechanical quality factors of the final devices.
A thin layer of electroresist (ZEP520A - Zeon Chemicals) is spun (Figs. 1(a) and 1(b)) over the MBE-deposited GaAs layer. Then, the electroresist is patterned with electron-beam lithography, as detailed in section 2.1, yielding the microdisk-shaped patterns illustrated in Fig. 1(c). The next step consists of the removal of the exposed GaAs layer using ICP-RIE chlorine-based plasma etching. Although this is a straightforward method, it possesses a large number of parameters (chamber pressure, gas flows, RF powers) that can be tuned to improve surface roughness and control the sidewall etching angle (anisotropy). In sections 2.3 and 2.4 we explore several of these parameters to understand their role in the etching process. Next, a wet release step of the devices is performed, finalized with a surface cleaning procedure detailed in section 2.5.
Fig. 1. Fabrication procedure. Over a MBE-epitaxy grown GaAs/AlGaAs wafer (250 nm/2000 nm)(a), an electroresist layer (500 nm) is spun (b). Electron beam lithography features the resist (c), with transfer to GaAs layer done with ICP-etching (d). The resist is removed (e) and a wet HF-release with post-cleaning is performed, yielding a suspended disk (f).
2.1 Lithography parameters
The biggest challenge in using an electroresist soft mask for GaAs etching is their low etch resistance. Here we use a positive electro-resist, ZEP520A, which has both high sensitivity and spatial resolution, yet its etch selectivity to GaAs is high enough to withstand the dry etching process. In this work, we used a $30$ kV electron beam lithography tool (eLINE Plus from Raith Inc.).
The samples were cleaned with hot acetone ($\sim 50^{\circ }$C) and hot isopropyl alcohol ($\sim 70^{\circ }$C), for 5 minutes, followed by a 30 seconds HF:H$_{2}$O (1:10) dip to remove native oxide. Then, they were pre-baked for $5$ minutes in a hot plate at $180^{\circ }$C, in order to remove residual water and improve adherence of the electroresist. Then ZEP520A is spun at $2000$ rpm during $60$ seconds, resulting in a $\sim 500$ nm thick resist. Thereafter, we bake the sample again at $180^{\circ }$C for 2 minutes, to evaporate solvents present in the resist.
Electron diffusion through the resist and scattering are known to affect the angle of the developed resist pattern edges. An exposure dose test was performed to find the optimum doses for our samples, as shown in Figs. 2(a)–2(c). At higher doses, a clearly sloped edge is formed. We choose to work with the vertical profiles as the diameters of the patterned disks are more accurate.
Fig. 2. Electroresist edge profiles. Different wall slopes are obtained by changing the electron beam exposure dose from a) 45 µC$/$cm$^{2}$, to b) 60 µC$/$cm$^{2}$, to c) 75 µC$/$cm$^{2}$. All images were taken at 30 kV using a secondary electron detector. The resist layer was false-colored red.
2.2 Reflow of electroresist
Minimizing the resist's sidewall roughness is critical as it will be transferred to the GaAs layer during the dry etching process. However, evaluating developed resist sidewall roughness through SEM images is rather challenging. Indeed, despite the apparent sidewall smoothness shown in Fig. 2, the etched GaAs sidewalls still presented noticeable roughness after plasma etching, as can be seen in Fig. 3(a). This suggests that further treatment of the developed resist is required to improve sidewall roughness.
Fig. 3. Reflow process. Microdisks fabricated with resist reflowed at different temperatures: a) without reflow, b) $140^{\textrm {o}}$ C, c) $160^{\textrm {o}}$ C and d) $180^{\textrm {o}}$ C for 2 minutes on a hot plate. Etching parameters were gas flow Ar/Cl$_{2}=12/8$ sccm, RF power $=150$ W, ICP power $= 210$ W, and chamber pressure $= 4.5$ mTorr. All images were taken at 20 kV using a secondary electron detector. The top resist layer was false-colored red; the brightness contrast between the top GaAs layer and the AlGaAs is intrinsic to the SEM image. The image in (c) is also shown in Fig. 4(b) and Fig. 5(a), for easier comparison.
In order to reduce the sidewall roughness transferred to the GaAs, we perform a thermal reflow of the patterned resist, which is another advantage of using ZEP 520A. This step consists of baking the resist on a hot plate at a temperature higher than the resist softening point, allowing its molecular redistribution and internal stress relaxation that prevents cracks. The manufacturer suggests a reflow temperature of $140-145^{\textrm {o}}$C [47] for ZEP520A. However, this temperature is not high enough to significantly reduce roughness, as shown in Figs. 3(a) and 3(b). This can be attributed to a higher molecular weight of the resist after exposure, which has been reported to translate into a higher reflow temperature for resists based on molecular chain scission [47–49]. Indeed, an increase of the reflow temperature significantly improves the sidewall profile, as shown in Figs. 3(c) and 3(d). This confirms our hypothesis that the exposed resist still exhibited residual roughness, even though it was hard to observe this issue in the SEM images shown in Fig. 2. It is also clear from Fig. 3 that the reflow process impacts the angle of the patterned disk sidewall [47]. This is a consequence of an arched or even nearly semi-spherical resist topography after reflow, translating into a non-uniform protective layer thickness that is transferred to the GaAs layer, especially at the patterned resist edges [50].
2.3 Argon (Ar) and chlorine (Cl$_2$) flow
Typical ICP-etching equipment allows control over several etching parameters, including gas flows, chamber pressure and plate and coil powers. In this section, we describe the role of gas flow in GaAs/AlGaAs etching with the ZEP520A masks. We fixed the total gas flow ($20$ sccm), such that changes in the gas proportions would not impact the chamber base pressure. Different proportions of the Ar+Cl$_2$ mixture were used and the profiles obtained can be seen in Fig. 4. Throughout these steps, we used a resist mask reflowed at the most suitable temperature (2 min at $160^{\textrm {o}}$C) in order to isolate the sidewall roughness due to the etch chemistry.
Fig. 4. Gas flow. Etched disk sidewall profiles for Ar/Cl$_{2}$ flows of a) $16/4$ sccm, b) 12/8 sccm, c) 8/12 sccm and d) 4/16 sccm. Etching parameters were RF power $=150$ W, ICP power $= 210$ W and chamber pressure $= 4.5$ mTorr, with a $2$ minute resist reflow at $160^{\circ }$ C. The etch durations were 90 s for a) and b), 35 s for c) and $25$ s for d). All images were taken at $20$ kV using a secondary electron detector and false-colored. The image in b) is also shown in Fig. 3(c) and Fig. 5(a), for easier comparison.
Figure 4(a) shows a very smooth wall achieved with a high percentage ($80\%$) of argon in the mixture. However, a bit of roughness is still noticeable on the resist's upper surface, which could translate into a higher density of surface defects in the disks. This is caused by the sputtering resulting from the presence of argon ions in the plasma and can be reduced by increasing the proportion of chlorine in the mixture, as shown in Fig. 4(b). Further increase of the chlorine flow significantly enhances chemical reactions in the plasma, resulting in a rougher etch, as can be seen in Figs. 4(c) and 4(d). Such roughness sets an upper limit to the chlorine proportion in the mixture.
In addition to the surface roughness of the walls, it is necessary to calibrate the etch rate of each plasma recipe, especially to fabricate devices with tight geometrical constraints [26]. These different rates were obtained using an atomic force microscope, which can fully characterize the sample topography. The measurements are performed in three steps: the topography of the sample is obtained i) before ICP-etching, ii) after ICP-etching with the electroresist and iii) after removing the electroresist. We can precisely measure the isolated etch rates of GaAs or AlGaAs, and the overall etch selectivity between the resist and GaAs/AlGaAs can be calculated, as shown in Table 1. We observe a large increase in the GaAs/AlGaAs etching rate when there is more chlorine in the plasma, with a significantly lower resist etching rate. This indicates the possibility of using a plasma with a high percentage of chlorine combined with a ZEP520A soft mask to achieve deep-etched [51] devices. Due to the large GaAs/AlGaAs etch rate obtained for the highest chlorine flow (4/16 Ar/Cl$_2$), we used a shorter etch duration for this mixture than for the other gas mixtures. Contrary to the trend observed for the lower chlorine mixtures, this led to a smaller resist etch rate. Given the short etch duration, this lower rate suggests a non-uniform resist etch rate.
Table 1. Etching rate dependence on gas flow.
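To make the AFM-based calibration explicit, the following Python sketch shows how etch rates and selectivity follow from the three topography measurements described above; every number below is a made-up placeholder, not a value measured in this work.

```python
# Hypothetical AFM step heights (nm): (i) resist thickness before etching,
# (ii) resist top to trench bottom after etching, and (iii) trench depth
# after stripping the resist, together with the etch duration.
resist_before = 500.0        # nm
step_after_etch = 620.0      # nm
trench_after_strip = 300.0   # nm
etch_time = 90.0             # s

resist_left = step_after_etch - trench_after_strip          # remaining resist thickness
resist_rate = (resist_before - resist_left) / etch_time     # nm/s of resist consumed
semiconductor_rate = trench_after_strip / etch_time         # nm/s of GaAs/AlGaAs removed
selectivity = semiconductor_rate / resist_rate

print(f"resist: {resist_rate:.2f} nm/s, GaAs/AlGaAs: {semiconductor_rate:.2f} nm/s, "
      f"selectivity: {selectivity:.2f}")
```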
2.4 Chamber pressure
Chamber pressure in ICP etching changes the ions mean free path and can significantly modify the chemical profile of the etch process [46]. In order to evaluate the impact of this parameter on the etch results, we fabricated microdisks using different chamber pressures, as can be seen in Fig. 5.
Fig. 5. Chamber pressure. SEM images of microdisks etched at chamber pressures of a) $4.5$ mTorr, b) $6.0$ mTorr, c) $7.5$ mTorr and d) $9.0$ mTorr. Changes in sidewall roughness, angle and depth can be observed. These devices were fabricated with Ar/Cl$_2$ flow = $12/8$ sccm, RF power = $150$ W, ICP power = $210$ W and a resist reflow process of $2$ minutes at $160^{\textrm {o}}$ C. The image in a) is also shown in Fig. 3(c) and Fig. 4(b), for easier comparison.
In general, changes in chamber pressure did not significantly modify the sidewall roughness of the GaAs, but they did for the AlGaAs. An overall selectivity between GaAs/AlGaAs and resist higher than 3.75 was observed, as shown in Table 2, indicating the suitability of this recipe for devices with thicker GaAs layers.
Table 2. Etching rate dependence on chamber pressure.
We also observe a change in the sidewall angles depending on the pressure, from a $\sim 25^{\textrm {o}}$ angle in Fig. 5(a) for $\textrm {P}_{\textrm {ch}}=4.5$ mTorr to an almost vertical wall in Fig. 5(d) for $\textrm {P}_{\textrm {ch}}=9.0$ mTorr. At lower chamber pressures, the effective mean free path of argon ions increases, which favors the physical sputtering component of the etch. Also, ion milling is known to be strongly angle-dependent, with higher etch rates for ions colliding with the surface at low angles [46], translating into an angled GaAs sidewall profile. This degree of control of the sidewall angle could be used to manipulate the optical mode overlap with the disk edge, which has been shown to increase the optical quality factors [52]. We conclude this section with an optimized fabrication recipe, consisting of a lithography dose of 50 µC/cm$^{2}$, a 2 minute reflow step at $160^{\circ }$C and ICP-RIE etching with RF power $=150$ W, ICP power $= 210$ W, Ar/Cl$_2$ flow = $12/8$ sccm and $\textrm {P}_{\textrm {ch}}=4.5$ mTorr. After the etching process, the residual resist is removed by dipping the sample for 5 minutes in trichloroethylene (TCE), 1 minute in acetone and 1 minute in isopropyl alcohol, and then blow-drying it with N$_{\textrm {2}}$.
2.5 Undercut and cleaning steps
In order to free the mechanical degrees of freedom of the microdisk cavities, a final wet isotropic etching of the AlGaAs layer is performed to undercut the cavity. After the TCE cleaning step, the disks are released using a diluted solution of HF:H$_{2}$O (1:60), which is known to result in AlF$_{3}$ and Al(OH)$_{3}$ residues [53]. These residues are cleaned with a dip in a H$_{2}$O$_{2}$ solution ($30\%$) for 1 minute. A final dip in a KOH solution ($20\%$ in mass, at $80^{\circ }$C) for 3 minutes is performed to remove the oxidized surface layer [53]. These final cleaning steps are crucial to ensure a clean sample and high optical quality factors.
3. Optical and mechanical characterization
The GaAs microdisks fabricated with the optimized recipe exhibit high optical and mechanical quality factors, among the highest reported using soft or hard-mask assisted dry etching [32,42–45]. A schematic of the setup used for device characterization can be seen in Fig. 6(a). A tunable laser is sent to an optical fiber circuit; a 1% tap feeds a calibrated fiber-based Mach-Zehnder interferometer and an acetylene gas cell, which provide the relative frequency detuning and absolute frequency references. Light is coupled to the cavity through the evanescent field of a tapered optical fiber ($\approx 2$ µm diameter). The transmitted signal is split: 10% feeds a slow photodetector that monitors the transmission, and 90% feeds a fast photodetector ($\approx 800$ MHz bandwidth) that monitors the RF intensity modulation induced on the light field by the cavity mechanical modes [12]. Also, a phase modulator is used to calibrate the optomechanical coupling following the technique by Gorodetsky et al. [54].
Fig. 6. a) Experimental setup used to characterize optomechanical disks. $\phi$-mod, PD, DAQ, MZ, Acet., FPD and ESA stand for phase modulator, photodetector, analog-digital converter, Mach-Zehnder interferometer, acetylene cell, fast photodetector and electrical spectrum analyzer, respectively. b) Broadband spectrum of a $10\;\mu$m radius disk. The fitted intrinsic quality factor is $Q_{\textrm {opt}}=2.0\times 10^{5}$. c) Microscope image of the fabricated sample; TPL stands for the taper parking lot used to stabilize the tapered fiber position. d) Optical modes of a $10\;\mu$m radius disk, with intrinsic $Q_{\textrm {opt}} = 1.55\times 10^{5}$ and $Q_{\textrm {opt}} = 2.0\times 10^{5}$, respectively. e) Intrinsic optical quality factors for a cavity with (blue bars) and a cavity without (orange bars) resist reflow. We see that the optimized recipe shows higher quality factors.
A typical optical transmission spectrum of a 10 µm radius microdisk fabricated with the optimized recipe is shown in Fig. 6(b), with the highest optical quality factor resonance highlighted in red. An optical microscopy image of the corresponding device is shown in Fig. 6(c). This highest-$Q$ mode is also shown in Fig. 6(d), where the fitted model reveals an intrinsic optical quality factor of $Q_{\textrm {opt}}=2.0\times 10^{5}$; the observed splitting is accounted for in the fitting model and is due to coupling between clockwise and counter-clockwise modes. This quality factor is on par with those obtained through dry etching using hard [43] or soft masks [32], yet lower than those of passivated wet-etched disks [31].
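A common phenomenological model for such a split (doublet) resonance treats the two standing-wave supermodes, detuned by $\pm\beta$ from the unperturbed resonance, as independent Lorentzian responses added coherently. The sketch below illustrates this form; it is not necessarily the exact model used by the authors, and the coupling and splitting values are assumed for illustration. In practice, a routine such as scipy.optimize.curve_fit can be used to extract $\kappa_i$ and hence $Q_{\textrm{opt}} = \omega_0/\kappa_i$ from a measured trace.

```python
# Hedged sketch of a doublet transmission model for fitting split resonances
# such as the one in Fig. 6(d).
import numpy as np

def doublet_transmission(delta, kappa_i, kappa_e, beta):
    """Normalized transmission vs detuning delta (rad/s) for two supermodes split
    by +/- beta, each with intrinsic loss kappa_i and external coupling kappa_e."""
    kappa = kappa_i + kappa_e
    field = (1
             - 0.5 * kappa_e / (1j * (delta - beta) + kappa / 2)
             - 0.5 * kappa_e / (1j * (delta + beta) + kappa / 2))
    return np.abs(field) ** 2

# Numbers loosely inspired by the text: Q_i = 2e5 near 1563 nm; coupling/splitting assumed.
omega0 = 2 * np.pi * 3e8 / 1563e-9
kappa_i = omega0 / 2e5
kappa_e = 0.3 * kappa_i          # assumed undercoupled
beta = 1.5 * kappa_i             # assumed splitting
delta = np.linspace(-10, 10, 2001) * kappa_i
T = doublet_transmission(delta, kappa_i, kappa_e, beta)
print(f"Minimum transmission of the model doublet: {T.min():.3f}")
```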
We have also performed further analysis of the optical modes to infer their transverse polarization and radial order. In Fig. 6(b), it is possible to identify a periodic pattern of modes, related to the free spectral range (FSR) of the transverse optical mode families. In fact, we find that the FSR of the higher-$Q_{\textrm {opt}}$ modes closely matches that of TE modes calculated with the finite element method (with material dispersion [55] included). Finally, we have statistically compared devices fabricated with our optimized etching recipe, differing only in the resist reflow step. The results are shown in Fig. 6(e), with the quality factors of all modes observed in the tested wavelength range. The orange bars represent the cavity without resist reflow, showing a distribution concentrated at lower $Q$. The blue bars represent the reflowed device, showing a distribution shifted to higher $Q$, with peak values around $Q_{\textrm {opt}}\sim 2\times 10^{5}$. An overall improvement in the optical quality of the GaAs microdisks is therefore seen after resist reflow, achieved by combining our optimized process steps.
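The FSR used to group modes into families can be estimated from the disk circumference and an effective group index, $\textrm{FSR} \approx c/(2\pi R\, n_g)$. The group index below is an assumed value for a GaAs disk mode near 1550 nm, not a number taken from the paper; the sketch only illustrates the scale of the expected mode spacing.

```python
# Back-of-the-envelope FSR estimate for a 10 um radius GaAs disk near 1550 nm.
import numpy as np

c = 299792458.0          # m/s
R = 10e-6                # disk radius (m)
n_g = 4.0                # assumed effective group index (illustrative)
lam = 1.55e-6            # wavelength (m)

fsr_hz = c / (n_g * 2 * np.pi * R)                 # FSR in frequency
fsr_nm = lam**2 / (n_g * 2 * np.pi * R) * 1e9      # FSR in wavelength (nm)
print(f"FSR ~ {fsr_hz/1e12:.2f} THz ~ {fsr_nm:.1f} nm")
```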
To assess the optomechanical performance of our microdisks we analyzed the radio-frequency (RF) spectrum of the transmitted light. For these measurements, a 3.6 µm radius disk was used; the smaller radius increases the optomechanical coupling and, consequently, the transduction signal [32]. Exciting the disk at its highest-$Q$ mode at $1563.4$ nm, we observed the RF tone corresponding to the fundamental mechanical breathing mode (see inset of Fig. 7(a)) at $\Omega _{\textrm {mec}} = 2\pi \times 370$ MHz. The corresponding mechanical quality factor $Q_{\textrm {mec}} = 760$ is obtained by fitting the Lorentzian lineshape shown in Fig. 7(a). The signal of a calibrated phase modulator is also shown and was used to calibrate the optomechanical vacuum coupling rate [54], yielding $g_0=2\pi \times 13.6$ kHz. This coupling rate is 50% larger than one would achieve with a similar geometry in a silicon-based device, mainly due to the large photoelastic coefficients of GaAs [33], which account for more than 30% of the total optomechanical coupling rate in this case.
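The mechanical frequency and quality factor are obtained by fitting a Lorentzian plus a flat noise floor to the RF peak. The sketch below illustrates the procedure on synthetic data generated with the quoted values ($\Omega_{\textrm{mec}}/2\pi = 370$ MHz, $Q_{\textrm{mec}} = 760$); it is not the authors' analysis code.

```python
# Sketch: extract mechanical frequency and Q from an RF spectrum by Lorentzian fitting.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, amp, floor):
    """Lorentzian power spectrum centered at f0 with full width gamma (Hz)."""
    return amp * (gamma / 2)**2 / ((f - f0)**2 + (gamma / 2)**2) + floor

# Synthetic spectrum around 370 MHz with Q ~ 760 (linewidth ~ 0.49 MHz).
rng = np.random.default_rng(0)
f = np.linspace(368e6, 372e6, 2000)
data = lorentzian(f, 370e6, 370e6 / 760, 1.0, 0.05) + 0.01 * rng.standard_normal(f.size)

popt, _ = curve_fit(lorentzian, f, data, p0=[370e6, 1e6, 1.0, 0.05])
f0, gamma = popt[0], abs(popt[1])
print(f"Omega_m/2pi = {f0/1e6:.1f} MHz, Q_mec = {f0/gamma:.0f}")
```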
Fig. 7. a) Mechanical mode at $\Omega _{\textrm {m}} = 2\pi \times 370$ MHz observed for a 3.6 µm radius disk, with a quality factor of $Q_{\textrm {mec}} = 760$. A phase modulator calibration tone can be seen at $374$ MHz, yielding an optomechanical coupling rate of $g_0=2\pi \times 13.6$ kHz. The inset shows a finite element method (FEM) simulation of the normalized displacement profile of the fundamental mechanical breathing mode. b) Self-sustained oscillation of the fundamental mechanical mode, with a peak more than $50$ dB above the noise floor.
One property of optomechanical devices is that the mechanical modes may experience a back-action force from the optical fields. When the laser is blue-detuned with respect to the optical mode, the resulting positive feedback can drive the mechanical system into self-sustained oscillations. Although this is conceptually straightforward, semiconductor-based cavities often suffer from detrimental nonlinear losses that prevent reaching this regime. For our devices, when the blue-detuned pump power was increased to $P \sim 1$ mW, the self-sustained oscillation threshold was easily reached. A strong RF tone ($>\;50$ dB above the noise floor) was observed, as shown in Fig. 7(b). A slight red-shift of the mechanical frequency of $\approx 3.5$ MHz is also visible in the oscillating peak, which is attributed to thermal softening of GaAs [56]. This shows that disks produced with our fabrication process can be used as mechanical oscillators, a key requirement for investigating other optomechanical phenomena, such as nonlinear dynamics [57,58], coherent optical conversion [22] and classical and quantum synchronization [59,60]. Although not explored here, reaching self-sustained oscillation indicates that optical cooling [14] is also within reach of our devices.
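An order-of-magnitude consistency check of the oscillation threshold can be made with the standard expressions for the intracavity photon number and the optomechanical (anti-)damping rate (see, e.g., [12]): self-oscillation sets in when the optical anti-damping exceeds the intrinsic mechanical damping $\Gamma_m$. In the sketch below, $Q_{\textrm{opt}}$, the external coupling rate and the detuning are assumptions for illustration (the text does not quote them for the 3.6 µm disk); only $g_0$, $\Omega_m$, $Q_{\textrm{mec}}$, the wavelength and the ~1 mW pump come from the text.

```python
# Order-of-magnitude check of the self-sustained oscillation threshold.
import numpy as np

hbar, c = 1.054571817e-34, 299792458.0
lam = 1563.4e-9
omega = 2 * np.pi * c / lam
Q_opt = 2e5                      # assumed (value measured on a 10 um disk)
kappa = omega / Q_opt            # total optical linewidth (rad/s)
kappa_e = kappa / 2              # assumed near-critical coupling
Omega_m = 2 * np.pi * 370e6
Gamma_m = Omega_m / 760          # from Q_mec = 760
g0 = 2 * np.pi * 13.6e3
P_in = 1e-3                      # ~1 mW pump, as in the text
Delta = +Omega_m                 # blue detuning near the mechanical sideband (assumed)

# Intracavity photon number and optomechanical damping rate (standard expressions [12]).
n_cav = P_in * kappa_e / (hbar * omega * ((kappa / 2)**2 + Delta**2))
Gamma_opt = g0**2 * n_cav * (kappa / ((kappa / 2)**2 + (Delta + Omega_m)**2)
                             - kappa / ((kappa / 2)**2 + (Delta - Omega_m)**2))

print(f"n_cav ~ {n_cav:.2e}")
print(f"|Gamma_opt|/Gamma_m ~ {abs(Gamma_opt)/Gamma_m:.1f}  (> 1 implies self-oscillation)")
```

With these assumed parameters the ratio comes out of order unity, consistent with the observed threshold at ~1 mW of blue-detuned pump power.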
We have shown that it is possible to control the roughness and the sidewall profile of GaAs microdisks by fully characterizing the ICP plasma etching conditions. This process simplifies fabrication, removing the need for a hard mask and reducing the number of steps in the production of the devices. High optical confinement can be achieved, with optical quality factors as high as $2\times 10^{5}$. Further improvement in the optical quality factor could be obtained with surface post-treatment, as indicated in [31]. Comparing data and simulations, we have evidence that the higher optical quality factor modes correspond to TE modes, supporting our hypothesis that the improvement of the quality factors is due to the reduction in sidewall roughness. We have also investigated the impact of resist reflow on our fabrication process. The observation of radial breathing mechanical modes and the excitation of self-sustained oscillations at low optical powers show that these GaAs devices could also be used in nonlinear optomechanics experiments.
São Paulo Research Foundation (FAPESP) (2012/17610-3, 2012/17765-7, 2016/18308-0, 2018/15577-5, 2018/15580-6, 2019/01402-1); Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) (Finance Code 001).
1. B. Stern, X. Ji, Y. Okawachi, A. L. Gaeta, and M. Lipson, "Battery-operated integrated frequency comb generator," Nature 562(7727), 401–405 (2018). [CrossRef]
2. M. Karpov, M. H. P. Pfeiffer, J. Liu, A. Lukashchuk, and T. J. Kippenberg, "Photonic chip-based soliton frequency combs covering the biological imaging window," Nat. Commun. 9(1), 1146 (2018). [CrossRef]
3. M. Gräfe, R. Heilmann, M. Lebugle, D. Guzman-Silva, A. Perez-Leija, and A. Szameit, "Integrated photonic quantum walks," J. Opt. 18(10), 103002 (2016). [CrossRef]
4. J. E. Sharping, K. F. Lee, M. A. Foster, A. C. Turner, B. S. Schmidt, M. Lipson, A. L. Gaeta, and P. Kumar, "Generation of correlated photons in nanoscale silicon waveguides," Opt. Express 14(25), 12388 (2006). [CrossRef]
5. A. Mohanty, M. Zhang, A. Dutt, S. Ramelow, P. Nussenzveig, and M. Lipson, "Quantum interference between transverse spatial waveguide modes," Nat. Commun. 8(1), 14010 (2017). [CrossRef]
6. X. Qiang, X. Zhou, J. Wang, C. M. Wilkes, T. Loke, S. O'Gara, L. Kling, G. D. Marshall, R. Santagati, T. C. Ralph, J. B. Wang, J. L. O'Brien, M. G. Thompson, and J. C. F. Matthews, "Large-scale silicon quantum photonics implementing arbitrary two-qubit processing," Nat. Photonics 12(9), 534–539 (2018). [CrossRef]
7. J. Wang, S. Paesani, Y. Ding, R. Santagati, P. Skrzypczyk, A. Salavrakos, J. Tura, R. Augusiak, L. Mančinska, D. Bacco, D. Bonneau, J. W. Silverstone, Q. Gong, A. Acín, K. Rottwitt, L. K. Oxenløwe, J. L. O'Brien, A. Laing, and M. G. Thompson, "Multidimensional quantum entanglement with large-scale integrated optics," Science 360(6386), 285–291 (2018). [CrossRef]
8. A. Faraon, A. Majumdar, D. Englund, E. Kim, M. Bajcsy, and J. Vučković, "Integrated quantum optical networks based on quantum dots and photonic crystals," New J. Phys. 13(5), 055025 (2011). [CrossRef]
9. C. Wang, M. Zhang, B. Stern, M. Lipson, and M. Lončar, "Nanophotonic lithium niobate electro-optic modulators," Opt. Express 26(2), 1547–1555 (2018). [CrossRef]
10. K. C. Balram, M. I. Davanço, J. D. Song, and K. Srinivasan, "Coherent coupling between radiofrequency, optical and acoustic waves in piezo-optomechanical circuits," Nat. Photonics 10(5), 346–352 (2016). [CrossRef]
11. G. S. Wiederhecker, P. Dainese, and T. P. Mayer Alegre, "Brillouin optomechanics in nanophotonic structures," APL Photonics 4(7), 071101 (2019). [CrossRef]
12. M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, "Cavity optomechanics," Rev. Mod. Phys. 86(4), 1391–1452 (2014). [CrossRef]
13. R. Riedinger, S. Hong, R. A. Norte, J. A. Slater, J. Shang, A. G. Krause, V. Anant, M. Aspelmeyer, and S. Gröblacher, "Nonclassical correlations between single photons and phonons from a mechanical oscillator," Nature 530(7590), 313–316 (2016). [CrossRef]
14. J. Chan, T. P. M. Alegre, A. H. Safavi-Naeini, J. T. Hill, A. Krause, S. Gröblacher, M. Aspelmeyer, and O. Painter, "Laser cooling of a nanomechanical oscillator into its quantum ground state," Nature 478(7367), 89–92 (2011). [CrossRef]
15. D. Navarro-Urrios, N. E. Capuj, M. F. Colombano, P. D. García, M. Sledzinska, F. Alzina, A. Griol, A. Martínez, and C. M. Sotomayor-Torres, "Nonlinear dynamics and chaos in an optomechanical beam," Nat. Commun. 8(1), 14965 (2017). [CrossRef]
16. A. H. Safavi-Naeini, D. Van Thourhout, R. Baets, and R. Van Laer, "Controlling phonons and photons at the wavelength scale: integrated photonics meets integrated phononics," Optica 6(2), 213 (2019). [CrossRef]
17. B. J. Eggleton, C. G. Poulton, P. T. Rakich, M. J. Steel, and G. Bahl, "Brillouin integrated photonics," Nat. Photonics 13(10), 664–677 (2019). [CrossRef]
18. L. Ding, C. Baker, P. Senellart, A. Lemaitre, S. Ducci, G. Leo, and I. Favero, "High Frequency GaAs Nano-Optomechanical Disk Resonator," Phys. Rev. Lett. 105(26), 263903 (2010). [CrossRef]
19. W. C. Jiang, X. Lu, J. Zhang, and Q. Lin, "High-frequency silicon optomechanical oscillator with an ultralow threshold," Opt. Express 20(14), 15991–15996 (2012). [CrossRef]
20. X. Sun, X. Zhang, and H. X. Tang, "High-Q silicon optomechanical microdisk resonators at gigahertz frequencies," Appl. Phys. Lett. 100(17), 173116 (2012). [CrossRef]
21. A. G. Krause, M. Winger, T. D. Blasius, Q. Lin, and O. J. Painter, "A high-resolution microchip optomechanical accelerometer," Nat. Photonics 6(11), 768–772 (2012). [CrossRef]
22. J. T. Hill, A. H. Safavi-Naeini, J. Chan, and O. Painter, "Coherent optical wavelength conversion via cavity optomechanics," Nat. Commun. 3(1), 1196 (2012). [CrossRef]
23. M. Metcalfe, "Applications of cavity optomechanics," Appl. Phys. Rev. 1(3), 031105 (2014). [CrossRef]
24. M. Eichenfield, J. Chan, R. M. Camacho, K. J. Vahala, and O. Painter, "Optomechanical crystals," Nature 462(7269), 78–82 (2009). [CrossRef]
25. R. Benevides, F. G. Santos, G. O. Luiz, G. S. Wiederhecker, and T. P. Mayer Alegre, "Ultrahigh-Q optomechanical crystal cavities fabricated in a CMOS foundry," Sci. Rep. 7(1), 2491 (2017). [CrossRef]
26. F. G. Santos, Y. Espinel, G. de Oliveira Luiz, R. Benevides, G. Wiederhecker, and T. P. Alegre, "Hybrid confinement of optical and mechanical modes in a bullseye optomechanical resonator," Opt. Express 25(2), 508–1001 (2017). [CrossRef]
27. G. Luiz, R. Benevides, F. Santos, Y. Espinel, T. Mayer Alegre, and G. Wiederhecker, "Efficient anchor loss suppression in coupled near-field optomechanical resonators," Opt. Express 25(25), 31347–31361 (2017). [CrossRef]
28. C. Xiong, W. H. P. Pernice, X. Sun, C. Schuck, K. Y. Fong, and H. X. Tang, "Aluminum nitride as a new material for chip-scale optomechanics and nonlinear optics," New J. Phys. 14(9), 095014 (2012). [CrossRef]
29. J. Liu, K. Usami, A. Naesby, T. Bagci, E. S. Polzik, P. Lodahl, and S. Stobbe, "High-Q optomechanical GaAs nanomembranes," Appl. Phys. Lett. 99(24), 243102 (2011). [CrossRef]
30. K. Usami, A. Naesby, T. Bagci, B. Melholt Nielsen, J. Liu, S. Stobbe, P. Lodahl, and E. S. Polzik, "Optical cavity cooling of mechanical modes of a semiconductor nanomembrane," Nat. Phys. 8(2), 168–172 (2012). [CrossRef]
31. B. Guha, S. Mariani, A. Lemaître, S. Combrié, G. Leo, and I. Favero, "High frequency optomechanical disk resonators in III-V ternary semiconductors," Opt. Express 25(20), 24639–24649 (2017). [CrossRef]
32. K. C. Balram, M. Davanço, J. Y. Lim, J. D. Song, and K. Srinivasan, "Moving boundary and photoelastic coupling in GaAs optomechanical resonators," Optica 1(6), 414 (2014). [CrossRef]
33. R. W. Dixon, "Photoelastic Properties of Selected Materials and Their Relevance for Applications to Acoustic Light Modulators and Scanners," J. Appl. Phys. 38(13), 5149–5153 (1967). [CrossRef]
34. C. Baker, W. Hease, D.-T. Nguyen, A. Andronico, S. Ducci, G. Leo, and I. Favero, "Photoelastic coupling in gallium arsenide optomechanical disk resonators," Opt. Express 22(12), 14072 (2014). [CrossRef]
35. W. Yang, S. A. Gerke, K. W. Ng, Y. Rao, C. Chase, and C. J. Chang-Hasnain, "Laser optomechanics," Sci. Rep. 5(1), 13700 (2015). [CrossRef]
36. D. Princepe, G. S. Wiederhecker, I. Favero, and N. C. Frateschi, "Self-Sustained Laser Pulsation in Active Optomechanical Devices," IEEE Photonics J. 10(3), 1–10 (2018). [CrossRef]
37. X. Xi, J. Ma, and X. Sun, "Carrier-mediated cavity optomechanics in a semiconductor laser," Phys. Rev. A 99(5), 053837 (2019). [CrossRef]
38. B. Guha, F. Marsault, F. Cadiz, L. Morgenroth, V. Ulin, V. Berkovitz, A. Lemaître, C. Gomez, A. Amo, S. Combrié, B. Gérard, G. Leo, and I. Favero, "Surface-enhanced gallium arsenide photonic resonator with a quality factor of six million," Optica 4(2), 218–221 (2017). [CrossRef]
39. Y. W. Chen, B. S. Ooi, G. I. Ng, K. Radhakrishnan, and C. L. Tan, "Dry via hole etching of GaAs using high-density Cl2/Ar plasma," J. Vac. Sci. Technol., B: Microelectron. Process. Phenom. 18(5), 2509 (2000). [CrossRef]
40. K. Liu, X.-M. Ren, Y.-Q. Huang, S.-W. Cai, X.-F. Duan, Q. Wang, C. Kang, J.-S. Li, Q.-T. Chen, and J.-R. Fei, "Inductively coupled plasma etching of GaAs in Cl2/Ar, Cl2/Ar/O2 chemistries with photoresist mask," Appl. Surf. Sci. 356, 776–779 (2015). [CrossRef]
41. A. Matsutani, F. Ishiwari, Y. Shoji, T. Kajitani, T. Uehara, M. Nakagawa, and T. Fukushima, "Chlorine-based inductively coupled plasma etching of GaAs wafer using tripodal paraffinic triptycene as an etching resist mask," Jpn. J. Appl. Phys. 55(6S1), 06GL01 (2016). [CrossRef]
42. A. Faraon, "Locally controlled photonic crystal devices with coupled quantum dots: physics and applications," Ph.D. thesis, Stanford University (2009).
43. K. Srinivasan, M. Borselli, T. J. Johnson, P. E. Barclay, O. Painter, A. Stintz, and S. Krishna, "Optical loss and lasing characteristics of high-quality-factor AlGaAs microdisk resonators with embedded quantum dots," Appl. Phys. Lett. 86(15), 151106 (2005). [CrossRef]
44. S. Buckley, M. Radulaski, J. L. Zhang, J. Petykiewicz, K. Biermann, and J. Vučković, "Nonlinear frequency conversion using high-quality modes in GaAs nanobeam cavities," Opt. Lett. 39(19), 5673 (2014). [CrossRef]
45. K. Rivoire, S. Buckley, and J. Vučković, "Multiply resonant photonic crystal nanocavities for nonlinear frequency conversion," Opt. Express 19(22), 22198 (2011). [CrossRef]
46. M. J. Madou, Fundamentals of Microfabrication: The Science of Miniaturization, p. 49 (2002).
47. R. Kirchner, V. A. Guzenko, I. Vartiainen, N. Chidambaram, and H. Schift, "ZEP520A - A resist for electron-beam grayscale lithography and thermal reflow," Microelectron. Eng. 153, 71–76 (2016). [CrossRef]
48. S. Pfirrmann, A. Voigt, A. Kolander, G. Grützner, O. Lohse, I. Harder, and V. A. Guzenko, "Towards a novel positive tone resist mr-PosEBR for high resolution electron-beam lithography," Microelectron. Eng. 155, 67–73 (2016). [CrossRef]
49. A. Schleunitz, V. A. Guzenko, M. Messerschmidt, H. Atasoy, R. Kirchner, and H. Schift, "Novel 3D micro- and nanofabrication method using thermally activated selective topography equilibration (TASTE) of polymers," Nano Convergence 1(1), 7 (2014). [CrossRef]
50. A. Baca and C. I. Ashby, Fabrication of GaAs devices (IET, 2009), 2nd ed.
51. M. N. Mudholkar, G. Sai Saravanan, K. Mahadeva Bhat, C. Sridhar, H. P. Vyas, and R. Muralidharan, "Etching of 200 µm deep GaAs via holes with near vertical wall profile using photoresist mask with inductively coupled plasma," Proceedings of the 14th International Workshop on the Physics of Semiconductor Devices, IWPSD pp. 466–468 (2007).
52. H. Lee, T. Chen, J. Li, K. Y. Yang, S. Jeon, O. J. Painter, and K. J. Vahala, "Chemically etched ultrahigh-Q wedge-resonator on a silicon chip," Nat. Photonics 6(6), 369–373 (2012). [CrossRef]
53. L. Midolo, T. Pregnolato, G. Kirsanske, and S. Stobbe, "Soft-mask fabrication of gallium arsenide nanomembranes for integrated quantum photonics," Nanotechnology 26(48), 484002 (2015). [CrossRef]
54. M. L. Gorodetsky, A. Schliesser, G. Anetsberger, S. Deleglise, and T. J. Kippenberg, "Determination of the vacuum optomechanical coupling rate using frequency noise calibration," Opt. Express 18(22), 23236 (2010). [CrossRef]
55. T. Skauli, P. S. Kuo, K. L. Vodopyanov, T. J. Pinguet, O. Levi, L. A. Eyres, J. S. Harris, M. M. Fejer, B. Gerard, L. Becouarn, and E. Lallier, "Improved dispersion relations for gaas and applications to nonlinear optics," J. Appl. Phys. 94(10), 6447–6455 (2003). [CrossRef]
56. R. Benevides, N. C. Carvalho, M. Ménard, N. C. Frateschi, G. S. Wiederhecker, and T. P. M. Alegre, "Overcoming optical spring effect with thermo-opto-mechanical coupling in GaAs microdisks," in "Latin America Optics and Photonics Conference," (Optical Society of America, Lima, 2018), OSA Technical Digest, p. W4D.4.
57. K. Børkje, A. Nunnenkamp, J. D. Teufel, and S. M. Girvin, "Signatures of Nonlinear Cavity Optomechanics in the Weak Coupling Regime," Phys. Rev. Lett. 111(5), 053603 (2013). [CrossRef]
58. A. G. Krause, J. T. Hill, M. Ludwig, A. H. Safavi-naeini, J. Chan, F. Marquardt, and O. Painter, "Nonlinear radiation pressure dynamics in an optomechanical crystal," Phys. Rev. Lett. 115(23), 233601 (2015). [CrossRef]
59. M. A. Zhang, G. S. Wiederhecker, S. Manipatruni, A. Barnard, P. McEuen, and M. Lipson, "Synchronization of Micromechanical Oscillators Using Light," Phys. Rev. Lett. 109(23), 233906 (2012). [CrossRef]
60. G.-J. Qiao, H.-X. Gao, H.-D. Liu, and X. X. Yi, "Quantum synchronization of two mechanical oscillators in coupled optomechanical systems with Kerr nonlinearity," Sci. Rep. 8(1), 15614 (2018). [CrossRef]
Etching rate dependence on gas flow.
Ar/Cl$_2$ flow (sccm) | GaAs/AlGaAs rate (nm/s) | Resist rate (nm/s)
16/4 | 5.9 | 3.7
12/8 | 10.2 | 5.2
8/12 | 22.8 | 8.4
Etching rate dependence on chamber pressure.
Pressure (mTorr) | GaAs/AlGaAs rate (nm/s) | Resist rate (nm/s)
4.5 | 10.2 | 5.2
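A simple way to compare the chemistries in the gas-flow table is the etch selectivity, defined here as the GaAs/AlGaAs rate divided by the resist rate; the short sketch below computes it from the tabulated values (the definition and code are ours).

```python
# Etch selectivity (GaAs/AlGaAs rate over resist rate) from the gas-flow table above.
flows = {"16/4": (5.9, 3.7), "12/8": (10.2, 5.2), "8/12": (22.8, 8.4)}
for flow, (gaas_rate, resist_rate) in flows.items():
    print(f"Ar/Cl2 = {flow} sccm: selectivity = {gaas_rate / resist_rate:.1f}")
```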
Ismanuel Rabadán
Universidad Autónoma de Madrid | UAM · Department of Chemistry
10 Research Items
Ab Initio Calculations
Electronic Structure
Molecular Dynamics
Charge Transfer and Electron Production in Proton Collisions with Uracil: A Classical and Semiclassical Study
Clara Illescas
Luis Méndez
Santiago Bernedo
Cross sections for charge transfer and ionization in proton–uracil collisions are studied, for collision energies 0.05<E<2500 keV, using two computational models. At low energies, below 20 keV, the charge transfer total cross section is calculated employing a semiclassical close-coupling expansion in terms of the electronic functions of the supermo...
Charge exchange in collisions of 1–100-keV Sn 3 + ions with H 2 and D 2
Subam Rai
K. I. Bijlsma
Ronnie Hoekstra
Absolute cross sections for single electron capture by Sn3+ colliding with H2 and D2 have been measured and calculated in the energy range of 1–100 keV. The cross sections are determined by measuring the change in ion beam current with varying target density and by measuring the yields of charged target fragments by means of a time-of-flight spectr...
A classical and semiclassical study of collisions between X q+ ions and water molecules
Marco Alfonso Lombana
Jaime Suárez
Collisions of He²⁺, Li³⁺ and C³⁺ ions with water molecules are studied at energies ranging between 20 keV/u and 500 keV/u. Three methods are employed: the classical trajectory Monte Carlo (CTMC), the expansion of the scattering wave function in terms of asymptotic frozen molecular orbitals (AFMO) and a lattice method to numerically solve the t...
Ionization and electron capture in Li 3+ + H 2 O collisions
M A Lombana
Jaime Suarez
Synopsis Total cross sections for ionization and electron capture in Li ³⁺ + H 2 O collisions are calculated at energies 20 ⩽ E ⩽ 750 keV/u. Three different models are applied: the classical trajectory Monte Carlo method, an expansion in terms of asymptotic-frozen-molecular-orbitals and the numerical solution of the time dependent Schrödinger equat...
Nonadiabatic fragmentation of H 2 O + and isotopomers. Wave packet propagation using ab initio wavefunctions
The fragmentation of the water cation from its B̃ ²B₂ electronic state, allowing the participation of the X̃ ²B₁, Ã ²A₁ and C̃ ²B₁ states in the process, is simulated using the extended capabilities of the collocation GridTDSE code to account for the nonadiabatic propagation of wave packets in several...
Electron capture cross sections in collisions of N2+ and O2+ with H
P Barragán
L. F. Errea
Luis Fernandez Menchero
A. Macías
Ionization, Single and Double Electron Capture in Proton-Ar Collisions
Alba María Jorge
Total cross sections for formation of H and H⁻, and electron production, in H⁺ + Ar collisions have been calculated at energies between 100 eV and 200 keV employing two methods: for E<10 keV, a semiclassical treatment with an expansion in a basis of electronic wave functions of the ArH⁺ quasimolecule and, for E>10 keV, the switching-Classical-Traje...
Orientation effects in ion-molecule collisions
The charge transfer process is studied in proton collisions with BeH. Given the anisotropy of the target molecule, two approaches are considered, and compared, in order to calculate orientation-averaged cross sections. The necessary molecular data (electronic energies and non-adiabatic couplings) are obtained from full configuration-interaction wav...
Ab initio calculation of electron-capture cross sections in H + + BeH collisions
J. W. Gao
J. G. Wang
We present calculations of electron-capture cross sections in collisions of H+ with BeH molecules in the energy range 25eV<E<10keV. We discuss the validity of the models employed to describe nonadiabatic ion-molecule collisions, specifically the eikonal approximation, the Franck-Condon approximation, and the isotropic approximation to obtain orient...
Ab initio study of proton collisions with BeH
We present calculations of electron capture cross sections in proton collisions with BeH molecules in the energy range 25 eV/u < E <25 keV/u.
A numerical lattice approach for ionization and capture processes in ion-H 2 O collisions
L Errea
We present a semi-classical numerical integration method for studying electron loss processes in ion collisions with water molecules. Capture and ionization cross section are calculated in the intermediate-high energy range, and the kinetic energy distribution of the ejected electrons is analyzed. The results are compared with those obtained with a...
Ab initio description of the fragmentation of H 2 O + ( 2 B 2 )
A quantum-dynamical study of the fragmentation of H2O⁺(²B2) is carried out by using wave packet propagations on ab initio potential energy surfaces connected by nonadiabatic couplings assuming a Franck- Condon initial wave packet from the ground state of the water molecule. The simulations indicate that a conical intersection between the ²B2 and Ã...
XXIX International Conference on Photonic, Electronic, and Atomic Collisions (ICPEAC2015)
C Díaz
F Martín
The 29th International Conference on Photonic, Electronic and Atomic Collisions (XXIX ICPEAC) was held at the Palacio de Congresos ''El Greco'', Toledo, Spain, on 22-28 July, 2015, and was organized by the Universidad Autónoma de Madrid (UAM) and the Consejo Superior de Investigaciones Científicas (CSIC). ICPEAC is held biannually and is one of the...
Lattice description of electron loss in high-energy H++H2O collisions
L.F. Errea
Electron loss in proton-water collisions is studied in the energy range 100keV<E<1MeV by employing a three-center model potential. The electron-loss probabilities are calculated by applying a new lattice method that yields cross sections which are in good agreement with previous semiclassical close-coupling and classical calculations. The lattice m...
Calculation of total cross sections for electron capture in collisions of Carbon ions with H(D,T)(1s)
L F Errea
Alba Jorge
The calculations of total cross sections of electron capture in collisions of C q + with H(1s) are reviewed. At low collision energies, new calculations have been performed, using molecular expansions, to analyze isotope effects. The Classical Trajectory Monte Carlo method have been also applied to discuss the accuracy of previous calculations and...
Nonadiabatic Quantum Dynamics Predissociation of H2O+(B̃ 2B2)
A quantum-mechanical study of the predissociation of H2O⁺(B̃ ²B₂) is carried out by using wave packet propagations on ab initio potential energy surfaces connected by nonadiabatic couplings. The simulations show that within the first 30 fs 80% of the initial wave packet is transferred from the B̃ ²B₂ to the Ã...
Ab initio description of the fragmentation of H2O+(B 2B2)
The single ionization of water molecules by collisions with photons, electrons or ions leaves H2O⁺ in several possible electronic states: X ²B₁, A ²A₁ and B ²B₂. The first two states are obtained after removal of essentially non-bonding electrons from H2O, and the corresponding cations do not fragment; however, the energy of the third estat...
Resolving vibration in H++H2 charge transfer collisions
Xavier Urbain
V. M. Andrianarijaona
Nathalie de Ruette
B Pons
We measure the vibrational distribution of H₂⁺ ions issued from charge transfer in H⁺ + H₂ collisions to probe the details of the electron transfer mechanism from low to high impact energies. The experiments are accompanied by theoretical calculations. This joint experimental-theoretical study allows us to elicit the adequacy and accuracy of widely...
Study of electron capture and ionization in proton collisions with N2 using ab initio methods
E Rozsályi
L C Asensio
Electron capture cross sections in collisions of protons with the nitrogen molecule are obtained with two semiclassical ab initio methods: one uses multireference configuration interaction wavefunctions while the other is based on monoelectronic wavefunctions and the independent particle approximation. The accuracy and usefulness of this last, simp...
Calculation of ionization of H2O by H+ with classical and semiclassical methods
Total and singly differential (in the electron energy) cross section of electron emission in proton collisions with H2O are obtained at energies in the range 10 keV to 5 MeV. Two non-perturbative methods are employed, one classical and one semiclassical, both combining a multi-center target electron potential and an independent particle statistics....
New Light Shed on Charge Transfer in Fundamental H++H2 Collisions
There is no consensus on the magnitude and shape of the charge transfer cross section in low-energy H⁺+H₂ collisions, in spite of the fundamental importance of these collisions. Experiments have thus been carried out in the energy range 15 ≤ E ≤ 5000 eV. The measurements invalidate previous recommended data for E ≤ 200 eV and confirm the existence...
Classical Treatment Of Ionization And Electron Capture In Ion-H2O Collisions At Intermediate Energies
We report both total and singly differential cross sections for ionization of water molecules by H+, He2+ and C6+ projectiles in the energy range of 15 keV/amu <= E <= 10 MeV/amu. We employ an independent-event model in the framework of the Classical Trajectory Monte Carlo method. Good agreement of calculated cross sections with the experiments is...
Ionization of water molecules by proton impact: Two nonperturbative studies of the electron-emission spectra
Two nonperturbative methods are applied to obtain total and singly differential (in the electron energy) cross sections of electron emission in proton collisions with H2O at impact energies in the range 10 keV ≤Ep≤ 5 MeV. Both methods, one classical and one semiclassical, combine an independent particle treatment with a multicenter model potential...
Ionization and electron capture in ion-molecule collisions: Classical (CTMC) and semiclassical calculations
Total cross-sections for electron capture and electron production in proton collisions with N(2), CO and H(2)O, are evaluated using a classical trajectory Monte Carlo treatment for collision energies between 30 and 3000keV. A semiclassical close-coupling treatment has been also employed for proton collisions with H(2)O, to discuss the accuracy of t...
"Study of inelastic processes in ion-water molecule collisions using classical CTMC and semiclassical methods", L. F. Errea, L. Mendez, C. Illescas, B. Pons, I. Rabadán, P. Martínez and A. Riera. Book chapter: World Scientist Publishing Company " Interdisciplinary Research on Particle Collisions and Quantitative Spectroscopy vol 1: Fast Ion-Atom and Ion-Molecule Collisions". Ed. Dzevad Belkic (November 24th, 2012).
A. Riera
We present calculations of cross sections for one- and two-electron processes in collisions of H+, He2+ and C6+ with water molecules. We employ two kind of methods: a classical trajectory Monte Carlo approach and a semiclassical treatment with expansions in terms of molecular wavefunctions. Anisotropy effects related to the structure of the target...
Charge exchange in proton collisions with the water dimer
Anabel Ravazzani
L. Errea
We calculate the electron capture cross sections in collisions of protons with water dimers, using a simple ab initio approach. The formalism involves one-electron scattering wave functions and a statistical interpretation to evaluate many-particle cross sections. By comparing with proton-water collisions, we aim at identifying aggregation effects...
Ab initio treatment of ion-water molecule collisions with a three-center pseudo potential
We calculate electron capture cross sections in collisions of protons with water molecules, using two simple ab initio approaches. The formalism involves the calculation of one-electron scattering wave functions and the use of three-center pseudo potential to represent the electron H2O+ interaction. Several methods to obtain many-electron cross sec...
Classical calculation of total and differential cross sections for electron capture and ionization in proton - Molecule collisions
Henok Getahun
The Classical Trajectory Monte Carlo method is applied to treat proton collisions with N2, CO and CH4 at collision energies 25< E < 2.5 × 103 keV. The calculation employs model potentials to describe the interaction of the active electron with the molecular core. General good agreement with available experimental data is found.
Ion-water collisions at intermediate energies
We employ the independent event electron model in the framework of the classical trajectory Monte Carlo method to compute ionization and capture cross sections for one- and two- electron processes in collisions of H+, He2+ and C6+ with water molecules at intermediate impact energies. Subsequente fragmentation processes are also considered, as well...
Hemiquantal treatment of low energy p+H2 collisions
We present calculations of charge exchange and vibrational excitation cross sections in low energy p+H2 collisions. These cross sections are obtained by means of an hemiquantal treatment which uses Diatomics In Molecules (DIM) diabatic wavefunctions of the H3+ molecule. The hemiquantal approach allows to distinguish the nonreactive, dissociative an...
Aggregation effects in proton collisions with water dimers
A Ravazzani
Charge transfer cross sections in proton collisions with water dimers are calculated using an ab initio method based on molecular orbitals of the system. Results are compared with their counterpart in proton-water collisions to gauge the importance of intermolecular interactions in the cross sections.
Ab initio calculation of charge transfer in proton collisions with N2
E. Rozsályi
Total and partial charge transfer cross sections are calculated in collisions of protons with the nitrogen molecule at energies between 0.1 and 10 keV. Ab initio potential energy curves and nonadiabatic couplings have been obtained for a number of N2 bond lengths using a multireference configuration interaction method. The influence of the anisotro...
Ab initio treatment of charge transfer in ion-molecule collisions based on one-electron wave functions
Two simple ab initio methods based on one-electron wave functions are employed to calculate the single-electron capture and single ionization of H2O and CO molecules by ion impact. The anisotropy of the molecular targets is taken into account by using multicenter pseudopotentials to represent the interaction of the active electron with the ionic mo...
Classical treatment of ion-H 2 O collisions with a three-center model potential
B. Pons
We present calculations of cross sections for one- and two-electron processes in collisions of H⁺, He²⁺, and C⁶⁺ with water molecules in the framework of the Franck-Condon approximation. We employ an independent-electron method and a classical trajectory Monte Carlo approach. Anisotropy effects related to the structure of the targ...
Ab Initio Treatment of Charge Exchange in H+ + CH Collisions
E. Bene
Gábor Halász
Ágnes vibók
Marie-Christine Bacchus-Montabonel
Calculations of charge transfer total cross section in proton collisions with CH are presented at collision energies 10 eV < E < 6.25 keV. Two-state calculations using the sudden approximation for rotation and vibration do not show significant differences with respect to the simple Franck-Condon approximation, which is appropriate for energies abov...
Nuevos métodos para el estudio dinamico de colisiones ión-molecula aplicación a los sistemas H2O + Aq+ y CO + C2+
Luis Fernando Errea Ruiz
Unpublished doctoral thesis. Universidad Autónoma de Madrid, Facultad de Ciencias, Departamento de Química. Defense date: 12-11-2010. Bibliography: pp. 100–103.
Influence of nuclear exchange on nonadiabatic electron processes in H++H2 collisions
A Macías
A Riera
H(+)+H(2) collisions are studied by means of a semiclassical approach that explicitly accounts for nuclear rearrangement channels in nonadiabatic electron processes. A set of classical trajectories is used to describe the nuclear motion, while the electronic degrees of freedom are treated quantum mechanically in terms of a three-state expansion of...
Classical calculation of proton collisions with ethylene
H. Getahun
Single electron capture and single ionization total cross sections in collisions of proton with ethylene are calculated for an energy range 25 keV ≤ E ≤ 150 keV, using the classical trajectory Monte Carlo method. Multi-center model potentials are employed to represent the interaction of the active electron on each molecular orbital with the C2H cor...
Isotope effect in ion-atom collisions
Pedro José Barragán
We explain the origin of the unusually large isotopic dependence found in charge-transfer cross sections for H(D,T)⁺+Be collisions. We show that this large effect appears in a semiclassical treatment as a consequence of the mass dependence of the charge-transfer transition probabilities, which is due to the variation of the radial velocity in t...
Resonances in electron-capture total cross sections for C4+ and B5+ collisions with H(1s)
P. Barragán
F. Guzmán
Itzik Ben-Itzhak
Quantal calculations of electron-capture and elastic cross sections have been carried out for collisions of C4+ and B5+ with H(1s) at collision energies 0.00025
Classical three-center model potential calculations for ion-H2O collisions
We present a study of the ionization and capture processes in collisions of H+, He2+ and C6+ with water molecules. Single and total cross sections are obtained in the energy range 20 keV/amu
Ab initio treatment of proton collisions with water and water dimer
Charge transfer and ionization in collisions of H+ with H2O and (H2O)2 is obtained by using simple ab initio methods based on one-electron wave functions and multi-center pseudo-potentials and independent particle model treatments.
Isotopic effects in proton-beryllium collisions
Charge exchange in collisions of H+, D+ and T+ with beryllium is examined at impact energies from 2.5 × 10-4 and 10 eV/amu. The ab initio cross sections show a large isotopic dependence below 0.6 eV/amu that cannot be explain in terms of kinematical effects.
H+ + H2 collisions at low impact energies
We present semiclassical calculations of total cross section for electron capture, vibrational excitation, dissociation and nuclear exchange reactions in H++H2 collisions at energies 5 eV < E < 1 keV. Our results for electron capture agree with close-coupling calculations but not with the recommended data for this system.
Electron capture and nuclear exchange in H+ + H2 collisions at low impact energies
Electron capture, vibrational excitation, dissociation and nuclear exchange in collisions of H+ with H2 are studied at impact energies between 5 eV and 1 keV. Calculations are performed by employing classical trajectories for the nuclear evolution and a three-state expansion of the electronic wavefunction that uses the diatomics in molecules approa...
Calculation of Rate Coefficients for Electron Capture in Collisions of O2+ and N2+ Ions with H
and A. Riera
We present calculations of electron capture cross sections in collisions of O2+ and N2+ with H(1s) for impact energies 0.001 eV < E < 10 keV and the corresponding rate coefficients for temperatures 102 K < T < 105 K. Our molecular close-coupling treatment leads to significant differences from the capture rates usually employed in the modeling of as...
Calculation of total cross sections for ionization and charge transfer in collisions of multicharged ions with water molecules
Classical (CTMC) calculations of single electron capture and single ionization cross sections are carried out for collisions of H+ and C6+ collisions with H2O molecules in the impact energy range 25< E < 5000keV/amu. The calculation employs a multi-center model potential for the interaction of the active electron with the H 2O+ core. The results ag...
State selective electron capture and excitation in proton collisions with Be
An ab initio study of charge exchange and excitation processes in collisions between protons and beryllium atoms is presented. State selective cross sections are obtained for collision energies between 2.5 × 10−7 and 16 keV/amu, using both quantum and semiclassical treatments. A very large isotopic dependence of the charge exchange cross section is...
Dynamical study of the Cs + ( 1 S 0 )+Mg(3 1 S 0 ) non adiabatic collision system in the few keV energy range
Mitch John Sabido
J. de Andrés
J. Sogas
Antonio Aguilar
. The dynamics of collisional processes between Mg atoms and caesium ions is studied using the hemiquantal (HQ) approach with special attention to the collisional channels leading to Mg(3 1P) and Cs(6 2P) states, for which the corresponding emission excitation functions have been previously measured in our laboratory. The radial and angular non-adi...
Asymptotic transitions around conical intersections in ion-diatom collisions
A semiclassical model is devised to shed light on the role of asymptotic transitions around conical intersections in the dynamics of ion-diatom collisions. We consider the fundamental H++H2 case, with vibrationally excited target, in the impact velocity range 10−4≤v≤5.10−2 a.u. The reliability of the sudden approximation for vibration and trajector...
Classical calculation of ionization and electron-capture total cross sections in H(+)+H(2)O collisions
We report total cross sections for single ionization and electron capture in H⁺ collisions with water molecules at impact energies 25 keV < E < 5 MeV. Calculations have been carried out by applying the independent-particle model and the eikonal–classical tra...
Resonances in electron capture total cross sections for ion-H(1s) collisions
L F. Errea
We have calculated charge transfer total cross sections in ion-H(1s) collisions at very low energies (E < 1 eV). These cross sections show a Langevin-type behaviour, although the corresponding transition probabilities are smaller than one, as obtained in the Landau-Zener model. The cross sections exhibit numerous spikes which are related to the exi...
Vibronic treatment of vibrational excitation and electron capture in H++H2 (HD, D2 , ...) collisions at low impact energies
We present ab initio calculations of cross sections for vibrational excitation and electron capture in collisions of H⁺ with H₂ and its isotopical variants at impact energies between 10 eV and 10 keV. Calculations have been carried out by means...
Ab initio total and partial cross sections for electron capture processes in collisions of ground and metastable 14N2+ and 16O2+ with H(1s) are presented. Calculations are performed in a wide energy range from 1x10^-5 to 20 keV/amu, using both quantal and semiclassical treatments, and a molecular basis set that includes 80 and 39 states for NH2+ and...
Influence of the Metastable Ion Content in the Initial Beam in C2++H, H2 and O2++H Collisions
We present ab initio calculations of cross sections for electron capture by C2+ ions in their ground (1S) and metastable (3P) states, from H and H2. We explain the difference between C2+–H2 collisions, where the content of metastable ions is critical, and C2+–H collisions, where it is less important. We also present preliminary results for electro...
State-to-state Vibrational Cross Sections in H+, C2+–H2 Charge Transfer Reactions
A Rojas
The vibrational distribution of H2+ (X2Σg+;v) after charge transfer in H++H2 and C2+(2s2p;3P)+H2 collisions is obtained using the SEIKON approach and ab initio molecular data. We also present H2 vibrational excitation cross sections in H++H2 collisions.
Electron capture in collisions of N2+ and O2+ ions with H(1s) at low impact energies
We present ab initio quantal calculations of electron capture cross sections for collisions of ground and metastable states of ¹⁴N²⁺ and ¹⁶O²⁺ ions with H(1s), at collision energies 10⁻²
A study of conical intersections for the H3+ system
A parametrization of the three asymptotic conical intersections between the energies of the H3(+) ground state and the first excited singlet state is presented. The influence of an additional, fourth conical intersection between the first and second excited states at the equilateral geometry on the connection between the three conical regions is st...
Charge transfer in H2+ - H(1s) collisions
We present an ab initio study of H2+ + H(1s) collisions at H2+ impact energies between 0.4 and 50 keV. Cross sections are obtained within the sudden approximation for rotation and vibration of the diatomic molecule. We have found that anisotropy effects are crucial to correctly describe this system in this energy range.
Study of ab initio molecular data for inelastic and reactive collisions involving the H3 + quasimolecule
A. Macias
The lowest two ab initio potential energy surfaces (PES), and the corresponding nonadiabatic couplings between them, have been obtained for the H₃⁺ system; the molecular data are compared to those calculated with the diatomics-in-molecules (DIM) method. The form of the couplings is discussed in terms of the topology of the molecular structure...
State-selective electron capture in collisions of ground and metastable N^{2+} ions with H (1s)
An ab initio calculation of the electron capture cross sections for collisions of ground and metastable states of N2+ with H(1s) is presented. Total cross sections are evaluated for 14N impact energies from 2×10−3 to 300 keV, using both quantal and semiclassical treatments. The results are compared with experimental and previous theoretical data, a...
Sign-consistent dynamical couplings between ab initio three-center wave functions
A method to evaluate the sign consistency of dynamical couplings between ab initio three-center wave functions was discussed. The method was found to systematically diabatize avoided crossings between two potential surfaces, which also included conical intersections. The delayed overlap matrix (DOM), whose elements are overlaps between the molecular...
Single and double electron capture in N5++H2 collisions at low impact energies
We present ab initio calculations of cross sections for single and autoionizing double electron capture in collisions of N5+ with H2, for impact energies between 0.2 and 10 keV/amu. Calculations have been carried out by means of a close-coupling molecular treatment using the sudden approximation for rotation and vibration of the diatomic molecules....
State-selective electron capture in collisions of ground and metastable O2+ ions with H(1s)
C N Cabello
An ab initio calculation of the electron capture cross sections for collisions of ground and metastable states of N2+ with H(1s) is presented. Total cross sections are evaluated for 14N impact energies from 2×10⁻³ to 300 keV, using both quantal and semiclassical treatments. The results are compared with experimental and previous theoretical data, and...
Calculated rotational and vibrational excitation rates for electron–HeH+ collisions
Baljit K. Sarpal
Jonathan Tennyson
Molecular R-matrix calculations are performed at a range of energies to give rotational and vibrational excitation and de-excitation cross-sections and, hence, rates for electron collisions with HeH+ up to electron temperatures of 20 000 K. Critical electron densities are also given. The rotational calculations include the Coulomb–Born completion o...
Vibrationally resolved charge transfer and ionisation cross sections for ion-H2 (D2,DT,T2) collisions
D. Elizaga
Jimena D Gorfinkiel
P. Sanz
Limit of the vibrational sudden approximation for H++H2 collisions
Vibronic close-coupling calculations of charge transfer and H2 vibrational excitation total cross sections in H+ + H2(¹Σg⁺, ν=0) collisions are presented and compared with experimental data in the energy range 50 eV < E < 2 keV. It is shown that the sudden approx...
Molecular treatment of charge transfer cross sections in N5+ collisions with H2
We present cross sections for electron capture in N5++H2 collisions in the energy range 100 eV/amu ≤ E ≤ 6 keV/amu. We employ a model potential approximation to treat the interaction of the active electron with the cores, and a recently proposed method, which applies the independent particle model to evaluate the Hamiltonian matrix elements. © 2001 John...
Properties and removal of singular couplings at conical intersections
We present an analysis of the characteristics of nonadiabatic couplings due to the existence of conical intersections between potential energy surfaces of triatomic systems in collinear configurations. We discuss the relative merits and performance of four techniques that we tested to remove the singularities, and illustrate our findings for a coni...
Dissociative recombination of NO+: Calculations and comparison with experiment
I. F. Schneider
L Carata
Multichannel quantum defect calculations for NO+ dissociative recombination (DR) for electron energies from threshold to 8 eV are presented. The calculations use electronic energies and autoionization widths of valence states obtained from ab initio R-matrix calculations with the corresponding potential curves calibrated using available spectroscop...
LETTER TO THE EDITOR: H2+ vibrational distribution after single-electron capture in C2+(2s2p; 3P) + H2 collisions
The vibrational distribution of H2+(X 2Σ+g; ν) after charge transfer in C2+(2s2p; 3P) + H2 collisions is obtained ab initio using the semiclassical eikonal method and the sudden approximation. Results are in good agreement with experiment (Leputsch et al 1997 J. Phys. B: At. Mol. Opt. Phys. 30 5009).
An ab initio calculation of electron impact vibrational excitation of NO+
An ab initio calculation, using the R-matrix method, of the cross sections for the electron impact vibrational excitation of NO+(X ¹Σ⁺, v = 0) up to v = 5 is presented. Calculations have been carried out in both an adiabatic and a non-adiabatic approximation. It is shown that these two approximations produce dramatically different results. This is du...
Electron‐impact rotational excitation of CH+
Andrew J. Lim
New coupled-state R-matrix calculations are performed at energies up to 1 eV to give rotational excitation and de-excitation cross-sections for electron collisions with CH+. Rotational excitations with Δ j up to 7 are considered. Transitions with Δ j up to 6 are found to have appreciable cross-sections, those with Δ j = 2 being comparable to (indee...
Classical description of ionization in He2++H collisions
We report a study on electronic distributions obtained from a CTMC treatment of the reaction, which may be used as criteria to improve the quality of pseudostates within the framework of close-coupling molecular methods. By using the concept of molecular energy of classical electrons, an analysis of the mechanism at intermediate energies confirms t...
On the calculation of electron-impact rotational excitation cross sections for molecular ions
Baljit K Sarpal
Methods of computing rotational excitation cross sections for electron collisions with diatomic molecular ions are examined for impact energies up to 5 eV. The HeH and NO ions are used as test cases and calculations are performed at various levels of approximation. Previous studies have all used the Coulomb-Born approximation assuming only dipolar...
Molecular treatment of excitation and charge transfer in Be++H collisions
B Herrero
Pedro J. Sánchez Gómez
A 28-term molecular expansion is employed to calculate total and partial cross sections for charge exchange and excitation in Be++H collisions in the range of impact energies 0.1-25 keV amu-1, which are of predictive value.
Comment on differential cross sections of low-energy electron-hydrogen scattering in a CO2 laser field
A S Dickinson
Francis Robicheaux
Cionga et al have employed the Kramers-Henneberger gauge when performing Floquet close-coupling calculations of electron-hydrogen scattering. It is shown that with their basis the effect of the laser on the target electron is described very inaccurately: their model assumes that the electron attached to the proton oscillates freely in the laser fie...
Ab initio potential energy curves of Rydberg, valence and continuum states of NO
Bound states and quasibound Rydberg states of NO are studied as a function of internuclear distances in the range . R-matrix calculations which include 12 configuration interaction target states in the close-coupling expansion are performed. Rydberg states of up to n = 10 are studied for the , and symmetries of NO. Intruder states, of both valence...
On the validity of the soft-photon approximation for electron-atom collisions in the presence of a laser field
The validity of the Kroll-Watson approximation to treat electron-atom collisions in the presence of a laser field is discussed. Numerical calculations have been carried out using classical mechanics and a model potential to describe electron-helium scattering for conditions of experimental interest. The effect of an additional polarization term, ar... | CommonCrawl |
ROBOMECH Journal
Research Article | Open | Published: 23 May 2018
An experimental study on surface state description by wiping motion for the estimation of floor surface condition using indoor search robot
Koichiro Matsumoto1 &
Kimitoshi Yamazaki2
ROBOMECH Journal, volume 5, Article number: 11 (2018)
In this paper, we aim to establish a novel method for measuring the surface condition of indoor floors. To measure the surface condition, we propose a wiping motion that strokes the target surface while varying the stroking speed. To realize the wiping motion, we developed a wiping device consisting of a 6-axis force sensor, a passive pivot, and a contact plate. In the experiments, the surface condition was measured using four kinds of floor materials and two kinds of liquids. The results confirmed that the resistance force depends on the wiping velocity. From these results, we confirmed the effectiveness of the proposed method and examined quantitative indices for describing the surface state.
Damaged buildings such as factories or plants need to be investigated, but it is dangerous for people to walk into them. It is therefore better to investigate such environments using a teleoperated robot. An operator in a safe place receives environmental information through the robot and can steer it for investigation or object manipulation.
The tasks expected of such a robot are movement to places that are hard for humans to reach and manipulation of objects. To carry out these tasks, the robot needs spatial information such as coordinates or a map, and object information such as the location of rubble or a doorknob.
In previous studies, Ohno et al. [1] built a dense 3D map of a disaster-affected environment using a camera and a laser scanner. The development of robots that manipulate valves, doorknobs, etc., has also been studied [2]. They developed a manipulator with a camera mounted behind its end-effector. With this system, the operator can manipulate a door using the visual information transmitted by the robot.
However, some issues remain in those studies. They assume that the surface the robot will traverse, or the object it will manipulate, is not contaminated by oil or water. In a damaged building such as a factory, a high-viscosity liquid such as oil or a fine powder such as dust may be scattered on the floor. When the robot traverses such a region, the liquid or powder adheres to the contact area. For example, when a wheeled mobile robot enters an oil sump, oil adheres to its wheels. In that state, if the operator makes the robot move quickly or turn suddenly, the robot may slip and fall, which can lead to failure or degraded self-position estimation. Therefore, by measuring the surface condition and presenting the information to the pilot in a remote place, it becomes possible to take measures in advance, such as avoiding that route, which leads to failure avoidance.
The purpose of this study is to establish a novel method to measure the surface condition of the floor. We propose a mechanism that can quantitatively estimate differences in flooring materials, the presence of liquids, the properties of liquids, etc.
We use contact measurement for surface condition measurement. In contact measurement, one possible method is to use a robot equipped with a sensor that measures the frictional force on contact, as shown in Fig. 1. Compared with non-contact measurement, contact measurement has the advantage of obtaining surface information such as hardness, slipperiness, and shape. In this study, we propose a contact measurement method called the wiping motion. The wiping motion sweeps over the surface using a part of the robot body. By changing the wiping conditions, the resistance force from the surface changes, which makes it possible to obtain information about the surface state.
Surface condition measurement robot
The contributions of this study are described below.
Proposal of the wiping motion:
We proposed a simple method of surface condition measurement. Floor materials and sediments can be detected from the frictional force and floor reaction force when the target surface is wiped off.
Device development for wiping motion:
We developed a simple and robust wiping device consisting of a force sensor, a passive joint, and a contact plate.
Investigation of surface state description:
We examined the possibility of describing the surface condition using the relationship between the resistance force, obtained from the friction force and the floor reaction force, and the floor material hardness and liquid viscosity.
Surface condition estimation has been studied using many kinds of devices and methods. Angelova et al. [3] predicted slippery areas based on visual information obtained from a camera and on past slip information. To measure slip, they used the difference between the body speed and the rotation speed of the tires. This method is simple and effective for wheeled mobile robots. However, it is a measurement method that depends on the robot's form of locomotion, so it is not a general method for surface condition measurement. A novel measurement method that does not depend on the robot's form of locomotion is needed.
Regarding studies on surface identification and friction force estimation, the literature [4,5,6] can be mentioned. In these studies, the static friction forces of various road surfaces are measured in advance using spring balances, and the road surface type and frictional force are estimated by judging the road surface from the image appearance and image features in the region. Brandao et al. [6] pointed out that accurately estimating the surface material is important for estimating the frictional force. Floor materials can be quite different even when they have the same appearance. For this reason, there is room for further study of evaluation indices that differ from indices based on image information for each floor material.
In this paper, we propose a relatively general surface state measurement method independent of the robot movement form. In addition, we compare the measured value with the physical properties of the object and investigate the quantitative index used for the surface state description.
Problem settings
We consider a situation in which humans cannot enter because buildings have been damaged by a disaster and toxic gases are being generated inside. In such a situation, we investigate using a teleoperated robot. The robot chooses, as a path on which getting stuck or falling is unlikely, surfaces that remain as flat as possible. In this study, we consider a method to determine the hardness of the flat floor surface, its slipperiness, and the presence of liquid.
Wiping motion
The wiping motion moves a part of the robot so that it strokes the floor surface while applying a load. Figure 2 shows the force components generated when the wiping motion is applied. The wiping motion generates a frictional force, a floor reaction force, and self-excited vibration between the surface and the measurement device. The frictional force appears when the surface and the measurement device rub against each other, and the floor reaction force appears when the floor deforms under the wiping motion. The self-excited vibration is called stick-slip and appears when the system is unstable.
Force components of wiping motion
The frictional force changes depending on the slipperiness of the surface, and the floor reaction force changes depending on the surface hardness of the floor. This means that information on surface hardness and slipperiness for surface description can be acquired by measuring the frictional force and the floor reaction force.
The purpose of this study is to describe the surface condition of the floor with a quantitative index. As mentioned above, the floor reaction force and frictional force change with surface hardness and slipperiness, so we take the approach of using their resultant as the resistance force. With this approach alone, however, it is difficult to distinguish surfaces in different conditions that give the same resistance force. Therefore, we consider a wiping motion in which the velocity of wiping (we call it the wiping velocity) is varied. This motion is based on the idea that the floor reaction force and the dynamic frictional force change depending on the rubbing velocity [7].
The friction force \(F_f\) is now generally understood to consist of an adhesion term \(F_a\) and a digging term \(F_d\). Since the influence of the digging term is tiny compared with the adhesion term, the frictional force is approximated by the adhesion term as follows.
$$\begin{aligned} F_f = F_a + F_d \simeq F_a \end{aligned}$$
Regarding this adhesion term, a conceptual diagram is shown in Fig. 3. The two contacting surfaces are not perfectly smooth; they touch each other only at fine projections, which carry a high load. These protruding portions adhere because of this load, and the shearing force of the adhesion points is considered to be the main source of the frictional force. The adhesion points are thought to grow as the contact time of the surfaces becomes longer, so the friction coefficient is thought to be a function of the sliding speed.
Surface adhesion
Figure 4 shows a conceptual diagram of viscous friction when liquid is present between two faces. Considering the case where there is relative slippage between the two faces in the figure, the frictional force generated by the fluid can be obtained from the Reynolds equation. The friction forces \(F_x\), \(F_y\) applied in the \(x\) and \(y\) directions are shown below.
$$F_x\Big|_{z=0}^{z=h} = \iint \left(\frac{\eta (u_1-u_2)}{h} \mp \frac{h}{2} \frac{dp}{dx}\right) dx\,dy$$
$$F_y\Big|_{z=0}^{z=h} = \iint \left(\frac{\eta (v_1-v_2)}{h} \mp \frac{h}{2} \frac{dp}{dy}\right) dx\,dy$$
\(\eta\) is the average viscosity of the fluid, \(h\) is the distance between the two planes, \(u_i\), \(v_i\) (\(i\)=1, 2) are the surface velocities in the \(x\) and \(y\) directions, and \(p\) is the pressure. Since the velocities \(u_i\), \(v_i\) in the equations depend on the sliding velocity of the surfaces, viscous friction is also a function of the sliding velocity.
Viscous friction
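To get a feel for the orders of magnitude involved, the sketch below evaluates only the shear (Couette) term \(\eta (u_1-u_2)/h\) of the equations above over an assumed contact area; the plate area, film thickness, and viscosities are illustrative assumptions, not values from this paper.

```python
# Minimal sketch (not from the paper): order-of-magnitude estimate of the shear
# term eta * (u1 - u2) / h of the viscous-friction equations above, integrated
# over the contact area; the pressure-gradient term is ignored.
# All numerical values below are assumptions for illustration only.

def viscous_drag(eta_pa_s, film_thickness_m, plate_area_m2, wiping_velocity_m_s):
    """Approximate viscous drag force [N] of a thin liquid film sheared by the plate."""
    shear_rate = wiping_velocity_m_s / film_thickness_m   # (u1 - u2) / h
    shear_stress = eta_pa_s * shear_rate                  # eta * (u1 - u2) / h
    return shear_stress * plate_area_m2                   # integrate over the contact patch

plate_area = 0.03 * 0.10        # assumed 3 cm x 10 cm contact patch [m^2]
film_thickness = 100e-6         # assumed 0.1 mm liquid film [m]
for name, eta in [("water", 1.0e-3), ("salad oil", 6.0e-2)]:   # assumed viscosities [Pa*s]
    for v in [0.1, 0.4, 0.7]:   # wiping velocities used in the experiment [m/s]
        force = viscous_drag(eta, film_thickness, plate_area, v)
        print(f"{name:9s} v={v:.1f} m/s -> F ~ {force:.3f} N")
```

The estimate is only meant to show why the resistance force can be expected to scale with the wiping velocity when a liquid film is present.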
Based on the above, it is predicted that resistance force, which is the resultant force of frictional force and floor reaction force, changes variously depending on the wiping velocity.
Wiping device components
The wiping device needs a simple and robust design because it is assumed to be used at disaster scenes. Therefore, we develop it without elaborate sensors or delicate mechanisms for acquiring the resistance force. For such a design, it is effective to avoid direct contact between the target surface and the sensor, which prevents breakage and contamination of the sensor itself. In other words, some components must be interposed between the sensor and the target surface, even though this may degrade the quality of the sensor data.
Based on the above, we propose a wiping device with the configuration shown in Fig. 5. The main components of this device are a 6-axis force sensor, a passive pivot, a contact plate, and a plate cover. The pivot reduces the influence of slight level differences or irregularities on the floor surface. The contact plate is the part that is grounded so as to be parallel to the floor surface; to obtain a stable frictional force measurement, its bottom surface has a certain area. The plate cover is the part that directly touches the floor surface and is a removable, thin fitted plate. Because the cover is removable, measurement can continue even if liquid or the like adheres to it during the wiping motion. The cover may also be disposable, and a used cover can be temporarily attached to the robot body, taken back, and analyzed for adhering substances.
In creating the wiping device, there are several factors to consider. The frictional force expressing the slipperiness varies with the magnitude of the load on the floor surface, and this load depends on the attachment position of the sensor and the weight of the device. The contact plate should also have a smooth and hard surface: if the surface of the contact plate is rough, the contact area between the target surface and the device decreases. If the plate cover is too soft, the wiping motion causes wear and repeated measurement becomes difficult. Besides wear, self-excited vibration tends to occur because the system becomes unstable when the plate cover is soft. Since this vibration can hinder stable contact with the ground, it is undesirable when we focus on the resistance force. For these reasons, the plate cover should be hard and smooth.
Result and discussion
Trial production for wiping device
Using the elements mentioned in the previous section, we prepared a wiping device of five different shapes. The schematic is shown in Fig. 6, and features of each shape are described below.
Type A::
The rotating shaft position is high, and the contacting surface and the rotating shaft are separated. An aluminum plate with a thickness of 2 mm is added between the force sensor and the ground plate. The angle between the aluminum plate and the ground is an acute angle.
Type B::
It is attached to a horizontally extended frame from the front edge of the robot, and the angle made by the aluminum plate and the ground is close to a right angle.
Type C::
The rotation axis position is attached near the ground, and the angle made by the aluminum plate and the ground is obtuse.
Type D400::
A shape in which an aluminum plate is eliminated and a force sensor is directly attached to a contact plate. It carries 400 g of weight.
Type D800::
It is the same shape as Type D400, but with a weight of 800 g installed.
In Type A to C, the force sensor and the contact plate were connected via an aluminum plate. When a wiping motion is performed, a resistance force is applied to the contact plate, and the sensor measures this resistance through the bending of the metal plate. These three types differ in the angle the aluminum plate makes with the ground and in the position of the rotary shaft. Because of this difference, the direction of the tangential force around the rotation axis of the contact plate differs, and so does the load applied to the ground during the wiping motion. In Type A and Type C, the angle between the ground and the metal plate is acute and obtuse, respectively. In this way, we expected the deflection of the metal plate to be smaller than with a right angle, so that the unnecessary force which causes stick-slip vibration would not be applied.
Five kinds of wiping device. a Type A, b Type B, c Type C, d Type D400/800
In Type D, an arbitrary weight can be mounted instead of an aluminum plate, making it possible to adjust the load applied to the ground by the weight.
Experimental settings
The experiment was carried out to verify the effectiveness of the wiping motion for the quantitative description of a surface condition. First, four kinds of flooring materials with different physical properties were selected. Liquids with different viscosities, water and oil, were selected and thinly applied to each floor material. The wiping device was installed on the front of a wheeled mobile robot, and force data were collected while the robot moved forward. The floor materials used in this experiment were chosen from those often used indoors. The selected floor materials, shown in Fig. 7, are PVC sheet (PVC), linoleum (Linoleum), acrylic board (Acrylic), and stainless board (Stainless). Table 1 lists the surface hardness of each floor material, measured by the scratch hardness test (pencil method) defined in JIS K5600-5-4. The liquids used in this experiment are water and salad oil; their viscosities are given in Table 2. The liquids were applied only over the width of the contact plate and the surface swept by the wiping motion, so the liquid did not affect the robot's wheels. The contact plate was made of ABS resin; its shape is shown in Fig. 8. It is a rectangular parallelepiped with curved surfaces on its bottom, front, and rear faces. A stainless steel cover with a thickness of 0.2 mm was attached as a hard and smooth material.
Flooring materials. a PVC, b linoleum, c acrylic, d stainless
Table 1 Floor materials list
Table 2 Liquids list
Sectional view of contact plate
Experiment of surface condition measurement
For each flooring material, the surface condition measurement using the wiping motion was performed. The state of each flooring material was set to the following three states, and the resistance force in each state was measured.
Clean: The surface condition without liquid
Water: The surface is covered with water
Salad oil: The surface is covered with salad oil
With regard to the wiping velocity, the moving speed of the robot was changed from 0.1 to 0.7 m/s in increments of 0.1 m/s.
Comparison of measurement results
Figure 9 shows the measurement results of each device in the Clean state. The graphs are drawn as box-whisker plots, with the horizontal axis representing the wiping velocity and the vertical axis representing the resistance force. Each resistance value in the results is the median value at that wiping velocity. The magnitude of the resistance force \(F_r\) is obtained from the loads in the sensor's \(F_x\) and \(F_z\) directions shown in Fig. 6 by the following formula.
$$\begin{aligned} F_r = \sqrt{F_x^2+F_z^2} \end{aligned}$$
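A minimal sketch of this post-processing is shown below; the array-based interface is an assumption for illustration, not the authors' code.

```python
import numpy as np

# Minimal sketch (not the authors' code): the resistance force
# F_r = sqrt(Fx^2 + Fz^2) is computed per sample, and each wiping-velocity run
# is summarized by its median, as in the box-whisker plots of Fig. 9.
def resistance_force(fx, fz):
    """Per-sample resistance magnitude from the sensor's Fx and Fz loads."""
    fx = np.asarray(fx, dtype=float)
    fz = np.asarray(fz, dtype=float)
    return np.sqrt(fx ** 2 + fz ** 2)

def median_resistance(fx, fz):
    """Median resistance over one wiping run at a fixed wiping velocity."""
    return float(np.median(resistance_force(fx, fz)))
```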
With Type A, the resistance on the PVC flooring decreased as the wiping velocity increased. For the other four devices, the resistance force increased with increasing velocity. Therefore, the wiping device had to be selected from among the four types Type B to Type D800.
Resistance force for each device. a Type A clean, b Type B clean, c Type C clean, d Type D400 clean, e Type D800 clean
The results for Type B to Type D800 can be separated into two groups: (I) the resistance is clearly different for each flooring material, and (II) there is little difference in resistance force. Type B and Type D800 belong to group (I), and Type C and Type D400 belong to group (II). This difference arises from the difference in the load applied to the floor. Group (I) devices are the more suitable wiping devices, because in group (II) differences in surface condition do not appear in the resistance force.
The floor surfaces coated with water or salad oil were subjected to the wiping motion, and the resistance force was measured with the devices of group (I). Figure 10 shows the resistance force when the wiping motion was applied to the liquid-coated floors using Type B and Type D800; the graph labels are the same as in Fig. 9. Comparing the Type B and Type D800 results for Water, the Type B results show only a very small difference in resistance force between floor materials. From these results, Type D800 is the more suitable device for the wiping motion.
Resistance force deference. a Type B clean, b Type B water, c Type B salad oil, d Type D800 clean, e Type D800 water, f Type D800 salad oil
Consideration of surface state description
We summarize the results of Type D800 shown in Fig. 10d–f.
Clean::
Increasing trend with respect to wiping velocity. Softer flooring material has higher resistance.
Water::
Decrease tendency with respect to wiping velocity. Softer flooring material has higher resistance.
Salad oil::
Increasing trend with respect to wiping velocity. Regardless of the hardness of the flooring material, the resistance force is almost the same value.
Here, using the results obtained from Type D800, the hardness of the floor material is compared with the magnitude of the resistance force. Figure 11 plots the hardness of the flooring material on the horizontal axis and the resistance force on the vertical axis. The graph shows that the harder the flooring material, the smaller the resistance force, regardless of the surface slipperiness. That is, in surface state measurement, information on hardness can be represented by the magnitude of the resistance force.
Average resistance force and pencil hardness
From the results of previous experiments, the following were found about the hardness and the slipperiness of the flooring materials expressing the surface condition.
Hardness of floor::
It is related to the magnitude of the resistance force: the lower the resistance force, the harder the floor material.
Slipperiness::
It appears as the trend of the resistance force with respect to the wiping velocity.
From this result, we propose average resistance force value and average resistance force variation as the feature quantity for surface state description. The average resistance force value (\(R_{avg}\)) is the average value of the resistance value at each wiping velocity, as represented by the following expression (5). The average resistance force variation (\(V_{avg}\)) is the average value of resistance change amount for each change in wiping velocity and is expressed by the following Eq. (6).
$$\begin{aligned}&R_{avg}=\frac{1}{N}\sum _{i=1}^{N} f_{v_i} \end{aligned}$$
$$\begin{aligned}&V_{avg}=\frac{1}{N-1}\sum _{i=1}^{N-1}(f_{v_{i+1}}-f_{v_i}) \end{aligned}$$
\(f_{v_i}\) in the formulas is the median of the resistance force at each wiping velocity \(v_i\), and \(N\) is the number of wiping velocities. (In this case, \(N\)=7, since the wiping velocity was set to 0.1–0.7 m/s.)
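The sketch below shows how these two indices could be computed from the per-velocity medians; the function name and the example medians are assumptions for illustration only.

```python
# Minimal sketch (assumed interface): the two surface-description indices of
# Eqs. (5) and (6), computed from the median resistance f_{v_i} at each wiping
# velocity v_i (ordered by increasing velocity).
def surface_indices(median_resistance_by_velocity):
    f = list(median_resistance_by_velocity)
    n = len(f)
    r_avg = sum(f) / n                                             # Eq. (5): average resistance force value
    v_avg = sum(f[i + 1] - f[i] for i in range(n - 1)) / (n - 1)   # Eq. (6): average change per velocity step
    return r_avg, v_avg

# Hypothetical medians at v = 0.1, 0.2, ..., 0.7 m/s:
r_avg, v_avg = surface_indices([2.1, 2.3, 2.4, 2.6, 2.8, 2.9, 3.1])
print(r_avg, v_avg)
```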
Figure 12 shows the above index of the result of Type D800. In this graph, the average resistance force variation is plotted on the horizontal axis and the average resistance value is taken on the vertical axis.
Each marker refers to a floor material. As can be seen from these results, in the three states of Clean, Water, and Salad oil, the markers fall into areas grouped by floor material. This means that the variation of the data over the trials of the wiping motion is small. At the same time, the position of each marker depends strongly on the surface condition. These results suggest that the average resistance force variation and the average resistance force value work effectively as feature quantities for surface description.
Variation and value of the resistance force
In this paper, we proposed a wiping motion that makes it possible to measure the surface condition in terms of surface hardness and slipperiness. Experiments showed that the wiping motion works effectively for surface state measurement, and we proposed indices for surface state description.
We will summarize what we predicted in this paper and what we found through experiments below.
What we predicted in this paper:
The surface condition can be measured from the resistance force generated by wiping motion
The resistance force depends on the wiping velocity, since the dynamic frictional force and the floor reaction force, components of the force, are dependent on the rubbing speed
The wiping device can be configurable with simple design consist of the multi-axis force sensor and the passive pivot
Things found by experiment:
Possibility to measure surface condition by wiping motion
The resistance force changes depending on the wiping velocity, the surface hardness of the floor, and the kind the liquid applied to the floor
The lower the surface hardness, the higher the resistance force
The shape best suited for the wiping motion is one without an aluminum plate, such as Type D.
Future work includes downsizing the wiping device, measuring the surface condition at lower wiping velocities, and examining the surface state estimation method. Analysis of the vibration information in the measurement data and investigation of the dependence on the robot posture are also future work.
Ohno K, Takenaka E, Nagatani K, Tadokoro S, Koyanagi E, Yoshida T (2011) 3-D shape measurement of disaster environment using remote-controlled tracked vehicle with laser scanner. IPSJ SIG-Comput Vision Image Media CVIM 2011(176):1–8 (domestic)
Toda K, Yamamoto H, Shimizu M, Kodachi T, Yoshida T, Nishimura T, Furuta T (2014) Camera arm system for disaster response robots (2nd report: a collision protection mechanism for real-world missions). In: The 3rd international conference on design engineering and science (ICDES2014), pp 80–85
Angelova A, Matthies LH, Helmick DM, Perona P (2007) Learning and prediction of slip from visual information. J Field Robot Special Issue Space Robot 24(3):205–231
Goto T, Tamura H (2011) Estimation system of coefficient of friction by photo image for mobile robot. Forum Inf Technol 10(3):301–304 (domestic)
Tamura H, Kambayashi Y (2016) Estimation of coefficient of static friction of surface by analyzing photo images. Intell Decis Technol 2016(57):15–26
Brandao M, Hashimoto K, Takanishi A (2016) Friction from vision: a study of algorithmic and human performance with consequences for robot perception and teleoperation. In: 2016 IEEE-RAS international conference on humanoid robots
Yamamot Y, Kaneda M (2015) Tribology, vol 2. Ohmsha, Tokyo (in Japanese)
All authors equally contributed to develop the method. KM conducted device development, the experiments, analyzed data and wrote the manuscript. KY devised the proposed method. KY supervised the research. Both authors read and approved the final manuscript.
This study has been done as the part of the research program ImPACT: tough robotics challenge.
Interdisciplinary Graduate School of Science and Technology, Shinshu University, 4-17-1, Wakasato, Nagano, Nagano, Japan
Koichiro Matsumoto
Mechanical Systems Engineering, The Faculty of Engineering, Shinshu University, 4-17-1, Wakasato, Nagano, Nagano, Japan
Kimitoshi Yamazaki
Correspondence to Koichiro Matsumoto.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Surface condition
Dynamic frictional force
Floor reaction force
23rd Robotics Symposia
Analysis of Material Properties under Bending Load
Stephen Yonke
\documentclass[prb,preprint]{revtex4-1}
\usepackage[colorinlistoftodos]{todonotes}
\title{Analysis of Material Properties under Bending Load}
\author{Stephen R. Yonke, with Johan Johns, Niya Taylor, Andrew Bothun, Demis Thomas, and Michael Duncan}
\email{[email protected]}
\affiliation{MMAE 372 Friday Lab, Dr. Sammy Tin}
\date{22 January 2016}
\begin{abstract}
In this experiment, we attempt to better understand how materials properties are tested. We tested a number of simple beams of different materials under a stress. The bending of the materials allowed for us to calculate the Poisson's Ratio and elastic moduli for each material. From this, we were able to not only compare materials but also methods of measuring elasticity. Despite some error in our results, which can be explained by the scale of our measurements in relation to the stiffness of certain materials, we find both strain gauges and equations of cantilever to be appropriate measurement techniques for measuring the elastic modulus of simple beams.
\end{abstract}
\section{Introduction}
When any material is subjected to a force, that force is distributed throughout the total area of the object. We refer to this distribution of force on the material as stress, and is defined in Equation 1. Stress, as a function of force and area, will be measured in pounds per inches squared, or psi.
\begin{equation}
\sigma = \frac{F}{A}
\end{equation}
When stress is applied on an object, the object must react accordingly. Thus, when force is applied to a beam, the beam bends. This bending, or any reaction to a force across a material, is called strain \cite{nasa}. The nominal tensile strain is defined as the extension of a material compared to its length. We represent this concept as $\epsilon$. Stressed materials do not, however, only strain in the lateral direction. Any lateral strain is accompanied by tensile strain. The ratio between the two is called the Poisson's ratio, and is expressed as $\nu$ in Equation 2.
\begin{equation}
\nu = \frac{\epsilon_{lateral}}{\epsilon_{tensile}}
\end{equation}
Now that we know how a material will react to stress compared to itself, it is possible for us to define how different materials will act to the same stress. We call this the elastic modulus, and is defined as the stress on an object versus the strain it experiences. This varies for every material, allowing us to tailor our material choices to the needs of the application. The elastic modulus, in terms of stress and strain, is shown in Equation 3.
\begin{equation}
E = \frac{\sigma}{\epsilon}
\end{equation}
In this experiment, we will attempt to determine the elastic modulus three ways. In the first method, the stress and strain will be determined, plotted, and the slope of the plot will give us what is called the Young's Modulus. The final two methods stem from Equation 3. Using data taken from our materials, we will use Equation 4 to solve for the stress \cite{lab}. In this equation, \textit{F} is force applied, \textit{L$_i$} is the length from the strain measuring guide to the applied load, and \textit{b} and \textit{h} represent the base and height of the beam measured.
\begin{equation}
\sigma = \frac{6 F L_i}{b h^2}
\end{equation}
Alternatively, we will use Equation 5, which allows us to determine the modulus directly from the displacement of the material being stressed. This gives us our third method of calculating what we will call the average modulus. \textit{L} represents the length from the support to the load, while $\delta L$ represents the displacement of the beam due to the load.
\begin{equation}
E = \frac{4 F L^3}{\delta L b h^3}
\end{equation}
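% Illustrative check of Equation 5, added here as a comment so it is not part of
% the reported results; all load and deflection values below are assumptions for
% demonstration only. For the aluminum beam of Table I (b = 1.09 in, h = 0.5 in),
% taking an assumed lever arm L = 11.75 in, an assumed load F = 10 lbf, and an
% assumed tip displacement dL = 0.05 in gives
%   E = 4*F*L^3 / (dL*b*h^3) = 4*10*11.75^3 / (0.05*1.09*0.5^3) ~ 9.5e6 psi,
% the same order of magnitude as the aluminum moduli reported in Table II.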
\section{Materials and Procedure}
In this experiment, four materials were tested by the MMAE 372 Friday Lab group. Glass fiber reinforced polymer (GFRP), wood, steel and aluminum were all put under similar strain conditions meant to stress them across the spectrum of their elastic load limit in order to collect the necessary stress, strain, and displacement data. The height, base, total length, and length from strain gauge to load was measured for each beam, and is displayed in Table 1.
\begin{table}[h]
\begin{tabular}{| l | c | c | c | c |}
\hline
Sample & Height (in) & Base (in) & Total Length (in) & Strain Gauge to Load (in) \\ \hline
GFRP & 0.2528 & 1.04 & 12 & 11.47 \\
Wood & 0.5225 & 0.983 & 12.3 & 11.8 \\
Aluminum & 0.5 & 1.09 & 12.25 & 11.75 \\
Steel & 0.184 & 1.1 & 12.23 & 11.73 \\
\hline
\end{tabular}
\caption{Dimensional data for four materials tested}
\end{table}
In order to measure stress, strain, and displacement, the material is fixed to a support wall and subjected to a skyward load. Strain gauges were placed along the top and bottom to measure transverse and longitudinal strain. A diagram of the set up is pictured in Figure 1. Figure 2 shows a picture of the experimental set-up, with the displacement measurement device visible. Three pieces of data could be taken under a given load in this set up: the load in pounds-force, the displacement of the beam in inches, and the four-axis strain.
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{figure_1.jpg}
\caption{Diagram of the system}
\end{figure}

\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{photo2.jpg}
\caption{Picture of the system, with displacement register. Wood sample}
\end{figure}
Each sample was tested at fifteen different loads across its elastic spectrum, in order to give us the widest data set possible. From this data, calculations were made in accordance with the before mentioned equations, providing information on the Poisson's Ratio and elastic moduli.
\section{Results}
To determine the results of the experiment, Poisson's Ratio was first calculated by plotting longitudinal and transverse strain. A fit of the data was taken, and from this fit, a Poisson's ratio could be determined. The plots and fit equations, of which the slope is the Poisson's Ratio, is pictured in Figure 3.
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{poissons.JPG}
\caption{Poisson's Ratio, top (blue) and bottom (orange) strain considered, for four materials. The slope indicated by the fit equations represents Poisson's Ratio.}
\end{figure}
Figure 4 represents the results of determining the Young's Modulus. The difference between the Young's Modulus and other elastic moduli is that Young's is the slope of a stress-strain curve. Thus, longitudinal strain was plotted against stress, measured both on the top and bottom of the materials, and again, a fit was established from this data.
\begin{figure}[!ht]
\includegraphics[width=0.5\textwidth]{stressstrain.JPG}
\caption{Stress versus Strain for all four materials, the Young's Modulus being the slope of the fit equation of each line.}
\end{figure}
Our final two methods of calculation came in using Equation 3, 4, and 5 to determine the average elastic modulus by strain (Equation 4 into Equation 5) and by displacement (Equation 6). These results, based not on fit but on calculation of the data given by equations and then averaged, is summed with all our findings in Table II.
\begin{table}[h]
\begin{tabular}{| l | c | c | c | c | c | c |}
\hline
Sample & Poisson's & Poisson's & \textit{E} - Top & \textit{E} - Bottom & Average \textit{E} by & Average \textit{E} by \\
 & Ratio - Top & Ratio - Bottom & (psi X 10$^6$) & (psi X 10$^6$) & Strain (psi X 10$^6$) & Displacement (psi X 10$^6$) \\ \hline
GFRP & -0.1187 & -0.1259 & 3.141 & 3.1798 & 3.08 & 2.80 \\
Wood & -0.483 & -0.229 & 2.2656 & 2.4156 & 2.30 & 1.72 \\
Aluminum & -0.3065 & -0.2934 & 8.7938 & 8.4965 & 8.97 & 8.02 \\
Steel & -0.2287 & -0.2357 & 28.774 & 28.852 & 30.50 & 69.51 \\
\hline
\end{tabular}
\caption{Poisson's Ratio and Elastic Modulus Results for all materials}
\end{table}
\section{Discussion}
From the determined results of the experiment, three conclusions can be made:
First, Poisson's Ratio, or the ratio of transverse strain to longitudinal strain, is rather constant. We find that the solutions are nearly identical in three cases (GFRP: factor of .94, Aluminum: .96, Steel: .97), with the outlier being wood at a factor of .47 between top and bottom. The plot of Poisson's Ratio makes this clear, and we attribute this to the possibility that wood, being the most porous and least dense of all materials surveyed, may possess unique properties for distributing strain across itself. Perhaps the strain "dissipates" into the porous space of the wood. Its natural flexibility allows it to bend less uniformly. For the metal and polymer samples, however, we find these more brittle materials to strain quite uniformly.
Second, we see that the four different materials possess a wide range of elastic modulus values, as to be expected. What we traditionally think of as the "stronger" materials (aluminum, steel) exhibited greater resistance to stress. Metals, particularly steel, which has likely undergone a more rigorous hardening process than aluminum, possessed a modulus nearly 10 times as great. This is why steel is so widely used in rigid construction, while aluminum is more likely to be used in mass-production capacities where resistance to stress is not an issue.
Finally, we observe an interesting discrepancy in the relationship between the Young's Modulus (\textit{E}) and
the two methods for calculating the average elastic modulus. We consider Young's Modulus the most accurate measure if we assume two things: that the linear fit of the data takes the stronger average, and that the information regarding the maximum elastic load was correct and not exceeded. If this is correct, we must judge the strain-based average elastic modulus and the displacement-based average elastic modulus against the fit of the Young's Modulus. We find that in almost all cases, displacement-based calculations were less accurate, generally returning too low a value, or in the case of steel, too high. A source of error for the displacement method may be that as the beams bend more and more, as steel did, the one-axis displacement gauge is incapable of measuring the true change in displacement. This, in conjunction with the knowledge that the strain gauge was capable of measuring at a level of specificity far beyond the displacement gauge, allows us to conclude that strain data and Equation 4 are a superior means of calculating the average elastic modulus.
\section{Conclusions}
In conclusion, this experiment was intended to provide familiarization with stress, strain, and displacement measuring methods and equipment, by calculating and making observations of the Poisson's Ratio and the different methods of determining the elastic modulus of different materials. The materials in question all performed rather consistently, as shown by Figures 3 and 4. The results, as shown in Table II, are generally in accordance with our perceptions of the materials tested, with exceptions likely due to measurement equipment limits or the natural properties of the samples tested. We ultimately find that Poisson's Ratio generally remains constant across a beam of the tested size from top to bottom, and when using the Young's Modulus as a benchmark for the elastic modulus, strain-based calculations provide the next most desirable average elastic modulus. Further experiment should be done to further test the displacement properties of steel, and theories regarding the porous nature of wood.
\begin{thebibliography}{5}
\bibitem{nasa} "Engineering Materials I" Fourth Edition, Elsevier Publishing, Michael Ashby and David Jones, pages 30-36, Accessed 29 January 2016
\bibitem{lab} MMAE 372 Lab 1 Instructions, IIT, Dr. Sammy Tin, Accessed 22 January 2016
\end{thebibliography}

\end{document}
Salil Vadhan
Vicky Joseph Professor of Computer Science and Applied Mathematics
Harvard College Professor
This portion of the site is new and papers are still being added.
For a more complete list of my publications, please see my former site, Digital Access to Scholarship at Harvard (DASH), the DBLP Computer Science Bibliography, the Privacy Tools Project, and/or my CV.
Copyright Notice: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder. Also, some of these works have been submitted for publication. Copyright may be transferred without further notice and this version may no longer be accessible.
Pseudorandomness
Doron, Dean, Jack Murtagh, Salil Vadhan, and David Zuckerman. Spectral sparsification via bounded-independence sampling. Electronic Colloquium on Computational Complexity (ECCC), TR20-026, 2020.
v1, 26 Feb 2020: https://arxiv.org/abs/2002.11237
We give a deterministic, nearly logarithmic-space algorithm for mild spectral sparsification of undirected graphs. Given a weighted, undirected graph \(G\) on \(n\) vertices described by a binary string of length \(N\), an integer \(k \leq \log n \) and an error parameter \(\varepsilon > 0\), our algorithm runs in space \(\tilde{O}(k \log(N \cdot w_{max}/w_{min}))\) where \(w_{max}\) and \(w_{min}\) are the maximum and minimum edge weights in \(G\), and produces a weighted graph \(H\) with \(\tilde{O}(n^{1+2/k} / \varepsilon^2)\) expected edges that spectrally approximates \(G\), in the sense of Spielman and Teng [ST04], up to an error of \(\varepsilon\).
Our algorithm is based on a new bounded-independence analysis of Spielman and Srivastava's effective resistance based edge sampling algorithm [SS08] and uses results from recent work on space-bounded Laplacian solvers [MRSV17]. In particular, we demonstrate an inherent tradeoff (via upper and lower bounds) between the amount of (bounded) independence used in the edge sampling algorithm, denoted by \(k\) above, and the resulting sparsity that can be achieved.
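For orientation, the sketch below implements the classical randomized effective-resistance edge sampling of Spielman and Srivastava [SS08] that this paper analyzes under bounded independence; it is not the paper's deterministic small-space algorithm, and the dense pseudoinverse and the constant in the sample count are illustrative choices only.

```python
import numpy as np

def sparsify_by_effective_resistance(edges, weights, n, eps, seed=0):
    """Sample q = O(n log n / eps^2) edges with probability proportional to
    weight * effective resistance, reweighting each sampled copy by 1/(q*p_e)."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=float)
    # Build the weighted Laplacian L = D - A.
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    Lpinv = np.linalg.pinv(L)  # dense pseudoinverse: fine for small illustrative graphs
    # Effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v).
    reff = np.array([Lpinv[u, u] + Lpinv[v, v] - 2.0 * Lpinv[u, v] for u, v in edges])
    p = weights * reff
    p /= p.sum()
    q = int(np.ceil(9 * n * np.log(max(n, 2)) / eps ** 2))
    H = {}
    for idx in rng.choice(len(edges), size=q, p=p):
        e = edges[idx]
        H[e] = H.get(e, 0.0) + weights[idx] / (q * p[idx])
    return H  # sparse reweighted graph, spectrally close to G with high probability
```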
Vadhan, Salil. "Computational entropy." In Providing Sound Foundations for Cryptography: On the Work of Shafi Goldwasser and Silvio Micali (Oded Goldreich, Ed.), 693-726. ACM, 2019. Publisher's VersionAbstract
In this survey, we present several computational analogues of entropy and illustrate how they are useful for constructing cryptographic primitives. Specifically, we focus on constructing pseudorandom generators and statistically hiding commitments from arbitrary one-way functions, and demonstrate that:
The security properties of these (and other) cryptographic primitives can be understood in terms of various computational analogues of entropy, and in particular how these computational measures of entropy can be very different from real, information-theoretic entropy.
It can be shown that every one-way function directly exhibits some gaps between real entropy and the various computational entropies.
Thus we can construct the desired cryptographic primitives by amplifying and manipulating the entropy gaps in a one-way function, through forms of repetition and hashing.
The constructions we present (which are from the past decade) are much simpler and more efficient than the original ones, and are based entirely on natural manipulations of new notions of computational entropy. The two constructions are "dual" to each other, whereby the construction of pseudorandom generators relies on a form of computational entropy ("pseudoentropy") being larger than the real entropy, while the construction of statistically hiding commitments relies on a form of computational entropy ("accessible entropy") being smaller than the real entropy. Beyond that difference, the two constructions share a common structure, using a very similar sequence of manipulations of real and computational entropy. As a warmup, we also "deconstruct" the classic construction of pseudorandom generators from one-way permutations using the modern language of computational entropy.
This survey is written in honor of Shafi Goldwasser and Silvio Micali.
Ahmadinejad, AmirMahdi, Jonathan Kelner, Jack Murtagh, John Peebles, Aaron Sidford, and Salil Vadhan. "High-precision estimation of random walks in small space." arXiv:1912.04525 [cs.CC], 2019.
In this paper, we provide a deterministic \(\tilde{O}(\log N)\)-space algorithm for estimating the random walk probabilities on Eulerian directed graphs (and thus also undirected graphs) to within inverse polynomial additive error \((ϵ = 1/\mathrm{poly}(N)) \) where \(N\) is the length of the input. Previously, this problem was known to be solvable by a randomized algorithm using space \(O (\log N)\) (Aleliunas et al., FOCS '79) and by a deterministic algorithm using space \(O (\log^{3/2} N)\) (Saks and Zhou, FOCS '95 and JCSS '99), both of which held for arbitrary directed graphs but had not been improved even for undirected graphs. We also give improvements on the space complexity of both of these previous algorithms for non-Eulerian directed graphs when the error is negligible \((ϵ=1/N^{ω(1)})\), generalizing what Hoza and Zuckerman (FOCS '18) recently showed for the special case of distinguishing whether a random walk probability is 0 or greater than ϵ.
We achieve these results by giving new reductions between powering Eulerian random-walk matrices and inverting Eulerian Laplacian matrices, providing a new notion of spectral approximation for Eulerian graphs that is preserved under powering, and giving the first deterministic \(\tilde{O}(\log N)\)-space algorithm for inverting Eulerian Laplacian matrices. The latter algorithm builds on the work of Murtagh et al. (FOCS '17) that gave a deterministic \(\tilde{O}(\log N)\)-space algorithm for inverting undirected Laplacian matrices, and the work of Cohen et al. (FOCS '19) that gave a randomized \(\tilde{O} (N)\)-time algorithm for inverting Eulerian Laplacian matrices. A running theme throughout these contributions is an analysis of "cycle-lifted graphs," where we take a graph and "lift" it to a new graph whose adjacency matrix is the tensor product of the original adjacency matrix and a directed cycle (or variants of one).
ARXIV 2019.pdf
Murtagh, Jack, Omer Reingold, Aaron Sidford, and Salil Vadhan. "Deterministic approximation of random walks in small space." In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019), Dimitris Achlioptas and László A. Végh (Eds.). Vol. 145. Cambridge, Massachusetts (MIT) : Leibniz International Proceedings in Informatics (LIPIcs), 2019. Publisher's VersionAbstract
Version History: v1, 15 Mar. 2019: https://arxiv.org/abs/1903.06361v1
v2 in ArXiv, 25 Nov. 2019: https://arxiv.org/abs/1903.06361v2
Publisher's Version (APPROX-RANDOM 2019), 20 Sep 2019:
http://drops.dagstuhl.de/opus/volltexte/2019/11257/
We give a deterministic, nearly logarithmic-space algorithm that given an undirected graph \(G\), a positive integer \(r\), and a set \(S\) of vertices, approximates the conductance of \(S\) in the \(r\)-step random walk on \(G\) to within a factor of \(1+ϵ\), where \(ϵ > 0\) is an arbitrarily small constant. More generally, our algorithm computes an \(ϵ\)-spectral approximation to the normalized Laplacian of the \(r\)-step walk. Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient) and randomized (while ours is deterministic) for the case of even \(r\) (while ours works for all \(r\)). Along the way, we provide some new results that generalize technical machinery and yield improvements over previous work. First, we obtain a nearly linear-time randomized algorithm for computing a spectral approximation to the normalized Laplacian for odd \(r\). Second, we define and analyze a generalization of the derandomized square for irregular graphs and for sparsifying the product of two distinct graphs. As part of this generalization, we also give a strongly explicit construction of expander graphs of every size.
APPROX-RANDOM 2019 ArXiv2019.pdf ArXiv 2019 v2.pdf
Agrawal, Rohit, Yi-Hsiu Chen, Thibaut Horel, and Salil Vadhan. "Unifying computational entropies via Kullback-Leibler divergence." In Advances in Cryptology: CRYPTO 2019, A. Boldyreva and D. Micciancio, (Eds), 11693:831-858. Springer Verlag, Lecture Notes in Computer Science, 2019. Publisher's VersionAbstract
arXiv, first posted Feb 2019, most recently updated Aug 2019: https://arxiv.org/abs/1902.11202
Publisher's Version: https://link.springer.com/chapter/10.1007%2F978-3-030-26951-7_28
We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, hardness in relative entropy unifies the latter two notions of computational entropy and sheds light on the apparent "duality" between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).
CRYPTO 2019.pdf ArXiv2019.pdf
Raghunathan, Ananth, Gil Segev, and Salil P. Vadhan. "Deterministic public-key encryption for adaptively-chosen plaintext distributions." Journal of Cryptology 31, no. 4 (2018): 1012-1063. Publisher's VersionAbstract
Version History: Preliminary versions in EUROCRYPT '13 and Cryptology ePrint report 2013/125.
Bellare, Boldyreva, and O'Neill (CRYPTO '07) initiated the study of deterministic public-key encryption as an alternative in scenarios where randomized encryption has inherent drawbacks. The resulting line of research has so far guaranteed security only for adversarially-chosen plaintext distributions that are independent of the public key used by the scheme. In most scenarios, however, it is typically not realistic to assume that adversaries do not take the public key into account when attacking a scheme.
We show that it is possible to guarantee meaningful security even for plaintext distributions that depend on the public key. We extend the previously proposed notions of security, allowing adversaries to adaptively choose plaintext distributions after seeing the public key, in an interactive manner. The only restrictions we make are that: (1) plaintext distributions are unpredictable (as is essential in deterministic public-key encryption), and (2) the number of plaintext distributions from which each adversary is allowed to adaptively choose is upper bounded by \(2^p\), where \(p\) can be any predetermined polynomial in the security parameter. For example, with \(p=0\) we capture plaintext distributions that are independent of the public key, and with \(p=O(s \log s)\) we capture, in particular, all plaintext distributions that are samplable by circuits of size \(s\).
Within our framework we present both constructions in the random-oracle model based on any public-key encryption scheme, and constructions in the standard model based on lossy trapdoor functions (thus, based on a variety of number-theoretic assumptions). Previously known constructions heavily relied on the independence between the plaintext distributions and the public key for the purposes of randomness extraction. In our setting, however, randomness extraction becomes significantly more challenging once the plaintext distributions and the public key are no longer independent. Our approach is inspired by research on randomness extraction from seed-dependent distributions. Underlying our approach is a new generalization of a method for such randomness extraction, originally introduced by Trevisan and Vadhan (FOCS '00) and Dodis (PhD Thesis, MIT, '00).
JCRYPTOL2018.pdf EUROCRYPT2013.pdf IACR2013.pdf
Chen, Yi-Hsiu, Mika Goos, Salil P. Vadhan, and Jiapeng Zhang. "A tight lower bound for entropy flattening." In 33rd Computational Complexity Conference (CCC 2018), 102:23:21-23:28. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik: Leibniz International Proceedings in Informatics (LIPIcs), 2018. Publisher's VersionAbstract
Version History: Preliminary version posted as ECCC TR18-119.
We study entropy flattening: Given a circuit \(C_X\) implicitly describing an n-bit source \(X\) (namely, \(X\) is the output of \(C_X \) on a uniform random input), construct another circuit \(C_Y\) describing a source \(Y\) such that (1) source \(Y\) is nearly flat (uniform on its support), and (2) the Shannon entropy of \(Y\) is monotonically related to that of \(X\). The standard solution is to have \(C_Y\) evaluate \(C_X\) altogether \(\Theta(n^2)\) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: Any circuit \(C_Y\) for entropy flattening that repeatedly queries \(C_X\) as an oracle requires \(\Omega(n^2)\)queries.
Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [12, 22, 13, 6, 11, 10, 7, 24]. It is also used in reductions between problems complete for statistical zero-knowledge [19, 23, 4, 25]. The \(\Theta(n^2)\) query complexity is often the main efficiency bottleneck. Our lower bound can be viewed as a step towards proving that the current best construction of pseudorandom generator from arbitrary one-way functions by Vadhan and Zheng (STOC 2012) has optimal efficiency.
CCC2018.pdf
Haitner, Iftach, and Salil Vadhan. "The Many Entropies in One-way Functions." In Tutorials on the Foundations of Cryptography, 159-217. Springer, Yehuda Lindell, ed. 2017. Publisher's VersionAbstract
Earlier versions: May 2017: ECCC TR 17-084
Dec. 2017: ECCC TR 17-084 (revised)
Computational analogues of information-theoretic notions have given rise to some of the most interesting phenomena in the theory of computation. For example, computational indistinguishability, Goldwasser and Micali [9], which is the computational analogue of statistical distance, enabled the bypassing of Shannon's impossibility results on perfectly secure encryption, and provided the basis for the computational theory of pseudorandomness. Pseudoentropy, Håstad, Impagliazzo, Levin, and Luby [17], a computational analogue of entropy, was the key to the fundamental result establishing the equivalence of pseudorandom generators and one-way functions, and has become a basic concept in complexity theory and cryptography.
This tutorial discusses two rather recent computational notions of entropy, both of which can be easily found in any one-way function, the most basic cryptographic primitive. The first notion is next-block pseudoentropy, Haitner, Reingold, and Vadhan [14], a refinement of pseudoentropy that enables simpler and more efficient construction of pseudorandom generators. The second is inaccessible entropy, Haitner, Reingold, Vadhan, and Wee [11], which relates to unforgeability and is used to construct simpler and more efficient universal one-way hash functions and statistically hiding commitments.
SPRINGER 2017.pdf ECCC 5-2017.pdf ECCC 12-2017.pdf
Steinke, Thomas, Salil Vadhan, and Andrew Wan. "Pseudorandomness and Fourier growth bounds for width 3 branching programs." Theory of Computing – Special Issue on APPROX-RANDOM 2014 13, no. 12 (2017): 1-50. Publisher's VersionAbstract
Version History: a conference version of this paper appeared in the Proceedings of the 18th International Workshop on Randomization and Computation (RANDOM'14). Full version posted as ECCC TR14-076 and arXiv:1405.7028 [cs.CC].
We present an explicit pseudorandom generator for oblivious, read-once, width-3 branching programs, which can read their input bits in any order. The generator has seed length \(Õ(\log^3 n)\). The previously best known seed length for this model is \(n^{1/2+o(1)}\) due to Impagliazzo, Meka, and Zuckerman (FOCS '12). Our work generalizes a recent result of Reingold, Steinke, and Vadhan (RANDOM '13) for permutation branching programs. The main technical novelty underlying our generator is a new bound on the Fourier growth of width-3, oblivious, read-once branching programs. Specifically, we show that for any \(f : \{0, 1\}^n → \{0, 1\}\) computed by such a branching program, and \(k ∈ [n]\),
\(\displaystyle\sum_{s⊆[n]:|s|=k} \big| \hat{f}[s] \big | ≤n^2 ·(O(\log n))^k\),
where \(\hat{f}[s] = \mathbb{E}_U [f[U] \cdot (-1)^{s \cdot U}]\) is the standard Fourier transform over \(\mathbb{Z}^n_2\). The base \(O(\log n)\) of the Fourier growth is tight up to a factor of \(\log \log n\).
TOC 2017.pdf APPROX-RANDOM 2014.pdf ArXiv 2014.pdf
Vadhan., Salil P. "On learning vs. refutation." 30th Conference on Learning Theory (COLT `17), 2017, 65, 1835-1848. Publisher's VersionAbstract
Building on the work of Daniely et al. (STOC 2014, COLT 2016), we study the connection between computationally efficient PAC learning and refutation of constraint satisfaction problems. Specifically, we prove that for every concept class \(\mathcal{P }\) , PAC-learning \(\mathcal{P}\) is polynomially equivalent to "random-right-hand-side-refuting" ("RRHS-refuting") a dual class \(\mathcal{P}^∗ \), where RRHS-refutation of a class \(Q\) refers to refuting systems of equations where the constraints are (worst-case) functions from the class \( Q\) but the right-hand-sides of the equations are uniform and independent random bits. The reduction from refutation to PAC learning can be viewed as an abstraction of (part of) the work of Daniely, Linial, and Shalev-Schwartz (STOC 2014). The converse, however, is new, and is based on a combination of techniques from pseudorandomness (Yao '82) with boosting (Schapire '90). In addition, we show that PAC-learning the class of \(DNF\) formulas is polynomially equivalent to PAC-learning its dual class \(DNF ^∗\) , and thus PAC-learning \(DNF\) is equivalent to RRHS-refutation of \(DNF\) , suggesting an avenue to obtain stronger lower bounds for PAC-learning \(DNF\) than the quasipolynomial lower bound that was obtained by Daniely and Shalev-Schwartz (COLT 2016) assuming the hardness of refuting \(k\)-SAT.
COLT 17.pdf
Murtagh, Jack, Omer Reingold, Aaron Sidford, and Salil Vadhan. "Derandomization beyond connectivity: Undirected Laplacian systems in nearly logarithmic space." 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS `17), 2017. Publisher's VersionAbstract
ArXiv, 15 August 2017 https://arxiv.org/abs/1708.04634
FOCS, November 2017 https://ieeexplore.ieee.org/document/8104111
We give a deterministic \(\tilde{O}(\log n)\)-space algorithm for approximately solving linear systems given by Laplacians of undirected graphs, and consequently also approximating hitting times, commute times, and escape probabilities for undirected graphs. Previously, such systems were known to be solvable by randomized algorithms using \(O(\log n)\) space (Doron, Le Gall, and Ta-Shma, 2017) and hence by deterministic algorithms using \(O(\log^{3/2} n)\) space (Saks and Zhou, FOCS 1995 and JCSS 1999).
Our algorithm combines ideas from time-efficient Laplacian solvers (Spielman and Teng, STOC '04; Peng and Spielman, STOC '14) with ideas used to show that Undirected S-T Connectivity is in deterministic logspace (Reingold, STOC '05 and JACM '08; Rozenman and Vadhan, RANDOM '05).
ArXiv2017.pdf FOCS2017.pdf
Chen, Yi-Hsiu, Kai-Min Chung, Ching-Yi Lai, Salil P. Vadhan, and Xiaodi Wu. "Computational notions of quantum min-entropy." In Poster presention at QIP 2017 and oral presentation at QCrypt 2017, 2017. Publisher's VersionAbstract
ArXiv v1, 24 April 2017 https://arxiv.org/abs/1704.07309v1
ArXiv v3, 9 September 2017 https://arxiv.org/abs/1704.07309v3
ArXiv v4, 5 October 2017 https://arxiv.org/abs/1704.07309v4
We initiate the study of computational entropy in the quantum setting. We investigate to what extent the classical notions of computational entropy generalize to the quantum setting, and whether quantum analogues of classical theorems hold. Our main results are as follows. (1) The classical Leakage Chain Rule for pseudoentropy can be extended to the case that the leakage information is quantum (while the source remains classical). Specifically, if the source has pseudoentropy at least \(k\), then it has pseudoentropy at least \(k−ℓ \) conditioned on an \(ℓ \)-qubit leakage. (2) As an application of the Leakage Chain Rule, we construct the first quantum leakage-resilient stream-cipher in the bounded-quantum-storage model, assuming the existence of a quantum-secure pseudorandom generator. (3) We show that the general form of the classical Dense Model Theorem (interpreted as the equivalence between two definitions of pseudo-relative-min-entropy) does not extend to quantum states. Along the way, we develop quantum analogues of some classical techniques (e.g. the Leakage Simulation Lemma, which is proven by a Non-uniform Min-Max Theorem or Boosting). On the other hand, we also identify some classical techniques (e.g. Gap Amplification) that do not work in the quantum setting. Moreover, we introduce a variety of notions that combine quantum information and quantum complexity, and this raises several directions for future work.
ArXiv2017.pdf
Chen, Sitan, Thomas Steinke, and Salil P. Vadhan. "Pseudorandomness for read-once, constant-depth circuits." CoRR, 2015, 1504.04675. Publisher's VersionAbstract
For Boolean functions computed by read-once, depth-D circuits with unbounded fan-in over the de Morgan basis, we present an explicit pseudorandom generator with seed length \(\tilde{O}(\log^{D+1} n)\). The previous best seed length known for this model was \(\tilde{O}(\log^{D+4} n)\), obtained by Trevisan and Xue (CCC '13) for all of AC0 (not just read-once). Our work makes use of Fourier analytic techniques for pseudorandomness introduced by Reingold, Steinke, and Vadhan (RANDOM '13) to show that the generator of Gopalan et al. (FOCS '12) fools read-once AC0. To this end, we prove a new Fourier growth bound for read-once circuits, namely that for every \(F : \{0,1\}^n\rightarrow \{0,1\}\) computed by a read-once, depth-\(D\) circuit,
\(\displaystyle\sum_{s\subseteq[n]:|s|=k}\left|\hat{F}[s]\right| \leq O\left(\log^{D-1} n\right)^k,\)
where \(\hat{F}\) denotes the Fourier transform of \(F\) over \(\mathbb{Z}_2^n\).
Gopalan, Parikshit, Salil Vadhan, and Yuan Zhou. "Locally testable codes and Cayley graphs." In In Moni Naor, editor, Innovations in Theoretical Computer Science (ITCS '14), 81-92. ACM, 2014. Publisher's VersionAbstract
Version History: Full version posted as https://arxiv.org/pdf/1308.5158.pdf.
We give two new characterizations of (\(\mathbb{F}_2\)-linear) locally testable error-correcting codes in terms of Cayley graphs over \(\mathbb{F}^h_2\) :
A locally testable code is equivalent to a Cayley graph over \(\mathbb{F}^h_2\) whose set of generators is significantly larger than \(h\) and has no short linear dependencies, but yields a shortest-path metric that embeds into \(\ell_1\) with constant distortion. This extends and gives a converse to a result of Khot and Naor (2006), which showed that codes with large dual distance imply Cayley graphs that have no low-distortion embeddings into \(\ell_1\).
A locally testable code is equivalent to a Cayley graph over \(\mathbb{F}^h_2\) that has significantly more than \(h\) eigenvalues near 1, which have no short linear dependencies among them and which "explain" all of the large eigenvalues. This extends and gives a converse to a recent construction of Barak et al. (2012), which showed that locally testable codes imply Cayley graphs that are small-set expanders but have many large eigenvalues.
ITCS2014.pdf ArXiv2018.pdf
Chung, Kai-Min, Michael Mitzenmacher, and Salil P. Vadhan. "Why simple hash functions work: Exploiting the entropy in a data stream." Theory of Computing 9 (2013): 897-945. Publisher's VersionAbstract
Version History: Merge of conference papers from SODA '08 (with the same title) and RANDOM '08 (entitled "Tight Bounds for Hashing Block Sources").
Hashing is fundamental to many algorithms and data structures widely used in practice. For the theoretical analysis of hashing, there have been two main approaches. First, one can assume that the hash function is truly random, mapping each data item independently and uniformly to the range. This idealized model is unrealistic because a truly random hash function requires an exponential number of bits (in the length of a data item) to describe. Alternatively, one can provide rigorous bounds on performance when explicit families of hash functions are used, such as 2-universal or \(O\)(1)-wise independent families. For such families, performance guarantees are often noticeably weaker than for ideal hashing.
In practice, however, it is commonly observed that simple hash functions, including 2-universal hash functions, perform as predicted by the idealized analysis for truly random hash functions. In this paper, we try to explain this phenomenon. We demonstrate that the strong performance of universal hash functions in practice can arise naturally from a combination of the randomness of the hash function and the data. Specifically, following the large body of literature on random sources and randomness extraction, we model the data as coming from a "block source," whereby each new data item has some "entropy" given the previous ones. As long as the Rényi entropy per data item is sufficiently large, it turns out that the performance when choosing a hash function from a 2-universal family is essentially the same as for a truly random hash function. We describe results for several sample applications, including linear probing, chained hashing, balanced allocations, and Bloom filters.
Towards developing our results, we prove tight bounds for hashing block sources, determining the entropy required per block for the distribution of hashed values to be close to uniformly distributed.
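As a toy illustration of the kind of explicit family the analysis applies to (this sketch is not taken from the paper; the prime and all parameters below are arbitrary choices), the classic Carter-Wegman construction \(h_{a,b}(x) = ((ax+b) \bmod p) \bmod m\) gives a 2-universal family that can be sampled and applied as follows:

```python
import random

# Toy illustration (not from the paper): the Carter-Wegman 2-universal family
#   h_{a,b}(x) = ((a*x + b) mod p) mod m,  with p prime, 1 <= a < p, 0 <= b < p.
# The prime below is an arbitrary choice (2^61 - 1), large enough for 32-bit keys.
P = (1 << 61) - 1

def sample_hash(m: int):
    """Draw a random member h of the 2-universal family mapping ints to {0,...,m-1}."""
    a = random.randrange(1, P)
    b = random.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

if __name__ == "__main__":
    random.seed(0)
    h = sample_hash(m=1024)                      # 1024 hash buckets
    keys = [random.getrandbits(32) for _ in range(10)]
    print([h(x) for x in keys])                  # bucket indices for the sample keys
```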
THEORYCOMP2013.pdf RANDOM2008.pdf SODA2008.pdf
Haitner, Iftach, Omer Reingold, and Salil Vadhan. "Efficiency improvements in constructing pseudorandom generators from one-way functions." SIAM Journal on Computing 42, no. 3 (2013): 1405-1430. Publisher's VersionAbstract
Version History: Special Issue on STOC '10.
We give a new construction of pseudorandom generators from any one-way function. The construction achieves better parameters and is simpler than that given in the seminal work of Håstad, Impagliazzo, Levin, and Luby [SICOMP '99]. The key to our construction is a new notion of next-block pseudoentropy, which is inspired by the notion of "inaccessible entropy" recently introduced in [Haitner, Reingold, Vadhan, and Wee, STOC '09]. An additional advantage over previous constructions is that our pseudorandom generators are parallelizable and invoke the one-way function in a non-adaptive manner. Using [Applebaum, Ishai, and Kushilevitz, SICOMP '06], this implies the existence of pseudorandom generators in NC\(^0\) based on the existence of one-way functions in NC\(^1\).
SIAM2013.pdf STOC2010.pdf
Reshef, Yakir, and Salil Vadhan. "On extractors and exposure-resilient functions for sublogarithmic entropy." Random Structures & Algorithms 42, no. 3 (2013): 386-401. ArXiv VersionAbstract
Version History: Preliminary version posted as arXiv:1003.4029 (Dec. 2010).
We study resilient functions and exposure‐resilient functions in the low‐entropy regime. A resilient function (a.k.a. deterministic extractor for oblivious bit‐fixing sources) maps any distribution on n‐bit strings in which k bits are uniformly random and the rest are fixed into an output distribution that is close to uniform. With exposure‐resilient functions, all the input bits are random, but we ask that the output be close to uniform conditioned on any subset of n – k input bits. In this paper, we focus on the case that k is sublogarithmic in n.
We simplify and improve an explicit construction of resilient functions for k sublogarithmic in n due to Kamp and Zuckerman (SICOMP 2006), achieving error exponentially small in k rather than polynomially small in k. Our main result is that when k is sublogarithmic in n, the short output length of this construction (\(O(\log k)\) output bits) is optimal for extractors computable by a large class of space‐bounded streaming algorithms.
Next, we show that a random function is a resilient function with high probability if and only if k is superlogarithmic in n, suggesting that our main result may apply more generally. In contrast, we show that a random function is a static (resp. adaptive) exposure‐resilient function with high probability even if k is as small as a constant (resp. log log n). No explicit exposure‐resilient functions achieving these parameters are known.
ArXiv2010.pdf RANDOM2013.pdf
Mahmoody, Mohammad, Tal Moran, and Salil Vadhan. "Publicly verifiable proofs of sequential work." In Innovations in Theoretical Computer Science (ITCS '13), 373-388. ACM, 2013. Publisher's VersionAbstract
Version History: Preliminary version posted as Cryptology ePrint Archive Report 2011/553, under title "Non-Interactive Time-Stamping and Proofs of Work in the Random Oracle Model".
We construct a publicly verifiable protocol for proving computational work based on collision-resistant hash functions and a new plausible complexity assumption regarding the existence of "inherently sequential" hash functions. Our protocol is based on a novel construction of time-lock puzzles. Given a sampled "puzzle" \(\mathcal{P} \overset{\$}{\gets} \mathbf{D}_n\), where \(n\) is the security parameter and \(\mathbf{D}_n\) is the distribution of the puzzles, a corresponding "solution" can be generated using \(N\) evaluations of the sequential hash function, where \(N > n\) is another parameter, while any feasible adversarial strategy for generating valid solutions must take at least as much time as \(\Omega(N)\) sequential evaluations of the hash function after receiving \(\mathcal{P}\). Thus, valid solutions constitute a "proof" that \(\Omega(N)\) parallel time elapsed since \(\mathcal{P}\) was received. Solutions can be publicly and efficiently verified in time \(\mathrm{poly}(n) \cdot \mathrm{polylog}(N)\). Applications of these "time-lock puzzles" include noninteractive timestamping of documents (when the distribution over the possible documents corresponds to the puzzle distribution \(\mathbf{D}_n\)) and universally verifiable CPU benchmarks.
Our construction is secure in the standard model under complexity assumptions (collision-resistant hash functions and inherently sequential hash functions), and makes black-box use of the underlying primitives. Consequently, the corresponding construction in the random oracle model is secure unconditionally. Moreover, as it is a public-coin protocol, it can be made non-interactive in the random oracle model using the Fiat-Shamir Heuristic.
Our construction makes a novel use of "depth-robust" directed acyclic graphs—ones whose depth remains large even after removing a constant fraction of vertices—which were previously studied for the purpose of complexity lower bounds. The construction bypasses a recent negative result of Mahmoody, Moran, and Vadhan (CRYPTO '11) for time-lock puzzles in the random oracle model, which showed that it is impossible to have time-lock puzzles like ours in the random oracle model if the puzzle generator also computes a solution together with the puzzle.
IACR2013.pdf ITCS2013.pdf
Vadhan, Salil, and Colin Jia Zheng. "A uniform min-max theorem with applications in cryptography." In Ran Canetti and Juan Garay, editors, Advances in Cryptology—CRYPTO '13, Lecture Notes on Computer Science, 8042:93-110. Springer Verlag, Lecture Notes in Computer Science, 2013. Publisher's VersionAbstract
Full version published in ECCC2013 and IACR ePrint 2013.
We present a new, more constructive proof of von Neumann's Min-Max Theorem for two-player zero-sum game — specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best-responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior '99) with the advantage that the algorithm runs in poly\((n)\) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over \(\{0,1\}^n\). This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem).
We describe several applications, including a more modular and improved uniform version of Impagliazzo's Hardcore Theorem (FOCS '95), showing impossibility of constructing succinct non-interactive arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC '11) for the nonuniform setting), and efficiently simulating high entropy distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC '09)).
CRYPTO2013.pdf ECCC2013.pdf
Reingold, Omer, Thomas Steinke, and Salil Vadhan. "Pseudorandomness for regular branching programs via Fourier analysis." In Sofya Raskhodnikova and José Rolim, editors, Proceedings of the 17th International Workshop on Randomization and Computation (RANDOM '13), Lecture Notes in Computer Science, 8096:655-670. Springer-Verlag, 2013. Publisher's VersionAbstract
Version History: Full version posted as ECCC TR13-086 and arXiv:1306.3004 [cs.CC].
We present an explicit pseudorandom generator for oblivious, read-once, permutation branching programs of constant width that can read their input bits in any order. The seed length is \(O(\log^2 n)\), where \(n\) is the length of the branching program. The previous best seed length known for this model was \(n^{1/2+o(1)}\), which follows as a special case of a generator due to Impagliazzo, Meka, and Zuckerman (FOCS 2012) (which gives a seed length of \(s^{1/2+o(1)}\) for arbitrary branching programs of size \(s\)). Our techniques also give seed length \(n^{1/2+o(1)}\) for general oblivious, read-once branching programs of width \(2^{n^{o(1)}}\), which is incomparable to the results of Impagliazzo et al.
Our pseudorandom generator is similar to the one used by Gopalan et al. (FOCS 2012) for read-once CNFs, but the analysis is quite different; ours is based on Fourier analysis of branching programs. In particular, we show that an oblivious, read-once, regular branching program of width \(w\) has Fourier mass at most \((2w^2)^k\) at level \(k\), independent of the length of the program.
RANDOM2013.pdf ArXiv2013.pdf
The Motor and how it works
The motors of a FPV quad explained.
Source: EMAX RS2205/2300 $K_v$
The motors of a copter convert the electrical energy of the battery into mechanical energy that spins the propellers attached to the motor shafts and therefore results in kinetic energy of the drone. With a suitable propeller attached, this energy makes it possible to reach speeds of up to 130 km/h.
Brushless Motors EMAX RS2205/2300 $K_v$.
Top view of Brushless Motors EMAX RS2205/2300 $K_v$
Electric motors of both types, brushed and brushless, use the same principles of electromagnetism to convert electrical energy into rotation. If a voltage is applied to a copper wire, a current will flow, which will induce a magnetic field. With a permanent magnet near this current-carrying copper wire it is possible to generate linear or rotational forces between the wire and the magnet.
Some years ago, brushed motors were used in RC plane models; they have a big disadvantage. As the name suggests, brushed dc motors require carbon brushes, which make contact with the rotating rotor to enable the flow of current. The brushes of a dc motor are made of graphite, a wearing part that reduces efficiency. Another disadvantage is sparking, which can disturb or damage other electrical components. Because of the constant stress that these parts have to endure in a multicopter, they would need to be replaced often.
A brushed dc motor consists of multiple parts that act together to create rotation: a static part, called the stator, and a rotating part, which is the rotor. Permanent magnets with alternating polarity are attached to the stator. The rotor, also known as the armature, consists of a soft iron core wrapped in a copper coil.
Brushed DC Motor (Wikipedia).
A current that flows through the coils of the armature produces a magnetic field $\vec{B}$ with a north and a south pole. The orientation of the poles follows the direction of the current. To achieve a rotation of the rotor, the magnetic orientation of the current-carrying coil has to be directed such that the repulsion and attraction of the magnetic fields between stator and rotor generate a continuous rotation. The challenge is to keep the rotation going by switching the polarity of the copper coils - to commutate. To achieve this, a commutator is used. During the rotation of the rotor, its purpose is to switch (commutate) the direction of current flow and therefore the direction of the magnetic field; the commutator is thus an electrical switch. The current has to be transferred from a static part onto the rotating commutator, which is where the previously mentioned brushes are used. These are made of graphite and are held in light contact with the commutator by a spring.
As the name suggests, brushless dc motors require no wearing brushes in their construction. The commutation occurs electrically, not mechanically. In this case commutation is the interchange of the current through the coils to make the motor spin. The reason that no brushes are necessary for the flow of current is that the change in direction of the current happens on the stator. Therefore the current doesn't have to be transferred to the rotating rotor, which eliminates the wearing parts - the brushes and the commutator. The current-carrying copper coils are wound around the stator sleeve. Attached to the rotor bell are permanent magnets in alternating polarity, which are often made of neodymium.
Operating principle of a Brushed DC Motor.
Neodymium is a chemical element in the periodic table with the symbol Nd and atomic number 60. It belongs to the lanthanide series and is a rare-earth element. Magnets manufactured from this element are among the strongest magnets on earth.
A brushless dc motor requires three phase wires instead of the two used by a brushed dc motor. The three phases carry alternating current, with each phase shifted by 120 degrees. This way the current direction, and therefore the direction of the magnetic fields, is changed continuously, which leads to repulsion and attraction with the permanent magnets attached to the rotor.
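As a rough, idealized sketch (assuming purely sinusoidal commutation; real ESCs typically use trapezoidal or field-oriented drive), the three phase currents can be written as

\[ i_1(t) = I\sin(\omega t), \qquad i_2(t) = I\sin\left(\omega t - \tfrac{2\pi}{3}\right), \qquad i_3(t) = I\sin\left(\omega t + \tfrac{2\pi}{3}\right), \]

so that the three currents always sum to zero while the resulting stator field rotates at the electrical angular frequency $\omega$.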
Brushless motors are divided into outrunners and inrunners, which describes how they are constructed. Inrunners have their static part (stator) outside and the rotating part (rotor) inside; this is the same principle found in brushed motors. The opposite is true for an outrunner: the rotating part lies outside, the fixed part inside. Outrunners are used for multicopters because of their properties and metrics.
Inrunner and Outrunner motors.
Compared to inrunners, outrunners often have a smaller $K_v$ rating, which is why they rotate slower (lower rpm) but have a higher torque and don't need a gearbox. Inrunners are far too fast for most aircraft propellers, which is why they are used in conjunction with a gearbox.
When selecting brushless dc motors for a quad, the metrics of the motor, such as its dimensions, configuration and $K_v$ rating, are important. Here we see descriptions such as RS2208 and 12N14P. To select the right motor it is crucial to know what these figures mean.
One important constant of a motor is the velocity constant $K_v$ (not to be confused with kV, the symbol for kilovolt) also known as back EMF constant. The constant is measured in revolutions per minute (RPM) per volt or radians per volt second [rad/(V⋅s)]:
\[K_v = \frac{\omega_{\text{no-load}}}{V_{\text{Peak}}}\]
The $K_{v}$ rating of a brushless motor is the ratio of the motor's unloaded rotational speed (measured in RPM) to the peak (not RMS) voltage on the wires connected to the coils (the back EMF). For example, an unloaded motor with $K_{v}=2300\ \text{rpm}/\text{V}$ supplied with 10.0 V will run at a nominal speed of 23,000 rpm (2300 rpm/V × 10.0 V) [ref].
A high $K_v$ rating doesn't mean that the motor has more power. Instead, the higher the $K_v$ rating, the faster the motor spins with the same voltage applied. However, the faster it spins the less torque it can generate, which leads to a slower rotation with a mounted propeller. One major reason for this behavior lies within the motor configuration, explained below.
Note that the $K_v$ rating is measured without load and therefore without a mounted propeller (idle state).
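As a quick sanity check of this arithmetic (a minimal sketch, not part of the original article; the motor values below are just the EMAX example from the text), the theoretical no-load speed can be computed from the $K_v$ rating and the applied voltage:

```python
# Minimal sketch: theoretical no-load speed from the Kv rating (rpm = Kv * V).
# The Kv value and cell count below are example assumptions, not measurements.

def no_load_rpm(kv_rpm_per_volt: float, voltage: float) -> float:
    """Ideal unloaded motor speed in RPM for a given Kv rating and supply voltage."""
    return kv_rpm_per_volt * voltage

if __name__ == "__main__":
    kv = 2300.0                 # EMAX RS2205/2300KV
    voltage = 4 * 3.7           # 4S LiPo at nominal cell voltage
    print(f"No-load speed at {voltage:.1f} V: {no_load_rpm(kv, voltage):,.0f} rpm")
```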
A designation such as RS2205 describes the dimensions of a motor and allows it to be compared to other models.
Dimensions of outrunner stator EMAX RS2205.
The first two characters give the motor series, followed by the dimensions of the stator inside the outrunner. The table below shows two examples:
| Motor Description | Motor Series | Stator Diameter (mm) | Stator Height (mm) | $K_v$ rating (rpm/V) |
|---|---|---|---|---|
| EMAX RS2205/2300Kv | EMAX RS 2205 | 22 | 05 | 2300 |
| Lumenier RX2206-13 2000Kv Motor | Lumenier RX 2206 | 22 | 06 | 2000 |
The 13 in the motor description of the Lumenier model describes the number of windings (turns) around each stator pole.
Another important metric describing the inner structure of a brushless motor is its motor configuration, also known as framework. For example 12N14P, where N denotes the number of stator "wire wound" poles and P denotes the number of rotor "permanent magnet" poles[ref].
A magnet always has a north and a south pole. The magnets inside the rotor bell are attached in alternating polarity and therefore attract and repel each other. This generates an alternating magnetic field and is the reason why only an even number of magnets can be used to construct a brushless dc motor. The stator consists of multiple armatures with copper windings. The armatures of the stator are also known as grooves or stator poles.
How often the copper wire is wound around the stator poles is known as the number of windings or turns. As mentioned, a brushless dc motor has three phase wires because of the 120 degree phase-shifted current running through those wires. The copper windings act as an electromagnet when current flows through them. Because the dc motor operates with three-phase electric power, its number of stator poles (N) is always divisible by three. For example, a 12N14P motor has four stator poles per phase (12/3 = 4). A 9N12P motor has twelve magnets attached to the rotor and nine stator poles with copper windings.
The framework of the outrunner EMAX RS2205.
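To make the naming scheme concrete, the following small Python sketch (my own illustration, not from the article; the helper name and error messages are arbitrary) splits a framework string such as 12N14P into its stator pole count N and magnet pole count P and derives the poles per phase:

```python
import re

# Illustration only: parse a framework string such as "12N14P".
def parse_framework(config: str) -> dict:
    match = re.fullmatch(r"(\d+)N(\d+)P", config.strip(), re.IGNORECASE)
    if not match:
        raise ValueError(f"unrecognized framework string: {config!r}")
    stator_poles, magnet_poles = int(match.group(1)), int(match.group(2))
    if stator_poles % 3 != 0:
        raise ValueError("stator poles (N) must be divisible by 3 for a three-phase motor")
    if magnet_poles % 2 != 0:
        raise ValueError("magnet poles (P) must be even (alternating polarity)")
    return {
        "stator_poles": stator_poles,             # N: wire-wound poles on the stator
        "magnet_poles": magnet_poles,             # P: permanent magnets in the rotor bell
        "stator_poles_per_phase": stator_poles // 3,
    }

print(parse_framework("12N14P"))  # {'stator_poles': 12, 'magnet_poles': 14, 'stator_poles_per_phase': 4}
```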
Different winding schemes exist, which describe how to wind the copper wires around the stator poles. The most common winding scheme for outrunners is dLRK.
The magnets inside the rotor (poles) are always shifted (rotated) from one stator pole (armature) to the next. The more poles (P) and armatures (N) a motor has, the shorter the distance between these two sections. This leads to a slower spinning motor and therefore a lower $K_v$ rating. The torque, on the other hand, is increased. Intuitively this is the force that pulls the rotor magnets from one current-carrying armature to the next.
Slower spinning brushless dc motors output more torque and are more efficient than fast spinning ones. A slowly spinning motor with a large propeller is a more efficient system than a fast spinning motor with a small propeller attached.
Thrust is often measured in grams and varies with how fast the motor is spinning and with the size and shape of the attached propeller.
The next two metrics are also important for the thrust output of a brushless dc motor.
Two important metrics of a brushless dc motor are its operating voltage range $U$ and its maximum current drain $I$. These two parameters influence the thrust output. The maximum operating voltage that is allowed to be applied to a motor is often given in Volts (V) or as the number of cells of a battery pack. For example, specifications such as 11.1 Volts or 3S are common. Operating such a motor with 4S or more would cause damage. Using less voltage would work but wouldn't allow the motor to reach its full potential. Another important specification from the manufacturer is the maximum current drain, given in Amperes (A). For a short time period of 60 to 180 seconds a motor will be capable of handling a higher current drain, which will be specified by the manufacturer. With these two values (long-term current drain and operating voltage) it is directly possible to calculate the maximum power consumption $P$ of the motor in Watts (W). This metric can be used to compare motors and to calculate the thrust.
\[P = U \cdot I\]
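As a hedged example of how these figures are combined (all numbers below are illustrative assumptions, not manufacturer data), the maximum electrical power and a rough thrust-to-weight ratio can be computed like this:

```python
# Sketch only: P = U * I per motor, plus a rough thrust-to-weight estimate.
# Every number here is an assumption for illustration, not a datasheet value.

def max_power(voltage: float, max_current: float) -> float:
    """Maximum electrical power draw of one motor in watts."""
    return voltage * max_current

def thrust_to_weight(total_thrust_g: float, all_up_weight_g: float) -> float:
    """Ratio of the combined maximum motor thrust to the copter's all-up weight."""
    return total_thrust_g / all_up_weight_g

if __name__ == "__main__":
    p = max_power(voltage=14.8, max_current=30.0)          # assumed 4S pack, 30 A limit
    print(f"Maximum power per motor: {p:.0f} W")
    # Assume four motors producing ~1000 g of thrust each on a 550 g quad:
    print(f"Thrust-to-weight ratio: {thrust_to_weight(4 * 1000, 550):.1f}")
```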
The weight of a motor plays a significant role not only in the total weight of a copter; it also influences the maximum possible thrust and the flight characteristics. A heavier motor has a higher inertia, which leads to less acceleration but higher top speeds. Lighter motors, on the other hand, provide higher acceleration, are useful for fast turns and are easier to handle when doing acrobatic flips. Which motor weight to choose therefore depends on the intended purpose.
The thrust specifications depend on the propeller used and the applied voltage. Such relations are usually provided by the manufacturer. The following table shows an example for the EMAX RS2205/2300 $K_v$.
The metrics of the outrunner EMAX RS2205.
This table also helps to choose appropriate propellers.
All About Multirotor Drone FPV Motors at GetFPV learning pages
Make your own miniature electric hub motor
Brushless DC Motor Fundamentals
Common Winding Schemes
Tags: brushless, dc, drone, fpv, motor, quad, race
Categories: fpv, quad
real analysis – Construction of a Borel set with positive but not full measure in each interval
I was wondering how one can construct a Borel set that doesn't have full measure on any interval of the real line but does have positive measure everywhere.
To be precise, if $\mu$ denotes Lebesgue measure, how would one construct a Borel set $A \subset \mathbb{R}$ such that
$$0 < \mu(A \cap I) < \mu(I)$$
for every interval $I$ in $\mathbb{R}$?
Moreover, would such a set necessarily have infinite measure?
Reading through this thread, I am having a flashback to the course I took in measure theory. In every homework assignment there would be at least two or three questions whose solution involved the sentence "Let $\{q_n\}_{n\in\mathbb N}$ an enumeration of the rationals…" 🙂
If you got this from Rudin (it is Exercise 8, Ch. 2 in his Real & Complex Analysis), here is his personal answer (excerpted from Amer. Math Monthly, Vol. 90, No.1 (Jan 1983) pp. 41-42). He works with the unit interval $[0,1]$, but of course this can be extended to $\mathbb R$ by doing the same thing in each interval (and by scaling these replications appropriately you can get the final set with finite measure). Anyways, here's how it goes:
"Let $I=[0,1]$, and let CTDP mean compact totally disconnected subset of $I$, having positive measure. Let $\langle I_n\rangle$ be an enumeration of all segments in $I$ whose endpoints are rational.
Construct sequences $\langle A_n\rangle,\langle B_n\rangle$ of CTDP's as follows: Start with disjoint CTDP's $A_1$ and $B_1$ in $I_1$. Once $A_1,B_1,\dots,A_{n-1},B_{n-1}$ are chosen, their union $C_n$ is CTD, hence $I_n\setminus C_n$ contains a nonempty segment $J$ and $J$ contains a pair $A_n,B_n$ of disjoint CTDP's. Continue in this way, and put
$$A=\bigcup_{n=1}^{\infty}A_n.$$
If $V\subset I$ is open and nonempty, then $I_n\subset V$ for some $n$, hence $A_n\subset V$ and $B_n\subset V$. Thus
$$0<m(A_n)\leq m(A\cap V)<m(A\cap V)+m(B_n)\leq m(V);$$
the last inequality holds because $A$ and $B_n$ are disjoint. Done.
The purpose of publishing this is to show that the highly computational construction of such a set in [another article] is much more complicated than necessary."
Edit: In his excellent comment below, @ccc managed to isolate the necessary components of my solution, and after incorporating his observation it has been greatly simplified. (Actually, after trimming the fat, I've realized that it is actually not entirely dissimilar from Rudin's.) Here it is:
Let $\{r_n\}$ be an enumeration of the rationals, let $V_1$ be a segment of finite length centered at $r_1$, and let $V_n$ be a segment of length $m(V_{n-1})/3$ centered at $r_n$. Set
$$W_n=V_n-\bigcup_{k=1}^{\infty}V_{n+k},$$
and observe that
$$m(W_n)\geq m(V_n)-\sum_{k=1}^{\infty}m(V_{n+k})=m(V_n)-m(V_n)\sum_{k=1}^{\infty}3^{-k}=\frac{m(V_n)}{2}.$$
In particular, $m(W_n)>0$.
For each $n$, choose a Borel set $A_n\subset W_n$ with $0<m(A_n)<m(W_n)$. Finally, put $A=\bigcup_{n=1}^{\infty}A_n$. Because $A_n\subset W_n$ and the $W_n$ are disjoint, $m(A\cap W_n)=m(A_n)$. That is to say,
$$0<m(A\cap W_n)<m(W_n)$$
for every $n$. But every interval contains a $W_n$, so $A$ meets the criteria, and has finite measure (specifically, $m(A)\leq\sum_n m(V_n)=2 m(V_1)<\infty$).
As a curiosity, here's my own "unnecessarily computational" way (though it's not quite as lengthy as that in the article Rudin was referring to), which I can't resist including because I slaved over it when I first came across this problem, before finding Rudin's solution:
Let $\{r_n\}$ be an enumeration of the rationals, and put
$$V_n=\left(r_n-3^{-n-1},r_n+3^{-n-1}\right),\qquad W_n=V_n-\bigcup_{k=1}^{\infty}V_{n+k}.$$
Observe that
$$m(W_n)>m(V_n)-\sum_{k=1}^{\infty}m(V_{n+k})=m(V_n)-m(V_n)\sum_{k=1}^{\infty}3^{-k}=\frac{m(V_n)}{2}.\qquad\qquad(1)$$
(We have strict inequality because there exist rationals $r_i$, with $i>n$, in the complement of $V_n$.)
For each $n$, let $K_n$ be a Borel set in $V_n$ with measure $m(K_n)=m(V_n)/2$. Finally, put
$$A_n=W_n\cap K_n,\qquad A=\bigcup_{n=1}^{\infty}A_n.$$
To prove that $A$ has the desired property, it is enough to verify that the inequalities
$$0<m(A\cap V_n)<m(V_n)\qquad\qquad(3)$$
hold for every $n$. (This is because every interval contains a $V_n$.) For the left inequality, it is enough to prove that $m(A_n\cap V_n)=m(A_n)=m(W_n\cap K_n)>0$. This follows from the relations
$$m(W_n\cup K_n)\leq m(V_n)<m(W_n)+m(K_n)=m(W_n\cup K_n)+m(W_n\cap K_n),$$
the second inequality being a consequence of (1) and the fact that $m(K_n)=m(V_n)/2$.
For the right inequality of (3), observe that $V_n\subset W_i^c$ for $i<n$, and that therefore
$$m(A\cap V_n)=m\left(\bigcup_{k=0}^{\infty}A_{n+k}\cap V_n\right)\leq\sum_{k=0}^{\infty}m(K_{n+k}\cap V_n)<\sum_{k=0}^{\infty}m(K_{n+k})=\sum_{k=0}^{\infty}\frac{m(V_{n+k})}{2}=\sum_{k=0}^{\infty}\frac{m(V_n)}{2^{k+1}}=m(V_n).$$
The strict inequality above follows from three observations: (i) $m(K_i)>0$ for every $i$; (ii) $K_i\subset V_i$; and (iii) there exist neighborhoods $V_i$, with $i>n$, that are contained entirely in the complement of $V_n$.
So $A$ meets the criteria (and also has finite measure).
Nick S. has already posted a solution and mentioned a 1983 paper by Rudin, but I thought the following additional comments could be of interest. Essentially the same construction that Rudin gives can be found in Oxtoby's book "Measure and Category" (p. 37 of the 1980 2nd edition). Also, note that these constructions give $F_{\sigma}$ examples which, from a descriptive set theoretic point of view, are about as simple as you can get. (It's like asking for a countable ordinal bound on something and you're able to come up with 1 as the bound.)
If we relax the assumption from "Borel set" to "Lebesgue measurable set" then, surprisingly, MOST measurable sets in the sense of Baire category have this property. For simplicity, let's restrict ourselves to measurable subsets of the interval $[0,1]$. Let $M[0,1]$ be the collection of Lebesgue measurable subsets of $[0,1]$ modulo the equivalence relation $\sim$ defined by $E \sim F \Leftrightarrow \lambda(E \Delta F) = 0$, where $\lambda$ is Lebesgue measure. That is, two measurable sets are considered equivalent iff their symmetric difference has Lebesgue measure zero. Note that the property you're asking about (both the set and its complement have positive measure intersection with every open subinterval of $[0,1]$) is well defined on these equivalence classes, so we can safely work with representatives of these equivalence classes.
The set $M[0,1]$ can be made into a metric space by defining $d(E,F) = \lambda (E \Delta F)$. Under this metric $M[0,1]$ is a complete metric space. One way to show this is to note that $d(E,F) =\int_{0}^{1}\left| \chi_{E}-\chi _{F}\right| d\lambda$ and then make use of theorems involving Lebesgue integration (e.g. show that $M[0,1]$ is a closed subset of the complete metric space $\mathcal{L}^{1}[0,1]$). See, for instance, p. 137 in Adriaan C. Zaanen's 1967 book "Integration" (or pp. 80-81 of Zaanen's 1961 "An Introduction to the Theory of Integration"). For a direct proof that doesn't rely on Lebesgue integration, see p. 44 in Oxtoby's book "Measure and Category", pp. 87-88 in Malempati M. Rao's 2004 book "Measure Theory and Integration", or pp. 214-215 (Problem #13, which has an extensive hint) in Angus E. Taylor's 1965 book "General Theory of Functions and Integration".
In the complete metric space $M[0,1]$ (which also happens to be separable), the collection of elements (each of these elements is essentially a subset of $[0,1]$) that have the property you're looking for (both the set and its complement have positive measure intersection with every open subinterval of $[0,1]$) has a first Baire category complement. This is Exercise 10:1.12 (p. 411) in Bruckner/Bruckner/Thomson's 1997 text "Real Analysis" (freely available on the internet), Exercise 2 (pp. 78-79) in Kharazishvili's 2000 book "Strange Functions in Real Analysis", and it's buried within the proof of Theorem 1 (p. 886) of Kirk's paper "Sets which split families of measurable sets" [American Mathematical Monthly 79 (1972), 884-886].
(Added the next day) I thought I'd mention a few ways that such sets have been applied. Probably the most common application is to easily obtain an absolutely continuous function that is monotone in no interval. If $E$ is such a set in $[0,1]$ and $E' = [0,1] – E$, then $\int_{0}^{x}\left| \chi_{E}-\chi _{E'}\right| d\lambda$ is such a function. See, for example: [1] Charles Vernon Coffman, Abstract #5, American Math. Monthly 72 (1965), p. 941; [2] Andrew M. Bruckner, "Current trends in differentiation theory", Real Analysis Exchange 5 (1979-80), 90-60 (see pp. 12-13); [3] Wise/Hall's 1993 "Counterexamples in Probability and Real Analysis" (see Example 2.26 on p. 63). Another application is to construct a Lebesgue measurable function (Baire class 2, in fact) $g$ such that there is no Baire class 1 function $f$ that is almost everywhere equal to $g$. (Recall that every Lebesgue measurable function is almost everywhere equal to a Baire class 2 function.) For this ($g = \chi_{E}$ for any set $E$ as above will work), see: [4] Hahn/Rosenthal's 1948 "Set Functions" (see middle of p. 147); [5] Stromberg's 1981 "Introduction to Classical Real Analysis" (see Exercise 13c on p. 309). Finally, here are some miscellaneous other applications I found in some notes of mine last night: [6] Goffman, "A generalization of the Riemann integral", Proc. AMS 3 (1952), 543-547 (see about 2/3 down on p. 544); [7] MR 36 #5892 (review of a paper by A. Settari); [8] Gardiner/Pau, "Approximation on the boundary …", Illinois J. Math. 47 (2003), 1115-1136 (see Lemma 3 on p. 1130).
This question has a cute answer — if you know about Markov chains.
Consider the nearest-neighbour Markov chain $(X_n)$ on $\mathbf{Z}$ with the following transition probabilities: from $0$ it goes to $\pm 1$ with probability $1/2$, from $n\neq 0$ it moves one step towards $0$ with probability $1/4$ and one step towards $\infty$ (if $n>0$) or $-\infty$ (if $n<0$) with probability $3/4$.
It is simple to check (e.g. via the strong law of large numbers) that $|X_n|$ goes to $+\infty$ almost surely, so by symmetry
$$ \mathbf{P}(\lim X_n = + \infty | X_0=0) = \mathbf{P}(\lim X_n = - \infty | X_0=0) = \frac{1}{2}. $$
Now define a Borel set $A \subset [0,1]$ as follows: use the digits in the binary expansion of $x \in [0,1]$ (which are i.i.d. Bernoulli variables with parameter $1/2$) to simulate the Markov chain $(X_n)$ starting from the origin, and let $A$ be the set of $x$ for which $X_n \to + \infty$. Since the Markov chain is irreducible, $A$ has non-full, non-zero measure in every subinterval of $[0,1]$.
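To make the construction a little more tangible, here is a small Monte-Carlo sketch (my own illustration; the step and trial counts are arbitrary) of the biased nearest-neighbour chain described above. Empirically, about half of the runs started at $0$ drift to $+\infty$:

```python
import random

# Illustration only: the nearest-neighbour chain from the answer above.
# From 0 it steps +1 or -1 with probability 1/2 each; from n != 0 it steps away
# from 0 with probability 3/4 and towards 0 with probability 1/4.
def drifts_to_plus_infinity(steps: int = 2_000) -> bool:
    x = 0
    for _ in range(steps):
        if x == 0:
            x += random.choice((-1, 1))
        else:
            away = 1 if x > 0 else -1
            x += away if random.random() < 0.75 else -away
    return x > 0  # after many steps the sign is (with high probability) final

if __name__ == "__main__":
    random.seed(1)
    trials = 1_000
    frac = sum(drifts_to_plus_infinity() for _ in range(trials)) / trials
    print(f"Estimated P(X_n -> +infinity | X_0 = 0) ~ {frac:.3f}")   # close to 0.5
```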
For the OP last question: since
$$ \lim_{k \to – \infty} \mathbf{P} (\lim X_n = +\infty | X_0= k) =0 ,$$
one gets a variant of the example with $m(A)$ arbitrarily small by starting the Markov chain from $k$. Using such constructions in every interval $[n,n+1]$ produces a set $B \subset \mathbf{R}$ with finite measure whose intersection with every interval has positive measure.
Let $I = [0,1]$. We construct a partition of $I$ into $A_0$ and $A_1$ using the binary digits of numbers $x \in I$. Recall that with respect to Lebesgue measure, the binary digits behave like an infinite coin-toss experiment. Also recall that while a number can have more than one binary representation, such numbers form a measure-zero set, so we can safely ignore them.
Let $ c_1 = 1, c_2 = 2 < c_3 < c_4 < c_5 < \cdots $ be an increasing sequence of integers, such that $c_{n+1} – c_n$ grows fast enough. (How fast? To be determined later.) Let $J_n = \{k \in \mathbb N: c_n \le k < c_{n+1} \}$.
We say $x\in I$ is $n$-happy if the $k$'th (binary) digit of $x$ is 0 for each $k \in J_n$. We say $x$ is $n$-angry if the $k$'th digit of $x$ is 1 for each $k \in J_n$. We say $x$ is $n$-emotional if it is $n$-happy or $n$-angry. If you imagine $x$ as an outcome of tossing a coin infinitely many times, then you can think of "1-emotional", "2-emotional", "3-emotional", … as a sequence of events with rapidly decreasing probability.
Given an $x\in I$, the set $N(x)$ of all $n$ for which $x$ is $n$-emotional is a non-empty set because $1 \in N(x)$. If $c_{n+1} - c_n$ grows fast enough (linear growth is enough), then $N(x)$ is a finite set for almost all $x$, due to the Borel-Cantelli lemma. Let $n(x) = \sup N(x)$. Now the function $x \mapsto n(x)$ is finite a.e. This allows us to partition $I$ into $A_0$ consisting of all $x$ that are $n(x)$-happy and $A_1$ consisting of all $x$ that are $n(x)$-angry.
To show that $A_0$ intersects every subinterval of $I$ in positive density, let $I'$ be an arbitrary subinterval of $I$. Then there exist $n$ and digits $x_1, \cdots, x_{c_n-1}$ such that $I'$ contains the interval $I''$ consisting of all numbers in $I$ whose first $c_n -1$ digits are precisely $x_1, \cdots, x_{c_n-1}$. The $n$-happy numbers in $I''$ form yet another interval, say $I'''$. Given a random choice of $x \in I'''$, it is more likely for $x$ to be in $A_0$ than in $A_1$ because of the $n$-happiness of $x$. Therefore $A_0$ intersects $I'''$ in more than half density, and thus intersects $I'$ in positive density.
Remark 1. There is no measurable subset of $I$ which intersects every subinterval in exactly half density. Terence Tao uses this to construct a non-measurable set here.
Remark 2. Lebesgue density theorem
Well, suppose you enumerated the rational intervals $I_n$, and for each $I_n$, let $J_n = I_n \setminus \cup_{i=1}^{n-1}I_i$. If $J_n$ has measure 0, let $A_n = \varnothing$. Otherwise, let $A_n$ be a subset of $I_n$ such that $\mu(A_n) < \max\{\mu(I_n)/2, \epsilon\cdot 2^{-n}\}$. Then let $A$ be the union of the sets $A_n$. Doesn't that do it?
Tags: measure-theory, real-analysis
| CommonCrawl |
Schubert calculus
From Encyclopedia of Mathematics
2010 Mathematics Subject Classification: Primary: 14M15 [MSN][ZBL]
The Schubert calculus or Schubert enumerative calculus is a formal calculus of symbols representing geometric conditions used to solve problems in enumerative geometry. This originated in work of M. Chasles [Ch] on conics and was systematized and used to great effect by H. Schubert in [Sc]. The justification of Schubert's enumerative calculus and the verification of the numbers he obtained were the content of Hilbert's 15th problem (cf. also Hilbert problems).
Justifying Schubert's enumerative calculus was a major theme of twentieth century algebraic geometry, and intersection theory provides a satisfactory modern framework. Enumerative geometry deals with the second part of Hilbert's problem. See [Fu2] for a complete reference on intersection theory; for historical surveys and a discussion of enumerative geometry, see [Kl], [Kl2].
The Schubert calculus also refers to mathematics arising from the following class of enumerative geometric problems: Determine the number of linear subspaces of projective space that satisfy incidence conditions imposed by other linear subspaces. For a survey, see [KlLa]. For example, how many lines in projective $3$-space meet $4$ given lines? These problems are solved by studying both the geometry and the cohomology or Chow rings of Grassmann varieties (cf. also Chow ring; Grassmann manifold). This field of Schubert calculus enjoys important connections not only to algebraic geometry and algebraic topology, but also to algebraic combinatorics, representation theory, differential geometry, linear algebraic groups, and symbolic computation, and has found applications in numerical homotopy continuation [HuSoSt], linear algebra [Fu] and systems theory [By].
The Grassmannian $G_{m,n}$ of $m$-dimensional subspaces ($m$-planes) in $\def\P{\mathbb{P}}\P^n$ over a field $k$ has distinguished Schubert varieties
$$\def\O{\Omega}\def\a{\alpha}\O_{a_0,\dots,a_m}V_*:= \{W\in G_{m,n} : \dim (W\cap V_{a_j})\ge j,\ j=0,\dots,m\},$$ where $V_*:V_0\subset\cdots\subset V_n=\P^n$ is a flag of linear subspaces with $\dim V_j = j$. The Schubert cycle $\def\s{\sigma}\s_{a_0,\dots,a_m}$ is the cohomology class Poincaré dual to the fundamental homology cycle of $\O_{a_0,\dots,a_m} V_*$ (cf. also Homology). The basis theorem asserts that the Schubert cycles form a basis of the Chow ring $A^* G_{m,n}$ (when $k$ is the complex number field, these are the integral cohomology groups $H^* G_{m,n}$) of the Grassmannian with
$$\s_{a_0,\dots,a_m}\in A^{(m+1)(n-m)+{m+1\choose 2} -a_0-\cdots -a_m} G_{m,n},$$ (see also Grassmann manifold). The duality theorem asserts that the basis of Schubert cycles is self-dual under the intersection pairing
$$\def\b{\beta} (\a,\b) \in H^* G_{m,n} \otimes H^* G_{m,n} \to\deg(\a \cdot \b) = \int_{G_{m,n}} \a\cdot\b$$ with $\s_{a_0,\dots,a_m}$ dual to $\s_{n-a_m,\dots,n-a_0}$.
Let $\def\t{\tau}\t_b := \s_{n-m-b,n-m+1,\dots,n}$ be a special Schubert cycle (cf. Schubert cycle). Then
$$\s_{a_0,\dots,a_m}\cdot \t_b = \sum \s_{c_0,\dots,c_m},$$ the sum running over all $(c_0,\dots,c_m)$ with $0\le c_0\le a_0< c_1\le a_1<\cdots< c_m\le a_m$ and $b = \sum_i(a_i-c_i)$. This Pieri formula determines the ring structure of cohomology; an algebraic consequence is the Giambelli formula for expressing an arbitrary Schubert cycle in terms of special Schubert cycles. Define $\t_b = 0$ if $b<0$ or $b>n-m$, and $\t_0 = 1$. Then Giambelli's formula is
$$\s_{a_0,\dots,a_m} = \det(\t_{n-m+j-a_i})_{i,j=0,\dots,m}.$$ These four results enable computation in the Chow ring of the Grassmannian, and the solution of many problems in enumerative geometry. For instance, the number of $m$-planes meeting $(m+1)(n-m)$ general $(n-m-1)$-planes non-trivially is the coefficient of $\s_{0,\dots,m}$ in the product $(\t_1)^{(m+1)(n-m)}$, which is [Sc2] $$\frac{1!\,2!\cdots m!\;\bigl((m+1)(n-m)\bigr)!}{(n-m)!\,(n-m+1)!\cdots n!}.$$ These four results hold more generally for cohomology rings of flag manifolds $G/P$; Schubert cycles form a self-dual basis, the Chevalley formula [Ch2] determines the ring structure (when $P$ is a Borel subgroup), and the Bernshtein–Gel'fand–Gel'fand formula [BeGeGe] and Demazure formula [De] give the analogue of the Giambelli formula. More explicit Giambelli formulas are provided by Schubert polynomials.
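As an illustration of how these formulas are used in practice (this sketch is not part of the encyclopedia entry), the Pieri formula in the $(a_0,\dots,a_m)$ indexing above can be implemented directly and iterated to compute intersection numbers; the classical count of lines in $\P^3$ meeting four general lines then comes out as $2$, in agreement with the closed-form degree formula. The function and variable names are implementation choices.

```python
from collections import Counter
from math import factorial

def pieri(a, b, n):
    """sigma_a . tau_b in G_{m,n} with m = len(a)-1, returned as a Counter of index tuples c.

    The sum runs over 0 <= c_0 <= a_0 < c_1 <= a_1 < ... < c_m <= a_m with b = sum_i (a_i - c_i)."""
    m = len(a) - 1
    results = Counter()

    def rec(i, lo, drop, partial):
        if i > m:
            if drop == b:
                results[tuple(partial)] += 1
            return
        for ci in range(lo, a[i] + 1):                      # lo <= c_i <= a_i
            rec(i + 1, a[i] + 1, drop + (a[i] - ci), partial + [ci])

    rec(0, 0, 0, [])
    return results

def degree_by_pieri(m, n):
    """Coefficient of the point class sigma_{0,1,...,m} in (tau_1)^{(m+1)(n-m)}."""
    classes = Counter({tuple(range(n - m, n + 1)): 1})      # fundamental class sigma_{n-m,...,n}
    for _ in range((m + 1) * (n - m)):
        new = Counter()
        for a, coeff in classes.items():
            for cc, mult in pieri(a, 1, n).items():
                new[cc] += coeff * mult
        classes = new
    return classes[tuple(range(m + 1))]                     # point class sigma_{0,1,...,m}

def degree_formula(m, n):
    """Schubert's closed formula ((m+1)(n-m))! * prod_{i=0}^{m} i!/(n-m+i)!."""
    num = factorial((m + 1) * (n - m))
    den = 1
    for i in range(m + 1):
        num *= factorial(i)
        den *= factorial(n - m + i)
    return num // den

# Lines in P^3 meeting 4 general lines: both computations give the classical answer 2.
print(degree_by_pieri(1, 3), degree_formula(1, 3))
```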
One cornerstone of the Schubert calculus for the Grassmannian is the Littlewood–Richardson rule [LiRi] for expressing a product of Schubert cycles in terms of the basis of Schubert cycles. (This rule is usually expressed in terms of an alternative indexing of Schubert cycles using partitions. A sequence $(a_0,\dots,a_m)$ corresponds to the partition $(n-m-a_0,n-m+1-a_1,\dots,n-a_m)$; cf. Schur functions in algebraic combinatorics.) The analogue of the Littlewood–Richardson rule is not known for most other flag varieties $G/P$.
[BeGeGe] I.N. Bernshtein, I.M. Gel'fand, S.I. Gel'fand, "Schubert cells and cohomology of the spaces $G/P$" Russian Math. Surveys, 28 : 3 (1973) pp. 1–26 MR0686277
[By] C.I. Byrnes, "Algebraic and geometric aspects of the control of linear systems" C.I. Byrnes (ed.) C.F. Martin (ed.), Geometric Methods in Linear systems Theory, Reidel (1980) pp. 85–124
[Ch] M. Chasles, "Construction des coniques qui satisfont à cinque conditions" C.R. Acad. Sci. Paris, 58 (1864) pp. 297–308
[Ch2] C. Chevalley, "Sur les décompositions cellulaires des espaces $G/B$" W. Haboush (ed.), Algebraic Groups and their Generalizations: Classical Methods, Proc. Symp. Pure Math., 56:1, Amer. Math. Soc. (1994) pp. 1–23 MR1278698 Zbl 0824.14042
[De] M. Demazure, "Désingularization des variétés de Schubert généralisées" Ann. Sci. École Norm. Sup. (4), 7 (1974) pp. 53–88
[Fu] W. Fulton, "Eigenvalues, invariant factors, highest weights, and Schubert calculus" Bull. Amer. Math. Soc., 37 (2000) pp. 209–249 MR1754641 Zbl 0994.15021
[Fu2] W. Fulton, "Intersection theory", Ergebn. Math., 2, Springer (1998) (Edition: Second) MR1644323 Zbl 0885.14002
[HuSoSt] B. Huber, F. Sottile, B. Sturmfels, "Numerical Schubert calculus" J. Symbolic Comput., 26 : 6 (1998) pp. 767–788 MR1662035 Zbl 1064.14508
[Kl] S. Kleiman, "Problem 15: Rigorous foundation of Schubert's enumerative calculus", Mathematical Developments arising from Hilbert Problems, Proc. Symp. Pure Math., 28, Amer. Math. Soc. (1976) pp. 445–482 MR429938
[Kl2] S. Kleiman, "Intersection theory and enumerative geometry: A decade in review" S. Bloch (ed.), Algebraic Geometry (Bowdoin, 1985), Proc. Symp. Pure Math., 46:2, Amer. Math. Soc. (1987) pp. 321–370 MR0927987 Zbl 0664.14031
[KlLa] S.L. Kleiman, D. Laksov, "Schubert calculus" Amer. Math. Monthly, 79 (1972) pp. 1061–1082 MR0323796 Zbl 0272.14016
[LiRi] D.E. Littlewood, A.R. Richardson, "Group characters and algebra" Philos. Trans. Royal Soc. London., 233 (1934) pp. 99–141 Zbl 0009.20203 Zbl 60.0896.01
[Sc] H. Schubert, "Kalkül der abzählenden Geometrie", Springer (1879) (Reprinted (with an introduction by S. Kleiman): 1979) MR0555576
[Sc2] H. Schubert, "Anzahl-Bestimmungen für lineare Räume beliebiger Dimension" Acta Math., 8 (1886) pp. 97–118 Zbl 18.0632.01
This article was adapted from an original article by Frank Sottile (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
Representations of Semisimple Lie Algebras in the BGG Category $\mathscr{O}$ Page iv (5 of 310)
David Cox (Chair)
Steven G. Krantz
Rafe Mazzeo
Martin Scharlemann
2000 Mathematics Subject Classification. Primary 17B10; Secondary 20G05, 22E47.
For additional information and updates on this book, visit
www.ams.org/bookpages/gsm-94
Library of Congress Cataloging-in-Publication Data
Humphreys, James E.
Representations of semisimple Lie algebras in the BGG category O / James E. Humphreys.
p. cm. — (Graduate studies in mathematics ; v. 94)
ISBN 978-0-8218-4678-0 (alk. paper)
1. Representations of Lie algebras. 2. Categories (Mathematics) I. Title.
QA252.3.H864 2008
512'.482—dc22 2008012667
Copying and reprinting. Individual readers of this publication, and nonprofit libraries
acting for them, are permitted to make fair use of the material, such as to copy a chapter for use
in teaching or research. Permission is granted to quote brief passages from this publication in
reviews, provided the customary acknowledgment of the source is given.
Republication, systematic copying, or multiple reproduction of any material in this publication
is permitted only under license from the American Mathematical Society. Requests for such
permission should be addressed to the Acquisitions Department, American Mathematical Society,
201 Charles Street, Providence, Rhode Island 02904-2294, USA. Requests can also be made by
e-mail to [email protected].
© 2008 by the American Mathematical Society. All rights reserved.
The American Mathematical Society retains all rights
except those granted to the United States Government.
Printed in the United States of America.
∞ The paper used in this book is acid-free and falls within the guidelines
established to ensure permanence and durability.
Visit the AMS home page at http://www.ams.org/
10 9 8 7 6 5 4 3 2 1 13 12 11 10 09 08 | CommonCrawl |
Nanoscale Research Letters
Nano Express | Open | Published: 17 April 2019
A Simple Electrochemical Route to Access Amorphous Co-Ni Hydroxide for Non-enzymatic Glucose Sensing
Hongbo Li1,
Ling Zhang1,
Yiwu Mao1,
Chengwei Wen1 &
Peng Zhao ORCID: orcid.org/0000-0002-7782-24461
Nanoscale Research Letters volume 14, Article number: 135 (2019)
Among the numerous transition metal hydroxide materials, cobalt- and nickel-based hydroxides have been extensively studied for their excellent electrochemical performance, for example in non-enzymatic electrochemical sensors. Binary cobalt-nickel hydroxide has received particular attention for its exceptionally good electrochemical behavior as a promising glucose sensor material. In this work, we report the synthesis of three-dimensional amorphous Co-Ni hydroxide nanostructures with a homogeneous distribution of elements via a simple and chemically clean electrochemical deposition method. The amorphous Co-Ni hydroxide, as a non-enzymatic glucose sensor material, exhibits a superior biosensing performance toward glucose detection owing to its high electron transfer capability, high specific surface area, and abundant intrinsic redox couples of Ni2+/Ni3+ and Co2+/Co3+/Co4+ ions. The as-synthesized amorphous Co-Ni hydroxide holds great potential in glucose monitoring and detection as a non-enzymatic glucose sensor, with a high sensitivity of 1911.5 μA mM−1 cm−2 at low concentrations, wide linear ranges of 0.00025–1 mM and 1–5 mM, a low detection limit of 0.127 μM, long-term stability, and excellent selectivity in 0.5 M NaOH solution.
Carbohydrates, as one of the most important energy sources, can be used to evaluate the health condition of the human body by monitoring the sugar level in blood. Diabetes mellitus, a chronic life-threatening disease caused by an elevated level of glucose in the blood, has become a global epidemic. Diabetes can be classified into two types according to its mechanism: type 1, caused by inadequate insulin production in the body, and type 2, governed by the body's inability to use the insulin it produces [1]. Because the proper diagnosis and treatment of diabetes depend on monitoring and controlling the physiological glucose level precisely, continuous monitoring of physiological glucose levels with high accuracy and fast response has become an important goal. The urgent need for glucose detection devices has attracted extensive attention to the design and development of novel glucose sensors with high accuracy and sensitivity, low cost, fast response, excellent selectivity, and reliability. According to the transducing mechanism, glucose detection devices can be categorized into a series of potential strategies such as resonators, field-effect transistors, optical detectors, and electrochemical sensors [2,3,4,5,6]. Among them, electrochemical sensors have been recognized as the most promising glucose sensors, with the attractive features of being stable, inexpensive, implantable, portable, and miniaturizable, and offering a fast, accurate, and reliable route to determining the glucose concentration [6, 7]. Based on the sensing mechanism, electrochemical glucose sensors can be classified into two species: enzyme-based biosensors and non-enzymatic sensors [2]. Because of the poor stability, high cost, and easy denaturation of enzyme-based biosensors, extensive efforts have been devoted to the development of novel non-enzymatic electrochemical glucose sensors with high sensitivity, long-term stability, low cost, and facile fabrication [8].
Thus far, the development of non-enzymatic electrochemical glucose sensors has progressed significantly using novel nanomaterials such as pure metals [9,10,11], metal oxides [12, 13], and carbon-based composites [14]. Among the numerous non-enzymatic electrochemical sensors, non-noble transition metals (such as Ni [15] and Cu [16]) and metal oxides (such as NiOx/Ni(OH)2/NiOOH [17,18,19], CoOx/Co(OH)2/CoOOH [20,21,22], CuOx [23], and ZnOx [13]) have been demonstrated as promising active materials for non-enzymatic glucose sensors, offering low cost and earth abundance compared with noble metal-based materials. The sensing performance of non-enzymatic glucose sensors is significantly controlled by the morphology, microstructure, and composition of the nanomaterials. Single metal or metal oxide-based materials have exhibited limited potential because of their unitary composition. Previous studies indicate that multi-metal alloys or multi-metallic compounds greatly promote the integrated electrochemical performance. Recently, more and more attention has been focused on designing and fabricating binary metal or bimetallic oxide composites such as Co-Ni [24], Ni-Fe [25], and Ni-Cu [26] for their diversity in accessible bimetallic compositions and flexibility in forming complex three-dimensional (3D) structures, resulting in superior electrochemical activity for glucose sensing. Bimetallic Ni-Co-based electrochemical sensors in particular receive increasing attention for their advanced electrocatalytic properties and chemical stability [24, 27,28,29].
Most of the active materials that have been widely applied to non-enzymatic glucose sensors are based on crystalline phases. Amorphous-phase materials were previously assessed as unsuitable for electrochemical glucose sensors because of their poor electrochemical performance [30]. However, the amorphous phase has recently been demonstrated to possess impressive electrocatalytic behavior, which may be exploitable in certain device applications [31, 32]. Considering the excellent electrochemical glucose-sensing performance of crystalline bimetallic hydroxide electrodes, several benefits should be realizable by the development of amorphous bimetallic hydroxide sensors. Here, we focus our attention on the advantages of the amorphous phase in non-enzymatic electrochemical glucose sensors based on amorphous Co-Ni hydroxide, prepared by a simple, facile, and chemically clean electrochemical cathode deposition technique. This study aims to explore the biosensing performance of the prepared amorphous Co-Ni hydroxide toward glucose oxidation in alkaline solution.
Synthesis of Amorphous Co-Ni Hydroxide Nanostructures/Graphite Electrode
The amorphous Co-Ni hydroxide nanostructures were fabricated in one step by the following electrochemical cathode deposition method. In detail, the amorphous products were deposited onto a cathodic graphite electrode under an applied working voltage of 90 V for 12 h in a quartz deposition bath. The deposition bath contains three parts: two parallel graphite sheets as the cathode and anode, both with a working area of approximately 15 mm × 7 mm; high-purity de-ionized water as the electrolyte; and a transition Ni-Co alloy (Ni/Co molar ratio of 1:1) target of 20 mm × 20 mm × 20 mm in size at the center of the bath floor as the metal ion source.
Characterization of Amorphous Co-Ni Hydroxide Nanostructures
Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) were carried out on a JSM-7600F SEM at an accelerating voltage of 15 kV and an FEI Tecnai G2 F30 TEM at an accelerating voltage of 300 kV, respectively, to identify the morphology, crystallinity, and microstructure of the as-synthesized amorphous glucose sensors [33, 34]. Energy-filtered TEM mapping was employed to analyze the element distribution of the amorphous products. In addition, the surface chemical states of the bonded elements of the products were characterized by using an ESCA Lab250 X-ray photoelectron spectrometer (XPS) and an argon-ion laser micro-Raman spectrometer (Renishaw inVia, 785 nm).
Electrochemical Measurements
All electrochemical measurements were conducted using a CHI-760E electrochemical workstation in a typical three-electrode setup with 0.5 M NaOH solution as electrolyte, amorphous Co-Ni hydroxide/graphite as the working electrode, saturated calomel electrode (SCE) as the reference electrode, and platinum wire as the counter electrode.
Results and Discussions
Characterizations of the Amorphous Co-Ni Hydroxide Nanostructures
A series of physical characterizations were carried out to confirm the formation of amorphous Co-Ni hydroxide nanostructures on the graphite substrate. The surface morphology of the modified electrode was characterized by SEM and TEM images as shown in Fig. 1. The amorphous Co-Ni hydroxide nanostructures were successfully fabricated on the surface of the graphite substrate. SEM images (Fig. 1a, b) of as-prepared products show that the nanostructures display a dominant diameter of ~ 400 nm with wrinkled surfaces, revealing a three-dimensional surface. In order to characterize the detailed morphology and structure of the as-synthesized samples, typical low-magnification and high-resolution TEM (HRTEM) images of nanostructures prepared are depicted in Fig. 1c and d, respectively. No evident crystal lattice fringe can be observed in the high-resolution TEM image, so no crystalline morphology was generated in this process. Furthermore, the corresponding selected area electron diffraction (SAED) pattern was investigated as shown in the inset of Fig. 1c and a broad and diffused halo ring can be observed suggesting an amorphous nature [35]. The composition distribution of the as-synthesized nanostructures was investigated by using the element mapping technique as shown in Fig. 1f–h. The element mapping analysis results suggest a highly homogeneous distribution of O (Fig. 1f), Co (Fig. 1g), and Ni (Fig. 1h) in the products, implying the well-homogeneous structure of amorphous Co-Ni hydroxide nanostructures [36].
Morphology and structure of the products. a, b SEM images of the amorphous Co-Ni hydroxide deposited on a graphite sheet. c, d TEM images of the amorphous Co-Ni hydroxide sample (the insets show the corresponding SAED pattern and HRTEM image). e STEM image. f–h Element mappings of O, Co, and Ni
XPS, a reliable technique for phase composition detection, was used to investigate the chemical states of the atoms at the modified electrode surface by measuring binding energies. The typical XPS spectra of Ni 2p, Co 2p, and O 1s, with fitting curves obtained by a Gaussian fitting method, are shown in Fig. 2a, b, and c, respectively. The high-resolution Ni 2p spectrum displays two spin-orbit doublets and the corresponding shake-up satellite peaks over the range of 851–888 eV. The fitting results show two strong peaks with binding energies at 856.3 and 873.9 eV, characteristic of the Ni 2p3/2 and Ni 2p1/2 spin-orbit doublets, respectively, and two corresponding shake-up satellite peaks at 862.2 and 880.1 eV for Ni 2p3/2 and Ni 2p1/2. These results suggest that the Ni species in the prepared samples are in the +2 oxidation state [18]. Furthermore, the shape of the spectrum and the spin-energy separation of 17.6 eV are characteristic of the Ni(OH)2 phase, in good agreement with previous reports [18, 37]. Meanwhile, the fitted high-resolution Co 2p spectrum shows the spin-orbit splitting of Co 2p3/2 at 781.5 eV with a corresponding shake-up satellite peak at 787.2 eV, and Co 2p1/2 at 797.1 eV with a corresponding shake-up satellite peak at 804.2 eV, indicating the +2 oxidation state of Co in the prepared product [38, 39]. In addition, the O 1s spectrum with a strong peak at a binding energy of 531 eV can be associated with bound hydroxyl ions (OH-), confirming the formation of M–OH (M = Co, Ni) [18, 37,38,39]. Moreover, the atomic ratio of Co to Ni is close to 1:1 from the XPS analysis.
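The peak positions quoted above come from decomposing the measured spectra into Gaussian components. The snippet below is only a rough illustration of that kind of fit (two Gaussians on a linear background, fitted with SciPy); it is not the authors' actual procedure, the data are synthetic, and real XPS analysis typically uses a Shirley background and Voigt line shapes.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width):
    return amp * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def ni2p_model(x, a1, c1, w1, a2, c2, w2, slope, offset):
    """Two Gaussian peaks (e.g. Ni 2p3/2 and Ni 2p1/2) on a linear background."""
    return gaussian(x, a1, c1, w1) + gaussian(x, a2, c2, w2) + slope * x + offset

# Synthetic stand-in for a measured Ni 2p spectrum over 851-888 eV.
rng = np.random.default_rng(0)
binding_energy = np.linspace(851, 888, 371)
counts = ni2p_model(binding_energy, 9000, 856.3, 1.6, 4500, 873.9, 1.8, -3.0, 4000) \
         + rng.normal(0, 80, binding_energy.size)

p0 = [8000, 856, 2, 4000, 874, 2, 0, 3000]      # rough initial guesses for the fit
popt, _ = curve_fit(ni2p_model, binding_energy, counts, p0=p0)
print("fitted peak centers (eV):", round(popt[1], 1), round(popt[4], 1))   # ~856.3, ~873.9
```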
Chemical states of bonded element characterizations of the products. a–c XPS spectra of Ni 2p, Co 2p, and O 1s. d The Raman spectrum of the amorphous Co-Ni hydroxide
Raman spectrum was employed to collect more information about the surface functional groups of the product as shown in Fig. 2d. Two strong broad peaks located at 461 and 529 cm−1 and three weak peaks at 299, 313, and 688 cm−1 can be observed in the Raman spectrum of Co–Ni(OH)2. In particular, the peaks at 299, 461, and 688 cm−1 can be indicative of the Co(OH)2 phase [39] and the peaks at 313, 461, and 688 cm−1 are characteristic of Ni(OH)2 phase [40]. The strong bands at 461 and 529 cm−1, shifted and broadened, may come from the combination of symmetric Ni–OH/Co–OH stretching mode and the symmetric O–Ni–O/O–Co–O stretching mode, respectively. The band at 313 cm−1 can be attributed to the Eg(T) mode for Ni(OH)2 phase. The peaks of 299, 689, and 191 cm−1 may have resulted from the Eg and A1g symmetric stretching mode for Co(OH)2 phase, respectively. In summary, the characterization results of morphology and structure, obtained from SEM, TEM, SAED, XPS, and Raman measurements, reveal that the amorphous Co–Ni(OH)2 nanostructures with irregular and wrinkled surface features were synthesized successfully.
Electrochemical Performance of Amorphous Co-Ni Hydroxide Electrode
In order to obtain an activated and stabilized electrochemical response, the amorphous Co-Ni hydroxide electrode was first cycled at a scan rate of 50 mV s−1 in 0.5 M NaOH electrolyte until the cyclic voltammetry (CV) curves overlapped completely. Afterward, the CV technique was used to investigate the electrochemical behavior of the amorphous Co-Ni hydroxide electrode in 0.5 M NaOH electrolyte without glucose addition at various scan rates in the potential window between 0.0 and 0.55 V vs SCE. As shown in Fig. 3a, the CV curves of amorphous Co-Ni hydroxide exhibit typical pseudocapacitive behavior with a pair of well-defined quasi-reversible redox peaks, indicating the reversible conversion of Ni2+/Ni3+ and Co2+/Co3+/Co4+ [41]. For example, the amorphous Co-Ni hydroxide electrode exhibits a broad, strong anodic peak centered at about 0.36 V vs SCE at a scan rate of 50 mV s−1, which can be attributed to the different complex oxidation states of Ni and Co. In detail, the Ni2+ and Co2+ ions are transformed into Ni3+ and Co3+ ions, respectively, and then the Co3+ ion is further oxidized into the Co4+ ion at higher potentials. Under the reverse scan, two broad cathodic peaks centered at 0.19 and 0.14 V vs SCE were observed at a scan rate of 50 mV s−1, corresponding to the reduction of Ni3+/Ni2+, Co4+/Co3+, and Co3+/Co2+, respectively.
Electrochemical behaviors of amorphous Co-Ni hydroxide toward glucose oxidation in 0.5 M NaOH. a CV curves at various scan rates in the absence of glucose. b CV curves of the Co-Ni hydroxide electrode with different concentrations of glucose at a scan rate of 50 mV s−1. c CV curves of glucose oxidation at different scan rates from 10 to 60 mV s−1 in the presence of 5 mM glucose. d The fitting plots of Ipa-ν1/2 in the absence and presence of glucose
With increasing scan rate, the values of the redox peak currents increase gradually, whereas the potentials of the anodic peak (Epa) and cathodic peak (Epc) undergo positive and negative shifts, respectively. These phenomena can be attributed to the internal resistance of the amorphous Co-Ni hydroxide electrode. As shown in Fig. 3d, the anodic peak current (Ipa) of the amorphous Co-Ni hydroxide electrode in 0.5 M NaOH solution is plotted as a function of the square root of the scan rate (ν1/2) in the absence of glucose. The fitting result reveals that Ipa has a linear relationship with ν1/2, with a high correlation coefficient of 0.999 under the alkaline conditions, suggesting that the electrochemical kinetics at the Co-Ni hydroxide electrode are governed by a diffusion-controlled process.
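The diffusion-control argument rests on the linearity of Ipa versus ν1/2. A minimal sketch of how such a fit and its correlation coefficient might be obtained is given below; the current values are placeholders, not the measured data.

```python
import numpy as np

# Scan rates (mV/s) and hypothetical anodic peak currents (uA) read off the CV curves.
scan_rates = np.array([10, 20, 30, 40, 50, 60], dtype=float)
ipa = np.array([310, 445, 552, 640, 713, 780], dtype=float)   # placeholder values

x = np.sqrt(scan_rates)                   # nu^(1/2)
slope, intercept = np.polyfit(x, ipa, 1)  # least-squares line Ipa = slope*sqrt(nu) + intercept
r = np.corrcoef(x, ipa)[0, 1]             # linear correlation coefficient

print(f"Ipa = {slope:.1f} * sqrt(nu) + {intercept:.1f},  R = {r:.4f}")
# A correlation coefficient close to 1 supports a diffusion-controlled process.
```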
Amperometric detection of glucose. Amperometric current-time (i-t) curve (a) and corresponding calibration curve (b) of the glucose oxidation on the Co-Ni hydroxide electrode acquired in 0.5 M NaOH
Voltammetric Behavior of Co-Ni Hydroxide Toward Glucose Oxidation
The electrochemical behavior of amorphous Co-Ni hydroxide toward glucose oxidation under alkaline conditions was further explored by the CV technique. Figure 3b depicts typical CV curves of the as-prepared Co-Ni hydroxide electrode as a function of glucose concentration ranging from 0 to 5 mM in 0.5 M NaOH solution at a scan rate of 50 mV s−1. As can be seen, upon the addition of glucose, the anodic peak potential shifted positively and the anodic peak currents Ipa were enhanced gradually, which provides the basis for the subsequent quantitative analysis. All the CV curves in Fig. 3b exhibit broad, strong anodic peaks, which are mainly attributed to the oxidation of Ni2+/Ni3+ and Co2+/Co3+/Co4+ at first. Then, the analyte glucose (C6H12O6) is oxidized into gluconolactone (C6H10O6) by Ni3+ and Co4+ in the alkaline electrolyte. Simultaneously, NiO(OH) and CoO2 are reduced into Ni(OH)2 and CoO(OH), respectively. This process results in an increase of the anodic peak current by promoting the oxidation of Ni(OH)2 to NiO(OH) and of CoO(OH) to CoO2. It is worth noting that the anodic peak currents display a linear dependence on glucose concentration (Cglucose) in the range 0–5 mM, as shown in the inset of Fig. 3b. The linear fitting equation of Ipa-Cglucose can be expressed as follows:
$$ {I}_{\mathrm{pa}}\left(\upmu \mathrm{A}\right)=11104\ \upmu \mathrm{A}+1353.8\ \upmu \mathrm{A}\cdotp {\mathrm{mM}}^{\hbox{-} 1}\ {C}_{\mathrm{glucose}}\left({\mathrm{R}}^2=0.9964\right) $$
The working area of the amorphous electrode in our case was 1.0 cm2, so the sensitivity of the amorphous samples was 1353.8 μA mM−1 cm−2. Furthermore, with increasing glucose concentration, the cathodic peak currents Ipc decreased gradually, which can be attributed to the consumption of Ni3+ and Co4+ in the electrooxidation of glucose. The detailed catalytic oxidation reactions can be described as follows [41]:
$$ \mathrm{Ni}{\left(\mathrm{OH}\right)}_2+{\mathrm{OH}}^{\hbox{-}}\to \mathrm{Ni}\mathrm{O}\left(\mathrm{OH}\right)+{\mathrm{H}}_2\mathrm{O}+{\mathrm{e}}^{\hbox{-} } $$
$$ \mathrm{Ni}\mathrm{O}\left(\mathrm{OH}\right)+\mathrm{glucose}\to \mathrm{Ni}{\left(\mathrm{OH}\right)}_2+\mathrm{gluconolactone} $$
$$ \mathrm{Co}{\left(\mathrm{OH}\right)}_2+{\mathrm{OH}}^{\hbox{-}}\to \mathrm{Co}\mathrm{O}\left(\mathrm{OH}\right)+{\mathrm{H}}_2\mathrm{O}+{\mathrm{e}}^{\hbox{-} } $$
$$ \mathrm{CoO}\left(\mathrm{OH}\right)+{\mathrm{OH}}^{\hbox{-}}\to {\mathrm{CoO}}_2+{\mathrm{H}}_2\mathrm{O}+{\mathrm{e}}^{\hbox{-} } $$
$$ {\mathrm{CoO}}_2+\mathrm{glucose}\to \mathrm{CoO}\left(\mathrm{OH}\right)+\mathrm{gluconolactone} $$
In order to understand the electrochemical kinetic process during the electrooxidation of glucose on the Co-Ni hydroxide electrode, CV curves of glucose oxidation as a function of scan rate were recorded in 0.5 M NaOH solution containing 5 mM glucose, as shown in Fig. 3c. The redox peak current values (Ipa and Ipc) increased with increasing scan rate from 10 to 60 mV s−1, whereas the peak potentials shifted negatively for Epc and positively for Epa. These phenomena can be attributed to the increase of overpotential and to kinetic limitations of the amorphous Co-Ni hydroxide electrode toward glucose electrooxidation. As shown in Fig. 3d, the plot of Ipa-ν1/2 in 0.5 M NaOH solution containing 5 mM glucose shows excellent linearity with a high correlation coefficient of 0.998, suggesting that the glucose oxidation occurring on the Co-Ni hydroxide electrode is a diffusion-controlled process [29].
Amperometric Detection of Glucose
To evaluate the accurate electrocatalytic response of glucose oxidation at the amorphous Co-Ni hydroxide electrode surface, the amperometry technique was carried out in 20 mL of stirred 0.5 M NaOH solution with successive step additions of known concentrations of glucose at an applied potential of 0.36 V vs SCE (Fig. 4a). It can easily be found that a notable enhancement of the current response was rapidly acquired after each glucose addition and reached a steady state within 5 s, suggesting a fast oxidation reaction between glucose and the redox sites of the Co-Ni hydroxide electrode. The above phenomena reveal that the Co-Ni hydroxide electrode shows a sensitive and fast response to the Cglucose variation under alkaline conditions. As shown in Fig. 4b, the calibration curve of the response current as a function of glucose concentration demonstrates that the response currents increased linearly with the successive additions of glucose. The corresponding fitting plot reveals that the response curve can be divided into two distinctive linear ranges. The first linear range at low concentrations was observed from 0.00025 to 1 mM with a high correlation coefficient of 0.9994, and the linear fitting equation can be expressed as follows:
$$ {\mathrm{I}}_{\mathrm{pa}}\left(\upmu \mathrm{A}\right)=40.026\ \upmu \mathrm{A}+1911.5\ \upmu \mathrm{A}\cdot {\mathrm{mM}}^{-1}\ {C}_{glucose} $$
The second linear range at higher concentrations of glucose was from 1 to 5 mM with a linear correlation coefficient of 0.997, and the linear fitting equation can be expressed as follows:
$$ {I}_{\mathrm{pa}}\left(\upmu \mathrm{A}\right)=780.2\ \upmu \mathrm{A}+1397.5\ \upmu \mathrm{A}\cdotp {\mathrm{mM}}^{\hbox{-} 1}\ {C}_{\mathrm{glucose}} $$
From the fitting plot, the sensitivity of the sensor was calculated to be 1911.5 μA·mM−1 cm−2 at low concentrations of glucose and 1397.5 μA·mM−1 cm−2 at high concentrations of glucose. Thus, the result is similar to that calculated from the CV curves, confirming the high sensitivity of the amorphous Co–Ni(OH)2 nanostructures. The limit of detection (LOD) can be calculated by using the equation as follows [18, 29]:
$$ \mathrm{LOD}=3\upsigma /\mathrm{S} $$
where σ is the standard deviation of the background current obtained before the addition of glucose and S is the slope of the calibration plot of I-Cglucose. The detection limit was estimated to be 0.12 μM in 0.5 M NaOH solution from the linear range at low glucose concentrations. Furthermore, it can be found that the response current ΔI (defined as the increment of current corresponding to a Cglucose increment of 1 mM) remained nearly constant as Cglucose increased under low-Cglucose conditions, and the sensor exhibited a fast current response to the addition of glucose with high sensitivity and a high linear correlation coefficient. However, when Cglucose was increased to a high level, ΔI decreased, and once Cglucose exceeded a certain level, ΔI decreased dramatically, destroying the linear relationship. Under lower Cglucose conditions, the number of surface active sites exceeds the number of glucose molecules, which makes the response current increase linearly with the successive additions of glucose. With increasing Cglucose, more and more active sites are covered by glucose molecules, leaving some glucose molecules inaccessible for oxidation and making ΔI decrease notably.
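To make the two-segment calibration and the LOD = 3σ/S estimate concrete, the sketch below fits both linear ranges and computes a detection limit. The calibration points and the baseline noise are synthetic, generated from the fit parameters reported above simply so that the snippet runs; they are not the authors' raw data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data (mM, uA) shaped like the two reported linear ranges.
c_low = np.array([0.00025, 0.001, 0.01, 0.1, 0.5, 1.0])
i_low = 40.0 + 1911.5 * c_low + rng.normal(0, 5, c_low.size)
c_high = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
i_high = 780.2 + 1397.5 * c_high + rng.normal(0, 20, c_high.size)

slope_low, _ = np.polyfit(c_low, i_low, 1)     # uA/mM, low-concentration branch
slope_high, _ = np.polyfit(c_high, i_high, 1)  # uA/mM, high-concentration branch

area = 1.0                                     # electrode area in cm^2, as stated in the text
print("sensitivities (uA mM^-1 cm^-2):", round(slope_low / area, 1), round(slope_high / area, 1))

# Detection limit LOD = 3*sigma/S, with sigma taken from a baseline noise trace.
baseline = 40.0 + rng.normal(0.0, 0.08, 500)   # background current before glucose addition
lod_mM = 3 * baseline.std(ddof=1) / slope_low
print(f"LOD ~ {lod_mM * 1000:.2f} uM")         # lands near the reported ~0.12 uM
```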
As the electrochemical activity is highly dependent on the OH− anion concentration (COH−) under alkaline conditions, the glucose detection performance of the Co-Ni hydroxide electrode is possibly influenced by the COH− of the alkaline electrolyte [42]. In order to find the optimal concentration, the effect of COH− on the amperometric response of Co-Ni hydroxide toward glucose oxidation was investigated in NaOH solutions of several concentrations at an applied potential near the anodic peak (Fig. 5). Known concentrations of glucose ranging from 0.00025 to 5 mM were added in consecutive steps into 20 mL of stirred NaOH solution at three different concentrations, 0.1, 0.5, and 1.0 M. All the amperometry curves in the different CNaOH exhibited a rapid current response to the addition of glucose, as shown in Fig. 5a. The plots of the response current as a function of glucose concentration in 0.1 M, 0.5 M, and 1 M NaOH solution are shown in Fig. 5b. It can be observed that the amorphous Co-Ni hydroxide displayed its optimal glucose-sensing performance in 0.5 M NaOH solution, with a higher sensitivity, a lower detection limit, and a wider linear range with a high correlation coefficient compared to those in 0.1 and 1 M NaOH. Considering that high sensitivity, low LOD, a wide linear concentration range, and a high correlation coefficient benefit the accuracy and reduce the deviation of glucose detection when the sensor materials are applied in practical applications, an OH− anion concentration of 0.5 M was chosen as the optimal working electrolyte for glucose detection in our studies.
Current response to glucose in different concentrations of NaOH. The current-time response (a) and current-Cglucose curves (b) for Co-Ni hydroxide electrode acquired in 0.1 M, 0.5 M, and 1 M NaOH.
The long-term stability, another critical factor for electrochemical sensors in practical applications, was investigated by examining the amperometric response to glucose with successive step additions in 20 mL of stirred 0.5 M NaOH electrolyte. As shown in Fig. 6a, after 2 months no decrease of the current response toward glucose electrooxidation compared with the initial response current could be observed. The sensitivity retained 103% of the initial value even after 2 months, implying excellent long-term stability of the glucose sensor based on amorphous Ni-Co hydroxide. Furthermore, Table 1 shows a comparative evaluation of the sensing performance of our fabricated glucose sensor against other nickel-based and cobalt-based non-enzymatic glucose sensors reported in the previous literature, in terms of sensitivity, linear range, detection limit, and long-term stability. The glucose detection performance of the fabricated amorphous Co-Ni hydroxide sensor obtained in our study is comparable and even superior to most nickel-based and cobalt-based non-enzymatic glucose sensors reported elsewhere; in particular, our sensor shows remarkable long-term stability, supporting its potential in real biological sample analysis.
The stability and selectivity of amorphous electrode for electrochemical sensors. a The current response to glucose with the successive step addition in 20 mL stirred electrolyte of 0.5 M NaOH over 2 months. b The amperometric current-time (i-t) curve of the glucose oxidation with other carbohydrates on the Co-Ni hydroxide electrode acquired in 20 mL stirred electrolyte of 0.5 M NaOH with the successive step addition of 1 mM glucose and 0.1 mM common interferents.
Table 1 Comparison of the sensing performance of non-enzymatic amorphous Co-Ni hydroxide glucose sensor with other reported nickel-based and cobalt-based non-enzymatic glucose sensors
Generally, the electrochemical sensing performance of a sensor for glucose detection depends significantly on the intrinsic sensing activity of the sensor material and on the number of active sites. For hydroxide-based electrochemical glucose sensors, the intrinsic sensing activity is usually determined by the material composition, crystal structure, defects, redox couples, electronic conductivity, and charge transfer capability; the number of active sites is largely related to the material morphology, particle size, and surface microstructure. Based on the above considerations, the outstanding glucose-sensing ability of the amorphous Co–Ni(OH)2-based sensor primarily comes from the following factors. The first factor is the morphology: clean, ravine-like surfaces and a three-dimensional (3D) nanostructure. The clean surfaces, resulting from preparation in a chemically clean reaction environment, benefit the efficiency with which the amorphous Co–Ni(OH)2 nanostructures interact with glucose molecules. The ravine-like surfaces and 3D nanostructures offer a high specific surface area, which increases the number of active sites and thus significantly improves the sensing activity. The second factor is the homogeneous incorporation of a second metal element into the metal hydroxide, which provides a more easily accessible pathway for the intercalation and deintercalation of charges and promotes the conversion of Ni2+/Ni3+ and Co2+/Co3+/Co4+. The fast Ni2+/Ni3+ and Co2+/Co3+/Co4+ conversion rate means that a sufficient amount of NiO(OH) and CoO2 active sites can be maintained to oxidize glucose quickly and adequately even in the presence of high Cglucose. Lastly, the self-assembled amorphous phase plays a crucial role in improving the electronic conductivity, charge transfer capability, and longevity of the sensor. The amorphous phase is characterized by long-range disorder, short-range ordered structure, many defects, and unsaturated ligand atoms, which lead to a significant improvement in the electronic conductivity of the amorphous materials and an increase in the number of electrochemically active sites. Meanwhile, the self-assembly maintains the structural continuity, which significantly enhances the electronic conductivity and the electrical contact between the nanostructures and the substrate. Moreover, during the redox reaction, electrostatic interactions between metal ions are uniformly distributed (isotropic) in the amorphous structure because of its long-range disordered nature. The electrostatic force caused by the changing charge during the conversion of Ni2+/Ni3+ and Co2+/Co3+/Co4+ can therefore fully relax and be released in the amorphous structure, so that the structure of the amorphous material remains stable. In other words, the amorphous-phase sensor is expected to maintain long-term stability during glucose detection.
Interference Studies
The above results indicate that the amorphous Co-Ni hydroxide displays excellent glucose-sensing behavior with high sensitivity, a wide linear range, and long-term stability in the absence of other interfering species under alkaline conditions. However, it is known that some easily oxidized interfering species, such as ascorbic acid (AA), uric acid (UA), and dopamine (DA), usually coexist with glucose in human serum. The selectivity of glucose detection, that is, the response to glucose in the presence of other competing species, is another important factor and challenge for electrochemical sensors in practical applications. The influence of various interfering species, such as AA, UA, and DA, on the glucose sensing of the amorphous Co-Ni hydroxide electrode was studied using the amperometry technique. The physiological level of glucose in human blood is about 3–8 mM, which is substantially higher than the concentrations of interfering species such as AA (0.1 mM), UA (0.1 mM), and DA (0.1 mM). Hence, the interference test of the modified electrode toward glucose oxidation was carried out in 20 mL of stirred 0.5 M NaOH electrolyte by successive step additions of 1 mM glucose and 0.1 mM AA, UA, and DA. The corresponding amperometric responses are shown in Fig. 6b. A small rise of the response current can be observed upon the addition of AA, but the increment is much smaller than that for glucose (about 2.5%). Meanwhile, no obvious current responses are observed upon the addition of DA and UA. As a result, the amorphous Co-Ni hydroxide electrode displayed negligible current responses toward the interfering species in comparison with glucose, suggesting high selectivity toward glucose for the prepared amorphous Co-Ni hydroxide as a non-enzymatic electrochemical sensor and excellent applicability to real sample analysis.
Real Sample Analysis
In order to verify the commercial reliability and applicability of the modified glucose sensor, the glucose concentration in the real sample was detected using the amperometric method. Amperometric response of the amorphous Co-Ni hydroxide electrode toward glucose oxidation was monitored with the successive step addition of 0.1 mM glucose to 20 mL stirred 0.5 M NaOH solution containing serum samples. As shown in Table 2, the non-enzymatic glucose sensor displays recoveries in the range of 97.92–100.33% and 2.66–3.99% relative standard deviation (RSD), implying that the as-synthesized amorphous Co-Ni hydroxide glucose sensor holds great potential in real biological sample analysis.
Table 2 Real sample analysis of glucose in human serum samples
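Recovery and RSD in Table 2 follow the usual definitions (mean found amount over spiked amount, and sample standard deviation over the mean). A small sketch with invented replicate measurements illustrates the arithmetic; the serum values below are placeholders, not the authors' data.

```python
import numpy as np

added = 0.10                                  # mM of glucose spiked into the serum sample
found = np.array([0.0981, 0.1002, 0.0975])    # mM recovered in three replicate runs (placeholder)

recovery = found.mean() / added * 100         # percent recovery
rsd = found.std(ddof=1) / found.mean() * 100  # relative standard deviation, percent

print(f"recovery = {recovery:.2f}%  RSD = {rsd:.2f}%")
```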
A facile approach has been demonstrated for the synthesis of amorphous Co-Ni hydroxide with a homogeneous architecture by a simple and chemically clean electrochemical deposition route. The electrocatalytic activity of the fabricated amorphous samples toward non-enzymatic glucose sensing has been investigated under alkaline conditions. The as-synthesized amorphous Co-Ni hydroxide sensor exhibits a superior biosensing performance toward glucose oxidation, with a high sensitivity of 1911.5 μA·mM−1 cm−2 and a low detection limit of 0.12 μM at low glucose concentrations, a wide linear range from 0.00025 to 5 mM, a fast response within 5 s, and long-term stability and excellent selectivity in 0.5 M NaOH solution. These results reveal the great potential of amorphous Co-Ni hydroxide as a glucose sensor material for use in non-enzymatic glucose detection.
Abbreviations
CV: Cyclic voltammetry
Epa: Potential of the anodic peak
Epc: Potential of the cathodic peak
HRTEM: High-resolution transmission electron microscope
Ipa: Anodic peak current
RSD: Relative standard deviation
SAED: Selected area electron diffraction
SCE: Saturated calomel electrode
TEM: Transmission electron microscope
UA: Uric acid
XPS: X-ray photoelectron spectroscopy
Tian K, Prestgard M, Tiwari A (2014) A review of recent advances in non-enzymatic glucose sensors. Materials Science and Engineering C 41:100–118
Zaidi SA, Shin JH (2016) Recent developments in nanostructure based electrochemical glucose sensors. Talanta 149:30–42
Hu R, Stevenson AC, Lowe CR (2012) An acoustic glucose sensor. Biosensors and Bioelectronics 35:425–428
Kwak YH, Choi DS, Kim YN, Kim H, Yoon DH, Ahn S, Yang J, Yang W, Seo S (2012) Flexible glucose sensor using CVD-grown graphene-based field effect transistor. Biosensors and Bioelectronics 37:82–87
Toncelli C, Innocenti Malini R, Jankowska D, Spano F, Colfen H, Maniura-Weber K, Rossiaand RM, Boesel LF (2018) Optical glucose sensing using ethanolamine-Cpolyborate complexes. J Mater Chem B 6:816–823
Li J, Hu H, Li H, Yao C (2017) Recent developments in electrochemical sensors based on nanomaterials for determining glucose and its by product H2O2. J Mater Sci 52:10455–10469
Zhu H, Li L, Zhou W, Shao Z, Chen X (2016) Advances in non-enzymatic glucose sensors based on metal oxides. J. Mater. Chem. B 4:7333–7349
Rahman MM, Ahammad AJ, Jin JH, Ahn SJ, Lee JJ (2010) A comprehensive review of glucose biosensors based on nanostructured metal-oxides. Sensors 10:4855–4886
Weremfo A, Fong STC, Khan A, Hibbert DB, Zhao C (2017) Electrochemically roughened nanoporous platinum electrodes for non-enzymatic glucose sensors. Electrochimica Acta 231:20–26
Zhou YG, Yang S, Qian QY, Xia XH (2009) Gold nanoparticles integrated in a nanotube array for electrochemical detection of glucose. Electrochemistry Communications 11:216–219
Xu Q, Zhao Y, Xu JZ, Zhu JJ (2006) Preparation of functionalized copper nanoparticles and fabrication of a glucose sensor. Sensors and Actuators B 114:379–386
Yu J, Zhao T, Zeng B (2008) Mesoporous MnO2 as enzyme immobilization host for amperometric glucose biosensor construction. Electrochemistry Communications 10:1318–1321
Dayakar T, Venkateswara RK, Bikshalu K, Rajendar V, Si-Hyun P (2017) Novel synthesis and structural analysis of zinc oxide nanoparticles for the nonenzymatic glucose biosensor. Materials Science and Engineering C 75:1472–1479
Li S, Zhang Q, Lu Y, Ji D, Zhang D, Wu J, Chen X, Liu Q (2017) One step electrochemical deposition and reduction of graphene oxide on screen printed electrodes for impedance detection of glucose. Sensors and Actuators B 244:290–298
Ensafi AA, Ahmadi N, Rezaei B (2017) Nickel nanoparticles supported on porous silicon flour, application as a non-enzymatic electrochemical glucose sensor. Sensors and Actuators B 239:807–815
Durana GM, Benavidezb TE, Giulianic JG, Riosa A, Garciab CD (2016) Synthesis of CuNP-modified carbon electrodes obtained by pyrolysis of paper. Sensors and Actuators B 227:626–633
Zhang H, Liu S (2017) Nanoparticles-assembled NiO nanosheets templated by grapheneoxide film for highly sensitive non-enzymatic glucose sensing. Sensors and Actuators B 238:788–794
Xia K, Yang C, Chen Y, Tian L, Su Y, Wang J, Li L (2017) In situ fabrication of Ni(OH)2 flakes on Ni foam through electrochemical corrosion as high sensitive and stable binder-free electrode for glucose sensing. Sensors and Actuators B 240:979–987
Li SJ, Guo W, Yuan BQ, Zhang DJ, Feng ZQ, Du JM (2017) Assembly of ultrathin NiOOH nanosheets on electrochemically pretreated glassy carbon electrode for electrocatalytic oxidation of glucose and methanol. Sensors and Actuators B 240:398–407
Ibupoto ZH, Tahira A, Mallah AB, Shahzad SA, Willander M, Wang B, Yu C (2017) The synthesis of functional Cobalt oxide nanostructures and their sensitive glucose sensing application. Electroanalysis 29:213–222
Shackery I, Patil U, Pezeshki A, Shinde NM, Im S, Jun SC (2016) Enhanced non-enzymatic amperometric sensing of glucose using Co(OH)2 nanorods deposited on a three dimensional graphene network as an electrode material. Microchim Acta 183:2473–2479
Zhang L, Yang C, Zhao G, Mu J, Wang Y (2015) Self-supported porous CoOOH nanosheet arrays as a non-enzymatic glucose sensor with good reproducibility. Sensors and Actuators B 210:190–196
Yuan R, Li H, Yin X, Lu J, Zhang L (2017) 3D CuO nanosheet wrapped nanofilm grown on Cu foil for high-performance non-enzymatic glucose biosensor electrode. Talanta 174:514–520
Xu J, Dong Y, Cao J, Guo B, Wang W, Chen Z (2013) Microwave-incorporated hydrothermal synthesis of urchin-like Ni(OH)2–Co(OH)2 hollow microspheres and their supercapacitor applications. Electrochimica Acta 114:76–82
Kannan P, Maiyalagan T, Marsili E, Ghosh S, Niedziolka-Jönsson J, Jönsson-Niedziolka M (2016) Hierarchical 3-dimensional nickel–iron nanosheet arrays on carbon fiber paper as a novel electrode for non-enzymatic glucose sensing. Nanoscale 8:843–855
Shabnam L, Faisal SN, Roy AK, Minett AI, Gomes VG (2017) Nonenzymatic multispecies sensor based on Cu-Ni nanoparticle dispersion on doped grapheme. Electrochimica Acta 224:295–305
Lien CH, Chen JC, Hu CC, Wong DS (2014) Cathodic deposition of binary nickel-cobalt hydroxide for non-enzymatic glucose sensing. Journal of the Taiwan Institute of Chemical Engineers 45:846–851
Balram A, Jiang J, Fernández MH, Meng DD (2015) Nickel-cobalt double hydroxide decorated carbon nanotubes via aqueous electrophoretic deposition towards catalytic glucose detection. Key Engineering Materials 654:70–75
Ramachandran K, Raj kumar T, Justice Babu K, Gnana kumar G (2016) Ni-Co bimetal nanowires filled multiwalled carbon nanotubes for the highly sensitive and selective non-enzymatic glucose sensor applications. Scientific Reports 6:36583
Nai J, Wang S, Bai Y, Guo L (2013) Amorphous Ni(OH)2 nanoboxes: fast fabricationand enhanced sensing for glucose. Small 9:3147–3152
Smith RDL, Prévot MS, Fagan RD, Zhang Z, Sedach PA, Siu MKJ, Trudel S, Berlinguette CP (2016) Photochemical route for accessing amorphous metal oxide materials for water oxidation catalysis. Science 340:60
Zhu J, Yin H, Cui Z, Qin D, Gong J, Nie Q (2017) Amorphous Ni(OH)2/CQDs microspheres for highly sensitive non-enzymatic glucose detection prepared via CQDs induced aggregation process. Applied Surface Science 420:323–330
Ahmed B, Hashmi A, Khan MS, Musarrat J (2018) ROS mediated destruction of cell membrane, growth and biofilms of human bacterial pathogens by stable metallic AgNPs functionalized from bell pepper extract and quercetin. Advanced Powder Technology 29:1601–1616
Ahmed B, Dwivedi S, Abdin MZ, Azam A, Al-Shaeri M, Khan MS, Saquib Q, Al-Khedhairy AA, Musarrat J (2017) Mitochondrial and chromosomal damage induced by oxidative stress in Zn2+ Ions, ZnO-Bulk and ZnO-NPs treated Allium cepa roots. Scientific Reports 7:40685
Saleem S, Ahmed B, Khan MS, Al-Shaeri M, Musarrat J (2017) Inhibition of growth and biofilm formation of clinical bacterial isolates by NiO nanoparticles synthesized from Eucalyptus globulus plants. Microbial Pathogenesis 111:375–387
Ahmed B, Khan MS, Musarrat J (2018) Toxicity assessment of metal oxide nano-pollutants on tomato (Solanum lycopersicon): a study on growth dynamics and plant cell death. Environmental Pollution 240:802–816
Yan J, Fan Z, Sun W, Ning G, Wei T, Zhang Q, Zhang R, Zhi L, Wei F (2012) Advanced asymmetric supercapacitors based on Ni(OH)2/graphene and porous graphene electrodes with high energy density. Adv Funct Mater 22:2632–2641
Wang Q, Ma Y, Jiang X, Yang N, Coffinier Y, Belkhalfa H, Dokhane N, Li M, Boukherroub R, Szunerits S (2016) Electrophoretic deposition of carbon nanofibers/Co(OH)2 nanocomposites: application for non-enzymatic glucose sensing. Electroanalysis 28:119–125
Yang J, Liu H, Martens WN, Frost RL (2010) Synthesis and characterization of cobalt hydroxide, cobalt oxyhydroxide, and cobalt oxide nanodiscs. J Phys Chem C 114:111–119
Hermet P, Gourrier L, Bantignies JL, Ravot D, Michel T, Deabate S, Boulet P, Henn F (2011) Dielectric, magnetic, and phonon properties of nickel hydroxide. Phys Rev B 84:235211
Ma G, Yang M, Li C, Tan H, Deng L, Xie S, Xu F, Wang L, Song Y (2016) Preparation of spinel nickel-cobalt oxide nanowrinkles/reduced graphene oxide hybrid for nonenzymatic glucose detection at physiological level. Electrochimica Acta 220:545–553
Wang L, Tang Y, Wang L, Zhu H, Meng X, Chen Y, Sun Y, Yang XJ, Wan P (2015) Fast conversion of redox couple on Ni(OH)2/C nanocomposite electrode for high-performance nonenzymatic glucose sensor. J Solid State Electrochem 19:851–860
Wang L, Zhang Y, Xie Y, Yu J, Yang H, Miao L, Song Y (2017) Three-dimensional macroporous carbon/hierarchical Co3O4 nanoclusters for nonenzymatic electrochemical glucose sensor. Applied Surface Science 402:47–45
The work was carried out with the financial support from the National Natural Science Foundation of China and the Science and Technology Foundation of China Academy Engineering Physics.
This work was supported by the National Natural Science Foundation of China (51602294), the Science and Technology Foundation of China Academy Engineering Physics (2015B0302047).
The datasets supporting the conclusions of this article are included within the article
Institute of Nuclear Physics and Chemistry (INPC), China Academy of Engineering Physics (CAEP), Mianyang, 621999, People's Republic of China
Hongbo Li
, Ling Zhang
, Yiwu Mao
, Chengwei Wen
& Peng Zhao
HBL initiated the research and built and carried out the experiments under the supervision of PZ. HBL and PZ drafted the manuscript. HBL, LZ, YWM, and CWW contributed to the data analysis. All authors read and approved the manuscript.
Correspondence to Peng Zhao.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Co-Ni hydroxide
Glucose sensor
Non-enzymatic
Duke Mathematical Journal
Duke Math. J.
Volume 162, Number 14 (2013), 2645-2689.
Completions, branched covers, Artin groups, and singularity theory
Daniel Allcock
We study the curvature of metric spaces and branched covers of Riemannian manifolds, with applications in topology and algebraic geometry. Here curvature bounds are expressed in terms of the CAT(χ) inequality. We prove a general CAT(χ) extension theorem, giving sufficient conditions on and near the boundary of a locally CAT(χ) metric space for the completion to be CAT(χ). We use this to prove that a branched cover of a complete Riemannian manifold is locally CAT(χ) if and only if all tangent spaces are CAT(0) and the base has sectional curvature bounded above by χ. We also show that the branched cover is a geodesic space. Using our curvature bound and a local asphericity assumption we give a sufficient condition for the branched cover to be globally CAT(χ) and the complement of the branch locus to be contractible.
We conjecture that the universal branched cover of $\mathbb{C}^n$ over the mirrors of a finite Coxeter group is CAT(0). This is closely related to a conjecture of Charney and Davis, and we combine their work with our machinery to show that our conjecture implies the Arnol'd–Pham–Thom conjecture on K(π,1) spaces for Artin groups. Also conditionally on our conjecture, we prove the asphericity of moduli spaces of amply lattice-polarized K3 surfaces and of the discriminant complements of all the unimodal hypersurface singularities in Arnol'd's hierarchy.
Duke Math. J., Volume 162, Number 14 (2013), 2645-2689.
Received: 2 August 2012
Revised: 1 March 2013
First available in Project Euclid: 6 November 2013
https://projecteuclid.org/euclid.dmj/1383760701
doi:10.1215/00127094-2380977
Primary: 51K10: Synthetic differential geometry
Secondary: 53C23: Global geometric and topological methods (à la Gromov); differential geometric analysis on metric spaces 57N65: Algebraic topology of manifolds 20F36: Braid groups; Artin groups 14B07: Deformations of singularities [See also 14D15, 32S30]
Allcock, Daniel. Completions, branched covers, Artin groups, and singularity theory. Duke Math. J. 162 (2013), no. 14, 2645--2689. doi:10.1215/00127094-2380977. https://projecteuclid.org/euclid.dmj/1383760701
[1] S. B. Alexander and R. L. Bishop, The Hadamard-Cartan theorem in locally convex metric spaces, Enseign. Math. (2) 36 (1990), 309–320.
[2] S. B. Alexander and R. L. Bishop, Comparison theorems for curves of bounded geodesic curvature in metric spaces of curvature bounded above, Differential Geom. Appl. 6 (1996), 67–86.
Digital Object Identifier: doi:10.1016/0926-2245(96)00008-3
[3] D. Allcock, Asphericity of moduli spaces via curvature, J. Differential Geom. 55 (2000), 441–451.
Project Euclid: euclid.jdg/1090341260
[4] D. Allcock, J. A. Carlson, and D. Toledo, The complex hyperbolic geometry of the moduli space of cubic surfaces, J. Algebraic Geom. 11 (2002), 659–724.
[5] V. I. Arnold, S. M. Guseĭn-Zade, and A. N. Varchenko, Singularities of Differentiable Maps, Vol. I: The Classification of Critical Points, Caustics and Wave Fronts, Monogr. Math. 82, Birkhäuser, Boston, 1985.
[6] D. Bessis, Finite complex reflection arrangements are $K(\pi,1)$, preprint, arXiv:math/0610777.
[7] M. R. Bridson, "Geodesics and curvature in metric simplicial complexes" in Group Theory from a Geometrical Viewpoint (Trieste, 1990), World Sci. Publ., River Edge, N.J., 1991, 373–463.
[8] M. R. Bridson and A. Haefliger, Metric Spaces of Non-positive Curvature, Grundlehren Math. Wiss. 319, Springer, Berlin, 1999.
[9] T. Bridgeland, Stability conditions on $K3$ surfaces, Duke Math. J. 141 (2008), 241–291.
[10] E. Brieskorn, Die Fundamentalgruppe des Raumes der regulären Orbits einer endlichen komplexen Spiegelungsgruppe, Invent. Math. 12 (1971), 57–61.
[11] E. Brieskorn, "The unfolding of exceptional singularities" in Leopoldina Symposium: Singularities (Thüringen, 1978), Nova Acta Leopoldina (N.F.) 52 (1981), 65–93.
[12] R. Charney and M. W. Davis, Singular metrics of nonpositive curvature on branched covers of Riemannian manifolds, Amer. J. Math. 115 (1993), 929–1009.
[13] R. Charney and M. W. Davis, The $K(\pi,1)$-problem for hyperplane complements associated to infinite reflection groups, J. Amer. Math. Soc. 8 (1995), 597–627.
[14] J. Damon, Finite determinacy and topological triviality, I, Invent. Math. 62 (1980/81), 299–324.
[15] J. Damon, Finite determinacy and topological triviality, II: Sufficient conditions and topological stability, Compos. Math. 47 (1982), 101–132.
[16] J. Damon, "Topological triviality in versal unfoldings" in Singularities, Part 1 (Arcata, Calif., 1981), Proc. Sympos. Pure Math. 40, Amer. Math. Soc., Providence, 1983, 255–266.
[17] P. Deligne, Les immeubles des groupes de tresses généralisés, Invent. Math. 17 (1972), 273–302.
[18] I. V. Dolgachev, Mirror symmetry for lattice polarized $K3$ surfaces: Algebraic geometry Vol. 4, J. Math. Sci. 81 (1996), 2599–2630.
[19] É. Ghys and A. Haefliger, eds., Sur les groupes hyperboliques d'après Mikhael Gromov (Bern, 1988), Progr. Math. 83, Birkhäuser, Boston, 1990, 215–226.
[20] H. Grauert, Über die Deformation isolierter Singularitäten analytischer Mengen, Invent. Math. 15 (1972), 171–198.
[21] M. Gromov, "Hyperbolic groups" in Essays in Group Theory, Math. Sci. Res. Inst. Publ. 8, Springer, New York, 1987, 75–263.
[22] M. Gross, P. Hacking, and S. Keel, Moduli of surfaces with an anti-canonical cycle of rational curves, preprint, 2010.
[23] M. Inoue, "New surfaces with no meromorphic functions, II" in Complex Analysis and Algebraic Geometry, Iwanami Shoten, Tokyo, 1977, 91–106.
[24] L. Ji, "A tale of two groups: Arithmetic groups and mapping class groups" in Handbook of Teichmüller Theory, Vol. III, IRMA Lect. Math. Theor. Phys. 17, Eur. Math. Soc., Zürich, 2012, 157–295.
[25] A. Kas and M. Schlessinger, On the versal deformation of a complex space with an isolated singularity, Math. Ann. 196 (1972), 23–29.
[26] R. Laza, Deformations of singularities and variation of GIT quotients, Trans. Amer. Math. Soc. 361 (2009), no. 4, 2109–2161.
[27] E. Looijenga, On the semi-universal deformation of a simple-elliptic hypersurface singularity: Unimodularity, Topology 16 (1977), 257–262.
[28] E. Looijenga, On the semi-universal deformation of a simple-elliptic hypersurface singularity, II: The discriminant, Topology 17 (1978), 23–40.
[29] E. Looijenga, "Homogeneous spaces associated to certain semi-universal deformations" in Proceedings of the International Congress of Mathematicians (Helsinki, 1978), Acad. Sci. Fennica, Helsinki, 1980, 529–536.
[30] E. Looijenga, Invariant theory for generalized root systems, Invent. Math. 61 (1980), 1–32.
[31] E. Looijenga, Rational surfaces with an anticanonical cycle, Ann. of Math. (2) 114 (1981), 267–322.
[32] E. Looijenga, The smoothing components of a triangle singularity, II, Math. Ann. 269 (1984), 357–387.
[33] I. Nakamura, Inoue-Hirzebruch surfaces and a duality of hyperbolic unimodular singularities, I, Math. Ann. 252 (1980), 221–235.
[34] T. Nakamura, A note on the $K({\pi},1)$-property of the orbit space of the unitary reflection group $G(m,l,n)$, Sci. Papers College Arts Sci. Univ. Tokyo 33 (1983), 1–6.
[35] V. V. Nikulin, Finite groups of automorphisms of Kählerian surfaces of type $K3$ (in Russian), Uspekhi Mat Nauk 31, no. 2 (1976), 223–224; English translation in Trans. Moscow Math. Soc. 38 (1980), 71–135.
[36] P. Orlik and L. Solomon, Discriminants in the invariant theory of reflection groups, Nagoya Math. J. 109 (1988), 23–45.
[37] D. Panov, Complex surfaces with $\operatorname{CAT} (0)$ metrics, Geom. Funct. Anal. 21 (2011), 1218–1238.
[38] H. Pinkham, Deformations of normal surface singularities with $\mathbb{C} ^{\ast}$ action, Math. Ann. 232 (1978), 65–84.
[39] K. Saito, Einfach-elliptische Singularitäten, Invent. Math. 23 (1974), 289–325.
[40] H. van der Lek, "Extended Artin groups" in Singularities, Part 2 (Arcata, Calif., 1981), Proc. Sympos. Pure Math. 40, Amer. Math. Soc., Providence, 1983, 117–121.
[41] K. Wirthmüller, Universell topologische triviale Deformationen, Ph.D. dissertation, University of Regensburg, Regensburg, Bavaria, 1978.
[42] S. A. Wolpert, Geodesic length functions and the Nielsen problem, J. Differential Geom. 25 (1987), 275–296.
[43] S. A. Wolpert, "Geometry of the Weil-Petersson completion of Teichmüller space" in Surveys in Differential Geometry, Vol. VIII (Boston, 2002), Int. Press, Somerville, Mass., 2003, 357–393.
[44] S. Yamada, On the geometry of Weil-Petersson completion of Teichmüller spaces, Math. Res. Lett. 11 (2004), 327–344.
euclid.dmj/1383760701 | CommonCrawl |
Visualizing nationwide variation in medicare Part D prescribing patterns
Alexander Rosenberg1,2,
Christopher Fucile1,2,
Robert J. White1,5,
Melissa Trayhan1,5,
Samir Farooq1,5,
Caroline M. Quill1,4,5,
Lisa A. Nelson6,
Samuel J. Weisenthal1,5,
Kristen Bush1,5 &
Martin S. Zand (ORCID: orcid.org/0000-0002-7095-8682)1,3,5
BMC Medical Informatics and Decision Making volume 18, Article number: 103 (2018)
To characterize the regional and national variation in prescribing patterns in the Medicare Part D program using dimensional reduction visualization methods.
Using publicly available Medicare Part D claims data, we identified and visualized regional and national provider prescribing profile variation with unsupervised clustering and t-distributed stochastic neighbor embedding (t-SNE) dimensional reduction techniques. Additionally, we examined differences between regionally representative prescribing patterns for major metropolitan areas.
Distributions of prescribing volume and medication diversity were highly skewed among over 800,000 Medicare Part D providers. Medical specialties had characteristic prescribing patterns. Although the number of Medicare providers in each state was highly correlated with the number of Medicare Part D enrollees, some states were enriched for providers with > 10,000 prescription claims annually. Dimension-reduction, hierarchical clustering and t-SNE visualization of drug- or drug-class prescribing patterns revealed that providers cluster strongly based on specialty and sub-specialty, with large regional variations in prescribing patterns. Major metropolitan areas had distinct prescribing patterns that tended to group by major geographical divisions.
This work demonstrates that unsupervised clustering, dimension-reduction and t-SNE visualization can be used to analyze and visualize variation in provider prescribing patterns on a national level across thousands of medications, revealing substantial prescribing variation both between and within specialties, regionally, and between major metropolitan areas. These methods offer an alternative system-wide and pattern-centric view of such data for hypothesis generation, visualization, and pattern identification.
Pharmaceutical spending accounts for 5-25% of total medical care expenditures in Europe, and 16% of all Medicare expenditures in the United States. Variation in prescribing patterns is common, even within groups of providers with a similar scope of practice and patient mix. Prescribing variation may be due to provider preferences, patient case-mix, deviation from practice guidelines, insurance formulary restrictions, and occasionally fraud [1–5]. Understanding patterns of prescribing variation is critical to improving healthcare delivery. Visualizing prescribing variation in ways that accurately reflect underlying data structure can be challenging. Good data visualization can provide a "big picture" of complex data, especially variation and quantitative changes in large and complex data sets [6–8]. In this manuscript, we apply non-linear visualization methods to Medicare Part D provider prescribing data to evaluate patterns at the level of collections of prescriptions, as opposed to a univariate, per-medication approach. This reveals substantial provider variation at the local, regional and national levels, even when controlled for provider specialty and medication volumes.
Prescription claims data capture the volume, diversity and cost of medications prescribed by individual providers. For example, the 2013 Medicare Part D prescribing pattern data set consists of 1,049,381 providers and 3449 prescription drugs [9]. Because the claims are linked to thousands of individual provider treatment decisions, their patterns are an objective measure of how medical care is actually delivered. They quantify a pattern of medical practice within the population a provider treats. Lists of medications and associated claim volumes per provider, termed feature vectors, can be used to cluster providers with similar prescribing patterns. Cluster membership can then be compared to independent data such as geographic location, medical specialty, patient case mix or outcomes. Unsupervised clustering methods are very efficient at classifying data with hundreds or thousands of features, particularly when the gold-standard or ground-truth for cluster membership is unknown (e.g. how providers should be grouped).
Pattern recognition in high-dimensional data, such as large prescribing claims data sets, is difficult. Thus, visualizations that accurately reflect feature variation in high dimensional data are extremely useful for data exploration, inference and decision making [6, 7, 10]. Standard visualization methods for high dimensional data use classical multidimensional scaling [11] or Principal Components Analysis (PCA) [12]. These methods involve linear transformations that project multidimensional data into two or three dimensions, while preserving relative distances between data points. When applied to very high dimensional data, however, PCA and other linear transformation methods often result in dense visualizations that can overwhelm subtle sub-groupings and do little to highlight patterns in the underlying data.
Recently, van der Maaten and colleagues developed t-distributed stochastic neighbor embedding (t-SNE) [13], a non-linear mapping and dimension reduction method that balances cluster display at the local and global levels. This makes t-SNE ideally suited to visualizing medication prescribing pattern variation for very large data sets. t-SNE has been used to improve visualization of patterns in single nucleotide polymorphisms [14], single-cell RNAseq analysis [15], drug synergy interactions [16], prognostic tumor markers [17], and electronic medical record data [18].
Variation of regional prescribing practices has important implications for behavioral, economic, and healthcare outcomes [19, 20]. To our knowledge, there are currently no published analyses that examine and visualize geographic variations in drug co-prescribing patterns, based on collections of medications, at a national level, irrespective of provider specialty. Regional variation in health services delivery has been well described [21–27]. In contrast, little is known about regional patterns of prescription drug utilization beyond focused studies of prescribing patterns for antibiotics [1], chemotherapy [28], cholinesterase therapy [29], psychiatric medications [30], and statin cholesterol lowering agents [31]. In these studies, patterns have been found to reflect the nature and complexity of health status of patient populations [32, 33], patient socioeconomic factors [34–37], provider preferences with self-reinforcing regional influences [38–40], social network influence (i.e. "prescriber contagion") [41], and composition of specialties and Medicare formulary [40].
The focus of this work is twofold. First, t-SNE is used to visualize the prescribing patterns of Medicare Part D providers based on the volumes and types of medication claims, and unsupervised agglomerative clustering is used to validate groupings of providers identified by t-SNE. Second, we identify and visualize regional prescribing pattern differences among Medicare Part D providers across specialties, and variations in the prescribing patterns across medical specialties, states, and geographic regions in the United States. That such variations exist is not surprising. The innovation here is that an entire national healthcare data set with hundreds of thousands of providers, millions of patients, and thousands of drugs, can be visualized in a way that identifies prescribing patterns linked to practitioner specialty and regional variation.
Medicare Part D data
Medicare Part D 2013 provider prescribing data were downloaded directly from the Center for Medicare Services (CMS) [9]. A provider refers to any individual who is licensed to prescribe medications and appears in the data set. The data were packaged as three files: 1) a table of providers and their associated annotations, including their unique national provider identifier (NPI), address, summary statistics on numbers of claims, costs, etc.; 2) a table of drugs and their associated annotations including flags for whether they are narcotics, DEA schedule II or III, or categorized as Beers (medications to avoid in older adults [42]), as well as summary statistics (e.g. numbers of claims, costs, etc.); and 3) a table of NPI, drug (both brand and generic names, which taken together are unique) and the number of claims, duration of prescription, and cost for each provider-drug combination. This third file represents a bipartite graph specifying connections between disjoint sets of nodes (i.e. providers and drugs) that are linked by a corresponding measure (e.g. number of claims). To comply with data privacy requirements, values in the provider-by-drug matrix less than 11 were set to 0 by CMS prior to data release [43]. All formatted data were imported into Matlab R2016a (Mathworks, Natick MA) or Mathematica 11.1 (Wolfram, Champaign IL) for further analysis and visualization.
Feature vector construction
For analysis, a feature vector $\Omega_i = \{\alpha_{i,1}, \alpha_{i,2}, \ldots, \alpha_{i,m}\}$ was created for each provider, where $i$ is the provider index and $\alpha_{i,j}$ is the number of Medicare outpatient prescription claims for drug $j$ attributed to provider $i$. The total number of providers is designated by $n$, and the total number of individual drugs by $m$. A restriction of the data set, implemented by CMS to ensure non-identifiability of Medicare recipients, is that if $\alpha_{i,j} < 11$, then $\alpha_{i,j}$ is set to 0. With this constraint, the summary number of claims associated with a particular provider (or drug) in the CMS data set may not be exactly equivalent to the sum of the provider-by-drug matrix. Thus, while there were 1,049,381 providers and 3449 drugs in the data set, there were only 808,020 providers with ≥ 11 claims for at least one drug. Similarly, there were 2892 drugs with ≥ 11 claims from at least one provider.
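As an illustration, the provider-by-drug claims matrix described above can be assembled in Python roughly as follows. This is a sketch rather than the authors' code (the published analysis used Matlab and Mathematica), and the file name and column headers are assumptions for illustration; the actual CMS public use file uses its own naming.

```python
import pandas as pd
from scipy.sparse import csr_matrix

# Load the provider-by-drug claims file (file 3 in the CMS release).
# Column names here are illustrative assumptions.
claims = pd.read_csv("partd_provider_drug_2013.csv",
                     usecols=["npi", "drug_name", "total_claim_count"])

# CMS suppresses cells with fewer than 11 claims, so surviving cells are >= 11.
claims = claims[claims["total_claim_count"] >= 11]

# Map providers and drugs to matrix row/column indices.
npi_codes = claims["npi"].astype("category")
drug_codes = claims["drug_name"].astype("category")

# Sparse provider-by-drug matrix Omega: rows = providers, columns = drugs,
# entries = number of claims alpha_{i,j}.
omega = csr_matrix(
    (claims["total_claim_count"].to_numpy(),
     (npi_codes.cat.codes, drug_codes.cat.codes)),
    shape=(len(npi_codes.cat.categories), len(drug_codes.cat.categories)),
)

# Express each provider's pattern as a composition (fraction of total claims),
# as done before clustering and embedding.
row_totals = omega.sum(axis=1).A1
omega_frac = omega.multiply(1.0 / row_totals[:, None]).tocsr()
```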
Supporting data sources
Additional file 1: Figure S1 shows a schema of the data sets used for this study, which are all publicly available. The number of Medicare Part D participants by state was obtained from CMS public use files (boxes 1, 2, and 3) [44]. To collapse individual drugs into categories, we used the National Drug File from the Veterans Administration [45], followed by further, minor manual aggregation to result in 198 drug categories (Additional file 1: Figure S1, box 4). For some analyses, we consider providers practicing in 52 metropolitan areas with a population ≥ 1,000,000 by the July 2012 Core-Based Statistical Areas (CBSAs) estimate [46]. We link CBSAs to county and Federal Information Processing Standards (FIPS) codes using a look-up table from the National Bureau of Economic Research (box 8) [47]. We linked providers to their FIPS county codes using a table from the U.S. Department of Housing and Urban Development website (box 5) [48]. Finally, we obtained population estimates of Medicare Part D enrollees by county from the Kaiser Family Foundation website [49], where we consider both Medicare Advantage and the Prescription Drug Plan (box 7) enrollees.
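A minimal sketch of this provider-to-geography linkage chain (NPI → ZIP → FIPS county → CBSA → enrollees) is shown below. All file names and column headers are illustrative assumptions; the real HUD, NBER, and Kaiser Family Foundation crosswalk files use their own layouts.

```python
import pandas as pd

# Illustrative inputs; actual crosswalk files have different headers.
providers = pd.read_csv("partd_providers_2013.csv",
                        usecols=["npi", "provider_zip"])
zip_to_fips = pd.read_csv("hud_zip_county.csv",
                          usecols=["zip", "county_fips"])
fips_to_cbsa = pd.read_csv("nber_cbsa_fips.csv",
                           usecols=["county_fips", "cbsa_code", "cbsa_title"])
enrollees = pd.read_csv("kff_partd_enrollment.csv",
                        usecols=["county_fips", "partd_enrollees"])

# Chain of left joins: provider -> ZIP -> county FIPS -> CBSA -> enrollment.
linked = (providers
          .assign(zip=providers["provider_zip"].astype(str).str[:5])
          .merge(zip_to_fips, on="zip", how="left")
          .merge(fips_to_cbsa, on="county_fips", how="left")
          .merge(enrollees, on="county_fips", how="left"))

# Restrict to the 52 CBSAs with population >= 1,000,000
# (the list of qualifying CBSA codes is assumed to be available).
# linked = linked[linked["cbsa_code"].isin(large_cbsas)]
```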
Visualization, clustering, and statistical methods
Providers with similar prescribing patterns were identified by agglomerative clustering implemented in Wolfram Mathematica. Ward's minimum variance criterion, which minimizes the total within-cluster variance [50], was used to determine cluster membership and number. Clusters were also grouped by provider geographical region, state, and medical specialty. To visualize providers based on their prescribing patterns, we used the fast t-distributed stochastic neighbor embedding (t-SNE) dimension reduction method of van der Maaten and Hinton [13]. Given the size of the data set, with > 10^5 providers, we used the fast Barnes-Hut implementation of t-SNE in Matlab [51] with 50 initial dimensions based on principal component analysis pre-processing to improve computational efficiency. Unlike with clustering methods, there are no accepted standards for selecting t-SNE visualization hyperparameters, although such guidelines have been suggested [13, 51, 52]. We selected hyperparameter values within the range suggested by van der Maaten et al. [13, 51, 52] based on the data set size and parameter numbers, computational efficiency, t-SNE algorithm convergence, and final embeddings that minimized the cost-function. Reproducibility of the t-SNE visualization results was accomplished by fixing the pseudo-random number generator seed parameter.
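This workflow can be approximated in Python as follows. It is a sketch only: the published analysis used the Barnes-Hut t-SNE implementation in Matlab and agglomerative clustering in Mathematica. The hyperparameter values (50 PCA dimensions, perplexity 40, theta/angle 0.5, Ward linkage, a fixed random seed, 605 clusters) are taken from the text; everything else, including variable names, is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from scipy.cluster.hierarchy import linkage, fcluster

# omega_frac: per-provider claim fractions (providers x drugs or drug classes),
# assumed to come from the earlier sketch. For the full ~227k-provider matrix
# this is memory-heavy; a subsample illustrates the same workflow.
X = omega_frac.toarray()

# PCA pre-processing to 50 dimensions, as described in the Methods.
X50 = PCA(n_components=50, random_state=0).fit_transform(X)

# Barnes-Hut t-SNE with the reported hyperparameters; the fixed random_state
# mirrors fixing the pseudo-random seed for reproducibility.
embedding = TSNE(n_components=2, perplexity=40.0, angle=0.5,
                 method="barnes_hut", init="pca",
                 random_state=0).fit_transform(X50)

# Agglomerative clustering with Ward's minimum-variance linkage, applied to
# the (PCA-reduced) prescribing profiles rather than the 2-D embedding.
Z = linkage(X50, method="ward")
cluster_ids = fcluster(Z, t=605, criterion="maxclust")  # 605 clusters, as reported
```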
Sensitivity analysis and dimensional reduction
We performed sensitivity analysis by varying initial PCA dimensions as well as perplexity and selected parameters that both minimized t-SNE cost and resulted in visual clarity of the embedding. For the visualizations used in this manuscript, we used a perplexity of 40 and theta = 0.5. The algorithm performed 300-1500 iterations per run and we selected the result with the minimum t-SNE cost function (error rate) [13]. Dimensional reduction to visualize the CBSA groupings was accomplished using classical multidimensional scaling [11] implemented in Matlab using a CBSA-CBSA distance matrix with one minus correlation as the metric. Comparisons of the differences in provider fractions between geographic regions were performed using the Mann-Whitney U test.
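Classical (Torgerson) multidimensional scaling on a one-minus-correlation distance matrix, as used for the CBSA comparisons, can be sketched as follows. The claims-per-enrollee matrix is a random placeholder here, and the eigendecomposition-based routine is a generic stand-in for the Matlab implementation used by the authors.

```python
import numpy as np

def classical_mds(dist, k=2):
    """Classical (Torgerson) MDS: double-center the squared distances and
    take the top-k eigenvectors scaled by the square roots of the eigenvalues."""
    n = dist.shape[0]
    d2 = dist ** 2
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ d2 @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:k]
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

# Placeholder: 52 CBSAs x 532 drugs, claims per enrollee (computed upstream).
rng = np.random.default_rng(0)
cbsa_profiles = rng.random((52, 532))

# Distance between CBSAs: one minus Pearson correlation of prescribing profiles.
corr = np.corrcoef(cbsa_profiles)                 # 52 x 52
coords = classical_mds(1.0 - corr, k=2)           # first two MDS coordinates
```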
Measures of skewness
We used the bootstrap implementation of the Gini index [53–55] to quantify skewness of the claims distributions. The Gini index was calculated using the formula:
$$G = \frac{\sum_{i=1}^{n} (2i - n - 1)x_{i}}{n^{2} \mu} $$
where $n$ is the number of observations (e.g. providers, drugs), $x_i$ is the $i$th value (e.g. the number of prescription claims, counting only drugs with ≥ 11 claims), $\mu$ is the mean of the $x_i$, and the values are ordered such that $x_i \leq x_{i+1}$. $G$ ranges between 0 and 1: when $G=0$, all providers have the same number of medication claims, while the closer $G$ is to 1, the more skewed the distribution.
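A direct implementation of this formula, with a simple percentile bootstrap standing in for the bootstrap procedure cited above, might look like the following (illustrative only).

```python
import numpy as np

def gini(x):
    """Gini index following G = sum((2i - n - 1) * x_i) / (n^2 * mu),
    with x sorted in ascending order."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * x) / (n ** 2 * x.mean())

def gini_bootstrap(x, n_boot=1000, seed=0):
    """Point estimate plus a 95% percentile bootstrap interval."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    stats = [gini(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)]
    return gini(x), np.percentile(stats, [2.5, 97.5])

# Toy example: a heavy right tail of high-volume prescribers pushes G toward 1.
claims_per_provider = np.concatenate([np.full(900, 50), np.full(100, 5000)])
print(gini(claims_per_provider))
```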
Volume and diversity of Medicare prescriptions
As a prelude to dimension reduction and visualization, we first examined the overall univariate statistical distributions of prescribing volume and diversity among medication classes and providers (Fig. 1). This step allowed us to assess the utility of dimension reduction visualization methods, which would be best suited to data with high variation and skewed distributions of medication volumes and prescribing diversity.
Overall features of 2013 Medicare Part D prescribing patterns data set. a. Distribution of percentage of providers prescribing each of 2892 unique drugs, sorted by percentage of providers prescribing. b. Same as A except for 197 unique drug classes. c. Distribution of number of claims for each of 2892 unique drugs, sorted by number of claims. Note that the unique drug order is not necessarily the same as in a. d. Same as b except for 197 unique drug classes. e. Distribution of drug prescription diversity across all providers sorted by number of unique claims. Numbers of providers prescribing more than 100 and 300 unique drugs are annotated on plot. f. Distribution of number of claims across all providers sorted by claims per provider. Number of providers making more than 10,000 and 25,000 claims are annotated on plot. G = Gini index
We found that a small fraction of the unique Medicare Part D outpatient medications were prescribed by > 5% of providers (Figs. 1a and 1c). Only 165 unique drugs (5.7%) were prescribed by ≥ 5% of providers (Fig. 1a). Similarly, only 197 unique drugs (6.8%) had more than one million claims across all providers (Fig. 1c). To reduce the effect of formulary and brand name versus generic medication restrictions, we mapped unique drugs onto 197 categories (Fig. 1b and 1d). Distribution skewness was assessed by the Gini index (G), which has the property of G=0 if all providers prescribed the same number of medications or all drug types were prescribed at the same volume, and approaches G=1 with increasing skew of the distribution [53]. Drug class distributions were less skewed; for all drugs G=0.759 versus for classes G=0.301, with 72 drug classes (36.5%) prescribed by ≥ 5% of the providers, and 83 classes (42.1%) surpassing one million claims across all providers.
We next examined provider prescription diversity, defined as the number of different drugs prescribed by each provider (Fig. 1e). The majority (70.3%) of providers prescribe ≤ 25 unique drugs reimbursed by Medicare (Fig. 1f), with 71,506 providers prescribing ≥ 100, and 631 providers ≥ 300 unique drugs. We hypothesized that high volume prescribers were more likely to be general practitioners (i.e. general medicine, internal medicine, family medicine). There were 2062 high-volume prescribing providers (HV) with ≥ 25,000 claims, utilizing 1954 of the 2892 available drugs. This group of 0.2% of providers was responsible for 3.59% of Medicare Part D drug costs in 2013. Compared with the standard volume prescribing providers (SV; n=805,958), this small subset of HV (n=2062) was heavily skewed towards general practice (p< 0.001): 89% of HV providers were categorized as either internal medicine, family medicine or general practice (SV = 25.8%), and 3% were geriatric medicine (SV = 0.2%).
We further examined the differences in the patient populations cared for, and in Medicare costs, between the low volume (≥ 1000 prescriptions) and high volume (≥ 25,000 prescriptions) prescribers (Additional file 2: Table S2). High volume prescribers had higher numbers of unique beneficiaries and Medicare payments per provider (p< 0.0001). They also had higher percentages of Hispanic and Asian Pacific Islander beneficiaries (p< 0.0001), of beneficiaries with both Medicare and Medicaid entitlement reimbursement (p< 0.0001), and of beneficiaries with dementia (p< 0.0001), chronic kidney disease (p< 0.0001), diabetes (p< 0.0001), heart failure (p< 0.0001), ischemic heart disease (p< 0.0001) and rheumatoid arthritis (p< 0.0001). Thus, high volume providers appeared to have Medicare patient panels skewed towards chronic conditions, many of which require multiple medications for ongoing treatment.
Regional prescribing volumes and drug diversity
Prescribing volumes may be related to population density, and we thus examined the degree to which they correlated with the regional distribution of Medicare Part D prescription benefit enrollees. We therefore examined the relationship between prescribing volumes (overall versus HV providers), density of Medicare Part D enrollees, and prescription volumes. The number of Medicare Part D providers in each state was highly correlated with the corresponding number of Medicare Part D enrollees (Fig. 2a, R² = 0.950), but not (R² = 0.697) for providers with > 25,000 claims. There were substantial deviations for several states. For Florida and New York, these deviations may be due to differences in the ratios of providers to enrollees, such that Medicare drug prescribing was more/less concentrated among those providers. In contrast, several states with a proportional number of providers and enrollees had more high-claims providers (e.g. Georgia).
Distribution of Medicare Part D providers across states. a. Share of providers by state (as a percentage of the total number of providers) plotted against share of Medicare Part D enrollees by state (as a percentage of the total number of enrollees nationwide) are shown by black circles and fit to a line (gray dashed line); green line is slope of one. A similar plot based on a data subset of high-claims providers (> 25,000 claims resulting in 2062 providers) is shown superimposed as open triangles colored by their relation to the corresponding data from the full data set. Some states are annotated. b. Comparison of the provider composition by state for the full data set (left) and the high-claims data set (right). Ribbons connecting the two join corresponding states
Figure 2b compares the ranking of all providers versus high-claims providers, with ribbons joining corresponding states. In contrast to the relatively similar ratios of Medicare providers per enrollee across states, the distribution of high-prescribing providers varies regionally (Additional file 3: Table S1). In general, high volume providers also had high prescribing diversity (Additional file 4: Figure S3). This distribution can be used to identify outliers in terms of prescribing diversity and volume. For example, only 10 Medicare providers accounted for approximately 12% of all 2013 Medicare Part D zoster vaccine claims, each with ≥ 10,000 claims accounting for over $30 million in claims. Such univariate outlier analyses are increasingly used to screen for activity defined as inappropriate or fraudulant (e.g. excessive opioid prescribing, prescription fraud). In this case, the data did not contain sufficient information to discriminate between potential explanations (e.g. fraud, contractual agreements with outpatient pharmacy chains, medical directorship of a large nursing home or eldercare facility).
High dimensional provider prescribing patterns highly correlate with provider specialty
While univariate prescribing volumes and diversity measurements are useful for describing aggregate patterns, they do not provide information about how closely related entire prescribing patterns are between individual providers. Specifically, we were most interested in how well PCA visualization performed against t-SNE with respect to visual clarity and the ability to visualize different clusters of providers by prescribing pattern and specialty. PCA uses an orthogonal transformation to map a data set of potentially correlated variables into a new set of linearly uncorrelated variables (principal components). It is often used to visualize the relationship between high dimensional data elements and highlight the axes of greatest variation. In contrast, t-SNE maps data onto a non-linear projection designed to highlight differences between high dimensional feature distributions. t-SNE has an advantage over PCA for visualizing prescribing data because the embedding is not biased by the skewed distribution of a few features, and t-SNE can reveal more subtleties in the differences between provider groups [13]. Thus, we hypothesized that t-SNE would allow greater visualization and discrimination between clusters of providers with different prescribing patterns.
Figure 3 shows the projection of provider densities resulting from t-SNE and PCA applied to providers with ≥ 1000 claims (n=227,573) and using a feature vector of corresponding drugs (n=2791; Fig. 3a) or drug classes (n=195; Fig. 3b), where claim volumes in Ωi were initially normalized by total claims per provider. Note the areas of very high density within the PCA projection obscuring finer variations in prescribing patterns. In contrast, t-SNE projections contain numerous spatially resolved groupings with fine detail visible, as well as one dominant grouping of Internal, Family, Geriatric, and General Medicine providers with areas of higher density reflecting subgroupings of providers with similar prescribing patterns.
Low-dimension embedding of providers using t-SNE and PCA. 2-D density plots in low dimensional space created using t-SNE (upper) or PCA (lower) of 227,573 Medicare Part D providers, each with ≥ 1000 prescription claims in 2013 organized by a the 227,573×2791 drug claims matrix or b the 227,573×195 drug class claims matrix. Number-of-claims data per provider by drug or drug class is scaled by the total claims per provider to express the prescribing pattern as a composition prior to t-SNE
The t-SNE groupings are highly correlated with provider specialty and subspecialty (Fig. 4). These plots, based on the provider-by-drug matrix and cross-referenced with provider specialty from the National Plan and Provider Enumeration System (NPPES) database, highlight that some specialties have single dominant clusters (e.g. Dermatology, Endocrinology, Nephrology) whereas others have multiple clusters or sub-clusters that reflect groupings of sub-specialty practice within a specialty (e.g. Gastroenterology, Urology). Furthermore, when compared to PCA, t-SNE clearly provides better visual resolution of related medical specialties and sub-specialties within the projection (e.g. Cardiology and Cardiac Electrophysiology).
Array of t-SNE plots each highlighting providers of a specific specialty. Each 2-D density plot (grey) is the same as shown in Additional file 5: Figure S4A, and represents the set of 227,573 Medicare Part D providers ×2791 drug claims. Included providers had ≥ 1000 prescription claims in 2013. The plot is a heatmap, with densities representing increased numbers of providers. Provider specialties are shown in red to emphasize their collocation by prescribing pattern, and are labeled by NPPES self-reported specialty designation. Note the separation of provider clusters, even to the extent that subspecialties (annotated in blue) are distinguishable within the specialty cluster (e.g. Cardiology and Cardiac Electrophysiology).
Visualizing details of provider prescribing patterns
We next used t-SNE to visualize prescribing diversity across many different provider cluster regions (Fig. 5) using the full provider-drug matrix. Ten random providers were chosen from 20 regions of the low-dimensional t-SNE visualization (Fig. 5, labeled A-T), which mapped to 47 different agglomerative clusters. Location within the embedding clearly maps to different prescribing patterns. For example, regions E and P both are dominated by Urology (see Fig. 4), but E is characterized by large proportions of claims for tamsulosin and finasteride, whereas P is mainly tamsulosin. Cluster L is largely Ophthalmologists, consistent with high proportions of latanoprost and, to a lesser extent, timolol maleate, Lumigan (bimatoprost), Alphagan (brimonidine tartrate) and similar drugs. Area K is enriched for Allergists who prescribe high proportions of fluticasone propionate and montelukast sodium. Cluster N is enriched for providers with a high incidence of opioid analgesic prescriptions.
Representative prescribing patterns corresponding to different regions of t-SNE plot. Left: t-SNE plot as shown in Additional file 5: Figure S4A with 20 different regions labeled as A through T. Right: Heat map showing prescribing patterns. Columns are individual providers, 10 randomly selected from each of the 20 regions. Each row represents a drug. The drugs shown are the union of the top eight most frequently prescribed in each region. Increasing gray density corresponds to the percent of claims for a particular drug made by a provider relative to their total claims, with white denoting no claims. Prescribing volume (total claims) and diversity (number of unique drugs prescribed) are shown above the heat map as bar graphs. Note region N, which is enriched for providers with a high volume of opioid analgesic claims
The t-SNE visualizations reveal prescribing patterns likely associated with treating different patient populations, even within the same specialty. For example, groups G and S are dominated by Neurologists, but with substantially different prescribing patterns. Providers in cluster S prescribe large amounts of Parkinson's disease medications (i.e. carbidopa-levodopa, ropinirole, amantadine, Azilect), whereas those in cluster G are biased towards medications used to treat epilepsy and Alzheimer's disease (i.e. levetiracetam, lamotrigine, lacosamide, topiramate, Namenda and donepezil). In other cases, regional variation may strongly influence prescribing patterns. For example, cluster A is dominated by providers from Puerto Rico. These results demonstrate the utility of using t-SNE to visualize variation of prescribing patterns that highly correlate with formal provider clusters.
Visualizing prescribing volume and medication distribution patterns
t-SNE plots can also be annotated by the prescribing proportions for individual drugs (Fig. 6). Here, for eight drugs typically prescribed for cardiovascular-related conditions, the percentage of claims for individual providers relative to their total number of claims are coded by color. Note that these are visible as high proportions within the region corresponding to Cardiology (see Fig. 4). Even within the Cardiology region, high prescription rates of these drugs are associated with different provider groupings (see for example, atorvastatin, clopidogrel, and warfarin). These groupings may reflect differences in provider scope of practice, patient populations, Medicare formularies, or provider prescribing preferences.
Array of t-SNE plots of providers annotated for fraction of claims for each of eight heart/circulation related drugs. The t-SNE plots were created from the set of 227,573 Medicare Part D providers ×2791 drug claims. Included providers had ≥ 1000 prescription claims in 2013. The color for each provider corresponds to the percentage of claims for the indicated drug relative to the provider's total claims. Gray is 0%, the maximum scale (red) is 15% of total claims. Note the high volume of prescriptions within both the cardiology and internal medicine areas
In a similar fashion, the dimension-reduced space can be annotated by claim volume as shown in Additional file 5: Figure S4. In this figure, each point is color coded by claim volume. There is a slight gradient of claim volume in the large, central General Medicine/Internal Medicine/Family Practice region with several small densities of extremely high prescribing volume providers (e.g. ≥ 10,000 claims). Claim volume also correlated with drug diversity (see Additional file 4: Figure S3), so volume will be somewhat conflated with prescribing pattern and will affect position in the low-dimensional embedding. However, plots highlighting single drugs suggest that the variation across the large t-SNE region correlates well with the prescribing patterns of individual providers (Fig. 6). Finally, it is important to recognize that such visualizations allow comparison of high-dimensional co-prescribing variation across thousands of individual provider patterns, in contrast to bar graphs showing the top 10 medications proportionally prescribed within a self-identified specialty class (see Additional file 6: Figure S6).
Figure 7 shows the specialist-annotated embeddings based on medication class (see Fig. 3b). As with the embeddings based on individual medications, specialists are enriched in the smaller clusters surrounding the main cluster. Figure 8 shows this embedding annotated for prescription proportion of six cardiology-related drug classes (similar to Fig. 6). Even when considering classes instead of individual drugs, which eliminates clustering differences due to separately considering different formulations of the same drug (i.e. generic and brand name), there are clearly large variations in prescription patterns within the cardiology cluster (see for example, anticoagulants, calcium channel blockers, and platelet aggregation inhibitors).
Array of t-SNE plots each highlighting providers of a specific specialty. These t-SNE plots are derived from the dataset of 227,573 Medicare Part D providers ×195 drug classes. Even with dimension reduction from 2791 individual medications to 195 medication classes, t-SNE plots produced clear groupings of specialties and subspecialties. This plot removes potential bias introduced by prescribing of generic versus brand name medications, and thus is a better representation of prescribing variation across specialties due to patient populations and practice patterns
Array of t-SNE plots of providers annotated for fractions of claims for each of six cardiac drug classes. The t-SNE plot layout was generated using the dataset of 227,573 Medicare Part D providers ×195 drug classes. The 195 drug classes include all medications (generic and brand name) collapsed into the indicated class. The color for each provider corresponds to the percentage of claims for the indicated drug relative to the provider's total claims. Gray is 0%, the maximum scale (red) is 15%. This dimension reduction and visual representation eliminates differences due to formulary, or generic versus brand name medication prescribing patterns. Note, for example, the high percentage (red areas) of beta blockers prescribed in cardiology and nephrology (oral preparations) and ophthalmology (eye drops)
Hierarchical clustering of provider prescribing patterns
To more rigorously identify provider subspecialty association within t-SNE heatmap regions, we performed unsupervised hierarchical cluster analysis. We identified 605 provider clusters using agglomerative clustering with Ward's minimum intercluster variance linkage minimization (Additional file 7: Figure S5, and Fig. 9a). The dominant provider subspecialty classification within a cluster, taken from the NPPES data, was used to map each of the 605 sub-clusters to provider sub-specialties. Ninety-one percent of the clusters had one provider specialty accounting for ≥ 30% of the providers (Fig. 9b). Of those clusters with ≥ 2 specialties (n=595), 34.5% of the second most frequent specialties were either nurse practitioner or physician assistant, which are roles rather than disease-based specialties. Inclusion within these clusters suggested practice scope within the dominant specialty. When mapped to US Federal Regions (Fig. 9c), clusters also reflected regional variation in prescribing patterns. For example, within the t-SNE projection, we highlighted sub-clusters of providers identified as Family Medicine and then divided by Federal Region. This combination of clustering and t-SNE visualization made visible large regional variations in medication prescribing volumes and patterns within Family Practice.
Unsupervised hierarchical clustering by drug class. Provider clusters obtained by hierarchical agglomerative clustering using a Euclidean distance measure and centroid criteria. a) Cumulative distribution of provider size over 605 clusters. b) Provider specialties within each cluster were tallied and the number of providers in the dominant specialty plotted against cluster size. The lines indicate where 100% (red) or 30% (gray) of providers in the cluster are the same medical specialty. c) t-SNE visualization of provider prescribing pattern variation for Family Medicine providers by United States Federal Region. Each plot represents a 2D density histogram
Regional variation in prescribing patterns
Given the variation in regional prescribing patterns observed within the Internal Medicine-Family Practice-General Medicine cluster, we hypothesized that such variation was present across all Medicare Part D program providers. To test this hypothesis, we next performed an in-depth characterization of regional differences in prescribing patterns over all sub-specialties by census region (Additional file 8: Figure S2).
Figure 10 shows how the prescribing patterns of providers with ≥ 1000 Medicare Part D claims are clustered within each census region, as compared to a non-overlapping random sample from the entire data set. For these visualizations, we used heat maps of provider density within the t-SNE embedding. This type of visualization accounts for equivalent sample sizes, but not variation in the proportion of Medicare Part D provider types (e.g. Family Practice versus Nephrology) between the random and regional samples. For example, the East North Central region has a much higher percentage of Neurologists compared with the East South Central region. Differences in provider and population density, and thus prescribing patterns and volumes, may also contribute to regional variations in Medicare Part D prescription costs. The utility of the t-SNE visualization can be seen by comparison with traditional univariate bar graphs (see Additional file 6: Figure S6), which only show differences in the univariate prescribing percentages for single medications and provide no information about variation of co-prescribing patterns among individual providers.
Distribution of provider prescribing patterns by census region. Providers with ≥ 1000 claims (n=227,573) were divided into subsets by census region (lower figures within regional pairs). For comparison, a random sample of equivalent size was taken from the entire data set such that the providers in each random sub-sample did not overlap with any of the others (upper figures). This allows visual comparison of regional provider distributions with a random national sample of equivalent size
Urban prescribing pattern variation
The results from dimensional reduction visualization with t-SNE were again hypothesis generating, and suggested that regional prescribing patterns could be due to urban location, variation in income, or population density. To further explore regional variations in prescribing patterns, while diminishing the impact of these variables, we selected 52 metropolitan areas (core-based statistical areas, CBSA) with populations greater than one million (Additional file 9: Figure S7). Among the large metropolitan areas, there were large regional differences in terms of proportion of Medicare Part D enrollees of the total population, as shown in Additional file 9: Figure S7, ranging from 4.6% (Washington DC) to just under 15.7% (Pittsburgh). These results were not statistically correlated to overall population of the respective CBSAs.
Dimension-reduction with t-SNE visualizations also revealed regional variation in prescribing patterns across CBSAs. To characterize prescribing profiles within CBSAs, we selected 532 drugs with over 100,000 claims for all states. A 52 CBSA by 532 drug number-of-claims matrix was computed and each row was divided by the number of Medicare Part D enrollees in the corresponding CBSA, expressing the normalized data as drug claims per enrollee. Figure 11a shows the first two coordinates of the resulting multidimensional scaling based on pairwise CBSA-CBSA distances $d_{i,j} = 1 - r_{i,j}$, where $r_{i,j}$ is the Pearson product-moment correlation coefficient for the CBSA pair $i$ and $j$ feature vectors. The red dots near the center of the plot are the result of multi-dimensional scaling following random permutation of the CBSA provider memberships (preserving the relative numbers of providers per CBSA) used as a reference against which to interpret the dispersion of the real data. Although the data do not segregate into distinct clusters in this dimension, there are apparent regional variations, notably, that most of the southern CBSAs appear on the left half of the plot, reflecting similar regional prescribing profiles within the southern CBSAs.
Variation of prescribing pattern by core-based statistical areas. a. Multidimensional scaling (MDS) of 52 CBSAs based on 532 drugs that have over 100,000 claims (across 50 states and Washington DC). Data were expressed as number of claims for a particular drug in a particular CBSA per number of enrollees in that CBSA. CBSAs are specified by IATA airport code. Magenta dots indicate MDS performed on a randomly permuted data sets where the data corresponding to the CBSA providers were shuffled, preserving the number of providers for each CBSA. b. Comparison of two CBSAs of similar sizes: Oklahoma City OK vs. Rochester NY. Dots represent individual drugs and axes are the number of claims per enrollee in log scale (for the respective CBSAs). Dashed lines indicate 5-fold differences in the per-enrollee numbers of claims. Drugs beyond these regions are indicated. c. Comparison of Houston TX and Dallas-Fort Worth Texas CBSAs that might be expected to have similar profiles as an internal control. d. MDS plot of 52 CBSAs based on 198 drug categories, similar to part A. e. Comparison of prescribing patterns in Boston MA and Miami FL based on drug categories. f. Houston TX vs. Dallas-Fort Worth TX based on drug categories
Further visualizations highlight the substantial variation in provider prescribing patterns between CBSAs. Figure 11b shows an example of claims-per-enrollee of the 532 drugs for two geographically distant but similarly sized CBSAs: Rochester, NY (ROC) and Oklahoma City, OK (OKC). Although their populations are similar, they have different median household incomes and percent Medicare Part D enrollees (see Additional file 5: Figure S4): $43,955 and 14.1%, respectively for ROC, and $36,797 and 7.8% for OKC. The dashed lines represent 5-fold differences in claims-per-enrollee for specific drugs, with those outside the range annotated. The selected CBSAs are annotated in t-SNE density plots shown in Additional file 10: Figure S8A. For comparison, Fig. 11c shows another pairwise visual comparison between two geographically proximate and similarly sized CBSAs: Dallas-Fort Worth, TX (DFW) and Houston, TX (IAH). If prescribing patterns reflect regional prescribing homophily or state-specific Medicare Part D approved medication formularies, such pairs would be expected to have similar prescribing profiles and could be considered an internal control. In this example, the claims per enrollee are more similar between the two CBSAs. The median household incomes and percent enrolled are $47,418 and 6.6% for Dallas Fort Worth (DFW), and $44,714 and 6.3% for Houston (IAH). These results provide further support for the hypothesis that regional variation in prescribing patterns increases with geographic distance.
t-SNE identifies regional variation in prescribing patterns
Medicare formulary composition varies by state and region. Such variation may lead to prescribing pattern differences between providers based on drug formulations, rather than the use of similar drugs of the same class. To control for this effect, we next examined the results obtained by dimension reduction and visualization with t-SNE based on drug classes, rather than individual medications. Figures 11d-f show results based on profiles of 195 drug categories, which still show substantial differences in prescribing profiles between CBSAs. Figure 11e compares the Boston, MA (BOS) and Miami, FL (MIA) CBSAs (also see t-SNE plots in Additional file 10: Figure S8B), with 5- to 10-fold differences in the claims-per-enrollee for some categories. While these are similarly sized metropolitan areas, there are almost twice as many enrollees per provider in MIA than in BOS (see Additional file 9: Figure S7 and Fig. 2). As an example, "Amphetamines and Amphetamine-Like Stimulants" generate almost 6-fold more claims per 1000 Boston enrollees as compared to claims per 1000 Miami enrollees (126.4 vs. 21.7). In contrast, "Genito-Urinary Agents, Other" generate almost 10-fold more claims per 1000 enrollees in MIA as compared to BOS (28.9 vs. 2.9). Figure 11f shows that the Dallas-Fort Worth vs. Houston profiles are substantially more similar, with the largest differences for rarely prescribed drug categories.
One possible cause for regional variation in prescribing patterns could be differences in disease prevalence between regions. We used Medicare data on the prevalence of 13 conditions (see Additional file 11: Figure S9 for a detailed list and explanation) to construct a disease-prevalence feature vector $D=\{\delta_1,\delta_2,\ldots,\delta_n\}$ for Medicare providers with ≥ 1000 Medicare prescriptions in 2013 (n=207,158) and complete data, grouped by state (50 US states, the District of Columbia, and Puerto Rico). We then calculated the mean prescribing-pattern feature vector and the mean patient-specific disease prevalence values for each state's providers. To test whether the multi-dimensional drug prescribing pattern differences were correlated with multi-dimensional disease prevalence, we calculated the n-dimensional Euclidean distance between each pair of states for both the prescribing-pattern and the disease-prevalence feature vectors. States with similar Medicare prescribing patterns should have small multi-dimensional Euclidean feature distances, while those that differ would have large feature distances. A similar relationship would exist for n-dimensional feature distances calculated using the disease prevalence feature vector; pairs of states with similar disease prevalence would have small n-dimensional Euclidean feature distances. We found the correlation between disease prevalence distances and prescribing pattern distances to be R² = 0.22185, indicating that variation in multi-dimensional prescribing patterns between states cannot be explained simply by variance in multi-dimensional disease prevalence.
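This distance-matrix comparison can be sketched in Python as follows. The state-level feature matrices are random placeholders, and treating the comparison as a Pearson correlation over the two sets of pairwise distances is an assumption based on the description above rather than the authors' exact procedure.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

# Placeholders: per-state mean prescribing-pattern vectors (52 x 195)
# and per-state mean disease-prevalence vectors (52 x 13), computed upstream.
rng = np.random.default_rng(0)
state_rx = rng.random((52, 195))
state_dz = rng.random((52, 13))

# Pairwise Euclidean distances between states in each feature space
# (condensed upper-triangle form).
d_rx = pdist(state_rx, metric="euclidean")
d_dz = pdist(state_dz, metric="euclidean")

# Correlate the two sets of pairwise distances; R^2 indicates how much of the
# prescribing-pattern distance structure tracks disease-prevalence distances.
r, _ = pearsonr(d_rx, d_dz)
print("R^2 =", r ** 2)
```

Note that because pairwise distances are not independent observations, a permutation (Mantel-type) test would be needed to attach a formal p-value to this correlation.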
Our results demonstrate that t-SNE dimensional reduction can be used to visualize prescribing pattern variation in very large administrative data sets, and reveal patterns not otherwise apparent.
Previously, a number of focused studies have examined prescription diversity, mostly with respect to opioid analgesics [56–62], antibiotics [1, 63–67], psychiatric medications [68–71], and among general practitioners [37, 72–76]. One web site has made the Medicare Part D prescribing data searchable with various filters for provider, charges, and medications [77–79]. As far as we are aware, however, this is the first high level, aggregate analysis of provider prescribing diversity and patterns on a national scale (40 million patients and over 800,000 providers) across multiple specialties, medication classes and practitioner types. This type of analysis may be used as a starting point for future work comparing national prescribing patterns, especially in countries where regional formulary composition is centrally tracked. Thus, this multivariate approach has value in establishing an atlas of prescription pattern diversity, and can be a means for deeper, more targeted queries about groupings or sub-groupings of providers.
Provider prescribing volume and diversity patterns could be a powerful proxy for organizing how practitioners actually provide care, as opposed to self- or board-identified medical specialty. For example, providers with a "mixed practice" (e.g. adult internal medicine and endocrinology) will have prescribing patterns that differ from those practicing solely within one specialty. There are currently no data sets, survey results or accepted methods to identify such mixed-practice providers. Thus, our results are hypothesis generating and suggest that such practice mixes can be identified by unsupervised clustering of prescribing patterns, and visualized with t-SNE. Further work will need to be done to test this hypothesis, and could involve comparing survey data about self-identified practice mix with prescribing patterns. The current study provides the motivating hypothesis and groundwork for such investigations.
Additionally, our approach enhances hypothesis generation and testing regarding root causes of prescribing variation. For example, correlating provider clusters with clinical outcomes data may improve comparative effectiveness studies of prescribing patterns for specific diagnoses (e.g. effect of anti-hypertensive regimens with and without diuretics on blood pressure control and mortality) [80]. Similar approaches have recently been used to conduct "virtual clinical trials", replicating the results of randomized prospective clinical trials [81, 82], but lack a visualization component. Our results demonstrate that these methods can be used to identify and visualize complex, multi-dimensional, prescribing behaviors of interest (e.g. opioid prescribing) in geographically comprehensive data sets. In the future, studies coupling prescribing patterns, patient outcomes, and genomic data may aid in identification of genotype-phenotype associations and facilitate precision targeting of effective therapies to specific individual genotypes [83].
Our analysis and t-SNE visualizations also highlight prescribing variation in groups of metropolitan providers with similar Medicare claims patterns. These findings complement reports showing considerable geographic variation in both claims volume [84] and cost [4] across the United States. Potential contributing factors to such variation [35, 85–87] include suboptimal care or health services delivery inefficiencies [88, 89] and regional differences in prescriptions for branded drugs compared to generic counterparts [90–93]. The analysis of metropolitan areas, adjusted for population density, revealed considerable residual variation in prescribing patterns, with up to ten-fold variations for both individual drugs and drug classes.
Further work, incorporating more detailed data (e.g. regional Medicare formularies, provider-health system associations), is needed to determine the factors associated with such variation. Interestingly, we found that prescribing pattern differences increase with geographic distance. However, our results showed only modest correlation between n-dimensional prescribing patterns and n-dimensional disease prevalence among states. Regional prescribing patterns may be shaped by local factors (e.g. economic, social, state-specific Medicare formularies, local and regional provider practice patterns). Further work will need to be done to better elucidate sources of such regional variation. Nevertheless, these findings are a significant advance over single-specialty or disease-based variation studies, providing a method to compare comprehensive medication co-prescribing patterns.
Several caveats apply to this analysis. First, we recognize that most Medicare providers have a patient population with a mix of prescription plans, and our results may not be applicable beyond the Medicare population demographics [94]. For example, only 15.5% of Medicare Part D enrollees were ≤ 65 years of age. Thus, the prescribing profiles and provider cluster memberships described here cannot be generalized to younger individuals. Approximately 50% of individuals enrolled in Medicare Part D also have private or supplemental insurance for medication coverage, and prescription claims captured by Medicare Part D may differ from overall claims. This bias is somewhat mitigated by our selection of 227,000 providers with ≥ 1000 claims. Unfortunately, there is currently no available data set for the United States integrating the medication formularies of all the Medicare plans. Thus, we are unable to judge to what extent prescribing variation is dependent on Medicare Part D plan formulary differences. Future work might explore these issues with more comprehensive US data sets, or data sets from countries with national healthcare systems where formulary information is available.
In conclusion, we have presented a pattern-based approach for visualizing prescribing variation in a national administrative data set. The analysis highlighted regional variations in prescribing practices in the United States Medicare Part D program and captured this diversity based on overall prescribing patterns as opposed to single medications. The use of the t-SNE visualization algorithm enhances the analysis and visualization of variation in high-dimensional co-prescribing data, and can be used as a hypothesis generating method.
CMS:
Center for Medicare Services
CBSA:
Core-based statistical areas
FIPS:
Federal information processing standards
HV:
High-volume prescribing providers
NPPES:
National plan and provider enumeration system
NPI:
National provider identifier
PCA:
Principal components analysis
SV:
Standard-volume prescribing providers
t-SNE:
t-distributed stochastic neighbor embedding. United States abbreviations for states can be found in Additional file 3: Table S1. Abbreviations for the 52 core-based statistical areas used in this analysis can be found in Additional file 9: Figure S7.
Zhang Y, Steinman MA, Kaplan CM. Geographic variation in outpatient antibiotic prescribing among older adults. Arch Intern Med. 2012; 172(19):1465–71. https://doi.org/10.1001/archinternmed.2012.3717.
Zhang Y, Baicker K, Newhouse JP. Geographic variation in medicare drug spending. N Engl J Med. 2010; 363(5):405–9. https://doi.org/10.1056/NEJMp1004872.
Stuart B, Shoemaker JS, Dai M, Davidoff AJ. Regions with higher medicare part d spending show better drug adherence, but not lower medicare costs for two diseases. Health Aff. 2013; 32(1):120–6.
Donohue JM, Morden N, Gellad WF, Bynum JP, Zhou W, Hanlon JT, Skinner J. Sources of regional variation in medicare part d drug spending. N Engl J Med. 2012; 366(6):530–8.
Chen JH, Humphreys K, Shah NH, Lembke A. Distribution of opioids by different types of medicare prescribers. JAMA Intern Med. 2016; 176(2):259–61.
Chabris CF, Kosslyn SM. In: Tergan S-O, Keller T, (eds).Representational Correspondence as a Basic Principle of Diagram Design. Berlin, Heidelberg: Springer; 2005, pp. 36–57. https://doi.org/10.1007/11510154_.
Tufte ER. Visual Explanations: Images and Quantities, Evidence and Narrative vol. 36. Cheshire: Graphics Press; 1997.
Few S. Data visualization for human perception In: Soegaard M, Dam RF, editors. The Encycolpedia of Human-Computer Interaction, 2nd Ed. Aarhus: Interaction Design Foundation: 2013.
Center for Medicare Medicaid Services. Part D Prescriber Data CY 2013. 2016. http://download.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/Medicare-Provider-Charge-Data/Downloads/PartD_Prescriber_PUF_NPI_DRUG_13.zip.
Lavrač N, Bohanec M, Pur A, Cestnik B, Debeljak M, Kobler A. Data mining and visualization for decision support and modeling of public health-care resources. J Biomed Inform. 2007; 40(4):438–47.
Zand MS, Wang J, Hilchey S. Graphical representation of proximity measures for multidimensional data: Classical and metric multidimensional scaling. Math J. 2015; 17(7):1–31. https://doi.org/10.3888/tmj.17-7.
Hotelling H. Analysis of a complex of statistical variables into principal components. J Educ Psychol. 1933; 24(6):417.
Maaten L. v. d., Hinton G. Visualizing data using t-sne. J Mach Learn Res. 2008; 9(Nov):2579–605.
Platzer A. Visualization of snps with t-sne. PLoS ONE. 2013; 8(2):56883. https://doi.org/10.1371/journal.pone.0056883.
Andrews TS, Hemberg M. Identifying cell populations with scrnaseq. Mol Asp Med. 2018; 59:114–22. https://doi.org/10.1016/j.mam.2017.07.002. The emerging field of single-cell analysis.
Reutlinger M, Schneider G. Nonlinear dimensionality reduction and mapping of compound libraries for drug discovery. J Mol Graph Model. 2012; 34:108–17. https://doi.org/10.1016/j.jmgm.2011.12.006.
Abdelmoula WM, Balluff B, Englert S, Dijkstra J, Reinders MJT, Walch A, McDonnell LA, Lelieveldt BPF. Data-driven identification of prognostic tumor subpopulations using spatially mapped t-sne of mass spectrometry imaging data. Proc Natl Acad Sci. 2016; 113(43):12244–9. https://doi.org/10.1073/pnas.1510227113.
Rao AR, Chhabra A, Das R, Ruhil V. A framework for analyzing publicly available healthcare data. In: 2015 17th International Conference on E-health Networking, Application Services (HealthCom). Boston: HealthCom: 2015. p. 653–656. https://doi.org/10.1109/HealthCom.2015.7454585.
Epstein AM. Geographic variation in medicare spending. N Engl J Med. 2010; 363(1):85–6. https://doi.org/10.1056/NEJMe1005212.
Newhouse JP, Garber AM. Geographic variation in health care spending in the united states: insights from an institute of medicine report. JAMA. 2013; 310(12):1227–8.
Owen RR, Feng W, Thrush CR, Hudson TJ, Austen MA. Variations in prescribing practices for novel antipsychotic medications among veterans affairs hospitals. Psychiatr Serv. 2001; 52(11):1523–5. https://doi.org/10.1176/appi.ps.52.11.1523.
Baxter C, Jones R, Corr L. Time trend analysis and variations in prescribing lipid lowering drugs in general practice. BMJ. 1998; 317(7166):1134–5.
Heins JK, Heins A, Grammas M, Costello M, Huang K, Mishra S. Disparities in analgesia and opioid prescribing practices for patients with musculoskeletal pain in the emergency department. J Emerg Nurs. 2006; 32(3):219–24.
Ashworth M, Charlton J, Ballard K, Latinovic R, Gulliford M. Variations in antibiotic prescribing and consultation rates for acute respiratory infection in uk general practices 1995–2000. Br J Gen Pract. 2005; 55(517):603–8.
Birkmeyer JD, Reames BN, McCulloch P, Carr AJ, Campbell WB, Wennberg J. Understanding of regional variation in the use of surgery. Lancet. 2013; 382(9898):1121–9.
Goldberg T, Kroehl ME, Suddarth KH, Trinkley KE. Variations in metformin prescribing for type 2 diabetes. J Am Board Fam Med. 2015; 28(6):777–84.
Reames BN, Shubeck SP, Birkmeyer JD. Strategies for reducing regional variation in the use of surgery a systematic review. Ann Surg. 2014; 259(4):616.
Porter MP, Kerrigan MC, Donato BMK, Ramsey SD. Patterns of use of systemic chemotherapy for medicare beneficiaries with urothelial bladder cancer. Urol Oncol. 2011; 29:252–8.
Fong RK, Johnson A, Gill SS. Cholinesterase inhibitors: an example of geographic variation in prescribing patterns within a drug class. Int J Geriatr Psychiatry. 2015; 30(2):220–2. https://doi.org/10.1002/gps.4212.
Golberstein E, Rhee TG, McGuire TG. Geographic variations in use of medicaid mental health services. Psychiatr Serv. 2015; 66(5):452–4.
Ohlsson H, Vervloet M, van Dijk L. Practice variation in a longitudinal perspective: a multilevel analysis of the prescription of simvastatin in general practices between 2003 and 2009. Eur J Clin Pharmacol. 2011; 67(12):1205–11. https://doi.org/10.1007/s00228-011-1082-8.
Brookes-Howell L, Hood K, Cooper L, Little P, Verheij T, Coenen S, Godycki-Cwirko M, Melbye H, Borras-Santos A, Worby P, Jakobsen K, Goossens H, Butler CC. Understanding variation in primary medical care: a nine-country qualitative study of clinicians' accounts of the non-clinical factors that shape antibiotic prescribing decisions for lower respiratory tract infection. BMJ Open. 2012; 2(4). https://doi.org/10.1136/bmjopen-2011-000796.
Omar RZ, O'Sullivan C, Petersen I, Islam A, Majeed A. A model based on age, sex, and morbidity to explain variation in uk general practice prescribing: cohort study. BMJ. 2008; 337:238. https://doi.org/10.1136/bmj.a238.
Davis MM, Patel MS, Halasyamani LK. Variation in estimated medicare prescription drug plan costs and affordability for beneficiaries living in different states. J Gen Intern Med. 2007; 22(2):257–63. https://doi.org/10.1007/s11606-006-0018-y.
Forster DP, Frost CE. Use of regression analysis to explain the variation in prescribing rates and costs between family practitioner committees. Br J Gen Pract. 1991; 41(343):67–71.
Fretheim A, Oxman AD. International variation in prescribing antihypertensive drugs: its extent and possible explanations. BMC Health Serv Res. 2005; 5(1):21. https://doi.org/10.1186/1472-6963-5-21.
Sorensen HT, Steffensen FH, Nielsen GL, Gron P. Variation in antibiotic prescribing costs in danish general practice: an epidemiological pharmaco-economic analysis. Int J Risk Saf Med. 1996; 8(3):243–50. https://doi.org/10.3233/JRS-1996-8308.
Cutler D, Skinner J, Stern AD, Wennberg D. Physician beliefs and patient preferences: a new look at regional variation in health care spending. Technical report, National Bureau of Economic Research. 2013.
Rothberg MB, Bonner AB, Rajab MH, Kim HS, Stechenberg BW, Rose DN. Effects of local variation, specialty, and beliefs on antiviral prescribing for influenza. Clin Infect Dis. 2006; 42(1):95–9. https://doi.org/10.1086/498517.
Munson J, Morden N, Goodman D, Valle L, Wennberg J. The Dartmouth atlas of Medicare prescription drug use. Lebanon: NH: The Dartmouth Institute for Health Policy and Clinical Practice; 2013.
Christakis NA, Fowler JH. Commentary—contagion in prescribing behavior among networks of doctors. Mark Sci. 2011; 30(2):213–6.
Curtis LH, Østbye T, Sendersky V, Hutchison S, Dans PE, Wright A, Woosley RL, Schulman KA. Inappropriate prescribing for elderly americans in a large outpatient population. Arch Intern Med. 2004; 164(15):1621–5.
Center for Medicare Services. Physician shared patient patterns technical requirements. 2016. https://downloads.cms.gov/foia/physician_shared_patient_patterns_technical_requirements.pdf. Accessed 23 June 2016.
Center for Medicare Services. CMS 2013 Medicare Part D Statistical Supplement. 2016. https://www.cms.gov/Research-Statistics-Data-and-Systems/Statistics-Trends-and-Reports/MedicareMedicaidStatSupp/Downloads/2013PartD.zip. Accessed 20 May 2016.
Kaiser Family Foundation. Total Number of Medicare Beneficiaries Data File. 2016. https://catalog.data.gov/dataset/va-national-drug-file-may-2015.
United States Department of Agriculture Economic Research Service. Rural-Urban Continuum Codes. 2016. https://www.ers.usda.gov/webdocs/DataFiles/RuralUrban_Continuum_Codes__18011/ruralurbancodes2013.xls?v=41404. Accessed 2016-05-20.
National Bureau of Economic Research. SSA to FIPS CBSA and MSA County Crosswalk Files. 2016. http://www.nber.org/data/cbsa-msa-fips-ssa-county-crosswalk.html. Accessed 20 May 2016.
Office of Policy Development and Research: U.S. Department of Housing and Urban Development. HUD USPS Zip Code Crosswalk Files. 2016. https://www.huduser.gov/portal/datasets/usps_crosswalk.html. Accessed 20 May 2016.
Kaiser Family Foundation. Total Number of Medicare Beneficiaries Data File. 2013. http://kff.org/medicare/state-indicator/total-medicare-beneficiaries/?currentTimeframe=&sortModel=%7B%22colId%22:%22Location%22,%22sort%22:%22asc%22%7D. Accessed 04 May 2017.
Ward Jr JH. Hierarchical grouping to optimize an objective function. J Am Stat Assoc. 1963; 58(301):236–44.
Van Der Maaten L. Accelerating t-sne using tree-based algorithms. J Mach Learn Res. 2014; 15(1):3221–45.
van der Maaten L. Barnes-Hut-SNE. CoRR. 2013; abs/1301.3342.
Gini C. On the measurement of concentration and variability of characters (English translation from Italian by Fulvio de Santis). Metron. 1914; 63:3–38.
Dixon PM, Weiner J, Mitchell-Olds T, Woodley R. Bootstrapping the gini coefficient of inequality. Ecology. 1987; 68(5):1548–51. https://doi.org/10.2307/1939238.
Dixon PM, Weiner J, Mitchell-Olds T, Woodley R. Errata: Bootstrapping the gini coefficient of inequality. Ecology. 1988; 69(4):1307. https://doi.org/10.2307/1941291.
McDonald DC, Carlson K, Izrael D. Geographic variation in opioid prescribing in the u.s. J Pain. 2012; 13(10):988–96. https://doi.org/10.1016/j.jpain.2012.07.007.
McDonald DC, Carlson KE. The ecology of prescription opioid abuse in the usa: geographic variation in patients' use of multiple prescribers ("doctor shopping"). Pharmacoepidemiol Drug Saf. 2014; 23(12):1258–67. https://doi.org/10.1002/pds.3690.
Curtis LH, Stoddard J, Radeva JI, Hutchison S, Dans PE, Wright A, Woosley RL, Schulman KA. Geographic variation in the prescription of schedule ii opioid analgesics among outpatients in the united states. Health Serv Res. 2006; 41(3 Pt 1):837–55. https://doi.org/10.1111/j.1475-6773.2006.00511.x.
Paulozzi LJ, Mack KA, Hockenberry JM, Division of Unintentional Injury Prevention NCfIP, Control CDC. Vital signs: variation among states in prescribing of opioid pain relievers and benzodiazepines - united states, 2012. MMWR Morb Mortal Wkly Rep. 2014; 63(26):563–8.
Paulozzi LJ, Mack KA, Hockenberry JM. Variation among states in prescribing of opioid pain relievers and benzodiazepines–united states, 2012. J Safety Res. 2014; 51:125–9. https://doi.org/10.1016/j.jsr.2014.09.001.
Tang Y, Chang CC, Lave JR, Gellad WF, Huskamp HA, Donohue JM. Patient, physician and organizational influences on variation in antipsychotic prescribing behavior. J Ment Health Policy Econ. 2016; 19(1):45–59.
Schirle L, McCabe BE. State variation in opioid and benzodiazepine prescriptions between independent and nonindependent advanced practice registered nurse prescribing states. Nurs Outlook. 2016; 64(1):86–93.
Brookes-Howell L, Hood K, Cooper L, Coenen S, Little P, Verheij T, Godycki-Cwirko M, Melbye H, Krawczyk J, Borras-Santos A, Jakobsen K, Worby P, Goossens H, Butler CC. Clinical influences on antibiotic prescribing decisions for lower respiratory tract infection: a nine country qualitative study of variation in care. BMJ Open. 2012;2(3). https://doi.org/10.1136/bmjopen-2011-000795.
Steinman MA, Yang KY, Byron SC, Maselli JH, Gonzales R. Variation in outpatient antibiotic prescribing in the united states. Am J Manag Care. 2009; 15(12):861–8.
Cordoba G, Siersma V, Lopez-Valcarcel B, Bjerrum L, Llor C, Aabenhus R, Makela M. Prescribing style and variation in antibiotic prescriptions for sore throat: cross-sectional study across six countries. BMC Fam Pract. 2015; 16:7. https://doi.org/10.1186/s12875-015-0224-y.
Fleming-Dutra KE, Hersh AL, Shapiro DJ, Bartoces M, Enns EA, File JTM, Finkelstein JA, Gerber JS, Hyun DY, Linder JA, Lynfield R, Margolis DJ, May LS, Merenstein D, Metlay JP, Newland JG, Piccirillo JF, Roberts RM, Sanchez GV, Suda KJ, Thomas A, Woo TM, Zetts RM, Hicks LA. Prevalence of inappropriate antibiotic prescriptions among us ambulatory care visits, 2010-2011. JAMA. 2016; 315(17):1864–73. https://doi.org/10.1001/jama.2016.4151.
Williamson DA, Roos R, Verrall A, Smith A, Thomas MG. Trends, demographics and disparities in outpatient antibiotic consumption in new zealand: a national study. J Antimicrob Chemother. 2016; 71(12):3593–8. https://doi.org/10.1093/jac/dkw345.
Hansen DG, Sondergaard J, Vach W, Gram LF, Rosholm JU, Kragstrup J. Antidepressant drug use in general practice: inter-practice variation and association with practice characteristics. Eur J Clin Pharmacol. 2003; 59(2):143–9. https://doi.org/10.1007/s00228-003-0593-3.
Pharoah PD, Melzer D. Variation in prescribing of hypnotics, anxiolytics and antidepressants between 61 general practices. Br J Gen Pract. 1995; 45(400):595–9.
Lund BC, Abrams TE, Bernardy NC, Alexander B, Friedman MJ. Benzodiazepine prescribing variation and clinical uncertainty in treating posttraumatic stress disorder. Psychiatr Serv. 2013; 64(1):21–7. https://doi.org/10.1176/appi.ps.201100544.
Mayne SL, Ross ME, Song L, McCarn B, Steffes J, Liu W, Margolis B, Azuine R, Gotlieb E, Grundmeier RW, Leslie LK, Localio R, Wasserman R, Fiks AG. Variations in mental health diagnosis and prescribing across pediatric primary care practices. Pediatrics. 2016; 137(5). https://doi.org/10.1542/peds.2015-2974.
Scrivener G, Lloyd DC. Allocating census data to general practice populations: implications for study of prescribing variation at practice level. BMJ. 1995; 311(6998):163–5.
Davis P, Gribben B. Rational prescribing and interpractitioner variation. a multilevel approach. Int J Technol Assess Health Care. 1995; 11(3):428–42.
Davis PB, Yee RL, Millar J. Accounting for medical variation: the case of prescribing activity in a new zealand general practice sample. Soc Sci Med. 1994; 39(3):367–74.
Sinnige J, Braspenning JC, Schellevis FG, Hek K, Stirbu I, Westert GP, Korevaar JC. Inter-practice variation in polypharmacy prevalence amongst older patients in primary care. Pharmacoepidemiol Drug Saf. 2016. https://doi.org/10.1002/pds.4016.
Tomlin AM, Gillies TD, Tilyard M, Dovey SM. Variation in the pharmaceutical costs of new zealand general practices: a national database linkage study. J Public Health (Oxf). 2016; 38(1):138–46. https://doi.org/10.1093/pubmed/fdu116.
ProPublica. Prescriber Checkup Data. 2016. https://www.propublica.org/datastore/dataset/prescriber-checkup.
Ornstein C. Government Releases Massive Trove of Data on Doctors' Prescribing Patterns. 2015. https://www.propublica.org/article/government-releases-massive-trove-of-data-on-doctors-prescribing-patterns. Accessed 30 Apr 2015.
ProPublica. Prescriber Checkup. 2016. https://projects.propublica.org/checkup/ Accessed 23 June 2016.
Dean BB, Lam J, Natoli JL, Butler Q, Aguilar D, Nordyke RJ. Use of electronic medical records for health outcomes research a literature review. Med Care Res Rev. 2009; 66(6):611–38. https://doi.org/10.1177/1077558709332440.
Tannen RL, Weiner MG, Xie D. Replicated studies of two randomized trials of angiotensin-converting enzyme inhibitors: further empiric validation of the 'prior event rate ratio' to adjust for unmeasured confounding by indication. Pharmacol Drug Saf. 2008; 17(7):671–85. https://doi.org/10.1002/pds.1584.
Tannen RL, Weiner MG, Xie D, Barnhart K. A simulation using data from a primary care practice database closely replicated the women's health initiative trial. J Clin Epidemiol. 2007; 60(7):686–95. https://doi.org/10.1016/j.jclinepi.2006.10.012.
Caudle KE, Gammal RS, Whirl-Carrillo M, Hoffman JM, Relling MV, Klein TE. Evidence and resources to implement pharmacogenetic knowledge for precision medicine. Am J Health Syst Pharm. 2016; 73(23):1977–85. https://doi.org/10.2146/ajhp150977.
The Dartmouth Institute. The Dartmouth Atlas of Medicare Prescription Drug Use. 2013. http://www.dartmouthatlas.org/downloads/reports/Prescription_Drug_Atlas_101513.pdf. Accessed 14 Apr 2015.
Jaye C, Tilyard M. A qualitative comparative investigation of variation in general practitioners' prescribing patterns. Br J Gen Pract. 2002; 52(478):381–6.
Skegg K, Skegg DC, McDonald BW. Is there seasonal variation in the prescribing of antidepressants in the community? J Epidemiol Commun Health. 1986; 40(4):285–8.
Johnson RE, Azevedo DJ, Kieburtz KD. Variation in individual physicians' prescribing. J Ambul Care Manage. 1986; 9(1):25–37.
Kahn MG, Banade D. The impact of electronic medical records data sources on an adverse drug event quality measure. J Am Med Inf Assoc. 2010; 17(2):185–91. https://doi.org/10.1136/jamia.2009.002451.
Parsons A, McCullough C, Wang J, Shih S. Validity of electronic health record-derived quality measurement for performance monitoring. J Am Med Inform Assoc. 2012; 19(4):604–9. https://doi.org/10.1136/amiajnl-2011-000557.
Newman-Casey PA, Woodward MA, Niziol LM, Lee PP, De Lott LB. Brand medications and medicare part d: How eye care providers' prescribing patterns influence costs. Ophthalmology. 2017. https://doi.org/10.1016/j.ophtha.2017.05.024.
Kesselheim AS, Avorn J, Sarpatwari A. The high cost of prescription drugs in the united states: Origins and prospects for reform. JAMA. 2016; 316(8):858–71. https://doi.org/10.1001/jama.2016.11237.
Manzoli L, Flacco ME, Boccia S, D'Andrea E, Panic N, Marzuillo C, Siliquini R, Ricciardi W, Villari P, Ioannidis JP. Generic versus brand-name drugs used in cardiovascular diseases. Eur J Epidemiol. 2016; 31(4):351–68. https://doi.org/10.1007/s10654-015-0104-8.
Corrao G, Soranna D, La Vecchia C, Catapano A, Agabiti-Rosei E, Gensini G, Merlino L, Mancia G. Medication persistence and the use of generic and brand-name blood pressure-lowering agents. J Hypertens. 2014; 32(5):1146–531153. https://doi.org/10.1097/HJH.0000000000000130.
Barnett JC, Vornovitsky M. Health insurance coverage in the united states: 2015. Report P60-257, United States Census Bureau. 2016. https://www.census.gov/content/dam/Census/library/publications/2016/demo/p60-257.pdf.
The authors would like to thank Rick McGuire, Orna Intrator and David Mitten for discussions related to this work.
This work was supported in part by research grants from National Institute of Health: National Center for Advancing Translation Science UL1 TR000042 and U54 TR001625 (MZ, CQ, RW, and SW), KL2 TR001999 (CQ), TL1 TR002000 (SW, KB), R01AI098112 and R01AI069351 (MZ), from the Philip Templeton Foundation (MZ, RW, SF), and from the University of Rochester Center for Health Informatics (MZ, RW, AR, CF, SF, SW).
The datasets analyzed during the current study are all publicly available, and URLs for their download are listed in the Methods Section and references.
Rochester Center for Health Informatics at the University of Rochester Medical Center, 265 Crittenden Blvd - 1.207, Rochester, 14642, NY, USA
Alexander Rosenberg, Christopher Fucile, Robert J. White, Melissa Trayhan, Samir Farooq, Caroline M. Quill, Samuel J. Weisenthal, Kristen Bush & Martin S. Zand
University of Alabama Birmingham, Düsternbrooker Weg 20, Birmingham, 14642, AL, USA
Alexander Rosenberg & Christopher Fucile
Department of Medicine, Division of Nephrology, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, 14642, NY, USA
Martin S. Zand
Department of Medicine, Division of Pulmonary and Critical Care, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, 14642, NY, USA
Caroline M. Quill
Clinical and Translational Science Institute, University of Rochester Medical Center, 265 Crittenden Blvd, Rochester, 14642, NY, USA
Robert J. White, Melissa Trayhan, Samir Farooq, Caroline M. Quill, Samuel J. Weisenthal, Kristen Bush & Martin S. Zand
Department Pharmacy, University of Rochester Medical Center, 601 Elmwood Avenue, Rochester, 14642, NY, USA
Lisa A. Nelson
Alexander Rosenberg
Christopher Fucile
Robert J. White
Melissa Trayhan
Samir Farooq
Samuel J. Weisenthal
Kristen Bush
The project was designed and overseen by MZ and AR. Data wrangling and domain expertise were provided by CF, CQ, MT, LN, AR, MZ and SF. Statistical and machine learning analyses were performed by AR, CF, MZ, SF. Figures were produced by AR, MZ and RW. The manuscript was written by AR, MZ, RW, SW, CQ, SF, KB and LN. All authors read and approved the final manuscript.
Correspondence to Martin S. Zand.
Not applicable to this manuscript.
Additional file 1
Figure S1. Data sources used for this study. This schema depicts various sources of data and how they are related. Red font indicates a data column with unique values. (EPS 332 kb)
Table S2. Differences between providers by services, patient beneficiary demographics, and payments. Comparison between low volume (≤ 25,000 prescriptions over 12 months) and high volume (> 25,000 prescriptions over 12 months) provider patient populations. In general, high volume prescribers had a higher proportion of patients with more complex medical conditions (e.g. cancer, Alzheimer's disease, heart failure), more elderly patients, and much higher use of Medicare services. (PDF 123 kb)
Table S1. Differences in high-prescribing provider fractions by geographic region. Table quantifies the fraction of high prescribing Medicare prescribers by United States Administrative Region (see Additional file 1: Figure S1 for region definitions). (PDF 60.6 kb)
Figure S3. Comparison of prescribing diversity and prescribing volume. Density/scatter plot indicating the number of unique drugs (top) drug classes (bottom) prescribed (diversity; y-axis), number of claims (volume; x-axis) and number of providers bin height coded as color. Bins that have a single provider are indicated by a blue dot. (EPS 1751 kb)
Figure S4. t-SNE plot showing distribution of claim volume per provider. This t-SNE plot is based on the provider by drug matrix, as shown in Fig. 3a. Color corresponds to the Log10 of claims per provider (each represented by a dot). (PDF 899 kb)
Figure S6. Unidimensional bar graphs of medication class prescribing frequency by region. Bar graphs of each of the top 10 medication classes prescribed (by percentage of individual prescriber prescriptions) for each of 24 medical specialty groupings, plotted for each of 10 Federal Regions. Note that drug class prescribing percentages are mean levels, and truncated at 21% to make the visualizations informative. (ZIP 5280 kb)
Figure S5. Hierarchical clustering. Plots of the 605 clusters identified by hierarchical clustering with linkage using Ward's minimization criteria. The background is the full t-SNE projection, while each cluster is in red. This 19 page figure is available for download from https://figshare.com/account/projects/24664/articles/5388157. (PDF 19 MB)
Figure S2. United States Census Regions. Map of United States Census Regions used for geographic data comparisons. Adapted from the United States Census Bureau. (PDF 249 kb)
Figure S7. Characteristics of core-based statistical areas (CBSA). 52 CBSAs are listed that have July 2012 population estimates greater than 1,000,000 residents. See Methods for data sources. (PDF 24.2 kb)
Additional file 10
Figure S8. t-SNE plots with particular CBSAs highlighted. A. t-SNE plot based on provider by drug matrix (as in Fig. 3a) with providers in Rochester and Oklahoma City annotated (see Fig. 10b). B. t-SNE plot based on drug class by provider matrix (as in 3b) with providers in Miami and Boston annotated (see Fig. 10e). (PDF 1010 kb)
Figure S9. Comparison of between-state multidimensional distance matrices for prescribing pattern versus disease prevalence. Provider-specific data for drug class prescribing patterns (n=68 drug classes) and provider-specific patient disease prevalence (n=13 diseases) were obtained from Medicare public use files. Disease prevalence figures included dementia, asthma, atrial fibrillation, cancer, depression, diabetes, chronic obstructive pulmonary disease, chronic kidney disease, heart failure, hyperlipidemia, hypertension, ischemic heart disease, and stroke. Medicare providers with ≥ 1000 Medicare prescriptions in 2013 (n=207,158) and complete data were grouped by state (50 US states, the District of Columbia, and Puerto Rico). We then calculated the mean prescribing pattern feature vector and the mean patient disease prevalence values for each state's providers. To test whether the multi-dimensional drug prescribing pattern differences were correlated with multi-dimensional disease prevalence, we calculated the n-dimensional Euclidean distance matrix between each pair of states for both prescribing patterns and disease prevalence. Thus, states with similar Medicare prescribing patterns have small multi-dimensional Euclidean distances, while those that differ have large distances. A similar relationship holds for n-dimensional distances calculated using the disease prevalence feature vector; pairs of states with similar disease prevalence have small n-dimensional Euclidean distances. We then tested the correlation between disease prevalence and prescribing pattern distances by analysis of variance, finding an R2=0.22185, indicating that variation in prescribing patterns between states cannot be explained simply by variance in disease prevalence. (PDF 1370 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Rosenberg, A., Fucile, C., White, R. et al. Visualizing nationwide variation in medicare Part D prescribing patterns. BMC Med Inform Decis Mak 18, 103 (2018). https://doi.org/10.1186/s12911-018-0670-2
Healthcare variation | CommonCrawl |
Joint distributed beamforming and jamming schemes in decode-and-forward relay networks for physical layer secrecy
Chengmin Gu1 &
Chao Zhang ORCID: orcid.org/0000-0002-5884-21691
In this paper, we propose joint cooperative beamforming and jamming schemes in decode-and-forward (DF) relay networks for physical layer secrecy. In DF relay networks, only the relays decoding the message from the source correctly join the forwarding phase, and the relays with decoding errors are not sufficiently utilized. Taking this property into consideration, we let the relays decoding successfully transmit information signals and the relays decoding in error transmit jamming signals to improve the secrecy capacity of the system. For this purpose, we design a bi-level optimization algorithm to search for the optimal beamforming and jamming vectors via semi-definite relaxation (SDR). In addition, to balance system cost and secrecy performance, we also study some suboptimal schemes to improve information secrecy. Finally, the simulation results show that the optimal scheme outperforms all other simulated schemes and the suboptimal schemes achieve a good tradeoff between secrecy performance and computational complexity.
Owing to the openness and broadcast nature of wireless communications, information carried by radio waves is vulnerable to interception by unintended users. Traditional secrecy mechanisms, which depend on data encryption in the application layer, mainly rely on computational complexity that is unaffordable for eavesdroppers to recover the encrypted messages. However, with the rapid development of computing apparatuses, traditional encryption technology confronts unprecedented challenges. Thus, to achieve information security in wireless transmission, physical layer secrecy, which takes advantage of intrinsic characteristics of the wireless channels to achieve transmission security without applying encryption technology, has drawn much attention and been applied in many scenarios [1–4].
On the other hand, relay-aided cooperative communication technology has been applied in wireless scenarios, since it can extend coverage and improve the reliability of signal transmission [5]. Additionally, relay-aided cooperative networks have been proved to be able to enhance transmission security [6, 7]. In multiple-relay networks without a direct link from the source to the destination, cooperative beamforming (CB), the single-relay and single-jammer (SRSJ) scheme, the single-relay and multiple-jammer (SRMJ) scheme, the multiple-relay and single-jammer (MRSJ) scheme, and the multiple-relay and multiple-jammer (MRMJ) scheme are five main transmission schemes for physical layer secrecy. In CB-based relay systems, relays just perform distributed beamforming directly to the legal destination in order to enlarge the capacity of the legal channel as much as possible [8–11]. In the SRSJ scheme, to relax the requirement of signal synchronization, how to select the best pair of forwarding relay and jammer was addressed in [12]. In the SRMJ scheme, the best relay is picked out to forward the information and the remaining relays transmit jamming signals to confuse the eavesdropper [13]. Alternatively, the MRSJ scheme selects the best jammer from all relays and lets the remaining relays perform cooperative beamforming to the legal user [14]. As a relay with the amplify-and-forward (AF) scheme can always transmit signals to the destination [15], the MRMJ scheme, in which some relays are used to transmit information to the legal destination and the remaining ones produce jamming signals to improve the secrecy capacity of the information transmission, is usually investigated in AF relay networks [16–18]. In DF relay networks, a MRMJ scheme was proposed in [19], where the relays without decoding error are divided into two groups, one for information beamforming and the other for cooperative jamming. However, it does not make the most of the relays that cannot decode the message from the source successfully to enhance the secrecy of information transmission. In [20], a simultaneous beamforming and jamming scheme was proposed in DF relay networks. However, the relays decoding in error are not sufficiently exploited. A MRMJ scheme with fixed jamming and beamforming sets was proposed in [21]. Similarly, a multi-relay secrecy transmission scheme with a dedicated multi-antenna jammer was proposed in [22]. Both [21] and [22] deploy dedicated jamming nodes that cannot forward information to the legal destination. If there exists a direct link from the source to the destination, cooperative jamming (CJ) and transmission switching between CB and CJ were also proposed to improve system secrecy [23, 24]. Moreover, joint CB and CJ schemes in co-located multi-antenna scenarios were also investigated in [25] and [26]. In [27], we investigated the optimal joint CB and CJ scheme for DF relay networks with direct links from the source to the destination and the eavesdropper. However, the assumptions on relay set partition and interference cancelation are difficult to implement. In this paper, we consider a more practical scenario for the joint beamforming and jamming scheme in DF relay networks and provide an optimal scheme as well as lower-complexity suboptimal schemes to investigate the proposed idea of joint beamforming and jamming thoroughly. In [28], Guo et al. also addressed power allocation in a joint beamforming and jamming scheme in DF relay networks to achieve the maximum secrecy rate.
In this paper, we intend to employ joint cooperative beamforming and jamming to improve the secrecy capacity in DF relay networks without a direct link between the source and the destination. In DF relay networks, only the relays decoding the message from the source correctly forward information to the legitimate destination during the relaying phase. To efficiently utilize all relays, the relays decoding successfully constitute the beamforming set, where relays perform distributed beamforming to transmit information signals. At the same time, the relays decoding in error belong to the jamming set, where the relays transmit jamming signals to disturb the eavesdropper. Herein, our goal is to design the beamforming vector and jamming vector in order to maximize the achievable secrecy capacity under the constraint of total relay power. For this purpose, we design a bi-level optimization algorithm that searches for the optimal beamforming and jamming vectors via semi-definite relaxation (SDR). In addition, to balance system cost and secrecy performance, we also study some suboptimal schemes to improve information secrecy. Finally, our numerical results show that the optimal scheme outperforms all existing schemes and the proposed suboptimal schemes. In addition, some suboptimal schemes with low computational complexity also have better secrecy performance than existing schemes.
Compared to existing work, the contributions of this paper are as follows:
In most existing work, the relays decoding in error are usually not utilized during the cooperative transmission phase. In this paper, taking this distinguishing feature of DF relaying into consideration, we employ the relays decoding in error to transmit jamming signals and thereby improve the secrecy of cooperative transmission.
We show how to design the beamforming vector and jamming vector in order to maximize the achievable secrecy capacity under the constraint of total relay power. To reduce the computational complexity, we also propose four suboptimal designs.
Through simulations, we compare the proposed designs with the existing CB, SRMJ, MRSJ and SRSJ schemes and show that the optimal design and some suboptimal designs outperform the existing schemes. The suitable scenarios for these proposed designs are also addressed.
We adopt the following notations. Bold uppercase letters denote matrices and bold lowercase letters denote column vectors. Transpose and conjugate transpose are represented by (·)T and (·)H respectively; I K is the identity matrix of K×K; \({\mathbb C}^{n}\) denotes the space of n×1 column vector with complex entries; Tr(·) is the trace of the matrix; Rank(·) denotes the rank of a matrix; ∥·∥2 is the two-norm of a vector; A≽0 and A≻0 mean that A is positive semi-definite and definite matrix, respectively; x⊥y denotes vector x and vector y are orthogonal; x∥y denotes vector x and vector y are parallel; \(\mathbb E\{\cdot \}\) denotes expectation.
System model and transmission scheme
As depicted in Fig. 1, we study a wireless relay network which consists of a source node (S), N (N≥1) relay nodes, a desired destination node (D) and a single eavesdropper (E). We assume that these relays are trusted during the information transmission. Each node is equipped with one omni-directional antenna and operates in half-duplex mode. We assume that the channel coefficient \(h_{ij}\) from node i to node j, 1≤i,j≤N, follows a complex Gaussian distribution with zero mean and variance \(K_{0} d_{ij}^{-\beta } \sigma _{ij}^{2}\), i.e., \({h}_{ij}{\sim }{\mathcal {{CN}}}\left (0,K_{0} d_{ij}^{-\beta } \sigma _{ij}^{2}\right)\), where \(K_{0}\) is the gain constant determined by the transmitting and receiving antennas, \(d_{ij}\) denotes the distance between node i and node j, β is the path-loss exponent, and \(\sigma _{ij}^{2}\) stands for the small-scale fading variance. Considering the dispersed relays, we assume that all channels experience independent and identically distributed (i.i.d.) small-scale fading, i.e., \(\sigma _{ij}^{2}=\sigma _{0}^{2}\). Furthermore, the channels are assumed to be reciprocal, i.e., \(h_{ij}=h_{ji}\). Owing to path loss and shadow fading, there does not exist a direct link from the source to the destination or the eavesdropper. We consider a slow-fading channel scenario, where the channel coefficients remain constant within one transmission block T and vary from block to block. The noise at each receiver follows a complex Gaussian distribution with zero mean and variance \(N_{0}\). Additionally, although we consider nodes with one antenna in the system model, it is easy to extend the analysis and results to a multi-antenna scenario.
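As a numerical illustration only (our sketch, not part of the original paper), channel realizations under this model can be drawn as follows; the distance array d and the parameters K0, beta and sigma0_sq are assumed inputs:

```python
import numpy as np

def draw_channels(d, K0=1.0, beta=3.5, sigma0_sq=1.0, rng=None):
    """Draw channel coefficients h ~ CN(0, K0 * d**(-beta) * sigma0_sq) for given distances d."""
    rng = np.random.default_rng() if rng is None else rng
    var = K0 * np.asarray(d, dtype=float) ** (-beta) * sigma0_sq
    real = rng.standard_normal(np.shape(d))
    imag = rng.standard_normal(np.shape(d))
    return np.sqrt(var / 2) * (real + 1j * imag)
```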
Transmission scheme
The whole cooperative transmission is separated into two phases: source broadcasting and relay transmitting.
Source broadcasting
The source broadcasts its intended signal x to the whole network, and the relays receive the information-bearing signal and try to decode the message. Since there is no direct link from the source to the destination or the eavesdropper, the information transmitted in the source broadcasting phase is secure. Then, the received signal at the ith relay can be written as
$$ {y}_{si}=\sqrt{P_{s}}{h_{si}}{{x}}+{n}_{si} $$
where \(P_{s}\) is the transmit power of the source, \(h_{si}\) is the channel coefficient from the source to the ith relay, and \(n_{si}\) is the receiver noise. Also, x is the normalized information symbol, i.e., \(\mathbb {E}\{|{x}|^{2}\}=1\). If the ith relay can decode the received message correctly, it will join the information transfer in the relay transmitting phase. Otherwise, the ith relay is assigned to jam the eavesdropper. We assume all transmitted information blocks are protected by ideal error-control coding. In other words, if the received signal-to-noise ratio (SNR) of the ith relay, i.e., \(\gamma _{si}=P_{s}|h_{si}|^{2}/N_{0}\), is larger than the threshold \(\gamma _{th}\), where \(\gamma _{th}=2^{2R_{0}}-1\) and \(R_{0}\) is the target information transmission rate, the receiver can decode the information packet correctly. Thus, if \(\gamma _{si}\ge \gamma _{th}\), then the ith relay belongs to \(\mathcal {D}\). Otherwise, it is in the set \(\mathcal {J}\). As a result, each relay knows whether it can decode the message from the source successfully and which set it belongs to. Without loss of generality, we set \(\mathcal {D}=\{R_{1},R_{2},\cdots,R_{M}\}\) and \(\mathcal {J}=\{J_{1},J_{2},\cdots,J_{K}\}\), where \(1\le R_{i},J_{i}\le N\) are the indices of these relays (see Fig. 1) and M+K=N. Additionally, we assume that all receiver noises have the same noise power \(N_{0}\).
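To make the set partition concrete, the following minimal Python sketch (an illustration we add, with the array h_s of source-relay coefficients and the scalars Ps, N0, R0 as assumed inputs) selects the beamforming set \(\mathcal {D}\) and the jamming set \(\mathcal {J}\) from the per-relay SNRs:

```python
import numpy as np

def partition_relays(h_s, Ps, N0, R0):
    """Split relays into the decoding set D (beamformers) and the error set J (jammers)."""
    gamma_th = 2 ** (2 * R0) - 1            # SNR threshold for target rate R0
    gamma = Ps * np.abs(h_s) ** 2 / N0      # received SNR at each relay
    D = np.flatnonzero(gamma >= gamma_th)   # indices of relays that decode correctly
    J = np.flatnonzero(gamma < gamma_th)    # indices of relays assigned to jamming
    return D, J
```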
We define the weight coefficient vector of set \(\mathcal {D}\) for beamforming as \({\mathbf {w}}_{R}=\left [w_{R_{1}}^{*},w_{R_{2}}^{*},\cdots,w_{R_{M}}^{*}\right ]^{T}\) and the weight coefficient vector of set \(\mathcal {J}\) for jamming as \({\mathbf {w}}_{J}=\left [w_{J_{1}}^{*},w_{J_{2}}^{*},\cdots,w_{J_{K}}^{*}\right ]^{T}\). Moreover, \({\mathbf {h}}_{rd}=\left [h_{R_{1}D},h_{R_{2}D},\cdots,h_{R_{M}D}\right ]^{T}\) represents the channel vector from the beamforming set to the destination and \({\mathbf {h}}_{je}=\left [h_{J_{1} E},h_{J_{2} E},\cdots,h_{J_{K} E}\right ]^{T}\) denotes the channel vector from the jamming set to the eavesdropper. Before relay transmitting, the destination can transmit a training signal to let each relay estimate the channel coefficient from the destination to itself. Note that the eavesdropper is often a wireless user unauthorized to access the message intended for the destination [29]. Hence, the eavesdropper is able to cooperatively transmit a training signal to all relays. Then, each relay can obtain its channel coefficient \(h_{id}\), i=1,2,...,N. Moreover, we set up a central control node (CCN), which can be a relay node or a dedicated node. After that, each relay reports its related channel information to the CCN. Therefore, the CCN can compute the beamforming vector w R for the relays in set \(\mathcal {D}\) and the jamming vector w J for the relays in set \(\mathcal {J}\). In the following analysis, we assume that all receivers can estimate their received channel coefficients perfectly, so as to characterize the ideal performance and verify the theoretical feasibility of our proposals. The effect of estimation error will be discussed in future work. In contrast, the source node has no knowledge of its transmit channel state information due to practical constraints.
Relay transmitting
The relays in set \(\mathcal {D}\) transmit the information symbol x to the destination, while the relays in set \(\mathcal {J}\) radiate jamming symbol z to achieve secure transmission. The received signal at the destination is given by
$$ {y}_{rd}={\mathbf{w}}_{R}^{H}{\mathbf{h}}_{rd}{x}+{\mathbf{w}}_{J}^{H} {\mathbf{h}}_{jd}{z}+n_{rd} $$
and the received signal at the eavesdropper can be expressed as
$$ {y}_{re}={\mathbf{w}}_{R}^{H}{\mathbf{h}}_{re}{x}+{\mathbf{w}}_{J}^{H} {\mathbf{h}}_{je}{z}+n_{re} $$
where z is independent of x, and \(\mathbb {E}\{|z|^{2}\}=1\), n rd and n re are receiver noise power at the destination and the eavesdropper, respectively. Considering the total relay power constraint, we have ∥w R ∥2+∥w J ∥2≤P t . Therefore, the information transmission capacity C d (M) at the destination is
$$ {C}_{d}(M)=\frac{1}{2}{\log_{2}\left(1+\frac{{\mathbf{w}}^{H} {\mathbf{\widetilde{H}}}_{rd}{\mathbf{w}}} {{\mathbf{w}}^{H} {{\mathbf{\widetilde{H}}}_{jd}{\mathbf{w}}+N_{0}}} \right)} $$
and the capacity C e (M) at the eavesdropper can be expressed as
$$ {C}_{e}(M)=\frac{1}{2}{\log_{2}\left(1+\frac{{\mathbf{w}}^{H} {\mathbf{\widetilde{H}}}_{re}{\mathbf{w}}} {{\mathbf{w}}^{H} {{\mathbf{\widetilde{H}}}_{je}{\mathbf{w}}+N_{0}}}\right)} $$
where M denotes the cardinal of beamforming set, \({\mathbf {w}}=\left [{\mathbf {w}}_{R}^{T}~{\mathbf {w}}_{J}^{T}\right ]^{T}\), \({\mathbf {\widetilde {H}}}_{rd}={\widetilde {\mathbf {h}}}_{rd}{\widetilde {\mathbf {h}}}_{rd}^{H}\), \({\widetilde {\mathbf {H}}}_{re}={\widetilde {\mathbf {h}}}_{re}{\widetilde {\mathbf {h}}}_{re}^{H}\), \({\widetilde {\mathbf {H}}}_{jd}={\widetilde {\mathbf {h}}}_{jd}{\widetilde {\mathbf {h}}}_{jd}^{H}\), \({\widetilde {\mathbf {H}}}_{je}={\widetilde {\mathbf {h}}}_{je}{\widetilde {\mathbf {h}}}_{je}^{H}\). Herein, \({{\widetilde {\mathbf {h}}}_{rd}}=\left [{\mathbf {h}}_{rd}^{T},\mathbf {0}_{1\times K}\right ]^{T}\), \({\widetilde {\mathbf {h}}}_{re}=\left [{\mathbf {h}}_{re}^{T},\mathbf {0}_{1\times K}\right ]^{T}\), \({\widetilde {\mathbf {h}}}_{jd}=[\mathbf {0}_{1\times M},{\mathbf {h}}_{jd}^{T}]^{T}\), \({{\widetilde {\mathbf {h}}}_{je}}=\left [\mathbf {0}_{1\times M},{\mathbf h}_{je}^{T}\right ]^{T}\). Then, the secrecy capacity of the whole transmission conditioned on the set \(\mathcal {D}\) and \(\mathcal {J}\) is given by
$$ C_{s}(M)=\max\{0,~C_{d}(M)-C_{e}(M)\} $$
Note that \(C_{s}(M)=0\) means the eavesdropper can obtain no less correct information than the destination, which is called completely unsafe transmission and should be avoided. Our aim herein is to maximize the secrecy capacity \(C_{s}(M)\) under the total relay power constraint.
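For reference, the secrecy capacity in (4)-(6) can be evaluated for any candidate weight vectors as in the following sketch (our illustration; the channel vectors and weights are assumed inputs):

```python
import numpy as np

def secrecy_capacity(w_R, w_J, h_rd, h_re, h_jd, h_je, N0):
    """Evaluate C_d(M), C_e(M) and C_s(M) of Eqs. (4)-(6) for given weight vectors."""
    M, K = len(h_rd), len(h_jd)
    w = np.concatenate([w_R, w_J])
    # zero-padded (tilde) channel vectors so each beam acts on its own sub-block of w
    h_trd = np.concatenate([h_rd, np.zeros(K, complex)])
    h_tre = np.concatenate([h_re, np.zeros(K, complex)])
    h_tjd = np.concatenate([np.zeros(M, complex), h_jd])
    h_tje = np.concatenate([np.zeros(M, complex), h_je])
    sig_d, jam_d = np.abs(w.conj() @ h_trd) ** 2, np.abs(w.conj() @ h_tjd) ** 2
    sig_e, jam_e = np.abs(w.conj() @ h_tre) ** 2, np.abs(w.conj() @ h_tje) ** 2
    C_d = 0.5 * np.log2(1 + sig_d / (jam_d + N0))
    C_e = 0.5 * np.log2(1 + sig_e / (jam_e + N0))
    return max(0.0, C_d - C_e)
```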
Optimal design for maximizing secrecy capacity
In this section, we intend to design w R and w J to maximize the secrecy capacity under the total relay power constraint. On the basis of the previous section, the problem of secrecy capacity maximization under the power constraint can be expressed as
$$ \begin{aligned} &\max_{\mathbf{w}}~\left\{C_{d}(M)-C_{e}(M)\right\} \\ &\mathrm{s.t.}~\|{\mathbf{w}}\|^{2}\le P_{t} \end{aligned} $$
Since the secrecy capacity is the difference of two concave functions, the problem is non-convex in general. To deal with this issue, we will conduct a series of transformations to turn it into a convex problem, which can be solved by available solvers.
Firstly, by introducing an auxiliary variable 0<τ≤1, the problem of (7) can be equivalently rewritten as
$$\begin{array}{*{20}l} \max_{{\mathbf{w}},{\tau}}\quad &{\log_{2}\left(1+\frac{{\mathbf{w}}^{H} {\mathbf{\widetilde{H}}}_{rd}{\mathbf{w}}} {{\mathbf{w}}^{H} {{\mathbf{\widetilde{H}}}_{jd}{\mathbf{w}}+N_{0}}}\right)}-\log_{2}\left(\frac{1}{\tau}\right) \end{array} $$
$$\begin{array}{*{20}l} \mathrm{s.t.} \quad &{\log_{2}\left(1+\frac{{\mathbf{w}}^{H} {\mathbf{\widetilde{H}}}_{re}{\mathbf{w}}} {{\mathbf{w}}^{H} {{\widetilde{\mathbf{H}}}_{je}{\mathbf{w}}+N_{0}}} \right) }\leq \log_{2}\left(\frac{1}{\tau}\right) \end{array} $$
$$\begin{array}{*{20}l} &\|{\mathbf{w}}\|^{2}\leq P_{t} \end{array} $$
Then, owing to the monotonicity of the logarithm function, the optimization problem can be further simplified as
$$ \begin{aligned} \min_{{\mathbf{W}},{\tau}}\quad& \frac{\text{Tr}\left({\mathbf{\widetilde{H}}}_{jd}{\mathbf{W}}\right)+ N_{0}}{\left[\text{Tr}\left({\widetilde{\mathbf{H}}}_{jd}+{\widetilde{\mathbf{H}}}_{rd} \right){\mathbf{W}}+N_{0}\right]{\tau}}\\ \mathrm{s.t.} \quad&\frac{\text{Tr}\left({\widetilde{\mathbf{H}}}_{je}{\mathbf{W}}\right)+ N_{0}}{\left[\text{Tr}\left({\widetilde{\mathbf{H}}}_{je}+{\widetilde{\mathbf{H}}}_{re}\right) {\mathbf{W}}+N_{0}\right]{\tau}}\geq1\\ &\text{Tr}(\mathbf{W})\leq P_{t},~{\mathbf{W}}\succeq{\mathbf{0}}\\ &\text{Rank}(\mathbf{W})=1\\ \end{aligned} $$
where \({\mathbf {W}}={\mathbf {w}}{\mathbf {w}}^{H}\) is a rank-one square matrix. To solve problem (11), we employ the idea of SDR in [30] to drop the non-convex rank-one constraint. After the relaxation, problem (11) is still non-convex owing to the presence of the auxiliary variable τ. However, it can be treated as a quasi-convex problem for each fixed τ [31]. Therefore, it can be treated as a bi-level optimization problem: the outer-level optimization is over the auxiliary variable τ and the inner-level optimization is a quasi-convex problem.
Inner-level optimization
Since (11) is a quasi-convex problem given a τ, we can employ the classical bisection method [31] to seek the optimum W ⋆. Nevertheless, it may incur huge computational complexity. Thanks to the Charnes-Cooper transformation [32], we can convert the linear fractional optimization problem into the linear optimization problem, which can be solved efficiently by convex optimization tools. Define
$$\begin{aligned} \mathbf{Z}&=\frac{\mathbf{W}}{\left[\text{Tr}\left({\mathbf{\widetilde{H}}}_{jd}+{\widetilde{\mathbf{H}}}_{rd}\right){\mathbf{W}}+N_{0}\right] {\tau}}~~\text{and}\\ \psi&=\frac{1}{\left[\text{Tr}\left({\widetilde{\mathbf{H}}}_{jd}+{\widetilde{\mathbf{H}}}_{rd}\right){\mathbf W}+N_{0}\right]{\tau}}, \end{aligned} $$
then the problem (11) without a rank-one constraint can be equivalently described as
$$ \begin{aligned} \min_{{\mathbf{Z}},{\psi}}\quad&{\text{Tr}\left({\widetilde{\mathbf{H}}}_{jd}{\mathbf{Z}}\right)+{\psi}N_{0}}\\ \mathrm{s.t.}\quad&{\tau}\left(\text{Tr}\left({\widetilde{\mathbf{H}}}_{jd}+{\widetilde{\mathbf{H}}}_{rd}\right){\mathbf{Z}}+{\psi}N_{0}\right)=1\\ &{\tau}\left(\text{Tr}\left({\widetilde{\mathbf{H}}}_{je}+{\widetilde{\mathbf{H}}}_{re}\right){\mathbf{Z}}+{\psi}N_{0}\right) \le\text{Tr}\left({\widetilde{\mathbf{H}}}_{je}{\mathbf{Z}}\right)+{\psi}N_{0} \\ &\text{Tr}(\mathbf{Z})\leq {\psi}P_{t} \\ &{\psi}>0,~{\mathbf{Z}}\succeq{\mathbf{0}} \end{aligned} $$
Note that \({\mathbf {Z}}={\psi }{\mathbf {W}}\). Using the well-known CVX toolbox [31], we can solve problem (12) efficiently.
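As a sketch of how problem (12) could be coded (we use CVXPY here as a stand-in for the CVX toolbox cited above; the matrices H_rd, H_re, H_jd, H_je correspond to the tilde matrices and are assumed inputs):

```python
import numpy as np
import cvxpy as cp

def solve_inner(tau, H_rd, H_re, H_jd, H_je, N0, Pt):
    """Relaxed inner problem (12) for a fixed tau, in Charnes-Cooper variables (Z, psi)."""
    n = H_rd.shape[0]
    Z = cp.Variable((n, n), hermitian=True)
    psi = cp.Variable(nonneg=True)
    tr = lambda A: cp.real(cp.trace(A @ Z))                        # Tr(A Z)
    constraints = [
        tau * (tr(H_jd + H_rd) + psi * N0) == 1,                   # normalization
        tau * (tr(H_je + H_re) + psi * N0) <= tr(H_je) + psi * N0, # secrecy constraint
        cp.real(cp.trace(Z)) <= psi * Pt,                          # total relay power
        Z >> 0,
    ]
    prob = cp.Problem(cp.Minimize(tr(H_jd) + psi * N0), constraints)
    prob.solve(solver=cp.SCS)
    return prob.value, Z.value, psi.value
```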
Outer-level optimization
To find the optimal solution τ ⋆, we have to perform an exhaustive search over the general range 0<τ≤1. In order to further reduce the computational complexity, we first shrink the search interval of τ. On one hand, observing (9), we have
$$ {{\tau} \le \frac{1}{1+\frac{{\mathbf{w}}^{H} {\widetilde{\mathbf{H}}}_{re}{\mathbf{w}}} {{\mathbf{w}}^{H} {{\widetilde{\mathbf{H}}}_{je}{\mathbf{w}}+N_{0}}} }} $$
It means τ should be less than \({{{\max \left \{\left ({1+\frac {{\mathbf {w}}^{H} {\widetilde {\mathbf {H}}}_{re}{\mathbf {w}}} {{\mathbf {w}}^{H} {{\widetilde {\mathbf {H}}}_{je}{\mathbf {w}}+N_{0}}} }\right)^{-1}\right \}}}}\). Therefore, we have to solve the problem
$$ \min \limits_{{\mathbf{w}}\in\mathbb{C}^{N}} ~\frac{{\mathbf{w}}^{H}{\mathbf{A}}{\mathbf{w}}}{{\mathbf{w}}^{H}{\mathbf{B}}{\mathbf{w}}} $$
where \(\mathbf {A}={\widetilde {\mathbf {H}}}_{re}\succeq 0\) and \({\mathbf {B}}={\widetilde {\mathbf {H}}}_{je}+\frac {N_{0}}{P_{t}}{\mathbf I}_{N} \succ 0\). As the objective function of (14) is a generalized Rayleigh quotient [33], the minimum of (14) is equal to \(\lambda _{\min }({\mathbf {B}}^{-1}{\mathbf {A}})\), the minimum eigenvalue of \({\mathbf {B}}^{-1}{\mathbf {A}}\). Since A is positive semi-definite and B is positive definite, the eigenvalues of \({\mathbf {B}}^{-1}{\mathbf {A}}\) are not less than zero [33]. Then, we have
$$ \tau \leq \frac{1}{1+{\lambda}_{\min}\left({\mathbf{B}}^{-1}{\mathbf{A}}\right)}=\tau_{\max} $$
On the other hand, considering that the secrecy capacity must be non-negative, by (8) we can obtain
$$ \begin{aligned} {\tau}\ge \frac{1}{1+\frac{{\mathbf{w}}^{H} {\widetilde{\mathbf{H}}}_{rd}{\mathbf{w}}} {{\mathbf{w}}^{H} {{\widetilde{\mathbf{H}}}_{jd}{\mathbf{w}}+N_{0}}}} \ge \frac{1}{1+\lambda_{\max}\left({\mathbf{D}}^{-1}{\mathbf{C}}\right)}=\tau_{\min} \end{aligned} $$
where \({\mathbf {C}}={\widetilde {\mathbf {H}}}_{rd}\), \({\mathbf {D}}={\widetilde {\mathbf {H}}}_{jd}+\frac {N_{0}}{P_{t}}{\mathbf {I}}_{N}\), and \(\lambda _{\max }({\mathbf {D}}^{-1}{\mathbf {C}})\) represents the maximum eigenvalue of the matrix \({\mathbf {D}}^{-1}{\mathbf {C}}\). Note that \(\tau _{\max }\) and \(\tau _{\min }\) are independent of w. Therefore, the outer optimization is formulated as
$$ \begin{aligned} &\min_{\tau }~{\phi(\tau)}\\ &\mathrm{s.t.}~{\tau}_{\min}\le{\tau}\le \tau_{\max} \end{aligned} $$
where \({\phi (\tau)}=\text {Tr}({\widetilde {\mathbf {H}}}_{jd}{\mathbf {Z}^{\star }(\tau)})+{\psi ^{\star }(\tau)}N_{0}\), and \({\mathbf {Z}}^{\star }(\tau)\) and \(\psi ^{\star }(\tau)\) are the solutions of the inner-level optimization problem (12) for a given τ. After a one-dimensional search over \([\tau _{\min },\tau _{\max }]\), the optimal τ ⋆ that minimizes ϕ(τ) can be found.
As a result, we finally obtain the solution of problem (11), i.e., τ ⋆ and W ⋆=Z ⋆/ψ ⋆. If Rank(W ⋆)=1, we can obtain w ⋆ from W ⋆ via singular value decomposition (SVD) [34]. If the rank of W ⋆ is larger than one, we can extract an approximate solution from W ⋆ using the Gaussian randomization procedure (GRP) in [30]. For clarity, we summarize the procedure for solving the optimization problem in Algorithm 1.
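A compact sketch of the resulting bi-level search (our illustration of this procedure, reusing solve_inner() from the sketch above; a simple grid search over τ is assumed, and the final Gaussian randomization step is only indicated):

```python
import numpy as np

def bilevel_search(H_rd, H_re, H_jd, H_je, N0, Pt, grid=50):
    """Outer 1-D search over tau in [tau_min, tau_max]; inner SDP solved by solve_inner()."""
    n = H_rd.shape[0]
    B = H_je + (N0 / Pt) * np.eye(n)
    D = H_jd + (N0 / Pt) * np.eye(n)
    lam_min = max(np.min(np.linalg.eigvals(np.linalg.inv(B) @ H_re).real), 0.0)
    lam_max = np.max(np.linalg.eigvals(np.linalg.inv(D) @ H_rd).real)
    tau_min, tau_max = 1.0 / (1.0 + lam_max), 1.0 / (1.0 + lam_min)
    best_val, best_W = np.inf, None
    for tau in np.linspace(tau_min, tau_max, grid):
        val, Z, psi = solve_inner(tau, H_rd, H_re, H_jd, H_je, N0, Pt)
        if val is not None and val < best_val:
            best_val, best_W = val, Z / psi          # recover W = Z / psi
    # rank-one extraction: leading eigen-pair (exact when Rank(W) = 1; otherwise a
    # Gaussian randomization step would refine this approximation)
    eigval, eigvec = np.linalg.eigh(best_W)
    return np.sqrt(max(eigval[-1], 0.0)) * eigvec[:, -1]
```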
Suboptimal designs with low complexity
The proposed optimal design generally incurs high computational complexity. To reduce the computational complexity, we can rewrite the equivalent problem of (7) as
$$ \begin{aligned} \max_{{P_{r}},{P_{j}}}&\left\{{\max_{{\mathbf{w}}_{R},{\mathbf{w}}_{J}}\{C_{d}(M)-C_{e}(M)\}}\right\}\\ \mathrm{s.t.}~ &0\leq \|{\mathbf{w}}_{R}\|^{2}\leq{P_{r}}\\ &0\leq \|{\mathbf{w}}_{J}\|^{2}\leq{P_{j}}\\ &P_{r}+P_{j}=P_{t} \end{aligned} $$
It means that we can first determine the optimal directions of w R and w J and then seek the optimal power allocation between w R and w J . Furthermore, given a fixed w R , we only need to search for the optimal w J in a space of lower dimension than w, and vice versa. Based on this observation, we propose the following suboptimal designs in which w R or w J is directly determined by the related channel information and the other weight vector is optimized to maximize the secrecy capacity.
Information beam determined schemes
w R ⊥h re scheme
In this scheme, the information beamforming vector lies completely in the null space of the relay-eavesdropper channel; that is to say, there is no information leakage to the unintended user in the relay transmitting phase, i.e., \(C_{e}(M)=0\). Therefore, there is no need to jam the eavesdropper. The optimization problem of (18) in this case is equivalently formulated as
$$ \begin{aligned} \max_{{\mathbf{w}_{R}}}\quad&\|{\mathbf{w}}_{R}^{H}\mathbf{h}_{rd}\|^{2} \\ \mathrm{s.t.}\quad&{\mathbf{w}}_{R}^{H}{\mathbf{h}}_{re}=0\\ &\|{\mathbf{w}}_{R}\|^{2}\leq{P_{t}} \end{aligned} $$
Directly applying the Lagrange multiplier technique [31], we obtain the solution as
$$ {{\mathbf{w}}_{R}^{\star}=\frac{{\sqrt{P_{t}}}\left({\mathbf{I}}_{M}-{\mathbf{P}}\right){\mathbf{h}}_{rd}}{\left\|\left({\mathbf{I}}_{M}-{\mathbf{P}}\right){\mathbf{h}}_{rd}\right\|}}, $$
where \({\mathbf {P}}={\mathbf {h}}_{re}({\mathbf {h}}_{re}^{H}{\mathbf {h}}_{re})^{-1}{\mathbf {h}}_{re}^{H}\) is the orthogonal projection matrix onto the subspace spanned by \({\mathbf {h}}_{re}\). Note that problem (19) can only be solved for \(M\geqslant 2\). If the number of relays in the beamforming set is one, i.e., M=1, the null-space constraint cannot be met by a nonzero weight, and the beamforming weight is instead set to
$$ \mathbf{w}_{R}^{\star}=\frac{\sqrt{P_{t}}{h}_{rd}}{|{h}_{rd}|} $$
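The closed-form solutions (20)-(21) translate directly into code; the following sketch (ours) computes the null-space beamformer given the channel vectors and the power budget:

```python
import numpy as np

def wr_null_space(h_rd, h_re, Pt):
    """Information beam in the null space of the relay-eavesdropper channel, Eqs. (20)-(21)."""
    h_rd = np.asarray(h_rd).reshape(-1, 1)
    h_re = np.asarray(h_re).reshape(-1, 1)
    M = h_rd.shape[0]
    if M == 1:                                  # single beamforming relay: Eq. (21)
        return (np.sqrt(Pt) * h_rd / np.abs(h_rd)).ravel()
    P = h_re @ np.linalg.inv(h_re.conj().T @ h_re) @ h_re.conj().T  # projector onto span(h_re)
    v = (np.eye(M) - P) @ h_rd                  # component of h_rd orthogonal to h_re
    return (np.sqrt(Pt) * v / np.linalg.norm(v)).ravel()
```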
w R ∥h rd scheme
As maximum ratio transmission (MRT) is an efficient strategy to achieve larger capacity in multi-antenna systems [18], we let the relays that decode successfully perform MRT, where the information beamforming vector points directly to the desired destination. Thus, the information beamforming vector can be expressed as [35]
$$ \mathbf{w}_{R}^{\star}=\frac{\sqrt{P_{t}-P_{j}} \mathbf{h}_{rd}}{\left\|\mathbf{h}_{rd}\right\|} $$
Substituting (22) into (18), the optimization problem of (18) can be equivalently expressed as
$$ \begin{aligned} \max_{{\mathbf{w}}_{J},P_{j}}~&\log_{2}\left(1+\frac{\left(P_{t}-P_{j}\right)\cdot\left\|\mathbf{h}_{rd}\right\|^{2}} {\mathbf{w}_{J}^{H}\mathbf{H}_{jd}\mathbf{w}_{J}+N_{0}}\right)\\ &-\log_{2} \left(1+\frac{(P_{t}-P_{j})\cdot\|\mathbf{h}_{rd}^{\ast} \mathbf{h}_{re}\|^{2}}{\|\mathbf{h}_{rd}\|^{2}\left(\mathbf{w}_{J}^{H}\mathbf{H}_{je}\mathbf{w}_{J}+N_{0}\right)}\right)\\ \mathrm{s.t.}~&0\leq \|{\mathbf{w}}_{J}\|^{2}\leq{P_{j}}\\ & 0\leq P_{j} \leq P_{t} \end{aligned} $$
Similarly, given a P j ∈[0,P t ], we introduce an auxiliary variable ν to equivalently convert the above problem into
$$ \begin{aligned} \max_{{\mathbf{w}}_{J},P_{j}}~&\log_{2}\left(1+\frac{\left(P_{t}-P_{j}\right)\cdot\|\mathbf{h}_{rd}\|^{2}} {\mathbf{w}_{J}^{H}\mathbf{H}_{jd}\mathbf{w}_{J}+N_{0}}\right)-\log_{2}\left(\frac{1}{\nu}\right)\\ \mathrm{s.t.}~&\log_{2}\left(1+\frac{\left(P_{t}-P_{j}\right)\cdot\|\mathbf{h}_{rd}^{\ast}\mathbf{h}_{re}\|^{2}} {\|\mathbf{h}_{rd}\|^{2}\left(\mathbf{w}_{J}^{H}\mathbf{H}_{je}\mathbf{w}_{J}+N_{0}\right)}\right)\leq\log_{2}\left(\frac{1}{\nu}\right)\\ &0\leq \|{\mathbf{w}}_{J}\|^{2}\leq{P_{j}} \end{aligned} $$
Owing to the monotonicity of the logarithm function, it is equivalent to solving the problem
$$ {{\begin{aligned} \min_{{\mathbf{W}}_{J},{\nu}}~ &\frac{\text{Tr}\left(\mathbf{H}_{jd}{\mathbf{W}}_{J}\right)+N_{0}} {\left[\text{Tr}\left({\mathbf{H}}_{jd}{\mathbf{W}}_{J}\right)+N_{0}+\left(P_{t}-P_{j}\right)\|\mathbf{h}_{rd}\|^{2}\right]{\nu}}\\ \mathrm{s.t.} ~&\frac{\|\mathbf{h}_{rd}\|^{2}\text{Tr}\left({\mathbf{H}}_{je}{\mathbf{W}}_{J}\right) +\|\mathbf{h}_{rd}\|^{2}N_{0}} {\left[\|\mathbf{h}_{rd}\|^{2}\left(\text{Tr}\left({\mathbf{H}}_{je}{\mathbf{W}}_{J}\right)+N_{0}\right)+\left(P_{t}-P_{j}\right) \|\mathbf{h}_{rd}^{\ast}\mathbf{h}_{re}\|^{2}\right]{\nu}}\geq 1\\ &\text{Tr}\left({\mathbf{W}}_{J}\right)\leq P_{j},~{\mathbf{W}}_{J}\succeq{\mathbf 0}\\ &\text{Rank}\left(\mathbf{W}_{J}\right)=1 \end{aligned}}} $$
where \({\mathbf {W}}_{J}={\mathbf {w}}_{J}{\mathbf {w}}_{J}^{H}\) is a rank-one matrix. Comparing (25) with (11), we can also apply Algorithm 1 to solve problem (25) and thereby obtain the optimal jamming vector \(\mathbf {w}_{J}^{\star }\) for a given P j .
To achieve the optimal power allocation between P r and P j , we resort to a one-dimensional search on P j over the interval [0,P t ]. In this way, we finally obtain the optimal \(P_{j}^{\star }\), \(P_{r}^{\star }\), \(\mathbf {w}_{R}^{\star }\) and \(\mathbf {w}_{J}^{\star }\).
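A skeletal version of this outer search is sketched below; `solve_wj_given_pj` (standing for the Algorithm-1/SDR step applied to (25)) and `secrecy_capacity` (evaluating C d (M)−C e (M) for a candidate pair) are placeholder callbacks introduced only for this illustration, and the grid size is arbitrary.

```python
import numpy as np

def mrt_beam(h_rd, power):
    """MRT information beam of (22) with the residual power P_t - P_j."""
    h_rd = np.asarray(h_rd, dtype=complex)
    return np.sqrt(power) * h_rd / np.linalg.norm(h_rd)

def search_power_split(h_rd, P_t, solve_wj_given_pj, secrecy_capacity, n_grid=50):
    """One-dimensional search over the jamming power P_j in [0, P_t]."""
    best = (-np.inf, None, None, None)           # (C_s, P_j, w_R, w_J)
    for P_j in np.linspace(0.0, P_t, n_grid):
        w_R = mrt_beam(h_rd, P_t - P_j)          # information beam, cf. (22)
        w_J = solve_wj_given_pj(P_j)             # jamming beam, cf. (25)
        C_s = secrecy_capacity(w_R, w_J)
        if C_s > best[0]:
            best = (C_s, P_j, w_R, w_J)
    return best                                   # (C_s*, P_j*, w_R*, w_J*)
```

The same outer loop applies, with the inner step swapped, to the jamming-beam-determined schemes described next.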
Jamming beam determined schemes
w J ⊥h jd scheme
We first consider a null-steering beamforming method that constrains the jamming beamforming vector to lie completely in the null space of h jd [35]. Therefore, w J does not affect C d (M). For a given P j , to suppress C e (M) as much as possible, we formulate the following optimization problem on w J ,
$$ \begin{aligned} \max_{\mathbf{w}_{J}}\quad&\|{\mathbf{w}}_{J}^{H}\mathbf{h}_{je}\|^{2} \\ \mathrm{s.t.}\quad&{\mathbf{w}}_{J}^{H}{\mathbf{h}}_{jd}=0\\ &0\leq \|{\mathbf{w}}_{J}\|^{2}\leq{P_{j}} \end{aligned} $$
When the number of relays in the jamming set is larger than one, applying Lagrange multiplier optimization [8, 31], we can obtain the solution of (26) as
$$ \begin{aligned} {\mathbf{w}}_{J}^{\star}= \left\{\begin{array}{ll} {\sqrt{P_{j}}} \frac{\left({\mathbf{I}}_{K}-{\mathbf{Q}}\right){\mathbf{h}}_{je}}{\left\|({\mathbf{I}}_{K}-{\mathbf Q}){\mathbf{h}}_{je}\right\|} & \text{if}~K\geq2,\\ \sqrt{P_{j}}\frac{\mathbf{h}_{je}} {\|\mathbf{h}_{je}\|} & \text{if}~K=1.\\ \end{array}\right. \end{aligned} $$
where \({\mathbf Q}={\mathbf {h}}_{jd}({\mathbf {h}}_{jd}^{H}{\mathbf {h}}_{jd})^{-1}{\mathbf {h}}_{jd}^{H}\) is the orthogonal projection matrix onto the subspace spanned by h jd . Given a P j , substituting \(\mathbf {w}_{J}^{\star }\) into (18), the optimization problem is equivalent to
$$ \begin{aligned} \max_{{\mathbf{w}}_{R}}\quad &\frac{{\mathbf{w}}_{R}^{H}{\widetilde{\mathbf{E}}{\mathbf{w}}_{R}}}{{\mathbf{w}}_{R}^{H}{\widetilde{\mathbf{F}}{\mathbf{w}}_{R}}} \\ \mathrm{s.t.} \quad&\|{\mathbf{w}}_{R}\|^{2}\leq{P_{t}-P_{j}} \end{aligned} $$
where \({\widetilde {\mathbf {E}}}=\frac {1}{\sqrt {P_{t}-P_{j}}}{\mathbf {I}}_{M}+{\frac {1}{N_{0}}}{\mathbf {H}}_{rd} \) and \({\widetilde {\mathbf {F}}}=\frac {1}{\sqrt {P_{t}-P_{j}}}{\mathbf {I}}_{M}+{\frac {{\mathbf {H}}_{re}}{\sqrt {P_{t}-P_{j}}| {{{\mathbf {w}}_{J}^{\star }}^{H}}\mathbf {h}_{je}|^{2}+N_{0}}} \). Like (14), the solution of (28) is
$$ {\mathbf{w}}_{R}^{\star}=\sqrt{P_{t}-P_{j}}\cdot \zeta_{max}\left({\widetilde{\mathbf{F}}}^{-1}{\widetilde{\mathbf{E}}}\right) $$
in which \(\zeta _{max}({\widetilde {\mathbf {F}}}^{-1}{\widetilde {\mathbf {E}}})\) represents the eigenvector corresponding to the maximum eigenvalue of the matrix \({\widetilde {\mathbf {F}}}^{-1}{\widetilde {\mathbf {E}}}\). Similarly, we also need to search for the optimal \(P_{j}^{\star }\) over the interval [0,P t ] to maximize C s (M).
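Both (29) here and (32) in the next scheme amount to taking the dominant generalized eigenvector of a pair of Hermitian matrices. A minimal SciPy sketch, assuming `E` and `F` stand for the matrices defined after (28) (or (31)) and that `F` is positive definite, as it is in this setting:

```python
import numpy as np
from scipy.linalg import eigh

def dominant_rayleigh_direction(E, F, power):
    """Maximize w^H E w / w^H F w subject to ||w||^2 <= power.

    The optimal direction is the generalized eigenvector of (E, F) with the
    largest generalized eigenvalue, i.e. the dominant eigenvector of
    F^{-1} E; since scaling w does not change the quotient, the power
    budget is spent in full.
    """
    eigval, eigvec = eigh(E, F)       # generalized Hermitian problem, ascending order
    w = eigvec[:, -1]
    return np.sqrt(power) * w / np.linalg.norm(w)
```

The renormalization in the last line is needed because the generalized eigenvectors returned by `eigh` are F-orthonormal rather than unit-norm.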
w J ∥h je scheme
In this case, we let the jamming beam point directly at the eavesdropper to shrink the eavesdropper's capacity as much as possible. The jamming beamforming vector can then be expressed as
$$ \mathbf{w}_{J}^{\star}=\sqrt{P_{j}}\frac{\mathbf{h}_{je}}{\|\mathbf{h}_{je}\|} $$
Substituting (30) into (18), the secrecy capacity maximization problem can be further reformulated as
$$ \begin{aligned} \max_{{\mathbf{w}}_{R}}\quad &\frac{{\mathbf{w}}_{R}^{H}{\widetilde{\mathbf{G}}{\mathbf{w}}_{R}}}{{\mathbf{w}}_{R}^{H}{\widetilde{\mathbf{H}}{\mathbf{w}}_{R}}} \\ \mathrm{s.t.} \quad&\|{\mathbf{w}}_{R}\|^{2}\leq{P_{t}-P_{j}} \end{aligned} $$
where \({\widetilde {\mathbf {G}}}=\frac {1}{\sqrt {P_{t}-P_{j}}}{\mathbf {I}}_{M}+{\frac {1}{N_{0}}}{\mathbf {H}}_{rd} \) and \({\widetilde {\mathbf {H}}}=\frac {1}{\sqrt {P_{t}-P_{j}}}{\mathbf {I}}_{M}+{\frac {{\mathbf H}_{re}}{\sqrt {P_{t}-P_{j}}|{{\mathbf {w}}_{J}^{\star {H}}}\mathbf {h}_{je}|^{2}+N_{0}}} \). Thus, the optimal information beamforming vector of (31) is
$$ {\mathbf{w}}_{R}^{\star}=\sqrt{P_{t}-P_{j}}\cdot\phi_{max}({\widetilde{\mathbf{H}}}^{-1}{\widetilde{\mathbf{G}}}) $$
where \(\phi _{max}({\widetilde {\mathbf {H}}}^{-1}{\widetilde {\mathbf {G}}})\) denotes the eigenvector corresponding to the maximum eigenvalue of matrix \({\widetilde {\mathbf {H}}}^{-1}{\widetilde {\mathbf {G}}}\).
Thus, for each P j ∈[0,P t ] we can obtain \(\mathbf {w}_{J}^{\star }(P_{j})\) and \(\mathbf {w}_{R}^{\star }(P_{j})\). After checking a sufficiently fine set of power configurations, we ultimately obtain the globally optimal solution \(({\mathbf {w}}_{R}^{\star }, \mathbf {w}_{J}^{\star }, P_{j}^{\star })\) that maximizes C s (M).
Computational complexity analysis
In this subsection, we compare the computational complexities of the proposed schemes. Since the designed schemes differ only in their beamforming and jamming patterns in the relaying phase, we just need to analyze the computational complexity of computing the beamforming and jamming vectors given the beamforming set \(|\mathcal {D}|=M\) and the jamming set \(|\mathcal {J}|=K\). Note that N=M+K.
Computational complexity of the optimal scheme
In Algorithm 1, we first need to determine τ min and τ max , which requires solving the Rayleigh quotient problems that yield λ min (B −1 A) in (15) and λ max (D −1 C) in (16). According to [33], the computational complexity of a Rayleigh quotient problem is \(\mathcal {O}(22N^{2})\), so determining τ min and τ max costs \(\mathcal {O}(44N^{2})\). Over the search interval [τ min ,τ max ] there are L candidate values τ i , i=1,2,…,L, so problem (12) is solved L times. According to [25] and [36], the computational complexity of one inner-level optimization (12) is \(\mathcal {O}\left ((N+1)^{0.5}(2(N+1)^{3}+4(N+1)^{2}+8)\right)\log (1/\epsilon)\), in which ε is the accuracy of solving the SDP. Note that in Algorithm 1 the L runs of problem (12) start only after τ min and τ max have been determined; that is, τ min and τ max are computed only once and the two steps are performed sequentially. In addition, since the GRP is activated with only a very small probability and most results from the SDP already satisfy the rank-one constraint, we only account for the computational complexity of the SVD here. According to [36], the computational complexity of the SVD is \(\mathcal {O}(N^{3})\). Therefore, the total computational complexity of the proposed optimal scheme is \(\mathcal {O}\left (44N^{2}\right)+\mathcal {O}(N^{3})+\mathcal {O}\left (L((N+1)^{0.5}(2(N+1)^{3}+4(N+1)^{2}+8))\right)\log (1/\epsilon)\).
Computational complexity of w R ⊥h re scheme
In the w R ⊥h re scheme, we only need to compute (20). Therefore, the computational complexity is \(\mathcal {O}(M^{2})\) [33].
Computational complexity of w R ∥h rd scheme
To obtain \(\mathbf {w}_{R}^{\star }\), we have to calculate (22), with computational complexity \(\mathcal {O}(M^{2})\). After that, to obtain \(\mathbf {w}_{J}^{\star }\), the proposed Algorithm 1 is applied to solve (25). Moreover, it is assumed that the above two steps are repeated L p times in the one-dimensional search over [0,P t ]. As a result, the computational complexity of the w R ∥h rd scheme can be expressed as
$$\begin{aligned} \mathcal{O} \left(L_{p}M^{2}\right)&+ \mathcal{O}\left(44L_{p}K^{2}\right)+\mathcal{O}(L_{p}K^{3})\\ &+\mathcal{O}\left(L_{p}L\left((K+1)^{0.5}(2(K+1)^{3}\right.\right.\\ &\left.\left.+4(K+1)^{2}+8\right)\log(1/\epsilon))\right). \end{aligned} $$
Computational complexity of w J ⊥h jd Scheme
In the w J ⊥h jd scheme, we only need to compute (27) and (29). Thus, in light of [33], the computational complexity of the w J ⊥h jd scheme is \(\mathcal {O}\left (L_{p}K^{2}\right)+\mathcal {O}\left (22L_{p}M^{2}\right)\).
Computational complexity of w J ∥h je scheme
Similar to w J ⊥h jd scheme, the computational complexity is also \(\mathcal {O}\left (L_{p} K^{2}\right)+\mathcal {O}\left (22L_{p}M^{2} \right)\).
In summary, the computational complexities of all proposed schemes are listed in Table 1. Since the one-dimensional search lengths L and L p remain constant as N increases, we focus on how the computational complexity grows as N, K, and M increase. As N is no less than M and K, we can draw two conclusions:
The optimal scheme has the highest computational complexity among all proposed schemes.
The w R ⊥h re scheme incurs the lowest computational complexity among all proposed schemes.
Table 1 Computational complexity of all schemes
Numerical results
In this section, numerical results are presented to evaluate the secrecy performance of our proposed optimal and suboptimal designs. As shown in Fig. 2, without loss of generality, we consider the scenario in which the source, the eavesdropper, and the destination are located on a straight line, and the relays are randomly distributed around the middle point between the source and destination. The locations of these nodes are shown in Fig. 2. We suppose that all relays are so close to each other that they share the same location in our simulations. The other parameters are set as K 0=1, β=3, and \(\sigma ^{2}_{0}=1\) [19]. The SNR threshold at which a relay decodes the received message correctly is 3 dB. Moreover, we set the additive Gaussian noise power to N 0=1 mW. As benchmarks for our designs, we also simulate the performance of the CB, SRSJ, SRMJ, and MRSJ schemes with optimal beamforming to achieve their maximum secrecy capacity. Note that the optimal beamforming vectors and power for the CB, SRSJ, SRMJ, and MRSJ schemes are obtained by exhaustive search. For the SRSJ, SRMJ, and MRSJ schemes, to perform jamming relay selection in DF relay networks, we give priority to selecting the jamming relay among the relays that fail to decode the message, and pick the best jamming relay only if all relays decode the message from the source correctly.
Scenario used for numerical experiments
Figure 3 shows the secrecy performance of the various secrecy transmission schemes versus P t . Clearly, the proposed optimal scheme outperforms all other schemes. Apart from the optimal scheme, the suboptimal w J ⊥h jd scheme achieves a larger secrecy capacity than the remaining transmission schemes. As a result, if the system cannot afford the huge computational complexity of the proposed optimal scheme, the w J ⊥h jd scheme is recommended to achieve a good tradeoff between secrecy capacity and computational complexity. Besides, in this scenario, the w R ∥h rd scheme achieves a performance similar to that of the SRMJ scheme, and the w J ∥h je scheme obtains a larger secrecy capacity than the CB, MRSJ, and SRSJ schemes.
The average secrecy capacity versus total transmit power of relays, P s =0 dB, N=5
In Fig. 4, we show the average secrecy performance of these transmission schemes versus the number of relays N. Obviously, the average secrecy capacity increases as N increases. The optimal scheme always achieves the maximum secrecy capacity among all transmission schemes. Similarly, the lower-complexity w J ⊥h jd scheme has the second-best performance. When N=3, the performance gaps become very small. The reason is that there are not enough degrees of freedom for these transmission schemes to produce different results. Meanwhile, as N increases, the performance gap between any two schemes increases; that is, the proposed schemes are more suitable for large-scale DF relay networks.
The average secrecy capacity as a function of the number of relays N for DF relay network, P t =10 dB, P s =0 dB
To investigate the effect of the relay positions on the secrecy performance of these transmission schemes, we plot the average secrecy capacity versus d sr in Fig. 5. The proposed optimal scheme always delivers the best secrecy performance among all these transmission schemes. Interestingly, there are two extreme cases worth observing. When the relays approach the source closely, the optimal scheme, the w J ⊥h jd scheme, the w J ∥h je scheme, the w R ⊥h re scheme, and the CB scheme have nearly the same performance. This is because the probability of a relay decoding the message from the source correctly tends to 1, so that the above-mentioned schemes can employ almost all relays to perform information beamforming. When the relays approach the destination, all transmission schemes tend to incur zero secrecy capacity, which means the relay network cannot provide physical-layer secrecy. The reason is that the probability of a relay decoding the message from the source successfully tends to zero. Meanwhile, we can see that there exists an optimal d sr for each scheme in Fig. 5. For example, with the simulation parameters, d sr ≈4 m is the optimal distance to achieve the best secrecy rate for the optimal scheme, the w J ⊥h jd scheme, the w J ∥h je scheme, the w R ⊥h re scheme, and the CB scheme, while d sr ≈5 m is the optimal distance for the MRSJ scheme. Similarly, the other schemes also have their optimal d sr in Fig. 5. Wherever the relays are placed, the optimal scheme still achieves the maximum secrecy rate among all schemes. In summary, if the relays are close to the source, we can choose one of the w J ⊥h jd , w J ∥h je , w R ⊥h re , and CB schemes to configure the secrecy transmission. Otherwise, the optimal scheme is recommended to achieve the maximum secrecy capacity, and the w J ⊥h jd scheme is suggested as a good tradeoff between secrecy performance and computational complexity.
The average secrecy capacity versus the distance from source to relays (d sr ), P t =10 dB, N=5
In this paper, we proposed an optimal scheme and four suboptimal schemes with low computational complexity for DF relay networks to enhance transmission security. Unlike prior works, the proposed transmission schemes exploit the property of DF relays by letting the relays that decode incorrectly transmit jamming signals to confound the eavesdropper, while the relays that decode correctly perform information beamforming toward the destination. According to our numerical results, the optimal scheme outperforms all existing schemes and the proposed suboptimal schemes. In addition, some of the suboptimal schemes with low computational complexity also achieve better secrecy performance than the existing schemes. Moreover, we found that our proposed schemes are more suitable for large-scale relay networks and for scenarios where the relays are near the midpoint between the source and destination.
M Bloch, J Barros, Physical-Layer Security: From Information Theory to Security Engineering (Cambridge University Press, Cambridge, 2011).
A Mukherjee, S Fakoorian, J Huang, AL Swindlehurst, Principles of physical layer security in multiuser wireless networks: a survey. IEEE Commun. Surveys Tuts. 16(3), 1550–1573 (2014).
N Yang, HA Suraweera, IB Collings, C Yuen, Physical layer security of TAS/MRC with antenna correlation. IEEE Trans. Inf. Forensic Secur. 8(1), 254–259 (2013).
J Zhang, C Yuen, CK Wen, S Jin, KK Wong, H Zhu, Large system secrecy rate analysis for SWIPT MIMO wiretap channels. IEEE Trans. Inf. Forensic Secur. 11(1), 74–85 (2015).
X Chen, C Zhong, C Yuen, HH Chen, Multi-antenna relay aided wireless physical layer security. IEEE Commun. Mag. 53(12), 40–46 (2015).
R Bassily, E Ekrem, X He, E Tekin, J Xie, MR Bloch, S Ulukus, A Yener, Cooperative security at the physical layer: a summary of recent advances. IEEE Signal Process. Mag. 30(5), 16–28 (2013).
A Kuhestani, A Mohammadi, M Noori, Optimal power allocation to improve secrecy performance of non-regenerative cooperative systems using an untrusted relay. IET Commun. 10(8), 962–968 (2016).
L Dong, Z Han, AP Petropulu, HV Poor, Improving wireless physical layer security via cooperating relays. IEEE Trans. Signal Process. 58(3), 1875–1888 (2010).
C Jeong, I Kim, DI Kim, Joint secure beamforming design at the source and the relay for an amplify-and-forward MIMO untrusted relay system. IEEE Trans. Signal Process. 60(1), 310–325 (2012).
Y Yang, Q Li, W Ma, J Ge, PC Ching, Cooperative secure beamforming for AF relay networks with multiple eavesdroppers. IEEE Signal Process. Lett. 20(1), 35–38 (2013).
X Gong, H Long, H Yin, F Dong, B Ren, Robust amplify-and-forward relay beamforming for security with mean square error constraint. IET Commun. 9(8), 1081–1087 (2015).
L Wang, Y Cai, Y Zou, W Yang, L Hanzo, Joint relay and jammer selection improves the physical layer security in the face of CSI feedback delays. IEEE Trans. Veh. Technol. 65(8), 6259–6274 (2016).
C Wang, H Wang, X Xia, Hybrid opportunistic relaying and jamming with power allocation for secure cooperative networks. IEEE Trans. Wireless Commun. 14(2), 589–605 (2015).
H Wang, M Luo, X Xia, Q Yin, Joint cooperative beamforming and jamming to secure AF relay systems with individual power constraint and no eavesdropper's CSI. IEEE Signal Process. Lett. 20(1), 39–42 (2013).
X Chen, L Lei, H Zhang, C Yuen, Large-scale MIMO relaying techniques for physical layer security: AF or DF?IEEE Trans. Wirel. Commun. 14(9), 5135–5146 (2015).
M Lin, J Ge, Y Yang, An effective secure transmission scheme for af relay networks with two-hop information leakage. IEEE Commun. Lett. 17(8), 1676–1679 (2013).
S Vishwakarma, A Chockalingam, in Proc.IEEE Int. Conf. Commun. (ICC). Amplify-and-forward relay beamforming for secrecy with cooperative jamming and imperfect CSI (IEEEBudapest, 2013), pp. 1640–1645.
H Hui, A Swindlehurst, G Li, J Liang, Secure relay and jammer selection for physical layer security. IEEE Signal Process. Lett. 22(8), 1147–1151 (2015).
L Wang, C Cao, M Song, Y Cheng, in Proc.IEEE Int. Conf. Commun. (ICC). Joint cooperative relaying and jamming for maximum secrecy capacity in wireless networks (IEEESydney, 2014), pp. 4448–4453.
X Guan, Y Cai, Y Wang, W Yang, in Proc of IEEE PIMRC, Toronto, Canada. Increasing secrecy capacity via joint design of cooperative beamforming and jamming (IEEEToronto, 2011), pp. 1279–1283.
N Kolokotronis, A Manos, in Proc.IEEE Signal Process. Conf. (EUSIPCO). Improving physical layer security in DF relay networks via two-stage cooperative jamming, (2016), pp. 1173–1177.
ER Alotaibi, KA Hamdi, Optimal cooperative relaying and jamming for secure communication. IEEE Wireless Commun. Lett. 6:, 689–692 (2015).
J Li, AP Petropulu, S Weber, On cooperative relaying schemes for wireless physical layer security. IEEE Trans. Signal Process. 59(10), 4985–4997 (2011).
Z Lin, Y Cai, W Yang, L Wang, Robust secure switching transmission in multi-antenna relaying systems: cooperative jamming or decode-and-forward beamforming. IET Commun. 10(13), 1673–1681 (2016).
B Li, Z Fei, Robust beamforming and cooperative jamming for secure transmission in DF relay systems. EURASIP J. Wireless Commun. Networking (2016). doi:10.1186/s13638-016-0560-1.
J Myung, H Heo, J Park, Joint beamforming and jamming for physical layer security. ETRI. 37(6), 898–905 (2015).
C Gu, C Zhang, in Proc.of IEEE International Conference on Communication Systems (ICCS). Adaptive distributed beamforming and jamming in DF relay networks for physical layer secrecy (IEEEShenzhen, 2016), pp. 1–5.
H Guo, Z Yang, L Zhang, J Zhu, Y Zou, in Proc.of IEEE ICC. Optimal power allocation for joint cooperative beamforming and jamming assisted wireless networks (IEEEParis, 2017).
M Bloch, J Barros, MRD Rodrigues, SW McLaughlin, Wireless information-theoretic security. IEEE Trans. Inf. Theory. 54(6), 2515–2534 (2008).
Z Luo, W Ma, A So, Y Ye, S Zhang, Semidefinite relaxation of quadratic optimization problems. IEEE Signal Process. Mag. 27(3), 20–34 (2010).
S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004).
A Charnes, W Cooper, Programming with linear fractional functionals. Naval Res. Logist Quarter. 9:, 181–186 (1962).
GH Golub, CF Van Loan, Matrix Computations, 3rd edn. (The John Hopkins Univ. Press, Baltimore, 1996).
RA Horn, CR Johnson, Matrix Analysis (Cambridge University Press, Cambridge, 1985).
A Goldsmith, Wireless Communications (Cambridge University Press, Cambridge, 2004).
M Lobo, L Vandenberghe, S Boyd, H Lebret, Applications of second-order cone programming. Linear Algebra Appl. 284:, 193–228 (1998).
The work was supported in part by the National Natural Science Foundation of China under Grant No. 61431011, NSFC, China, and the Fundamental Research Funds for the Central Universities, XJTU, China.
School of Electronics and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, China
Chengmin Gu
& Chao Zhang
CG and CZ proposed the ideas in this paper. CG performed the analysis and simulations and wrote the paper. CZ reviewed and edited the manuscript. All authors read and approved the manuscript.
Correspondence to Chao Zhang.
Gu, C., Zhang, C. Joint distributed beamforming and jamming schemes in decode-and-forward relay networks for physical layer secrecy. J Wireless Com Network 2017, 206 (2017) doi:10.1186/s13638-017-0997-x
DOI: https://doi.org/10.1186/s13638-017-0997-x
Physical layer secrecy
Distributed beamforming
Relay networks | CommonCrawl |
L. Belova
A. V. Sereda
17 Publications, 13 Citations
I. Sukhov
8 Publications, 8 Citations
G. B. Lapa
O. N. Tolkachev
Evidence for high-temperature ferromagnetism in photolyzed C 60
F. Owens, F. Owens, Z. Iqbal, L. Belova, K. P. Rao
${\mathrm{C}}_{60}$ transforms to a polymeric phase where the ${\mathrm{C}}_{60}$ molecules are bonded to form a chain structure when it is subjected to ultraviolet radiation at ambient temperature…
[Asarone and its biological properties].
L. Belova, S. D. Alibekov, A. I. Baginskaia, S. Sokolov, G. V. Pokrovskaia
Farmakologiia i toksikologiia
In acute and chronic experiments on mice, rats, cats and rabbits in a wide range of tests, a study was conducted of the biological properties of alpha-asarone (1-propenyl-2,4,5-…
[Pharmacology of calenduloside B, a new triterpene glycoside from the roots of Calendula officinalis].
A. I. Iatsyno, L. Belova, G. S. Lipkina, S. Sokolov, E. A. Trutneva
Calendulozide B--trioside of oleanolic acid, isolated from rhizomes of Calendula officinalis, Fam. Compositae, used perorally in doses of 5, 10, 20 and 50 mg/kg exerted an antiulcerous action in 3…
[Pharmacological properties of inulicin, a sesquiterpene lactone from Japanese inula].
L. Belova, A. I. Baginskaia, T. E. Trumpe, S. Sokolov, K. S. Rybalko
Inulicin is a sesquiterpene lactone obtained from Inula japonica L. (Compositae) and possesses a fairly wide spectrum of pharmacological action. In doses from 5 to 60 mg/kg it exerts a certain stimulant…
Optical and physico-chemical properties of alpha-LiIO3 single crystals
S. A. Kutolin, L. Belova, R. Samoilova, O. M. Kotenko, I. M. Dokuchaeva, N. Ivanova
Magnetic, optical and transport properties of transparent amorphuous Fe77B17Nb6 thin films
A. Masood, A. Bisvas, +4 authors K. Rao
Optical and physico-chemical properties of alpha Li I O3 monocrystal
S. A. Butolin, L. Belova, N. Ivanova, R. N. Samoylova, O. M. Kotenko, I. M. Dokuchaeva
Polarisation dependent photonic band gap of three-dimensional Fe3O4/SiO2 magnetic photonic crystals
T. Volotinen, M. Fang, +6 authors G. A. Gehring
Epoxidation of allyl chloride and allyl bromide with permaleic acid
M. S. Malinovskii, V. G. Dryuk, A. F. Kurochkin, L. Belova, A. G. Yudasina
[Pharmacological properties of adicin, Soviet alpha-acetyldigitoxin].
L. Belova, S. Sokolov, E. V. Arzamastsev, S. D. Alibekov, M. I. Mironova
Adicin, Soviet alpha-acetyldigitoxin, obtained from Digitalis lanata Ehrh., c. Scrophulariaceae after isolation from it of celanid is a highly effective cardiotonic of the digitalis type of action…
How To Join Nine Dots With Four Lines
9 dots puzzle solution? Kairo General Discussions
28/12/2006 · without raising your pen/pencil and without retracing any line. For the sake of convenience of explaining in this forum, you may identify the dots with numbers or alphabet like 1, 2, 3 in the first row, 4,5,6 in the second row and 7,8,9 in the third row.... Nine Dots Try to connect nine equally spaced dots using four lines or fewer without lifting a pen! This activity literally requires thinking outside of the box in order to solve it.
Training for Insight The Case of the Nine-Dot Problem
4/09/2008 · actually this problem is not just a matter of using 4 lines to connect 9 dots, set up in three rows of three, but also to not pick up your pen or pencil. not only is this possible, but like one person stated, think outside the box.... In adult discourse the phrase "connect the dots" can be used as a metaphor to illustrate an ability (or inability) to associate one idea with another, to find the "big picture", or salient feature, in a mass of data.
How do you join a square of 9 dots with 4 lines without
HELLO Peoples! I came up with an interesting but difficult Puzzle to Solve. In this puzzle image 9 Dots are given and you have to Connect the 9 Dots using 4 Straight lines or less, and do so without lifting the Pen or Tracing the same line more than Once! One classical example is where nine dots are arranged on the sides and the center of a square as in the picture below. The problem is to connect the dots with no more than 4 straight lines without lifting your hand from the paper.
Why Nine Dots? The NineDot Partnership
The aim of the Nine Dots Puzzle is to draw a path connecting 9 dots arranged in a 3×3 grid using 4 continuous straight lines, never lifting the pen/pencil from the piece of paper. 23/01/2008 · Connect 9 dots in a square with 4 lines without lifting your pen and without crossing the line: is it possible? How to connect 9 dots on a 3×3 square grid with 4 straight lines …
Level Up Puzzles Connect a 4x4 Grid of Dots With 6
How to connect 9 dots with 4 lines keyword-suggest-tool.com
9 Dots 4 Lines AnandTech Forums Technology Hardware
Awesome Connect 9 Dots with 4 Lines without Lifting Pencil
How to connect the nine dots with four straight lines?
Here is one way through which you could join all 9 dots with four lines without coming back.
8/09/2010 · Best Answer: I shall draw Os as dots. I presume you mean a set-up like this:
OOO
OOO
OOO
Step 1. Start at the top left and draw a line to the right but go beyond the third dot …
Connect the dots (also known as dot to dot or join the dots) is a form of puzzle containing a sequence of numbered dots. When a line is drawn connecting the dots the outline of an object is revealed.
12/05/2009 · First of all, this is not math. Second of all, the answer is to start at a corner and go diagonally to the opposite corner. Now from that corner, go along an edge, but go past the last dot as if there were one more dot after the end.
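To make the "think outside the box" idea concrete, here is a small Python check of one standard four-line solution (not necessarily the exact path described in the answers above); the nine dots sit at the integer coordinates (0..2, 0..2), and two of the turning points lie outside the square.

```python
# One classic solution: a single unbroken path of 4 straight lines.
dots = {(x, y) for x in range(3) for y in range(3)}
path = [(0, 0), (0, 3), (3, 0), (0, 0), (2, 2)]   # 5 corner points -> 4 segments

def on_segment(p, a, b):
    """True if the integer point p lies on the closed segment from a to b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    collinear = (bx - ax) * (py - ay) == (by - ay) * (px - ax)
    inside = min(ax, bx) <= px <= max(ax, bx) and min(ay, by) <= py <= max(ay, by)
    return collinear and inside

covered = {d for d in dots
           if any(on_segment(d, path[i], path[i + 1]) for i in range(len(path) - 1))}
print(len(path) - 1, "lines cover", len(covered), "of 9 dots")   # 4 lines cover 9 of 9 dots
```

The path starts at the bottom-left dot, runs up past the top of the grid, cuts diagonally down to a point past the bottom-right dot, comes back along the bottom row, and finishes along the main diagonal, which is exactly the "go past the last dot" trick mentioned above.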
February 2018, 11(1): 43-69. doi: 10.3934/krm.2018003
A derivation of the Vlasov-Stokes system for aerosol flows from the kinetic theory of binary gas mixtures
Etienne Bernard 1, , Laurent Desvillettes 2, , François Golse 3, and Valeria Ricci 4,
IGN-LAREG, Université Paris Diderot, Bâtiment Lamarck A, 5 rue Thomas Mann, Case courrier 7071, 75205 Paris Cedex 13, France,
Université Paris Diderot, Sorbonne Paris Cité, Institut de Mathématiques de Jussieu — Paris Rive Gauche, UMR CNRS 7586, CNRS, Sorbonne Universités, UPMC Univ. Paris 06, 75013, Paris, France,
CMLS, Ecole polytechnique et CNRS, Université Paris-Saclay, 91128 Palaiseau Cedex, France,
Dipartimento di Matematica e Informatica, Universitá degli Studi di Palermo, Via Archirafi 34, I90123 Palermo, Italy
Received October 2016 Revised March 2017 Published August 2017
In this paper, we formally derive the thin spray equation for a steady Stokes gas; that is, the model couples a kinetic (Vlasov-type) equation for the dispersed phase with a steady Stokes equation for the gas. Our starting point is a system of Boltzmann equations for a binary gas mixture. The derivation follows the procedure already outlined in [Bernard, Desvillettes, Golse, Ricci, Commun. Math. Sci., 15 (2017), 1703-1741], where the evolution of the gas is governed by the Navier-Stokes equation.
Keywords: Vlasov-Stokes system, Boltzmann equation, hydrodynamic limit, aerosols, sprays, gas mixture.
Mathematics Subject Classification: Primary: 35Q20, 35B25; Secondary: 82C40, 76T15, 76D07.
Citation: Etienne Bernard, Laurent Desvillettes, François Golse, Valeria Ricci. A derivation of the Vlasov-Stokes system for aerosol flows from the kinetic theory of binary gas mixtures. Kinetic & Related Models, 2018, 11 (1) : 43-69. doi: 10.3934/krm.2018003
G. Allaire, Homogenization of the Navier-Stokes equations in open sets perforated with tiny holes. I. Abstract framework, a volume distribution of holes, Arch. Rational Mech. Anal., 113 (1990), 209-259. doi: 10.1007/BF00375065. Google Scholar
C. Bardos, F. Golse and C. D. Levermore, Fluid dynamic limits of kinetic equations. I. Formal derivations, J. Stat. Phys., 63 (1991), 323-344. doi: 10.1007/BF01026608. Google Scholar
E. Bernard, L. Desvillettes, F. Golse and V. Ricci, A derivation of the Vlasov-Navier-Stokes model for aerosol flows from kinetic theory, Commun. Math. Sci., 15 (2017), 1703-1741. doi: 10.4310/CMS.2017.v15.n6.a11. Google Scholar
J. A. Carrillo, Y.-P. Choi and T. K. Karper, On the analysis of a coupled kinetic-fluid model with local alignment forces, Ann. Inst. H. Poincaré Anal. Non Linéaire, 33 (2016), 273-307. doi: 10.1016/j.anihpc.2014.10.002. Google Scholar
C. Cercignani, Theory and Applications of the Boltzmann Equation, Elsevier, New York, 1975. Google Scholar
F. Charles, Kinetic modelling and numerical simulations using particle methods for the transport of dust in a rarefied gas, Proceedings of the 26th International Symposium on Rarefied Gas Dynamics, AIP Conf. Proc, 1084 (2009), 409-414. doi: 10.1063/1.3076512. Google Scholar
F. Charles, Modélisation Mathématique et Étude Numérique d'un Aérosol dans un Gaz Raréfié. Application á la Simulation du Transport de Particules de Poussiére en Cas d'Accident de Perte de Vide dans ITER, Ph.D thesis, ENS Cachan, 2009.Google Scholar
F. Charles, S. Dellacherie and J. Segré, Kinetic modeling of the transport of dust particles in a rarefied atmosphere Math. Models Methods Appl. Sci. 22 (2012), 1150021, 60 pp. doi: 10.1142/S0218202511500217. Google Scholar
Y.-P. Choi, Finite-time blow-up phenomena of Vlasov/Navier-Stokes equations and related systems J. Math. Pures Appl. (2017). doi: 10.1016/j.matpur.2017.05.019. Google Scholar
Y.-P. Choi and B. Kwon, Global well-posedness and large-time behavior for the inhomogeneous Vlasov-Navier-Stokes equations, Nonlinearity, 28 (2015), 3309-3336. doi: 10.1088/0951-7715/28/9/3309. Google Scholar
D. Cioranescu and F. Murat, Un terme étrange venu d'ailleurs, Nonlinear Partial Differential Equations and their Applications, 60 (1982), 98-138. Google Scholar
P. Degond and B. Lucquin-Desreux, The asymptotics of collision operators for two species of particles of disparate masses, Math. Models Meth. Appl. Sci., 6 (1996), 405-436. doi: 10.1142/S0218202596000158. Google Scholar
B. Desjardins and M. J. Esteban, Existence of weak solutions for the motion of rigid bodies in a viscous fluid, Arch. Ration. Mech. Anal., 146 (1999), 59-71. doi: 10.1007/s002050050136. Google Scholar
L. Desvillettes and F. Golse, A remark concerning the Chapman-Enskog asymptotics, Advances in Kinetic Theory and Computing, Series on Advances in Mathematics for Applied Sciences, 22 (1994), 191-203. Google Scholar
L. Desvillettes, F. Golse and V. Ricci, The mean-field limit for solid particles in a Navier-Stokes flow, J. Stat. Phys., 131 (2008), 941-967. doi: 10.1007/s10955-008-9521-3. Google Scholar
L. Desvillettes and J. Mathiaud, Some aspects of the asymptotics leading from gas-particles equations towards multiphase flows equations, J. Stat. Phys., 141 (2010), 120-141. doi: 10.1007/s10955-010-0044-3. Google Scholar
M. A. Gallis, J. R. Torczyinski and D. J. Rader, An approach for simulating the transport of spherical particles in a rarefied gas flow via the direct simulation Monte-Carlo method, Phys. Fluids, 13 (2001), 3482-3492. doi: 10.1063/1.1409367. Google Scholar
D. Gérard-Varet and M. Hillairet, Regularity issues in the problem of fluid structure interaction, Arch. Ration. Mech. Anal., 195 (2010), 375-407. doi: 10.1007/s00205-008-0202-9. Google Scholar
F. Golse, Fluid dynamic limits of the kinetic theory of gases, From Particle Systems to Partial Differential Equations, 75 (2013), 3-91. doi: 10.1007/978-3-642-54271-8_1. Google Scholar
T. Goudon, P.-E. Jabin and A. Vasseur, Hydrodynamic limit for the Vlasov-Navier-Stokes equations. I. Light particles regime, Indiana Univ. Math. J., 53 (2004), 1495-1515. doi: 10.1512/iumj.2004.53.2508. Google Scholar
T. Goudon, P.-E. Jabin and A. Vasseur, Hydrodynamic limit for the Vlasov-Navier-Stokes equations. II. Fine particles regime, Indiana Univ. Math. J., 53 (2004), 1517-1536. doi: 10.1512/iumj.2004.53.2509. Google Scholar
K. Hamdache, Global existence and large time behaviour of solutions for the Vlasov-Stokes equations, Japan J. Indust. Appl. Math., 15 (1998), 51-74. doi: 10.1007/BF03167396. Google Scholar
M. Hauray, Wasserstein distances for vortices approximation of Euler-type equations, Math. Models Methods Appl. Sci., 19 (2009), 1357-1384. doi: 10.1142/S0218202509003814. Google Scholar
M. Hillairet, On the homogenization of the Stokes problem in a perforated domain, preprint, arXiv: 1604.04379 [math.AP].Google Scholar
P.-E. Jabin and F. Otto, Identification of the dilute regime in particle sedimentation, Comm. Math. Phys., 250 (2004), 415-432. doi: 10.1007/s00220-004-1126-3. Google Scholar
S. Klainerman and A. Majda, Compressible and incompressible fluids, Comm. Pure and Appl. Math., 35 (1982), 629-651. doi: 10.1002/cpa.3160350503. Google Scholar
L. D. Landau and E. M. Lifshitz, Physical Kinetics. Course of Theoretical Physics, Vol. 10, Pergamon Press, 1981. Google Scholar
P.-L. Lions, Mathematical Topics in Fluid Mechanics, Vol. 1. Incompressible Models, Oxford University Press Inc., New York, 1996. Google Scholar
P.-L. Lions and N. Masmoudi, Incompressible limit for a compressible fluid, J. Math. Pures Appl., 77 (1998), 585-627. doi: 10.1016/S0021-7824(98)80139-6. Google Scholar
V. A. L'vov and E. Ya. Khruslov, Perturbation of a viscous incompressible fluid by small particles, (Russian), Theor. Appl. Quest. Differ. Equ. Algebra, 267 (1978), 173-177. Google Scholar
G. de Rham, Differentiable Manifolds: Forms, Currents, Harmonic Forms Springer-Verlag, Berlin, 1984. doi: 10.1007/978-3-642-61752-2. Google Scholar
Y. Sone, Molecular Gas Dynamics. Theory, Techniques and Applications Birkhäuser, Boston, 2007. doi: 10.1007/978-0-8176-4573-1. Google Scholar
S. Taguchi, On the drag exerted on the sphere by a slow uniform flow of a rarefied gas, Proc. of the 29th Internat. Symp. on Rarefied Gas Dynamics, 1628 (2014), 51-59. doi: 10.1063/1.4902574. Google Scholar
S. Taguchi, Asymptotic theory of a uniform flow of a rarefied gas past a sphere at low Mach numbers, J. Fluid Mech., 774 (2015), 363-394. doi: 10.1017/jfm.2015.265. Google Scholar
S. Takata, Y. Sone and K. Aoki, Numerical analysis of a uniform flow of a rarefied gas past a sphere on the basis of the Boltzmann equation for hard-sphere molecules, Mathematical Analysis of Phenomena in Fluid and Plasma Dynamics, 824 (1993), 64-93. doi: 10.1063/1.858655. Google Scholar
D. Wang and C. Yu, Global weak solution to the inhomogeneous Navier-Stokes-Vlasov equations, J. Diff. Equations, 259 (2015), 3976-4008. doi: 10.1016/j.jde.2015.05.016. Google Scholar
C. Yu, Global weak solutions to the incompressible Navier-Stokes-Vlasov equations, J. Math. Pures Appl., 100 (2013), 275-293. doi: 10.1016/j.matpur.2013.01.001. Google Scholar
Parameter Definition
$L$ size of the container (periodic box)
$\mathcal{N}_p$ number of particles$/L^3$
$\mathcal{N}_g$ number of gas molecules$/L^3$
$V_p$ thermal speed of particles
$V_g$ thermal speed of gas molecules
$S_{pp}$ average particle/particle cross-section
$S_{pg}$ average particle/gas cross-section
$S_{gg}$ average molecular cross-section
$\eta=m_g/m_p$ mass ratio (molecules/particles)
$\mu=(m_g \mathcal{N}_g)/(m_p \mathcal{N}_p)$ mass fraction (gas/dust or droplets)
${\epsilon}=V_p/V_g$ thermal speed ratio (particles/molecules)
Raffaele Esposito, Yan Guo, Rossana Marra. Stability of a Vlasov-Boltzmann binary mixture at the phase transition on an interval. Kinetic & Related Models, 2013, 6 (4) : 761-787. doi: 10.3934/krm.2013.6.761
Jean Dolbeault. An introduction to kinetic equations: the Vlasov-Poisson system and the Boltzmann equation. Discrete & Continuous Dynamical Systems - A, 2002, 8 (2) : 361-380. doi: 10.3934/dcds.2002.8.361
Renjun Duan, Tong Yang, Changjiang Zhu. Boltzmann equation with external force and Vlasov-Poisson-Boltzmann system in infinite vacuum. Discrete & Continuous Dynamical Systems - A, 2006, 16 (1) : 253-277. doi: 10.3934/dcds.2006.16.253
Franco Flandoli, Marta Leocata, Cristiano Ricci. The Vlasov-Navier-Stokes equations as a mean field limit. Discrete & Continuous Dynamical Systems - B, 2019, 24 (8) : 3741-3753. doi: 10.3934/dcdsb.2018313
Ioannis Markou. Hydrodynamic limit for a Fokker-Planck equation with coefficients in Sobolev spaces. Networks & Heterogeneous Media, 2017, 12 (4) : 683-705. doi: 10.3934/nhm.2017028
Robert T. Glassey, Walter A. Strauss. Perturbation of essential spectra of evolution operators and the Vlasov-Poisson-Boltzmann system. Discrete & Continuous Dynamical Systems - A, 1999, 5 (3) : 457-472. doi: 10.3934/dcds.1999.5.457
Shuangqian Liu, Qinghua Xiao. The relativistic Vlasov-Maxwell-Boltzmann system for short range interaction. Kinetic & Related Models, 2016, 9 (3) : 515-550. doi: 10.3934/krm.2016005
Stéphane Brull, Pierre Charrier, Luc Mieussens. Gas-surface interaction and boundary conditions for the Boltzmann equation. Kinetic & Related Models, 2014, 7 (2) : 219-251. doi: 10.3934/krm.2014.7.219
Raffaele Esposito, Mario Pulvirenti. Rigorous validity of the Boltzmann equation for a thin layer of a rarefied gas. Kinetic & Related Models, 2010, 3 (2) : 281-297. doi: 10.3934/krm.2010.3.281
Karsten Matthies, George Stone, Florian Theil. The derivation of the linear Boltzmann equation from a Rayleigh gas particle model. Kinetic & Related Models, 2018, 11 (1) : 137-177. doi: 10.3934/krm.2018008
Laurent Bernis, Laurent Desvillettes. Propagation of singularities for classical solutions of the Vlasov-Poisson-Boltzmann equation. Discrete & Continuous Dynamical Systems - A, 2009, 24 (1) : 13-33. doi: 10.3934/dcds.2009.24.13
Juhi Jang, Ning Jiang. Acoustic limit of the Boltzmann equation: Classical solutions. Discrete & Continuous Dynamical Systems - A, 2009, 25 (3) : 869-882. doi: 10.3934/dcds.2009.25.869
Ling Hsiao, Fucai Li, Shu Wang. Combined quasineutral and inviscid limit of the Vlasov-Poisson-Fokker-Planck system. Communications on Pure & Applied Analysis, 2008, 7 (3) : 579-589. doi: 10.3934/cpaa.2008.7.579
Renjun Duan, Shuangqian Liu, Tong Yang, Huijiang Zhao. Stability of the nonrelativistic Vlasov-Maxwell-Boltzmann system for angular non-cutoff potentials. Kinetic & Related Models, 2013, 6 (1) : 159-204. doi: 10.3934/krm.2013.6.159
Stefan Possanner, Claudia Negulescu. Diffusion limit of a generalized matrix Boltzmann equation for spin-polarized transport. Kinetic & Related Models, 2011, 4 (4) : 1159-1191. doi: 10.3934/krm.2011.4.1159
Karsten Matthies, George Stone. Derivation of a non-autonomous linear Boltzmann equation from a heterogeneous Rayleigh gas. Discrete & Continuous Dynamical Systems - A, 2018, 38 (7) : 3299-3355. doi: 10.3934/dcds.2018143
Fanghua Lin, Ping Zhang. On the hydrodynamic limit of Ginzburg-Landau vortices. Discrete & Continuous Dynamical Systems - A, 2000, 6 (1) : 121-142. doi: 10.3934/dcds.2000.6.121
Xuwen Chen, Yan Guo. On the weak coupling limit of quantum many-body dynamics and the quantum Boltzmann equation. Kinetic & Related Models, 2015, 8 (3) : 443-465. doi: 10.3934/krm.2015.8.443
Stéphane Mischler, Clément Mouhot. Stability, convergence to the steady state and elastic limit for the Boltzmann equation for diffusively excited granular media. Discrete & Continuous Dynamical Systems - A, 2009, 24 (1) : 159-185. doi: 10.3934/dcds.2009.24.159
Xueke Pu, Boling Guo. Global existence and semiclassical limit for quantum hydrodynamic equations with viscosity and heat conduction. Kinetic & Related Models, 2016, 9 (1) : 165-191. doi: 10.3934/krm.2016.9.165
Etienne Bernard Laurent Desvillettes François Golse Valeria Ricci | CommonCrawl
ETRI Journal
ETRI Journal is an international, peer-reviewed multidisciplinary journal published bimonthly in English. The main focus of the journal is to provide an open forum to exchange innovative ideas and technology in the fields of information, telecommunications, and electronics. Key topics of interest include high-performance computing, big data analytics, cloud computing, multimedia technology, communication networks and services, wireless communications and mobile computing, material and component technology, as well as security. With an international editorial committee and experts from around the world as reviewers, ETRI Journal publishes high-quality research papers on the latest and best developments from the global community.
http://mc.manuscriptcentral.com/etrij KSCI KCI SCI
Volume 3 Issue 1.2
Multi-Functional Probe Recording: Field-Induced Recording and Near-Field Optical Readout
Park, Kang-Ho;Kim, Jeong-Yong;Song, Ki-Bong;Lee, Sung-Q;Kim, Jun-Ho;Kim, Eun-Kyoung 189
We demonstrate a high-speed recording based on field-induced manipulation in combination with an optical reading of recorded bits on Au cluster films using the atomic force microscope (AFM) and the near-field scanning optical microscope (NSOM). We reproduced 50 nm-sized mounds by applying short electrical pulses to conducting tips in a non-contact mode as a writing process. The recorded marks were then optically read using bent fiber probes in a transmission mode. A strong enhancement of light transmission is attributed to the local surface plasmon excitation on the protruded dots.
Improving Visual Accessibility for Color Vision Deficiency Based on MPEG-21
Yang, Seung-Ji;Ro, Yong-Man;Nam, Je-Ho;Hong, Jin-Woo;Choi, Sang-Yul;Lee, Jin-Hak 195
In this paper, we propose a visual accessibility technique in an MPEG-21 framework. In particular, MPEG-21 visual accessibility for the colored-visual resource of a digital item is proposed to give better accessibility of color information to people with color vision deficiency (CVD). We propose an adaptation system for CVD as well as a description of CVD in MPEG-21. To verify the usefulness of the proposed method, computer simulations with CVD and color adaptation were performed. Furthermore, a statistical experiment was performed using volunteers with CVD in order to verify the effectiveness of the proposed visual accessibility technique in MPEG-21. Both the experimental and simulation results show that the proposed adaptations technique can provide better color information, particularly to people with CVD.
Streaming Media and Multimedia Conferencing Traffic Analysis Using Payload Examination
Kang, Hun-Jeong;Kim, Myung-Sup;Hong, James W. 203
This paper presents a method and architecture to analyze streaming media and multimedia conferencing traffic. Our method is based on detecting the transport protocol and port numbers that are dynamically assigned during the setup between communicating parties. We then apply such information to analyze traffic generated by the most popular streaming media and multimedia conferencing applications, namely, Windows Media, Real Networks, QuickTime, SIP and H.323. We also describe a prototype implementation of a traffic monitoring and analysis system that uses our method and architecture.
Evaluation of Geometric Modeling for KOMPSAT-1 EOC Imagery Using Ephemeris Data
Sohn, Hong-Gyoo;Yoo, Hwan-Hee;Kim, Seong-Sam 218
Using stereo images with ephemeris data from the Korea Multi-Purpose Satellite-1 electro-optical camera (KOMPSAT-1 EOC), we performed geometric modeling for three-dimensional (3-D) positioning and evaluated its accuracy. In the geometric modeling procedures, we used ephemeris data included in the image header file to calculate the orbital parameters, sensor attitudes, and satellite position. An inconsistency between the time information of the ephemeris data and that of the center of the image frame was found, which caused a significant offset in satellite position. This time inconsistency was successfully adjusted. We modeled the actual satellite positions of the left and right images using only two ground control points and then achieved 3-D positioning using the KOMPSAT-1 EOC stereo images. The results show that the positioning accuracy was about 12-17 m root mean square error (RMSE) when 6.6 m resolution EOC stereo images were used along with the ephemeris data and only two ground control points (GCPs). If more accurate ephemeris data are provided in the near future, then a more accurate 3-D positioning will also be realized using only the EOC stereo images with ephemeris data and without the need for any GCPs.
A Dual-Mode 2.4-GHz CMOS Transceiver for High-Rate Bluetooth Systems
Hyun, Seok-Bong;Tak, Geum-Young;Kim, Sun-Hee;Kim, Byung-Jo;Ko, Jin-Ho;Park, Seong-Su 229
This paper reports on our development of a dual-mode transceiver for a CMOS high-rate Bluetooth system-on-chip solution. The transceiver includes most of the radio building blocks such as an active complex filter, a Gaussian frequency shift keying (GFSK) demodulator, a variable gain amplifier (VGA), a dc offset cancellation circuit, a quadrature local oscillator (LO) generator, and an RF front-end. It is designed for both the normal-rate Bluetooth with an instantaneous bit rate of 1 Mb/s and the high-rate Bluetooth of up to 12 Mb/s. The receiver employs a dual-conversion architecture combined with a baseband dual-path architecture to resolve many problems such as flicker noise, dc offset, and power consumption of the dual-mode system. The transceiver requires no external image-rejection or intermediate frequency (IF) channel filters, owing to the use of a 1.6-GHz LO and fifth-order on-chip filters. The chip is fabricated on a $6.5-mm^{2}$ die using a standard $0.25-{\mu}m$ CMOS technology. Experimental results show an in-band image-rejection ratio of 40 dB, an IIP3 of -5 dBm, and a sensitivity of -77 dBm for the Bluetooth mode when the losses from the external components are compensated. It consumes 42 mA in the receive ${\pi}/4$-differential quadrature phase-shift keying (${\pi}/4$-DQPSK) mode of 8 Mb/s, 35 mA in the receive GFSK mode of 1 Mb/s, and 32 mA in the transmit mode from a 2.5-V supply. These results indicate that the architecture and circuits are adaptable to the implementation of a low-cost, multi-mode, high-speed wireless personal area network.
Improved Scalar Multiplication on Elliptic Curves Defined over $F_{2^{mn}}$
Lee, Dong-Hoon;Chee, Seong-Taek;Hwang, Sang-Cheol;Ryou, Jae-Cheol 241
We propose two improved scalar multiplication methods on elliptic curves over $F_{q^{n}}$, $q=2^{m}$, using Frobenius expansion. The scalar multiplication of elliptic curves defined over the subfield $F_q$ can be sped up by Frobenius expansion. Previous methods are restricted to the case of a small m. However, when m is small, it is hard to find curves having good cryptographic properties. Our methods are suitable for curves defined over medium-sized fields, that is, $10{\leq}m{\leq}20$. These methods are variants of the conventional multiple-base binary (MBB) method combined with the window method. One of our methods is for a polynomial basis representation with software implementation, and the other is for a normal basis representation with hardware implementation. Our software experiment shows that it is about 10% faster than the MBB method, which also uses Frobenius expansion, and about 20% faster than the Montgomery method, which is the fastest general method in polynomial basis implementation.
Synthesis of Silver Nanocrystallites by a New Thermal Decomposition Method and Their Characterization
Lee, Don-Keun;Kang, Young-Soo 252
We formed silver nanocrystallites by the thermal decomposition of a $Ag^{+1}$-oleate complex, which was prepared by a reaction with $AgNO_{3}$ and sodium oleate in a water solution. The resulting monodispersed silver nanocrystallites were produced by controlling the temperature (290$^{\circ}$C). Transmission electron microscopic (TEM) images of the particles showed a 2-dimensional assembly of the particles with a diameter of $9.5{\pm}0.7nm$, demonstrating the uniformity of these nanocrystallites. An energy-dispersive X-ray (EDX) spectrum and X-ray diffraction (XRD) peaks of the nanocrystallites showed the highly crystalline nature of the silver structure. We analyzed the decomposition of the $Ag^{+1}$-oleate complex using a Thermo Gravimetric Analyzer (TGA) and observed the crystallization process using XRD.
Amorphous Silicon Carbon Nitride Films Grown by the Pulsed Laser Deposition of a SiC-$Si_3N_4$ Mixed Target
Park, Nae-Man;Kim, Sang-Hyeob;Sung, Gun-Yong 257
We grew amorphous SiCN films by pulsed laser deposition using mixed targets. The targets were fabricated by compacting a mixture of SiC and $Si_{3}N_{4}$ powders. We controlled the film stoichiometry by varying the mixing ratio of the target and the target-to-substrate distance. The mixing ratio of the target had a dominant effect on the film composition. We consider the structures of the SiCN films deposited using 30~70 wt.% SiC in the target to be an intermediate phase of SiC and $SiN_x$. This provides the possibility of growing homogeneous SiCN films with a mixed target at a moderate target-to-substrate distance.
A 42-GHz Wideband Cavity-Backed Slot Antenna with Thick Ground Plane
Lee, Jong-Moon;Cho, Young-Heui;Pyo, Cheol-Sig;Choi, Ik-Guen 262
We investigate the characteristics of a wideband and high-gain cavity-backed slot antenna in terms of the reflection coefficients, radiation patterns, and gain. A cavity-backed slot antenna structure includes baffles, reflectors, and thick ground planes. The measured gain and bandwidth of a 10-dB return loss in a cavity-backed $2{\times}2$ array slot antenna with $h_1=2 $mm, d=2 mm are 15.5 dBi and nearly 27%, respectively, at 42 GHz. Baffles and reflectors are used to increase antenna gain, thus reducing the coupling among the slots on the thick ground plane.
Capacity Scheduling for Heterogeneous Mobile Terminals in Broadband Satellite Interactive Networks
Lee, Ki-Dong;Kim, Ho-Kyom 265
We develop a simple exact solution method for return link capacity allocation scheduling in a satellite interactive network using a hybrid code-division multiple access / time-division multiple access (CDMA/TDMA) scheme.
Threshold-Based Camera Motion Characterization of MPEG Video
Kim, Jae-Gon;Chang, Hyun-Sung;Kim, Jin-Woong;Kim, Hyung-Myung 269
We propose an efficient scheme for camera motion characterization in MPEG-compressed video. The proposed scheme detects six types of basic camera motions through threshold-based qualitative interpretation, in which fixed thresholds are applied to motion model parameters estimated from MPEG motion vectors (MVs). The efficiency and robustness of the scheme are validated by the experiment with real compressed video sequences.
Filtering of Filter-Bank Energies for Robust Speech Recognition
Jung, Ho-Young 273
We propose a novel feature processing technique which can provide a cepstral liftering effect in the log-spectral domain. Cepstral liftering aims at the equalization of the variance of cepstral coefficients for the distance-based speech recognizer and, as a result, provides robustness to additive noise and speaker variability. However, in the popular hidden Markov model based framework, cepstral liftering has no effect on recognition performance. We derive a filtering method in the log-spectral domain corresponding to cepstral liftering. The proposed method performs high-pass filtering based on the decorrelation of filter-bank energies. We show that in noisy speech recognition, the proposed method reduces the error rate by 52.7% relative to the conventional feature.
Effective Periodic Poling in Optical Fibers
Kim, Jong-Bae;Ju, Jung-Jin;Kim, Min-Su;Seo, Hong-Seok 277
The distributions of electric field and induced second-order nonlinearity are analyzed in the periodic poling of optical fibers. A quasi-phase matching efficiency for the induced nonlinearity is calculated in terms of both the electrode separation distance between the applied voltage and generalized electrode width for the periodic poling. Our analysis of the quasi-phase matching efficiency implies that the conversion efficiency can be enhanced through adjusting the separation distance, and the electrode width can be maximized if the electrode width is optimized.
Athermalized Polymeric Arrayed-Waveguide Grating by Partial Detachment from a Si Substrate
Lee, Jong-Moo;Ahn, Joon-Tae;Park, Sun-Tak;Lee, Myung-Hyun 281
We demonstrate a new fabrication method for adjusting the temperature dependence of a polymeric arrayed-waveguide grating (AWG) on a Si substrate. A temperature-dependent wavelength shift of -0.1 nm/$^{\circ}C$ in a polymeric AWG on a Si substrate is changed to +0.1 nm/$^{\circ}C$ by detaching part of the polymer film, including the grating channel region of the AWG, from the Si substrate while the other parts remain fixed on the substrate. | CommonCrawl
The Collocation Basis of Compact Finite Differences for Moment-Preserving Interpolations: Review, Extension and Applications
Julián T. Becerra-Sagredo, Rolf Jeltsch & Carlos Málaga
10.4208/cicp.OA-2019-0170
The diagnostics of the performance of numerical methods for physical models, like those in computational fluid mechanics and other fields of continuum mechanics, rely on the preservation of statistical moments of extensive quantities. Dynamic and adaptive meshing often use interpolations to represent fields over a new set of elements and require them to be conservative and moment-preserving. Denoising algorithms should not affect the moment distributions of data, and numerical deltas are described by the number of moments they preserve. Therefore, all these methodologies benefit from the use of moment-preserving interpolations. In this article, we review the presentation of the piecewise polynomial basis functions that provide moment-preserving interpolations, better described as the collocation basis of compact finite differences, or Z-splines. We present different applications of these basis functions that show the improvement of numerical algorithms for fluid mechanics, discrete delta functions and denoising. We also provide theorems extending the properties of the basis, previously known as the Strang and Fix theory, to the case of arbitrary knot partitions.
Local Discrete Velocity Grids for Multi-Species Rarefied Flow Simulations
Stéphane Brull & Corentin Prigent
This article deals with the derivation of an adaptive numerical method for mono-dimensional kinetic equations for gas mixtures. For classical deterministic kinetic methods, the velocity domain is chosen according to the initial condition. In such methods, this velocity domain is the same for all time, all space points and all species. The idea developed in this article relies on defining velocity domains that depend on space, time and species. This allows the method to locally adapt to the support of the distribution functions. The method consists in computing macroscopic quantities by the use of conservation laws, which enables the definition of such local grids. Then, an interpolation procedure along with an upwind scheme is performed in order to treat the advection term, and an implicit treatment of the BGK operator allows for the derivation of an AP scheme, where the stability condition is independent of the relaxation rate. The method is then applied to a series of test cases and compared to the classical DVM method.
A Data-Driven Random Subfeature Ensemble Learning Algorithm for Weather Forecasting
Chen Yu, Haochen Li, Jiangjiang Xia, Hanqiuzi Wen & Pingwen Zhang
In this paper, the RSEL (Random Subfeature Ensemble Learning) algorithm is proposed to improve the results of weather forecasting. Building on classical machine learning algorithms, the RSEL algorithm integrates random subfeature selection with an ensemble learning combination strategy to enhance the diversity of the features and to avoid the influence of a small number of randomly generated unstable outliers. Furthermore, feature engineering schemes are designed for the weather forecast data to make full use of spatial or temporal context. The RSEL algorithm is tested by forecasting wind speed and direction; it improves the forecast accuracy of traditional methods and shows good robustness.
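The abstract does not spell out the exact feature-engineering or combination details, but the core idea — train each ensemble member on its own random subset of the features and combine the predictions — can be sketched generically. The following Python sketch is only an illustration of that idea (the Ridge base learner, the synthetic data, and all parameter values are assumptions, not the authors' code):

```python
import numpy as np
from sklearn.linear_model import Ridge  # arbitrary base learner, chosen only for illustration

rng = np.random.default_rng(0)

def fit_random_subfeature_ensemble(X, y, n_members=10, n_subfeatures=5):
    """Train each ensemble member on its own random subset of feature columns."""
    members = []
    for _ in range(n_members):
        cols = rng.choice(X.shape[1], size=n_subfeatures, replace=False)
        model = Ridge().fit(X[:, cols], y)
        members.append((cols, model))
    return members

def predict_ensemble(members, X):
    """Combine the members by averaging their predictions."""
    return np.mean([m.predict(X[:, cols]) for cols, m in members], axis=0)

# Toy usage with synthetic features standing in for weather-model output.
X = rng.normal(size=(200, 20))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=200)   # e.g. an observed wind speed
ensemble = fit_random_subfeature_ensemble(X, y)
print(predict_ensemble(ensemble, X[:5]))
```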
High-Order Gas-Kinetic Scheme in Curvilinear Coordinates for the Euler and Navier-Stokes Solutions
Liang Pan & Kun Xu
The high-order gas-kinetic scheme (HGKS) has achieved success in simulating compressible flows with Cartesian meshes. To study the flow problems in general geometries, such as the flow over a wing-body, the development of HGKS in general curvilinear coordinates becomes necessary. In this paper, a two-stage fourth-order gas-kinetic scheme is developed for the Euler and Navier-Stokes solutions in the curvilinear coordinates from one-dimensional to three-dimensional computations. Based on the coordinate transformation, the kinetic equation is transformed first to the computational space, and the flux function in the gas-kinetic scheme is obtained there and is transformed back to the physical domain for the update of flow variables inside each control volume. To achieve the expected order of accuracy, the dimension-by-dimension reconstruction based on the WENO scheme is adopted in the computational domain, where the reconstructed variables are the cell averaged Jacobian and the Jacobian-weighted conservative variables. In the two-stage fourth-order gas-kinetic scheme, the point values as well as the spatial derivatives of conservative variables at Gaussian quadrature points have to be used in the evaluation of the time dependent flux function. The point-wise conservative variables are obtained by ratio of the above reconstructed data, and the spatial derivatives are reconstructed through orthogonalization in physical space and chain rule. A variety of numerical examples from the accuracy tests to the solutions with strong discontinuities are presented to validate the accuracy and robustness of the current scheme for both inviscid and viscous flows. The precise satisfaction of the geometrical conservation law in non-orthogonal mesh is also demonstrated through the numerical example.
Magnetic Deformation Theory of a Vesicle
Yao-Gen Shu & Zhong-Can Ou-Yang
We have extended Helfrich's spontaneous curvature model [M. Iwamoto and Z. C. Ou-Yang. Chem. Phys. Lett. 590(2013)183; Y. X. Deng, et al., EPL. 123(2018)68002] of equilibrium vesicle shapes by adding the interaction between the magnetic field and the constituent molecules, in order to explain the reversible deformation of an artificial stomatocyte [P. G. van Rhee, et al., Nat. Commun. Sep 24;5:5010(2014), doi: 10.1038/ncomms6010] as well as the anharmonic deformation of self-assembled nanocapsules of bola-amphiphilic molecules and the associated linear birefringence [O.V. Manyuhina, et al., Phys. Rev. Lett. 98(2007)146101]. However, the sophisticated differential-geometry mathematics behind these results has so far remained hidden. Here, we present the derivations of the formulas in detail to reveal the perturbation of the deformation $ψ$ in two cases. New features, such as the influence of temperature on the bend modulus of the vesicle membrane, are also revealed.
$H^2$-Conforming Methods and Two-Grid Discretizations for the Elastic Transmission Eigenvalue Problem
Yidu Yang, Jiayu Han & Hai Bi
The elastic transmission eigenvalue problem has important applications in the inverse elastic scattering theory. Recently, the numerical computation for this problem has attracted the attention of the researchers. In this paper, we propose the $H^2$-conforming methods including the classical $H^2$-conforming finite element method and the spectral element method, and establish the two-grid discretization scheme. Theoretical analysis and numerical experiments show that the methods presented in this paper can efficiently compute real and complex elastic transmission eigenvalues.
Fully Decoupled, Linear and Unconditionally Energy Stable Schemes for the Binary Fluid-Surfactant Model
Yuzhe Qin, Zhen Xu, Hui Zhang & Zhengru Zhang
Here, we develop first- and second-order time stepping schemes for a binary fluid-surfactant phase field model by using the scalar auxiliary variable approach. The free energy contains a double-well potential, a nonlinear coupling entropy and a Flory-Huggins potential. The resulting coupled system consists of a Cahn-Hilliard type equation and a Wasserstein type equation, which leads to a degenerate problem. By introducing only one scalar auxiliary variable, the system is transformed into an equivalent form so that the nonlinear terms can be treated semi-explicitly. Both schemes are linear and decoupled, and thus they can be solved efficiently. We further prove that these time semi-discretized schemes are unconditionally energy stable. Some numerical experiments are performed to validate the accuracy and energy stability of the proposed schemes.
A Kernel-Independent Treecode Based on Barycentric Lagrange Interpolation
Lei Wang, Robert Krasny & Svetlana Tlupova
[An open-access article; the PDF is free to any online user.]
A kernel-independent treecode (KITC) is presented for fast summation of particle interactions. The method employs barycentric Lagrange interpolation at Chebyshev points to approximate well-separated particle-cluster interactions. The KITC requires only kernel evaluations, is suitable for non-oscillatory kernels, and relies on the scale-invariance property of barycentric Lagrange interpolation. For a given level of accuracy, the treecode reduces the operation count for pairwise interactions from $\mathcal{O}$($N^2$) to $\mathcal{O}$($N$log$N$), where $N$ is the number of particles in the system. The algorithm is demonstrated for systems of regularized Stokeslets and rotlets in 3D, and numerical results show the treecode performance in terms of error, CPU time, and memory consumption. The KITC is a relatively simple algorithm with low memory consumption, and this enables a straightforward OpenMP parallelization.
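The treecode itself involves clustering and error control, but its central ingredient — barycentric Lagrange interpolation at Chebyshev points — is compact enough to sketch. The snippet below is a generic illustration of that building block using the standard second-kind Chebyshev points and weights; it is not the authors' implementation, and the test function is an arbitrary choice:

```python
import numpy as np

def chebyshev_points(n):
    """Chebyshev points of the second kind on [-1, 1]."""
    return np.cos(np.pi * np.arange(n + 1) / n)

def barycentric_interpolate(x_eval, f_vals):
    """Evaluate the barycentric Lagrange interpolant of the samples f_vals
    (taken at second-kind Chebyshev points) at the points x_eval."""
    n = len(f_vals) - 1
    x = chebyshev_points(n)
    # Simplified barycentric weights for these nodes: (-1)^j, endpoints halved.
    w = (-1.0) ** np.arange(n + 1)
    w[0] *= 0.5
    w[-1] *= 0.5

    diff = x_eval[:, None] - x[None, :]
    hit = diff == 0.0          # evaluation points that coincide with a node
    diff[hit] = 1.0            # placeholder to avoid division by zero
    c = w / diff
    p = (c @ f_vals) / c.sum(axis=1)
    # At an exact node, the interpolant is just the sampled value.
    rows = hit.any(axis=1)
    p[rows] = f_vals[np.argmax(hit[rows], axis=1)]
    return p

# Quick check against a smooth, kernel-like test function.
f = lambda t: 1.0 / (1.0 + 25.0 * t ** 2)
xs = np.linspace(-1.0, 1.0, 7)
vals = f(chebyshev_points(16))
print(barycentric_interpolate(xs, vals))
print(f(xs))
```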
A High Order Central DG Method of the Two-Layer Shallow Water Equations
Yongping Cheng, Haiyun Dong, Maojun Li & Weizhi Xian
In this paper, we focus on the numerical simulation of the two-layer shallow water equations over variable bottom topography. Although the existing numerical schemes for the single-layer shallow water equations can be extended to the two-layer shallow water equations, this is not a trivial task due to the complexity of the equations. To achieve the well-balanced property of the numerical scheme easily, the two-layer shallow water equations are reformulated into a new form by introducing two auxiliary variables. Since the new equations are only conditionally hyperbolic and their eigenstructure cannot be easily obtained, we consider the utilization of the central discontinuous Galerkin method, which is free of Riemann solvers. By choosing the values of the auxiliary variables suitably, we can prove that the scheme exactly preserves the still-water solution, and thus it is a truly well-balanced scheme. To ensure the non-negativity of the water depth, a positivity-preserving limiter and a special approximation to the bottom topography are employed. The accuracy and validity of the numerical method are illustrated through some numerical tests.
A High-Order Cell-Centered Discontinuous Galerkin Multi-Material Arbitrary Lagrangian-Eulerian Method
Fang Qing, Xijun Yu, Zupeng Jia, Meilan Qiu & Xiaolong Zhao
In this paper, a high-order cell-centered discontinuous Galerkin (DG) multi-material arbitrary Lagrangian-Eulerian (MMALE) method is developed for compressible fluid dynamics. The MMALE method utilizes moment-of-fluid (MOF) interface reconstruction technology to simulate multiple materials of immiscible fluids. It is of the explicit time-marching, Lagrangian-plus-remap type. In the Lagrangian phase, an updated high-order discontinuous Galerkin Lagrangian method is applied for the discretization of the hydrodynamic equations, and Tipton's pressure relaxation closure model is used in the mixed cells. A robust moment-of-fluid interface reconstruction algorithm is used to provide the information about the material interfaces for remapping. In the rezoning phase, Knupp's algorithm is used for mesh smoothing. For the remapping phase, a high-order accurate remapping method of the cell-intersection-based type is proposed. It can be divided into four stages: polynomial reconstruction, polygon intersection, integration, and detection of problematic cells and limiting. Polygon intersection is based on the "clipping and projecting" algorithm, detection of problematic cells relies on a troubled-cell marker, and an a posteriori multi-dimensional optimal order detection (MOOD) limiting strategy is used for limiting. Numerical tests are given to demonstrate the robustness and accuracy of our method.
A Compressible Conserved Discrete Unified Gas-Kinetic Scheme with Unstructured Discrete Velocity Space for Multi-Scale Jet Flow Expanding into Vacuum Environment
Jianfeng Chen, Sha Liu, Yong Wang & Chengwen Zhong
The mechanism of a jet flow expanding into a vacuum environment (or an extremely low density environment) is important for the propulsion units of micro-electro-mechanical systems (MEMS), spacecraft thrusters, satellite attitude control systems, etc. Since its flow field is often composed of local continuum regions and local rarefied regions, the jet flow into vacuum has noteworthy multi-scale transport behaviors. Therefore, the numerical study of such flows needs multi-scale schemes which are valid for both continuum and rarefied flows. In the past few years, a series of unified methods for the whole flow regime (from the continuum regime to the rarefied regime) have been developed from the perspective of direct modeling, and have been verified by sufficient test cases. In this paper, the compressible conserved discrete unified gas-kinetic scheme is further developed and utilized for predicting jet flows into a vacuum environment. In order to cover the working conditions of both aerospace and MEMS applications, jet flows with a wide range of inlet Knudsen (Kn) numbers (from 1E-4 to 100) are considered. The evolution of the flow field during the entire startup and shutdown process with Kn number 100 is predicted by the present method, and it matches well with the result of the analytical collisionless Boltzmann equation. For Kn numbers from 1E-4 to 10, the flow field properties such as density, momentum, and pressure are investigated, and the results are provided in detail, since the published results are not sufficient at the present stage. The extent and intensity of the jet flow influence are especially investigated, because they are strongly related to plume contamination and the momentum impact on objects facing the jet, such as the solar paddles which face the attitude control thruster during the docking process.
An Efficient Finite Element Method with Exponential Mesh Refinement for the Solution of the Allen-Cahn Equation in Non-Convex Polygons
Emine Celiker & Ping Lin
In this paper we consider the numerical solution of the Allen-Cahn type diffuse interface model in a polygonal domain. The intersection of the interface with the re-entrant corners of the polygon causes strong corner singularities in the solution. To overcome the effect of these singularities on the accuracy of the approximate solution, for the spatial discretization we develop an efficient finite element method with exponential mesh refinement in the vicinity of the singular corners, that is based on ($k$−1)-th order Lagrange elements, $k$≥2 an integer. The problem is fully discretized by employing a first-order, semi-implicit time stepping scheme with the Invariant Energy Quadratization approach in time, which is an unconditionally energy stable method. It is shown that for the error between the exact and the approximate solution, an accuracy of $\mathcal{O}$($h^k$+$τ$) is attained in the $L^2$-norm for the number of $\mathcal{O}$($h^{−2}$ln$h^{−1}$) spatial elements, where $h$ and $τ$ are the mesh and time steps, respectively. The numerical results obtained support the analysis made.
Effective Two-Level Domain Decomposition Preconditioners for Elastic Crack Problems Modeled by Extended Finite Element Method
Xingding Chen & Xiao-Chuan Cai
In this paper, we propose some effective one- and two-level domain decomposition preconditioners for elastic crack problems modeled by extended finite element method. To construct the preconditioners, the physical domain is decomposed into the "crack tip" subdomain, which contains all the degrees of freedom (dofs) of the branch enrichment functions, and the "regular" subdomains, which contain the standard dofs and the dofs of the Heaviside enrichment function. In the one-level additive Schwarz and restricted additive Schwarz preconditioners, the "crack tip" subproblem is solved directly and the "regular" subproblems are solved by some inexact solvers, such as ILU. In the two-level domain decomposition preconditioners, traditional interpolations between the coarse and the fine meshes destroy the good convergence property. Therefore, we propose an unconventional approach in which the coarse mesh is exactly the same as the fine mesh along the crack line, and adopt the technique of a non-matching grid interpolation between the fine and the coarse meshes. Numerical experiments demonstrate the effectiveness of the two-level domain decomposition preconditioners applied to elastic crack problems.
Parameter Identification in Uncertain Scalar Conservation Laws Discretized with the Discontinuous Stochastic Galerkin Scheme
Louisa Schlachter & Claudia Totzeck
We study an identification problem which estimates the parameters of the underlying random distribution for uncertain scalar conservation laws. The hyperbolic equations are discretized with the so-called discontinuous stochastic Galerkin method, i.e., using a spatial discontinuous Galerkin scheme and a Multielement stochastic Galerkin ansatz in the random space. We assume an uncertain flux or uncertain initial conditions and that a data set of an observed solution is given. The uncertainty is assumed to be uniformly distributed on an unknown interval and we focus on identifying the correct endpoints of this interval. The first-order optimality conditions from the discontinuous stochastic Galerkin discretization are computed on the time-continuous level. Then, we solve the resulting semi-discrete forward and backward schemes with the Runge-Kutta method. To illustrate the feasibility of the approach, we apply the method to a stochastic advection and a stochastic equation of Burgers' type. The results show that the method is able to identify the distribution parameters of the random variable in the uncertain differential equation even if discontinuities are present.
A Higher Order Interpolation Scheme of Finite Volume Method for Compressible Flow on Curvilinear Grids
Zhen-Hua Jiang, Xi Deng, Feng Xiao, Chao Yan & Jian Yu
A higher-order interpolation scheme based on a multi-stage BVD (Boundary Variation Diminishing) algorithm is developed for the FV (Finite Volume) method on non-uniform, curvilinear structured grids to simulate compressible turbulent flows. The designed scheme utilizes two types of candidate interpolants: a linear-weight polynomial of order as high as eleven, and a THINC (Tangent of Hyperbola for INterface Capturing) function with adaptive steepness. We investigate not only the accuracy but also the efficiency of the methodology through a cost-efficiency analysis in comparison with a well-designed mapped WENO (Weighted Essentially Non-Oscillatory) scheme. Numerical experimentation, including a benchmark broadband turbulence problem as well as real-life wall-bounded turbulent flows, has been carried out to demonstrate the potential of the present higher-order interpolation scheme, especially in the ILES (Implicit Large Eddy Simulation) of compressible turbulence.
Our Research Groups
Networks and Optimization
Leader of the group Networks and Optimization: Daniel Dadush.
In today's society, complex systems surround us. From transport and traffic, to behavioral economics and operations management, real-world applications often demand that we identify simple, optimal solutions among a huge set of possibilities. Our research group Networks and Optimization (N&O) does fundamental research to tackle such challenging optimization problems.
We develop algorithmic methods to solve complex optimization problems efficiently. Our research provides efficient algorithms to some of the most challenging problems, for example, in planning, scheduling and routing. To come up with the best optimization algorithms, we combine and extend techniques from different disciplines in mathematics and computer science.
N&O covers a broad spectrum of optimization aspects. Our expertise ranges from discrete to continuous optimization and applies to centralized and decentralized settings. We focus on both problem-specific methods and universal toolkits to solve different types of optimization problems. The key in our investigations is to understand and exploit combinatorial structures, such as graphs, networks, lattices and matroids. Our research is of high scientific impact and contributes to various fields.
In several cooperations with industry partners, the algorithmic techniques that we develop in our group have proven useful to solve complex real-world problems. We are always interested in new algorithmic challenges arising in real-world applications and are open to new cooperations.
Watch our group video to get a glimpse of our activities.
Video about our collaboration with ProRail (in Dutch)
No vacancies currently.
Veni grant for Simon Telen
NWO awarded a Veni grant to Simon Telen, who will develop new strategies for solving complicated mathematical equations and apply them to real-world problems.
CWI co-organizes international trimester program at Bonn University
CWI is co-organizing an international trimester program on Discrete Optimization at the Hausdorff Research Institute for Mathematics in Bonn. Goal of this community service is to collaborate and make progress on long-standing open problems.
Monique Laurent elected as EUROPT Fellow 2021
Monique Laurent (CWI and Tilburg University) was elected EUROPT Fellow 2021 for being an outstanding researcher in continuous optimization.
Strong Contribution of Networks and Optimization at IPCO 2021
Research carried out by CWI's Networks and Optimization (N&O) group has resulted in several contributions to the 22nd Conference on Integer Programming and Combinatorial Optimization, IPCO 2021: three presentations of research papers and the award in the Student Poster Competition.
Dutch Seminar on Optimization (online series) with Andreas Wiese (VU Amsterdam)
January 27 Thursday
Start: 2022-01-27 16:00:00+01:00 End: 2022-01-27 17:00:00+01:00
Online seminar
The Dutch Seminar on Optimization is an initiative to bring together researchers from the Netherlands and beyond, with topics that are centered around Optimization in a broad sense. We would like to invite all researchers, especially also PhD students, who are working on related topics to join the events.
Speaker: Andreas Wiese (VU Amsterdam)
A PTAS for the Unsplittable Flow on a Path problem
(joint work with Fabrizio Grandoni and Tobias Mömke)
In the Unsplittable Flow on a Path problem (UFP) we are given a path with edge capacities, and a set of tasks where each task is characterized by a subpath, a demand, and a weight. The goal is to select a subset of tasks of maximum total weight such that the total demand of the selected tasks using each edge $e$ is at most the capacity of $e$. The problem admits a QPTAS. After a long sequence of improvements, the currently best known polynomial time approximation algorithm for UFP has an approximation ratio of $1+\frac{1}{e+1}+\epsilon < 1.269$. It has been an open question whether this problem admits a PTAS.
In this talk, we present a polynomial time $(1+\epsilon)$-approximation algorithm for UFP.
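To make the problem statement concrete, here is a small, purely illustrative Python sketch — unrelated to the algorithm in the talk — that checks the capacity constraint and brute-forces a tiny, made-up instance:

```python
from itertools import combinations

def feasible(selection, tasks, capacity):
    """On every edge, the total demand of the selected tasks must stay
    within that edge's capacity."""
    used = [0.0] * len(capacity)
    for t in selection:
        first_edge, last_edge, demand, _weight = tasks[t]
        for e in range(first_edge, last_edge + 1):
            used[e] += demand
            if used[e] > capacity[e]:
                return False
    return True

def best_subset(tasks, capacity):
    """Exhaustive search -- exponential, only to illustrate the model."""
    best, best_weight = (), 0.0
    for r in range(len(tasks) + 1):
        for sel in combinations(range(len(tasks)), r):
            if feasible(sel, tasks, capacity):
                w = sum(tasks[t][3] for t in sel)
                if w > best_weight:
                    best, best_weight = sel, w
    return best, best_weight

# Toy instance: a path with 4 edges; each task is (first_edge, last_edge, demand, weight).
capacity = [2, 1, 2, 2]
tasks = [(0, 2, 1, 3.0), (1, 1, 1, 1.0), (2, 3, 2, 2.5), (0, 0, 2, 1.0)]
print(best_subset(tasks, capacity))  # -> ((1, 2, 3), 4.5)
```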
The lecture will be given online. Please visit the website for more information and the zoom link.
Email: Sven Polak or Daniel Dadush
Krzysztof Apt
Sander Borst
Ruben Brokkelkamp
Daniel Dadush
Willem Feijen
Sophie Huiberts
Danish Kashaev
Sophie Klumper
Monique Laurent
Sven Polak
Lex Schrijver
Guido Schäfer
Lucas Slot
Andries Steenkamp
Samarth Tiwari
Nikhil Bansal
Neil Olver
Slot, L.F.H, & Laurent, M. (2022). Sum-of-squares hierarchies for binary polynomial optimization. Mathematical Programming. doi:10.1007/s10107-021-01745-9
Bansal, N, & Cohen, I.R. (2021). Contention resolution, matrix scaling and fair allocation. In Proceedings of the 19th International Workshop on Approximation and Online Algorithms (pp. 252–274). doi:10.1007/978-3-030-92702-8_16
Huizing, D, & Schäfer, G. (2021). The Traveling k-Median Problem: Approximating optimal network coverage. In Proceedings of the 19th International Workshop on Approximation and Online Algorithms (pp. 80–98). doi:10.1007/978-3-030-92702-8_6
Brosch, D, Laurent, M, & Steenkamp, J.A.J. (2021). Optimizing hypergraph-based polynomials modeling job-occupancy in queuing with redundancy scheduling. SIAM Journal on Optimization, 31(3), 2227–2254. doi:10.1137/20M1369592
Dadush, D.N, Végh, L.A, & Zambelli, G. (2021). Geometric rescaling algorithms for submodular function minimization. Mathematics of Operations Research, 46(3), 1081–1108. doi:10.1287/MOOR.2020.1064
Coester, C.E, Koutsoupias, E, & Lazos, P. (2021). The infinite server problem. ACM Transactions on Algorithms, 17(3), 1–23. doi:10.1145/3456632
Bansal, N, & Sinha, M. (2021). k-Forrelation optimally separates quantum and classical query complexity. In Proceedings of the Annual ACM SIGACT Symposium on Theory of Computing (pp. 1303–1316). doi:10.1145/3406325.3451040
Saha, A, Brokkelkamp, K.R, Velaj, Y, Khan, A, & Bonchi, F. (2021). Shortest paths and centrality in uncertain networks. In Proceedings of the VLDB Endowment (pp. 1188–1201). doi:10.14778/3450980.3450988
Borst, S.J, Dadush, D.N, Olver, N.K, & Sinha, M. (2021). Majorizing measures for the optimizer. In Leibniz International Proceedings in Informatics, LIPIcs. doi:10.4230/LIPIcs.ITCS.2021.73
Bubeck, S, Buchbinder, N, Coester, C.E, & Sellke, M. (2021). Metrical service systems with transformations. In Innovations in Theoretical Computer Science Conference (pp. 21:1–21:20). doi:10.4230/LIPIcs.ITCS.2021.21
Current projects with external funding
Smart Heuristic Problem Optimization
Mixed-Integer Non-Linear Optimisation Applications (MINOA)
New frontiers in numerical nonlinear algebra
Optimization for and with Machine Learning (OPTIMAL)
Polynomial Optimization, Efficiency through Moments and Algebra (POEMA)
Towards a Quantitative Theory of Integer Programming (QIP)
Alma Mater Studiorum-Universita di Bologna
Alpen-Adria-Universität Klagenfurt
CNR Pisa
Dassault Systèmes B.V.
Rheinische Friedrich-Wilhelmus Universitaet Bonn
Universita degli Studi di Firenze
Universiteit van Tilburg | CommonCrawl |
One thing to notice is that the default case matters a lot. This asymmetry is because you switch decisions in different possible worlds - when you would take Adderall but stop you're in the world where Adderall doesn't work, and when you wouldn't take Adderall but do you're in the world where Adderall does work (in the perfect information case, at least). One of the ways you can visualize this is that you don't penalize tests for giving you true negative information, and you reward them for giving you true positive information. (This might be worth a post by itself, and is very Litany of Gendlin.)
Modafinil is a eugeroic, or 'wakefulness promoting agent', intended to help people with narcolepsy. It was invented in the 1970s, but was first approved by the American FDA in 1998 for medical use. Recent years have seen its off-label use as a 'smart drug' grow. It's not known exactly how Modafinil works, but scientists believe it may increase levels of histamines in the brain, which can keep you awake. It might also inhibit the dissipation of dopamine, again helping wakefulness, and it may help alertness by boosting norepinephrine levels, contributing to its reputation as a drug to help focus and concentration.
"How to Feed a Brain is an important book. It's the book I've been looking for since sustaining multiple concussions in the fall of 2013. I've dabbled in and out of gluten, dairy, and (processed) sugar free diets the past few years, but I have never eaten enough nutritious foods. This book has a simple-to-follow guide on daily consumption of produce, meat, and water.
In contrast to the types of memory discussed in the previous section, which are long-lasting and formed as a result of learning, working memory is a temporary store of information. Working memory has been studied extensively by cognitive psychologists and cognitive neuroscientists because of its role in executive function. It has been likened to an internal scratch pad; by holding information in working memory, one keeps it available to consult and manipulate in the service of performing tasks as diverse as parsing a sentence and planning a route through the environment. Presumably for this reason, working memory ability correlates with measures of general intelligence (Friedman et al., 2006). The possibility of enhancing working memory ability is therefore of potential real-world interest.
Terms and Conditions: The content and products found at feedabrain.com, adventuresinbraininjury.com, the Adventures in Brain Injury Podcast, or provided by Cavin Balaster or others on the Feed a Brain team is intended for informational purposes only and is not provided by medical professionals. The information on this website has not been evaluated by the food & drug administration or any other medical body. We do not aim to diagnose, treat, cure or prevent any illness or disease. Information is shared for educational purposes only. Readers/listeners/viewers should not act upon any information provided on this website or affiliated websites without seeking advice from a licensed physician, especially if pregnant, nursing, taking medication, or suffering from a medical condition. This website is not intended to create a physician-patient relationship.
Adderall increases dopamine and noradrenaline availability within the prefrontal cortex, an area in which our memory and attention are controlled. As such, this smart pill improves our mood, makes us feel more awake and attentive. It is also known for its lasting effect – depending on the dose, it can last up to 12 hours. However, note that it is crucial to get confirmation from your doctor on the exact dose you should take.
Do you want to try Nootropics, but confused with the plethora of information available online? If that's the case, then you might get further confused about what nootropic supplement you should buy that specifically caters to your needs. Here is a list of the top 10 Nootropics or 10 best brain supplements available in the market, and their corresponding uses:
The resurgent popularity of nootropics—an umbrella term for supplements that purport to boost creativity, memory, and cognitive ability—has more than a little to do with the recent Silicon Valley-induced obsession with disrupting literally everything, up to and including our own brains. But most of the appeal of smart drugs lies in the simplicity of their age-old premise: Take the right pill and you can become a better, smarter, as-yet-unrealized version of yourself—a person that you know exists, if only the less capable you could get out of your own way.
Adrafinil is Modafinil's predecessor, because the scientists tested it as a potential narcolepsy drug. It was first produced in 1974 and immediately showed potential as a wakefulness-promoting compound. Further research showed that Adrafinil is metabolized into its component parts in the liver, that is into inactive modafinil acid. Ultimately, Modafinil has been proclaimed the primary active compound in Adrafinil.
Nootropics are a great way to boost your productivity. Nootropics have been around for more than 40 years and today they are entering the mainstream. If you want to become the best you, nootropics are a way to level up your life. Nootropics are always personal and what works for others might not work for you. But no matter the individual outcomes, nootropics are here to make an impact!
Gamma-aminobutyric acid, also known as GABA, naturally produced in the brain from glutamate, is a neurotransmitter that helps in the communication between the nervous system and brain. The primary function of this GABA Nootropic is to reduce the additional activity of the nerve cells and helps calm the mind. Thus, it helps to improve various conditions, like stress, anxiety, and depression by decreasing the beta brain waves and increasing the alpha brain waves. It is one of the best nootropic for anxiety that you can find in the market today. As a result, cognitive abilities like memory power, attention, and alertness also improve. GABA helps drug addicts recover from addiction by normalizing the brain's GABA receptors which reduce anxiety and craving levels in the absence of addictive substances.
In addition, while the laboratory research reviewed here is of interest concerning the effects of stimulant drugs on specific cognitive processes, it does not tell us about the effects on cognition in the real world. How do these drugs affect academic performance when used by students? How do they affect the total knowledge and understanding that students take with them from a course? How do they affect various aspects of occupational performance? Similar questions have been addressed in relation to students and workers with ADHD (Barbaresi, Katusic, Colligan, Weaver, & Jacobsen, 2007; Halmøy, Fasmer, Gillberg, & Haavik, 2009; see also Advokat, 2010) but have yet to be addressed in the context of cognitive enhancement of normal individuals.
Some smart drugs can be found in health food stores; others are imported or are drugs that are intended for other disorders such as Alzheimer's disease and Parkinson's disease. There are many Internet web sites, books, magazines and newspaper articles detailing the supposed effects of smart drugs. There are also plenty of advertisements and mail-order businesses that try to sell "smart drugs" to the public. However, rarely do these businesses or the popular press report results that show the failure of smart drugs to improve memory or learning. Rather, they try to show that their products have miraculous effects on the brain and can improve mental functioning. Wouldn't it be easy to learn something by "popping a pill" or drinking a soda laced with a smart drug? This would be much easier than taking the time to study. Feeling dull? Take your brain in for a mental tune up by popping a pill!
And as before, around 9 AM I began to feel the peculiar feeling that I was mentally able and apathetic (in a sort of aboulia way); so I decided to try what helped last time, a short nap. But this time, though I took a full hour, I slept not a wink and my Zeo recorded only 2 transient episodes of light sleep! A back-handed sort of proof of alertness, I suppose. I didn't bother trying again. The rest of the day was mediocre, and I wound up spending much of it on chores and whatnot out of my control. Mentally, I felt better past 3 PM.
Segmental analysis of the key components of the global smart pills market has been performed based on application, target area, disease indication, end-user, and region. Applications of smart pills are found in capsule endoscopy, drug delivery, patient monitoring, and others. Sub-division of the capsule endoscopy segment includes small bowel capsule endoscopy, controllable capsule endoscopy, colon capsule endoscopy, and others. Meanwhile, the patient monitoring segment is further divided into capsule pH monitoring and others.
The use of cognitive enhancers by healthy individuals sparked debate about ethics and safety. Cognitive enhancement by pharmaceutical means was considered a form of illicit drug use in some places, even while other cognitive enhancers, such as caffeine and nicotine, were freely available. The conflict therein raised the possibility for further acceptance of smart drugs in the future. However, the long-term effects of smart drugs on otherwise healthy brains were unknown, delaying safety assessments.
Low level laser therapy (LLLT) is a curious treatment based on the application of a few minutes of weak light in specific near-infrared wavelengths (the name is a bit of a misnomer as LEDs seem to be employed more these days, due to the laser aspect being unnecessary and LEDs much cheaper). Unlike most kinds of light therapy, it doesn't seem to have anything to do with circadian rhythms or zeitgebers. Proponents claim efficacy in treating physical injuries, back pain, and numerous other ailments, recently extending it to case studies of mental issues like brain fog. (It's applied to injured parts; for the brain, it's typically applied to points on the skull like F3 or F4.) And LLLT is, naturally, completely safe without any side effects or risk of injury.
So what's the catch? Well, it's potentially addictive for one. Anything that messes with your dopamine levels can be. And Patel says there are few long-term studies on it yet, so we don't know how it will affect your brain chemistry down the road, or after prolonged, regular use. Also, you can't get it very easily, or legally for that matter, if you live in the U.S. It's classified as a schedule IV controlled substance. That's where Adrafinil comes in.
Caffeine dose dependently decreased the 1,25(OH)(2)D(3) induced VDR expression and at concentrations of 1 and 10mM, VDR expression was decreased by about 50-70%, respectively. In addition, the 1,25(OH)(2)D(3) induced alkaline phosphatase activity was also reduced at similar doses thus affecting the osteoblastic function. The basal ALP activity was not affected with increasing doses of caffeine. Overall, our results suggest that caffeine affects 1,25(OH)(2)D(3) stimulated VDR protein expression and 1,25(OH)(2)D(3) mediated actions in human osteoblast cells.
Took full pill at 10:21 PM when I started feeling a bit tired. Around 11:30, I noticed my head feeling fuzzy but my reading seemed to still be up to snuff. I would eventually finish the science book around 9 AM the next day, taking some very long breaks to walk the dog, write some poems, write a program, do Mnemosyne review (memory performance: subjectively below average, but not as bad as I would have expected from staying up all night), and some other things. Around 4 AM, I reflected that I felt much as I had during my nightwatch job at the same hour of the day - except I had switched sleep schedules for the job. The tiredness continued to build and my willpower weakened so the morning wasn't as productive as it could have been - but my actual performance when I could be bothered was still pretty normal. That struck me as kind of interesting that I can feel very tired and not act tired, in line with the anecdotes.
Another classic approach to the assessment of working memory is the span task, in which a series of items is presented to the subject for repetition, transcription, or recognition. The longest series that can be reproduced accurately is called the forward span and is a measure of working memory capacity. The ability to reproduce the series in reverse order is tested in backward span tasks and is a more stringent test of working memory capacity and perhaps other working memory functions as well. The digit span task from the Wechsler (1981) IQ test was used in four studies of stimulant effects on working memory. One study showed that d-AMP increased digit span (de Wit et al., 2002), and three found no effects of d-AMP or MPH (Oken, Kishiyama, & Salinsky, 1995; Schmedtje, Oman, Letz, & Baker, 1988; Silber, Croft, Papafotiou, & Stough, 2006). A spatial span task, in which subjects must retain and reproduce the order in which boxes in a scattered spatial arrangement change color, was used by Elliott et al. (1997) to assess the effects of MPH on working memory. For subjects in the group receiving placebo first, MPH increased spatial span. However, for the subjects who received MPH first, there was a nonsignificant opposite trend. The group difference in drug effect is not easily explained. The authors noted that the subjects in the first group performed at an overall lower level, and so, this may be another manifestation of the trend for a larger enhancement effect for less able subjects.
That left me with 329 days of data. The results are that (correcting for the magnesium citrate self-experiment I was running during the time period which did not turn out too great) days on which I happened to use my LED device for LLLT were much better than regular days. Below is a graph showing the entire MP dataseries with LOESS-smoothed lines showing LLLT vs non-LLLT days:
As for newer nootropic drugs, there are unknown risks. "Piracetam has been studied for decades," says cognitive neuroscientist Andrew Hill, the founder of a neurofeedback company in Los Angeles called Peak Brain Institute. But "some of [the newer] compounds are things that some random editor found in a scientific article, copied the formula down and sent it to China and had a bulk powder developed three months later that they're selling. Please don't take it, people!"
But how, exactly, does he do it? Sure, Cruz typically eats well, exercises regularly and tries to get sufficient sleep, and he's no stranger to coffee. But he has another tool in his toolkit that he finds makes a noticeable difference in his ability to efficiently and effectively conquer all manner of tasks: Alpha Brain, a supplement marketed to improve memory, focus and mental quickness.
The above information relates to studies of specific individual essential oil ingredients, some of which are used in the essential oil blends for various MONQ diffusers. Please note, however, that while individual ingredients may have been shown to exhibit certain independent effects when used alone, the specific blends of ingredients contained in MONQ diffusers have not been tested. No specific claims are being made that use of any MONQ diffusers will lead to any of the effects discussed above. Additionally, please note that MONQ diffusers have not been reviewed or approved by the U.S. Food and Drug Administration. MONQ diffusers are not intended to be used in the diagnosis, cure, mitigation, prevention, or treatment of any disease or medical condition. If you have a health condition or concern, please consult a physician or your alternative health care provider prior to using MONQ diffusers.
If you're considering taking pharmaceutical nootropics, then it's important that you learn as much as you can about how they work and that you seek professional advice before taking them. Be sure to read the side effects and contraindications of the nootropic that you are considering taking, and do not use it if you have any pre-existing medical conditions or allergies. If you're taking other medications, then discuss your plans with a doctor or pharmacist to make sure that your nootropic is safe for you to use.
If smart drugs are the synthetic cognitive enhancers, sleep, nutrition and exercise are the "natural" ones. But the appeal of drugs like Ritalin and modafinil lies in their purported ability to enhance brain function beyond the norm. Indeed, at school or in the workplace, a pill that enhanced the ability to acquire and retain information would be particularly useful when it came to revising and learning lecture material. But despite their increasing popularity, do prescription stimulants actually enhance cognition in healthy users?
They can cause severe side effects, and their long-term effects aren't well-researched. They're also illegal to sell, so they must be made outside of the UK and imported. That means their manufacture isn't regulated, and they could contain anything. And, as 'smart drugs' in 2018 are still illegal, you might run into legal issues from possessing some 'smart drugs' without a prescription.
I never watch SNL. I just happen to know about every skit, every line of dialogue because I'm a stable genius. Hey Donnie, perhaps you are unaware that: 1) The only Republican who is continually obsessed with how he or she is portrayed on SNL is YOU. 2) SNL has always been laden with political satire. 3) There is something called the First Amendment that would undermine your quest for retribution.
I have no particularly compelling story for why this might be a correlation and not causation. It could be placebo, but I wasn't expecting that. It could be selection effect (days on which I bothered to use the annoying LED set are better days) but then I'd expect the off-days to be below-average and compared to the 2 years of trendline before, there doesn't seem like much of a fall.
It is at the top of the supplement snake oil list thanks to tons of correlations; for a review, see Luchtman & Song 2013 but some specifics include Teenage Boys Who Eat Fish At Least Once A Week Achieve Higher Intelligence Scores, anti-inflammatory properties (see Fish Oil: What the Prescriber Needs to Know on arthritis), and others - Fish oil can head off first psychotic episodes (study; Seth Roberts commentary), Fish Oil May Fight Breast Cancer, Fatty Fish May Cut Prostate Cancer Risk & Walnuts slow prostate cancer, Benefits of omega-3 fatty acids tally up, Serum Phospholipid Docosahexaenonic Acid Is Associated with Cognitive Functioning during Middle Adulthood endless anecdotes.
The chemicals he takes, dubbed nootropics from the Greek "noos" for "mind", are intended to safely improve cognitive functioning. They must not be harmful, have significant side-effects or be addictive. That means well-known "smart drugs" such as the prescription-only stimulants Adderall and Ritalin, popular with swotting university students, are out. What's left under the nootropic umbrella is a dizzying array of over-the-counter supplements, prescription drugs and unclassified research chemicals, some of which are being trialled in older people with fading cognition.
However, normally when you hear the term nootropic kicked around, people really mean a "cognitive enhancer" — something that does benefit thinking in some way (improved memory, faster speed-of-processing, increased concentration, or a combination of these, etc.), but might not meet the more rigorous definition above. "Smart drugs" is another largely-interchangeable term.
In avoiding experimenting with more Russian Noopept pills and using instead the easily-purchased powder form of Noopept, there are two opposing considerations: Russian Noopept is reportedly the best, so we might expect anything I buy online to be weaker or impure or inferior somehow and the effect size smaller than in the pilot experiment; but by buying my own supply & using powder I can double or triple the dose to 20mg or 30mg (to compensate for the original under-dosing of 10mg) and so the effect size larger than in the pilot experiment.
The one indisputable finding from the literature so far is that many people are seeking cognitive enhancement. Beyond that, the literature yields only partial and tentative answers to the questions just raised. Given the potential impact of cognitive enhancement on society, more research is needed. For research on the epidemiology of cognitive enhancement, studies focused on the cognitive-enhancement practices and experiences of students and nonstudent workers are needed. For research on the cognitive effects of prescription stimulants, larger samples are needed. Only with substantially larger samples will it be possible to assess small but potentially important benefits, as well as risks, and to distinguish individual differences in drug response. Large samples would also be required to compare these effects to the cognitive effects of improved sleep, exercise, nutrition, and stress management. To include more ecologically valid measures of cognition in academic and work environments would in addition require the equivalent of a large clinical trial.
Disclaimer: While we work to ensure that product information is correct, on occasion manufacturers may alter their ingredient lists. Actual product packaging and materials may contain more and/or different information than that shown on our Web site. We recommend that you do not solely rely on the information presented and that you always read labels, warnings, and directions before using or consuming a product. For additional information about a product, please contact the manufacturer. Content on this site is for reference purposes and is not intended to substitute for advice given by a physician, pharmacist, or other licensed health-care professional. You should not use this information as self-diagnosis or for treating a health problem or disease. Contact your health-care provider immediately if you suspect that you have a medical problem. Information and statements regarding dietary supplements have not been evaluated by the Food and Drug Administration and are not intended to diagnose, treat, cure, or prevent any disease or health condition. Amazon.com assumes no liability for inaccuracies or misstatements about products.
Either way, if more and more people use these types of stimulants, there may be a risk that we will find ourselves in an ever-expanding neurological arm's race, argues philosophy professor Nicole Vincent. But is this necessarily a bad thing? No, says Farahany, who sees the improvement in cognitive functioning as a social good that we should pursue. Better brain functioning would result in societal benefits, she argues, "like economic gains or even reducing dangerous errors."
There are hundreds of cognitive enhancing pills (so called smart pills) on the market that simply do NOT work! With each of them claiming they are the best, how can you find the brain enhancing supplements that are both safe and effective? Our top brain enhancing pills have been picked by sorting and ranking the top brain enhancing products yourself. Our ratings are based on the following criteria.
The advantage of adrafinil is that it is legal & over-the-counter in the USA, so one removes the small legal risk of ordering & possessing modafinil without a prescription, and the retailers may be more reliable because they are not operating in a niche of dubious legality. Based on comments from others, the liver problem may have been overblown, and modafinil vendors post-2012 seem to have become more unstable, so I may give adrafinil (from another source than Antiaging Central) a shot when my modafinil/armodafinil run out.
I am not alone in thinking of the potential benefits of smart drugs in the military. In their popular novel Ghost Fleet: A Novel of the Next World War, P.W. Singer and August Cole tell the story of a future war using drug-like nootropic implants and pills, such as Modafinil. DARPA is also experimenting with neurological technology and enhancements such as the smart drugs discussed here. As demonstrated in the following brain initiatives: Targeted Neuroplasticity Training (TNT), Augmented Cognition, and High-quality Interface Systems such as their Next-Generational Nonsurgical Neurotechnology (N3).
The peculiar tired-sharp feeling was there as usual, and the DNB scores continue to suggest this is not an illusion, as they remain in the same 30-50% band as my normal performance. I did not notice the previous aboulia feeling; instead, around noon, I was filled with a nervous energy and a disturbingly rapid pulse which meditation & deep breathing did little to help with, and which didn't go away for an hour or so. Fortunately, this was primarily at church, so while I felt irritable, I didn't actually interact with anyone or snap at them, and was able to keep a lid on it. I have no idea what that was about. I wondered if it might've been a serotonin storm since amphetamines are some of the drugs that can trigger storms but the Adderall had been at 10:50 AM the previous day, or >25 hours (the half-lives of the ingredients being around 13 hours). An hour or two previously I had taken my usual caffeine-piracetam pill with my morning tea - could that have interacted with the armodafinil and the residual Adderall? Or was it caffeine+modafinil? Speculation, perhaps. A house-mate was ill for a few hours the previous day, so maybe the truth is as prosaic as me catching whatever he had.
One last note on tolerance; after the first few days of using smart drugs, just like with other drugs, you may not get the same effects as before. You've just experienced the honeymoon period. This is where you feel a large effect the first few times, but after that, you can't replicate it. Be careful not to exceed recommended doses, and try cycling to get the desired effects again.
Still, the scientific backing and ingredient sourcing of nootropics on the market varies widely, and even those based in some research won't necessarily immediately, always or ever translate to better grades or an ability to finally crank out that novel. Nor are supplements of any kind risk-free, says Jocelyn Kerl, a pharmacist in Madison, Wisconsin.
The surveys just reviewed indicate that many healthy, normal students use prescription stimulants to enhance their cognitive performance, based in part on the belief that stimulants enhance cognitive abilities such as attention and memorization. Of course, it is possible that these users are mistaken. One possibility is that the perceived cognitive benefits are placebo effects. Another is that the drugs alter students' perceptions of the amount or quality of work accomplished, rather than affecting the work itself (Hurst, Weidner, & Radlow, 1967). A third possibility is that stimulants enhance energy, wakefulness, or motivation, which improves the quality and quantity of work that students can produce with a given, unchanged, level of cognitive ability. To determine whether these drugs enhance cognition in normal individuals, their effects on cognitive task performance must be assessed in relation to placebo in a masked study design.
But though it's relatively new on the scene with ambitious young professionals, creatine has a long history with bodybuilders, who have been taking it for decades to improve their muscle #gains. In the US, sports supplements are a multibillion-dollar industry – and the majority contain creatine. According to a survey conducted by Ipsos Public Affairs last year, 22% of adults said they had taken a sports supplement in the last year. If creatine was going to have a major impact in the workplace, surely we would have seen some signs of this already.
1 PM; overall this was a pretty productive day, but I can't say it was very productive. I would almost say even odds, but for some reason I feel a little more inclined towards modafinil. Say 55%. That night's sleep was vile: the Zeo says it took me 40 minutes to fall asleep, I only slept 7:37 total, and I woke up 7 times. I'm comfortable taking this as evidence of modafinil (half-life 10 hours, 1 PM to midnight is only 1 full halving), bumping my prediction to 75%. I check, and sure enough - modafinil.
Cost-wise, the gum itself (~$5) is an irrelevant sunk cost and the DNB something I ought to be doing anyway. If the results are negative (which I'll define as d<0.2), I may well drop nicotine entirely since I have no reason to expect other forms (patches) or higher doses (2mg+) to create new benefits. This would save me an annual expense of ~$40 with a net present value of <$820; even if we count the time-value of the 20 minutes for the 5 DNB rounds over 48 days ($0.2 \times 48 \times 7.25 = 70$), it's still a clear profit to run a convincing experiment.
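For the curious, that <$820 figure is presumably the $40/year treated as a perpetuity at a 5% discount rate:

$$\frac{40}{\ln 1.05} \approx 820$$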
It's not clear that there is much of an effect at all. This makes it hard to design a self-experiment - how big an effect on, say, dual n-back should I be expecting? Do I need an arduous long trial or an easy short one? This would principally determine the value of information too; chocolate seems like a net benefit even if it does not affect the mind, but it's also fairly costly, especially if one likes (as I do) dark chocolate. Given the mixed research, I don't think cocoa powder is worth investigating further as a nootropic.
Powders are good for experimenting with (easy to vary doses and mix), but not so good for regular taking. I use OO gel capsules with a Capsule Machine: it's hard to beat $20, it works, it's not that messy after practice, and it's not too bad to do 100 pills. However, I once did 3kg of piracetam + my other powders, and doing that nearly burned me out on ever using capsules again. If you're going to do that much, something more automated is a serious question! (What actually wound up infuriating me the most was when capsules would stick in either the bottom or top try - requiring you to very gingerly pull and twist them out, lest the two halves slip and spill powder - or when the two halves wouldn't lock and you had to join them by hand. In contrast: loading the gel caps could be done automatically without looking, after some experience.)
The FDA has approved the first smart pill for use in the United States. Called Abilify MyCite, the pill contains a drug and an ingestible sensor that is activated when it comes into contact with stomach fluid to detect when the pill has been taken. The pill then transmits this data to a wearable patch that subsequently transfers the information to an app on a paired smartphone. From that point, with a patient's consent, the data can be accessed by the patient's doctors or caregivers via a web portal. | CommonCrawl |
Silverman's Mode Estimation Method Explained
I started digging into the history of mode detection after watching Aysylu Greenberg's Strange Loop talk on benchmarking. She pointed out that the usual benchmarking statistics fail to capture that our timings may actually be samples from multiple distributions, commonly caused by the fact that our systems are comprised of hierarchical caches.
I thought it would be useful to add the detection of this to my favorite benchmarking tool, Hugo Duncan's Criterium. Not surprisingly, Hugo had already considered this and there's a note under the TODO section:
Multimodal distribution detection.
Use kernel density estimators?
I hadn't heard of using kernel density estimation for multimodal distribution detection so I found the original paper, Using Kernel Density Estimates to Investigate Multimodality (Silverman, 1981). The original paper is a dense 3 pages and my goal with this post is to restate Silverman's method in a more accessible way. Please excuse anything that seems overly obvious or pedantic and feel encouraged to suggest any modifications that would make it clearer.
What is a mode?
The mode of a distribution is the value that has the highest probability of being observed. Many of us were first exposed to the concept of a mode in a discrete setting. We have a bunch of observations and the mode is just the observation value that occurs most frequently. It's an elementary exercise in counting. Unfortunately, this method of counting doesn't transfer well to observations sampled from a continuous distribution because we don't expect to ever observe the exact same value twice.
What we're really doing when we count the observations in the discrete case is estimating the probability density function (PDF) of the underlying distribution. The value that has the highest probability of being observed is the one that is the global maximum of the PDF. Looking at it this way, we can see that a necessary step for determining the mode in the continuous case is to first estimate the PDF of the underlying distribution. We'll come back to how Silverman does this with a technique called kernel density estimation later.
What does it mean to be multimodal?
In the discrete case, we can see that there might undeniably be multiple modes because the counts for two elements might be the same. For instance, if we observe:
$$1,2,2,2,3,4,4,4,5$$
Both 2 and 4 occur thrice, so we have no choice but to say they are both modes. But perhaps we observe something like this:
$$1,1,1,2,2,2,2,3,3,3,4,9,10,10$$
The value 2 occurs more than anything else, so it's the mode. But let's look at the histogram:
That pair of 10's are out there looking awfully interesting. If these were benchmark timings, we might suspect there's a significant fraction of calls that go down some different execution path or fall back to a slower level of the cache hierarchy. Counting alone isn't going to reveal the 10's because there are even more 1's and 3's. Since they're nestled up right next to the 2's, we probably will assume that they are just part of the expected variance in performance of the same path that caused all those 2's. What we're really interested in is the local maxima of the PDF because they are the ones that indicate that our underlying distribution may actually be a mixture of several distributions.
Imagine that we make 20 observations and see that they are distributed like this:
We can estimate the underlying PDF by using what is called a kernel density estimate. We replace each observation with some distribution, called the "kernel," centered at the point. Here's what it would look like using a normal distribution with standard deviation 1:
If we sum up all these overlapping distributions, we get a reasonable estimate for the underlying continuous PDF:
Note that we made two interesting assumptions here:
We replaced each point with a normal distribution. Silverman's approach actually relies on some of the nice mathematical properties of the normal distribution, so that's what we use.
We used a standard deviation of 1. Each normal distribution is wholly specified by a mean and a standard deviation. The mean is the observation we are replacing, but we had to pick some arbitrary standard deviation which defined the width of the kernel.
In the case of the normal distribution, we could just vary the standard deviation to adjust the width, but there is a more general way of stretching the kernel for arbitrary distributions. The kernel density estimate for observations $X_1,X_2,…,X_n$ using a kernel function $K$ is:
$$\hat{f}(x)=\frac{1}{n}\sum\limits_{i=1}^n K(x-X_i)$$
In our case above, $K$ is the PDF for the normal distribution with standard deviation 1. We can stretch the kernel by a factor of $h$ like this:
$$\hat{f}(x, h)=\frac{1}{nh}\sum\limits_{i=1}^n K(\frac{x-X_i}{h})$$
Note that changing $h$ has the exact same effect as changing the standard deviation: it makes the kernel wider and shorter while maintaining an area of 1 under the curve.
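To make these formulas concrete, here is a minimal Python sketch of the width-$h$ Gaussian kernel density estimate. The original post contains no code, so the helper names and the made-up observations below are my own illustration rather than Silverman's (or Adereth's) implementation.

```python
import numpy as np

def gaussian_kernel(u):
    # Standard normal PDF: K(u) = exp(-u^2 / 2) / sqrt(2 * pi)
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def kde(x, observations, h):
    # f_hat(x, h) = (1 / (n * h)) * sum_i K((x - X_i) / h)
    obs = np.asarray(observations, dtype=float)
    return gaussian_kernel((x - obs[:, None]) / h).sum(axis=0) / (len(obs) * h)

# Made-up observations; evaluate the estimate on a grid for two widths.
obs = [1.0, 1.2, 2.1, 2.3, 2.4, 3.0, 9.8, 10.1]
grid = np.linspace(-2, 14, 400)
spiky = kde(grid, obs, h=0.1)    # small width: many local maxima
smooth = kde(grid, obs, h=5.0)   # large width: one broad mound
```

Plotting `spiky` and `smooth` against `grid` reproduces the over-fit and over-smoothed pictures described in the next section.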
Different kernel widths result in different mode counts
The width of the kernel is effectively a smoothing factor. If we choose too large of a width, we just end up with one giant mound that is almost a perfect normal distribution. Here's what it looks like if we use $h=5$:
Clearly, this has a single maximum.
If we choose too small of a width, we get a very spiky and over-fit estimate of the PDF. Here's what it looks like with $h = 0.1$:
This PDF has a bunch of local maxima. If we shrink the width small enough, we'll get $n$ maxima, where $n$ is the number of observations:
The neat thing about using the normal distribution as our kernel is that it has the property that shrinking the width will only introduce new local maxima. Silverman gives a proof of this at the end of Section 2 in the original paper. This means that for every integer $k$, where $1<k<n$, we can find the minimum width $h_k$ such that the kernel density estimate has at most $k$ maxima. Silverman calls these $h_k$ values "critical widths."
Finding the critical widths
To actually find the critical widths, we need to look at the formula for the kernel density estimate. The PDF for a plain old normal distribution with mean $\mu$ and standard deviation $\sigma$ is:
$$f(x)=\frac{1}{\sigma\sqrt{2\pi}}\mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
The kernel density estimate with standard deviation $\sigma=1$ for observations $X_1,X_2,…,X_n$ and width $h$ is:
$$\hat{f}(x,h)=\frac{1}{nh}\sum\limits_{i=1}^n \frac{1}{\sqrt{2\pi}}\mathrm{e}^{-\frac{(x-X_i)^2}{2h^2}}$$
For a given $h$, you can find all the local maxima of $\hat{f}$ using your favorite numerical methods. Now we need to find the $h_k$ where new local maxima are introduced. Because of a result that Silverman proved at the end of section 2 in the paper, we know we can use a binary search over a range of $h$ values to find the critical widths at which new maxima show up.
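Silverman's paper doesn't prescribe an implementation, but the mode counting and the binary search can be sketched roughly as follows, reusing the hypothetical `kde` helper above. The grid resolution, padding, starting bracket, and tolerance are arbitrary choices of mine, not values from the paper.

```python
def count_modes(observations, h, grid_size=1000, pad=3.0):
    # Count local maxima of the kernel density estimate on a fine grid.
    obs = np.asarray(observations, dtype=float)
    grid = np.linspace(obs.min() - pad * h, obs.max() + pad * h, grid_size)
    y = kde(grid, obs, h)
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

def critical_width(observations, k, lo=1e-3, hi=None, tol=1e-4):
    # Binary search for the smallest width (within tol) giving at most k modes,
    # relying on the monotonicity result from Section 2 of the paper.
    obs = np.asarray(observations, dtype=float)
    if hi is None:
        hi = obs.max() - obs.min()   # assumed wide enough for a single mode
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_modes(obs, mid) <= k:
            hi = mid                 # mid is wide enough; try narrower
        else:
            lo = mid                 # too many modes; widen the kernel
    return hi
```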
Picking which kernel width to use
This is the part of the original paper that I found to be the least clear. It's pretty dense and makes a number of vague references to the application of techniques from other papers.
We now have a kernel density estimate of the PDF for each number of modes between $1$ and $n$. For each estimate, we're going to use a statistical test to determine the significance. We want to be parsimonious in our claims that there are additional modes, so we pick the smallest $k$ such that the significance measure of $h_k$ meets some threshold.
Bootstrapping is used to evaluate the accuracy of a statistical measure by computing that statistic on observations that are resampled from the original set of observations.
Silverman used a smoothed bootstrap procedure to evaluate the significance. Smoothed bootstrapping is bootstrapping with some noise added to the resampled observations. First, we sample from the original set of observations, with replacement, to get $X_{I(i)}$. Then we add noise to get our smoothed $y_i$ values:
$$y_i=\frac{1}{\sqrt{1+h_k^2/\sigma^2}}(X_{I(i)}+h_k \epsilon_i)$$
Where $\sigma$ is the standard deviation of $X_1,X_2,…,X_n$, $h_k$ is the critical width we are testing, and $\epsilon_i$ is a random value sampled from a normal distribution with mean 0 and standard deviation 1.
Once we have these smoothed values, we compute the kernel density estimate of them using $h_k$ and count the modes. If this kernel density estimate doesn't have more than $k$ modes, we take that as a sign that we have a good critical width. We repeat this many times and use the fraction of simulations where we didn't find more than $k$ modes as the p-value. In the paper, Silverman does 100 rounds of simulation.
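Putting the pieces together, the smoothed bootstrap described above might be sketched like this (again my own rough code built on the hypothetical helpers; only the 100 simulation rounds come from the paper, and I follow the post's description of counting the fraction of resamples with at most $k$ modes):

```python
def silverman_test(observations, k, n_boot=100, seed=0):
    obs = np.asarray(observations, dtype=float)
    n, sigma = len(obs), obs.std(ddof=1)
    h_k = critical_width(obs, k)
    rng = np.random.default_rng(seed)
    at_most_k = 0
    for _ in range(n_boot):
        resample = rng.choice(obs, size=n, replace=True)       # X_{I(i)}
        eps = rng.standard_normal(n)                           # epsilon_i
        y = (resample + h_k * eps) / np.sqrt(1 + h_k**2 / sigma**2)
        if count_modes(y, h_k) <= k:
            at_most_k += 1
    return at_most_k / n_boot   # fraction of smoothed resamples with <= k modes
```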
Silverman's technique was a really important early step in multimodality detection and it has been thoroughly investigated and improved upon since 1981. Google Scholar lists about 670 citations of this paper. If you're interested in learning more, one paper I found particularly helpful was On the Calibration of Silverman's Test for Multimodality (Hall & York, 2001).
One of the biggest weaknesses in Silverman's technique is that the critical width is a global parameter, so it may run into trouble if our underlying distribution is a mixture of low and high variance component distributions. For an actual implementation of mode detection in a benchmarking package, I'd consider using something that doesn't have this issue, like the technique described in Nonparametric Testing of the Existence of Modes (Minnotte, 1997).
I hope this is correct and helpful. If I misinterpreted anything in the original paper, please let me know. Thanks!
Posted by Matt Adereth Oct 12th, 2014 algorithms, math
Mat. Res.
DOI: 10.1590/1980-5373-mr-2017-0649
Surface Charge Density Determination in Water Based Magnetic Colloids: a Comparative Study
A. F. C. Campos
Webert Costa de Medeiros
Renata Aquino
Abstract: This work focuses on the systematic investigation of the two well-established methods of structural surface charge density determination on magnetic colloids, labeled as Single Potentiometric Method (SPM) and Potentiometric-Conductometric Method (PCM). To compare some important features of the methods we determined the structural surface charge density of magnetic colloids samples based on CoFe2O4@γ-Fe2O3 core-shell nanoparticles with three different mean sizes using both strategies. Concerning quicknes…
References 23 publications
"…K eq + and K eq − respectively denote the surface stability constants, while the on the lined species belong to the solid phase. The pKa of and are respectively 1.44, and 4.66 [ 28 , 29 ].…"
Section: Results
Statistical optimization of amorphous iron phosphate: inorganic sol–gel synthesis-sodium potential insertion
Maarouf
Saoiabi
Azzaoui
BMC Chemistry
Iron phosphate, Fe2 (HPO4)3*4H2O, is synthesized at ambient temperature, using the inorganic sol–gel method coupled to the microwave route. The experimental conditions for the gelling of Fe (III)-H3PO4 system are previously defined. Potentiometric Time Titration (PTT) and Potentiometric Mass Titration (PMT) investigate the acid–base surface chemistry of obtained phosphate. Variations of surface charge with the contact time, Q as a function of T, are examined for time contact varying in the range 0–72 h. The mass suspensions used for this purpose are 0.75, 1.25 and 2.5 g L−1. The point of zero charge (PZC) and isoelectric point (IEP) are defined using the derivative method examining the variations $$\frac{\text{dpH}}{\text{d}t} = f\left(\text{pH}\right)$$ at lower contact time. A shift is observed for PZC and IEP towards low values that are found to be 2.2 ± 0.2 and 1.8 ± 0.1, respectively. In acidic conditions, the surface charge behavior of synthesized phosphate is dominated by $$\overline{>\text{POH}}$$ group which pKa = 2.45 ± 0.15. Q against T titration method is performed for synthesized Fe2 (HPO4)3*4H2O in NaCl electrolytes. The maximal surface charge (Q) is achieved at the low solid suspension. Hence, for m = 0.75 g L−1, Q value of 50 coulombs is carried at μ = 0.1 and pH around 12, while charge value around 22 coulombs is reached in the pH range: 3–10. The effect of activation time, Q and pH on sodium insertion in iron phosphate, were fully evaluated. To determine the optimal conditions of the studied process, mathematical models are used to develop response surfaces in order to characterize the most significant sodium interactions according to the variation of the pH, Q, the contact time and the contents of the synthesized material.
"…However, a charge density as large as has been reported in magnetic nanoparticles suspended in polar solvents. If such large charge densities can be realised in particles of radius , the parameter for such particles is (Brown et al 2013 ; Campos et al 2017 ). The parameter is much smaller, about for particles with diameter , charge density 1 C m and , where is expressed in .…"
Section: Discussion
A suspension of conducting particles in a magnetic field – the particle stress
Kumaran
J. Fluid Mech.
"…Typical potentiometric-conductometric titration curves of N-CDs sample are presented in Figure 4, where the equivalence points EP 1 and EP 2 were determined by using direction lines applied to the conductometric curve [37] [38] [39] . These equivalence points delimit three distinct regions whose meaning can be described as follows.…"
Section: Colloidal Properties of N-CDs Aqueous Dispersions
On the Colloidal Stability of Nitrogen-Rich Carbon Nanodots Aqueous Dispersions
Fiuza,
Gomide,
The present survey reports on the colloidal stability of aqueous dispersions of nitrogen-rich carbon nanodots (N-CDs). The N-CDs were synthesized by thermally induced decomposition of organic precursors and present an inner core constituted of a β-C3N4 crystalline structure surrounded by a surface shell containing a variety of polar functional groups. N-CDs size and structure were checked by combined analysis of XRD (X-ray Diffraction) and TEM (Transmission Electron Microscopy) measurements. FTIR (Fourier-Transform Infrared Spectroscopy) experiments revealed the presence of carboxyl and amide groups on N-CDs surface. Towards a better understanding of the relation between colloidal stability and surface charge development, zetametry experiments were applied in N-CDs dispersions at different pHs and constant ionic strength. The increase of the absolute values of zeta potential with the alkalinization of the dispersion medium is consistent with the deprotonation of carboxyl groups on N-CDs surface, which agrees with the macroscopic visual observations of long-term colloidal stability at pH 12. The saturation value of N-CDs surface charge density was evaluated by means of potentiometric-conductometric titrations. The difference between carboxyl-related surface charge and the one determined by zeta potential measurements point to the presence of oxidized nitrogen functionalities onto the N-CDs surface in addition to carboxyl groups. These novel results shed light on the electrostatic repulsion mechanism that allows for the remarkable colloidal stability of N-CDs dispersions.
Thesis-2020-Ojiako.pdf (17.26 MB)
Mathematical modelling of gas–plasma jets interactions with liquids
posted on 13.12.2021, 14:41 by Juliet Ojiako
An experimental and theoretical investigation of low velocity (up to $50\, \mathrm{m}\,\mathrm{s}^{-1}$) air jets and air plasmas interacting with a liquid was undertaken. Experimentally, a jet of air was directed in the laboratory towards a liquid (water) surface which was arranged both as a thin film on a substrate and also as bulk liquid. Various depths of liquid were investigated as well as different air flow rates. In addition, both air and helium plasma jets were considered but only air plasmas were analysed for chemical changes in the liquid.
Theoretically, the interaction of an air jet with water was considered using three different models. Two models used were direct numerical simulations (DNS), namely the Computational Fluid Dynamics (CFD) package in COMSOL Multiphysics version 5.3a and the volume-of-fluid Gerris package which is an open source software. In addition, a model using the thin-film approximation was used through decoupling the gas and liquid motion which is computationally very efficient.
Experimentally, the deformation of the liquid surface was measured by direct photography and the results compared to the models. For small air flow rates, a state was achieved with a dimple below the impinging jet which remained steady over long periods. At relatively high air flow rates, a steady state was not achieved and the surface underwent oscillations. The threshold for the onset of the oscillations was determined. The resulting surface shapes and onset of the oscillations were compared to the theoretical predictions with good agreement. In addition, the two models agreed with each other: streamline patterns and velocity fields were compared with good agreement.
When the liquid was very shallow, the thin film approximation was also compared to the experiments and the results of the DNS with better agreement than what the thin film approximation would suggest. In addition, for high flow rates, it was shown that the film could dewet from the surface and form an annular ring around the edge of the beaker in which the liquid was contained. Depending on the flow conditions and the liquid-substrate properties (hydrophilic or hydrophobic), this annulus was observed to remain after the jet was switched off. In other circumstances the film would form a central globule under the impinging jet.
In the case of the plasma interactions, it was shown that the associated electric field could deform the water surface, lessening the depth of the observed dimple. A simplified model of the air plasma which contained only 10 plasma species was used together with the flow patterns determined from the DNS to investigate the transfer of the species to the liquid and their subsequent chemical reactions. Experimentally, indigo dye was added to the liquid which is destroyed by ozone, produced during the chemical reaction process. This allowed a comparison with the theoretically predicted ozone production from the model.
EPSR mini CDT
© Ojiako Juliet Chinasa
A doctoral thesis. Submitted in partial fulfilment of the requirements for the award of Doctor of Philosophy of Loughborough University.
Roger Smith ; Dmitri Tseluiko ; Hemaka Bandulasena
This submission includes a signed certificate in addition to the thesis file(s)
I have submitted a signed certificate
https://repository.lboro.ac.uk/account/articles/12155115
Mathematical Sciences Theses
Gases, Plasmas, Liquids, Jets
The structure of small components in random graphs with a given degree sequence
Background and definitions
Consider a random graph on $n$ vertices with a nicely behaved degree sequence. That is, letting $d_i(n)$ denote the number of vertices of degree $i$, suppose that for all $i$, there exists a constant $\lambda_i$ such that $d_i(n)/n \to \lambda_i$ as $n \to \infty$.
Let $G$ be a randomly chosen graph with this degree sequence. Molloy and Reed proved that as $n \to \infty$, if $\sum_{i = 0}^{\infty} i(i - 2)\lambda_i > 0$, then w.h.p. $G$ has a giant component, while if $\sum_{i = 0}^{\infty} i(i - 2)\lambda_i < 0$, then w.h.p. $G$ does not have a giant component.
Moreover, the same authors gave a formula for the limiting size of the giant component in the former case.
My question concerns the "small" components in the case where a giant component exists. In the case of an Erdos-Renyi random graph $G_{n,p}$, the structure of these components is well understood: if $p = c/n$ for some $c > 1$, then with high probability the graph is the union of the giant component, small tree components, and a growing but relatively small number of unicyclic components. (For a precise statement of this, see Theorem 6.11 of Bollobas's Random Graphs.)
For which degree sequences is it true that with high probability the small components are mostly trees with a comparatively small number of unicyclic components? In some cases, it's easy to see that no small component can be a tree: for example, in the case of an $r$-regular graph for $r \geq 2$, $G$ cannot contain a tree component, because every tree has a vertex of degree $1$. (However, for $r \geq 3$, the results in the second paper linked to above show that w.h.p. the giant component consists of the entire graph, so in this case the desired property holds vacuously.) So, for which degree sequences is it "non-trivially" true that most small components are trees?
(N.B.: In the first paper linked to above, Molloy and Reed showed that in the "subcritical" case in which there is no giant component, w.h.p. every component contains at most one cycle.)
I'm teaching a course on complex networks using M.E.J. Newman's Networks: An Introduction. In the chapter on random graphs with a given degree sequence (see Section 13.7), the author claims that for essentially any given degree sequence, almost all of the small components are trees. However, the argument given is far from rigorous, and I am rather skeptical of the conclusion.
graph-theory random-graphs
Andrew Uzzell
I think the answer to your question is: It is "non-trivially" true that most small components are trees for all degree sequences where the giant component is not the whole graph.
The answer comes from the two papers you provided. Theorem 2 of the second paper gives what they call a "Discrete Duality Principle" which basically says that if you remove the giant component, then the remaining graph looks like a random graph on the remaining vertices with degree sequence given by these $\lambda_i'$s (which they provide to you).
The $\lambda_i'$s will satisfy the $Q(\mathcal{D}) < 0$ condition, so you can apply Theorem 1(b) from the first paper to conclude that the random graph with that degree sequence is mostly tree components (as you pointed out).
I hope this is right, I don't know anything about this other than the two papers you provided.
dbal
Fraction of vertices in ER random graphs not in giant or tiny components
A more efficient way to generate random graphs with a given degree sequence?
Generating spatially-aware degree-preserving random graphs?
Expected number of connected components in a random graph
Uniform sampling of random connected graph with given number of vertices/edges
When is a large graph with a given degree sequence likely to be connected?
Reference request - random regular graphs vs random graphs w/ degree sequence | CommonCrawl |
\(\newcommand{\dollar}{\$} \DeclareMathOperator{\erf}{erf} \DeclareMathOperator{\arctanh}{arctanh} \newcommand{\lt}{<} \newcommand{\gt}{>} \newcommand{\amp}{&} \)
Coordinated Calculus
Nathan Wakefield, Christine Kelley, Marla Williams, Michelle Haver, Lawrence Seminario-Romero, Robert Huben, Aurora Marks, Stephanie Prahl, Based upon Active Calculus by Matthew Boelkins
About This Text
PreCalculus Review
1Understanding the Derivative
Introduction to Continuity
Introduction to Limits
How do we Measure Velocity?
The Derivative of a Function at a Point
The Derivative Function
Interpreting, Estimating, and Using the Derivative
The Second Derivative
Differentiability
2Computing Derivatives
Elementary Derivative Rules
The Sine and Cosine Functions
The Product and Quotient Rules
Derivatives of Other Trigonometric Functions
The Chain Rule
Derivatives of Inverse Functions
Derivatives of Functions Given Implicitly
Hyperbolic Functions
The Tangent Line Approximation
The Mean Value Theorem
3Using Derivatives
Using Derivatives to Identify Extreme Values
Global Optimization
Using Derivatives to Describe Families of Functions
Using Derivatives to Evaluate Limits
4The Definite Integral
Determining Distance Traveled from Velocity
The Definite Integral
5Evaluating Integrals
Constructing Accurate Graphs of Antiderivatives
Antiderivatives from Formulas
The Second Fundamental Theorem of Calculus
Integration by Substitution
The Method of Partial Fractions
Trigonometric Substitutions
Numerical Integration
Comparison of Improper Integrals
Using Technology and Tables to Evaluate Integrals
6Using Definite Integrals
Using Definite Integrals to Find Area and Volume
Using Definite Integrals to Find Volume by Rotation and Arc Length
Area and Arc Length in Polar Coordinates
Density, Mass, and Center of Mass
Physics Applications: Work, Force, and Pressure
7Sequences and Series
Convergence of Series
Ratio Test and Alternating Series
Absolute Convergence and Error Bounds
Taylor Polynomials
Applications of Taylor Series
8Differential Equations
An Introduction to Differential Equations
Qualitative Behavior of Solutions to DEs
Euler's Method
Separable differential equations
Population Growth and the Logistic Equation
A Short Table of Integrals
Supplemental Videos
Authored in PreTeXt
Section 7.1 Sequences
Motivating Questions
What is a sequence?
What does it mean for a sequence to converge?
What does it mean for a sequence to diverge?
We encounter sequences every day. Your monthly utility payments, the annual interest you earn on investments, the amount you spend on groceries each week; all are examples of sequences. Other sequences with which you may be familiar include the Fibonacci sequence \(1, 1, 2, 3, 5, 8, \ldots \text{,}\) where each term is the sum of the two preceding terms, and the triangular numbers \(1, 3, 6, 10, 15, 21, 28, 36, 45, 55, \ldots \text{,}\) the number of vertices in the triangles shown on the right in Figure7.1.
Figure 7.1: Triangular numbers
Sequences of integers are of such interest to mathematicians and others that they have a journal1The Journal of Integer Sequences at http://www.cs.uwaterloo.ca/journals/JIS/ devoted to them and an online encyclopedia2The On-Line Encyclopedia of Integer Sequences at http://oeis.org/ that catalogs a huge number of integer sequences and their connections. Sequences are also used in digital recordings and images.
Our studies in calculus have dealt with continuous functions. Sequences model discrete instead of continuous information. We will study ways to represent and work with discrete information in this chapter as we investigate sequences and series, and ultimately see key connections between the discrete and continuous.
Example 7.2
Suppose you receive \(\$5,\!000\) through an inheritance. You decide to invest this money into a fund that pays \(8\%\) annually, compounded monthly. That means that each month your investment earns \(\frac{0.08}{12} \cdot P\) additional dollars, where \(P\) is your principal balance at the start of the month. So in the first month your investment earns
\begin{equation*} 5000 \left(\frac{0.08}{12}\right) \end{equation*}
or \(\$ 33.33\text{.}\) If you reinvest this money, you will then have \(\$ 5033.33\) in your account at the end of the first month. From this point on, assume that you reinvest all of the interest you earn.
How much interest will you earn in the second month? How much money will you have in your account at the end of the second month?
Complete Table 7.3 below to determine the interest earned and total amount of money in this investment each month for one year.
Month | Interest earned | Total amount of money in the account
\(0\) | \(\$0.00\) | \(\$5000.00\)
\(1\) | \(\$33.33\) | \(\$5033.33\)
\(2\) | |
⋮ | |
\(10\) | |
Table 7.3: Interest earned on an investment
As we will see later, the amount of money \(P_n\) in the account after month \(n\) is given by
\begin{equation*} P_n = 5000\left(1+\frac{0.08}{12}\right)^{n}\text{.} \end{equation*}
Use this formula to check your calculations in Table 7.3. Then find the amount of money in the account after 5 years.
How many years will it be before the account has doubled in value to \(\$10,\!000\text{?}\)
We will apply the monthly interest rate of \(\frac{.08}{12} \) to the balance at the end of month one. Thus, the interest earned during the second month will be
\begin{equation*} \$ 5033.33\left(\frac{.08}{12}\right)=\$ 33.56. \end{equation*}
This means that the account balance at the end of the second month will be \(\$ 5066.89.\)
Continuing in the same way completes the table; for example, the row for month \(10\) is:
Month | Interest earned | Total amount of money in the account
\(10\) | \(\$35.39\) | \(\$5343.51\)
You may notice some small difference in your results from the formula, due to the way rounding happens in each method. To compute the balance in the account after 5 years, we can use the formula with \(n=60\) months, to get
\begin{equation*} P_{60}=5000\left(1+\frac{.08}{12}\right)^{60} = \$7449.23. \end{equation*}
The goal is to find the time it takes for the account to reach a balance of \(\$ 10,\!000\text{.}\) To do this, we can use the formula, and filling in everything we know, we have
\begin{equation*} 10000=5000\left(1+\frac{.08}{12}\right)^n. \end{equation*}
We can then solve for the number of months as follows:
\begin{align*} 10000 \amp = 5000\left(1+\frac{.08}{12}\right)^n\\ 2 \amp = \left(1+\frac{.08}{12}\right)^n\\ \ln (2) \amp = n \ln \left(1+\frac{.08}{12}\right)\\ \frac{\ln (2)}{\ln\left(1+\frac{.08}{12}\right)} \amp = n \text{.} \end{align*}
This gives us that the number of months until the account reaches \(\$ 10,\!000\) is \(\frac{\ln (2)}{\ln\left(1+\frac{.08}{12}\right)} \approx 104.3 \text{.}\) This amounts to about \(8.7\) years.
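For readers who want to double-check these figures computationally, here is a short Python sketch (my own addition, not part of the text) that evaluates the same balance formula:

```python
import math

P, r = 5000, 0.08                            # principal and annual rate
balance = lambda n: P * (1 + r / 12) ** n    # P_n after n months

print(round(balance(60), 2))                 # about 7449.23 after 5 years
months_to_double = math.log(2) / math.log(1 + r / 12)
print(round(months_to_double, 1))            # about 104.3 months (roughly 8.7 years)
```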
Subsection Sequences
As Example 7.2 illustrates, many discrete phenomena can be represented as lists of numbers (like the amount of money in an account over a period of months). We call any such list a sequence. A sequence is nothing more than a list of terms in some order. We often list the entries of the sequence with subscripts,
\begin{equation*} s_1, s_2, \ldots, s_n, \ldots\text{,} \end{equation*}
where the subscript denotes the position of the entry in the sequence.
A sequence is a list of terms \(s_1, s_2, s_3, \ldots\) in a specified order.
We can think of a sequence as a function \(f\) whose domain is the set of positive integers where \(f(n) = s_n\) for each positive integer \(n\text{.}\) This alternative view will be useful in many situations.
We often denote the sequence \(s_1, s_2, s_3, \ldots\) by \(\{s_n\}\text{.}\) The value \(s_n\) (alternatively \(s(n)\)) is called the \(n\)th term in the sequence. If the terms are all 0 after some fixed value of \(n\text{,}\) we say the sequence is finite. Otherwise the sequence is infinite.
We often define sequences using formulas, which can be explicit or recursive. Explicit formulas write the term \(s_n\) in terms of \(n\text{,}\) while recursive formulas write \(s_n\) in terms of the earlier entries in the sequence. Often, explicit formulas are easier to work with because they can be used to directly find any term in the sequence.
Given the explicit sequence formula \(s_n=\frac{3n-1}{n+2}\text{,}\) find \(s_1, s_2, s_{10}\text{,}\) and \(s_{1000}\text{.}\)
Since the formula is explicit, we simply substitute in the appropriate value for \(n\text{.}\)
\begin{equation*} s_1= \frac{3(1)-1}{1+2}=\frac{2}{3}\text{,} \end{equation*}
\begin{equation*} s_2=\frac{3(2)-1}{2+2}=\frac{5}{4}\text{,} \end{equation*}
\begin{equation*} s_{10}=\frac{3(10)-1}{10+2}=\frac{29}{12}\text{,} \end{equation*}
\begin{equation*} s_{1000}=\frac{3(1000)-1}{1000+2}=\frac{2999}{1002}\text{.} \end{equation*}
Recursively defined sequences, on the other hand, use a formula that relies on previous terms in the sequence and requires some of the first few terms to be set to get started.
For each of the following recursively defined sequences, give the first six terms.
\(s_n=s_{n-1}+2\) for \(n>1\) and \(s_1=2.\)
\(s_n=-2s_{n-1}\) for \(n>1\) and \(s_1=1.\)
\(s_n=s_{n-1}+2s_{n-2}\) for \(n>2\) and \(s_1=1, s_2=3\text{.}\)
Notice that \(s_1=2\) was given and we'll need it to find \(s_2\text{.}\) For \(n=2\text{,}\)
\begin{equation*} s_2=s_{2-1}+2=s_1+2=2+2=4. \end{equation*}
Similarly, to get \(s_3\) we'll need to know \(s_2\text{.}\)
\begin{align*} s_3\amp=s_2+2=4+2=6,\\ s_4\amp=s_3+2=6+2=8,\\ s_5\amp=s_4+2=8+2=10,\\ s_6\amp=s_5+2=12\text{.} \end{align*}
This formula says that each term is the previous term multiplied by -2. We have
\begin{align*} s_1\amp =1, \\ s_2\amp=-2(1)=-2, \\ s_3\amp=-2(-2)=4,\\ s_4\amp= -2(4)=-8, \\ s_5\amp=-2(-8)=16, \\ s_6\amp=-2(16)=-32 \text{.} \end{align*}
Notice that this formula relies on two previous terms, not just one, which is why we had to give the first two terms ahead of time.
\begin{align*} s_1\amp=1,\\ s_2\amp=3,\\ s_3\amp=s_2 + 2 s_1 = 3+2(1)=5,\\ s_4\amp=s_3+2s_2 = 5 + 2(3)=11,\\ s_5\amp=s_4+2s_3 = 11 + 2(5)=21,\\ s_6\amp=s_5+2s_4 = 21+2(11) = 43\text{.} \end{align*}
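For readers who like to verify such computations programmatically, here is one possible Python sketch (my own addition, not part of the text) that generates the first six terms of each recursively defined sequence above:

```python
def recursive_terms(step, initial, count=6):
    # `step` receives the list of terms so far and returns the next term.
    terms = list(initial)
    while len(terms) < count:
        terms.append(step(terms))
    return terms

print(recursive_terms(lambda t: t[-1] + 2, [2]))             # [2, 4, 6, 8, 10, 12]
print(recursive_terms(lambda t: -2 * t[-1], [1]))            # [1, -2, 4, -8, 16, -32]
print(recursive_terms(lambda t: t[-1] + 2 * t[-2], [1, 3]))  # [1, 3, 5, 11, 21, 43]
```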
With infinite sequences, we are often interested in their end behavior and the idea of convergent sequences.
Let \(s_n\) be the \(n\)th term in the sequence \(1, 2, 3, \ldots\text{.}\) Find a formula for \(s_n\) and use appropriate technological tools to draw a graph of entries in this sequence by plotting points of the form \((n,s_n)\) for some values of \(n\text{.}\) Most graphing calculators can plot sequences; directions follow for the TI-84.
In the MODE menu, highlight SEQ in the FUNC line and press ENTER.
In the Y= menu, you will now see lines to enter sequences. Enter a value for nMin (where the sequence starts), a function for u(n) (the \(n\)th term in the sequence), and the value of u(nMin).
Set your window coordinates (this involves choosing limits for \(n\) as well as the window coordinates XMin, XMax, YMin, and YMax).
The GRAPH key will draw a plot of your sequence.
Using your knowledge of limits of continuous functions as \(x \to \infty\text{,}\) decide if this sequence \(\{s_n\}\) has a limit as \(n \to \infty\text{.}\) Explain your reasoning.
Let \(s_n\) be the \(n\)th term in the sequence \(1, \frac{1}{2}, \frac{1}{3}, \ldots\text{.}\) Find a formula for \(s_n\text{.}\) Draw a graph of some points in this sequence. Using your knowledge of limits of continuous functions as \(x \to \infty\text{,}\) decide if this sequence \(\{s_n\}\) has a limit as \(n \to \infty\text{.}\) Explain your reasoning.
Let \(s_n\) be the \(n\)th term in the sequence \(2, \frac{3}{2}, \frac{4}{3}, \frac{5}{4}, \ldots\text{.}\) Find a formula for \(s_n\text{.}\) Using your knowledge of limits of continuous functions as \(x \to \infty\text{,}\) decide if this sequence \(\{s_n\}\) has a limit as \(n \to \infty\text{.}\) Explain your reasoning.
When \(n=1\text{,}\) \(s_n = 1\text{;}\) when \(n=2\text{,}\) \(s_n = 2\text{.}\)
Think about the value of \(\frac{1}{n}\text{.}\)
Note that the numerator of each term of the sequence is one more than the denominator of the term.
\(s_n = n\text{.}\) A plot of the first 20 points in the sequence is shown below.
This sequence does not have a limit as \(n\) goes to infinity.
\(s_n = \frac{1}{n}\text{.}\) A plot of the first 20 points in the sequence is shown below.
This sequence has a limit of 0 as \(n\) goes to infinity.
\(s_n = \frac{n+1}{n}\text{.}\) A plot of the first 20 points in the sequence is shown below.
By observation we see that a formula for \(s_n\) is \(s_n = n\text{.}\) A plot of the first 20 points in the sequence is shown below.
We recall that a function \(f\) diverges to infinity (and hence does not have a limit as \(x\to\infty\)) if we can make the values of \(f(x)\) as large as we want by choosing large enough values of \(x\text{.}\) (Limits, when they exist, are inherently finite; this is true even though we sometimes use the notation \(\lim\limits_{x\to\infty}f(x)=\infty\) to mean that a function increases without bound.) Since we can make the values of \(n\) in our sequence as large as we want by choosing \(n\) to be arbitrarily large, we suspect that this sequence does not have a limit as \(n\) goes to infinity. The fact that \(\lim\limits_{x\to\infty}x=\infty\) supports this reasoning.
By observation we see that a formula for \(s_n\) is \(s_n = \frac{1}{n}\text{.}\) A plot of the first 20 points in the sequence is shown below.
Since we can make the values of \(\frac{1}{n}\) in our sequence as close to 0 as we want by choosing \(n\) to be arbitrarily large, we suspect that this sequence has a limit of 0 as \(n\) goes to infinity. This is supported by the fact that \(\lim\limits_{x\to\infty}\frac1x=0\text{.}\)
Since the numerator is always 1 more than the denominator, a formula for \(s_n\) is \(s_n = \frac{n+1}{n}\text{.}\) A plot of the first 20 points in the sequence is shown below.
Since we can make the values of \(\frac{n+1}{n}\) in our sequence as close to 1 as we want by choosing \(n\) to be arbitrarily large, we suspect that this sequence has a limit of 1 as \(n\) goes to infinity. This is supported by the fact that \(\lim\limits_{x\to\infty}\frac{x+1}x=1\text{.}\)
Next we formalize the ideas from Example 7.7.
Recall our earlier work with limits involving infinity in Section 3.6. State clearly what it means for a continuous function \(f\) to have a limit \(L\) as \(x \to \infty\text{.}\)
Given that an infinite sequence of real numbers is a function from the integers to the real numbers, apply the idea from part (a) to explain what you think it means for a sequence \(\{s_n\}\) to have a limit as \(n \to \infty\text{.}\)
Based on your response to part (b), decide if the sequence \(\left\{ \frac{1+n}{2+n}\right\}\) has a limit as \(n \to \infty\text{.}\) If so, what is the limit? If not, why not?
A function \(f\) has limit \(L\) as \(x \to \infty\) if we can make \(f(x)\) as close to \(L\) as we like by \(\ldots\text{.}\)
Think about making \(s_n\) as close as you want to \(L\) for large values of \(n\text{.}\)
Consider the behavior of the function \(f(x) = \frac{1+x}{2+x}\) as \(x \to \infty\text{.}\)
A continuous function \(f\) has a limit \(L\) as the independent variable \(x\) goes to infinity if we can make the values of \(f(x)\) as close to \(L\) as we want by choosing large enough values of \(x\text{.}\)
We expect that a sequence \(\{s_n\}\) will have a limit \(L\) as \(n\) goes to infinity if we can make the entries \(s_n\) in the sequence as close to \(L\) as we want by choosing \(n\) to be sufficiently large.
As \(n\) gets large, the constant terms become infinitesimally small compared to \(n\) and so \(\frac{1+n}{2+n}\) looks like \(\frac{n}{n}\) or 1 for large \(n\text{.}\) So the sequence \(\left\{ \frac{1+n}{2+n}\right\}\) has a limit of 1 at infinity.
More rigorously, we can either write \(\frac{1+n}{2+n}=1-\frac1{2+n}\) and note that \(\lim\limits_{n\to\infty}\frac1{2+n}=0\text{,}\) or we can apply L'Hopital's rule to the function \(f(x)=\frac{1+x}{2+x}\) as \(x\to\infty\) to assert that \(\left\{\frac{1+n}{2+n}\right\}\) has limit \(1\) as \(n\to\infty\text{.}\)
In Example 7.7 and Example 7.8 we investigated a few sequences \(\{s_n\}\) that had a limit as \(n\) goes to infinity. More formally, we make the following definition.
Sequence Convergence
The sequence \(\{ s_n \}\) converges or is a convergent sequence if there is a number \(L\) so that the terms \(s_n\) get and stay as close to \(L\) as we choose. This means that for some, perhaps very large, number \(N\text{,}\) we have that every \(s_n\) after \(s_N\) is within our chosen distance of \(L.\) In this situation, we call \(L\) the limit of the convergent sequence and write
\begin{equation*} \lim_{n \to \infty} s_n = L\text{.} \end{equation*}
If the sequence \(\{s_n\}\) does not converge, we say that the sequence \(\{s_n\}\) diverges.
The idea of sequence having a limit as \(n \to \infty\) is the same as the idea of a continuous function having a limit as \(x \to \infty\text{.}\) The only difference is that sequences are discrete instead of continuous.
We will apply the same terminology from limits of continuous functions to the limits of discrete functions. For example, we will say that a sequence converges to a limit \(L\) from below if the terms of the sequence are smaller than \(L\text{,}\) and from above if the terms of the sequence are larger than \(L\text{.}\)
Many properties of limits carry over from the continuous to discrete setting as well. These algebraic properties are essentially identical to the properties of limits listed in Section 1.2:
Properties of Discrete Limits
Assuming all the limits on the right-hand side exist:
If \(b\) is a constant, then \(\lim\limits_{n \rightarrow \infty} (bs_n)=b\left(\lim\limits_{n \rightarrow \infty} s_n \right)\)
\(\lim\limits_{n \rightarrow \infty} \left( s_n+t_n\right)=\lim\limits_{n \rightarrow \infty} s_n+\lim\limits_{n \rightarrow \infty}t_n\)
\(\lim\limits_{n \rightarrow \infty} \left( s_n \cdot t_n\right)=\lim\limits_{n \rightarrow \infty} s_n\cdot\lim\limits_{n \rightarrow \infty}t_n\)
\(\lim\limits_{n \rightarrow \infty} \left(\frac{s_n}{t_n}\right)=\frac{\lim\limits_{n \rightarrow \infty} s_n}{\lim\limits_{n \rightarrow \infty} t_n}\text{,}\) provided \(\lim\limits_{n \rightarrow \infty} t_n \neq 0\)
For any constant \(k\text{,}\) \(\lim\limits_{n \rightarrow \infty} k=k\)
Use graphical and/or algebraic methods to determine whether each of the following sequences converges or diverges.
\(\left\{\frac{1+2n}{3n-2}\right\}\)
\(\left\{\frac{5+3^n}{10+2^n}\right\}\)
\(\left\{\frac{10^n}{n!}\right\}\) (Note that \(!\) is the factorial symbol, and \(n! = n(n-1)(n-2) \cdots (2)(1)\) for any positive integer \(n\text{.}\) By convention, we define \(0!\) to be \(1\text{.}\))
Multiply the numerator and denominator each by \(\frac{1}{n}\text{.}\)
Compare the \(n\)th term to \(\frac{3^n}{2^n}\text{.}\)
Plot the sequence and think about what the graph suggests.
The sequence \(\left\{\frac{1+2n}{3n-2}\right\}\) converges to \(\frac{2}{3}\text{.}\)
The sequence \(\left\{\frac{5+3^n}{10+2^n}\right\}\) diverges to infinity.
\(\frac{10^n}{n!} \to 0\) as \(n \to \infty\text{.}\)
A plot of the first 20 terms of the sequence \(\left\{\frac{1+2n}{3n-2}\right\}\) is shown below.
The plot suggests that the sequence has a limit between 0.5 and 1. Evaluating the limit algebraically, we find that
\begin{equation*} \lim\limits_{n\to\infty}\frac{1+2n}{3n-2}=\lim\limits_{n\to\infty}\frac{\frac1n+2}{3-\frac2n}=\frac23\text{.} \end{equation*}
So the sequence \(\left\{\frac{1+2n}{3n-2}\right\}\) converges to \(\frac{2}{3}\text{.}\)
A plot of the first 20 terms of the sequence \(\left\{\frac{5+3^n}{10+2^n}\right\}\) is shown below. Note the scale on the vertical axis.
The plot implies that the sequence does not have a limit as \(n\) goes to infinity. Evaluating the limit algebraically, we factor \(3^n\) from the numerator and \(2^n\) from the denominator. Doing so yields
\begin{equation*} \lim\limits_{n\to\infty}\frac{5+3^n}{10+2^n}=\lim\limits_{n\to\infty}\frac{3^n\left(\frac{5}{3^n}+1\right)}{2^n\left(\frac{10}{2^n}+1\right)}\text{.} \end{equation*}
Now, since \(\lim\limits_{n\to\infty}\frac{\frac5{3^n}+1}{\frac{10}{2^n}+1}=1\text{,}\) it follows that
\begin{equation*} \lim\limits_{n\to\infty}\frac{3^n\left(\frac{5}{3^n}+1\right)}{2^n\left(\frac{10}{2^n}+1\right)}=\lim\limits_{n\to\infty}\left(\frac32\right)^n\text{.} \end{equation*}
Since \(\frac{3}{2} \gt 1\text{,}\) this limit is infinite and the sequence \(\left\{\frac{5+3^n}{10+2^n}\right\}\) diverges to infinity as the plot suggests.
A plot of the first 20 terms of the sequence \(\left\{\frac{10^n}{n!}\right\}\) is shown below. Note the scale on the vertical axis.
Initially, it looks as though the terms increase without bound, but beginning at about \(n=10\) the factorial in the denominator dominates the numerator. Notice that
\begin{equation*} \frac{10^n}{n!} = \frac{10 \times 10 \times 10 \times \cdots \times 10}{1 \times 2 \times 3 \times \cdots \times n}\text{.} \end{equation*}
When \(n \gt 20\text{,}\) we have that \(\frac{10}{n} \lt \frac{1}{2}\) and thus
\begin{align*} \frac{10^n}{n!} \amp = \left(\frac{10 \times 10 \times 10 \times \cdots \times 10}{1 \times 2 \times 3 \times \cdots\times 20}\right) \left(\frac{10 \times 10 \times 10 \times \cdots \times 10}{21 \times 22 \times 23 \times \cdots\times n}\right)\\ \amp = \left(\frac{10^{20}}{20!}\right) \left(\frac{10}{21}\right) \left(\frac{10}{22}\right) \cdots \left(\frac{10}{n}\right)\\ \amp \lt \left(\frac{10^{20}}{20!}\right) \left(\frac{1}{2}\right)^{n-20}\text{.} \end{align*}
Since \(\frac{1}{2}\lt 1\text{,}\) the term \(\left(\frac{1}{2}\right)^{n-20}\) goes to 0 as \(n\) goes to infinity. The fact that \(\frac{10^{20}}{20!}\) is a constant means that \(\frac{10^n}{n!} \to 0\) as \(n \to \infty\text{.}\)
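The rise-then-collapse of these terms is easy to see numerically; a quick Python check (my own addition, not part of the text):

```python
from math import factorial

term = lambda n: 10**n / factorial(n)
print(term(9), term(10))   # the two largest terms (equal, about 2755.7)
print(term(30))            # by n = 30 the terms have collapsed to about 0.0038
```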
There are some special types of sequences for which convergence is sometimes easier to determine.
Bounded Sequence
A sequence \(\{s_n\}\) is bounded if there are numbers \(U\) and \(L\) such that \(L \leq s_n \leq U \) for all \(n\text{.}\) The number \(U\) is called an upper bound, while \(L\) is called a lower bound.
Monotone Sequence
A sequence is monotone if it is either increasing or decreasing. That is, if either \(s_n \leq s_{n+1} \) for all \(n\text{,}\) or \(s_n \geq s_{n+1}\) for all \(n\text{.}\)
Example 7.10
Determine whether the following sequences are bounded, monotone, both, or neither. Then, determine whether the sequence converges or diverges.
\(s_n = \left(\frac{1}{3}\right)^n \)
\(s_n = (-1)^n \)
\(s_n = 2^n +1 \)
\(s_n= \frac{1}{2}s_{n-1}\) for \(n>1\) and \(s_1=30\text{.}\)
\(s_n = (-2)^n \)
\(s_n=\frac{(-1)^n}{n} \)
\(s_n = \left( 1+\frac{1}{n}\right) \)
This sequence is bounded because \(0 \leq \left(\frac{1}{3}\right)^n \leq \frac{1}{3} \) for all \(n\geq 1\text{.}\) It's also decreasing, so it's monotone as well. It converges and has limit 0.
This sequence is bounded because \(-1 \leq (-1)^n \leq 1 \) for all \(n\text{.}\) However, since \(s_n\) oscillates between \(-1\) and \(1\text{,}\) it is not monotone. The sequence diverges because of the oscillation.
\(2^n +1\) is unbounded because it gets arbitrarily large as \(n\) gets large. It is monotone increasing. The sequence diverges because it's unbounded.
This recursive sequence is bounded between \(0\) and \(30\text{,}\) and since each term is half the previous one, it is monotone decreasing. The sequence converges to \(0\text{.}\)
\((-2)^n \) is neither bounded nor monotone. It is not bounded below or above, and is not monotone since it alternates between positive and negative terms. The sequence diverges.
This sequence is not monotone, since it alternates between positive and negative terms. It is bounded between \(-1\) and \(1\) (also between \(-1\) and \(\frac12\)). This sequence converges to 0.
This sequence is both bounded and monotone. It's decreasing and bounded between 1 and 2. It converges with limit 1.
Notice in the examples above, being bounded or monotone alone does not guarantee convergence. However, each example that was both bounded and monotone was convergent. This is true in general.
Convergence of Monotone Bounded Sequences
If a sequence \(\{s_n\}\) is bounded and monotone, then it converges.
Boundedness of Convergent Sequences
If a sequence \(\{s_n\}\) converges, then it is bounded.
Besides these two rules (Monotone + Bounded \(\rightarrow \) Convergent and Convergent \(\rightarrow \) Bounded), any combination of monotone/not monotone, bounded/not bounded, and convergent/not convergent is possible. For example, we've already seen a sequence in Example 7.10 which is bounded but not convergent.
Subsection Summary
A sequence is a list of objects in a specified order. We will typically work with sequences of real numbers. We can think of a sequence as a function from the positive integers to the set of real numbers.
A sequence \(\{s_n\}\) of real numbers converges to a number \(L\) if by choosing \(n\) sufficiently large we can make every value of \(s_k\) as close as we want to \(L\text{,}\) for \(k \ge n\text{.}\)
A sequence diverges if it does not converge.
Sequences are monotone if they are always increasing or always decreasing, and bounded if the terms of the sequence are always between an upper bound and a lower bound. Bounded monotone sequences converge, and convergent sequences are bounded.
Subsection Exercises
1Limits of Five Sequences
2Formula for a Sequence, Given First Terms
3Divergent or Convergent Sequences
4Terms of a Sequence from Sampling a Signal
5Finding the Limit of a Convergent Sequence
Finding limits of convergent sequences can be a challenge. However, there is a useful tool we can adapt from our study of limits of continuous functions at infinity to use to find limits of sequences. We illustrate in this exercise with the example of the sequence
\begin{equation*} \frac{\ln(n)}{n}\text{.} \end{equation*}
Calculate the first 10 terms of this sequence. Based on these calculations, do you think the sequence converges or diverges? Why?
For this sequence, there is a corresponding continuous function \(f\) defined by
\begin{equation*} f(x) = \frac{\ln(x)}{x}\text{.} \end{equation*}
Draw the graph of \(f(x)\) on the interval \([0,10]\) and then plot the entries of the sequence on the graph. What conclusion do you think we can draw about the sequence \(\left\{\frac{\ln(n)}{n}\right\}\) if \(\lim_{x \to \infty} f(x) = L\text{?}\) Explain.
Note that \(f(x)\) has the indeterminate form \(\frac{\infty}{\infty}\) as \(x\) goes to infinity. What idea from differential calculus can we use to calculate \(\lim_{x \to \infty} f(x)\text{?}\) Use this method to find \(\lim_{x \to \infty} f(x)\text{.}\) What, then, is \(\lim_{n \to \infty} \frac{\ln(n)}{n}\text{?}\)
6The Formula for the Amount in a Bank Account
We return to the example begun in Preview Activity7.2 to see how to derive the formula for the amount of money in an account at a given time. We do this in a general setting. Suppose you invest \(P\) dollars (called the principal) in an account paying \(r\%\) interest compounded monthly. In the first month you will receive \(\frac{r}{12}\) (here \(r\) is in decimal form; e.g., if we have \(8\%\) interest, we write \(\frac{0.08}{12}\)) of the principal \(P\) in interest, so you earn
\begin{equation*} P\left(\frac{r}{12}\right) \end{equation*}
dollars in interest. Assume that you reinvest all interest. Then at the end of the first month your account will contain the original principal \(P\) plus the interest, or a total of
\begin{equation*} P_1 = P + P\left(\frac{r}{12}\right) = P\left( 1 + \frac{r}{12}\right) \end{equation*}
dollars.
Given that your principal is now \(P_1\) dollars, how much interest will you earn in the second month? If \(P_2\) is the total amount of money in your account at the end of the second month, explain why
\begin{equation*} P_2 = P_1\left( 1 + \frac{r}{12}\right) = P\left( 1 + \frac{r}{12}\right)^2\text{.} \end{equation*}
Find a formula for \(P_3\text{,}\) the total amount of money in the account at the end of the third month in terms of the original investment \(P\text{.}\)
There is a pattern to these calculations. Let \(P_n\) be the total amount of money in the account at the end of the \(n\)th month in terms of the original investment \(P\text{.}\) Find a formula for \(P_n\text{.}\)
7Half-life of GLP-1
Sequences have many applications in mathematics and the sciences. In a recent paper (Hui H, Farilla L, Merkel P, Perfetti R. The short half-life of glucagon-like peptide-1 in plasma does not reflect its long-lasting beneficial effects, Eur J Endocrinol 2002 Jun;146(6):863-9) the authors write
The incretin hormone glucagon-like peptide-1 (GLP-1) is capable of ameliorating glucose-dependent insulin secretion in subjects with diabetes. However, its very short half-life (1.5-5 min) in plasma represents a major limitation for its use in the clinical setting.
The half-life of GLP-1 is the time it takes for half of the hormone to decay in its medium. For this exercise, assume the half-life of GLP-1 is 5 minutes. So if \(A\) is the amount of GLP-1 in plasma at some time \(t\text{,}\) then only \(\frac{A}{2}\) of the hormone will be present after \(t+5\) minutes. Suppose \(A_0 = 100\) grams of the hormone are initially present in plasma.
Let \(A_1\) be the amount of GLP-1 present after 5 minutes. Find the value of \(A_1\text{.}\)
Let \(A_2\) be the amount of GLP-1 present after 10 minutes. Find the value of \(A_2\text{.}\)
Let \(A_n\) be the amount of GLP-1 present after \(5n\) minutes. Find a formula for \(A_n\text{.}\)
Does the sequence \(\{A_n\}\) converge or diverge? If the sequence converges, find its limit and explain why this value makes sense in the context of this problem.
Determine the number of minutes it takes until the amount of GLP-1 in plasma is 1 gram.
8Sampling Continuous Data
Continuous data is the basis for analog information, like music stored on old cassette tapes or vinyl records. A digital signal like on a CD or MP3 file is obtained by sampling an analog signal at some regular time interval and storing that information. For example, the sampling rate of a compact disk is 44,100 samples per second. So a digital recording is only an approximation of the actual analog information. Digital information can be manipulated in many useful ways that allow for, among other things, noisy signals to be cleaned up and large collections of information to be compressed and stored in much smaller space. While we won't investigate these techniques in this chapter, this exercise is intended to give an idea of the importance of discrete (digital) techniques.
Let \(f\) be the continuous function defined by \(f(x) = \sin(4x)\) on the interval \([0,10]\text{.}\) A graph of \(f\) is shown in Figure 7.11.
Figure 7.11: The graph of \(f(x) = \sin(4x)\) on the interval \([0,10]\)
We approximate \(f\) by sampling, that is by partitioning the interval \([0,10]\) into uniform subintervals and recording the values of \(f\) at the endpoints.
Ineffective sampling can lead to several problems in reproducing the original signal. As an example, partition the interval \([0,10]\) into 8 equal length subintervals and create a list of points (the sample) using the endpoints of each subinterval. Plot your sample on the graph of \(f\) in Figure 7.11. What can you say about the period of your sample as compared to the period of the original function?
The sampling rate is the number of samples of a signal taken per second. As part (a) illustrates, sampling at too small a rate can cause serious problems with reproducing the original signal (this problem of inefficient sampling leading to an inaccurate approximation is called aliasing). Because human perception is limited, a continuous signal can be replaced with a digital one without any perceived loss of information, provided the signal is sampled appropriately. An elegant theorem called the Nyquist-Shannon Sampling Theorem provides the lowest rate at which a signal can be sampled (called the Nyquist rate) without such a loss of information. The theorem states that we should sample at double the maximum desired frequency so that every cycle of the original signal will be sampled at at least two points. Recall that the frequency of a sinusoidal function is the reciprocal of the period. Identify the frequency of the function \(f\) and determine the number of partitions of the interval \([0,10]\) that give us the Nyquist rate.
Humans cannot typically pick up signals above 20 kHz. Explain why, then, that information on a compact disk is sampled at 44,100 Hz.
9Deciding if a Sequence is Bounded or Monotone
For each of the sequences below, decide if the sequence is bounded and if it is monotone. (It may help you to list out the first few terms of the sequence.)
\(a_n= 7- (\frac 1 5)^n\)
\(b_n= 4\sin(n)\)
\(c_n= n (-1)^n\)
\(d_n= n^2+n\)
10Writing Recursive Sequences in Explicit Form
For each of the recursive sequences below, list the first 6 terms, then write an explicit expression for the \(n\)th term. Check your work by looking up the sequence on the Online Encyclopedia of Integer Sequences at http://oeis.org/.
\(a_n= a_{n-1}+3\) for \(n>1\) and \(a_1=4\text{.}\)
\(b_n= nb_{n-1}\) for \(n>1\) and \(b_1=1\text{.}\)
\(c_n= \frac 1 2 c_{n-1}\) for \(n>1\) and \(c_1=5\text{.}\)
\(d_n= d_{n-1}+4n-2\) for \(n>1\) and \(d_1=2\text{.}\) | CommonCrawl |
A Quantitative Way to Look at Global Warming
We can look at global warming in terms of the social cost of carbon. If we translate the claims of the various sides into claims of beliefs about the social cost of carbon, we can translate "97% of scientists agree global warming is a problem" to "97% of scientists agree the social cost of carbon is greater than zero." This is not an argument against someone claiming that the social cost of carbon is only $10 per ton of carbon. In the other direction, "the activists won't stop until they collapse Western Civilization" can be translated into "the activists won't stop until they impose a carbon tax of over $1000 per ton of carbon." This is not an argument against someone claiming that the social cost of carbon is $100 per ton of carbon. Can we turn the argument into a discussion of which social cost of carbon is correct?
Disclaimer: It looks like some of the calculations in my last post on the topic confused cost per ton of carbon and cost per ton of CO2. You may want to do your own calculations.
Addendum: An analogy that just occurred to me. You can think of the cap-and-trade vs. carbon taxes controversy as a controversy about dealing with negative externalities by means of commanding that a quota be filled vs. monetary payments. In other words, it resembles the 1960s/1970s controversy of conscription vs. paying enough for a volunteer army (which was a controversy about dealing with positive externalities by means of commanding that a quota be filled vs. monetary payments).
Read First, Understand Later
I have often found that I now understand something I read or saw years ago and did not understand then. For example:
In Childhood's End by Arthur C. Clarke, Karellen's initial speech was in "English so perfect that the controversy it began was to rage across the Atlantic for a generation." More recently, I realized the controversy was whether he was using a British or American accent.
Prelude to Space, also by Clarke, mentioned "Britain's last millionaire." At the time, it was expected that confiscatory income taxes would be the wave of the future and eliminate extremes of wealth. (My earlier post on confiscatory income taxes and SF is here.) By the time I read it, it seemed a comment on de-industrialization instead.
Yet another Clarke story, "Silence Please" mentions the well-known composer Edward England. At the time, I hadn't heard of Benjamin Britten.
The People of the Wind by Poul Anderson mentioned that there was a human precedent for the political structure of the Ythrian Domain but it was a bloody failure. Much later I found that it resembled the constitution of the late, unlamented Soviet Union. (A similar system was used on the planet Alphanor in The Star King by Jack Vance.)
In the story "5,271,009" by Alfred Bester, the main character was temporarily restored to sanity by what was called "niacin plus carbon dioxide." I later realized that it was a justification of smoking tobacco. Niacin is also known as nicotinic acid.
In "Will You Wait" by Alfred Bester, the Devil was running for Congress. Alfred Bester's Congressman at the time was John Lindsay.
In "The Foundling Stars" by Hal Clement, "'Nineteen decimals' had been a proverbial standard of accuracy for well over a century;" and that's because it's 64 bits (currently known as double precision).
The ending of "The Cabin Boy" by Damon Knight turned out to be an allusion to a sea chantey I will not repeat here.
In Country Lawyer by Bellamy Partridge, the career of a country lawyer included the last will and testament of a lady of ill repute. At the time I read it, I had no idea of what was meant by "Everybody in town knew what she was, thought of course some of the men knew better than others."
In Mathematics and the Imagination by Edward Kasner and James R. Newman, the examples of the common use of the word "probably" included the following:
Money quickly, abundantly, and mysteriously earned during prohibition (it was judged, without consulting Bradstreet's) was probably the fruit of bootlegging.
This was probably an allusion to Joe Kennedy.
When I read Grimm's Story by Vernor Vinge, I didn't recognize the pun in the sentence "Just ventilating the structure required the services of twenty draft animals."
The episodes of Peabody's Improbable History ended in puns that I was too young to understand the first time I watched it. For example, in one episode, King Charles get his head stuck in a beehive because everyone knows he was bee-headed.
The first time I read 1984, I didn't realize that the name "O'Brien" meant he was a member of a formerly oppressed minority.
1984 also included allusions to stuff I hadn't heard about. It wasn't until I read Darkness at Noon that I understood some of the allusions.
Similarly, Fourth Mansions by R. A. Lafferty included allusions to Teilhard de Chardin I didn't understand at the time.
When I saw Ghostbusters, I didn't realize it was based on H. P. Lovecraft's work.
When I saw the "Lemming of the BDS" episode of Monty Python, I hadn't heard of Marathon Man.
Suggestions for the Museum of Math
At long last, I finally got around to visiting the Museum of Math in Manhattan. I have a few suggestions for exhibits:
Zooms of the Mandelbrot set.
Self-inverse fractals
Escher's Circle limits
The Life cellular automaton
Green sticks in the Zometools exhibit (those make a regular octahedron possible)
Four-dimensional geometry
Surreal numbers
Last, but not least, countable ordinals
It's the Real World after All
It looks like p-adic numbers are actually useful for something after all.
The p-adic numbers are the basis for some of the spaces I discussed here:
In some of the spaces studied by mathematicians, every point of a
sphere is a center.
"There seems no center because it's all center." --- C.S. Lewis
A Blown Mind Is a Terrible Thing to Waste
The following bulshytt (example here) has been going around:
If the multiverse theory is true, then there must be a universe where the multiverse theory is false.
First, "more than one" does not mean "infinite." Second, "infinite" does not mean "anything physically possible." Third, "anything physically possible" does not mean"anything you can think of."
It makes just as much sense to say:
If the multiple-planets theory is true, then there must be a planet where the multiple-planets theory is false.
What does this imply about the common reaction: "mind = blown?"
I was reminded of this by Mark Chu-Carroll's dissection of a similar piece of nonsense.
A Brief Note on the "War on Boys"
Is the "War on Boys" occurring in the schools actually a problem? Higher education is not for everybody and one of the tasks of schools is to ensure that unscholarly people stop wasting their time. The traditional school does that very well as far as boys are concerned.
The problem is that the traditional filter against unscholarly boys does not work for unscholarly girls. Even unscholarly girls are willing to sit in one place and do desk work. Maybe we need a "War on Girls" as well.
On the other hand, the emergency back-up filter against unscholarly girls is working just fine. Unscholarly girls who insist on higher education anyway are shunted into degrees that are easily identifiable as meaningless, similar to unscholarly boys on the football team. (If you can reliably identify a 'side' you are supposed to take in class, the degree is meaningless.)
Addendum: Is the emergency back-up filter working too well? Could it have caused the decline in women computer science grads? Maybe it needs debugging…
There was actually a professor who said that professors are overpaid.
The original version of this said "underpaid." Apparently, my fingers automatically typed "dog bites man."
A Possible "Trigger Warning"
WARNING: Contains literature. May induce thinking and other undesirable consequences. Read at your own risk!
Place on all books/ebooks/whatever.
Addendum: Rich Lowry beat me to it.
A Problem with Spurious Correlations
The correlations at Spurious Correlations all (or at least the first two pages) seem to be time series. Shouldn't there be geographic correlations as well? For example, what about the negative correlation between UFO sightings and abortion rates?
Update on a Possible MathJax Bug
The commands \shoveleft and \shoveright do work, but only inside {multline}. They don't work in MathJax inside {array}.
Commonly-Accepted Premises I Don't Share
Have you ever gotten into a pointless argument with someone who regarded one of his/her essential premises as obvious enough to not be worth defending or even stating? I've started making a list of such premises that I don't agree with. For example:
Any behavior that is caused is incompatible with free will. In other words, if you do something for a reason, you are not free.
History moves one way.
'Government' is simply the name we give to the things we choose to do together.
Nobody has an opinion on his/her own. Everything is thought by collectives of one form or another.
The right-wing consists of people who have cowardly given in to authority whereas the Enlightened Ones have the guts to talk back to The Man.
Central planning is simply a plan made by one person.
Everybody knows 'regulatory capture' is what happens when businesses influence the government to regulate less.
If 'everybody knows' something, it must be true.
In the preceding line, 'everybody' only applies to those willing to go along with what 'everybody knows.'
If a sufficiently-large number of people do something, they cannot be blamed. Either they are right or they have no choice.
Human are animals. (Since many sane people also believe this, please note we're plants.)
The amount of government is approximately constant. Regulating X means deregulating Y. Adding regulations will not strengthen the government.
Yes. I am adding to this.
The Science Is Settled
Colony Collapse Disorder is caused by kids on dope.
I'm just waiting for @trutherbot to mention the alleged bee crisis…
Is This Supposed to Be a Criticism?
Apparently, nativists have recently tried attributing pro-immigration opinions to Asperger's syndrome. Does that mean open-borders people are responsible for the Internet the nativists use to spread anti-immigration opinions?
Explaining the Increasing Authoritarianism on the Left
The increasing authoritarianism on the left reminded me of The Sociological Counterpart of Cheyne–Stokes Respiration from Stand on Zanzibar by John Brunner. It might be due to the decline in 'liberaltarianism.' If leftists have fewer libertarian allies who can say "Are you nuts?" in response to authoritarian suggestions, those suggestions may be taken more seriously.
MathJax vs. LaTeX
The following sounded familiar:
You discover that one day, some idiot decided that since another idiot decided that 1/0 should equal infinity, they could just use that as a shorthand for "Infinity" when simplifying their code. Then a non-idiot rightly decided that this was idiotic, which is what the original idiot should have decided, but since he didn't, the non-idiot decided to be a dick and make this a failing error in his new compiler.
Meanwhile in my day job, I recently found that MathJax does not add equation numbers to blank lines. For example, the following will have an equation number in \({\rm\LaTeX}\) but not in MathJax:
\begin{gather}A=A\nonumber\\\end{gather}
\begin{gather}A=A\nonumber\\\end{gather}
It was easiest to change it to the following to get consistent results:
\begin{gather}A=A\nonumber\\\text{}\end{gather}
\begin{gather}A=A\nonumber\\\text{}\end{gather}
I don't think it's quite in the idiot category but it's still annoying.
I disagree with the article on one point. I pour myself caffeine-free Diet Coke instead of Scotch.
A Rule for Reading Neoreactionaries
Just pretend you're reading something from a leftist. Nativists, for example, sound amazingly similar to the anti-overpopulation people. Paleo dieters sound like the natural-food fans. Human biodiversity sounds like the writings of environmentalists trumpeting the latest research on the hazard of the month and then ignoring yesterday's similar research that went nowhere. It looks like collectivists can't help imitating each other.
Crimea and Brendan Eich
You can think of both the Crimea invasion and the Eich firing as an attempt by a side that had recently lost a battle or two to look more powerful by picking a fight it could win. The Crimea invasion occurred shortly after Ukraine refused to be a puppet. Eich was fired shortly after Phil Robertson was reinstated.
The Paleo Diet Has Jumped the Proverbial Shark
Now leftists are saying we should eat more protein and they're blaming the carbohydrate+umami-enhancer diet on business.
Meet the new fad, same as the old fad.
It's Ba-ack!
I recently mentioned that it's been years, or possibly decades, since the last time I had heard the claim that paying higher wages is good for the economy because it enables workers to buy more. Well… It's ba-ack! To make matters worse, the nativist commenters at Instapundit also think we can provide prosperity by making labor artificially more expensive. | CommonCrawl |
Development and validation of a classification approach for extracting severity automatically from electronic health records
Mary Regina Boland 1,2, Nicholas P Tatonetti 1,2,3,4 & George Hripcsak 1,2
Journal of Biomedical Semantics volume 6, Article number: 14 (2015)
Electronic Health Records (EHRs) contain a wealth of information useful for studying clinical phenotype-genotype relationships. Severity is important for distinguishing among phenotypes; however, other severity indices classify patient-level severity (e.g., mild vs. acute dermatitis) rather than phenotype-level severity (e.g., acne vs. myocardial infarction). Phenotype-level severity is independent of the individual patient's state and is relative to other phenotypes. Further, phenotype-level severity does not change based on the individual patient. For example, acne is mild at the phenotype-level and relative to other phenotypes. Therefore, a given patient may have a severe form of acne (this is the patient-level severity), but this does not affect its overall designation as a mild phenotype at the phenotype-level.
We present a method for classifying severity at the phenotype-level that uses the Systemized Nomenclature of Medicine – Clinical Terms. Our method is called the Classification Approach for Extracting Severity Automatically from Electronic Health Records (CAESAR). CAESAR combines multiple severity measures – number of comorbidities, medications, procedures, cost, treatment time, and a proportional index term. CAESAR employs a random forest algorithm and these severity measures to discriminate between severe and mild phenotypes.
Using a random forest algorithm and these severity measures as input, CAESAR differentiates between severe and mild phenotypes (sensitivity = 91.67, specificity = 77.78) when compared to a manually evaluated reference standard (k = 0.716).
CAESAR enables researchers to measure phenotype severity from EHRs to identify phenotypes that are important for comparative effectiveness research.
Recently, the Institute of Medicine has stressed the importance of Comparative Effectiveness Research (CER) in informing physician decision-making [1]. As a result, many national and international organizations were formed to study clinically meaningful Health Outcomes of Interest (HOIs). This included the Observational Medical Outcomes Partnership (OMOP), which standardized HOI identification and extraction from electronic data sources for fewer than 50 phenotypes [2]. The Electronic Medical Records and Genomics Network (eMERGE) [3] also classified some 20 phenotypes, which were used to perform Phenome-Wide Association Studies (PheWAS) [4]. However, a short list of phenotypes of interest remains lacking in part because of complexity in defining the term phenotype for use in Electronic Health Records (EHRs) and genetics [5].
EHRs contain a wealth of information for studying phenotypes including longitudinal health information from millions of patients. Extracting phenotypes from EHRs involves many EHR-specific complexities including data sparseness, low data quality [6], bias [7], and healthcare process effects [8].
Many machine-learning techniques that correlate EHR phenotypes with genotypes encounter large false positive rates [3]. Multiple hypothesis correction methods aim to reduce the false positive rate. However, these methods strongly penalize for a large phenotype selection space. A method is needed that efficiently reduces the phenotype selection space to only include important phenotypes. This would reduce the number of false positives in our results and allow us to prioritize phenotypes for CER and rank them by severity.
To extract phenotypes from EHRs, a specialized ontology or terminology is needed that describes phenotypes, their subtypes and the various relationships between phenotypes. Several ontologies/terminologies have been developed for studying human phenotypes including the Human Phenotype Ontology (HPO) [9]. The HPO contains phenotypes with at least some hereditary component, e.g., Gaucher disease. However, EHRs contain phenotypes that are recorded during the clinical encounter that are not necessarily hereditary. To capture a patient's phenotype from EHRs, we will utilize an ontology specifically designed for phenotype representation in EHRs called the Systematized Nomenclature of Medicine – Clinical Terms (SNOMED-CT) [10,11]. SNOMED-CT captures phenotypes from EHRs, including injuries that are not included in the HPO. Furthermore, SNOMED-CT can be used to capture more clinical content than International Classification of Diseases, version 9 (ICD-9) codes [12], making SNOMED-CT ideal for phenotype classification. Using SNOMED-CT enables development of a standardized approach that conforms to OMOP's guidelines promoting data reuse.
Robust methods are needed that address these challenges and reuse existing standards to support data sharing across institutions. This would propel our understanding of phenotypes and allow for robust CER to improve clinical care. This would also help pave the way for truly translational discoveries and allow genotype-phenotype associations to be explored for clinically important phenotypes of interest [13].
An important component when studying phenotypes is phenotype severity. Green et al. demonstrate that a patient's disease severity at hospital admission was crucial [14] when analyzing phenotype severity at the patient-level. We are interested in classifying phenotypes as either severe or mild at the phenotype-level, which differs from the vast literature on patient-specific severity. Classifying severity at the phenotype-level involves distinguishing acne as a mild condition from myocardial infarction as a severe condition. Contrastingly, patient-level severity assesses whether a given patient has a mild or severe form of a phenotype (e.g., acne). Importantly, phenotype-level severity is independent of the individual patient's state and is relative to other phenotypes (e.g., acne vs. myocardial infarction). Further, phenotype-level severity does not change based on the individual patient. For example, acne is mild at the phenotype-level, which is relative to other phenotypes. Therefore, a given patient may have a severe form of acne (i.e., patient-level severity = severe), but the overall phenotype-level severity is mild because phenotype-level severity is relative to other phenotypes and does not change based on an individual patient's patient-level severity.
Studying phenotype severity is complex. The plethora of medical conditions is mirrored by an equally diverse set of severity indices that run the full range of medical condition complexity. For example, there is a severity index specifically designed for nail psoriasis [15], insomnia [16], addiction [17], and even fecal incontinence [18]. However, each of these indices focuses on classifying patients as being either a severe or mild case of a given condition (e.g., psoriasis). They do not capture the difference at the phenotype-level.
Other researchers developed methods to study patient-specific phenotype severity at the organismal level. For example, the Severity of Illness Index assesses patient health using seven separate dimensions [19] consisting of: 1) the stage of the principal diagnosis at time of admission; 2) complications; 3) interactions (i.e., the number of patient comorbidities that are unrelated to the principal diagnosis); 4) dependency (i.e., the amount of care required that is above the ordinary); 5) non-operating room procedures (i.e., the type and number of procedures performed); 6) rate of response to therapy; and 7) remission of acute symptoms directly related to admission.
The Severity of Illness Index is useful for characterizing patients as severe or mild types of a given disease phenotype. However, it does not measure severity at the phenotype-level (e.g., acne vs. myocardial infarction), which is required to reduce the phenotype selection space to only the most severe phenotypes for CER.
In this paper, we describe the development and validation of a Classification Approach for Extracting Severity Automatically from Electronic Health Records (CAESAR). CAESAR incorporates the spirit of the Severity of Illness Index, but measures phenotype-level severity rather than patient-level severity. CAESAR was designed specifically for use with EHR-derived phenotypes.
Measuring severity
We used five EHR-specific measures of condition severity that are related to the 7 dimensions from Horn's patient-level severity index [19] because EHRs differ from research databases [20]. Columbia University Medical Center's (CUMC) Institutional Review Board approved this study.
Condition treatment time can be indicative of severity and so it was included as a severity measure. Treatment time is particularly indicative of severity for acute conditions, e.g., fractures, wounds or burns, because minor (less severe) fractures often heal more rapidly than major fractures (more severe). However, treatment time is also dependent on the chronicity of the disease [21], which is separate from severity. Treatment time can also have other effects when recorded in EHRs [22-24].
Because hospital duration time can be influenced by many factors, e.g., patients' other comorbidities, we decided to analyze the condition treatment time. While inter-dependent, hospital duration time is typically a subset of the entire condition treatment time (which can include multiple hospital visits).
Number of comorbidities is another useful measure for assessing phenotype severity. A similar measure is found in the Severity of Illness Index that measures the number of other conditions or problems that a given patient has at the time of their principal diagnosis. Our EHR-specific version looks at the number of distinct comorbidities per patient with a given phenotype and then averages across all of the individuals in the database with that phenotype. This average tells us the comorbidity burden associated with a given phenotype. An example is given in Figure 1 to illustrate how the number of comorbidities, the number of medications, and the treatment time can differ by phenotype severity. Note that 'acne' is an atypical mild phenotype, as its treatment time is longer than that of 'myocardial infarction', while most mild phenotypes have shorter treatment times. Importantly, chronicity also affects treatment time, which can negate the effect that severity has on treatment time (Figure 1).
Example showing differences between ehr manifestations of severe (Myocardial Infarction or MI) and mild (Acne) phenotypes. Phenotype-level differences between severe and mild phenotypes are shown in Figure 1. Notice that there is very little difference between the two phenotypes if you only look at the number of procedures, comorbidities or prescribed medications. Therefore, if you use any of those three measures alone to identify severity, it would be difficult. However, if cost is used as a proxy for severity then the correct classification would be made (myocardial infarction is more severe than acne and also costs more). But if you use the treatment length then an incorrect classification of the phenotype-level severity will result (acne takes longer to treat as a result of chronicity, and therefore longer treatment length is not equal to increased phenotype-level severity). This underscores the importance of using multiple measures together as a proxy for severity, which is the approach employed by CAESAR.
Number of medications is another useful measure for assessing severity. This measure is related to the previous measure (i.e., the number of comorbidities). However, it differs because some phenotypes have a large number of medications, but also a small number of comorbidities, e.g., burn injuries. Therefore, in many cases these measures will be similar but in other important instances they will differ.
Number of procedures is also based on a measure from the Severity of Illness Index. Because we are focused on phenotype-level severity, we computed an average number of procedures associated with each phenotype. First, we extracted the number of procedures performed per phenotype and per patient. Then we computed the average across all patients in our database yielding the average number of procedures per phenotype.
Cost to treat phenotype is a commonly used metric for assessing severity [25]. The Centers for Medicare and Medicaid Services released the billable rate for each procedure code per minute [26]. They also released the number of minutes each procedure typically requires. Combining these data allows us to calculate the billable amount for a given procedure [26]. The billable rates are from 2004 and they are for each Healthcare Common Procedure Coding System (HCPCS) code [26].
Since these data are only available for procedure codes (HCPCS codes are procedure codes) we calculated the total cost per patient using the procedures they were given. We determined the cost per phenotype by taking the average cost across all patients with that phenotype.
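As a minimal illustration of this aggregation step (a sketch only, not the authors' code), the average cost per phenotype can be computed in base R from a per-patient procedure table; the data frame and column names below are hypothetical.

```r
# 'procedures' is a hypothetical data frame with one row per billed procedure:
# phenotype_id, patient_id, rate_per_minute, typical_minutes (from the 2004
# CMS files for HCPCS codes)
procedures$proc_cost <- procedures$rate_per_minute * procedures$typical_minutes

# Total procedure cost per patient for each phenotype
cost_per_patient <- aggregate(proc_cost ~ phenotype_id + patient_id,
                              data = procedures, FUN = sum)

# Average across all patients with the phenotype = the cost measure per phenotype
cost_per_phenotype <- aggregate(proc_cost ~ phenotype_id,
                                data = cost_per_patient, FUN = mean)
```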
Measures of phenotype severity and E-PSI (EHR-phenotype severity index)
We first calculated the proportion of each measure. The sum of the proportions (there are five proportions – one for each measure) was divided by the total number of proportions (i.e., five). This final value is E-PSI, an index term based on all 5 measures given in Equation 1 where x is a phenotype. Therefore, E-PSI is a proportional index that incorporates treatment time, cost, number of medications, procedures, and comorbidities.
Equation 1:

$$ \text{E-PSI}(\text{Phenotype } x) = \frac{1}{5}\left(\frac{x_{cost}}{\max(cost)} + \frac{x_{treatment\ length}}{\max(treatment\ length)} + \frac{x_{comorbidities}}{\max(comorbidities)} + \frac{x_{medications}}{\max(medications)} + \frac{x_{procedures}}{\max(procedures)}\right) $$
For example the treatment time of 'Hemoglobin SS disease with crisis' is 1406 days. We divide this by the max treatment length of any phenotype, which is also 1406 days. This gives us the proportional treatment length of the disease or 1.00. Likewise, proportions are calculated for each of the five measures. The sum of the proportions is divided by the total number of proportions, or 5. This is E-PSI, the proportional index, for the phenotype.
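A minimal R sketch of this calculation is shown below, assuming a data frame 'measures' with one row per phenotype and one column per severity measure; the column names are illustrative and not taken from the paper.

```r
# 'measures': hypothetical data frame, one row per phenotype, with columns
# cost, treatment_length, comorbidities, medications, procedures
cols <- c("cost", "treatment_length", "comorbidities", "medications", "procedures")

# Proportion for each measure = value divided by the maximum over all phenotypes
proportions <- sweep(measures[, cols], 2, apply(measures[, cols], 2, max), "/")

# E-PSI = average of the five proportions (Equation 1)
measures$epsi <- rowMeans(proportions)
```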
We used Independent Components Analysis (ICA) [27] to visualize the relationship between E-PSI and each phenotype severity measure. Computations were performed in R (v.3.1.1).
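The paper does not name the ICA implementation used; as an assumption, the fastICA package is one common choice in R. The sketch below, reusing the hypothetical 'proportions' and 'measures' objects from the previous sketch, shows how a two-component projection such as the one behind Figure 3 could be produced.

```r
library(fastICA)  # assumed implementation; the paper only cites ICA generally [27]

set.seed(1)
features <- cbind(proportions, epsi = measures$epsi)  # 5 measures + E-PSI
ica <- fastICA(scale(features), n.comp = 2)

# Plot the two independent components; point size encodes cost, and points
# could additionally be colored by E-PSI, as in Figure 3
plot(ica$S[, 1], ica$S[, 2],
     cex = 0.5 + measures$cost / max(measures$cost),
     xlab = "IC1", ylab = "IC2")
```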
Reference standard development and evaluation
Development of the Reference Standard involved using the CUMC Clinical Data Warehouse that was transformed to the Clinical Data Model (CDM) outlined by the OMOP consortium [2]. All low prevalence phenotypes were removed, leaving behind a set of 4,683 phenotypes (prevalence of at least 0.0001). Because we are studying phenotypes manifested during the clinical encounter, we treat each distinct SNOMED-CT code as a unique phenotype. This was done because each SNOMED-CT code indicates a unique aspect of the patient state [28].
To compare results between "mild" and "severe" phenotypes, we required a reference-standard set of SNOMED-CT codes that were labeled as "mild" and "severe". In addition, the set must be un-biased towards a particular clinical subfield (e.g., oncology or nephrology). Therefore, we developed a reference-standard set of 516 phenotypes (out of the 4,683 phenotype super-set) using a set of heuristics. All malignant cancers and accidents were labeled as "severe"; all ulcers were labeled as "mild"; all carcinomas in situ were labeled as "mild"; and most labor and delivery-related phenotypes were labeled as "mild". Since the reference standard was created manually, the final judgment was left to the ontology expert regarding labeling a given phenotype as "mild" or "severe". However, the ontology expert consulted with medical experts to reduce ambiguity.
Evaluation of the Reference Standard required soliciting volunteers to manually evaluate a subset of the reference standard (N = 7). Three of the evaluators held a Medical Degree (MD) and had completed residency; the other four were three graduate students with informatics training and one post-doctoral scientist. We asked each evaluator to assign phenotypes as either mild or severe. We provided each evaluator with instructions for distinguishing between mild and severe phenotypes. For example, "severe conditions are conditions that are life-threatening (e.g., stroke is immediately life-threatening) or permanently disabling (congenital conditions are generally considered severe unless they are easily corrected). Mild conditions may still require treatment (e.g., benign neoplasms and cysts are generally considered mild and not severe as they may not require surgery)." To ascertain the confidence that each evaluator had in making their severity assessments, we asked evaluators to denote their confidence in each severity assignment using a modified Likert scale [29] with the following 3 choices: 'very confident', 'somewhat confident' and 'not confident'. All evaluators were provided with two coded examples and 100 randomly extracted phenotypes (from the reference standard). This evaluation set of 100 phenotypes contained 50 mild and 50 severe (labels from the reference standard). Pair-wise agreement between each evaluator and the reference standard was calculated using Cohen's kappa [30,31]. Inter-rater agreement among all evaluators and the reference standard was calculated using Fleiss's kappa [32,33].
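These agreement statistics can be computed with the irr package cited above [33]; the sketch below assumes a 100 × 6 data frame of 'mild'/'severe' labels, one column per evaluator plus one for the reference standard, which is an assumed layout rather than the authors' actual data structure.

```r
library(irr)

# 'ratings': hypothetical data frame, 100 phenotypes x 6 columns
# (evaluator1 ... evaluator5, reference), each entry "mild" or "severe"

# Pairwise Cohen's kappa between one evaluator and the reference standard
kappa2(ratings[, c("evaluator1", "reference")])

# Fleiss's kappa across all five evaluators and the reference standard
kappam.fleiss(ratings)
```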
Evaluation of Measures at Capturing Severity involved comparing results from "mild" and "severe" phenotypes for each severity measure. Severity measures were not normally distributed so non-parametric measures (i.e., quartiles) were used for comparisons.
Learning phenotype-level severity classes
Development of the random forest classifier
CAESAR involved the unsupervised learning of classes by calculating a proximity matrix [34]. The scaled 1-proximity for each data point (in this case a phenotype) was plotted [34]. The reference standard result was then overlaid on top to determine if there was any significant clustering based on a phenotype's class (in this case severe or mild). Clusters of severe and mild phenotypes can be used to set demarcation points for labeling a phenotype.
Using the proximity matrix also allows for discrimination among levels of severity, in addition to the binary classification of severe vs. mild. We used the randomForest package (v.4.6-10) in R (v.3.1.1) for calculations [35] and we used 1000 trees in our model. The random forest classifier, or CAESAR, takes all 5 severity measures and E-PSI (the proportional index term) as input for the model.
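A minimal sketch of this step is given below, assuming the hypothetical 'features' data frame from the sketches above (the five severity measures plus E-PSI, one row per phenotype). Omitting the response puts randomForest into unsupervised mode, and the scaled 1-proximity can then be obtained with classical multidimensional scaling.

```r
library(randomForest)

set.seed(1)
# Unsupervised random forest (no response supplied), 1000 trees,
# returning the phenotype-by-phenotype proximity matrix
rf_unsup <- randomForest(x = features, ntree = 1000, proximity = TRUE)

# Two dimensions of the scaled 1 - proximity, as plotted in Figure 6
coords <- cmdscale(1 - rf_unsup$proximity, k = 2)
plot(coords, xlab = "Dimension 1", ylab = "Dimension 2")
```

Severe and mild labels from the reference standard can then be overlaid on 'coords' to locate the "mild" (lower left) and "severe" (lower right) regions described in the Results.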
Evaluation of the random forest classifier
CAESAR was evaluated using the 516-phenotype reference standard. Sensitivity and specificity were used to assess CAESAR's performance. The class errors for severe and mild were measured using the randomForest package [35] and compared against the out-of-bag (OOB) error rate. The randomForest algorithm uses the Gini index to measure node impurity for classification trees. The Gini impurity measure sums the probability of an item being chosen times the probability of misclassifying that item. We can assess the importance of each variable (i.e., the 5 measures and E-PSI) included in CAESAR by looking at the mean decrease in Gini. Variables with larger decreases in Gini are more important to include in CAESAR for accurate prediction.
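The paper does not spell out exactly how the per-class error curves and Gini importances were obtained; one plausible reconstruction (an assumption, not the authors' code) is a supervised random forest fit on the 516 labeled reference-standard phenotypes.

```r
# 'ref': hypothetical data frame of the 516 reference-standard phenotypes with
# the measure columns used above plus a 'severity' factor (mild/severe)
set.seed(1)
rf_sup <- randomForest(severity ~ cost + treatment_length + comorbidities +
                         medications + procedures + epsi,
                       data = ref, ntree = 1000)

plot(rf_sup)        # OOB and per-class error vs. number of trees (cf. Figure 5)
importance(rf_sup)  # mean decrease in Gini for each variable
rf_sup$confusion    # confusion matrix, from which sensitivity/specificity follow
```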
Assessment of phenotype severity
Severe phenotypes in general are more prevalent in EHRs because in-patient records contain "sicker" individuals when compared to the general population, which can introduce something called the Berkson bias [36]. However, in the general population mild phenotypes are often more prevalent than severe phenotypes.
For condition/phenotype information we used data from CUMC EHRs, which was initially recorded using ICD-9 codes. These ICD-9 codes were mapped to SNOMED-CT codes using the OMOP CDM v.4 [2]. For this paper, we used all phenotypes (each phenotype being a unique SNOMED-CT code) with prevalence of at least 0.0001 in our hospital database. This constituted 4,683 phenotypes. We then analyzed the distribution of each of the five measures and E-PSI among the 4,683 phenotypes. Figure 2 shows the correlation matrix among the 5 severity measures and E-PSI.
Severity measure correlation matrix. Histograms of each severity measure shown (along the diagonal) with pairwise correlation graphs (lower triangle) and correlation coefficients and p-values (upper triangle). Notice the condition length is the least correlated with the other measures while number of medications and number of procedures are highly correlated (r = 0.88, p < 0.001).
The number of procedures correlated strongly with both the number of medications (r = 0.88) and the number of comorbidities (r = 0.89). This indicates a high degree of inter-relatedness between the number of procedures and the other severity measures. Cost was calculated using HCPCS codes alone, whereas the number of procedures measure includes both HCPCS and the ICD-9 procedure codes as defined in the OMOP CDM. Because cost was calculated using only HCPCS codes, the correlation between cost and the number of procedures was only 0.63. The severity measures were also higher for more severe phenotypes, which could be useful for distinguishing among subtypes of a given phenotype based on severity.
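The pairwise correlations in Figure 2 can be reproduced with base R; this is a sketch reusing the hypothetical 'features' data frame introduced in the Methods sketches, not the plotting code behind the published figure.

```r
round(cor(features), 2)     # pairwise Pearson correlations among the 6 measures
pairs(features, pch = ".")  # scatterplot matrix, cf. Figure 2
```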
E-PSI versus other severity measures
We performed ICA on a data frame containing each of the five severity measures and E-PSI. The result is shown in Figure 3 with phenotypes colored by increasing E-PSI score and size denoting cost. Notice that phenotype cost is not directly related to the E-PSI score. Also phenotypes with higher E-PSI seem to be more severe (Figure 3). For example, 'complication of transplanted heart', a severe phenotype, had a high E-PSI score (and high cost).
Independent component analysis of phenotypes illustrates relationship between E-PSI and cost. Independent Component Analysis was performed using all five severity measures and E-PSI. Phenotypes are colored by increasing E-PSI score (higher score denoted by light blue, lower score denoted by dark navy). The size indicates cost (large size indicates high cost). Phenotypes with higher E-PSI seem to be more severe; for example, 'complication of transplanted heart', a severe phenotype, had a high E-PSI score (and high cost). However, phenotype cost is not directly related to the E-PSI score.
Phenotypes can be ranked differently depending on the severity measure used. To illustrate this, we ranked the phenotypes using E-PSI, cost, and treatment length and extracted the top 10 given in Table 1. When ranked by E-PSI and cost, transplant complication phenotypes appeared (4/10 phenotypes), which are generally considered to be highly severe. However, the top 10 phenotypes when ranked by treatment time were also highly severe phenotypes, e.g., Human Immunodeficiency Virus and sickle cell. An ideal approach, used in CAESAR, combines multiple severity measures into one classifier.
Table 1 Top 10 phenotypes ranked by severity measure
'Complication of transplanted heart' appears in the top 10 phenotypes when ranked by all three-severity measures (italicized in Table 1). This is particularly interesting because this phenotype is both a complication phenotype and transplant phenotype. By being a complication the phenotype is therefore a severe subtype of another phenotype, in this case a heart transplant (which is actually a procedure). Heart transplants are only performed on sick patients; therefore this phenotype is always a subtype of another phenotype (e.g., coronary arteriosclerosis). Hence 'complication of transplanted heart' is a severe subtype of multiple phenotypes (e.g., heart transplant, and the precursor phenotype that necessitated the heart transplant – coronary arteriosclerosis).
Evaluation of severity measures
Development of the Reference Standard severe and mild SNOMED-CT codes involved using a set of heuristics with medical guidance. Phenotypes were considered severe if they were life-threatening (e.g., 'stroke') or permanently disabling (e.g., 'spina bifida'). In general, congenital phenotypes were considered severe unless easily correctable. Phenotypes were considered mild if they generally required only routine or non-surgical treatment (e.g., 'throat soreness').
Several heuristics were used: 1) all benign neoplasms were labeled as mild; 2) all malignant neoplasms were labeled as severe; 3) all ulcers were labeled as mild; 4) common symptoms and conditions that are generally of a mild nature (e.g., 'single live birth', 'throat soreness', 'vomiting') were labeled as mild; 5) phenotypes that were known to be severe (e.g., 'myocardial infarction', 'stroke', 'cerebral palsy') were labeled as severe. The ultimate determination was left to the ontology expert for determining the final classification of severe and mild phenotypes. The ontology expert consulted with medical experts when deemed appropriate. The final reference standard consisted of 516 SNOMED-CT phenotypes (of the 4,683 phenotypes). In the reference standard, 372 phenotypes were labeled as mild and 144 were labeled as severe.
Evaluation of the Reference Standard was performed using volunteers from the Department of Biomedical Informatics at CUMC. Seven volunteers evaluated the reference standard including three MDs with residency training, three graduate students with informatics experience and one post-doc (non-MD). Compensation was commensurate with experience (post-docs received $15 and graduate students received $10 Starbucks gift cards).
We excluded two evaluations from our analyses: one because the evaluator had great difficulty with the medical terminology, and the second because the evaluator failed to use the drop-down menu provided as part of the evaluation. We calculated the Fleiss kappa for inter-rater agreement among the remaining 5 evaluations and found evaluator agreement was high (k = 0.716). The individual results for agreement between each evaluator and the reference standard were kappa equal to 0.66, 0.68, 0.70, 0.74, and 0.80. Overall, evaluator agreement (k = 0.716) was sufficient for comparing two groups (i.e., mild and severe) and 100% agreement was observed between all five raters and the reference-standard for 77 phenotypes (of 100).
Evaluation of Measures at Capturing Severity was performed by comparing the distributions of all 6 measures between severe and mild phenotypes in our 516-phenotype reference standard. Results are shown in Figure 4. Increases were observed for severe phenotypes across all measures. We performed the Wilcoxon Rank Sum Test to assess significance of the differences between severe vs. mild phenotypes shown in Figure 4. The p-values for each comparison were <0.001.
Differences in severity measures and E-PSI for mild vs. severe phenotypes. The distribution of each of the 6 measures used in CAESAR is shown for severe and mild phenotypes. Severity assignments were from our reference standard. Using the Wilcoxon Rank Sum Test, we found statistically significant differences between severe and mild phenotypes across all 6 measures (p < 0.001), with severe phenotypes (dark red) having higher values for each of the six measures than mild phenotypes. The least dramatic differences were observed for cost and number of comorbidities, while the most dramatic difference was for the number of medications.
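Each per-measure comparison can be reproduced with the base-R Wilcoxon rank sum test, again assuming the hypothetical labeled 'ref' data frame described earlier.

```r
# Severe vs. mild comparison for each of the 6 measures
for (m in c("cost", "treatment_length", "comorbidities",
            "medications", "procedures", "epsi")) {
  print(wilcox.test(ref[[m]] ~ ref$severity))
}
```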
Unsupervised learning of severity classes
CAESAR used an unsupervised random forest algorithm (randomForest package in R) that required E-PSI and all 5-severity measures as input. We ran CAESAR on all 4,683 phenotypes and then used the 516-phenotype reference standard to measure the accuracy of the classifier.
CAESAR achieved a sensitivity = 91.67 and specificity = 77.78 indicating that it was able to discriminate between severe and mild phenotypes. CAESAR was able to detect mild phenotypes better than severe phenotypes as shown in Figure 5.
CAESAR error rates. Error rates for CAESAR's random forest classifier are depicted with severe denoted by the green line, mild denoted by the red line, and out-of-bag (OOB) error denoted by the black line. CAESAR achieved a sensitivity = 91.67 and specificity = 77.78, indicating that it was able to discriminate between severe and mild phenotypes. CAESAR was able to detect mild phenotypes better than severe phenotypes.
The Mean Decrease in Gini (MDG) measured the importance of each severity measure in CAESAR. The most important measure was the number of medications (MDG = 54.83) followed by E-PSI (MDG = 40.40) and the number of comorbidities (MDG = 30.92). Cost was the least important measure (MDG = 24.35).
CAESAR plotted all 4,683 phenotypes using the scaled 1-proximity for each phenotype [34], shown in Figure 6 with the reference standard overlaid on top. Notice that phenotypes cluster by severity class (i.e., mild or severe), with a "mild" space (lower left), a "severe" space (lower right), and phenotypes of intermediate severity in between.
Classification result from CAESAR showing all 4,683 phenotypes (gray) with severe (red) and mild (pink) phenotype labels from the reference standard. All 4,683 phenotypes plotted using CAESAR's dimensions 1 and 2 of the scaled 1-proximity matrix. Severe phenotypes are colored red, mild phenotypes are colored pink and phenotypes not in the reference standard are colored gray. Notice that most of the severe phenotypes are in the lower right hand portion of the plot while the "mild" space is found in the lower left hand portion.
However, three phenotypes are in the "mild" space (lower left) of the random forest model (Figure 6). These phenotypes are 'allergy to peanuts', 'suicide-cut/stab', and 'motor vehicle traffic accident involving collision between motor vehicle and animal-drawn vehicle, driver of motor vehicle injured'. These phenotypes are probably misclassified because they are ambiguous (in the case of the motor vehicle accident, and the suicide cut/stab) or because the severity information may be contained in unstructured EHR data elements (as could be the case with allergies).
Using the proximity matrix also allows further discrimination among severity levels beyond the binary mild vs. severe classification. Phenotypes with ambiguous severity classifications appear in the middle of Figure 6. To identify highly severe phenotypes, we can focus only on phenotypes contained in the lower right hand portion of Figure 6. This reduces the phenotype selection space from 4,683 to 1,395 phenotypes (~70% reduction).
We are providing several CAESAR files for free download online at http://caesar.tatonettilab.org. These include the 516-phenotype reference standard used to evaluate CAESAR, the 100-phenotype evaluation set given to the independent evaluators along with the instructions, and the 4,683 conditions with their E-PSI scores and the first and second dimensions of the 1-proximity matrix (shown in Figure 6). This last file also contains two subset tables containing the automatically classified "mild" and "severe" phenotypes and their scores.
Using the patient-specific severity index as a backbone [19], we identified five measures of EHR-specific phenotype severity that we used as input for CAESAR. Phenotype-level severity differs from patient-level severity because it is an attribute of the phenotype itself and can be used to rank phenotypes. Using CAESAR, we were able to reduce our 4,683-phenotype set (starting point) to 1,395 phenotypes with high severity and prevalence (at least 0.0001), reducing the phenotype selection space by ~70%. Severe phenotypes are highly important for CER because they generally correlate with lower survival outcomes, lost productivity, and an increased cost burden. In fact, patients with severe heart failure tend to have bad outcomes regardless of the treatment they receive [37]. Therefore, understanding the severity of each condition is important before performing CER, and having a complete list of severe phenotypes would be greatly beneficial.
Additionally, developing a classification algorithm that is biased towards identifying more severe over mild phenotypes is optimal, as it would enable detection of phenotypes that are crucial for public health purposes. Active learning methods that favor detection of severe phenotypes were proven successful in a subsequent study [38].
CAESAR uses an integrated severity measure approach, which is better than using any of the other measures alone, e.g., cost, as each severity measure has its own specific bias. It is well known that cosmetic procedures, which by definition treat mild phenotypes, are high in cost. If cost is used as a proxy for severity it could introduce many biases towards phenotypes that require cosmetic procedures (e.g., crooked nose) that are of little importance to public health. Also some cancers are high in cost but low in mortality (and therefore severity), a good example being non-melanoma skin cancer [39]. Therefore, by including multiple severity measures in CAESAR we have developed a method that is robust to these types of biases.
Another interesting finding was that cancer-screening codes tend to be classified as severe phenotypes by CAESAR even though they were generally considered mild in the reference standard. The probable cause for this is that screening codes, e.g., 'screening for malignant neoplasm of respiratory tract', are generally only assigned by physicians when cancer is one of the differential diagnoses. In this particular situation the screening code, while not an indicator of the disease itself, is indicative of the patient being in an abnormal state with some symptoms of neoplastic presence. Although not diagnoses, screening codes are indicative of a particular manifestation of the patient state, and therefore can be considered phenotypes. This finding is also an artifact of the EHR, which records the patient state [8]; the recorded patient state does not always correlate with the "true" phenotype [5,28].
Importantly, CAESAR may be useful for distinguishing among subtypes of a given phenotype if one of the characteristics of a subtype involves severity. For example, the severity of Gaucher disease subtypes is difficult to capture at the patient level [40]. This rare phenotype would benefit greatly from study using EHRs, where more patient data exist. Using CAESAR may help in capturing the phenotype-level severity aspect of this rare phenotype, which would help propel the utility of using EHRs to study rare phenotypes [41] by providing accurate severity-based subtyping.
CAESAR is directly relevant to the efforts of the Observational Health Data Sciences and Informatics consortium (OHDSI), which is a continuation of OMOP. OHDSI is an international network focused on observational studies using EHRs and other health record systems. Their original motivation was to study post-market effects of pharmaceutical drugs [42] based on their pharmaceutical partnerships. To this end, a severity-based list of ranked phenotypes would be beneficial for assessing the relative importance of various post-marketing effects (e.g., nausea is mild, arrhythmia is severe).
Other phenotyping efforts would also benefit from CAESAR including the eMERGE network [3], which seeks to carefully define phenotypes of interest for use in PheWAS studies. So far they have classified 20 phenotypes. Having a ranked list of phenotypes would help eMERGE to rank prospective phenotypes, thereby allowing them to select more severe phenotypes for further algorithm development efforts.
There are several limitations to this work. The first is that we used CUMC data when calculating four of the severity measures. Because we used only one institution's data, we have an institution-specific bias. However, since CAESAR was designed using the OMOP CDM, it is portable for use at other institutions that conform to the OMOP CDM. The second limitation is that we did not use clinical notes to assess severity. Some phenotypes, e.g., 'allergy to peanuts', may be mentioned more often in notes than in structured data elements. For such phenotypes, CAESAR would underestimate their severity. The third limitation is that we only used procedure codes to determine phenotype cost. Therefore, phenotypes that do not require procedures will appear as low-cost phenotypes even though they may have other costs, e.g., medications.
Future work involves investigating the inter-relatedness of our severity measures and determining the temporal factors that affect these dependencies. We also plan to investigate the inter-dependency of phenotypes (e.g., 'blurred vision' is a symptom of 'stroke', but both are treated as separate phenotypes) and determine the utility of our severity measures for distinguishing between phenotypes and their subtypes.
Another potentially interesting extension of our work could involve utilizing the semantics of SNOMED, specifically their phenotype/subtype relations, to explore CAESAR's severity results. Because we chose SNOMED to represent each phenotype, we can leverage SNOMED's semantics to further probe the relationship between severity and disease. Perhaps some of the phenotypes with ambiguous severity (middle of Figure 6) occurred because their disease subtypes can be either mild or severe (we can assess this using SNOMED's hierarchical structure). However, leveraging the semantics of concepts for severity classification is a complex area [43], which will likely require additional methods to tackle. Hopefully these topics can be explored in future by ourselves or others.
This paper presents CAESAR, a method for classifying severity from EHRs. CAESAR takes several known measures of severity: cost, treatment time, number of comorbidities, medications, and procedures per phenotype, and a proportional index term as input into a random forest algorithm that classifies each phenotype as either mild or severe. Using a reference standard that was validated by medical experts (k = 0.716), we found that CAESAR achieved a sensitivity of 91.67 and specificity of 77.78 for severity detection. CAESAR reduced our 4,683-phenotype set (starting point) to 1,395 phenotypes with high severity. By characterizing phenotype-level severity using CAESAR, we can identify phenotypes worthy of study from EHRs that are of particular importance for CER and public health.
CER: Comparative Effectiveness Research

HOI: Health Outcomes of Interest

OMOP: Observational Medical Outcomes Partnership

eMERGE: The Electronic Medical Records and Genomics Network

PheWAS: Phenome-Wide Association Studies

EHRs: Electronic Health Records

HPO: Human Phenotype Ontology

SNOMED-CT: Systematized Nomenclature of Medicine – Clinical Terms

CAESAR: Classification Approach for Extracting Severity Automatically from Electronic Health Records

CUMC: Columbia University Medical Center

HCPCS: Healthcare Common Procedure Coding System

E-PSI: EHR-phenotype severity index

ICA: Independent Components Analysis

CDM: Clinical Data Model

MD: Medical Degree

OOB: Out-of-bag error rate

MDG: Mean Decrease in Gini

OHDSI: Observational Health Data Sciences and Informatics consortium

ICD-9: International Classification of Diseases, 9th revision
Sox HC, Greenfield S. Comparative effectiveness research: a report from the Institute of Medicine. Ann Intern Med. 2009;151:203–5.
Stang PE, Ryan PB, Racoosin JA, Overhage JM, Hartzema AG, Reich C, et al. Advancing the science for active surveillance: rationale and design for the Observational Medical Outcomes Partnership. Ann Intern Med. 2010;153:600–6.
Kho AN, Pacheco JA, Peissig PL, Rasmussen L, Newton KM, Weston N, et al. Electronic medical records for genetic research: results of the eMERGE consortium. Sci Transl Med. 2011;3:79re71.
Denny JC, Ritchie MD, Basford MA, Pulley JM, Bastarache L, Brown-Gentry K, et al. PheWAS: demonstrating the feasibility of a phenome-wide scan to discover gene–disease associations. Bioinformatics. 2010;26:1205–10.
Boland MR, Hripcsak G, Shen Y, Chung WK, Weng C. Defining a comprehensive verotype using electronic health records for personalized medicine. J Am Med Inform Assoc. 2013;20:e232–8.
Weiskopf NG, Weng C. Methods and dimensions of electronic health record data quality assessment: enabling reuse for clinical research. J Am Med Inform Assoc. 2013;20:144–51.
Hripcsak G, Knirsch C, Zhou L, Wilcox A, Melton GB. Bias associated with mining electronic health records. J Biomed Discov Collab. 2011;6:48.
Hripcsak G, Albers DJ. Correlating electronic health record concepts with healthcare process events. J Am Med Inform Assoc. 2013;20:e311–8.
Robinson PN, Köhler S, Bauer S, Seelow D, Horn D, Mundlos S. The Human Phenotype Ontology: a tool for annotating and analyzing human hereditary disease. Am J Hum Genet. 2008;83:610–5.
Elkin PL, Brown SH, Husser CS, Bauer BA, Wahner-Roedler D, Rosenbloom ST, et al. Evaluation of the content coverage of SNOMED CT: ability of SNOMED clinical terms to represent clinical problem lists. In: Mayo Clinic Proceedings. North America: Elsevier; 2006. p. 741–8.
Stearns MQ, Price C, Spackman KA, Wang AY. SNOMED clinical terms: overview of the development process and project status. In: Proceedings of the AMIA Symposium. Ann Arbor: American Medical Informatics Association; 2001. p. 662.
Campbell JR, Payne TH. A comparison of four schemes for codification of problem lists. Proc Annu Symp Comput Appl Med Care. 1994;201:5.
Shah NH. Mining the ultimate phenome repository. Nat Biotechnol. 2013;31:1095–7.
Green J, Wintfeld N, Sharkey P, Passman LJ. THe importance of severity of illness in assessing hospital mortality. JAMA. 1990;263:241–6.
Rich P, Scher RK. Nail psoriasis severity index: a useful tool for evaluation of nail psoriasis. J Am Acad Dermatol. 2003;49:206–12.
Bastien CH, Vallières A, Morin CM. Validation of the Insomnia Severity Index as an outcome measure for insomnia research. Sleep Med. 2001;2:297–307.
McLellan AT, Kushner H, Metzger D, Peters R, Smith I, Grissom G, et al. The fifth edition of the Addiction Severity Index. J Subst Abuse Treat. 1992;9:199–213.
Rockwood TH, Church JM, Fleshman JW, Kane RL, Mavrantonis C, Thorson AG, et al. Patient and surgeon ranking of the severity of symptoms associated with fecal incontinence. Dis Colon Rectum. 1999;42:1525–31.
Horn SD, Horn RA. Reliability and validity of the severity of illness index. Med Care. 1986;24:159–78.
Huser V, Cimino JJ. Don't take your EHR to heaven, donate it to science: legal and research policies for EHR post mortem. J Am Med Inform Assoc. 2014;21:8–12.
Perotte A, Hripcsak G. Temporal properties of diagnosis code time series in aggregate. IEEE J Biomed Health Inform. 2013;17:477–83.
Moskovitch R, Walsh C, Hripcsak G, Tatonetti NP. Prediction of Biomedical Events via Time Intervals Mining. NYC, USA: ACM KDD Workshop on Connected Health in Big Data Era; 2014.
Moskovitch R, Shahar Y. Classification-driven temporal discretization of multivariate time series. Data Min Knowl Disc. 2014;1:43.
Moskovitch R, Shahar Y. Fast time intervals mining using the transitivity of temporal relations. Knowl Inf Syst. 2013;1:28.
Averill RF, McGuire TE, Manning BE, Fowler DA, Horn SD, Dickson PS, et al. A study of the relationship between severity of illness and hospital cost in New Jersey hospitals. Health Serv Res. 1992;27:587.
CMS. License for Use of Current Procedural Terminology, Four. http://www.cms.gov/apps/ama/license.asp?file=/physicianfeesched/downloads/cpepfiles022306.zip 2004, Accessed 25 April 2014.
Hyvärinen A, Oja E. Independent component analysis: algorithms and applications. Neural Netw. 2000;13:411–30.
Hripcsak G, Albers DJ. Next-generation phenotyping of electronic health records. J Am Med Inform Assoc. 2013;20:117–21.
Likert R. A technique for the measurement of attitudes. Arch Psychol. 1932;140. http://www.worldcat.org/title/technique-for-the-measurement-of-attitudes/oclc/812060.
Cohen J. Weighted kappa: nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull. 1968;70:213–20.
Revelle W. Package 'psych': Procedures for Psychological, Psychometric, and Personality Research (Version 1.4.4) [software]. 2014. Available from http://cran.r-project.org/web/packages/psych/psych.pdf.
Fleiss J. Measuring nominal scale agreement among many raters. Psychol Bull. 1971;76:378–82.
Gamer M, Lemon J, Fellows I, Sing P. Package irr: Various Coefficients of Interrater Reliability and Agreement (Version 0.84) [software]. 2013. Available from http://cran.r-project.org/web/packages/irr/irr.pdf.
Liaw A, Wiener M. Classification and Regression by randomForest. R news. 2002;2:18–22.
Breiman L, Cutler A, Liaw A, Wiener M. Package 'randomForest': Breiman and Cutler's random forests for classification and regression (Version 4.6-7) [software]. 2012. Available from: http://cran.r-project.org/web/packages/randomForest/randomForest.pdf.
Westreich D. Berkson's bias, selection bias, and missing data. Epidemiology (Cambridge, Mass). 2012;23:159.
Tinetti ME, Studenski SA. Comparative Effectiveness Research and Patients with Multiple Chronic Conditions. N Engl J Med. 2011;364(26):2478–81.
Nissim N, Boland MR, Moskovitch R, Tatonetti NP, Elovici Y, Shahar Y, et al. An Active Learning Enhancement for Conditions Severity Classification. NYC, USA: ACM KDD on Workshop on Connected Health at Big Data Era; 2014.
Housman TS, Feldman SR, Williford PM, Fleischer Jr AB, Goldman ND, Acostamadiedo JM, et al. Skin cancer is among the most costly of all cancers to treat for the Medicare population. J Am Acad Dermatol. 2003;48:425–9.
Di Rocco M, Giona F, Carubbi F, Linari S, Minichilli F, Brady RO, et al. A new severity score index for phenotypic classification and evaluation of responses to treatment in type I Gaucher disease. Haematologica. 2008;93:1211–8.
Holmes AB, Hawson A, Liu F, Friedman C, Khiabanian H, Rabadan R. Discovering disease associations by integrating electronic clinical data and medical literature. PLoS One. 2011;6:e21132.
Ryan PB, Madigan D, Stang PE, Schuemie MJ, Hripcsak G. Medication-wide association studies. Pharmacometr Syst Pharmacol. 2013;2:e76.
Dligach D, Bethard S, Becker L, Miller T, Savova GK. Discovering body site and severity modifiers in clinical texts. J Am Med Inform Assoc. 2014;21(3):448–54.
We thank the OMOP consortium and OHDSI, Dr. Patrick Ryan, and Rohan Bareja for their assistance with various facets of OMOP/OHDSI and CUMC's data warehouse. Support for this research was provided by R01 LM006910 (GH); MRB was supported in part by training grant T15 LM00707.
Department of Biomedical Informatics, Columbia University, New York, NY, USA
Mary Regina Boland, Nicholas P Tatonetti & George Hripcsak
Observational Health Data Sciences and Informatics (OHDSI), Columbia University, 622 West 168th Street, PH-20, New York, NY, USA
Department of Systems Biology, Columbia University, New York, NY, USA
Nicholas P Tatonetti
Department of Medicine, Columbia University, New York, NY, USA
Correspondence to Mary Regina Boland.
MRB performed research, data analyses, and wrote the paper. NPT contributed to statistical design procedures, and provided intellectual contributions. GH was Principal Investigator for this project, contributed to research design, provided substantive intellectual contributions, and feedback on the manuscript. All authors read and approved the final manuscript.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Boland, M.R., Tatonetti, N.P. & Hripcsak, G. Development and validation of a classification approach for extracting severity automatically from electronic health records. J Biomed Semant 6, 14 (2015) doi:10.1186/s13326-015-0010-8
Health status indicators
Vaccine and drug ontology in the study of mechanism and effect | CommonCrawl |
Ring-shaped bifocal lens used for fluorescent self-referenced holographic imaging
Márton Zsolt Kiss
Journal of the European Optical Society-Rapid Publications volume 12, Article number: 2 (2016)
We propose an alternative and simple solution to self-referenced digital holographic imaging based on a ring-shaped bifocal lens, without the need for any mirrors, polarizers or spatial light modulators. We discuss the imaging properties of the ring-shaped bifocal lens in self-referenced holography. The easy applicability of this bifocal lens is demonstrated on a realized microscope setup, based on a conventional light microscope, for the volumetric observation of freely moving fluorescent objects.
The aim of our research is to develop a microscope that can detect, localise and image freely moving fluorescent objects within a thick volume in real time. The realization of such an instrument would have an immediate industrial benefit: for example, real-time water monitoring systems [1] could be built, for which self-referenced holography [2, 3] is particularly well suited.
Holographic imaging is based on an interference phenomenon, in which wavefronts are captured to reconstruct the image of the measured objects [4]. Two of its main advantages are the possibility of increasing the depth of the observed volume without a considerable loss of resolution [5], and the potential for lens-less imaging [6]. In the traditional in-line [7] and off-axis [8] setups, one light source with an appropriate coherence length is used to implement both the target and the reference beams. Such a light source can be a laser [9], a LED [10], an electron gun [11], or some other source [12].
However, in some cases these traditional arrangements cannot be applied: either the size and/or the position of the target object does not permit the proper interference of the target and reference beams, or the light emission of the targets themselves is the relevant feature to be detected and reconstructed. In these cases, Self-referenced Holographic Setups (SHS) can be applied. In these setups, the target and reference beams are differently modulated versions of the same light emitted (or reflected) by the measured object. The requirement for proper interference is that the optical path difference between the two beams be smaller than the coherence length of the light.
Self-referenced Holography (SH) is frequently applied to image fluorescent [13], distant and extended objects [14], or stars [15]. An SHS architecture can be based on an interferometer (e.g. Hariharan-Sen [16, 17]), on a bi- or multifocal lens [18, 19], or on a diffractive optical element [20].
Although the concept of a bifocal-lens-based SHS was presented already in the early years of holography [4], and the application of birefringent bifocal lenses, double half lenses, Fresnel zone plates, and their combinations was proposed, only some of these were tested experimentally [21]. Recently, the application of Spatial Light Modulators (SLM) has extended the list of bi- and multifocal lens based SHSs [22].
Usually, interferometer-based SHSs are quite large and complex due to the mirrors required, which also makes them exceptionally sensitive to vibrations. Furthermore, the beam splitters used typically result in more than 50 % light intensity loss. Polariser-based SHSs face the same problem, which is a particularly challenging limitation when the light emission of the objects is weak, as in the case of the above-mentioned fluorescent imaging.
All the SLM-based and some of the interferometer-based setups also offer the possibility of twin-image elimination; however, this can usually be achieved only for static objects, as the method is based on phase-shifting interferometry and requires multiple exposures of the same object [23].
The above limitations of the earlier approaches directed our attention to the application of a special bifocal lens in SHS, namely a Ring-shaped Bifocal Lens (RBL) [24] used to implement SH. To the best of our knowledge, this approach has not been proposed, used or tested by others so far.
In this paper, our goal is to show that the RBL is an efficient tool for holography-based fluorescent volume detection, localization and imaging. First, we overview the main details of the imaging method of SH and outline the hologram-generating principle of the RBL in the next section. Second, the main advantages and disadvantages of RBL-made holograms are discussed. Finally, we demonstrate the applicability of the RBL, and its ability to image fluorescent targets within a volume, through an experimental study of the RBL-based SHS.
The self-referenced imaging and the RBL
Let us briefly summarize the self-referenced hologram generation method. An SHS generates two waves (a wave pair) with different wavefront curvatures from the light coming from a point of the object.
The interference of the pair of waves produces a Self-referenced Interference Pattern (SIP). The SIP captured by a digital camera is the digital intensity hologram. Several object points give rise to several SIPs, and the camera-captured image of their sum is likewise a digital intensity hologram.
When the light coming from different points of the object is coherent, the complex amplitudes of their SHS-generated SIPs are added. Otherwise, only the intensities of their SIPs are summed. (Obviously, there are also cases in which the SIPs are partially coherent with each other.) Thus, in the particular case of fluorescent objects, only the intensities of the SIPs are summed, irrespective of the type of their excitation light.
Let us analyse the formation of a single SIP. A spherical wave emitted (or reflected) by a single point is divided and then modulated differently by the SHS into two spherical waves, with radii of curvature (R1, R2) at the detector plane. Our calculations show that the interaction of these waves generates an interference intensity pattern which is the same as the intensity of the interference of a plane wave and a spherical wave with radius of curvature Rd (Eq. (1)), where
$$ {\mathrm{R}}_d=\pm \frac{{\mathrm{R}}_1\,{\mathrm{R}}_2}{{\mathrm{R}}_1-{\mathrm{R}}_2}\ . $$
Rd is used to define the reconstruction distance of the corresponding object point. As the SIPs are summed incoherently, they do not disturb each other, and therefore their reconstruction distances remain unbiased. This property is important when the reconstruction is used for object localization. However, the more SIPs are summed incoherently, the smaller the relative dynamic range left to each of them, which results in a considerable loss of contrast in the captured digital hologram.
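As a quick numerical illustration of Eq. (1), the short Python sketch below computes the equivalent reconstruction distance from two assumed radii of curvature at the detector plane; the helper name and the numerical values are ours, chosen only for illustration, and are not parameters of the actual setup.

def reconstruction_distance(r1, r2):
    """Equivalent reconstruction distance R_d of Eq. (1) for two spherical
    waves with radii of curvature r1 and r2 at the detector plane."""
    if r1 == r2:
        raise ValueError("equal curvatures leave no net curvature (R_d -> infinity)")
    return (r1 * r2) / (r1 - r2)

# Assumed example: a weakly curved beam (2000 mm) and a more strongly curved
# beam (400 mm) behave like a single spherical wave with R_d = 500 mm.
print(reconstruction_distance(2000.0, 400.0))  # 500.0 (mm)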
Next, our novel optical solution is presented for creating double coherent waves from a single one for the purpose of self-referenced holography.
RBL hologram generation
At the heart of our SHS stands the Ring-shaped Bifocal Lens, which realises the self-referenced hologram with the required beam splitting. This splitting is the result of the division of the aperture of the RBL. The two apertures are different: the central one is circular, while the other is a ring around it. Both of them are symmetric to the optical axis. These aperture areas have a focus difference, which in general can be achieved either through the optical properties of the material (e.g. a GRIN lens) or through the geometry. Here, we use an RBL where the geometry of the lens generates the different focuses. To ensure exactly two focuses, only one of the two surfaces (the right one) is diversified, as can be seen in Fig. 1.
Two-view scheme of the Ring-shaped Bifocal Lens (RBL)
In our experiments, we use a custom-made RBL, which consists of a central plano-convex lens (focal length 400 mm) and a "biplane ring-shaped lens" of infinite focal length. The outer diameter of the whole RBL is 10 mm, while the diameter of the inner lens is 6 mm. The scheme of the hologram generation of the actual RBL is shown in Fig. 2 for a source object at an infinite distance. One can see that the RBL creates two beams, a central cone-shaped one and a hollow one. These beams have a ring-shaped cross section at the plane of detection. The self-referenced hologram of a single point created by the RBL is ring-shaped, as shown in Fig. 4. Because the middle of this hologram is missing, we call it a gappy hologram. The shape of the hologram is determined by the shape of the outer aperture of the RBL, and only the divergences of the beams depend on the focal parameters (Fig. 2).
a Self-interference pattern generation by the RBL from a collimated beam. The RBL creates a hollow and a central beam from a single incident beam. IP denotes the area of the intersection of the two generated beams. b RBL implementations using lenses of different focal lengths. The dark green areas are the overlapping parts of the different beams. The shape of the outer aperture of the RBL determines the shape of the hologram. A ring aperture creates a ring-shaped hologram
The RBL modulates the incident complex amplitude beam U(r, RBL1) in the following way (Eq. (2)):
$$ U\left(r,RBL2\right)=U\left(r,RBL1\right)*\left({A}_o{L}_o+{A}_c{L}_c\right) $$
Where U(r, RBL2) denotes the generated beam, Ao (Eq. (3)) and Ac (Eq. (4)) correspond to the ring and the central apertures of the RBL, while Lo (Eq. (5)) and Lc (Eq. (6)) describe the phase modulation property of the lenses in the ring-shaped and the central area.
$$ {A}_o= sign\left(r-R\right) $$
$$ {A}_c= sign\left(R-r\right) $$
$$ {L}_o= \exp \left(\frac{i\pi }{\lambda\, {f}_o}{r}^2\right) $$
$$ {L}_c= \exp \left(\frac{i\pi }{\lambda\, {f}_c}{r}^2\right) $$
$$ sign(x)=\left\{\begin{array}{ll}0, & x<0\\ 1, & x\ge 0,\end{array}\right. $$
and r denotes the distance from the optical axis, R the inner radius of the RBL, and λ the applied wavelength, while fo and fc are the focal lengths of the ring and the central areas, respectively.
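A minimal numerical sketch of Eqs. (2)–(7) is given below: it builds the complex transmission A_oL_o + A_cL_c of the RBL on a sampling grid, so that multiplying an incident field by it yields U(r, RBL2). The central focal length (400 mm), the flat ring (infinite focal length) and the 3 mm inner radius follow the values quoted above; the function name, grid size and sampling are assumptions made only for the example.

import numpy as np

def rbl_transmission(x, y, wavelength, f_c, f_o, R):
    """Complex transmission A_o*L_o + A_c*L_c of the RBL (Eqs. 2-7).
    x, y: coordinates in the lens plane; f_c, f_o: focal lengths of the
    central and ring areas (np.inf means a flat, phase-free area);
    R: radius of the central circular aperture. All lengths in the same unit."""
    r2 = x ** 2 + y ** 2
    def lens_phase(f):
        if np.isinf(f):
            return np.ones_like(r2, dtype=complex)
        return np.exp(1j * np.pi / (wavelength * f) * r2)
    A_c = r2 <= R ** 2        # central circular aperture
    A_o = ~A_c                # ring-shaped aperture around it
    return A_o * lens_phase(f_o) + A_c * lens_phase(f_c)

# Assumed sampling: a 10 mm x 10 mm lens plane with 512 x 512 points, 530 nm light.
coords = np.linspace(-5.0, 5.0, 512)                   # mm
X, Y = np.meshgrid(coords, coords)
T = rbl_transmission(X, Y, wavelength=530e-6, f_c=400.0, f_o=np.inf, R=3.0)
# U_out = U_in * T would then give the field right after the RBL (Eq. 2).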
The optical path difference problem
It is known that in SHSs the Optical Path Difference (OPD) between the different optical paths has to be smaller than the coherence length of the light used in order to obtain interference and thus a hologram.
First, let us examine the OPD in a conventional Hariharan-Sen [15] interferometer-based SHS in the case when a single target point is on the optical axis. As shown in Fig. 3, even though the light of that target point is split, modulated independently and then united to produce interference, the two created beams have zero OPD along the optical axis after their union. Due to the curvature differences of the wavefronts of the two beams, their OPD increases with the distance from the optical axis. At the plane of the detector, where the OPD becomes larger than the coherence length of the light, the interference disappears, and this marks the border of the hologram.
a Wavefronts of a single-point source modulated by a Hariharan-Sen interferometer. b Wavefronts of a single-point source modulated by the RBL. At the detector plane, the yellow crosses denote the places, where the OPD between the two beams is zero, thus we have a constructive interference. The red lines (or dot that is a zero length line) originating from a pixel of the detector show the optical paths between the pixel and the pair of wavefronts. The length difference between such red lines is the OPD at the pixel in question
Considering these findings, and that the RBL-formed hologram is a gappy one, we aimed to design the RBL so as to achieve zero OPD between the central beam and the hollow beam in their common ring-shaped area at the detector plane, rather than on the optical axis. We illustrate the design in Fig. 3. To obtain this zero OPD, we calculated the required thickness difference between the central and the ring areas of the custom-made RBL.
As the relative curvatures of the wavefronts change during propagation, the OPDs at the plane of detection also depend on the actual position of the detector.
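To make the design consideration concrete, the sketch below evaluates the OPD between two spherical wavefronts whose centres of curvature lie on the optical axis at different distances from the detector, together with a constant path offset standing in for the thickness step of the RBL. All names and numerical values are assumptions; the point is only that the offset can be chosen to null the OPD at the radius of the overlap ring rather than on the axis.

import numpy as np

def opd(r, z1, z2, thickness_offset=0.0):
    """OPD at radial detector position r between two spherical waves whose
    centres of curvature sit on the axis at distances z1 and z2 from the
    detector, plus a constant offset (e.g. the RBL thickness step)."""
    extra1 = np.sqrt(z1 ** 2 + r ** 2) - z1   # path excess of beam 1 relative to the axis
    extra2 = np.sqrt(z2 ** 2 + r ** 2) - z2   # path excess of beam 2 relative to the axis
    return (extra1 - extra2) + thickness_offset

r_ring = 2.0          # mm, assumed radius of the ring where the beams overlap
z1, z2 = 60.0, 90.0   # mm, assumed centre-of-curvature distances of the two beams
offset = -opd(r_ring, z1, z2)          # thickness step chosen to null the OPD at r_ring
print(opd(np.array([0.0, 1.0, 2.0, 3.0]), z1, z2, offset))  # zero at r = 2.0 mm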
The attributes of the RBL
Using an RBL-based SHS, a single point target generates a ring-shaped (gappy) hologram, as shown in Fig. 4. In the central region of such a hologram, the interference pattern carrying the low spatial frequency components is missing, because the intensity of the hollow beam is zero there.
A typical self-referenced digital intensity hologram using an RBL. Interference fringes only appear in the ring area, due to the geometry of the hollow beam. As a result, the low spatial frequency components are not recorded in the hologram, and they are therefore missing from the hologram reconstructions as well
The reconstructed image of the ring-shaped hologram appears in the middle of the ring, not on the interference fringes. Thus, the reconstructed image can have an even (homogeneous) background. Furthermore, the twin-image diffractions expand outwards and never overlap with the image reconstruction itself. However, we should note that the absence of low-frequency components might result in deterioration of the image.
Even if the image reconstruction is not perfect, we will show that using the proposed method, one can still detect the 3D position of the target objects and will be able to discriminate between them correctly.
Using numerical simulations, we investigated the relationship between the parameters of the ring-shaped hologram and its reconstruction properties. We paid particular attention to the depth of focus of the reconstruction and to the Strehl ratio (the quotient of the maximum intensities of an actual and a reference reconstruction image).
First, we simulated the diffraction-limited hologram of a single point source, which in the simulated case contains seventeen concentric interference rings. Second, starting from the middle one, we masked more and more interference rings, thus simulating gappy holograms. The relationship between the number of missing interference rings and the quality of the reconstructed images was measured.
The reconstruction distance was set to 3000 μm, while the wavelength and pixel size were 530 nm and 0.9 μm, respectively. The simulated holograms, the reconstructions and the measured parameters, i.e. the Strehl ratio and the depth of focus, are shown in Fig. 5.
The effects of the gap-size on the hologram reconstruction properties. a The diagram of the depth of focus (y1, [μm]) and the Strehl ratio (y2, [–]) as functions of the hologram incompleteness. The values of the x-axis denote the number of the missing interference rings. b The reconstructed images of the different gappy holograms, which can also be considered as the spot diagram of the different ring-shaped holograms. c Gappy intensity holograms with different gap-sizes
To determine the depth of focus, I used the following definition: a point is reconstructed at a given reconstruction distance if the intensity of the reconstructed point is not lower than the maximal intensity of its background. The background intensity is estimated from a surrounding area nine times bigger than the reconstructed point.
The Strehl ratio is defined as the quotient of the maximum intensities of the actual gappy hologram reconstruction and the full hologram reconstruction.
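The two figures of merit used here can be computed from reconstructed intensity images as in the sketch below. It assumes the reconstructions are already available as 2-D arrays (for instance from the masked, simulated holograms described above), the helper names are ours, and the background window only approximates the "nine times bigger" rule.

import numpy as np

def strehl_ratio(gappy_recon, full_recon):
    """Quotient of the maximum intensities of the gappy- and full-hologram reconstructions."""
    return gappy_recon.max() / full_recon.max()

def is_reconstructed(recon, spot_radius_px):
    """A point counts as reconstructed if its peak intensity is not lower than the
    maximal intensity of the surrounding background (an area roughly nine times
    larger than the spot)."""
    cy, cx = np.unravel_index(np.argmax(recon), recon.shape)
    yy, xx = np.ogrid[:recon.shape[0], :recon.shape[1]]
    dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
    peak = recon[dist2 <= spot_radius_px ** 2].max()
    background = recon[(dist2 > spot_radius_px ** 2) & (dist2 <= (3 * spot_radius_px) ** 2)].max()
    return peak >= background

def depth_of_focus(recon_stack, z_values, spot_radius_px):
    """Length of the z-interval over which the point still counts as reconstructed."""
    in_focus = [z for z, rec in zip(z_values, recon_stack) if is_reconstructed(rec, spot_radius_px)]
    return max(in_focus) - min(in_focus) if in_focus else 0.0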
Our results show that as the missing area increases, the Strehl ratio decreases and the depth of focus increases. It can also be observed that the depth of focus increases as the ring-shaped hologram becomes thinner, but this effect is only apparent when the gap is sufficiently large; in the first few cases, when the gap is small, it is unnoticeable. Thus, by setting the ratio of the aperture areas of the hologram through the aperture parameters of the RBL, these properties of the reconstructed image can be controlled. Observing the point spread functions (PSF) of the spot diagram, it can be seen that the size of the central spot and its maximal intensity decrease, while the artefact caused by the surrounding ring increases, as the relative gap size grows. As these diffraction rings can interfere with the fringes of the target object reconstruction, the size and shape (spot, line, grid, …) parameters of the target will also shape the resolution of the imaging. The PSF of the built SHS setup is presented in Section 4.2.
We have to note that the narrow ring-shaped hologram of a single point source has an elongated, line-like image along the direction of the reconstruction. This is an axicon-like property. Further discussion of this topic is beyond the scope of the present paper.
The self-referenced holographic microscope
In this section, we discuss an actual implementation of a self-referenced digital holographic microscope based on a commercially available microscope and the RBL discussed so far.
For this purpose, the RBL was placed into an Olympus (IX71) microscope, on the rear surface of the objective (Olympus, 4x Plan, NA = 0.16, f = 45 mm, infinity corrected). The hologram was recorded with a high-sensitivity CMOS camera (ASI120MM-S, 1280 x 960). The camera was placed 40 mm out of the focal plane of the camera adapter (U-TV0.5XC-3), which was mounted on a trinocular tube (U-TR30-2). The excitation light had a wavelength of 405 nm. For the observation of the fluorescent USAF test target (EO Stock No. 57-792), a dichroic emission filter (10 nm bandwidth, 530 nm central wavelength; Thorlabs FB530-10) was applied between the RBL and the trinocular tube. When the algae sample was measured, we also used a longpass filter (Thorlabs FEL 600). The scheme of the measuring setup is illustrated in Fig. 6.
The scheme of the RBL based self-referenced holographic microscope setup. The objective collimates the light of a point source. This light beam is spatially divided by the RBL. The colour filter (CF) is primarily used to cut off the excitation light. The tube lens adjusts the divergence of the two beams. Finally, the detector captures the digital hologram. IP denotes the grey area, where the two beams can interfere with each other
As the RBL generates two different beams from the light of a single point object, it also generates two real images of it, at the first and the second image planes. The first image is produced by the cone-shaped beam, and it is surrounded by the defocused image of the hollow beam. The second image, on the other hand, is produced by the hollow beam, while the defocused cone-shaped beam overlaps with it. The two images appear at different planes. We illustrate such dual imaging in Fig. 7.
As the RBL is a bifocal lens, it is a dual-imaging system. It creates double images of one target at different planes. (These planes are also captured.) The first image is produced by the cone-shaped beam, which is surrounded by the defocused image of the hollow beam. On the other hand, the second image can be reproduced from the hollow beam, while the defocused cone shaped beam overlaps with it. Here the two image planes and the hologram plane can be seen with the intensity patterns generated with the RBL based Self-referenced Holographic System (SHS), where the target is the sign "®". One can see, that the first image (a) is surrounded by the hollow beam. Vice versa, the second image (b) overlaps with the defocused cone beam. c is the intensity distribution of the a and b images along the red line, and d is the hologram of the same target
Imaging with the RBL based self-referenced holographic microscope
The angular spectrum method [25] was applied to reconstruct the objects from their digital holograms. Three different measurements were made to test the operation and efficiency of the introduced SHS.
In the first measurement, holograms of objects of different sizes at the same depth were captured and reconstructed to obtain partial information about the resolution. In the second experiment, the holograms of an object at three different distances were measured and reconstructed to verify that the setup is capable of volume detection. Finally, in the third measurement, an auto-fluorescent self-referenced holographic image of an algae sample was captured and evaluated to show that the setup can perform auto-fluorescent biological measurements.
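For reference, a compact angular spectrum propagation routine of the kind used for these reconstructions might look like the sketch below; the function name is ours, the example parameters repeat the wavelength and pixel size quoted for the simulations, and everything else is an assumption.

import numpy as np

def angular_spectrum_reconstruct(hologram, z, wavelength, pixel_size):
    """Propagate a real-valued intensity hologram by a distance z with the
    angular spectrum method and return the reconstructed intensity.
    All lengths must share the same unit."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2.0 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0)))
    H[arg < 0.0] = 0.0                       # drop evanescent components
    field = np.fft.ifft2(np.fft.fft2(hologram.astype(complex)) * H)
    return np.abs(field) ** 2

# Example call with the simulation parameters quoted earlier (lengths in micrometres):
# recon = angular_spectrum_reconstruct(hologram, z=3000.0, wavelength=0.53, pixel_size=0.9)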
The effect of the object size on the reconstruction quality
Our first test used four differently sized samples of the number "2" from the USAF 1951 fluorescent test target. One by one, we placed these test targets at the same position, exactly in the focal plane of the objective. One of the four sample reconstructions and the cropped images of the reconstructions are shown in Fig. 8.
Resolution and contrast measurements with the RBL based SHS. a One reconstructed hologram; b-e the reconstructed images of differently sized "2"-shaped objects, where the physical sizes of the test targets were decreasing. The orange scale bars are 200 μm long. One can see that the more point sources contribute to the object, the lower the contrast of the reconstructed image. The missing information of the hollow beam limits the resolution of the imaging, but not the resolution of the object detection
Comparing these reconstruction results (Fig. 8), it can be seen that the more point sources contribute to the object, the lower the contrast of the reconstructed image, as discussed at the beginning of Section 2. The missing information of the hollow beam limits the resolution of the imaging, but not the resolution of the detection.
Reconstruction of object in different depths
We estimate the depth of the measurable volume by placing the object in the -0.5 mm, 0 mm, +1 mm positions along the optical axis. The position of 0 mm corresponds to the focal plane of the used objective, and the others are end positions of the adjusting mechanism. We note that the position of the object was also changed in the lateral direction to avoid the overlapping of the reconstructed images. As we used one fluorescent USAF test target, we had to position it and make the exposures three times to capture the three holograms belonging to the different depths.
By adding the intensities of these hologram images, we simulated the recording of a single-exposure hologram of target objects located at different depth and lateral positions. In the following, we call the result the Measured Hologram (MH). The MH has an inhomogeneous background caused by the zero-order diffractions. This background results in elevated noise in the reconstructed image. To eliminate the zero order, I subtracted the MH from a sharpened MH, which was created with the "sharpeningFilter" function of Matlab®. The sharpening increased the contrast of the interference fringes, while the change of the zero order was negligible. As a proper estimate of the zero-order term is needed for a correct reconstruction, the eliminated zero order is replaced by an even, unit-intensity background in the hologram. We call this modified MH the Corrected Hologram (CH).
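A minimal sketch of this background-flattening step is given below. It substitutes an unsharp-mask style sharpening for Matlab's sharpening filter, so the kernel, strength and function name are assumptions; only the overall recipe — subtract the raw hologram from its sharpened version, then stand in a uniform unit background for the removed zero order — follows the description above.

import numpy as np
from scipy.ndimage import gaussian_filter

def corrected_hologram(mh, blur_sigma=5.0, amount=1.0):
    """Build the Corrected Hologram (CH) from the Measured Hologram (MH):
    sharpen the MH, subtract the original MH (which removes the slowly varying
    zero-order background), and add an even, unit-intensity background."""
    mh = mh.astype(float)
    sharpened = mh + amount * (mh - gaussian_filter(mh, blur_sigma))  # unsharp masking
    fringes = sharpened - mh          # high-pass part: the interference fringes
    return 1.0 + fringes              # unit background replaces the zero-order term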
In Fig. 9, we show one half of the MH and of the CH, the reconstructions of the CH at different depths, and also one MH reconstruction.
In contrast to conventional microscopes, the proposed RBL based device has a reconstruction depth of 1.5 mm, which is over 25 times that of conventional microscopes. a Half image of the Measured Hologram (MH) created with the RBL based SHS, and a half image of the modified MH, which has a numerically flattened background and which is called the Corrected Hologram (CH). b Three different reconstructions of the CH and one reconstruction of the MH. "zx" denotes the real axial target positions belonging to the reconstructions of both CH and MH. "z1" = −0.5 mm; "z2" = 0 mm (target is at the focal plane of the objective); "z3" = +1 mm
Besides demonstrating the efficiency of the introduced hologram correction method, the reconstructed images also show that the RBL based SH microscope can reconstruct an object within a volume more than 1.5 mm deep. Considering that a conventional microscope has only a 55 μm depth of focus at this magnification level, the increase enabled by the proposed RBL based microscope is over 25-fold.
The PSF characteristics of the discussed RBL based SHS and of a conventional microscope, for a point-source target at distances of (1, 0.5, 0, −0.5 mm), are presented in Fig. 10. In the experiment, we used the end of a fibre with a 4 μm diameter, emitting the 530 nm green light of a LED.
The cross section of the PSF of the conventional microscope (CM, - -) and the built RBL based holographic setup (SHS, ---), where the target, the 4 μm illuminating (530 nm) spot, is at different distances. X and Y denote the position on the detector and the relative intensity, respectively. It can be seen that the PSF of the SHS does not change significantly with the target distance, but the PSF of the CM does. The different colours belong to different target distances, as shown in the corner
Because different exposure times had to be used for the holographic and for the conventional microscope, I normalised the two measurements to compare the results. Because holography needs a background for the reconstructions to have correct intensity values, the PSFs of the reconstructions have an offset. This decreases the contrast of the reconstructed image. The PSF characteristics obtained in the experiments with the built SHS setup also show that the RBL can increase the depth of the observed volume. Although the central spot sizes of the different reconstructions are similar to the conventionally captured spot size (when the target is in focus), the reconstructed rings can bias the resolution in a particular way.
Holographic imaging of auto-fluorescent objects
Let us demonstrate the capabilities of the proposed system. For this purpose, we recorded a self-referenced hologram of a living filamentous green algae sample. Using a proper emission (longpass colour) filter, the light used for the excitation is filtered out. However, this does not increase the coherence length of the emitted fluorescence (Chlorophyll A auto-fluorescence, central wavelength: 680 nm, bandwidth: ~30 nm). The reconstruction method was the same as in the experiment detailed above. The fluorescent image, the captured hologram and the reconstruction of the sample can be seen in Fig. 11.
a Conventional fluorescent image of a filamentous and some smaller green algae. The orange scale bar is 200 μm long. b The self-referenced hologram using the RBL based SHS. c Reconstructed image. The achieved reconstruction marks the positions of the sample objects; it also conveys some shape and size information about the target
Our measurements show that not only the filamentous green algae but also some smaller algae fragments are detectable. In conclusion, the RBL based SHS can produce an informative holographic image using fluorescent light with a 14 μm coherence length.
The main challenge in the design process of the custom Ring-shaped Bifocal Lens (RBL) was to keep the optical path difference between the different beams below the coherence length of the fluorescent emission, and as close to zero as possible. The primary design parameter was the thickness difference between the central and the ring-shaped area of the RBL; this, however, depends on the actual focus parameters of the optical setup. The aperture parameters mostly determine which low-frequency components will be missing from the self-referenced holograms.
By using a proper band-pass filter, one can increase the coherence length of the emitted fluorescent light, and thus the contrast of the fluorescent self-referenced hologram and of the reconstruction as well. When all the objects emit fluorescent light of the same colour, a band-pass filter can be advantageous; otherwise (such as different algae with different fluorescent emissions) the original RBL should be used without any further filtering.
According to our measurements, the RBL can be used efficiently for self-referenced holography. The most prominent advantages of the RBL are that it is simple, compact and easy to manufacture, and that the self-referenced architecture based on it is not sensitive to vibrations. Furthermore, its single-shot imaging property makes it applicable to the observation of freely moving fluorescent objects as well.
However, we must note that RBL based self-referenced setups have to deal with some loss of the low-frequency components, a deficiency that generates diffraction-fringe-like noise in the reconstruction. Although this property might limit the imaging performance of the RBL based SHS, the exact position of the target object is still determinable. This property can be efficiently utilized in applications where the 3D position of a singular fluorescent point source is to be determined. Our results pave the way for a new kind of implementation of 3D photoactivated localization microscopy [26].
The results and effects connected with the RBL and presented in this article illustrate the imaging properties of the RBL, and can help further optimization.
By building the ring-shaped bifocal lens (RBL) into a common microscope and making fluorescent measurements with it, we have shown that the RBL is a useful and simple device for creating stable self-referenced holographic setups for the detection, localisation and imaging of fluorescent, freely moving targets within an extended volume.
Vörös, L., Mózes, A., Somogyi, B.: "A five-year study of autotrophic winter picoplankton in lake Balaton, Hungary". Aquat. Ecol. 43, 727–734 (2009)
Kazemzadeh, F., Jin, C., Yu, M., Amelard R., Haider, S., Saini, S., Emelko, M., Clausi, D.A., Wong, A.:"Multispectral digital holographic microscopy with applications in water quality assessment", (SPIE Optical Engineering + Applications, 957906) (2015).
Rosen, J., Brooker, G.: "Digital spatially incoherent Fresnel holography". Opt. Lett. 32, 912–914 (2007)
Lohmann, A.W.: "Wavefront reconstruction for incoherent objects". J. Opt. Soc. Am. 55, 1555–1556 (1965)
Kiss, M.Z., Nagy, B.J., Lakatos, P., Göröcs, Z., Tőkés, S., Wittner, B., Orzó, L.: "Special multicolor illumination and numerical tilt correction in volumetric digital holographic microscopy". Opt. Express 22, 7559–7573 (2014)
Göröcs, Z., Ozcan, A.: "On-chip biomedical imaging". IEEE Rev. Biomed. Eng. 6, 29–46 (2013)
Grare, S., Allano, D., Coëtmellec, S., Perret, G., Corbin, F., Brunel, M., Gréhan, G., Lebrun, D.: "Dual-wavelength digital holography for 3D particle image velocimetry: experimental validation". Appl. Optics 55, A49–A53 (2016)
Zhang, Y., Lü, Q., Ge, B.: "Elimination of zero-order diffraction in digital off-axis holography". Opt. Commun. 240, 261–267 (2004)
Orzó, L.: "High speed phase retrieval of in-line holograms by the assistance of corresponding off-axis holograms". Opt. Express 23, 16638–16649 (2015)
Qin, Y., Zhong, J.: "Quality evaluation of phase reconstruction in led-based digital holography". Chin. Opt. Lett. 7, 1146–1150 (2009)
Gabor, D.: "A new microscopic principle". Nature 161, 777–778 (1948)
Gabor, D.: "Microscopy by reconstructed wave-fronts". Proc. R. Soc. Lond. A Math. Phys. Sci. 197, 454–487 (1949)
Man, T., Wan, Y., Wu, F., Wang, D.: "Axial localization of fluorescence samples using single-shot self-interference digital holography". In: Digital Holography & 3-D Imaging Meeting of OSA, DM2A5. (2015)
Kim, M.K.: "Full color natural light holographic camera". Opt. Express 21, 9636–9642 (2013)
Mertz, L., Young, N.: "Fresnel transformations of images". SPIE milestone series ms 128, 44–49 (1996)
Kiss, M. Zs., Göröcs, Z., Tőkés Sz.:"Self-referenced digital holographic microscopy". In: Cellular Nanoscale Networks and Their Applications CNNA, 2012 13th International Workshop on, 1–4, (IEEE, 2012).
Wan, Y., Man, T., Wang, D.: "Incoherent off-axis fourier triangular color holography". Opt. Express 22, 8565–8573 (2014)
Rosen, J., Brooker, G.: "Fluorescence incoherent color holography". Opt. Express 15, 2244–2250 (2007)
Katz, B., Rosen, J., Kelner, R., Brooker, G.: "Enhanced resolution and throughput of fresnel incoherent correlation holography (finch) using dual diffractive lenses on a spatial light modulator (slm)". Opt. Express 20, 9109–9121 (2012)
Wang, P.: "Self-interference low-coherent digital holography by engineered volume holographic pupils". In: Digital Holography & 3-D Imaging Meeting of OSA, DT3A7. (2015)
Sirat, G., Psaltis, D.: "Conoscopic holography". Opt. Lett. 10, 4–6 (1985)
Rosen, J., Kelner, R.: "Modified lagrange invariants and their role in determining transverse and axial imaging resolutions of self-interference incoherent holographic systems". Opt. Express 22, 29048–29066 (2014)
Lin, Y.C., Cheng, C.J., Poon, T.C.: "Optical sectioning with a low-coherence phase-shifting digital holographic microscope". Appl. Optics 50, B25–B30 (2011)
M. Zs. Kiss, "A new compact self-referenced holographic setup tested on a fluorescent target", In Digital Holography & 3-D Imaging Meeting of OSA, DTh1A7 (2015)
Shen, F., Wang, A.: "Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula". Appl. Opt. 45, 1102–1110 (2006)
Davis, I.: "The super-resolution revolution". Biochem. Soc. Trans. 37, 1042–1044 (2009)
The author expresses his thanks to the Ministry of National Development (NFM) for the financial support, granted within the frame of the competitiveness and excellence program, for the project 'Research and development in medical technology for the efficient treatment of cataract', VKSZ_12-1-2013-0080.
The author thanks Medicontur Ltd. for the fabrication of the custom bifocal lenses.
The author also thanks Unicam Ltd. for making available special elements of an Olympus microscope for the experiments.
Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Budapest, H-1083, Hungary
Márton Zsolt Kiss
Hungarian Academy of Sciences, Institute for Computer Science and Control, Computational Optical Sensing and Processing Laboratory, Budapest, H-1111, Hungary
Correspondence to Márton Zsolt Kiss.
Kiss, M.Z. Ring-shaped bifocal lens used for fluorescent self-referenced holographic imaging. J. Eur. Opt. Soc.-Rapid Publ. 12, 2 (2016). https://doi.org/10.1186/s41476-016-0002-z
Digital holography
Holographic optical elements
Vision in depth
Single-shot imaging | CommonCrawl |
Modeling Unicorns and Dead Cats: Applying Bressan's MLν to the Necessary Properties of Non-existent Objects
Tyke Nunez
Journal of Philosophical Logic volume 47, pages 95–121 (2018)
Should objects count as necessarily having certain properties, despite their not having those properties when they do not exist? For example, should a cat that passes out of existence, and so no longer is a cat, nonetheless count as necessarily being a cat? In this essay I examine different ways of adapting Aldo Bressan's MLν so that it can accommodate an affirmative answer to these questions. Anil Gupta, in The Logic of Common Nouns, creates a number of languages that have a kinship with Bressan's MLν, three of which are also tailored to affirmatively answering these questions. After comparing their languages, I argue that metaphysicians and philosophers of language should prefer MLν to Gupta's languages in most applications because it can accommodate essential properties, like being a cat, while being more uniform and less cumbersome.
For an introduction to MLν that is more accessible than Bressan's book, see [1]. Since submitting this essay for review, Belnap and Müller have published two essays self-consciously developing the first order fragment of MLν [3] & [2]. In the first of these, they do an especially nice job of explaining the virtues of MLν and of their own Case-Intensional First Order Logic (CIFOL), in comparison to other quantified modal logics. I refer the reader to this essay for a more comprehensive discussion of the related languages than I will provide.
Bressan first suggests how his account can be modified to deal with objects that may not exist in all cases in [4, p. 89]. He amends this suggestion in [5, p. 372]. In both of these discussions, he passes over our problem for the semantics of necessity in silence. Gupta, however, struggles mightily with the issue in developing his L3. It is worth noting that Belnap and Müller end up treating the semantics of necessity in CIFOL in a very similar manner to the way that I suggest we should treat it in MLν [3, esp. 419].
Following Bressan, I will be using the term 'case' in order to stay neutral between interpreting modal indices either as worlds or times. And although, of course, there are quite important differences that arise when interpreting modal indices in different ways, as much as possible I will be attempting to work at a level of abstraction that is above these.
Gupta presents an argument for the distinct logical treatment of common nouns that bring with them such criteria of identity [10, esp. ch. 1, §5]. Bressan offers some assessment of this argument [5, §N6]. And McCawley has a nice brief discussion [11].
Although it will be predicates that provide principles of trans-case identity in these languages, we need not take this to be an implicit endorsement of 'contingent' or 'relative identity.' Rather, we just need to think common nouns like 'cat,' 'horse,' or 'person' have distinctive semantic properties that distinguish them from predicates like 'red' or 'smooth' that do not, and that this difference is worth modeling in our language.
cf. e.g., [6, ch. 15 & 16]
Specifically, as a practicing physicist, Bressan wanted to capture Mach's definition of "mass" in terms of possible experiments.
[12]. Another major motivation for this extensionalism seems to have come from reading intension as on the side of the mental. Although there was plenty of historical warrant for this, it is clear that Bressan's intensions are patterns of extensions-at-a-case, as the case varies. There is nothing mental about them.
Bressan puts the point that common nouns, extensional predicates, and intensional predicates will be treated in a syntactically uniform way in MLν rather strongly by claiming that "no a priori distinction is made in MLν between common nouns and 1-ary predicates" [5, p. 351]. Bressan treats all predication as intensional. There are not different semantic rules for assessing extensional and intensional predicates. Nonetheless, extensional predicates are distinctive, since their truth in a case only depends on the extension in that case of the individual concepts falling under it, not the extensions of these concepts in other cases. This allows Bressan to preserve a uniform treatment of predication while still capturing significant semantic differences for extensional predication. (The relevant technical details for understanding how this works will be presented in the next section.)
I give a formal statement of these differences in the Appendix on Gupta's languages.
In doing this I will not use the form of MLν presented in [4], but the one Belnap sketches in [1]. For a more detailed explanation of the type hierarchy and the basic elements of the semantics than I give here, see [1].
Bressan, Gupta, and Belnap all designate non-existence by having a single non-existent entity of each type. A few other ways of representing non-existence seem available to us. First, we could leave individual concepts undefined in the cases in which they don't exist. Second, we could have lots of non-existent entities—most intuitively one for every possible object. Using the second of these would have the advantage of trivializing the problems that arise for Bressan with the introduction of non-existence. Since that would make for an uninteresting essay, and mean accepting a vast menagerie of non-existing things into our ontology, I will leave it aside. The first option, however, will come up again below. (Gupta considers these two options briefly [10, p. 68].)
Montague treated terms like John as denoting not an individual, John, but a set of John's properties, where properties are intensional, mapping indices to sets of extensions. This Russellian treatment made a corresponding treatment of non-existent objects natural. Accordingly, when translating a sentence like 'John seeks a unicorn' into IL, 'a unicorn' will be treated as a property of properties. (For a nice explanation see [8, ch. 7, §V].) This kind of approach means that one can avoid countenancing non-existent objects in one's ontology, give a decent rendering of the sentence, and get close to the specificity that one could have by allowing a different non-existent entity for every merely possible object. This kind of strategy is very different from the ones we will be pursuing, in part because we will treat singular terms as designating individuals in a Fregean manner.
This means that what A ranges over will shift according to application—variables and constants will be atomic expressions of one type, truth values will be another, predicates taking constants as arguments another, etc.
With these definitions in place, for the most part, in the rest of the essay I will suppress types because they are largely irrelevant to the issues under consideration.
Belnap defines this notion as a counterpart to Bressan's quasi-intension function. "\(QI_{I}(A)\)" signifies "the quasi-intension on interpretation I of A". This gives a function from the set of cases Γ to expressions of the appropriate type τ, that is, a member of \(Int_{\tau}\).
This makes formal issues over contingent vs. strict identity quite clear. For a nice defense of why treating equality this way is preferable in MLν see [1, p. 36-37].
Furthermore, parallel to the doubling of option (3), in order to preserve unrestricted quantification on option (2) one would need two distinct domains, one which includes merely possible objects, and one that doesn't. Such a complication seems worth avoiding, if possible.
Gupta tinkers with Bressan's notions. Although every common noun does provide a principle of identity for tracing the objects it is true of across cases, not every common noun designates a kind of substance. For example, 'man born in Jerusalem' does not, since being born in Jerusalem is not an essential property of the man. Unlike Bressan, who is only concerned with modeling substance kinds through his absolute concepts, Gupta marks this difference. He does this by distinguishing between sorts and substance sorts, which are the intensions assigned to the two kinds of common nouns. Roughly, sorts provide principles of identity that allow one to trace an object from case to case because they are separated intensional predicates, while substance sorts also indicate essential properties, and so are constant. Gupta maintains that for every sort, there is a substance sort that underlies it, which accounts for why it is separated (for discussion, see [11]). Although Gupta's substance sorts correspond to Bressan's absolute concepts, and Gupta does not need to substantively alter Bressan's modal constancy, because Bressan's modal separation is case relative it will not do for modeling a principle of trans-case identity. For some of the technical details on how Gupta modifies modal separation so that it can effectively model the principles of identity of common nouns see the Appendix.
Gupta goes so far as to suggest an added condition on sorts: that they never apply to non-existent objects [10, p. 69n].
In defining Quasi-Modal Separation [5, p. 372], Bressan accidentally omits the diamond. (The diamond is not forgotten in [4, p. 94].) The ramifications of this omission illuminate the relative importance of QMC and QMS. Leaving it out weakens the requirement because without the diamond a predicate can still be quasi-modally separated in a case even if two individual concepts falling under it overlap, as long as they don't overlap in the case under consideration. Keeping the diamond means the individual concepts can't overlap in any case if the concept they are falling under is separated in that case. Since quasi-absolute concepts are also quasi-modally constant, whether the diamond is included or not makes no difference to them. Still, including the diamond is preferable because quasi-modal separation is intended to let us trace the same object from case to case, and if two objects overlap in some case, then from that case it is impossible to know which object to trace back through the other cases. These considerations help show that separation is more important than constancy for tracing objects. In Chapter 4 of his book, Gupta elaborates an elegant solution to Chisholm's trans-world identity problem for inanimate objects (like bikes or Theseus's ship) that admits such objects aren't even quasi-modally constant but which shows that as long as the corresponding sortal predicate (e.g. 'x is a bike') is quasi-separated in every world, this is enough to trace them across worlds [10, p. 86-107, esp. p. 104-106].
This is the gist of the footnote on p. 70 of [10].
Again, I give some of the details of Gupta's account of these notions in the Appendix.
Here it is significant that it is ' ◊F,' not just 'F,' that is necessarily QMS. If it were only 'F,' we would have something like Gupta's weak separation, not his near separation [10, p. 29, ch. 1, notes 18 & 19]. I discuss these notions further in the Appendix.
For Gupta's discussion of the problem, see chapter 3, §2 [10, p. 71-78].
There is a pardonable abuse of notation here that I will continue in what follows. \(QE_{\gamma _{3}, I}C(eh^{*})=F\) abbreviates: \(QE_{\gamma _{3}, I}C(x)=F\) where I(x) = e h ∗, and similarly for \(QE_{\gamma _{3}, I}C(f^{**})=F\).
How much we agree with Gupta here will depend on what we are using our logic for and how we are interpreting our cases. At first at least, it will seem we won't want to count merely possible cats as cats, if we interpret cases as worlds, since we don't want to have to consider all of the merely possible men in the room when we talk about men. On the other hand, if cases are interpreted as times or moments in possible histories, and we want to model, "Mama could have had two more kittens than she in fact had" it seems odd to insist that what we are referring to are not cats.
This rule of thumb will certainly not be hard and fast, and I do not take deciding between these options to be a matter for logic. Still, perhaps it is worth mentioning that my own view is that merely possible cats, men, or unicorns, should usually count as of their kind, in line with Fig. 3 and against Gupta. Possible, dead, or imaginary men seem to be no less men than do living ones, and their non-existence is marked by their having the non-existent object as their extension. (Kant's remark about the hundred Thalers comes to mind (CpR, A599/B627).)
L4, however, is an exception. It gives up on the thought that possibly non-existent cats are necessarily cats and is much closer to MLν than Gupta's other languages because its variables range over individual concepts rather than extensions.
This is closely related to Gupta's "initial intuition"[10, p. 71].
Perhaps the semantics can, somewhat controversially, be extended to two (or more) place relations by the following maneuver:
$$QE_{\gamma, I} \psi(i, j)=T^{*} \text{ iff } QE_{\gamma, I} \psi(i, j)\neq T \text{ and either } i(\gamma)=* \text{ or } j(\gamma)=* $$
Here, on a temporal reading, "I am the great grandchild of my great grandfather" would be T* (just as with "Socrates is a man"), since my great grandfather has passed away, if we treat the predicates as 'is a cat' in Fig. 2. If we treated them as in Fig. 3, however, their value would be T, and we need not treat all predicates one way or the other.
N.B. if formulas only depend on objects whose extension is non-existent and are false of those, they will still come out T*. For example, "Socrates is sitting" is T*.
Arguably, an example might be, 'there is something that is a flying horse,' where the intensional object that makes this true is Pegasus.
The full tables for the two place logical operators are:
$$\begin{array}{lc|cr} \begin{array}{l|cccc} \wedge & T & T^{*} & F^{*} & F\\ T & T & T^{*} & F^{*} & F\\ T^{*} & T^{*} & T^{*} & F^{*} & F\\ F^{*} & F^{*} & F^{*} & F^{*} & F\\ F & F & F & F & F\\ \end{array} & & &\begin{array}{l|cccc} \vee & T & T^{*} & F^{*} & F \\ T & T & T & T & T\\ T^{*} & T & T^{*} & T^{*}(T) & T^{*}\\ F^{*} & T & T^{*}(T) & F^{*} & F^{*}\\ F & T & T^{*} & F^{*} & F\\ \end{array} \end{array} $$
$$\begin{array}{lc|cr} \begin{array}{l|cccc} \rightarrow & T & T^{*} & F^{*} & F \\ T & T & T^{*} & F^{*} & F\\ T^{*} & T & T^{*}(T) & F^{*} & F^{*}\\ F^{*} & T & T^{*}(T) & T^{*}(T) & T^{*}\\ F & T & T & T & T\\ \end{array} & & &\begin{array}{l|cccc} \leftrightarrow & T & T^{*} & F^{*} & F \\ T & T & T^{*} & F^{*} & F\\ T^{*} & T^{*} & T^{*}(T) & F^{*}(F) & F^{*}\\ F^{*} & F^{*} & F^{*}(F) & T^{*}(T) & T^{*}\\ F & F & F^{*} & T^{*} & T\\ \end{array} \end{array} $$
I have listed two truth values for some of the operations because while they will usually have the first value, if the reasons that the values of the component formulas were T* or F* in the first place line up, then it seems they should have the second value. For example, if two formulas involving universal quantification are T*, then, for each, there will be individual concepts, d1…dn, whose extensions are non-existent and which falsify them. Taking one of these formulas as the antecedent and the other as the consequent, if those individual concepts that falsify the antecedent are a superset of those on which the consequent is false, then it should be: T* → T* = T. Or if the antecedent is F* and the consequent T*, then the way the conditional will come out T is if none of the values on which F* is true are also those on which the consequent is false. So that the operations are truth functional, it makes sense to assign the value that is not in parentheses, despite the fact that in specific cases the assignment of the other value can be justified.
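One way to read the tables above purely truth-functionally (keeping only the non-parenthesized entries) is to order the values as F < F* < T* < T, take conjunction as minimum and disjunction as maximum, assume a negation that swaps T with F and T* with F* (the negation table is not reproduced here), and define the conditional and biconditional from these. The small Python sketch below records this reading and deliberately ignores the parenthesized refinements discussed in the footnote.

from enum import IntEnum

class V(IntEnum):
    """The four values, ordered so that min/max give conjunction/disjunction."""
    F = 0
    F_STAR = 1   # F*
    T_STAR = 2   # T*
    T = 3

def neg(a):
    return V(3 - a)                  # assumed: swaps T with F and T* with F*

def conj(a, b):
    return V(min(a, b))

def disj(a, b):
    return V(max(a, b))

def impl(a, b):
    return disj(neg(a), b)           # reproduces the non-parenthesized arrow entries

def iff(a, b):
    return conj(impl(a, b), impl(b, a))

# e.g. a conditional whose antecedent and consequent are both T* evaluates to T*:
print(impl(V.T_STAR, V.T_STAR))      # V.T_STAR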
Again, for technical details see the Appendix.
There are, however, serious issues with the semantics of quantification and necessity for this strategy. Specifically, as Gupta notes [10, p.72-75], although intuitively the assignments of variables that are not free in a formula should be ignored when figuring out their semantic value, implementing this is difficult, and for both L2 and L3, the schema ' A → (∀K ,x)A' is invalid.
Gupta comments on this [10, p. 77-78].
This means the problem with \(QAbs\) that Gupta points to in his footnote (which was discussed on page 13 above) is not one for \(QAbs_{MC}\) [10, p. 70].
Belnap and Müller hit upon the same conception of an essential property in developing CIFOL [3, § 5.3]. They do a nice job of showing how absolute concepts will not be the only essential properties. Properties like the sex of a horse, which we commonly take to be essential to it qua horse, will also come out as essential in this sense.
Here I am adapting Gupta's distinction between the extension, intension, and hyperintension of an expression [10, p. 17], which he borrows from [7].
Fine is after this distinction with his example of Socrates and his singleton [9, p. 241].
For more details and discussion see [10, ch. 1, §1-2].
Belnap, N. (2006). Bressan's type-theoretical combination of quantification and modality. In Lagerlund, H., Lindström, S., & Sliwinski, R. (Eds.), Modality Matters: Twenty-Five Essays in Honour of Krister Segerberg, volume 53, pages 31–53. Dept. of Philosophy, Uppsala University, Sweden.
Belnap, N., & Müller, T. (2014a). BH-CIFOL: Case-intensional first order logic. Journal of Philosophical Logic, 43(5), 835–866.
Belnap, N., & Müller, T. (2014b). CIFOL: Case-intensional first order logic. Journal of Philosophical Logic, 43(2), 393–437.
Bressan, A. (1972). A General Interpreted Modal Calculus: Yale University Press.
Bressan, A. (1993). On Gupta's book The Logic of Common Nouns. Journal of Philosophical Logic, 22(4), 335–383.
Cresswell, J., & Hughes, G. (2004). A New Introduction to Modal Logic: Taylor & Francis.
Cresswell, M. (1975). Hyperintensional logic. Studia Logica, 34(1), 25–38.
Dowty, D., Wall, R., & Peters, S. (1981). Introduction to Montague Semantics, volume 11 of Synthese language library. D. Reidel Publishing Company.
Fine, K. (1995). The logic of essence. Journal of Philosophical Logic, 24(3), 241–273.
Gupta, A. (1980). The Logic of Common Nouns: An Investigation in Quantified Modal Logic: Yale University Press.
McCawley, J. (1982). Review of The Logic of Common Nouns. Journal of Philosophy, 79, 512–517.
Quine, W.V.O. (1953, 1980). Reference and modality. In From a Logical Point of View: 9 Logico-Philosophical Essays, 2nd edition: Harvard University Press.
My research into Bressan's and Gupta's languages began with a semester of funding as the Allan Ross Anderson fellow in the spring of 2008. During this time the essay began to take shape under the patient guidance of Nuel Belnap, to whom I am very grateful. I should also note that during the revision process, an anonymous reviewer at JPL offered truly exceptional feedback that substantially improved the final version of the essay. In addition, Shawn Standefer and Anil Gupta gave me helpful comments throughout the process.
Washington University in St. Louis, St. Louis, MO, 63108, USA
Tyke Nunez
Correspondence to Tyke Nunez.
Appendix: Gupta's Languages
In this appendix I will give some of the technical details concerning two aspects of Gupta's Languages. First, I will present Gupta's various versions of the notions of a substance sort, constancy, and separation, which correspond to Bressan's absolute concepts, modal constancy, and modal separation. Second, I will give some of the details of Gupta's treatment of the semantics of quantification and necessity. When he introduces non-existent objects, Gupta struggles mightily to adapt the semantics of necessity so that possibly non-existent cats can still be necessarily cats, but as I mentioned in the conclusion, he does not ultimately arrive at a satisfying formulation. (Gupta recognizes this, to a degree [10, p. 75].) I will try to give some sense of why, without going through the full story [10, ch. 3]. All page numbers included with the definitions are to the corresponding definitions in [10].
Before doing either of these, some preliminaries are in order. In addition to the standard logical categories, Gupta's L1 includes a category for common nouns. Although the syntactic rules of L1 are fairly straightforward and I will not rehearse all of them, quantification is always restricted to quantification over a certain sort of thing by a common noun, so the syntactic rules governing these are worth presenting:
Definition 12 (Some of Gupta's syntax; cf. p. 7)
If K is a common noun, x is a variable, and A is a formula, then (∀K ,x)A is a formula.
If K is a common noun, x is a variable, and A is a formula, then (K,x)A is a common noun.
The second clause allows for complex common nouns built from simpler ones, such as 'Man who likes Margret' (footnote 40).
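To make the recursion in these two clauses vivid, here is a minimal Python sketch (mine, not Gupta's) that encodes formulas and complex common nouns as nested tuples; the constructor and predicate names ('Likes', 'Man', 'Smiles') are invented for illustration.

```python
def forall(K, x, A):
    """(∀K,x)A: a formula whose quantifier is restricted by the common noun K."""
    return ("forall", K, x, A)

def noun(K, x, A):
    """(K,x)A: a complex common noun built from a simpler noun K."""
    return ("noun", K, x, A)

likes_margret = ("atom", "Likes", "x", "Margret")
man_who_likes_margret = noun("Man", "x", likes_margret)

# The complex noun can itself restrict a quantifier ('Every man who likes
# Margret smiles'):
claim = forall(man_who_likes_margret, "y", ("atom", "Smiles", "y"))
print(claim)
```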
The semantics of Gupta's languages begins standardly enough with:
Definition 13 (Model Structure for L1; p. 18)
A model structure for L1 is an ordered triple \(\langle W, D, i^{*}\rangle\), where:
W is a nonempty set,
D is a function that assigns to each member of W a nonempty set,
\(i^{*}\) is a function that assigns to each member w of W a member of D(w).
Think of W as the set of possible worlds (or cases), D(w) as the set of objects that exist in w, and \(i^{*}\) as the individual concept whose extension in all worlds is the non-existent object (i.e. \(i^{*}(w) = *\) in all w).
As with Bressan's absolute concepts, Gupta models substance sorts through an intensional property that is constant and separated:
Definition 14 (Gupta's Substance Sort in L1; p. 35)
A substance sort in a model structure is a modally constant and separated intensional property.
Modal constancy in L1 is not substantively different from Bressan's, although Gupta states it slightly differently. Where \(\langle W, D, i^{*}\rangle\) is a model structure:
Definition 15 (Gupta's Modal Constancy; p. 27)
An intensional property \(\mathcal{S}\) in the model structure is modally constant iff \(\mathcal{S}(w)=\mathcal{S}(w^{\prime})\) at all worlds \(w, w^{\prime} \in W\).
That is, \(\mathcal {S}\) will be constant if the individual concepts in the extension of \(\mathcal {S}\) for any world are the same.
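On a finite toy model one can check modal constancy directly. The following Python sketch is only illustrative: the worlds, objects, and example properties are invented, and individual concepts are encoded as tuples of objects, one per world.

```python
WORLDS = ("w1", "w2", "w3")

def is_modally_constant(S):
    """S maps each world to the set of individual concepts falling under it
    there; constancy means that set is the same at every world."""
    extensions = [S[w] for w in WORLDS]
    return all(e == extensions[0] for e in extensions)

# Individual concepts encoded as tuples of objects, one per world.
i1 = ("a", "a", "a")
i2 = ("b", "b", "b")

constant_S = {w: frozenset({i1, i2}) for w in WORLDS}
varying_S = {"w1": frozenset({i1, i2}), "w2": frozenset({i1}), "w3": frozenset({i1})}

print(is_modally_constant(constant_S))  # True
print(is_modally_constant(varying_S))   # False
```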
I mentioned in footnote 18 that Gupta adjusts Bressan's notion of modal separation because it is case-relative, and so it will not do for modeling a principle of trans-case identity. Gupta's preferred modification is his own 'separation:'
Definition 16 (Gupta's Separation; p. 29)
An intensional property \(\mathcal{S}\) in a model structure is separated iff all individual concepts \(i, i^{\prime}\) that belong to \(\mathcal{S}\) at any worlds \(w, w^{\prime}\) are such that if \(i(w_{1}) = i^{\prime}(w_{1})\) at a world \(w_{1}\), then \(i = i^{\prime}\).
He uses this to define his notion of a sort:
Definition 17 (Gupta's Sort in L1; p. 33)
A sort in a model structure is an intensional property in it which is separated.
In general Gupta uses variables '\(\mathcal{S}\)', '\(\mathcal{S^{\prime}}\)', '\(\mathcal{S}_{1}\)', etc. to range over sorts in a fixed model structure.
In addition to Gupta's separation, there is also a weaker notion that is like Bressan's world-relative separation, except that it holds at every case.
Definition 18 (Gupta's World-Relative Separation; p. 29n)
An intensional property \(\mathcal{S}\) in a model structure is separated in the world w iff all individual concepts \(i, i^{\prime}\) that belong to \(\mathcal{S}\) at w are such that if \(i(w_{1}) = i^{\prime}(w_{1})\) at a world \(w_{1}\), then \(i = i^{\prime}\).
Definition 19 (Gupta's Weak Separation; p. 29n)
An intensional property \(\mathcal{S}\) in a model structure is weakly separated iff \(\mathcal{S}\) is separated in every world.
Intuitively, the difference is that while separation says that the extensions of two individual concepts that are \(\mathcal {S}\) in any world (even if these worlds are different) will never overlap at a world, weak separation just says that at each world the individual concepts that are \(\mathcal {S}\) in that world will not have the same extension in any world. Weak separation can provide a principle of identity, and Gupta develops a closely related notion that also incorporates a treatment of non-existence in L5 of chapter 4.
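The difference between separation and weak separation already shows up in a tiny invented model: below, two distinct concepts coincide at a world, but never within the extension of the property at any single world. The encoding (concepts as tuples of objects, one per world) is an illustrative convention of mine, not Gupta's.

```python
from itertools import combinations

WORLDS = ("w1", "w2", "w3")

def ext(c, w):
    return c[WORLDS.index(w)]

def members(S):
    """All concepts belonging to S at some world."""
    return set().union(*S.values())

def is_separated(S):
    """Definition 16: concepts in S at *any* worlds never share an extension
    anywhere unless they are identical."""
    for i, j in combinations(members(S), 2):
        if any(ext(i, w) == ext(j, w) for w in WORLDS):
            return False
    return True

def is_separated_in(S, w):
    """Definition 18: the same test, restricted to the concepts in S at w."""
    for i, j in combinations(S[w], 2):
        if any(ext(i, v) == ext(j, v) for v in WORLDS):
            return False
    return True

def is_weakly_separated(S):
    """Definition 19: separated in every world."""
    return all(is_separated_in(S, w) for w in WORLDS)

i1 = ("a", "a", "a")
i2 = ("b", "b", "a")   # agrees with i1 at w3 only
S = {"w1": frozenset({i1}), "w2": frozenset({i2}), "w3": frozenset()}

print(is_separated(S))         # False: i1 and i2 coincide at w3
print(is_weakly_separated(S))  # True: no two concepts in any one S(w) ever coincide
```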
In order to accommodate non-existence, in chapter 3 Gupta modifies L1 in three different ways, treating the semantics of quantification and necessity slightly differently in each of L2, L3, and L4. Still, the way that he treats modal constancy and separation in each of these is the same:
Definition 20 (Gupta's Near Constancy; p. 69-70)
An intensional property \(\mathcal{S}\) is nearly constant in a model structure iff, if an individual concept i belongs to \(\mathcal{S}\) at any world w, then i belongs to \(\mathcal{S}\) at all worlds \(w^{\prime}\) such that \(i(w^{\prime}) \neq i^{*}(w^{\prime})\).
Definition 21 (Gupta's Near Separation; p. 69)
An intensional property \(\mathcal{S}\) is nearly separated in a model structure iff all individual concepts \(i, i^{\prime}\) that belong to \(\mathcal{S}\) at some worlds (i.e., \(i\in \mathcal{S}(w_{1})\) and \(i^{\prime}\in \mathcal{S}(w_{2})\), for some \(w_{1}, w_{2} \in W\)) are such that if \(i(w) = i^{\prime}(w) \neq i^{*}(w)\) at a world w, then \(i = i^{\prime}\).
As I just noted here and in footnote 18, Gupta is concerned with distinguishing two forms of principles of identity associated with common nouns, sorts and substance sorts, where only the latter apply to essential properties.
Definition 22 (Gupta's Sort in L2-L4; p. 69)
\(\mathcal{S}\) is a sort in a model structure iff \(\mathcal{S}\) is an intensional property and \(\mathcal{S}\) is nearly separated in it.
Definition 23 (Gupta's Substance Sort in L2-L4; p. 70)
\(\mathcal{S}\) is a substance sort in a model structure iff \(\mathcal{S}\) is a sort in it and \(\mathcal{S}\) is nearly constant in it.
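Near constancy and near separation differ from their L1 counterparts only in ignoring worlds where a concept's extension is the non-existent object. The sketch below uses an invented 'dead cat' concept and writes the non-existent object as '*' at every world (Gupta's ∗ is world-relative, so this is a simplification of mine).

```python
WORLDS = ("w1", "w2", "w3")
STAR = "*"                      # stand-in for the non-existent object
i_star = (STAR, STAR, STAR)     # the concept i* of Definition 13, simplified

def ext(c, w):
    return c[WORLDS.index(w)]

def members(S):
    return set().union(*S.values())

def is_nearly_constant(S):
    """Definition 20: once a concept belongs to S anywhere, it belongs to S
    at every world where its extension is not the non-existent object."""
    for i in members(S):
        for w in WORLDS:
            if ext(i, w) != ext(i_star, w) and i not in S[w]:
                return False
    return True

def is_nearly_separated(S):
    """Definition 21: overlapping on an *existent* object forces identity."""
    ms = list(members(S))
    for a in range(len(ms)):
        for b in range(a + 1, len(ms)):
            i, j = ms[a], ms[b]
            if any(ext(i, w) == ext(j, w) != ext(i_star, w) for w in WORLDS):
                return False
    return True

# A "cat" concept that exists only at w1 and w2.
i_tom = ("tom", "tom", STAR)
Cat = {"w1": frozenset({i_tom}), "w2": frozenset({i_tom}), "w3": frozenset()}

print(is_nearly_constant(Cat))   # True: i_tom is only absent where it does not exist
print(is_nearly_separated(Cat))  # True
```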
As with Gupta's separation and Bressan's modal separation, Gupta's near separation differs from Bressan's quasi-modal separation in that it is not case-relative and so can serve as a principle of trans-case identity. In chapter 4, Gupta modifies separation and constancy again, this time endorsing for his 'quasi-separation' something like Bressan's 'quasi-modal separation' but in every case. For the details, as well as the corresponding required adjustment of constancy for L5, see [10, p. 104, & p. 107].
There are a few background notions that we need to have in place before we can look at Gupta's treatment of the semantics for quantification and necessity. First, he defines two sets of brackets. Given a sort \(\mathcal {S}\) in , he designates by \(\mathcal {S}[w]\) the set of objects that fall under \(\mathcal {S}\) in w, and by \(\mathcal {S}[{\kern -2.3pt}[ w ]{\kern -2.3pt}]\) the set of objects that are possibly \(\mathcal {S}\).
Definition 24 (p. 35)
$$\begin{array}{l} \mathcal{S}[w]=_{df}\{d: d\in D(w) \text{ and there is an individual concept} \\\quad\quad\quad\quad\quad i \in \mathcal{S}(w) \text{ such that}~ i(w)=d.\} \end{array} $$
$$\begin{array}{l} \mathcal{S}[{\kern-2.3pt}[ w ]{\kern-2.3pt}]=_{df}\{d: d\in D(w) \text{ and there is an individual concept} \\ \quad\quad\quad\quad\quad i \in \mathcal{S}(w^{\prime}) \text{ for some } w' \in W \text{ such that} ~i(w)=d\}. \end{array} $$
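The two brackets can be computed directly on a finite model. In the invented two-world example below, the object b is possibly \(\mathcal{S}\) at w2 without being \(\mathcal{S}\) there, so it shows up under the double brackets but not under the single ones; all names and the domains are illustrative only.

```python
WORLDS = ("w1", "w2")

def ext(c, w):
    return c[WORLDS.index(w)]

# Which objects exist at which world (an invented example).
D = {"w1": {"a", "b"}, "w2": {"a", "b"}}

def single_bracket(S, w):
    """S[w]: objects of D(w) picked out at w by some concept in S *at w*."""
    return {ext(i, w) for i in S[w] if ext(i, w) in D[w]}

def double_bracket(S, w):
    """S[[w]]: objects of D(w) picked out at w by a concept in S at *some* world."""
    all_members = set().union(*S.values())
    return {ext(i, w) for i in all_members if ext(i, w) in D[w]}

i1 = ("a", "b")                      # belongs to S at w1 only
S = {"w1": frozenset({i1}), "w2": frozenset()}

print(single_bracket(S, "w2"))       # set(): nothing is S at w2
print(double_bracket(S, "w2"))       # {'b'}: b is possibly S at w2
```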
Next he defines what it means to be 'the same \(\mathcal {S}\)' and 'an \(\mathcal {S}\) counterpart:'
Definition 26 ('the same \(\mathcal {S}\)' in L1; p. 36)
d in w is the same \(\mathcal{S}\) as \(d^{\prime}\) in \(w^{\prime}\) iff there is an individual concept, i, that belongs to \(\mathcal{S}\) at some world, and \(i(w) = d\) and \(i(w^{\prime}) = d^{\prime}\).
Definition 27 ('an \(\mathcal {S}\) counterpart'; p. 36)
The \(\mathcal{S}\) counterpart in \(w^{\prime}\) of the individual d in w (abbreviated \(\mathcal{S}(w^{\prime}, d, w)\)) is the unique individual \(d^{\prime}\) such that \(d^{\prime}\) in \(w^{\prime}\) is the same \(\mathcal{S}\) as d in w.
As Gupta points out, "\(\mathcal {S}(w^{\prime }, d, w)\) is well defined if \(d\in \mathcal {S}[{\kern -2.3pt}[ w ]{\kern -2.3pt}]\). For if \(d\in \mathcal {S}[{\kern -2.3pt}[ w ]{\kern -2.3pt}]\), then there is an individual concept i belonging to \(\mathcal {S}\) at some world such that i(w) = d. The separation of \(\mathcal {S}\) implies that i is unique. Hence there is a unique d ′, namely i(w ′), which is the same \(\mathcal {S}\) as d in w" [10, p. 36].
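Given separation, counterparts can be computed by finding the unique witnessing concept, as in this sketch. The cat example and the convention of returning None for 'undefined' are mine, not Gupta's.

```python
WORLDS = ("w1", "w2")

def ext(c, w):
    return c[WORLDS.index(w)]

def members(S):
    return set().union(*S.values())

def counterpart(S, w_target, d, w_source):
    """S(w_target, d, w_source): the S-counterpart at w_target of d in w_source.
    Separation of S guarantees there is at most one witnessing concept."""
    witnesses = [i for i in members(S) if ext(i, w_source) == d]
    if not witnesses:
        return None          # undefined: d is not even possibly S at w_source
    return ext(witnesses[0], w_target)

i_tom = ("kitten-tom", "grown-tom")   # one cat, tracked across worlds
Cat = {"w1": frozenset({i_tom}), "w2": frozenset({i_tom})}

print(counterpart(Cat, "w2", "kitten-tom", "w1"))  # 'grown-tom'
print(counterpart(Cat, "w2", "alien", "w1"))       # None (undefined)
```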
Now to define the assignment function, we first need the notion of a model:
Definition 28 (Model for L1; p. 37)
A model for L1 is an ordered quintuple \(\langle W, D, i^{*}, m, \rho\rangle\), where:
\(\langle W, D, i^{*}\rangle\) is a model structure,
m is a function that assigns (a) to each individual constant of L1 an individual concept, (b) to each n-ary predicate an n-ary relation, and (c) to each atomic common noun a sort,
ρ ∈ W.
Through the function m a model in L1 assigns an intension to each atomic expression and ρ specifies the real world.
With the notion of a model, an assignment is:
Definition 29 (Assignments for L1; p. 38)
An assignment for L1 relative to a model \(M = \langle W, D, i^{*}, m, \rho\rangle\) is a function that assigns to each variable of L1 an ordered pair \(\langle\mathcal{S}, d\rangle\), where \(\mathcal{S}\) is a sort relative to the model structure and \(d \in U\ (= \cup_{w\in W} D(w))\).
Here if a is an assignment, \(a_{o}(x)\) is the object assigned to x by a and \(a_{s}(x)\) is the sort assigned to x by a. Using this, Gupta defines a few notions that he then deploys in defining the semantic value of formulas involving quantification and necessity:
Definition 30 (Normal assignments for L1; p. 38)
An assignment a (for L1 relative to a model M) is normal in w iff \(a_{o}(x) \in a_{s}(x)[{\kern -2.3pt}[ w ]{\kern -2.3pt}]\) for all variables x.
Definition 31 (\(\mathcal {S}\) variants for L1; p. 38-39)
An assignment a′ is an \(\mathcal {S}\) variant of a at x in w iff
\(a^{\prime}\) is just like a except perhaps at x (abbreviated to \(a^{\prime} \underset{x}{\bumpeq} a\)),
\(a^{\prime }_{s}(x) = \mathcal {S}\),
\(a^{\prime }_{o}(x)\in \mathcal {S}[w]\).
Definition 32 (World variants for L1; p. 39)
The \(w^{\prime}\) variant of a relative to w (abbreviated to \(f(w^{\prime},a,w)\)) is the unique assignment \(a^{\prime}\) that meets the following conditions:
\(a^{\prime }_{s}(x)=a_{s}(x)\)at all variables x,
\(a^{\prime}_{o}(x)\) in \(w^{\prime}\) is the same \(a^{\prime}_{s}(x)\) as \(a_{o}(x)\) in w, at all variables x.
If these conditions are not met by any assignment, then \(f(w^{\prime},a,w)\) is undefined.
Now, having defined M, w, and a, Gupta then defines through induction on the length of expression α the concept: "the semantic value of α at a world w in a model M relative to the assignment a normal in w" [10, p. 40]. He abbreviates this to \(V^{w}_{M, a}(\alpha )\). Before giving this definition, however, it will help with quantification to have defined one more function that gives the intension of an expression α in a model M, for an assignment a, and a world w:
Definition 33 (Intension function for L1; p. 40)
Let M, w, a, and α be as above. Then \(I^{w}_{M, a}(\alpha)\) is a function with domain W that satisfies the following condition:
$$(I^{w}_{M, a}(\alpha))(w^{\prime})=V^{w^{\prime}}_{M, f(w^{\prime}, a, w)}(\alpha)$$
Gupta uses m to define the valuation function V as expected for individual constants, variables, and common nouns, and the values of equalities and truth-functional compounds are found in the standard ways [10, cf. p. 40-41]. I include the definition of V for n-ary relations to give a better sense of how things run:
Definition 34 (Part of \(V^{w^{\prime }}_{M, a}(\alpha )\); p. 40-41)
Let M, w, a, and α be as above. Then V is defined by induction on α:
If α is the atomic formula \(F(t_{1},\ldots,t_{n})\), then \(V^{w}_{M, a}(\alpha)=T\) if \(\langle V^{w}_{M, a}(t_{1}),\ldots,V^{w}_{M, a}(t_{n})\rangle \in m(F)(w)\). Otherwise \(V^{w}_{M, a}(\alpha)=F\).
If α is the formula □A, then \(V^{w}_{M, a}(\alpha)=T\) if \(V^{w^{\prime}}_{M, f(w^{\prime}, a, w)}(A)=T\) at all worlds \(w^{\prime} \in W\). Otherwise \(V^{w}_{M, a}(\alpha)=F\).
If α is the formula (∀K,x)A, then \(V^{w}_{M, a}(\alpha)=T\) if \(V^{w}_{M, a^{\prime}}(A)=T\) for all assignments \(a^{\prime}\) that are \(I^{w}_{M, a}(K)\) variants of a at x in w. Otherwise \(V^{w}_{M, a}(\alpha)=F\).
If α is the common noun (K,x)A, then \(V^{w}_{M, a}(\alpha)\) is the set of individual concepts i such that \(i \in V^{w}_{M, a}(K)\) and \(V^{w}_{M, a^{\prime}}(A)=T\), where \(a^{\prime} \underset{x}{\bumpeq} a\) and \(a^{\prime}_{s}(x)=I^{w}_{M, a}(K)\) and \(a_{o}(x) = i(w)\).
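To see how clause (iii) restricts quantification by a common noun, here is a deliberately stripped-down Python evaluator over a finite model. It is not a full implementation of Definition 34: assignments here just map variables to objects (the sort component is dropped), only unary predicates and atomic nouns are handled, and the □ and world-variant machinery is omitted; the model and all names are invented.

```python
WORLDS = ("w1", "w2")

def ext(c, w):
    return c[WORLDS.index(w)]

# Interpretation m: a common noun gets an intensional property (world ->
# set of individual concepts); a unary predicate gets world -> set of objects.
i_tom, i_ann = ("tom", "tom"), ("ann", "ann")
m = {
    "Cat":    {"w1": {i_tom, i_ann}, "w2": {i_tom, i_ann}},
    "Sleeps": {"w1": {"tom", "ann"}, "w2": {"tom"}},
}

def noun_extension(K, w):
    """K[w]: the objects falling under the noun K at w."""
    return {ext(i, w) for i in m[K][w]}

def V(formula, w, a):
    """Value of a formula at w under an assignment a (variable -> object)."""
    op = formula[0]
    if op == "atom":                      # ("atom", "Sleeps", "x")
        _, pred, var = formula
        return a[var] in m[pred][w]
    if op == "forall":                    # ("forall", "Cat", "x", body)
        _, K, var, body = formula
        return all(V(body, w, {**a, var: d}) for d in noun_extension(K, w))
    raise ValueError(op)

every_cat_sleeps = ("forall", "Cat", "x", ("atom", "Sleeps", "x"))
print(V(every_cat_sleeps, "w1", {}))   # True: both cats sleep at w1
print(V(every_cat_sleeps, "w2", {}))   # False: ann is awake at w2
```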
Working back through the definitions, with this semantics one can see how assignment functions contribute to fixing principles of trans-world identity in the way described in Section 7.2.
In accommodating non-existent objects Gupta takes over many of these definitions, only modifying them when necessary. The main difficulty comes with the semantics of necessity and how it interacts with quantification. To give the modifications of these he first revises the notion of being 'the same \(\mathcal {S}\):'
Definition 35 ('the same \(\mathcal {S}\)' with non-existents; p. 71)
d in w is the same \(\mathcal{S}\) as \(d^{\prime}\) in \(w^{\prime}\) iff \(d \neq i^{*}(w)\) and \(d^{\prime} \neq i^{*}(w^{\prime})\) and there is an individual concept, i, which belongs to \(\mathcal{S}\) at some world, and \(i(w) = d\) and \(i(w^{\prime}) = d^{\prime}\).
Deploying this new version of 'the same \(\mathcal{S}\)' then has the effect of changing the sense of 'an \(\mathcal{S}\) counterpart' (abbreviated \(\mathcal{S}(w^{\prime}, d, w)\)), and a 'world variant' (abbreviated \(f(w^{\prime},a,w)\)), although the wording of the definitions of these notions can stay the same.
Now the intuition that Gupta tries to capture with the semantics of necessity is "an object d of the sort \(\mathcal{S}\) satisfies □Fx in w iff d satisfies Fx in w, and at all worlds \(w^{\prime}\) at which \(\mathcal{S}(w^{\prime}, d, w)\) is defined, \(\mathcal{S}(w^{\prime}, d, w)\) satisfies Fx in \(w^{\prime}\)" [10, p. 71]. He does this with the following in L2, which replaces Definition 34.(ii):
Definition 36 (Box rule for L2; p. 72)
Let M, w, a, and α be as above:
If α is the formula □A, then \(V^{w}_{M, a}(\alpha)=T\) if \(V^{w^{\prime}}_{M, f(w^{\prime}, a, w)}(A)=T\) at all worlds \(w^{\prime}\) at which \(f(w^{\prime},a,w)\) is defined. Otherwise \(V^{w}_{M, a}(\alpha)=F\).
This definition runs into serious trouble, one aspect of which I alluded to in footnote 34, because \(f(w^{\prime},a,w)\) is now undefined at a world \(w^{\prime}\) whenever there is no \(\mathcal{S}\) variant at \(w^{\prime}\) for one of the variables that gets assigned an object by a. This will be true even if the variable in question does not figure in the formula under consideration. As a result, the semantic values of the formula at worlds which should be considered end up being discounted. And even though Gupta makes progress on this issue in the semantics of L3, there are still serious issues [10, cf. p. 72-75].
To respond to these, in L4 Gupta gives up on the assignment of sorts and objects to variables and instead just assigns them individual concepts. With necessity, he now takes into account all of the worlds.
If α is the formula □A, then \(V^{w}_{M, a}(\alpha)=T\) if \(V^{w^{\prime}}_{M, a}(A)=T\) at all worlds \(w^{\prime} \in W\). Otherwise \(V^{w}_{M, a}(\alpha)=F\).
Now that he is not doing anything to discount those cases where the objects figuring in A have ∗ as their extension in some world, only the cats of Fig. 3 on p. 15 will count as necessarily cats, not the ones in Fig. 2. That is, since \(C(e_{h*}) = F\) in \(\gamma_{3}\), \(e_{h*}\) will not necessarily be a cat, even though it is one in all of the cases in which it exists. Thus, with L4 Gupta gives up on accommodating our 'rough definition' of necessity (Definition 9).
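The contrast between the L2 and L4 box rules can be made concrete with a toy 'dead cat' model in the spirit of Figs. 2 and 3 (the model, names, and simplifications below are mine, not Gupta's): the L2 rule skips the world where the cat's counterpart is undefined, while the L4 rule counts that world and so refuses to call the cat necessarily a cat.

```python
WORLDS = ("g1", "g2", "g3")
STAR = "*"                           # stand-in for the non-existent object

def ext(c, w):
    return c[WORLDS.index(w)]

# One cat-concept that exists in g1 and g2 but not in g3.
e_h = ("tom", "tom", STAR)
Cat = {"g1": {e_h}, "g2": {e_h}, "g3": set()}    # C is false of e_h at g3

def counterpart_defined(i, w):
    """In the spirit of Definition 35: no counterpart where the extension is *."""
    return ext(i, w) != STAR

def nec_cat_L2(i):
    """L2-style box: only worlds where the counterpart is defined count."""
    return all(i in Cat[w] for w in WORLDS if counterpart_defined(i, w))

def nec_cat_L4(i):
    """L4-style box: every world counts."""
    return all(i in Cat[w] for w in WORLDS)

print(nec_cat_L2(e_h))  # True: g3 is skipped, so the dead cat is necessarily a cat
print(nec_cat_L4(e_h))  # False: the cat fails to be a cat at g3, where it does not exist
```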
Nunez, T. Modeling Unicorns and Dead Cats: Applying Bressan's MLν to the Necessary Properties of Non-existent Objects. J Philos Logic 47, 95–121 (2018). https://doi.org/10.1007/s10992-016-9418-6
Intensional logic
Non-existent objects
Essential properties
Sortals
Logic of common nouns
Aldo Bressan
Anil Gupta
Richard Montague
Absolute concept
Substance sort
Principle of identity | CommonCrawl |
Results for 'Corey Snelgrove'
Robert Nichols in Conversation with Kelly Aguirre, Phil Henderson, Cressida J. Heyes, Alana Lentin, and Corey Snelgrove.Robert Nichols, Phil Henderson, Cressida J. Heyes, Kelly Aguirre, Alana Lentin & Corey Snelgrove - 2021 - Journal of World Philosophies 6 (2):181-222.details
Kelly Aguirre, Phil Henderson, Cressida J. Heyes, Alana Lentin, and Corey Snelgrove engage with different aspects of Robert Nichols' Theft is Property! Dispossession and Critical Theory. Henderson focuses on possible spaces for maneuver, agency, contradiction, or failure in subject formation available to individuals and communities interpellated through diremptive processes. Heyes homes in on the ritual of antiwill called "consent" that systematically conceals the operation of power. Aguirre foregrounds tensions in projects of critical theory scholarship that aim for dialogue (...) and solidarity with Indigenous decolonial struggles. Lentin draws attention to the role of race in undergirding the logic of Anglo-settler colonial domination that operates through dispossession, while Snelgrove emphasizes the link between alienation, capital, and colonialism. In his reply to his interlocutors, Nichols clarifies aspects of his "recursive logics" of dispossession, a dispossession or theft through which the right to property is generated. (shrink)
Balancing Procedures and Outcomes Within Democratic Theory: Corey Values and Judicial Review.Corey Brettschneider - 2005 - Political Studies 53:423-451.details
Democratic theorists often distinguish between two views of democratic procedures. 'Outcomes theorists' emphasize the instrumental nature of these procedures and argue that they are only valuable because they tend to produce good outcomes. In contrast, 'proceduralists' emphasize the intrinsic value of democratic procedures, for instance, on the grounds that they are fair. In this paper. I argue that we should reject pure versions of these two theories in favor of an understanding of the democratic ideal that recognizes a commitment to (...) both intrinsically valuable democratic procedures and democratic outcomes. In instances in which there is a conflict between these two commitments, I suggest they must be balanced. This balancing approach offers a justification of judicial review on the grounds that it potentially limits outcomes that undermine democracy. But judicial review is not justifiable in any instance in which a bad democratic outcome results from democratic procedures. When the loss that would result from overturning a democratic procedure is greater than the gain to democracy that would result from ensuring against an undemocratic outcome; judicial review is not justifiable. Loss or gain to democracy is defined by the negative or positive impact of each action on the core democratic values of equality and autonomy, aspects of the democratic ideal. Even when judicial review is justified, the fact that it overturns intrinsically valuable procedures suggests that such review is never ideal from the standpoint of democracy. (shrink)
Autonomy in Political Theories in Social and Political Philosophy
Constitutional Interpretation in Philosophy of Law
Constitutional Law, Misc in Philosophy of Law
Constitutionalism in Philosophy of Law
The Concept of Rights in Social and Political Philosophy
Review Symposium of David Corey, The Sophists in Plato's Dialogues: SUNY Press, 2015.Avi I. Mintz, Anne-Marie Schultz, Samantha Deane, Marina McCoy, William H. F. Altman & David D. Corey - 2017 - Studies in Philosophy and Education 37 (4):417-431.details
Philosophy of Education in Philosophy of Social Science
Kant and Rational Psychology.Corey Dyck - 2014 - New York, NY: Oxford University Press UK.details
Corey W. Dyck presents a new account of Kant's criticism of the rational investigation of the soul in his monumental Critique of Pure Reason, in light of its eighteenth-century German context. When characterizing the rational psychology that is Kant's target in the Paralogisms of Pure Reason chapter of the Critique commentators typically only refer to an approach to, and an account of, the soul found principally in the thought of Descartes and Leibniz. But Dyck argues that to do so (...) is to overlook the distinctive rational psychology developed by Christian Wolff, which emphasized the empirical foundation of any rational cognition of the soul, and which was widely influential among eighteenth-century German philosophers, including Kant. In this book, Dyck reveals how the received conception of the aim and results of Kant's Paralogisms must be revised in light of a proper understanding of the rational psychology that is the most proximate target of Kant's attack. In particular, he contends that Kant's criticism hinges upon exposing the illusory basis of the rational psychologist's claims inasmuch as he falls prey to the appearance of the soul as being given in inner experience. Moreover, Dyck demonstrates that significant light can be shed on Kant's discussion of the soul's substantiality, simplicity, personality, and existence by considering the Paralogisms in this historical context.Readership: Scholars and advanced students in history of philosophy, especially those working on Kant. (shrink)
Kant: Rational Psychology in 17th/18th Century Philosophy
Parasite-stress promotes in-group assortative sociality: The cases of strong family ties and heightened religiosity.Corey L. Fincher & Randy Thornhill - 2012 - Behavioral and Brain Sciences 35 (2):61-79.details
Throughout the world people differ in the magnitude with which they value strong family ties or heightened religiosity. We propose that this cross-cultural variation is a result of a contingent psychological adaptation that facilitates in-group assortative sociality in the face of high levels of parasite-stress while devaluing in-group assortative sociality in areas with low levels of parasite-stress. This is because in-group assortative sociality is more important for the avoidance of infection from novel parasites and for the management of infection in (...) regions with high levels of parasite-stress compared with regions of low infectious disease stress. We examined this hypothesis by testing the predictions that there would be a positive association between parasite-stress and strength of family ties or religiosity. We conducted this study by comparing among nations and among states in the United States of America. We found for both the international and the interstate analyses that in-group assortative sociality was positively associated with parasite-stress. This was true when controlling for potentially confounding factors such as human freedom and economic development. The findings support the parasite-stress theory of sociality, that is, the proposal that parasite-stress is central to the evolution of social life in humans and other animals. (shrink)
Philosophy of Psychology in Philosophy of Cognitive Science
Corey W. Dyck, Kant and Rational Psychology. Reviewed by.Nathan R. Strunk - 2016 - Philosophy in Review 36 (3):97-99.details
Corey W. Dyck presents a new account of Kant's criticism of the rational investigation of the soul in his monumental Critique of Pure Reason, in light of its eighteenth-century German context. When characterizing the rational psychology that is Kant's target in the Paralogisms of Pure Reason chapter of the Critique commentators typically only refer to an approach to, and an account of, the soul found principally in the thought of Descartes and Leibniz. But Dyck argues that to do so (...) is to overlook the distinctive rational psychology developed by Christian Wolff, which emphasized the empirical foundation of any rational cognition of the soul, and which was widely influential among eighteenth-century German philosophers, including Kant. In this book, Dyck reveals how the received conception of the aim and results of Kant's Paralogisms must be revised in light of a proper understanding of the rational psychology that is the most proximate target of Kant's attack. In particular, he contends that Kant's criticism hinges upon exposing the illusory basis of the rational psychologist's claims inasmuch as he falls prey to the appearance of the soul as being given in inner experience. Moreover, Dyck demonstrates that significant light can be shed on Kant's discussion of the soul's substantiality, simplicity, personality, and existence by considering the Paralogisms in this historical context. (shrink)
Kant: Metaphysics and Epistemology in 17th/18th Century Philosophy
The Influence of Demonstrated Concern on Perceived Ethical Leadership: A Levinasian Approach.Corey Steiner - 2020 - Philosophy of Management 19 (4):447-467.details
This paper brings empirical and theoretical studies of ethical leadership into conversation with one another in an effort to determine the antecedent to perceived ethical leadership. Employing a Levinasian perspective, I argue that ethical leadership entails being faced with the impossible task of realizing the needs of many individual others. For this reason, I argue, perceived ethical leadership is grounded in an employee's perception that a leader struggles to make decisions based on the conflicting demands placed upon her. More important (...) than the result of a leader's decision is the degree to which the leader demonstrates concern for the well-being of others in her decision-making process. I ground my discussion through reference to results of empirical studies on behaviors associated with ethical leadership, including Brown, Treviño, and Harrison, Kalshoven, Den Hartog, and De Hoogh, and Treviño, Hartman, and Brown. I identify several mediating factors which may influence employee perception of ethical leadership, proposing avenues for further research which can help to clarify the relationship between concrete leadership behaviors and perceived ethical leadership. (shrink)
Emotion's influence on judgment-formation: Breaking down the concept of moral intuition.Corey Steiner - 2019 - Philosophical Psychology 33 (2):228-243.details
ABSTRACTRecent discussions in the field of moral cognition suggest that the relationship between emotion and judgment-formation can be described in three separate ways: firstly, it narrows our atte...
Science, assertion, and the common ground.Corey Dethier - 2022 - Synthese 200 (1):1-19.details
I argue that the appropriateness of an assertion is sensitive to context—or, really, the "common ground"—in a way that hasn't previously been emphasized by philosophers. This kind of context-sensitivity explains why some scientific conclusions seem to be appropriately asserted even though they are not known, believed, or justified on the available evidence. I then consider other recent attempts to account for this phenomenon and argue that if they are to be successful, they need to recognize the kind of context-sensitivity that (...) I argue for. (shrink)
Norms of Assertion in Philosophy of Language
Scientific Language, Misc in General Philosophy of Science
Democratic Rights: The Substance of Self-Government.Corey Brettschneider - 2007 - Princeton University Press.details
When the Supreme Court in 2003 struck down a Texas law prohibiting homosexual sodomy, it cited the right to privacy based on the guarantee of "substantive due process" embodied by the Constitution. But did the court act undemocratically by overriding the rights of the majority of voters in Texas? Scholars often point to such cases as exposing a fundamental tension between the democratic principle of majority rule and the liberal concern to protect individual rights. Democratic Rights challenges this view by (...) showing that, in fact, democracy demands many of these rights. Corey Brettschneider argues that ideal democracy is comprised of three core values--political autonomy, equality of interests, and reciprocity--with both procedural and substantive implications. These values entitle citizens not only to procedural rights of participation but also to substantive rights that a "pure procedural" democracy might not protect. What are often seen as distinctly liberal substantive rights to privacy, property, and welfare can, then, be understood within what Brettschneider terms a "value theory of democracy." Drawing on the work of John Rawls and deliberative democrats such as Jürgen Habermas, he demonstrates that such rights are essential components of--rather than constraints on--an ideal democracy. Thus, while defenders of the democratic ideal rightly seek the power of all to participate, they should also demand the rights that are the substance of self-government. (shrink)
Conceptions of Democracy in Social and Political Philosophy
Democracy, Misc in Social and Political Philosophy
John Rawls in 20th Century Philosophy
Justification of Democracy in Social and Political Philosophy
Existence Assumptions and Logical Principles: Choice Operators in Intuitionistic Logic.Corey Edward Mulvihill - 2015 - Dissertation, University of Waterloodetails
Hilbert's choice operators τ and ε, when added to intuitionistic logic, strengthen it. In the presence of certain extensionality axioms they produce classical logic, while in the presence of weaker decidability conditions for terms they produce various superintuitionistic intermediate logics. In this thesis, I argue that there are important philosophical lessons to be learned from these results. To make the case, I begin with a historical discussion situating the development of Hilbert's operators in relation to his evolving program in the (...) foundations of mathematics and in relation to philosophical motivations leading to the development of intuitionistic logic. This sets the stage for a brief description of the relevant part of Dummett's program to recast debates in metaphysics, and in particular disputes about realism and anti-realism, as closely intertwined with issues in philosophical logic, with the acceptance of classical logic for a domain reflecting a commitment to realism for that domain. Then I review extant results about what is provable and what is not when one adds epsilon to intuitionistic logic, largely due to Bell and DeVidi, and I give several new proofs of intermediate logics from intuitionistic logic+ε without identity. With all this in hand, I turn to a discussion of the philosophical significance of choice operators. Among the conclusions I defend are that these results provide a finer-grained basis for Dummett's contention that commitment to classically valid but intuitionistically invalid principles reflect metaphysical commitments by showing those principles to be derivable from certain existence assumptions; that Dummett's framework is improved by these results as they show that questions of realism and anti-realism are not an "all or nothing" matter, but that there are plausibly metaphysical stances between the poles of anti-realism and realism, because different sorts of ontological assumptions yield intermediate rather than classical logic; and that these intermediate positions between classical and intuitionistic logic link up in interesting ways with our intuitions about issues of objectivity and reality, and do so usefully by linking to questions around intriguing everyday concepts such as "is smart," which I suggest involve a number of distinct dimensions which might themselves be objective, but because of their multivalent structure are themselves intermediate between being objective and not. Finally, I discuss the implications of these results for ongoing debates about the status of arbitrary and ideal objects in the foundations of logic, showing among other things that much of the discussion is flawed because it does not recognize the degree to which the claims being made depend on the presumption that one is working with a very strong logic. (shrink)
History: Philosophy of Mathematics in Philosophy of Mathematics
Intuitionistic Logic in Logic and Philosophy of Logic
Mathematical Logic in Philosophy of Mathematics
Realism and Anti-Realism, Misc in Metaphysics
Higher dimensional cardinal characteristics for sets of functions.Corey Bacal Switzer - 2022 - Annals of Pure and Applied Logic 173 (1):103031.details
Set Theory in Philosophy of Mathematics
The Infinite in Philosophy of Mathematics
How to Do Things with Theory: The Instrumental Role of Auxiliary Hypotheses in Testing.Corey Dethier - 2021 - Erkenntnis 86 (6):1453-1468.details
Pierre Duhem's influential argument for holism relies on a view of the role that background theory plays in testing: according to this still common account of "auxiliary hypotheses," elements of background theory serve as truth-apt premises in arguments for or against a hypothesis. I argue that this view is mistaken. Rather than serving as truth-apt premises in arguments, auxiliary hypotheses are employed as "epistemic tools": instruments that perform specific tasks in connecting our theoretical questions with the world but that are (...) not premises in arguments. On the resulting picture, the acceptability of an auxiliary hypothesis depends not on its truth but on contextual factors such as the task or purpose it is put to and the other tools employed alongside it. (shrink)
Confirmation Holism in General Philosophy of Science
Quine-Duhem Thesis in General Philosophy of Science
The parasite-stress theory may be a general theory of culture and sociality.Corey L. Fincher & Randy Thornhill - 2012 - Behavioral and Brain Sciences 35 (2):99-119.details
In the target article, we presented the hypothesis that parasite-stress variation was a causal factor in the variation of in-group assortative sociality, cross-nationally and across the United States, which we indexed with variables that measured different aspects of the strength of family ties and religiosity. We presented evidence supportive of our hypothesis in the form of analyses that controlled for variation in freedom, wealth resources, and wealth inequality across nations and the states of the USA. Here, we respond to criticisms (...) from commentators and attempt to clarify and expand the parasite-stress theory of sociality used to fuel our research presented in the target article. (shrink)
Aspects of Consciousness in Philosophy of Mind
What Does Social Work Have to Offer Evidence-based Practice?Corey Shdaimah - 2009 - Ethics and Social Welfare 3 (1):18-31.details
Evidence-based practice (EBP) is a relatively recent incarnation in social work's long history of valuing evidence as a basis for practice. Few argue with the ethics and usefulness of grounding practice in empirically tested interventions. Critics of EBP instead focus on how it is defined and implemented. Critiques include what counts as evidence, who makes decisions regarding research agendas and processes, and the lack of attention to context. This essay reflects on such critiques and suggests that social work, as a (...) profession that values human diversity, equality, and self-determination, is well situated to shed light on such debates about EBP. As a profession that supports a person-in-environment perspective, we must examine not only the theory but the practice of EPB in academic, institutional, and societal settings. It is also argued that, owing to our professional mission, it is not enough to acknowledge the risk of oppression and harm; we are obligated to take them seriously and include such potential for harm in our assessment of so-called best practices. (shrink)
From "Either-Or" to "When and How": A Context-Dependent Model of Culture in Action.Corey M. Abramson - 2012 - Journal for the Theory of Social Behaviour 42 (2):155-180.details
In this article, I outline a framework for the sociological study of culture that connects three intertwined elements of human culture and demonstrates the concrete contexts under which each most critically influences actions and their subsequent outcomes. In contrast to models that cast motivations, resources, and meanings as competing explanations of how culture affects action, I argue that these are fundamental constituent elements of culture that are inseparable, interdependent, and simultaneously operative. Which element provides the strongest link to action, and (...) how this link operates, must be understood as a function of the actor's position within wider social contexts. I argue that on average motivations have the most discernable link to action within a social strata, cultural resources provide the strongest link across strata, and meanings have the greatest direct influence when codified and sanctioned. I then offer a reframing and synthesis that reintegrates previously "competing" theories of culture into a more holistic context-dependent model of culture in action. Finally, I use evidence from prior empirical research, as well as new data from an ongoing ethnographic study of health behaviors among the aged, to show how various elements of culture are concretely linked to action in eight different social contexts. In doing so, I provide a roadmap for the transition out of the "either-or" logic underlying much of cultural theory and reemphasize the importance of the classical sociological concern for "when" and "how" various aspects of culture influence action and outcomes in concrete social contexts. (shrink)
Philosophy of Sociology, Misc in Philosophy of Social Science
Neorepublicanism and the Domination of Posterity.Corey Katz - 2017 - Ethics, Policy and Environment 20 (3):294-313.details
Some have recently argued that the current generation dominates future generations by causing long-term climate change. They relate these claims to Philip Pettit and Frank Lovett's neorepublican theory of domination. In this paper, I examine their claims and ask whether the neorepublican conception of domination remains theoretically coherent when the relation is between current agents and nonoverlapping future subjects. I differentiate between an 'outcome' and a 'relational' conception of domination. I show how both are theoretically coherent when extended to posterity (...) but only if we make different definitional and normative choices than those made by Pettit and Lovett. (shrink)
Environmental Philosophy in Philosophy of Biology
Republicanism in Social and Political Philosophy
Fear: The History of a Political Idea.Corey Robin - 2006 - Oup Usa.details
Robin illustrates the central role that fear has played and continues to play in the wielding of power, particularly in politics and the workplace.
Legal services lawyers: when conceptions of lawyering and values clash.Corey S. Shdaimah - 2012 - In Leslie C. Levin & Lynn M. Mather (eds.), Lawyers in Practice: Ethical Decision Making in Context. University of Chicago Press. pp. 317.details
Legal Ethics in Applied Ethics
The Ethical Duty to Reduce the Ecological Footprint of Industrialized Healthcare Services and Facilities.Corey Katz - 2022 - Journal of Medicine and Philosophy 47 (1):32-53.details
According to the widely accepted principles of beneficence and distributive justice, I argue that healthcare providers and facilities have an ethical duty to reduce the ecological footprint of the services they provide. I also address the question of whether the reductions in footprint need or should be patient-facing. I review Andrew Jameton and Jessica Pierce's claim that achieving ecological sustainability in the healthcare sector requires rationing the treatment options offered to patients. I present a number of reasons to think that (...) we should not ration health care to achieve sufficient reductions in a society's overall consumption of ecological goods. Moreover, given the complexities of ecological rationing, I argue that there are good reasons to think that the ethical duty to reduce the ecological footprint of health care should focus on only nonpatient-facing changes. I review a number of case studies of hospitals who have successfully retrofitted facilities to make them more efficient and reduced their resource and waste streams. (shrink)
When the State Speaks, What Should It Say?: How Democracies Can Protect Expression and Promote Equality.Corey Brettschneider - 2012 - Princeton University Press.details
Brettschneider extends this analysis from freedom of expression to the freedoms of religion and association, and he shows that value democracy can uphold the protection of these freedoms while promoting equality for all citizens.
The Cichoń diagram for degrees of relative constructibility.Corey Bacal Switzer - 2020 - Mathematical Logic Quarterly 66 (2):217-234.details
Corey W. Dyck.Paul Guyer'S. - 2009 - Philosophy and Social Criticism 35 (5):613-619.details
Brettschneider, Corey. When the State Speaks, What Should It Say? How Democracies Can Protect Expression and Promote Equality.Princeton, NJ: Princeton University Press, 2012. Pp. 216. $35.00. [REVIEW]Katharine Gelber - 2013 - Ethics 124 (1):177-181.details
On the Wings of Metaphor by Stanley D. Ivie.D. Snelgrove - 2003 - Journal of Thought 38 (3):93-95.details
Semantics in Philosophy of Language
Loren Corey Eiseley: In appreciation.Ward H. Goodenough - 1984 - Zygon 19 (1):21-24.details
Corey Byrnes. Fixing Landscape: A Techno-Poetic History of China's Three Gorges. New York: Columbia University Press, 2018. 344 pp. [REVIEW]Christian Sorace - 2022 - Critical Inquiry 48 (2):433-434.details
The identity theory of quotation.Corey Washington - 1992 - Journal of Philosophy 89 (11):582-605.details
Quotation in Philosophy of Language
Nested Ecology: The Place of Humans in the Ecological Hierarchy.Chelsea Snelgrove - 2010 - Environmental Philosophy 7 (2):182-185.details
Topics in Environmental Ethics in Applied Ethics
A rational approach to animal rights: extensions in abolitionist theory.Corey Lee Wrenn - 2016 - New York: Palgrave-Macmillan.details
Applying critical sociological theory, this book explores the shortcomings of popular tactics in animal liberation efforts. Building a case for a scientifically-grounded grassroots approach, it is argued that professionalized advocacy that works in the service of theistic, capitalist, patriarchal institutions will find difficulty achieving success.
Animal Rights in Applied Ethics
Virtues and Vices in Normative Ethics
Review: Corey Brettschneider, When the State Speaks, What Should It Say? How Democracies Can Protect Expression and Promote Equality. [REVIEW]Katharine Gelber - forthcoming - Philosophical Explorations.details
Positioning students as consumers and entrepreneurs: student service materials on a Hong Kong university campus.Corey Fanglei Huang - 2022 - Critical Discourse Studies 19 (6):667-686.details
Favoring individual entrepreneurial freedom and free-market competition, neoliberalism has reshaped the social and discursive practices of higher education institutions (HEIs) around the world. In this paper, I draw on methods from critical multimodal discourse studies and an analytic concept from linguistic anthropology to examine several sets of student service materials circulating on the campus of a Hong Kong university between 2016 and 2017. While these materials are purportedly designed with student welfare in mind, I demonstrate how they effectively position students (...) as (1) consumers of tailored services or experiences provided by the university; and (2) entrepreneurial selves, that is, socio-economically competitive and self-managed young individuals. I conclude by arguing that these service materials are shaped by and espouse a neoliberal governmentality that (re)orients HEIs and their students towards an all-pervasive marketization, competitiveness, and assertion of class privilege in a globalizing, particularly Westernized late capitalist society in Asia. (shrink)
Nursing work in NHS Direct: constructing a nursing identity in the call-centre environment.Sherrill Ray Snelgrove - 2009 - Nursing Inquiry 16 (4):355-365.details
Nursing Ethics in Applied Ethics
When the eternal can be met: the Bergsonian theology of time in the works of C.S. Lewis, T.S. Eliot, and W.H. Auden.Corey Latta - 2014 - Eugene, Oregon: Pickwick Publications.details
The task of theologizing literature in the twentieth century -- Bergonsian conceptions of time : duration, dualism, intention -- Meeting the eternal in the present : Bergsonsism and the theology of present time in C.S. Lewis's The great divorce -- T.S. Eliot's Bergonsian "always present" : incarnation and duration in Four quartets -- W.H. Auden's themes of time and dualism : the Bergsonsian theology of "kairos and logos.".
In The Intellectual Legacy of Michael Oakeshott.Corey Abel & Timothy Fuller (eds.) - 2005 - Imprint Academic.details
This volume brings together a diverse range of perspectives reflecting the international appeal and multi-disciplinary interest that Oakeshott now attracts.
Social and Political Philosophy, Misc in Social and Political Philosophy
Reconciling the opposing effects of neurobiological evidence on criminal sentencing judgments.Corey Allen, Karina Vold, Gidon Felson, Jennifer Blumenthal-Barby & Eyal Aharoni - 2019 - PLoS ONE 1:1-17.details
Legal theorists have characterized physical evidence of brain dysfunction as a double-edged sword, wherein the very quality that reduces the defendant's responsibility for his transgression could simultaneously increase motivations to punish him by virtue of his apparently increased dangerousness. However, empirical evidence of this pattern has been elusive, perhaps owing to a heavy reliance on singular measures that fail to distinguish between plural, often competing internal motivations for punishment. The present study employed a test of the theorized double-edge pattern using (...) a novel approach designed to separate such motivations. We asked a large sample of participants (N = 330) to render criminal sentencing judgments under varying conditions of the defendant's mental health status (Healthy, Neurobiological Disorder, Psychological Disorder) and the disorder's treatability (Treatable, Untreatable). As predicted, neurobiological evidence simultaneously elicited shorter prison sentences (i.e., mitigating) and longer terms of involuntary hospitalization (i.e., aggravating) than equivalent psychological evidence. However, these effects were not well explained by motivations to restore treatable defendants to health or to protect society from dangerous persons but instead by deontological motivations pertaining to the defendant's level of deservingness and possible obligation to provide medical care. This is the first study of its kind to quantitatively demonstrate the paradoxical effect of neuroscientific trial evidence and raises implications for how such evidence is presented and evaluated. (shrink)
Neuroscience in Cognitive Sciences
Psychology in Cognitive Sciences
The Identity Theory of Quotation.Corey Washington - 1992 - Journal of Philosophy 89 (11):582.details
Ontology in Metaphysics
Review: Corey Dyck's 'Kant and Rational Psychology'. [REVIEW]Dennis Schulting - 2016 - Studi Kantiani 29:185-191.details
Kant: Apperception and Self-Consciousness in 17th/18th Century Philosophy
Kant: The Self in 17th/18th Century Philosophy
Review: Corey W. Dyck, Kant and Rational Psychology. [REVIEW]Naomi Fisher - 2015 - Review of Metaphysics 68 (3):651-653.details
Kant: Philosophy of Mind, Misc in 17th/18th Century Philosophy
Issues and ethics in the helping professions.Gerald Corey, Marianne Schneider Corey & Patrick Callanan - 2015 - United States: Brooks/Cole/Cengage Learning.details
An introduction to ethics issues for people in the helping professions, exploring the role of therapy for both trainees and professional counselors, and discussing values in the helping relationship, client rights and counselor responsibilities, confidentiality, professional competency and training, and other topics.
Psychotherapy and Psychoanalysis in Philosophy of Cognitive Science
The Rights of the Guilty: Punishment and Political Legitimacy.Corey Brettschneider - 2007 - Political Theory 35 (2):175-199.details
In this essay I develop and defend a theory of state punishment within a wider conception of political legitimacy. While many moral theories of punishment focus on what is deserved by criminals, I theorize punishment within the specific context of the state's relationship to its citizens. Central to my account is Rawls's "liberal principle of legitimacy," which requires that all state coercion be justifiable to all citizens. I extend this idea to the justification of political coercion to criminals qua citizens. (...) I argue that the liberal principle of legitimacy implicitly requires states to respect the basic political rights of those who are guilty of committing crimes, thus prohibiting capital punishment. (shrink)
Capital Punishment in Applied Ethics
Contractarianism about Political Authority in Social and Political Philosophy
Punishment in Applied Ethics
Punishment in Criminal Law in Philosophy of Law
Chapter Three: A Lutheran View of Life and Learning: Paradox as Paradigm.Corey Maahs - 2015 - In Gary W. Jenkins & Jonathan Yonan (eds.), Liberal Learning and the Great Christian Traditions. Pickwick Publications.details
Corey Beals, Levinas and the Wisdom of Love: The Question of Invisibility. [REVIEW]Brian Bergen-Aurand - 2009 - Philosophy in Review 29 (5):311-313.details
Emmanuel Levinas in Continental Philosophy
Selfhood and Authenticity.Corey Anton - 2001 - State University of New York Press.details
Explores the notion of selfhood in the wake of the post-structuralist debates.
Forces in a true and physical sense: from mathematical models to metaphysical conclusions.Corey Dethier - 2019 - Synthese 198 (2):1109-1122.details
Wilson [Dialectica 63:525–554, 2009], Moore [Int Stud Philos Sci 26:359–380, 2012], and Massin [Br J Philos Sci 68:805–846, 2017] identify an overdetermination problem arising from the principle of composition in Newtonian physics. I argue that the principle of composition is a red herring: what's really at issue are contrasting metaphysical views about how to interpret the science. One of these views—that real forces are to be tied to physical interactions like pushes and pulls—is a superior guide to real forces than (...) the alternative, which demands that real forces are tied to "realized" accelerations. Not only is the former view employed in the actual construction of Newtonian models, the latter is both unmotivated and inconsistent with the foundations and testing of the science. (shrink)
Classical Mechanics in Philosophy of Physical Science
Positive Psychology: The Study of 'That Which Makes Life Worthwhile'.Corey L. M. Keyes - unknowndetails
Positive psychology aims to help people live and flourish, rather than merely to exist. The term "positive psychology" may seem to imply that all other psychology is in some way negative, but that implication is unintended and untrue. However the term "positive psychology" contains a softer indictment, namely, that psychology has become unbalanced. In the years since World War II psychology, guided by its funding agencies and the rising social conscience of its practitioners, has focused on helping people and society (...) solve serious problems. Clinical psychology has focused on mental illness, social psychology has focused on prejudice, racism, and aggression, and cognitive psychology has focused on diagnosing the errors and biases that lead to bad decisions. There are good reasons to spend more time and money on illness and problems than on health and strengths. Utilitarianism, compassion, and a concern for equality suggest that people in great pain should be helped before those who are not suffering. But there are at least two costs to focusing on illness, problems, and weaknesses. The first cost is an inappropriately negative view of human nature and the human condition. We teach students about the many ways the mind can go wrong, and about the frightening prevalence rates of depression, child abuse, and eating disorders. We teach students that people are fundamentally selfish creatures whose occasional good deeds are accidental products of self-esteem management. Is such cynicism and pessimism really justified? Positive psychology is realistic. It does not claim that human nature is all sweetness and light, but it does offer a more balanced view. Most people are doing reasonably well in life, and have the capacity to thrive and flourish, even when -- or especially when -- confronted with challenges, setbacks, and suffering (see Ryff & Singer, this volume; Wethington, this volume). Most people have experienced powerful feelings of moral elevation and inspiration that are unconnected to any need for self-esteem (see Haidt, this volume).. (shrink)
Destructibility and axiomatizability of Kaufmann models.Corey Bacal Switzer - 2022 - Archive for Mathematical Logic 61 (7):1091-1111.details
A Kaufmann model is an \(\omega _1\) -like, recursively saturated, rather classless model of \({{\mathsf {P}}}{{\mathsf {A}}}\) (or \({{\mathsf {Z}}}{{\mathsf {F}}} \) ). Such models were constructed by Kaufmann under the combinatorial principle \(\diamondsuit _{\omega _1}\) and Shelah showed they exist in \(\mathsf {ZFC}\) by an absoluteness argument. Kaufmann models are an important witness to the incompactness of \(\omega _1\) similar to Aronszajn trees. In this paper we look at some set theoretic issues related to this motivated by the seemingly (...) naïve question of whether such a model can be "killed" by forcing without collapsing \(\omega _1\). We show that the answer to this question is independent of \(\mathsf {ZFC}\) and closely related to similar questions about Aronszajn trees. As an application of these methods we also show that it is independent of \(\mathsf {ZFC}\) whether or not Kaufmann models can be axiomatized in the logic \(L_{\omega _1, \omega } (Q)\) where _Q_ is the quantifier "there exists uncountably many". (shrink)
Atheism in the American Animal Rights Movement: An Invisible Majority.Corey Lee Wrenn - 2019 - Environmental Values 28 (6):715-739.details
Previous research has alluded to the predominance of atheism in participant pools of the Nonhuman Animal rights movement, as well as the correlation between atheism and support for anti-speciesism, but no study to date has independently examined this demographic. This article presents a profile of 210 atheists and agnostics, derived from a larger survey of 287 American vegans conducted in early 2017. Results demonstrate that atheists constitute one of the movement's largest demographics, and that atheist and agnostic vegans are more (...) likely to adopt veganism out of concern for other animals. While these vegans did not register a higher level of social movement participation than religious vegans, they were more intersectionally oriented and more likely to politically identify with the far left. Given the Nonhuman Animal rights movement's overall failure to target atheists, these findings suggest a strategic oversight in overlooking the movement's potentially most receptive demographic. (shrink)
Environmental Ethics in Applied Ethics
Experimental Philosophy: Ethics, Misc in Metaphilosophy
Rater characteristics, response content, and scoring contexts: Decomposing the determinates of scoring accuracy. Corey Palermo - 2022 - Frontiers in Psychology 13.
Raters may introduce construct-irrelevant variance when evaluating written responses to performance assessments, threatening the validity of students' scores. Numerous factors in the rating process, including the content of students' responses, the characteristics of raters, and the context in which the scoring occurs, are thought to influence the quality of raters' scores. Despite considerable study of rater effects, little research has examined the relative impacts of the factors that influence rater accuracy. In practice, such integrated examinations are needed to afford evidence-based (...) decisions of rater selection, training, and feedback. This study provides the first naturalistic, integrated examination of rater accuracy in a large-scale assessment program. Leveraging rater monitoring data from an English language arts summative assessment program, I specified cross-classified, multilevel models via Bayesian estimation to decompose the impact of response content, rater characteristics, and scoring contexts on rater accuracy. Results showed relatively little variation in accuracy attributable to teams, items, and raters. Raters did not collectively exhibit differential accuracy over time, though there was significant variation in individual rater's scoring accuracy from response to response and day to day. I found considerable variation in accuracy across responses, which was in part explained by text features and other measures of response content that influenced scoring difficulty. Some text features differentially influenced the difficulty of scoring research and writing content. Multiple measures of raters' qualification performance predicted their scoring accuracy, but general rater background characteristics including experience and education did not. Site-based and remote raters demonstrated comparable accuracy, while evening-shift raters were slightly less accurate, on average, than day-shift raters. This naturalistic, integrated examination of rater accuracy extends previous research and provides implications for rater recruitment, training, monitoring, and feedback to improve human evaluation of written responses.
Counting zeros and ones in binary representations of prime numbers
If you write down binary representations of all prime numbers starting from 3 up to some (very big) $N^{th}$ prime number and denote with $S_1(N)$ the total number of ones (1) and with $S_0(N)$ the total number of zeros (0) used in all binary representations, what kind of relation would you expect between $S_1$ and $S_0$?
I did not expect to see anything like a 50:50 distribution. All binary representations start with 1 and all odd primes end with 1. So it looks like digit 1 is favored: the binary representation of every odd prime is guaranteed to have at least two ones. There's no such guarantee for zeros.
But what is the ratio between ones and zeros if you remove the first and the last digit from all binary representations? I mean, what happens if you look only at the "central" part of the number?
I might be too naive but I expected to see that, with the first and the last digit removed from every binary representation, the ratio between ones and zeros would be close to 50:50. In other words, I expected to see something like:
$$S_1(N)\approx S_0(N) + 2 N\tag{1}$$
But in reality it's not like that and I would say not even close. I did an experiment with the first 10 million primes and got the following results (BTW, 10,000,000th odd prime turned out to be 179,424,691):
$$N=10,000,000$$ $$S_1=139,605,415$$ $$S_0=124,501,052$$
Proper relationship seems to be:
$$S_1(N) \approx S_0(N) + \frac32 N\tag{2}$$
The margin of error in my experiment is less than 0.1%.
Is there a simple explanation why the correct factor seems to be equal to $\frac32$, not 2?
(I can also share the Java code if someone wants to validate it.)
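For reference, a minimal Python sketch (not the Java program mentioned above) that reproduces the count on a smaller sample with sympy would look roughly like this; the choice of 100,000 primes is only to keep the runtime short:

```python
from sympy import nextprime

def digit_counts(n_primes):
    """Count ones and zeros in the binary expansions of the first n_primes odd primes."""
    s1 = s0 = 0
    p = 2
    for _ in range(n_primes):
        p = nextprime(p)           # 3, 5, 7, 11, ...
        bits = bin(p)[2:]          # binary string without the '0b' prefix
        s1 += bits.count('1')
        s0 += bits.count('0')
    return s1, s0

N = 100_000                        # smaller than 10,000,000, for speed
s1, s0 = digit_counts(N)
print(s1, s0, (s1 - s0) / N)
```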
EDIT: Here is the graph of function:
$$\frac{S_1(N)-S_0(N)}{N}$$
for the first 20 million primes:
It turns out that the value 1.5 is reached only from time to time.
prime-numbers
OldboyOldboy
$\begingroup$ Well...in base $2$, $179424691$ begins with $10$, so you've presumably got a huge number of primes near there (but smaller) which also start with $10$, so you have a variant of a Benford's Law issue going on. For this reason it's a bit hard to read much into single values. $\endgroup$ – lulu Dec 16 '18 at 20:02
$\begingroup$ I don't understand: where do you get the $2N$ from in your formula (1)? $\endgroup$ – Rob Arthan Dec 16 '18 at 20:06
$\begingroup$ there's no guarantee the distribution remains well-behaved. prime numbers become more sparse, for example. try different ranges for your sample $\endgroup$ – David Peterson Dec 16 '18 at 20:09
$\begingroup$ Note: I assume that by $S_i(N)$ you mean that you count up to the $N^{th}$ prime. That's not what you say, but you imply that when you remark that the $10,000,000^{th}$ prime is $179,424,691$ and reading it this way explains the additive factor of $2N$. $\endgroup$ – lulu Dec 16 '18 at 20:14
$\begingroup$ @RobArthan Take some very big prime number. It has to start with one and end with one. What digits are in between? I supposed that, on average, there was an equal number of zeros and ones in between. $\endgroup$ – Oldboy Dec 16 '18 at 20:17
If you consider the primes between $2^n$ and $2^{n+1}-1$, inclusive, you would expect there to be approximately $\frac{c2^n}{n}$ of them for some fixed $c$ (I believe it's $\frac{1}{\ln 2}$), and each should have "on average" $\frac{n+3}{2}$ ones and $\frac{n-1}{2}$ zeroes (assuming the middle bits are chosen uniformly at random). This gives, up through $2^n$,
$$c\sum_{k=1}^{n-1} \frac{2^k}{k}\cdot\frac{k+3}{2}$$
ones, and
$$c\sum_{k=1}^{n-1} \frac{2^k}{k}\cdot\frac{k-1}{2}$$
zeroes, in
$$c\sum_{k=1}^{n-1} \frac{2^k}{k}$$
primes. Letting
$$f(n)=c\sum_{k=1}^{n-1} 2^k,\ \ g(n)=c\sum_{k=1}^{n-1} \frac{2^k}{k},$$
we have, out of $g(n)$ primes, $\frac{f(n)+3g(n)}{2}$ ones and $\frac{f(n)-g(n)}{2}$ zeroes; the difference is thus $2g(n)$, as expected.
Let's say, however, that you count up to $3\cdot 2^{n-1}$ instead. You still have the same number of $0$s and $1$s you did before, but now you have all the numbers from $2^n$ to $3\cdot 2^{n-1}$ as well. We expect there to be about
$$c\frac{2^{n-1}}{n}$$
of them, and they have two ones, a zero, and $n-2$ "free digits," for "on average" $\frac{n+2}{2}$ ones and $\frac{n}{2}$ zeroes. This gives us, across our
$$g(n)+\frac{c2^{n-1}}{n}$$
primes, approximately
$$\frac{f(n)+3g(n)}{2}+\frac{n+2}{2}\cdot\frac{c2^{n-1}}{n}$$
ones and
$$\frac{f(n)-g(n)}{2}+\frac{n}{2}\cdot\frac{c2^{n-1}}{n}$$
zeroes, for a difference of
$$2g(n)+\frac{c2^{n-1}}{n}.$$
The ratio between this and the number of primes we have is
$$\frac{2g(n)+\frac{c2^{n-1}}{n}}{g(n)+\frac{c2^{n-1}}{n}};$$
since $g(n)\sim \frac{c2^n}{n-1}$, this gives us a ratio of
$$\frac{4+1}{2+1}=\frac{5}{3}.$$
This isn't exactly the $\frac{3}{2}$ you got, but it's certainly not $2$ either. As such, we should expect this "difference ratio" to differ from $2$ significantly depending on where your cutoff is. It's possible that if you stop at $\frac{5}{4}\cdot 2^n$ instead, you'll get a different number; the bottom line is that your heuristics can still be right without $\frac{S_1(N)-S_0(N)}{N}\to 2$.
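For what it's worth, here is a rough sympy sketch (the exponent $n=22$ is an arbitrary choice) to check numerically how the difference ratio depends on where the cutoff falls relative to a power of two:

```python
from sympy import primerange

def diff_ratio(limit):
    """Return (S1 - S0) / N over the odd primes below `limit`."""
    s1 = s0 = n = 0
    for p in primerange(3, limit):
        bits = bin(p)[2:]
        s1 += bits.count('1')
        s0 += bits.count('0')
        n += 1
    return (s1 - s0) / n

n = 22
print(diff_ratio(2**n))            # cutoff at a power of two
print(diff_ratio(3 * 2**(n - 1)))  # cutoff halfway into the next "octave"
```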
Carl SchildkrautCarl Schildkraut
Not the answer you're looking for? Browse other questions tagged prime-numbers or ask your own question.
Bound on number of zeros in smallest prime greater than $10^n$
Has anyone established an upper bound for the least integer $k$ such that infinitely many primes have at most $k$ ones in their binary representation?
Cyclic consecutive zeros of binary sequence with prime length
Mathematicians shocked(?) to find pattern in prime numbers
Prime gaps and last digit of prime numbers
How to code how many prime numbers are there between 1 million and 2 million on MATLAB
What property of prime numbers have I ran into?
Prime numbers in binary.
Most palindromic prime numbers
What is the probability there is no prime between $n$ and $n+\ln(n)$? | CommonCrawl |
Volume 19 Supplement 2
Selected articles from the 16th Asia Pacific Bioinformatics Conference (APBC 2018): genomics
Prediction of enhancer-promoter interactions via natural language processing
Wanwen Zeng1,2,
Mengmeng Wu1,3 &
Rui Jiang1,2
Precise identification of three-dimensional genome organization, especially enhancer-promoter interactions (EPIs), is important to deciphering gene regulation, cell differentiation and disease mechanisms. Currently, it is a challenging task to distinguish true interactions from other nearby non-interacting ones since the power of traditional experimental methods is limited due to low resolution or low throughput.
We propose a novel computational framework EP2vec to assay three-dimensional genomic interactions. We first extract sequence embedding features, defined as fixed-length vector representations learned from variable-length sequences using an unsupervised deep learning method in natural language processing. Then, we train a classifier to predict EPIs using the learned representations in a supervised way. Experimental results demonstrate that EP2vec obtains F1 scores ranging from 0.841 to 0.933 on different datasets, which outperforms existing methods. We prove the robustness of sequence embedding features by carrying out sensitivity analysis. Besides, we identify motifs that represent cell line-specific information through analysis of the learned sequence embedding features by adopting an attention mechanism. Last, we show that even superior performance, with F1 scores of 0.889 to 0.940, can be achieved by combining sequence embedding features and experimental features.
EP2vec sheds light on feature extraction for DNA sequences of arbitrary lengths and provides a powerful approach for EPIs identification.
One of the major discoveries in recent years is that non-coding DNAs are not "junk". On the contrary, they fulfill a wide variety of crucial biological roles involving regulatory and signaling functions [1]. Enhancer is one of the most important noncoding elements that has a central role in controlling gene expression [2]. Recent studies have shown that noncoding single nucleotide polymorphisms (SNPs) that are associated with risk for numerous common diseases through genome-wide association studies (GWAS), frequently lie in cell line-specific enhancers [3, 4]. These GWAS SNPs are hard to interpret because we are unaware of how non-coding SNPs affect gene expression and disease transmission through the complicated regulatory relationship [5]. We can improve understanding of disease mechanisms if enhancers are accurately linked to the promoters/genes they regulate. For example, Guo et al. [6] identified mechanism of GWAS risk SNP rs7463708 in promoting prostate transformation. This SNP is located in the enhancer of long noncoding RNA (lncRNA) PCAT1 and significantly upregulates PCAT1 expression. PCAT1 interacts with the enhancers of prostate cancer genes GNMT and DHCR24, and in turn promotes prostate tumorigenesis. Thus, the identification of true three-dimensional (3D) genome organization, especially EPIs across different cell lines constitutes important steps towards understanding of gene regulation, cell differentiation and disease mechanisms.
However, there are enormous technical challenges to obtain these 3D interactions in the entire genome. Chromosome conformation capture-based (3C) methods [7], including 4C [8] and 5C [9] have been developed to detect physical contacts in the 3D space but fail to capture whole genome interactions. Chromatin Interaction Analysis by Paired-End Tag Sequencing (ChIA-PET) [10] allows genome-wide measurements but is restricted to interactions mediated by a preselected protein of interest. The method of Hi-C [11] allows the genome-wide detections of interactions but its current resolution is not high enough (~ 10 kb) to capture individual EPIs. All these traditional experimental approaches for detecting 3D genome interactions remain time-consuming and noisy, motivating the development of computational approaches.
To bridge this growing gap between low-resolution experiments and high-resolution EPIs, some computational methods have been established, which mainly fall into two classes. One class is based on experimental features. For instance, IM-PET [12], RIPPLE [13], TargetFinder [14] and EpiTensor [15], aim to predict 3D genomic interactions in different cell lines by integrating numerous one-dimensional (1D) local chromatin states including genomic and epigenomic data. Among them, TargetFinder is the state-of-the-art computational method to identify true EPIs by collecting experimental data sets including histone modifications, TF binding, chromatin accessibility and gene expressions. The other class is based on sequence information only, which is represented by SPEID [16]. SPEID takes advantage of a convolutional Long Short-Term Memory (LSTM) network to learn the feature representation from input sequences automatically and can reliably predict EPIs.
Existing 3D genomic interaction prediction methods fail to exploit sequence information except SPEID. At the meantime, there are many inspiring methods for 1D chromatin states prediction [17, 18], including gkmSVM for enhancer prediction [19], DeepSEA for epigenomic state prediction [20] and DeepBind for DNA/RNA-binding proteins prediction [21], which extract sequence features and yield high performance. gkmSVM transforms variable-length sequences to fixed-length k-mer features to classify input DNA sequences. k-mer features are an unbiased, complete set of sequence features defined on arbitrary-length DNA sequences but lose the contextual information between adjacent k-mers. DeepSEA and DeepBind take advantage of powerful convolutional neural networks (CNN) but they require fixed-length sequences as input, which is also a limit for SPEID. Since DNA sequences are in variable length and contextual information is important for understanding the function of whole sequence, how to transform a variable-length sequence into a fixed-length vector representation conserving the context information remains challenging and crucial for improving sequence-based prediction methods.
It is well-known that learning a good representation of input data is an important task in machine learning. There is an analogous problem in natural language processing, which is to learn an embedding vector for a sentence, that is essentially, to train a model that is able to automatically transform a sentence to a vector and encodes its semantic meaning. Paragraph Vector [22] successfully solves the problem by mapping texts into a unified vector representation, and generates embedding representation which can be further used for different applications [23], such as machine translation [24], sentiment analysis [22], and information retrieval [25].
Inspired by the idea of sentence embedding, we present a novel 3D interaction prediction method, named EP2vec, in this paper. First, we utilize an unsupervised deep learning method, namely Paragraph Vector, to learn sequence embedding features. Concretely, we embed the enhancer sequences and promoter sequences into a vector space separately, and then every sequence can be represented as a vector, namely the sequence embedding features. Then, EP2vec uses the resulting features for subsequent classification of EPIs through supervised learning. Our experiments prove that we are able to accurately predict EPIs using only the sequence embedding features, which outperforms other existing computational methods. In addition, by combining both sequence embedding features and experimental features, we can further improve performance, which indicates sequence embedding features and experimental features are complementary to each other. Furthermore, by applying an attention mechanism, we successfully interpret the meaning of sequence embedding features and find motifs that represent cell line information. The source code to implement EP2vec can be downloaded from https://github.com/wanwenzeng/ep2vec.
The majority of our datasets were adapted from TargetFinder. Promoter and enhancer regions were identified using ENCODE Segway [26] and ChromHMM [27] annotations for K562, GM12878, HeLa-S3, and HUVEC cell lines, and using Roadmap [28] Epigenomics ChromHMM annotations for NHEK and IMR90 cell lines. Since EPIs could only happen between active enhancers and promoters, we used the full set of all enhancers and promoters as external resources to perform unsupervised feature extraction which would be elaborated in the next section. The total number of enhancers and the number of promoters for each cell line are reported in Table 1. The length distributions of enhancers and promoters in six cell lines are shown in Additional file 1: Figures S1 and S2.
Table 1 Details of each cell line dataset. The enhancers (or promoters) column indicates the number of all known active enhancers (or promoters) for each cell line, which are used for unsupervised feature learning for enhancer (or promoter) sequences
To focus on distal interactions, enhancers closer than 10 kb to the nearest promoter were discarded. Using GENCODE [29] version 19 annotations and RNA-seq data from ENCODE, promoters were reserved if actively transcribed (mean FPKM > 0.3 [30] with irreproducible discovery rate < 0.1 [31]) in each cell line. Positive EPIs were annotated using high-resolution genome-wide Hi-C data [32]. These EPIs were assigned to one of five bins based on the distance between the enhancer and the promoter, such that each bin had the same number of interactions. Negative pairs were assigned to their corresponding distance bin and then subsampled within each bin, using one negative per positive. The number of positive or negative samples for each cell line is reported in Table 1.
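The distance-matched subsampling of negatives can be illustrated with a short pandas sketch; the input file and column names here are hypothetical, and this is not the original TargetFinder pipeline:

```python
import pandas as pd

pairs = pd.read_csv("candidate_pairs.csv")   # hypothetical: columns include distance, label
pos = pairs[pairs["label"] == 1].copy()
neg = pairs[pairs["label"] == 0].copy()

# Five equal-frequency distance bins defined by the positive pairs
pos["bin"], edges = pd.qcut(pos["distance"], q=5, retbins=True, labels=False)
neg["bin"] = pd.cut(neg["distance"], bins=edges, labels=False, include_lowest=True)

# Subsample negatives within each bin, one negative per positive where possible
sampled_neg = (
    neg.dropna(subset=["bin"])
       .groupby("bin", group_keys=False)
       .apply(lambda g: g.sample(n=min(len(g), int((pos["bin"] == g.name).sum())),
                                 random_state=0))
)
balanced = pd.concat([pos, sampled_neg], ignore_index=True)
```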
In addition, we also collected a dataset from FANTOM5 project [4]. The FANTOM5 consortium extracted RNA transcripts from a multitude of different primary cells and tissues using the Cap Analysis of Gene Expression (CAGE) experiment. Because active enhancer regions were transcribed, they identified a distinct bidirectional CAGE pattern which could predict enhancer regions based on CAGE data not associated with promoters. The transcribed enhancer atlas held around 40,000 transcribed enhancers across the human body, which they called permissive enhancers. We collected the permissive enhancers and RefSeq promoters. Using statistical methods, FANTOM5 defined some enhancer-promoter interactions, which we considered as positive samples. Negative samples were generated as random pairs of enhancers and promoters based on the distance distribution of the positive samples.
Workflow of EP2vec
The workflow of EP2vec contained two stages including the unsupervised feature extraction and supervised learning (Fig. 1). Sequences of active regulatory elements in a specific cell line have cell line-specific regulatory information. Hence, EP2vec could use unsupervised methods to extract useful information from the sequences set, which would benefit subsequent tasks such as EPIs prediction. EP2vec regarded DNA sequences as sentences with k-mers as words, and learned effective representations of these sequences based on the co-occurrence statistics of k-mers.
The two-stage workflow of EP2vec. Stage 1 of EP2vec is unsupervised feature extraction which transforms enhancer sequences and promoter sequences in a cell line into sequence embedding features separately. Given a set of all known enhancers or promoters in a cell line, we first split all the sequences into k-mer words with stride s = 1 and assign a unique ID to each of them. Regarding the preprocessed sequences as sentences, we embed each sentence to a vector by Paragraph Vector. Concretely, we use vectors of words in a context with the sentence vector to predict the next word in the context using a softmax classifier. After training converges, we get embedding vectors for words and all sentences, where the vectors for sentences are exactly the sequence embedding features that we need. Note that in the sentence ID, SEQUENCE is a placeholder for ENHANCER or PROMOTER, and N is the total number of enhancers or promoters in a cell line. Stage 2 is supervised learning for predicting EPIs. Given a pair of sequences, namely an enhancer sequence and a promoter sequence, we represent the two sequences using the pre-trained vectors and then concatenate them to obtain the feature representation. Lastly, we train a Gradient Boosted Regression Trees classifier to predict whether this pair is a true EPI
Stage 1 of EP2vec was unsupervised feature extraction which transforms enhancer sequences and promoter sequences in a cell line into sequence embedding features separately. Given a set of all known enhancers or promoters in a cell line, we first split all the sequences into k-mer words with stride s = 1 and assign a unique ID to each of them. Regarding the preprocessed sequences as sentences, we embedded each sentence to a vector by Paragraph Vector. Concretely, we used vectors of words in a context with the sentence vector to predict the next word in the context using softmax classifier. After training converges, we got embedding vectors for words and all sentences, where the vectors for sentences were exactly the sequence embedding features that we needed. Stage 2 is supervised learning for predicting EPIs. Given a pair of sequences, namely an enhancer sequence and a promoter sequence, we represented the two sequences using the pre-trained vectors and then concatenated them to obtain the feature representation. Lastly, we trained a Gradient Boosted Regression Trees classifier (GBRT) to predict whether this pair was a true EPI.
In this section, we will illustrate how to apply Paragraph Vector to learn fixed-length feature representations from variable-length DNA sequences in Stage 1.
First, given a set of all known enhancers or promoters in a cell line, we assigned a unique ID to each sequence, and split it into k-mers. k-mers were split along a sequence using a sliding window with stride s, meaning that two adjacent k-mers had a distance of s bps. Thus, in general, a sequence with L bps will be split into \( \left\lfloor \frac{L-k}{s}\right\rfloor +1 \) k-mers. For example, we could split "ATGCAACAC" into four 6-mers with stride s = 1 as "ATGCAA", "TGCAAC", "GCAACA" and "CAACAC" with ID "SEQUENCE_i" (Fig. 1). From now on, we regarded the split enhancers or promoters as sentences and the k-mers as words. Note that the vocabulary size of k-mers was \(4^k\).
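As a small illustration of this preprocessing step, the splitting can be written in a few lines of Python (the helper name to_kmers is ours, not from the paper):

```python
def to_kmers(seq, k=6, stride=1):
    """Split a DNA sequence into overlapping k-mer 'words'."""
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

print(to_kmers("ATGCAACAC"))        # ['ATGCAA', 'TGCAAC', 'GCAACA', 'CAACAC']
print(len(to_kmers("ATGCAACAC")))   # 4 == floor((9 - 6) / 1) + 1
```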
Second, each sentence was mapped to a unique vector in a d-dimensional vector space, where d was the embedding dimension. Each word was also mapped to a unique vector in the same space. Since the basic training algorithms were greedy in nature, we followed general practice and initialized all these vectors at random before training. For example, the sentence "SEQUENCE_i" was mapped to a d-dimensional vector \(x_i \in {\mathbb{R}}^d\), with each component initialized by a random value. Similarly, the k-mers "ATGCAA", "TGCAAC", "GCAACA" and "CAACAC", which were indexed as \(c_{i,1}, c_{i,2}, c_{i,3}, c_{i,4} \in [1, 4^k]\) in the k-mer vocabulary, were also mapped to four vectors \( {w}^{c_{i,1}},{w}^{c_{i,2}},{w}^{c_{i,3}},{w}^{c_{i,4}}\in {\mathbb{R}}^d \) with random initialization.
Third, we trained all these sentence vectors and word vectors by constructing the training loss function. In detail, we predicted the next word of a context in a sentence by concatenating the vectors of the words in this context and the sentence vector as predictive features. Since the vocabulary size was \(4^k\), the next word had \(4^k\) possibilities. Generally speaking, the context has a fixed window length m and is sampled from the sentence in a sliding window fashion. For example, as shown in Fig. 1, we set the window length m = 3, and used the concatenated vectors of "ATGCAA", "TGCAAC", "GCAACA" and "SEQUENCE_i" to predict the next word "CAACAC" by a 4096-way classification. Note that the sentence vector was shared across all contexts generated from this single sentence, while the word vector for one single k-mer was shared across all sentences.
More formally, suppose we were given N sequences represented by N vectors \(x_1, x_2, \dots, x_N\). The i-th sequence contained \(T_i\) words represented by the vectors \( {w}^{c_{i,1}},{w}^{c_{i,2}},\dots, {w}^{c_{i,{T}_i}} \), and the objective of the model was to maximize the average log probability
$$ \max \sum \limits_{i=1}^N\frac{1}{T_i-m}\sum \limits_{t=1}^{T_i-m}\log p\left({c}_{i,t+m}|{w}^{c_{i,t}},\cdots, {w}^{c_{i,t+m-1}},{x}_i\right). $$
The prediction task was typically accomplished via a multiclass classifier, such as a softmax classifier, which could be formulated as
$$ p\left({c}_{i,t+m}|{w}^{c_{i,t}},\cdots, {w}^{c_{i,t+m-1}},{x}_i\right)=\frac{e^{y_{c_{i,t+m}}}}{\sum \limits_j{e}^{y_j}}. $$
Here, \(j \in [1, 4^k]\) was an index over the output words, and \(y_j\) was the corresponding component of the un-normalized log-probability vector \( y\in {\mathbb{R}}^{4^k} \) computed by
$$ y=b+ Uh, $$
where \( U\in {\mathbb{R}}^{4^k\times \left(m+1\right)d} \) and \( b\in {\mathbb{R}}^{4^k} \) were the softmax parameters, while \( h=\left({w}^{c_{i,t}},\cdots, {w}^{c_{i,t+m-1}},{x}_i\right)\in {\mathbb{R}}^{\left(m+1\right)d} \) was the concatenation of the m word vectors and the sentence vector.
The N sentence vectors and \(4^k\) word vectors were trained using stochastic gradient descent (SGD) together with the softmax parameters U and b, where the gradient was obtained via back propagation [33]. At every step of SGD, one could sample a fixed-length context from a random sentence, compute the error gradient and use the gradient to update the parameters of our model. In practice, hierarchical softmax [34,35,36] was preferred to softmax for fast training. In our study, the structure of the hierarchical softmax was a binary Huffman tree, where short codes were assigned to frequent words. This was a good speedup trick because common words were accessed quickly. This use of binary Huffman codes for the hierarchical softmax was the same as in Mikolov et al. [36].
After the training converges, words with similar meanings were expected to be mapped to adjacent positions in the vector space and the sentence vectors could be used as features for the sentence. In fact, the sentence vectors learned by the model were exactly the sequence embedding features which captured the sequence contextual information. Note that, we trained sequence embedding features for enhancers and promoters separately. We implemented these feature extraction based on the GENSIM packages [37].
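A minimal sketch of this stage with gensim's Doc2Vec implementation (gensim 4.x API) is shown below; the list enhancer_seqs, the epoch count, the worker count and the model file name are our assumptions rather than values reported by the authors:

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# enhancer_seqs: hypothetical list of enhancer DNA strings for one cell line
documents = [
    TaggedDocument(words=[seq[i:i + 6] for i in range(len(seq) - 5)],  # 6-mers, stride 1
                   tags=[f"ENHANCER_{i}"])
    for i, seq in enumerate(enhancer_seqs, start=1)
]

model = Doc2Vec(
    documents,
    dm=1,             # distributed-memory Paragraph Vector
    dm_concat=1,      # concatenate context word vectors with the sentence vector
    vector_size=100,  # embedding dimension d
    window=20,        # context window m
    min_count=1,
    hs=1,             # hierarchical softmax over a Huffman tree
    epochs=20,        # assumed; not stated in the paper
    workers=4,
)
model.save("enhancer_doc2vec.model")   # hypothetical file name
enhancer_vec = model.dv["ENHANCER_1"]  # 100-dimensional sequence embedding
```

Promoter sequences would be embedded in the same way with a separate model, as described above.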
Model training
In this section, we proceeded to interpret Stage 2 of the EP2vec workflow (Fig. 1), namely supervised learning for EPI classification. For each pair of an enhancer and a promoter, we first concatenated the sequence embedding features of the two sequences as the final features. Then, based on this feature representation, we trained a GBRT classifier to predict the binary label, i.e., whether this pair was a true EPI. GBRT was a classifier which used decision trees as weak estimators and combined several weak estimators into a single ensemble model in a stage-wise fashion. The tree ensemble model was a set of classification and regression trees (CART). The prediction scores of each individual tree were summed up to get the final score.
GBRT performed a gradient descent algorithm on the objective function for the binary classification of EPIs, and its performance mainly depended on three hyper-parameters: the learning rate α, the number of trees n, and the tree depth D. Smaller learning rates tended to result in better accuracy but required more iterations. The tree depth D controlled the size of each decision tree. To yield the best performance, we determined the best hyper-parameter setting, α = 1e − 3, n = 4000, D = 25, using a grid search strategy. More details about the training of the GBRT can be found in the online code.
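One way to instantiate such a classifier with scikit-learn, using the hyper-parameters reported above, is sketched here; the embedding arrays and labels are placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical inputs: two (n_pairs, 100) embedding arrays and binary labels
X = np.hstack([enhancer_vecs, promoter_vecs])   # (n_pairs, 200) feature matrix
y = labels

clf = GradientBoostingClassifier(
    learning_rate=1e-3,   # learning rate alpha
    n_estimators=4000,    # number of trees n
    max_depth=25,         # tree depth D
)
clf.fit(X, y)
```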
Model evaluation
To examine the performance of EP2vec in predicting EPIs in a specific cell line, we performed the stratified 10-fold cross-validation experiment on all datasets. We randomly partitioned the training data into ten equal-sized subsets such that each subset contained roughly the same proportions of the two class labels. One of the ten subsets was used for testing the model, and the remaining nine were used as training data. This validation process was repeated ten times, with each of the ten subsets used exactly once as test data.
We calculated F1 scores for each cross-validation, which considered both the precision p and the recall r of the test. Precision p is the number of correct positive results divided by the number of all positive results, and recall r is the number of correct positive results divided by the number of positive results that should have been returned. The F1 score could be interpreted as the harmonic mean of the precision and recall, as F1 = 2rp/(r + p), which reaches its best value at 1 and worst at 0.
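The evaluation scheme can be sketched with scikit-learn as well, reusing clf, X and y from the previous sketch:

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
f1_scores = cross_val_score(clf, X, y, cv=cv, scoring="f1")
print(f1_scores.mean(), f1_scores.std())
```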
We compared the performance of EP2vec with several other baseline methods, including TargetFinder, gkmSVM and SPEID. We directly used the source codes their authors published online. TargetFinder defined three training sets. The first set included features for the enhancer and promoter only (E/P). The second set included features for an extended enhancer (using 3 kb of flanking sequence) and a non-extended promoter (EE/P). The last set included enhancers and promoters plus the window between them (E/P/W), which were up to thousands of base pairs. Since the performance of TargetFinder on the last set was consistently better than on the other two sets according to their publication, we only evaluated this method on the E/P/W set. For gkmSVM, we first needed to transform a pair of two sequences (enhancer and promoter) into a single sequence by concatenating them, and then used it as input for gkmSVM.
Attention mechanism
Not all words contribute equally to the representation of the sentence meaning. Hence, we introduced an attention mechanism to find the critical words that were most important to the meaning of the sentence. Considering the k-mer words as motifs, we essentially aimed to find motifs that contribute more to the vector representation of the enhancer/promoter. Take the i-th enhancer \(x_i\) as an example: it contained \(T_i\) words \( {w}^{c_{i,1}},{w}^{c_{i,2}},\dots, {w}^{c_{i,{T}_i}} \). We measured the importance of each word by computing the similarity between the word vector \( {w}^{c_{i,t}} \) and the sentence vector \(x_i\), and obtained a normalized importance weight \(\alpha_{it}\) through a softmax function, as
$$ {\alpha}_{it}=\frac{\exp \left({x}_i^T{w}^{c_{i,t}}\right)}{\sum \limits_j\exp \left({x}_i^T{w}^{c_{i,j}}\right)}. $$
Therefore, every word in the sentence had a weight representing its importance to the sentence. In order to validate that our model was able to select informative words in a sentence, we visualized the high-weight words. In detail, two sets of informative k-mers were obtained by picking out the most important words for the enhancer and the promoter, respectively, in every sentence with a positive label. Then we performed motif enrichment analysis using CentriMo [38] to compare these words against known motifs in the HOCOMOCO v9 dataset [39], and drew the top enriched motifs with sequence logos [40].
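A rough sketch of this weighting with a trained gensim model is given below (gensim 4.x API; the model file and the k-mer list are assumptions carried over from the earlier sketch):

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec

model = Doc2Vec.load("enhancer_doc2vec.model")   # hypothetical file from the earlier sketch

def kmer_weights(model, doc_tag, kmers):
    """Softmax of dot products between each k-mer vector and the sequence vector."""
    doc_vec = model.dv[doc_tag]
    scores = np.array([np.dot(doc_vec, model.wv[w]) for w in kmers])
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    return weights / weights.sum()

kmers = ["ATGCAA", "TGCAAC", "GCAACA", "CAACAC"]
weights = kmer_weights(model, "ENHANCER_1", kmers)
top_kmers = sorted(zip(kmers, weights), key=lambda t: -t[1])
```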
Computational performance
To consolidate the importance of our work, we compare the performance of EP2vec against other three typical baseline methods, including TargetFinder, gkmSVM and SPEID. TargetFinder is based on experimental features obtained from biological sequencing experiments, and gkmSVM is based on k-mer features and SVM classifiers. SPEID is based on deep learning which uses LSTMs with sequence data to predict EPIs.
They all have their own advantages and disadvantages. (1) For TargetFinder, experimental features are rich in cell line-specific predictive information, but they are expensive and time-consuming to acquire. Besides, for some cell lines, the dimension of accessible experimental features is limited due to a lack of biological experiments. (2) For gkmSVM, k-mer features are an unbiased, general, complete set of sequence features defined on arbitrary-length sequences. However, the k-mers can only capture local motif patterns because they only use the k-mer count information without making full use of context information or co-occurrence information of k-mers. (3) For SPEID, LSTM is a powerful supervised deep learning technique which is able to capture long-range dependencies. Nonetheless, deep learning methods often have millions of parameters to learn in the training process, which takes a long time, and special attention should be put on fine-tuning the network. Usually, it takes time to optimize the network structure for a specific dataset, but this optimal structure may not be applicable to other datasets due to overfitting problems.
Our paper proposes an innovative approach to represent a DNA sequence (or a pair of two DNA sequences) in a fixed-length vector, namely sequence embedding features, using the unsupervised method Paragraph Vector. The training of sequence embedding features utilizes the global statistics information of k-mers, and hence our features form a potentially better presentation for DNA sequences. Specifically, for EP2vec, we set k = 6, the stride s = 1, the context window size m = 20, and the embedding dimension d = 100. We report the F1 score statistics of the four methods in 10-fold cross-validation for each dataset in Table 2. In addition, we also calculate area under the Receiver Operating Characteristic curve (auROC) score (Additional file 1: Table S1) and area under the Precision Recall curve (auPRC) score (Additional file 1: Table S2).
Table 2 The mean values and the standard deviations of F1 scores for EP2vec and other three baseline methods in 10-fold cross-validation experiments. For FANTOM dataset, we do not evaluate TargetFinder due to lack of experimental features, and we do not evaluate SPEID since it is extremely time-consuming to run 10-fold cross validation of SPEID on so many samples
The results in Table 2 show that EP2vec is slightly better than TargetFinder and significantly outperforms the other two sequence-based methods, namely gkmSVM and SPEID. For example, on the GM12878 cell line dataset, the average F1 scores of EP2vec, TargetFinder (on E/P/W), gkmSVM and SPEID are 0.867, 0.844, 0.779 and 0.809, respectively. On the whole, the F1 scores of the above four methods across the six cell line datasets range from 0.867 to 0.933, 0.844 to 0.922, 0.731 to 0.822, and 0.809 to 0.900, respectively. We are convinced that the sequence embedding features learned by EP2vec are comparable to experimental features and have superiority over the other two types of computational sequence features, because they capture the global context information of DNA sequences.
The goal of EP2vec is to capture global sequence information. In our approach, we must split sequences into words using a sliding window fashion to form sentences from which we could extract fixed-length embedding features. To evaluate the stability of EP2vec, we carry out sensitivity analysis for hyper-parameters including k, the stride s and the embedding dimension d.
As shown in Fig. 2, we find that when the embedding dimension d decreases, our model degrades slightly. For example, the F1 score of EP2vec on HUVEC dataset is 0.875 when d = 100. Setting d = 10 and retaining the other hyper-parameters unchanged, we find the F1 score decreases to 0.800. In general, the performance improves with the increase of embedding feature dimension. We note that although the mean F1 scores are not similar across different cell lines, 100 is the common choice of embedding dimension to obtain the near-optimal performances for all datasets. Lower but acceptable performance requires embedding dimension of 40 in NHEK, IMR90 and K562 while 80 in the other cell lines.
The F1 scores for different embedding dimensions. As the embedding dimension increases, the performance increases. An embedding dimension of d = 100 is sufficient to obtain near-optimal performance on all these datasets
Furthermore, we explore the performance of different settings of the model hyper-parameters including k, the stride s and the embedding dimension d. The sensitivity analysis of these hyper-parameters is shown in Additional file 1: Tables S3-S5. These results indicate that EP2vec is robust to all the three hyper-parameters and successful in capturing the information of whole sentences.
Visualizing motifs by attention mechanism
In order to interpret that our model is able to detect informative k-mers or motifs in a sequence, we visualize k-mers with high weights selected using the attention mechanism for K562 and HUVEC in Fig. 3. We consider the most informative k-mers as sequence motifs that determine sequence function. Consequently, we calculate the most informative k-mers in positive samples and present the top enriched known motifs in enhancers and promoters (Additional file 1: Tables S6 and S7).
The enriched motifs in HUVEC and K562. MYB_f1, IKZF1_f1, GFI1_f1 and SOX15_f1 are enriched in HUVEC. KLF6_si and TFE3_f1 are enriched in K562
For example, HUVECs are cells derived from the endothelium of veins from the umbilical cord and are reported to play an important role in hematopoiesis. Among the top five enriched motifs in HUVEC, MYB_f1, GFI1_f1, IKZF1_f1 and SOX15_f1 present some clues to HUVEC cell line-specific information. MYB_f1 will bind to MYB, which plays an essential role in the regulation of hematopoiesis. MYB may be aberrantly expressed or rearranged or undergo translocation in leukemias and lymphomas, and is considered to be an oncogene [41]. GFI1_f1 will bind to GFI1, which functions as a transcriptional repressor. This TF plays a role in diverse developmental contexts, including hematopoiesis and oncogenesis. It functions as part of a complex along with other cofactors to control histone modifications that lead to silencing of the target gene promoters [42]. IKZF1_f1 will bind to IKZF1, which belongs to the family of zinc-finger DNA-binding proteins associated with chromatin remodeling. Overexpression of some dominant-negative isoforms have been associated with B-cell malignancies, such as acute lymphoblastic leukemia [43]. SOX15_f1 will bind to SOX15, which is involved in the regulation of embryonic development and in the determination of the cell fate [44]. All of these top enriched motifs in HUVEC are experimentally proved to be related with hematopoiesis or other similar functions, which indicates that we successfully find informative motifs through applying attention mechanism in EP2vec.
As another example, K562 cells are of the erythroleukemia type, and the line is derived from a 53-year-old female chronic myelogenous leukemia patient in blast crisis. The top two enriched motifs in K562 are KLF6_si and TFE3_f1, which also give evidence of K562-specific information. KLF6_si will bind to KLF6. This TF is a transcriptional activator and functions as a tumor suppressor. Multiple transcript variants encoding different isoforms have been found for this gene, some of which are implicated in carcinogenesis [45]. TFE3_f1 will bind to TFE3. This TF promotes the expression of genes downstream of transforming growth factor beta (TGF-beta) signaling. This gene may be involved in chromosomal translocations in renal cell carcinomas and other cancers, resulting in the production of fusion proteins [46].
TF annotations for top five enriched motifs in all six cell lines are reported in Additional file 1: Tables S8 and S9. From these results, we conclude that sequence embedding features not only perform well but also are interpretable through motif enrichment analysis. Although deep learning is widely applied and always surpass conventional methods in various tasks, it is hard to interpret why deep models perform well. We make use of attention mechanism and try to find out why sequence embedding features outperform others methods. One reasonable explanation is that EP2vec can capture important motifs in a sequence that reveal sequence information.
Combination of two types of features
According to Table 2, we observe that our sequence embedding features outperform experimental features in TargetFinder and sequence features computed in gkmSVM and SPEID. Here, to further improve the prediction accuracy of our model, we attempt to combine our sequence embedding features and experimental features in TargetFinder.
Concretely, we concatenate the 200-dimensional sequence embedding features and the experimental features, and then we use a GBRT with the same hyper-parameters as EP2vec to train a classifier for predicting EPIs. According to Fig. 4, we can see that sequence embedding features are better than experimental features in capturing useful sequence information, while the combination of both types of features generates even better performance. Consequently, sequence embedding features of enhancers and promoters and experimental features of the windows between enhancers and promoters facilitate each other, and their combination performs better than all other feature sets. Finally, we conclude that sequence embedding features and experimental features can be complementary to each other, and we can take advantage of existing experimental features and extracted sequence embedding features to predict true EPIs with high accuracy.
The F1 scores of the combined features and the two single types of features in 10-fold cross-validation. The combination of both types of features generates even better performance, indicating that sequence embedding features and experimental features can be complementary to each other
Deep learning has successful applications in both computer vision and natural language processing (NLP). As is well known, Convolutional Neural Network (CNN) is a powerful deep learning model in computer vision area. Inspired by deep learning applied in image processing, DeepSEA and DeepBind first regard DNA sequences as binary images through one-hot encoding. They both preprocess the DNA sequences by transforming them into 4xL images (L is the length of a sequence), and then use a CNN to model DNA sequences. Many other deep learning approaches applied in sequence analysis recently all follow this idea and achieve excellent performance.
Our deep learning framework EP2vec is different from them. We solve this sequence analysis problem from a different perspective inspired from NLP. In fact, there are also many successful applications of deep learning in NLP area, such as word2vec which embeds words into a vector space. Paragraph Vector is based on word2vec, and it embeds whole sentences to vectors encoding their semantic meanings. We think it is more natural to treat a DNA sequence as a sentence other than an image, since the DNA sequence is only a one-dimensional data while images are often two-dimensional data. Hence, we regard a DNA sequence as a sentence which is comprised of k-mers (or words). We learn good representations of DNA sequences using Paragraph Vector as the results shown in Computational performance. In this unsupervised feature extraction stage, we apply deep learning to extract sequence embedding features which will be used in supervised classification. The superiority of our framework mainly lies in that we utilize the global statistics of k-mer relationships, and can learn a global representation of a DNA sequence.
Our method is innovative in using a different deep learning diagram from existing methods in the following several aspects:
First, we draw strength from recent advance in deep learning and successfully extract fixed-length embedding features for variable-length sequences. Our results suggest that it is possible to use only sequence embedding features instead of traditional genomic and epigenomic features to predict EPIs with competitive results, and that DNA sequences themselves provide enough information about what function they perform in different cell lines. Different from other computational features for DNA sequences, we learn the sequence embedding features on basis of the k-mer co-occurrence statistics using Paragraph Vector, and by learning an embedding vector directly for a sequence we can better represent the global sequence information.
Second, we carry out sensitivity analysis with regard to model hyper-parameters involved in the unsupervised feature learning stage. The result indicates that EP2vec is robust to its hyper-parameters and is effective in capturing the information of whole sequences. Even using only 10-dimensional sequence embedding features, EP2vec still yields satisfactory results.
Third, we explore important motifs that account for enhancers and promoter when mining the information in sequence embedding features. As we all know, deep learning often behaves like black box and people find it hard to explain what the extracted features mean. We illustrate the meaning of sequence embedding features by visualizing the motifs found by attention mechanism with sequence logo. These results indicate that sequence embedding features have underlying biological meanings which we need to pay more attention to.
Last but not the least, we train a hybrid model using both sequence embedding features and the experimental features, which generates better classification results than using a single type of features. We conclude that the two types of features are complementary to each other, and their combination is beneficial for prediction of EPIs.
Nevertheless, our approach can still be improved in the following several aspects. First, we treat every word equally without discrimination in the training. However, using the attention mechanism, we pay more attention to important words in the visualization process. Hence, we could adopt the attention mechanism in the training process and gain a better representation of the whole sequence. Second, in the unsupervised feature extraction stage of the EP2vec workflow, we train sequence embedding features for enhancers and promoters separately, without using interaction information. In fact, we can inject the EPI label information in this stage, so that we can encode not only the cell line-specific information of enhancer and promoter sequences but also the paired information of enhancers and promoters in the feature representation. Third, we could combine sequence-based features and massive biological experimental data in the network training process for sequence embedding features. Although sequence features show good performance, they lose cell line-specific information which is enriched in experimental features. We can fuse the cell line-specific experimental features in the training process and predict EPIs genome-wide.
In conclusion, EP2vec extracts sequence embedding features using unsupervised deep learning method and predicts EPIs accurately using GBRT classifier achieving state-of-the-art performance. Different from the previous sequence-based methods, EP2vec is innovative in extracting fixed-length embedding features for variable-length sequences and retaining the context information. Given the excellent performance of EP2vec, we will continue to improve our approach according to the above discussion. We expect EP2vec and the future revised version to play an important role in all kinds of sequence prediction tasks, such as identification of miRNA target sites and RNA-RNA interactions, and benefit further downstream analysis.
3C: Chromosome conformation capture
auPRC: Area under the Precision Recall curve
auROC: Area under the Receiver Operating Characteristic curve
ChIA-PET: Chromatin interaction analysis by paired-end tag sequencing
EPI: Enhancer-promoter interaction
GBRT: Gradient boosted regression trees
GWAS: Genome-wide association studies
lncRNA: Long noncoding RNA
LSTM: Long short-term memory
NLP: Natural language processing
SGD: Stochastic gradient descent
SNP: Single nucleotide polymorphism
Esteller M. Non-coding RNAs in human disease. Nat Rev Genet. 2011;12(12):861–74.
Shlyueva D, Stampfel G, Stark A. Transcriptional enhancers: from properties to genome-wide predictions. Nat Rev Genet. 2014;15(4):272–86.
Smemo S, Campos LC, Moskowitz IP, Krieger JE, Pereira AC, Nobrega MA. Regulatory variation in a TBX5 enhancer leads to isolated congenital heart disease. Hum Mol Genet. 2012;21(14):3255–63.
Andersson R, Gebhard C, Miguel-Escalada I, Hoof I, Bornholdt J, Boyd M, Chen Y, Zhao X, Schmidl C, Suzuki T, et al. An atlas of active enhancers across human cell types and tissues. Nature. 2014;507(7493):455–61.
Jiang R. Walking on multiple disease-gene networks to prioritize candidate genes. J Mol Cell Biol. 2015;7(3):214–30.
Guo H, Ahmed M, Zhang F, Yao CQ, Li S, Liang Y, Hua J, Soares F, Sun Y, Langstein J, et al. Modulation of long noncoding RNAs by risk SNPs underlying genetic predispositions to prostate cancer. Nat Genet. 2016;48(10):1142–50.
Dekker J, Rippe K, Dekker M, Kleckner N. Capturing chromosome conformation. Science. 2002;295(5558):1306–11.
Simonis M, Klous P, Splinter E, Moshkin Y, Willemsen R, de Wit E, van Steensel B, de Laat W. Nuclear organization of active and inactive chromatin domains uncovered by chromosome conformation capture-on-chip (4C). Nat Genet. 2006;38(11):1348–54.
Dostie J, Richmond TA, Arnaout RA, Selzer RR, Lee WL, Honan TA, Rubio ED, Krumm A, Lamb J, Nusbaum C, et al. Chromosome conformation capture carbon copy (5C): a massively parallel solution for mapping interactions between genomic elements. Genome Res. 2006;16(10):1299–309.
Fullwood MJ, Liu MH, Pan YF, Liu J, Xu H, Mohamed YB, Orlov YL, Velkov S, Ho A, Mei PH, et al. An oestrogen-receptor-alpha-bound human chromatin interactome. Nature. 2009;462(7269):58–64.
Lieberman-Aiden E, van Berkum NL, Williams L, Imakaev M, Ragoczy T, Telling A, Amit I, Lajoie BR, Sabo PJ, Dorschner MO, et al. Comprehensive mapping of long-range interactions reveals folding principles of the human genome. Science. 2009;326(5950):289–93.
He B, Chen C, Teng L, Tan K. Global view of enhancer-promoter interactome in human cells. Proc Natl Acad Sci U S A. 2014;111(21):E2191–9.
Roy S, Siahpirani AF, Chasman D, Knaack S, Ay F, Stewart R, Wilson M, Sridharan R. A predictive modeling approach for cell line-specific long-range regulatory interactions. Nucleic Acids Res. 2015;43(18):8694–712.
Whalen S, Truty RM, Pollard KS. Enhancer-promoter interactions are encoded by complex genomic signatures on looping chromatin. Nat Genet. 2016;48(5):488–96.
Zhu Y, Chen Z, Zhang K, Wang M, Medovoy D, Whitaker JW, Ding B, Li N, Zheng L, Wang W. Constructing 3D interaction maps from 1D epigenomes. Nat Commun. 2016;7:10812.
Singh S, Yang Y, Poczos B, Ma J. Predicting enhancer-promoter interaction from genomic sequence with deep neural networks. bioRxiv. 2016.
Duren Z, Chen X, Jiang R, Wang Y, Wong WH. Modeling gene regulation from paired expression and chromatin accessibility data. Proc Natl Acad Sci U S A. 2017;114(25):E4914–23.
Min X, Zeng W, Chen N, Chen T, Jiang R. Chromatin accessibility prediction via convolutional long short-term memory networks with k-mer embedding. Bioinformatics. 2017;33(14):i92–i101.
Ghandi M, Lee D, Mohammad-Noori M, Beer MA. Enhanced regulatory sequence prediction using gapped k-mer features. PLoS Comput Biol. 2014;10(7):e1003711.
Zhou J, Troyanskaya OG. Predicting effects of noncoding variants with deep learning-based sequence model. Nat Methods. 2015;12(10):931–4.
Alipanahi B, Delong A, Weirauch MT, Frey BJ. Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat Biotechnol. 2015;33(8):831–8.
Le Q, Mikolov T. Distributed representations of sentences and documents. In: Proceedings of the 31st international conference on machine learning (ICML-14): 2014; 2014. p. 1188–96.
Gan M, Li W, Zeng W, Wang X, Jiang R. Mimvec: a deep learning approach for analyzing the human phenome. BMC Syst Biol. 2017;11(Suppl 4):76.
Sutskever I, Vinyals O, Le QV. Sequence to sequence learning with neural networks. In: Advances in neural information processing systems: 2014; 2014. p. 3104–12.
Huang P-S, He X, Gao J, Deng L, Acero A, Heck L: Learning deep structured semantic models for web search using clickthrough data. In: Proceedings of the 22nd ACM international conference on Conference on information & knowledge management: 2013. San Francisco: ACM; 2013: 2333-2338.
Hoffman MM, Buske OJ, Wang J, Weng Z, Bilmes JA, Noble WS. Unsupervised pattern discovery in human chromatin structure through genomic segmentation. Nat Methods. 2012;9(5):473–6.
Ernst J, Kellis M. ChromHMM: automating chromatin-state discovery and characterization. Nat Methods. 2012;9(3):215–6.
Bernstein BE, Stamatoyannopoulos JA, Costello JF, Ren B, Milosavljevic A, Meissner A, Kellis M, Marra MA, Beaudet AL, Ecker JR, et al. The NIH roadmap Epigenomics mapping consortium. Nat Biotechnol. 2010;28(10):1045–8.
Harrow J, Frankish A, Gonzalez JM, Tapanari E, Diekhans M, Kokocinski F, Aken BL, Barrell D, Zadissa A, Searle S, et al. GENCODE: the reference human genome annotation for the ENCODE project. Genome Res. 2012;22(9):1760–74.
Ramskold D, Wang ET, Burge CB, Sandberg R. An abundance of ubiquitously expressed genes revealed by tissue transcriptome sequence data. PLoS Comput Biol. 2009;5(12):e1000598.
Li Q, Brown JB, Huang H, Bickel PJ. Measuring reproducibility of high-throughput experiments. Ann Appl Stat. 2011:1752–79.
Rao SS, Huntley MH, Durand NC, Stamenova EK, Bochkov ID, Robinson JT, Sanborn AL, Machol I, Omer AD, Lander ES, et al. A 3D map of the human genome at kilobase resolution reveals principles of chromatin looping. Cell. 2014;159(7):1665–80.
Rumelhart DE, Hinton GE, Williams RJ. Learning representations by back-propagating errors. Cogn Model. 1988;5(3):1.
Morin F, Bengio Y: Hierarchical Probabilistic Neural Network Language Model. In: Aistats: 2005. Citeseer; 2005: 246-252.
Mnih A, Hinton GE. A scalable hierarchical distributed language model. In: Advances in neural information processing systems: 2009, vol. 2009. p. 1081–8.
Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J. Distributed representations of words and phrases and their compositionality. In: Advances in neural information processing systems: 2013; 2013. p. 3111–9.
Řehůřek R, Sojka P. Software framework for topic Modelling with large corpora. In: Proceedings of LREC 2010 workshop new challenges for NLP frameworks: 2010; 2010. p. 45–50.
Bailey TL, Machanick P. Inferring direct DNA binding from ChIP-seq. Nucleic Acids Res. 2012;40(17):e128.
Kulakovskiy IV, Vorontsov IE, Yevshin IS, Soboleva AV, Kasianov AS, Ashoor H, Ba-Alawi W, Bajic VB, Medvedeva YA, Kolpakov FA, et al. HOCOMOCO: expansion and enhancement of the collection of transcription factor binding sites models. Nucleic Acids Res. 2016;44(D1):D116–25.
Sebastian A, Contreras-Moreira B. footprintDB: a database of transcription factors with annotated cis elements and binding interfaces. Bioinformatics. 2014;30(2):258–65.
Ramsay RG, Gonda TJ. MYB function in normal and cancer cells. Nat Rev Cancer. 2008;8(7):523–34.
Hock H, Hamblen MJ, Rooke HM, Schindler JW, Saleque S, Fujiwara Y, Orkin SH. Gfi-1 restricts proliferation and preserves functional integrity of haematopoietic stem cells. Nature. 2004;431(7011):1002–7.
Virely C, Moulin S, Cobaleda C, Lasgi C, Alberdi A, Soulier J, Sigaux F, Chan S, Kastner P, Ghysdael J. Haploinsufficiency of the IKZF1 (IKAROS) tumor suppressor gene cooperates with BCR-ABL in a transgenic model of acute lymphoblastic leukemia. Leukemia. 2010;24(6):1200–4.
McLaughlin JN, Mazzoni MR, Cleator JH, Earls L, Perdigoto AL, Brooks JD, Muldowney JA 3rd, Vaughan DE, Hamm HE. Thrombin modulates the expression of a set of genes including thrombospondin-1 in human microvascular endothelial cells. J Biol Chem. 2005;280(23):22172–80.
DeKelver RC, Lewin B, Lam K, Komeno Y, Yan M, Rundle C, Lo MC, Zhang DE. Cooperation between RUNX1-ETO9a and novel transcriptional partner KLF6 in upregulation of Alox5 in acute myeloid leukemia. PLoS Genet. 2013;9(10):e1003765.
Heimann P, El Housni H, Ogur G, Weterman MA, Petty EM, Vassart G. Fusion of a novel gene, RCC17, to the TFE3 gene in t(X;17)(p11.2;q25.3)-bearing papillary renal cell carcinomas. Cancer Res. 2001;61(10):4130–5.
Rui Jiang is a RONG professor at the Institute for Data Science, Tsinghua University. We acknowledge the authors of TargetFinder, who provided us with valuable data.
This research was partially supported by the National Natural Science Foundation of China (61573207, 61175002, 61721003). Publication costs were funded by the National Natural Science Foundation of China (61573207, 61175002, 61721003).
The source code can be found at https://github.com/wanwenzeng/ep2vec. The data from TargetFinder can be found at https://github.com/shwhalen/targetfinder.
About this supplement
This article has been published as part of BMC Genomics Volume 19 Supplement 2, 2018: Selected articles from the 16th Asia Pacific Bioinformatics Conference (APBC 2018): genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-19-supplement-2.
MOE Key Laboratory of Bioinformatics; Bioinformatics Division and Center for Synthetic & Systems Biology, Beijing, 100084, China
Wanwen Zeng, Mengmeng Wu & Rui Jiang
Department of Automation, Tsinghua University, Beijing, 100084, China
Wanwen Zeng & Rui Jiang
Department of Computer Science, Tsinghua University, Beijing, 100084, China
Mengmeng Wu
Wanwen Zeng
Rui Jiang
WWZ conducted all experiments. RJ designed the research. WWZ, MMW and RJ wrote this manuscript. All the authors read and approved the final manuscript.
Correspondence to Rui Jiang.
Additional file
Supplementary Tables and Supplementary Figures. (DOCX 302 kb)
Zeng, W., Wu, M. & Jiang, R. Prediction of enhancer-promoter interactions via natural language processing. BMC Genomics 19, 84 (2018). https://doi.org/10.1186/s12864-018-4459-6
Three-dimensional interactions
Diagnostic and Prognostic Research
A prediction model for the decline in renal function in people with type 2 diabetes mellitus: study protocol
Mariella Gregorich1,2,
Andreas Heinzel2,
Michael Kammer1,2,
Heike Meiselbach3,
Carsten Böger4,
Kai-Uwe Eckardt5,
Gert Mayer6,
Georg Heinze1 &
Rainer Oberbauer2 ORCID: orcid.org/0000-0001-7544-6275
Diagnostic and Prognostic Research volume 5, Article number: 19 (2021)
Chronic kidney disease (CKD) is a well-established complication in people with diabetes mellitus. Roughly one quarter of prevalent patients with diabetes exhibit a CKD stage of 3 or higher, and the individual course of progression is highly variable. Therefore, there is a clear need to identify patients at high risk for fast progression and to implement preventative strategies. Existing prediction models of renal function decline, however, assess risk by artificially grouping patients into risk strata prior to model building, with the strata defined by categorizing the least-squares slope through the longitudinally fluctuating eGFR values, which results in a loss of predictive precision and accuracy.
This study protocol describes the development and validation of a prediction model for the longitudinal progression of renal function decline in Caucasian patients with type 2 diabetes mellitus (DM2). For development and internal-external validation, two prospective multicenter observational studies will be used (PROVALID and GCKD). The estimated glomerular filtration rate (eGFR) obtained at baseline and at all planned follow-up visits will be the longitudinal outcome. Demographics, clinical information and laboratory measurements available at a baseline visit will be used as predictors in addition to random country-specific intercepts to account for the clustered data. A multivariable mixed-effects model including the main effects of the clinical variables and their interactions with time will be fitted. In application, this model can be used to obtain personalized predictions of an eGFR trajectory conditional on baseline eGFR values. The final model will then undergo external validation using a third prospective cohort (DIACORE). The final prediction model will be made publicly available through the implementation of an R shiny web application.
Our proposed state-of-the-art methodology will be developed using multiple multicentre study cohorts of people with DM2 in various CKD stages at baseline, who have received modern therapeutic treatment strategies of diabetic kidney disease in contrast to previous models. Hence, we anticipate that the multivariable prediction model will aid as an additional informative tool to determine the patient-specific progression of renal function and provide a useful guide to early on identify individuals with DM2 at high risk for rapid progression.
Peer Review reports
Chronic kidney disease (CKD) has become an increasing global health problem, partly due to the continuously rising incidences of diabetes mellitus, obesity, and hypertension [1]. It is estimated that 8 to 16% of the world's population suffer from CKD, and the rate is even higher in people with DM2, where cross-sectional studies report percentages of roughly 25% [2]. Ninety-five percent of people with early CKD are unaware of their disease [3]. However, early prediction of the continuous decline in kidney function could provide an additional resource for personalized preventative care [4]. Personalized risk assessment based on large study cohorts could therefore offer several potential benefits to the preventive care for people with DM2 such as early detection followed by monitoring to guide treatment and potentially slow the progression of the decline of kidney function as expressed by the estimated glomerular filtration rate (eGFR) [4]. Despite the accumulated knowledge with regard to CKD nowadays, identifying individuals at high risk for fast disease progression has proven to be challenging because longitudinal eGFR slopes not only vary between patients (inter-patient variability), but eGFR measurements also fluctuate within each patient over time (intra-patient variability) [5].
Nevertheless, several prediction models for the progression of renal function decline have previously been developed in various populations. It is of note that the models which were developed in studies from the last decade may no longer be applicable nowadays, because the new treatment classes of SGLT2 blockers and mineralocorticoid receptor antagonists have dramatically changed the course of CKD progression [6,7,8]. In addition, existing predictive models have mainly considered hard renal endpoints such as incident CKD or end-stage renal failure [9, 10]. Studies focused on renal function have mainly considered simplifications of the longitudinal eGFR trajectory by dichotomization of the expected eGFR slope. For instance, Subasi et al. [11] conducted a pilot study based on a randomized double-blinded treatment trial, the African American Study of Kidney Disease and Hypertension (AASK), in order to predict the rate of kidney function decline. The least-squares slopes of GFR decline from the 6-month time point until censoring were used to define rapid (i.e., an absolute slope > 3 mL/min/1.73m2/year) and slow (i.e., an absolute slope between 1 and 3 mL/min/1.73m2/year) progressors for this study. Similarly, Vigil et al. [12] aimed at the identification of predictors of a rapid decline of renal function, defined as an annual eGFR loss > 4 mL/min/1.73m2. However, covariates considered for selection in this analysis were chosen on the basis of their significance in univariable analysis (or by their clinical or biological relevance). Despite its popular use for data reduction, univariable prefiltering has been shown to lead to suboptimal or biased selection results [13]. Pena et al. [14] showed that a panel of novel biomarkers improves the prediction of renal function decline in people with DM2. In contrast to the other mentioned studies, renal function decline was not dichotomized: the eGFR slope was estimated using linear regression and then constituted the outcome of interest in its continuous form. People with an observed eGFR decline larger than −3 mL/min/1.73m2 were considered stable; all others were considered rapid progressors in terms of kidney function decline. A person's risk was then assessed by dichotomizing the observed decline into these two groups and comparing it with the predicted probabilities of eGFR decline. The threshold of −3 mL/min/1.73m2 was selected based on the literature, which however included older studies without the current state-of-the-art therapy of diabetic kidney disease, under which a GFR loss of only 1 mL/min/1.73m2/year is standard [14,15,16,17]. Furthermore, molecular data are usually not easily available in regular clinical care, and no validated reference values exist for these markers.
In this study, we will apply state-of-the-art methodology to avoid deficiencies of previous attempts in prediction models for renal function decline. We will directly incorporate the repeated eGFR measurements per person as an outcome vector into a multivariable mixed-effects model accounting for the dependence of the repeated measurements of eGFR and the clustered data structure due to the use of two multicenter prospective study cohorts. We will also provide the methodology to condition a prediction on an available baseline value of eGFR, facilitating predictions regardless of whether a baseline eGFR is available or not. In contrast to the commonly applied approaches, the repeated values of eGFR across follow-up visits will not be used for slope estimation followed by categorization to infer groups of the severity of loss in kidney function (e.g., stable, mild, and rapid progression) prior to model development. The proposed approach will enable the identification of groups of patients with a high risk of rapid renal function decline by longitudinal prediction of subject-specific eGFR trajectories, thereby providing the probability of exceeding a prespecified cutpoint of eGFR decline per year. After the planned external validation, our model will be implemented as a web tool and can be clinically applied to identify people with DM2 at increased risk of rapid decline in kidney function, or as a supporting tool for clinicians in medical decision-making so that patients can be prompted to seek medical care before a significant deterioration in kidney function occurs.
The main objective of the planned analysis is the development, internal-external, and external validation of a personalized prediction model for eGFR loss per year in Caucasian people with DM2. It is based on clinical information and laboratory measurements recorded at baseline. A secondary aim is the implementation of the externally validated prediction model as a publicly available web application to facilitate general applicability in clinical care based on data from recent eras using modern therapy.
The study cohort for model building comprises individuals from two distinct prospective observational studies (PROVALID, GCKD) covering a wide range of CKD states [18, 19]. For the purpose of external validation, a third prospective cohort study (DIACORE) will be used [20].
PROVALID - PROspective cohort study in patients with type 2 diabetes mellitus for VALIDation of biomarkers
The PROVALID study cohort consists of around 4000 people with DM2 for whom information on medical history, physical status, laboratory measurements, and medication has been prospectively collected. Patients receiving care at the primary healthcare level in five European countries (Austria, Hungary, Netherlands, Poland, and Scotland) were recruited between 2011 and 2015 and followed for at least 5 years. Participants had to be aged 18–75 years and have incident or prevalent DM2, defined as treatment with hypoglycaemic drugs or a diagnosis according to ADA guidelines at the time of study entry. The presence of CKD at study entry was not an exclusion or inclusion criterion. For the full inclusion and exclusion criteria of patients, see Table 1. The aim of the study was to investigate regional differences in the course of diabetic nephropathy to determine the 5-year cumulative incidence of renal and cardiovascular outcomes and to identify predictive biomarkers for the eGFR trajectory in patients with DM2.
Table 1 Overview of the inclusion and exclusion criteria of PROVALID, GCKD, and DIACORE
GCKD—German chronic kidney disease
The prospective cohort study GCKD consists of around 5000 people with CKD. Patients aged 18–74 years and suffering from CKD were included in case of an eGFR between 30 and 60 ml/min/1.73m2 with or without urinary albumin excretion (CKD KDIGO Stage 3) or a better preserved eGFR in the presence of urinary albumin excretion >300 or protein excretion >500mg/day, and were followed up to 10 years. For the full inclusion and exclusion criteria of patients, see Table 1. Out of the 5000 recruited individuals, 1800 had diabetes and therefore represent the relevant patient subset for the prediction model in this work. The follow-up visits were set up in bi-yearly intervals. The main objective of the GCKD study cohort was to identify and validate risk factors for CKD, end-stage renal disease (ESRD), and cardiovascular disease (CVD) events.
DIACORE—the DIAbetes COhoRtE study
DIACORE is a prospective observational cohort study consisting of 6000 people with prevalent DM2 in Germany with at least 10 years of follow-up. The main objective of this study was the investigation of risk factors associated with the development and progression of diabetic complications through biosampling using high-throughput technologies for the collected biosamples (i.e., transcriptomics, proteomics, and metabolomics). To this end, patient information and blood samples were taken at baseline and at every bi-yearly follow-up visit for at least 10 years after study initiation.
The following inclusion criteria will be applied to the individuals from the PROVALID, GCKD, and DIACORE studies for the development and validation of the prediction model in this work:
Caucasian ethnicity
Presence of DM2 at baseline
eGFR > 30 mL/min/1.73m2 at baseline
Aged 18–75 years at baseline
At least three visits with recorded serum creatinine measurements (including baseline)
At least 2 years of follow-up
Study outcome
The primary outcome of interest for the study is the annual decline in renal function, derived through the use of at least 3 follow-up measurements of eGFR. The eGFR values will be calculated by the equation of the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) using the patient's race, sex, age, and serum creatinine level [21].
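For illustration, a minimal base-R sketch of the 2009 CKD-EPI creatinine equation [21] is given below; the function name, argument names, and the example values are ours and not part of the study protocol.

```r
# Illustrative sketch of the 2009 CKD-EPI creatinine equation [21].
# scr: serum creatinine (mg/dL); age in years; sex: "F" or "M"; black: TRUE/FALSE.
ckd_epi_egfr <- function(scr, age, sex, black = FALSE) {
  kappa <- ifelse(sex == "F", 0.7, 0.9)
  alpha <- ifelse(sex == "F", -0.329, -0.411)
  141 *
    pmin(scr / kappa, 1)^alpha *
    pmax(scr / kappa, 1)^(-1.209) *
    0.993^age *
    ifelse(sex == "F", 1.018, 1) *
    ifelse(black, 1.159, 1)          # eGFR in mL/min/1.73 m^2
}

# Example: 62-year-old Caucasian woman with serum creatinine 1.1 mg/dL
ckd_epi_egfr(scr = 1.1, age = 62, sex = "F")
```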
Clinical predictor variables
The candidate variables for consideration in the prediction model were selected by medical experts. Variables of general clinical availability, ease in acquisition, and with clinical acceptance for inclusion in the model will be prioritized. Hence, the following variables will be included in the multivariable outcome model for the decline in renal function (see Table 2): age, sex, body mass index (BMI), smoking status (ever/never smoked), HbA1c, urine albumin-to-creatinine ratio (UACR), presence of glucose-lowering medication, presence of lipid-lowering medication, presence of blood pressure-lowering medication, systolic and diastolic blood pressure, hemoglobin, and serum cholesterol. All predictors were measured at baseline, i.e., the first visit after the patient was included in either of the study cohorts. Due to the varying depth of information regarding the medication of an individual in the three cohorts, only drug indication classes will be included as a binary entry (y/n).
Table 2 Clinical baseline information of study participants within PROVALID, GCKD, and DIACORE
Overall, 13 predictors will be included in the prediction model. Guidelines regarding the minimum required sample size for the development of a new multivariable prediction model have recently been proposed by Riley et al. [22]. Using the accompanying R package "pmsampsize" with an assumed average eGFR of 78.4 mL/min/1.73m2 and a standard deviation of 21.4 mL/min/1.73m2 [23] in the population, an anticipated model R2 of 0.3 [24] and a shrinkage factor of 0.9, the minimum sample size aimed at minimizing overfitting and precisely estimating the parameters of a prediction model with 13 predictors is 271. Further, since regularization is not taken into consideration in model development due to the abundance of data relative to the number of predictors, the computation with a shrinkage parameter of 0.99 results in a required sample size of 3048 subjects. However, a much larger R2 is expected, in which case a minimum sample size of 1569 is required for an R2 of 0.5 and 903 participants for an R2 of 0.7, respectively. Nevertheless, the development cohort will comprise around 5800 subjects and thus will exceed the minimum required sample size by far.
Patient's baseline characteristics will be described for the study cohorts separately, using mean and standard deviation for continuous variables, or median and interquartile range in case of non-normality, and absolute and relative frequencies for categorical variables. Skewed variables will be transformed by the logarithm before the analysis. Data availability and the fraction of missingness will be assessed and reported for each variable. As the fraction of missing data is expected to be low, a complete case analysis will be carried out. Data screening through the computation of sample characteristics and visual inspections to identify the presence of missing values, skewed variable distributions, and outliers, as well as reporting of conducted screening activities will adhere to the conceptual framework for initial data analysis by Huebner et al. [25]. All analysis will be performed with the software R version 4.0.2.
Prediction model development
People adhering to the inclusion and exclusion criteria from either the PROVALID or GCKD study will be considered for model building. Mixed-effects modeling accounts for the dependencies between repeated measurements per person over time and the similarity among people belonging to the same study cohort, hence they are commonly employed to analyze longitudinal and clustered data. Here, we will consider hierarchically nested random effects for the country and people within a country, such that the heterogeneity of eGFR trajectories between the five countries within the development cohort can be modeled appropriately [26]. More specifically, we will use random effects for the intercept and the slope of the repeated eGFR measurements over time and an additional random intercept for the country. We will include the baseline eGFR value in the outcome vector to provide more stable estimates of variability. When applying the model, any available eGFR measurements of an individual for whom eGFR trajectory predictions should be obtained can be used to improve predictions. A similar approach has been conducted by Selles et al. [27] to develop a prediction model not only able to deal with multiple measurements per subject but also able to allow these repeated clinical measurements to contribute to the prediction of an unseen individual. The set of clinical variables outlined in Table 2 will constitute the fixed effects. Automated variable selection will not be conducted as the set of predictors was chosen by background knowledge. Furthermore, due to the large sample size available for estimation overfitting is expected to be minimal, such that further regularization of the predictor effects will be omitted. Clinically relevant pairwise interactions of the independent variables with time will be investigated and added if their inclusion improves the prediction accuracy of the model as determined by the Akaike information criterion (AIC). Any model misspecification, e.g., regarding functional forms of the variables in the model, will be assessed using a plot of the marginal residuals versus the individual variables. Variables with non-linear relationships with the outcome as exhibited by these residual plots will then be modeled using restricted cubic splines to improve the model fit [28]. In addition, the relevance of the inclusion of such non-linear terms for improving the model fit will be evaluated in terms of the AIC.
More formally, the model for the continuous eGFR values Yit at time point t (t = 0, …, T) for subject i (i = 1, …, n) is defined by
$$ {\mathrm{Y}}_{\mathrm{i}\mathrm{t}}={\upbeta}_0+{\mathrm{a}}_{\mathrm{i}}+{\upbeta}_1X+{\upbeta}_2\mathrm{t}+{\upbeta}_3\mathrm{tX}+{\mathrm{b}}_{0\mathrm{i}}+{\mathrm{b}}_{1\mathrm{i}}\mathrm{t}+{\upepsilon}_{\mathrm{i}\mathrm{t}}, $$
with normally distributed residuals \( {\epsilon}_{it}\sim N\left(0,{\sigma}_{\epsilon}^2\right) \) with mean 0 and variance \( {\sigma}_{\epsilon}^2 \), random country-specific intercepts assumed to be ai~N(0, τ2), the random coefficients (b0i, b1i)~MN(0, G) (where MN denotes the bivariate normal distribution with an unspecified covariance matrix G, defined by the diagonal elements \( {\upsigma}_0^2 \) and \( {\upsigma}_1^2 \) and the off-diagonal σ01) and the vector of regression coefficients for the clinical independent variables β = (β0, β1, β2, β3). It should be noted that for simplicity of description, we here consider a single clinical predictor variable X, where β1 and β3 represent the regression coefficients corresponding to the clinical predictor variable and its interaction with time. Without loss of generality, more clinical predictors can be added, each with two regression coefficients for the baseline effects of the predictor and the corresponding interaction with time. When considering restricted cubic splines for a variable, this variable will be represented by several corresponding basis functions in (1).
Model (1) allows to prognosticate baseline and follow-up values of eGFR for a new person j given the value of the clinical predictor X, assuming that for that new person the random intercept and slope are equal to their expected values of 0. Further, we will assume the expected random country effect of aj = 0 for the prediction of eGFR in a new individual. In order to incorporate a baseline eGFR measurement yj0 into the prediction, a best linear unbiased predictor (BLUP) of the individual's random effects can be obtained through the posterior distribution of the random effects. The random intercept for that new subject j is then estimated by
$$ {\hat{\mathrm{b}}}_{0\mathrm{j}}=\frac{\upsigma_0^2\left({\mathrm{y}}_{\mathrm{j}0}-{\hat{\mathrm{y}}}_{\mathrm{j}0}\right)}{\upsigma_0^2+{\upsigma}_{\upvarepsilon}^2}, $$
where \( {\hat{\mathrm{y}}}_{\mathrm{j}0} \) denotes the estimated baseline value from the fixed effects, \( {\sigma}_{\epsilon}^2 \) denotes the variance of the random error and \( {\sigma}_0^2 \) the variance of the random intercept in the covariance matrix of the random coefficients G. The estimated random slope \( {\hat{b}}_{1j} \) for the new individual can be obtained by
$$ {\hat{\mathrm{b}}}_{1\mathrm{j}}=\frac{\upsigma_{01}{\hat{\mathrm{b}}}_{0\mathrm{j}}}{\upsigma_0^2} $$
In actual predictions, estimates are plugged in for \( {\sigma}_o^2 \), σ01, and \( {\upsigma}_{\upvarepsilon}^2 \). The preliminary prediction \( {\hat{y}}_{jt} \) for the new subject j at time t can then be updated by
$$ {\overset{\sim }{\mathrm{y}}}_{\mathrm{jt}}={\hat{\mathrm{y}}}_{\mathrm{jt}}+{\hat{\mathrm{b}}}_{0\mathrm{j}}+\mathrm{t}{\hat{\mathrm{b}}}_{1\mathrm{j}} $$
The predicted eGFR slope for a new individual, combining fixed and random effects, is finally given by \( d\tilde{y}_{j}:= d{\overset{\sim }{\mathrm{y}}}_{\mathrm{j}\mathrm{t}}/ dt={\hat{\upbeta}}_2+{\hat{\upbeta}}_3{\mathrm{X}}_{\mathrm{j}}+{\hat{\mathrm{b}}}_{1\mathrm{j}} \) with \( {\overset{\sim }{V}}_j \) expressing the variance of the predicted eGFR slope of individual j.
Given a correct model specification, the probability of progression can then be obtained by a normal approximation:
$$ \Pr \left(\mathrm{Z}={\mathrm{z}}_{\mathrm{j}}\right)=1-\Phi \left(\frac{{\mathrm{C}}_{\mathrm{p}}-d{\overset{\sim }{\mathrm{y}}}_{\mathrm{j}}}{\sqrt{{\overset{\sim }{V}}_j}}\right) $$
where zj, j ∈ {1, 2} denotes the membership of belonging to one of the two progression groups: rapid and stable progression, and Φ(x) the standard normal distribution function at x. Cp is the corresponding subject-specific cutpoint for defining progression of kidney decline, which will be set to −3mL/min/1.73m2/year at default or can be specified by the intended user in the web application (see Section "Web implementation") [14,15,16]. Now, one can compute the expected probability of belonging to either of the eGFR progression groups given a patient's baseline characteristics and the baseline eGFR.
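As a minimal sketch, Eqs. (2)–(5) can be written directly in base R, assuming the variance components and the fixed-effects predictions have already been extracted from the fitted model (1); all object and function names below are illustrative.

```r
# Sketch of Eqs. (2)-(4): update the fixed-effects prediction for a new subject j
# with an observed baseline eGFR y_j0. sigma0_sq, sigma01 and sigma_eps_sq are the
# estimated variance components of model (1); yhat_j0 and yhat_jt are the
# fixed-effects predictions at baseline and at time t.
update_trajectory <- function(y_j0, yhat_j0, yhat_jt, t,
                              sigma0_sq, sigma01, sigma_eps_sq) {
  b0j <- sigma0_sq * (y_j0 - yhat_j0) / (sigma0_sq + sigma_eps_sq)  # Eq. (2)
  b1j <- sigma01 * b0j / sigma0_sq                                  # Eq. (3)
  list(b0 = b0j, b1 = b1j,
       y_tilde = yhat_jt + b0j + t * b1j)                           # Eq. (4)
}

# Eq. (5): normal approximation to the probability that the predicted individual
# slope exceeds the cutpoint Cp; the complement gives the other progression group.
prob_slope_above_cutpoint <- function(slope_pred, slope_var, Cp = -3) {
  1 - pnorm((Cp - slope_pred) / sqrt(slope_var))
}
```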
For all statistical analyses, the freely available software R will be used. Mainly, the packages nlme and JMbayes will be considered for the implementation of the linear mixed model and the subject-specific prediction informed by the baseline eGFR measurement. The function lme() from the package nlme will be used to fit the linear mixed model and the function IndvPred_lme() from the package JMbayes to obtain the updated predictions and 95% prediction intervals for a new subject.
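A sketch of how the model fit and the subject-specific prediction step might look is given below; the data frame, variable names, and the exact arguments passed to IndvPred_lme() are assumptions on our part and should be checked against the installed versions of nlme and JMbayes.

```r
library(nlme)
library(JMbayes)

# Hypothetical long-format data: one row per visit with 'egfr', 'time' (years since
# baseline), the clinical baseline predictors, and identifiers 'country' and 'id'.
fit <- lme(
  fixed  = egfr ~ (age + sex + bmi + smoking + hba1c + log_uacr + sbp + dbp +
                   hemoglobin + cholesterol + glucose_med + lipid_med + bp_med) * time,
  random = list(country = ~ 1,      # random country-specific intercept
                id      = ~ time),  # random intercept and slope per subject
  data   = provalid_gckd
)

# Subject-specific prediction for a new individual, updated with the eGFR values
# already observed for that person (here only the baseline row).
pred <- IndvPred_lme(
  lmeObject   = fit,
  newdata     = new_patient,
  timeVar     = "time",
  times       = 1:5,                # predict 1-5 years ahead
  interval    = "prediction",
  return_data = TRUE
)
```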
Model evaluation, performance, and external validation
An analysis of the conditional residuals will be conducted to check correct model specification, e.g., normality, linearity, and homoscedasticity of the error terms. Next, the importance of each independent variable will also be evaluated by the drop in the adjusted R2, i.e., the loss in estimated explained variation when the respective predictor is removed from the model.
The performance of the prediction model will be evaluated by an internal-external validation procedure to obtain a reliable assessment of its generalizability of prediction to unseen data [29]. The unit for the non-random data splitting will be the six countries within the PROVALID and GCKD studies (GCKD: Germany; PROVALID: Austria, Hungary, Netherlands, Poland, and Scotland) such that for each iteration, one regional group with its study participants are withheld from model training and then used for testing.
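A schematic version of this leave-one-country-out loop is sketched below; 'dat' and its column names are placeholders, and 'fit' refers to the model object from the previous sketch.

```r
# Internal-external validation: leave one country out, refit model (1), predict.
countries  <- unique(dat$country)
cv_results <- do.call(rbind, lapply(countries, function(ctry) {
  train  <- subset(dat, country != ctry)
  test   <- subset(dat, country == ctry)
  fit_cv <- update(fit, data = train)                     # refit without one country
  data.frame(country   = ctry,
             time      = test$time,
             observed  = test$egfr,
             predicted = predict(fit_cv, newdata = test,
                                 level = 0))              # fixed effects only
}))
```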
The prediction performance will further be assessed through:
Discrimination in terms of Kendall-tau-b concordance correlation coefficients at each time point, i.e., the concordance of the predicted eGFR values at time t obtained by Eq. (4) and the actual observed eGFR in each pair of patients
Calibration in terms of the slope of the calibration curve by
Regressing the predicted eGFR on the actual eGFR values at each time point
Plotting the estimated mean risk for fast progression per decile against the observed ratio of fast progression within each risk decile
Explained variation in terms of adjusted R2, i.e., the proportion of explained variability in the outcome by the independent variables
The external validity of the model will be examined by applying the final prediction model to the DIACORE study cohort and evaluating the three performance measures outlined above.
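To make the planned performance assessment concrete, a simple sketch based on the pooled cross-validation predictions from above is shown below; it is only a rough proxy for the measures listed above, and a fully tie-corrected Kendall tau-b may require a dedicated implementation.

```r
# Illustrative computation of the performance measures per time point,
# using the pooled 'cv_results' from the internal-external validation sketch.
perf_by_time <- do.call(rbind, lapply(split(cv_results, cv_results$time), function(d) {
  data.frame(
    time        = d$time[1],
    # discrimination: rank concordance of predicted and observed eGFR
    kendall_tau = cor(d$predicted, d$observed, method = "kendall"),
    # calibration slope (predicted regressed on observed, as described above)
    calib_slope = unname(coef(lm(predicted ~ observed, data = d))[2]),
    # explained variation of the observed values by the predictions
    adj_r2      = summary(lm(observed ~ predicted, data = d))$adj.r.squared
  )
}))
perf_by_time
```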
Model presentation
Reporting of the model development and the final prediction model will adhere to the Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD) statement, a checklist of items that ought to be reported when publishing a statistical prediction model [30]. The results of this analysis will be published in a peer-review journal. In particular, the final regression model will be presented using regression coefficients and 95% confidence intervals. Further, model reporting will include the covariance matrix of the random effects, the error variance, and the regression formula to allow independent application and validation of the model. Visualizations of results will be generated to improve model literacy for non-statistical readers and users of the web-based calculator.
Web implementation
The externally validated model will be implemented as an interactive prediction calculator, made available online as a web application to support clinical care and medical decision-making with regard to prevention and treatment of chronic kidney function decline in people with DM2. The interactive web application will be created with the help of the R development framework shiny to allow for individual input and the computation of patient-specific predictions given their baseline characteristics. The web application will offer two user interfaces to choose from:
A patient interface with a web layout consisting of limited input controls (i.e., text fields or checkboxes to enter age, sex, eGFR, and UACR)
A clinician interface with a layout featuring input controls for all predictors (i.e., includes laboratory measurements, medication) within the validated prediction model
Both interfaces will be based on the same prediction model formula, with predictors that are excluded from the web layout in the patient interface being fixed as the average value for the continuous variables and the value with the highest relative frequency for the categorical variables. As a result, the use of the model should not only be made available to clinical staff but also to patients themselves.
The output of the web application will contain the following components:
The eGFR value estimated by the model in 1–5 years with 95% model-based prediction intervals
The predicted eGFR decline per year with 95% model-based prediction interval
The proportion of people with higher predicted eGFR loss
The relative risk, i.e., the probability for a rapid eGFR decline for the predicted interval divided by the probability of a rapid eGFR decline for a person with equal age, sex, and baseline eGFR, but most favorable values of all other clinical baseline variables (with rapid decline specified by the intended user)
A visual representation of the results:
The visual communication of the results to the user will illustrate the estimated eGFR decline in the context of the distribution of predicted eGFR declines in the development cohort. In addition, a figure will be generated that shows the predicted trajectory of eGFR over the next 5 years with a 95% prediction interval.
A further option will be the possibility of risk assessment of rapid and stable eGFR decline. However, the choice of cutpoint selection for the categorization will be left to the user of the web-based implementation, in that the user can freely enter a suitable cutpoint for the eGFR classification into rapid and stable progression, whereby the risk of the new individual is estimated. By default, the eGFR cutpoint will be set to −3ml/min/1.73m2.
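A minimal skeleton of such a shiny application is sketched below; predict_egfr_decline() stands in for the validated model formula, and the input fields shown are placeholders rather than the final layout.

```r
library(shiny)

ui <- fluidPage(
  titlePanel("Predicted decline in renal function (illustrative skeleton)"),
  numericInput("age",  "Age (years)", value = 60, min = 18, max = 75),
  selectInput("sex",   "Sex", choices = c("female", "male")),
  numericInput("egfr", "Baseline eGFR (mL/min/1.73m2)", value = 75),
  numericInput("uacr", "UACR (mg/g)", value = 30),
  numericInput("cutpoint", "Cutpoint for rapid decline (mL/min/1.73m2/year)", value = -3),
  verbatimTextOutput("prediction")
)

server <- function(input, output) {
  output$prediction <- renderPrint({
    # placeholder for the validated prediction model formula
    predict_egfr_decline(age = input$age, sex = input$sex, egfr0 = input$egfr,
                         uacr = input$uacr, cutpoint = input$cutpoint)
  })
}

shinyApp(ui = ui, server = server)
```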
In this study, we have described the protocol for the planned development and validation of a new clinical prediction model for the decline of renal function in Caucasian people with DM2 to aid decision-making in clinical care. Our methodological approach will be based on a multivariable linear mixed model able to account for the dependence of clinical parameters per subject over time and for the similarity of individuals within the same country. Previous studies have employed risk prediction based on (multinomial) logistic regression models for which as a dependent variable the dichotomized eGFR slope was used, after estimation by least squares using the repeated measurements and then partitioning at arbitrary cutpoints. Each eGFR strata then covers different intervals on the eGFR slope spectrum and is supposedly indicative of varying severity of the progression of renal function loss. However, this approach suffers from several shortcomings. First, individual eGFR slopes can only be estimated very inaccurately, as only very few data points at non-equidistant time points are available for each patient [5]. Moreover, the categorization of the resulting eGFR slopes adds another layer of possible bias as it leads to subjects close on the continuous spectrum but on the opposite sides of the cutpoint being characterized as different [31]. By contrast, our approach avoids the issues associated with the imprecise estimation of slopes and inappropriate dichotomization prior to model development and takes the observed data fully into consideration when estimating the parameters of the model. In addition, we include the baseline measurements of eGFR into the outcome vector of an individual, as it is subject to the same measurement error as later eGFR measurements. In this way, these baseline measurements contribute to a more precise estimation of the error variance. Nevertheless, available eGFR measurements at the time of prediction (application of the model) can still be incorporated in our approach to optimize the predicted eGFR trajectory. In addition, the clustered data structure due to the two multicentre cohorts (GCKD and PROVALID) for model training will be taken into account by including an additional random intercept for the country in contrast to existing models. Therefore, the potential heterogeneity of model performance across the countries, which are distinct between GCKD and PROVALID, is prevented and hence, the generalizability to unseen individuals is improved.
However, this study will have a few limitations. First, it is to be expected that for some individuals not all creatinine measurements will be available, in particular, at later time points. Such random missingness is common in prospective longitudinal studies. Even in the case of informative missingness, by capturing much information about the patients by means of domain-expertise-selected clinical baseline variables, which are included in multivariable modeling, the mixed model is still able to provide unbiased predictions. Second, we will not include extensive biosampling data (e.g., metabolomics, proteomics, lipidomics) in the model. The inclusion of these biomarkers may improve the predictive accuracy of the model, but the scientific evidence for an added value in prediction is still scarce and the biomarkers have no validated test characteristics. We will rather prioritize clinical variables that are regularly collected and which are widely available to ensure the broad applicability of our model. Lastly, the lack of standardization of creatinine assays across cohorts inflicts variability of the clinical laboratory serum creatinine measurements and hence can induce potential bias in calculating eGFR.
The key strength of this study includes not only the refined methodology in the development of the prediction model, but also its large sample size due to the usage of two prospective observational cohort studies PROVALID and GCKD for model development. This ensures stable parameter estimation of the model. Another strength is the availability of an additional, independent prospective cohort study, DIACORE, for external validation after model development. To our knowledge, this is the first prediction model specifically developed for a central European population suffering from DM2 and covering a wide range of CKD stages.
Overall, we have outlined a robustly developed and validated clinical prediction model that will generalize to a wide range of people with DM2 across initial stages of CKD progression. It will be derived from a diverse and multinational population using two large studies and will be used to predict deterioration in renal function in patients with DM2, so that early intervention in patients at high risk of rapid renal function decline can help preserve eGFR.
This will be a prediction model for eGFR loss in Caucasian subjects with DM2 that will use data from recent observational multinational studies. By adhering to transparent reporting procedures and current best practice for model development and validation, we will minimize the risk of bias when our prediction model is applied in the context of primary prevention of progression of CKD in people with DM2.
Kalantar-Zadeh K, Jafar TH, Nitsch D, Neuen BL, Perkovic V. Chronic kidney disease. Lancet. 2021.
Jitraknatee J, Ruengorn C, Nochaiwong S. Prevalence and risk factors of chronic kidney disease among type 2 diabetes patients: a cross-sectional study in primary care practice. Sci Rep. 2020;10(1):1–10. https://doi.org/10.1038/s41598-020-63443-4.
Chen TK, Knicely DH, Grams ME. Chronic kidney disease diagnosis and management: a review. Jama. 2019;322(13):1294–304. https://doi.org/10.1001/jama.2019.14745.
Pavkov ME, Collins AJ, Coresh J, Nelson RG. Kidney disease in diabetes. Diabetes in America 3rd Edition; 2018.
Kerschbaum J, Rudnicki M, Dzien A, Dzien-Bischinger C, Winner H, Heerspink HL, et al. Intra-individual variability of eGFR trajectories in early diabetic kidney disease and lack of performance of prognostic biomarkers. Sci Rep. 2020;10(1):1–7. https://doi.org/10.1038/s41598-020-76773-0.
Heerspink HJ, Stefánsson BV, Correa-Rotter R, Chertow GM, Greene T, Hou F-F, et al. Dapagliflozin in patients with chronic kidney disease. N Engl J Med. 2020;383(15):1436–46. https://doi.org/10.1056/NEJMoa2024816.
Ferreira JP, Kraus BJ, Zwiener I, Lauer S, Zinman B, Fitchett DH, et al. Cardio/kidney composite end points: a post hoc analysis of the EMPA-REG OUTCOME Trial. J Am Heart Assoc. 2021;10(7):e020053. https://doi.org/10.1161/JAHA.120.020053.
Bakris GL, Agarwal R, Anker SD, Pitt B, Ruilope LM, Rossing P, et al. Effect of finerenone on chronic kidney disease outcomes in type 2 diabetes. N Engl J Med. 2020;383(23):2219–29. https://doi.org/10.1056/NEJMoa2025845.
Dunkler D, Gao P, Lee SF, Heinze G, Clase CM, Tobe S, et al. Risk prediction for early CKD in type 2 diabetes. Clin J Am Soc Nephrol. 2015;10(8):1371–9. https://doi.org/10.2215/CJN.10321014.
Tangri N, Stevens LA, Griffith J, Tighiouart H, Djurdjev O, Naimark D, et al. A predictive model for progression of chronic kidney disease to kidney failure. Jama. 2011;305(15):1553–9. https://doi.org/10.1001/jama.2011.451.
Subasi E, Subasi MM, Hammer PL, Roboz J, Anbalagan V, Lipkowitz MS. A classification model to predict the rate of decline of kidney function. Frontiers in Medicine. 2017;4:97. https://doi.org/10.3389/fmed.2017.00097.
Vigil A, Condés E, Camacho R, Cobo G, Gallar P, Oliet A, et al. Predictors of a rapid decline of renal function in patients with chronic kidney disease referred to a nephrology outpatient clinic: a longitudinal study. Adv Nephrol. 2015;2015:1–8. https://doi.org/10.1155/2015/657624.
Heinze G, Dunkler D. Five myths about variable selection. Transpl Int. 2017;30(1):6–10. https://doi.org/10.1111/tri.12895.
Pena MJ, Heinzel A, Heinze G, Alkhalaf A, Bakker SJ, Nguyen TQ, et al. A panel of novel biomarkers representing different disease pathways improves prediction of renal function decline in type 2 diabetes. PLoS One. 2015;10(5):e0120995. https://doi.org/10.1371/journal.pone.0120995.
Eriksen B, Ingebretsen O. The progression of chronic kidney disease: a 10-year population-based study of the effects of gender and age. Kidney Int. 2006;69(2):375–82. https://doi.org/10.1038/sj.ki.5000058.
Shlipak MG, Katz R, Kestenbaum B, Fried LF, Newman AB, Siscovick DS, et al. Rate of kidney function decline in older adults: a comparison using creatinine and cystatin C. Am J Nephrol. 2009;30(3):171–8. https://doi.org/10.1159/000212381.
Rifkin DE, Shlipak MG, Katz R, Fried LF, Siscovick D, Chonchol M, et al. Rapid kidney function decline and mortality risk in older adults. Arch Intern Med. 2008;168(20):2212–8. https://doi.org/10.1001/archinte.168.20.2212.
Eder S, Leierer J, Kerschbaum J, Rosivall L, Wiecek A, de Zeeuw D, et al. A prospective cohort study in patients with type 2 diabetes mellitus for validation of biomarkers (PROVALID)–study design and baseline characteristics. Kidney Blood Press Res. 2018;43(1):181–90. https://doi.org/10.1159/000487500.
Eckardt K-U, Bärthlein B, Baid-Agrawal S, Beck A, Busch M, Eitner F, et al. The German chronic kidney disease (GCKD) study: design and methods. Nephrol Dial Transplant. 2012;27(4):1454–60. https://doi.org/10.1093/ndt/gfr456.
Dörhöfer L, Lammert A, Krane V, Gorski M, Banas B, Wanner C, et al. Study design of DIACORE (DIAbetes COhoRtE)–a cohort study of patients with diabetes mellitus type 2. BMC Med Genet. 2013;14(1):25. https://doi.org/10.1186/1471-2350-14-25.
Levey AS, Stevens LA, Schmid CH, Zhang Y, Castro AF III, Feldman HI, et al. A new equation to estimate glomerular filtration rate. Ann Intern Med. 2009;150(9):604–12. https://doi.org/10.7326/0003-4819-150-9-200905050-00006.
Riley RD, Snell KI, Ensor J, Burke DL, Harrell FE Jr, Moons KG, et al. Minimum sample size for developing a multivariable prediction model: part I–continuous outcomes. Stat Med. 2019;38(7):1262–75. https://doi.org/10.1002/sim.7993.
Bramlage P, Lanzinger S, Hess E, Fahrner S, Heyer CH, Friebe M, et al. Renal function deterioration in adult patients with type-2 diabetes. BMC Nephrol. 2020;21(1):1–10. https://doi.org/10.1186/s12882-020-01952-0.
Heinzel A, Kammer M, Mayer G, Reindl-Schwaighofer R, Hu K, Perco P, et al. Validation of plasma biomarker candidates for the prediction of eGFR decline in patients with type 2 diabetes. Diabetes Care. 2018;41(9):1947–54. https://doi.org/10.2337/dc18-0532.
Huebner M, le Cessie S, Schmidt CO, Vach W. A contemporary conceptual framework for initial data analysis. Observational Studies. 2018;4(1):171–92. https://doi.org/10.1353/obs.2018.0014.
Falconieri N, Van Calster B, Timmerman D, Wynants L. Developing risk models for multicenter data using standard logistic regression produced suboptimal predictions: a simulation study. Biom J. 2020;62(4):932–44. https://doi.org/10.1002/bimj.201900075.
Selles RW, Andrinopoulou E-R, Nijland RH, Van Der Vliet R, Slaman J, van Wegen EE, et al. Computerised patient-specific prediction of the recovery profile of upper limb capacity within stroke services: the next step. J Neurol Neurosurg Psychiatry. 2021;92(6):574–81. https://doi.org/10.1136/jnnp-2020-324637.
Sauerbrei W, Perperoglou A, Schmid M, Abrahamowicz M, Becher H, Binder H, et al. State of the art in selection of variables and functional forms in multivariable analysis—outstanding issues. Diagnostic and Prognostic Research. 2020;4(1):1–18. https://doi.org/10.1186/s41512-020-00074-3.
Steyerberg EW, Harrell FE Jr. Prediction models need appropriate internal, internal-external, and external validation. J Clin Epidemiol. 2016;69:245–7. https://doi.org/10.1016/j.jclinepi.2015.04.005.
Collins GS, Reitsma JB, Altman DG, Moons KG. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD) the TRIPOD statement. Circulation. 2015;131(2):211–9. https://doi.org/10.1161/CIRCULATIONAHA.114.014508.
Altman DG, Royston P. The cost of dichotomising continuous variables. BMJ. 2006;332(7549):1080. https://doi.org/10.1136/bmj.332.7549.1080.
The research leading to these results has received support from the Innovative Medicines Initiative Undertaking under grant agreement no. 115974 BEAt-DKD.
Section for Clinical Biometrics, Center for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Vienna, Austria
Mariella Gregorich, Michael Kammer & Georg Heinze
Division of Nephrology and Dialysis, Department of Internal Medicine III, Medical University of Vienna, Vienna, Austria
Mariella Gregorich, Andreas Heinzel, Michael Kammer & Rainer Oberbauer
Department of Nephrology and Hypertension, Friedrich-Alexander Universität Erlangen-Nürnberg (FAU), Erlangen, Germany
Heike Meiselbach
Department of Nephrology, University of Regensburg, University Hospital Regensburg, Regensburg, Germany
Carsten Böger
Department of Nephrology and Medical Intensive Care, Charité Universitätsmedizin Berlin, Berlin, Germany
Kai-Uwe Eckardt
Department of Internal Medicine IV (Nephrology and Hypertension), Medical University Innsbruck, Innsbruck, Austria
Gert Mayer
Mariella Gregorich
Andreas Heinzel
Michael Kammer
Georg Heinze
Rainer Oberbauer
MG was responsible for the study conceptualization and design and drafted the initial manuscript. GH was responsible for the study conceptualization and design and revised the manuscript critically for important intellectual content. AH, RO, MK, GM, CB, and KUE revised the manuscript critically for important intellectual content. HM has made substantial contributions to the acquisition of the data. The authors have approved the final submitted version and are accountable for all aspects of the work.
Correspondence to Rainer Oberbauer.
The PROVALID study protocol was approved in each participating country by the responsible local Institutional Review Board (IRB). Signing an informed consent was a prerequisite for study participation in all countries. The GCKD study protocol was approved by local institutional review boards at each participating academic institution, and the data protection concept was reviewed by the data protection officer of the State of Hessen. The DIACORE study protocol, the data protection strategy, and the study procedures have been approved by the Ethics Committees of participating institutions and are in accordance with the Declaration of Helsinki. Patients participate in DIACORE only after providing informed written consent.
Gregorich, M., Heinzel, A., Kammer, M. et al. A prediction model for the decline in renal function in people with type 2 diabetes mellitus: study protocol. Diagn Progn Res 5, 19 (2021). https://doi.org/10.1186/s41512-021-00107-5
Prediction modeling
Mixed model
Helgoland Marine Research
Some population parameters of Ruditapes philippinarum (Bivalvia, Veneridae) on the southern coast of the Marmara Sea, Turkey
Serhat Çolakoğlu1 &
Mustafa Palaz2
Helgoland Marine Research volume 68, pages 539–548 (2014)
Ruditapes philippinarum, a venerid clam, is a dominant species in the sandy and muddy areas in the coastal waters of the Marmara Sea. Intensive commercial harvesting of this species is conducted in these regions. We studied the population dynamics of R. philippinarum on the southern coast of the Marmara Sea (Bandırma). Samples were collected on a monthly basis between September 2012 and August 2013. Seasonal von Bertalanffy growth parameters using the length–frequency distribution of R. philippinarum were estimated at L ∞ = 67.50 mm and K = 0.33 year−1, and the seasonal oscillation in growth rate was 0.53. The slowest growth period was in January. The growth performance index and potential lifespan were 3.182 and 8.06 years, respectively. The growth relationship was confirmed to have a positive allometric pattern. The average total mortality rate was estimated to be 0.777 year−1, whereas the natural and fishing mortality rates were 0.539 and 0.238 year−1, respectively. The current exploitation rate of R. philippinarum was 0.306. The recruitment pattern peaked during June–August, and spawning occurred between May and August. The results of this study provide valuable information on the status of R. philippinarum stocks.
The venerid clam Ruditapes philippinarum inhabits sandy and muddy bottoms of seas and is usually found buried 2–3 cm below the surface in the intertidal zone. Natural populations of this species are distributed along the coast of the Pacific and Atlantic Oceans, as well as the coastlines of the Adriatic and Aegean Seas (Jensen et al. 2004), and along the coast of the Mediterranean and Marmara Seas (Albayrak 2005). Along the southern coast of the Marmara Sea, R. philippinarum is one of the most abundant bivalve species at depths between 1 and 10 m, and it is only collected by scuba diving. R. philippinarum was first reported from this area by Albayrak (2005).
R. philippinarum is one of the most commercially exploited bivalve molluscs in the world, and its production comes from both fishing of natural stocks and cultivated grounds. The total catch of this species was reported to be approximately 40,000 tonnes in 2010 (FAO 2012). However, in Turkey, natural stocks are the only source of Ruditapes sp., and the annual catch was reported to be 14.9 tonnes in 2011 (Türkstat 2012).
The commercial harvesting of R. philippinarum in the Marmara Sea increased during the 2000s (Albayrak 2005) with the establishment of designated growing areas for intensive fishing. However, there have been no previous investigations of the population dynamics of this clam in the Marmara Sea. Several studies have analysed other aspects of this species such as stock assessment and management (Cho et al. 2008; Spillman et al. 2009; Dang et al. 2010; Choi et al. 2011), reproduction (Robert et al. 1993; Matozzo et al. 2003; Kang et al. 2007; Ren et al. 2008), recruitment (Toba et al. 2007; Komorita et al. 2009) and population structure (Yap 1977; Bourne 1982; Flye-Sainte-Marie et al. 2007; Ponurovsky 2008; Caill-Milly et al. 2012).
The objective of the present study was to examine the growth rates, mortality rates, reproduction and recruitment of R. philippinarum as well as to assess its stocks in the coastal regions of the Marmara Sea. This information will be important for the management and conservation of populations of this species in this region.
Study area and sampling
This study was conducted on the Bandırma Bay coast, south of the Marmara Sea (40°24′25″N–27°55′33″E; Fig. 1) in intertidal and shallow sub-tidal areas with sandy bottoms. R. philippinarum samples were collected on a monthly basis between September 2012 and August 2013. Samples were collected by towing parallel to the shoreline during low tide for 10 min (length of dredge mouth and height: 55 and 30 cm, respectively; number of teeth and length: 25 and 16 cm, respectively; mesh size: 5 mm) at a depth of 3–8 m using a mechanical dredge. Shell length (SL) and total weight (TW) of individual bivalves were measured for a period of 1 year. Size measurements were used to estimate growth parameters. The sea surface temperature varied between 8.70 °C in winter (February) and 24.10 °C in summer (July), with a mean of 16.20 ± 1.55 °C (Fig. 2). Seawater temperature in the sampling area was measured using a mercury bulb thermometer.
Sampling location [Bandırma (40°24′25″N–27°55′33″E)]
Changes in seawater temperature during the study period
In total, 10,626 R. philippinarum were sampled. Anterior–posterior length (SL) of individual specimens was measured using digital callipers (0.01-mm accuracy). Length–frequency distributions were constructed with 1-mm intervals for each month. Total, shell and wet meat weight of each bivalve were measured using an electronic balance (0.01-mg accuracy).
The length–weight relationship was determined according to the allometric equation defined by Ricker (1973): Y = aX b, where Y is TW, X is SL, a is the intercept and b is the slope. Parameters a and b were estimated by least squares linear regression using log–log transformed data:
$$\log \text{TW} = \log a + b\log \text{SL}$$
The coefficient of determination (r 2) was used as an indicator of the linear regression quality. In addition, the 95 % confidence limit of b and the significance level of r 2 were also estimated. To confirm whether the value of b obtained by linear regression was significantly different from the isometric value (b = 3) and if they had negative (b < 3) or positive (b > 3) allometric relationships, a t test was applied with a confidence level of ±95 % (α = 0.05; Sokal and Rohlf 1987).
On the basis of monthly sampling frequency in the study area, 12 time-series datasets (1-mm SL size classes) were estimated using the electronic length–frequency analysis (ELEFAN) procedure in the length–frequency distribution analysis (LFDA) program (Kirkwood et al. 2001). Length was predicted as a function of age according to the von Bertalanffy growth equation (VBG, Eq. 2). This equation is used when a non-seasonal growth pattern is observed
$$L_{t} = L_{\infty } \left( {1 - e^{{ - K(t - t_{0)} }} } \right)$$
A study conducted by Hoenig and Hanumara (1990) found the Hoenig and Hanumara (1982) model used in fisheries better fit seasonal growth data; this model represents a combination of features from other models. Therefore, seasonal growth was described using the Hoenig and Hanumara (1982) version of the VBG equation:
$$L_{t} = L_{\infty}\left[1 - e^{-K\left(t - t_{0}\right) + \left(C\frac{K}{2\pi}\right)\sin 2\pi\left(t - t_{\text{s}}\right) - \left(C\frac{K}{2\pi}\right)\sin 2\pi\left(t_{0} - t_{\text{s}}\right)}\right]$$
where L t is the maximum anterior–posterior shell length (apSL; mm) at time t, L ∞ is the asymptotic apSL (mm), K (year−1) is the growth curvature parameter, C is the relative amplitude (0 ≤ C ≤ 1) of seasonal oscillations, t 0 is the theoretical age when the SL is zero (years) and t s is the phase of the seasonal oscillations (−0.5 ≤ t s ≤ 0.5), which denotes the time of year that corresponds to the start of the convex segment of sinusoidal oscillation.
The time of the year when growth is slowest, known as the winter point (WP), was calculated as:
$$\text{WP} = t_{\text{s}} + 0.5$$
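Written out in R for illustration (the study itself used the ELEFAN routine in LFDA), the seasonal growth function of Eq. (3) and the winter point of Eq. (4) take the following form; the values of t0 and ts in the example call are placeholders, since the fitted values appear in Table 1, which is not reproduced here.

```r
# Seasonal von Bertalanffy growth function, Eq. (3)
vbg_seasonal <- function(t, Linf, K, t0, C, ts) {
  s <- C * K / (2 * pi)
  Linf * (1 - exp(-K * (t - t0) + s * sin(2 * pi * (t - ts)) -
                    s * sin(2 * pi * (t0 - ts))))
}

# Winter point, Eq. (4)
winter_point <- function(ts) ts + 0.5

# Example call with the estimates reported in this study (Linf = 67.50 mm,
# K = 0.33 year^-1, C = 0.53); t0 = 0 and ts = -0.48 are illustrative placeholders.
vbg_seasonal(t = 1:8, Linf = 67.50, K = 0.33, t0 = 0, C = 0.53, ts = -0.48)
```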
Seasonal and non-seasonal VBG curves were fitted to length–frequency distributions after first specifying a range of values for L ∞ and K to maximize the goodness of fit (Rn) for each curve, thereby optimizing data. Rn was calculated as:
$${\text{Rn}} = \frac{{10^{{\text{ESP/ASP}}} }}{10}$$
where ASP is the available sum of peaks, computed by adding the best values of the available peaks, and ESP is the explained sum of peaks, computed by summing all the peaks and troughs hit by the VBG curve. The search was then refined within the area of the score grid where the best maximum was found (0.1 < K < 0.5 year−1 and 60 < L ∞ < 70 mm) in order to obtain the highest possible score; the growth parameters corresponding to this score were considered stable.
The growth performance index (Ø′, Eq. 6) was compared using different growth values reported in the literature, according to the following formula (Eq. 6; Pauly and Munro 1984). In addition, we constructed a 95 % confidence interval for Ø′ from the different combination estimates and from those in this study (α = 0.05)
$$\emptyset^{\prime} = 2\log_{10}\left(L_{\infty}\right) + \log_{10} K$$
The maximum lifespan (A 95, Eq. 7) was calculated using the inverse of the VBG equation, where we considered the maximum SL as 95 % of the L ∞ (Taylor 1958):
$$A_{95} = t_{0} + \frac{2.996}{K}$$
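For illustration, Eqs. (6) and (7) amount to the following one-line computations in R; t0 comes from the fitted growth curve and is not reproduced here.

```r
phi_prime <- function(Linf, K) 2 * log10(Linf) + log10(K)   # Eq. (6)
a95       <- function(t0, K)   t0 + 2.996 / K               # Eq. (7)

phi_prime(Linf = 67.50, K = 0.33)   # ~3.18, in line with the reported 3.182
```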
The instantaneous total mortality rate (Z, Eq. 8) was estimated using different methods. The Beverton and Holt (1956) equation for estimating Z was calculated as:
$$Z = K\left[ {\frac{{L_{{\infty }} - \overline{L} }}{{\overline{L} - L^{\prime } }}} \right]$$
where L' is the length when R. philippinarum were first fully recruited and \(\overline{L}\) is the mean length of all clams longer than L'.
The length-converted catch curve (LCCC; Pauly 1983, 1984a, b) was also used to estimate Z as follows:
$$\ln \left( {\frac{{N_{\text{i}} }}{{\Delta t_{\text{i}} }}} \right) = a + b t_{\text{i}}^{\prime }$$
where N i is the frequency in length class i, Δt i is the time required for a clam to grow and reach length class i, a is the intercept, \(t_{\text{i}}^{\prime }\) is the relative age of individual clams that correspond to length class i and b is the slope that corresponds to Z with a sign change.
The natural instantaneous mortality rate (M, Eq. 10) was estimated using the empirical relationship defined by Pauly (1980):
$$\log M = -0.0066 - 0.279\log \text{TL}_{\infty} + 0.6543\log K + 0.4634\log T$$
where T is the mean annual seawater temperature and TL ∞ is the asymptotic total length (cm) that R. philippinarum can reach. This empirical equation assumes that the length is measured as TL in cm (Gayanilo et al. 2005). Therefore, length–frequency analyses were reapplied to length composition data to obtain TL ∞ (cm), TL and K for use in Pauly's empirical equation.
The fishing mortality rate (F) was calculated as:
$$Z = M + F$$
The exploitation rate (E; Sparre and Venema 1992) was calculated as:
$$E = \frac{F}{F + M}$$
Moreover, instantaneous mortality rates were then converted to annual mortality rates (A) as:
$$A = e^{ - Z}$$
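A compact illustration of Eqs. (8) and (10)–(13) in R is given below, using the average values reported in this study; the Beverton–Holt inputs (mean length and length at full recruitment) are left as function arguments because the monthly values are not listed here.

```r
# Beverton-Holt estimator of total mortality, Eq. (8)
z_beverton_holt <- function(K, Linf, Lbar, Lprime) {
  K * (Linf - Lbar) / (Lbar - Lprime)
}

# Pauly's empirical natural mortality, Eq. (10); TLinf in cm, Temp = mean annual temperature
m_pauly <- function(TLinf, K, Temp) {
  10^(-0.0066 - 0.279 * log10(TLinf) + 0.6543 * log10(K) + 0.4634 * log10(Temp))
}

# Fishing mortality, exploitation rate and Eq. (13), using the reported averages
Z  <- 0.777
M  <- 0.539
Fm <- Z - M              # Eq. (11): 0.238 year^-1
E  <- Fm / (Fm + M)      # Eq. (12): ~0.31
A  <- exp(-Z)            # Eq. (13)
```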
The Beverton–Holt and LCCC Z were calculated using length–frequency distribution analysis version 5.0 (Kirkwood et al. 2001). M was estimated using the FISAT II program (Gayanilo et al. 2005). Significant differences between the Beverton–Holt and LCCC mortality rates were analysed by one-way analysis of variance (ANOVA; F test), using Microsoft Excel 2010 (Zar 1984).
The reproductive activity of R. philippinarum was determined on the basis of the ash-free dry weight (AFDW)/dry shell weight (DSW) ratio. Each month, sub-samples of 35 clams were used to extract all their soft parts. The sub-sample used for condition index (CI, Eq. 14) analysis had an SL ranging from 20 to 50 mm. To determine the body mass cycle, all soft parts were removed and dried to a constant mass at 100 °C for 24 h to obtain DSW (g). AFDW (mg) was obtained by drying soft tissues in an oven at 550 °C for 7 h (Laudien et al. 2003). CI was calculated according to the following formula (Walne and Mann 1975):
$${\text{CI}} = \left( {\text{AFDW/DSW}} \right) \times 100$$
The monthly gonado-somatic index (GSI, Eq. 15), which is defined as the volume of gonadal tissue (V gon) relative to the total body volume (V body), was estimated using a method based on linear measurements of the gonad region, which forms a sheath around the digestive gland (Urban and Riascos 2002; Riascos et al. 2007).
$${\text{GSI}} = V_{\text{gon}} \text{/}V_{\text{body}} \times 100$$
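A small sketch of Eqs. 14 and 15 (Python); the masses and volumes below are hypothetical, and both ratios assume the two quantities are expressed in the same unit.

```python
def condition_index(afdw, dsw):
    """Walne and Mann (1975) condition index, Eq. 14: CI = (AFDW / DSW) * 100."""
    return afdw / dsw * 100.0

def gonado_somatic_index(v_gon, v_body):
    """Eq. 15: GSI = V_gon / V_body * 100."""
    return v_gon / v_body * 100.0

# Hypothetical monthly means, in consistent units (g and cm^3, respectively).
print(round(condition_index(afdw=0.45, dsw=5.2), 2))
print(round(gonado_somatic_index(v_gon=0.8, v_body=12.0), 2))
```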
A sub-sample of 35 specimens (SL = 40–50 mm) was used to study the reproductive cycle. The body mass cycle of individual bivalves was determined in the gonad stage on the basis of microscopic observations of fresh gonadal material. We used a semi-quantitative scale proposed by Guillou et al. (1990), which allowed us to classify males and females into four gonad stages: indifferent, ripe I, ripe II and spent.
Size–frequency distribution, length–weight and shell morphometric relationships
Monthly length–frequency distributions of R. philippinarum are shown in Fig. 3. The length of individual bivalves ranged from 4 to 62 mm, and the weight ranged from 0.02 to 63.9 g (N = 10,626). Throughout the sampling period, 76.72 % of individuals were smaller (<25 mm) and 23.28 % were larger (≥25 mm). The recruitment pattern peaked from June to August. Length–frequency distributions showed that recruitment continued during the summer and ended in August; young clams measuring 4–10 mm were found at the beginning of the summer (June; Fig. 3). The calculated length–weight equation was log TW = −4 + 3.1384 log SL. In exponential form, the equation is TW = 0.0001SL^3.1384 (r² = 0.87; N = 1,890). Linear regression showed a significant relationship between TW and SL (P < 0.05). The morphometric relationship between TW and SL (b = 3.1384) indicated consistent positive allometric growth. The 95 % confidence interval for b was calculated as 3.7181–3.7296.
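The length–weight parameters can be recovered by ordinary least squares on the log-transformed data; the sketch below (Python/NumPy) uses synthetic clams generated around the reported relationship, since the raw measurements are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data around TW = 0.0001 * SL**3.1384 (the reported fit).
sl = rng.uniform(5.0, 62.0, size=500)                            # shell length, mm
tw = 1e-4 * sl**3.1384 * rng.lognormal(0.0, 0.1, size=sl.size)   # total weight, g

# log10(TW) = a + b * log10(SL); b is the allometric coefficient.
b, a = np.polyfit(np.log10(sl), np.log10(tw), deg=1)
print(f"a = {a:.3f}, b = {b:.3f}")   # b should come out near 3.14
```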
Length–frequency data for R. philippinarum collected from the southern coast of the Marmara Sea (Bandırma) between September 2012 and August 2013
The maximum SL recorded in R. philippinarum was 62 mm; the predicted maximum length was 63.31 mm. L ∞ from the seasonal and non-seasonal VBG fits was 67.50 and 67.14 mm, respectively, and K was 0.33 year−1 (Fig. 4). Seasonal and non-seasonal VBG parameters obtained from LFDA are summarized in Table 1. The seasonal growth curve computed using these parameters is shown above the restructured length distribution in Fig. 5. The slowest growth occurred in January (WP = 0.02, i.e. 0.02 × 12 ≈ 0.24 months into the year). Ø′ and A 95 derived from the seasonal VBG parameters were 3.182 and 8.06 years, respectively, with a 95 % confidence interval of 2.868–3.110 (t0.05,10 = 2.228).
Growth curves (grey lines) of R. philippinarum estimated from monthly length–frequency data (black histograms) for the periods of September 2012 to August 2013
Table 1 Seasonal and non-seasonal von Bertalanffy growth parameters estimated from length–frequency distribution analysis of R. philippinarum
Length–frequency distribution (bars) for R. philippinarum, where the seasonal von Bertalanffy growth curves (lines) are superimposed
Mortality and exploitation rate
A estimated with the different methods ranged between 0.413 (Beverton–Holt) and 0.512 (LCCC) year−1. The Beverton–Holt Z estimates ranged between 0.287 and 1.436 year−1, with a mean of 0.884 ± 0.107 year−1 (95 % confidence interval, 0.650–1.118). Z estimated with the LCCC method ranged between 0.19 and 1.56 year−1, with a mean of 0.670 ± 0.031 year−1 (95 % confidence interval, 0.443–0.897). The highest mortalities were observed in October (LCCC, 1.56 year−1) and June (Beverton–Holt, 1.436 year−1), whereas the lowest were observed in February (LCCC, 0.19 year−1; Beverton–Holt, 0.287 year−1; Fig. 6). Mortality rates obtained using the Beverton–Holt method were similar to those from the LCCC when the input parameters from ELEFAN were used. No significant differences (P > 0.05) were observed between the mortality rates [LCCC/Beverton–Holt F test (1, 22) = 2.085 (Fcrit = 4.301); P = 0.163]. M was 0.539 year−1, whereas the average Z was estimated to be 0.777 year−1. F was calculated as 0.238 year−1, and E was estimated to be 0.306.
Mortality rates (Z) in each month according to different methods
CI, GSI and the distribution of reproductive stages are shown in Fig. 7. R. philippinarum in the spent stage were observed continuously throughout the year. Spawning occurred between May and August, based on the declines in CI and GSI and the increased proportion of spent stages in the microscopic observations (Fig. 7). CI and GSI began to increase with rising water temperatures in March and peaked with high water temperatures in May. Spawning was further indicated by the marked increase in spent stages and the decrease in ripe clams. In general, higher proportions of ripe individuals were observed between February and May.
Distributions of the condition index (CI), gonado-somatic index (GSI) and reproductive stages in successive months for R. philippinarum
Our study represents the first analysis of the length–weight relationship in R. philippinarum specimens from the coastline of the Marmara Sea. The allometric coefficient b (3.138) indicates a positive allometric pattern. Similar exponential values were reported by Mingyun et al. (1989; b = 3.259), Choi et al. (2011; b = 3.036) and Caill-Milly et al. (2012; b > 3). In contrast, Yap (1977; b = 2.862), Cho et al. (2008; b = 2.988) and Ponurovsky (2008; b = 2.954) reported negative allometric patterns. Discrepancies in the value of b in length–weight relationships could be affected by variations in environmental conditions, such as the density of substrata in the sediment, intensity of predation and variability in food availability (Gaspar et al. 2001).
Sparre and Venema (1992) reported that growth parameters differed among species and among stocks within the same species, which was attributed to different environmental conditions. In the present study, L ∞ (67.511 mm SL) differed from that of previous studies (Table 2). The highest reported L ∞ (75.53 mm) was observed on the central coast of British Columbia, Canada (Bourne 1982), whereas the lowest reported L ∞ (41.1 mm) was obtained from Arcachon Bay, France (Dang et al. 2010). In the present study, R. philippinarum exhibited a slower growth rate (K = 0.33 year−1) compared with K = 0.913 year−1 from Kaneohe Bay, Hawaiian Islands (Yap 1977), K = 0.697 year−1 on the British coast, UK (Humphreys et al. 2007), K = 0.72 year−1 from Arcachon Bay (Dang et al. 2010) and K = 0.341 year−1 in the Taehwa River, Ulsan, South Korea (Choi et al. 2011). In contrast, the growth rate determined in the current report was higher than that reported by other studies conducted in British Columbia (K = 0.273 year−1 or K = 0.303 year−1) (Bourne 1982) and Amurshy Bay, Sea of Japan (K = 0.302 year−1; Ponurovsky 2008). We found that R. philippinarum exhibited seasonal growth (C = 0.53), with the slowest growth in January (WP = 0.02).
Table 2 Von Bertalanffy growth parameters and mortality of R. philippinarum in different areas
Ø′ is appropriate for comparing the growth performance of different populations of bivalve species. The Ø′ of R. philippinarum derived from the VBG parameters was 3.182, which differs from values obtained from other locales, such as that from Kaneohe Bay (3.399; Table 2; Yap 1977). However, there were no significant differences in Ø′ between these studies (P > 0.05). To determine the age of bivalve species, the most commonly used methods are based on analysis of external surface rings, internal growth lines and micro-growth bands in shells (Richardson 2001), as well as analysis of length–frequency distributions (Peharda et al. 2013). In addition, the approximate lifespan of bivalve species can be estimated on the basis of VBG parameters (Taylor 1958). We determined that A 95 = 8.06 years for R. philippinarum, which is higher than the values reported from other areas, except the coast of Yeongi at Tongyeong, Korea (Table 2; Cho et al. 2008).
The wide range of Z estimates obtained using different methods made it difficult to decide a reliable single value for the mortality rate. However, Z and A of R. philippinarum were similar using LCCC (Z = 0.670 year−1; A = 0.512 year−1) and Beverton–Holt (Z = 0.884 year−1; A = 0.413 year−1) methods. The average Z (0.777 year−1) in the present study was different from those estimated by other studies in different areas of the world (Table 2). In addition, F was lower (0.238 year−1) than M (0.539 year−1), indicating a balanced stock of R. philippinarum in our study area. The main approach used to evaluate stock status was based on an analysis of harvest rates from time-series datasets available from previous years and an estimate of the current E. Patterson (1992) recommends E = 0.4 as the limit management reference point, which is consistent with high long-term yields. Relative to this E reference point, we determined that the R. philippinarum stock on the southern coast of the Marmara Sea may be considered as being exploited in a sustainable manner (E = 0.306).
We observed continuous gametogenic activity in R. philippinarum throughout the year (Fig. 7). However, periods of increased gametogenic activity correlated with climatic variation (Riascos et al. 2007). In our study, the reproductive cycle of R. philippinarum showed a seasonal spawning pattern, based on the similarity between GSI/CI and the percentage of spent animals (Fig. 7). R. philippinarum is well known for asynchronous, partial, successive spawning and fast maturation of gametes. Some researchers have pointed out the difficulty of estimating the peak reproductive weight and the evolution of mean weight from observed data on individual clams when asynchronous partial spawning events occur in the studied population (Flye-Sainte-Marie et al. 2007). Therefore, in this study, it can only be said that clam gonads started to ripen when the average water temperature reached 12 °C in Bandırma Bay. The spawning period of R. philippinarum appeared to be between May and August (summer). Different studies around the world have shown that R. philippinarum has various spawning periods, depending on location: June–September in British Columbia (Bourne 1982), April–August and late summer in Ile Tudy, South Brittany, France (Beninger and Lucas 1984), autumn–summer in Arcachon Bay (Robert et al. 1993; Dang et al. 2010), June–November in Ria de Vigo, Spain (Rodriguez-Moscoso et al. 1992), summer in Vostok Bay, Russia (Ponurovsky and Yakovlev 1992), May–September in the Lagoon of Venice, Italy (Meneghetti et al. 2004) and summer–autumn in Tokyo Bay, Japan (Toba et al. 2007). These different spawning seasons are probably related to seawater temperature (Dang et al. 2010); variations in seasonal seawater temperature during the spawning season, especially in the intertidal zone, may also explain the high variability in spawning patterns.
Length–frequency distributions indicated a rapid increase in recruitment (individual clams with SL < 17 mm) from June to October in 2013 (Fig. 3). The major recruitment peak of this bivalve occurred during June–August (summer). However, other studies of this species have reported different results. For example, the recruitment of R. philippinarum occurred twice each year in May (spring) and October (autumn) in Tokyo Bay (Toba et al. 2007), August (summer) and October (autumn) in Hokkaido, Japan (Komorita et al. 2009) and May–August and October–November in Arcachon Bay (Dang et al. 2010). Recruitment patterns of bivalves differ among species depending on the season, nutritional needs and environmental conditions (Rufino et al. 2010).
To the best of our knowledge, no previous studies have investigated the population dynamics of R. philippinarum in the Marmara Sea. On the basis of our results, we conclude that the analysed stock of R. philippinarum is currently at a sustainable level under the existing fishing pressure, with exploitation below the optimum level (E = 0.4). The results of this study also provide basic information that may facilitate conservation and stock management policies for R. philippinarum populations in Bandırma Bay.
Albayrak S (2005) First record of Tapes philippinarum (Adams and Reeve, 1850) (Bivalvia: Veneridae) from the Sea of Marmara. Zool Middle East 35:108–109. doi:10.1080/09397140.2005.10638113
Beninger PG, Lucas A (1984) Seasonal variations in condition, reproductive activity and gross biochemical composition of two species of adult clam reared in a common habitat: Tapes decussatus L. (Jeffreys) and Tapes philippinarum (Adams & Reeve). J Exp Mar Biol Ecol 79:19–37
Beverton RJH, Holt SJ (1956) A review of methods for estimation of mortality rates in exploited fish populations, with special reference to sources of bias in catch sampling. Rapp PV Réun Cons Int Explor Mer 140:67–83
Bourne N (1982) Distribution, reproduction, and growth of Manila clam, Tapes philippinarum in British Columbia. J Shellfish Res 2:47–54
Caill-Milly N, Bru N, Mahé K, Borie C, D'Amico F (2012) Shell shape analysis and spatial allometry patterns of Manila Clam (Ruditapes philippinarum) in a Mesotidal Coastal Lagoon. Mar Biol p11. doi:10.1155/2012/281206
Cho SM, Jeong WG, Lee SJ (2008) Ecologically sustainable management of short-necked clam, Ruditapes philippinarum, on the coast of Yeongi at Tongyeong, Korea. Korean J Malacol 24:189–197
Choi Y, Yoon S, Lee S, Kim J, Yang J, Yoon B, Park J (2011) The study of stock assessment and management implications of the Manila clam, Ruditapes philippinarum in Taehwa river of Ulsan. Korean J Malacol 27:107–114
Dang C, Montaudouina X, Gam M, Paroissin C, Brud N, Caill-Milly N (2010) The Manila clam population in Arcachon Bay (SW France): can it be kept sustainable? J Sea Res 63:108–118. doi:10.1016/j.seares.2009.11.003
FAO (2012) Global aquaculture production statistics 1950–2010 http://www.fao.org/fishery/statistics/globalaquacultureandcaptureproduction/query/en. 10 Sep 2013
Flye-Sainte-Marie J, Jean F, Paillard C, Ford S, Powell E, Hofmann E, Klinck J (2007) Ecophysiological dynamic model of individual growth of Ruditapes philippinarum. Aquaculture 266:130–143. doi:10.1016/j.aquaculture.2007.02.017
Gaspar MB, Santos MN, Vascocelos P (2001) Weight–length relationships of 25 bivalve species (Mollusca: Bivalvia) from the Algarve coast (southern Portugal). J Mar Biol Assoc UK 81:805–807
Gayanilo FC Jr, Sparre P, Pauly D (2005) FAO-ICLARM stock assessment tools II (FiSAT II). User's guide. FAO computerized information series (Fisheries). No. 8, Revised version. Rome, FAO 2005. p 168
Guillou J, Bachelet G, Desprez M, Ducrotoy JM, Madani I, Rybarczyk H, Sauriau PG, Sylvand B, Elkaim B, Glemarec M (1990) Les modalite´s de la reproduction de la coque (Cerastoderma edule) sur le littoral Francais de la Manche et de l'Atlantique. Aquat Living Resour 3:29–41
Hoenig JM, Hanumara RC (1982) A statistical study of a seasonal growth model for fishes. Technical Report, Department of Computer Science and Statistic, University of Rhode Island, Narragansett, pp 126
Hoenig NA, Hanumara RC (1990) An empirical comparison of seasonal growth models. Fishbyte 8:32–34
Humphreys J, Caldow RWG, McGrorty S, West AD, Jensen AC (2007) Population dynamics of naturalised Manila clams Ruditapes philippinarum in British coastal waters. Mar Biol 151:2255–2270. doi:10.1007/s00227-007-0660-x
Jensen AC, Humphreys J, Caldow RWG, Grisley C, Dyrynda PEJ (2004) Naturalization of the Manila clam (Ruditapes philippinarum), an alien species, and establishment of a clam fishery within Poole Harbour, Dorset. J Mar Biol Assoc UK 84:1069–1073. doi:10.1017/S0025315404010446h
Kang CK, Kang SY, Choy JE, Kim SD, Shim TB, Lee PY (2007) Condition, reproductive activity, and gross biochemical composition of the manila clam, Tapes philippinarum in natural and newly created sandy habitats of the southern coast of Korea. J Shellfish Res 26:401–412
Kirkwood GP, Aukland R, Zara SJ (2001) Length frequency distribution analysis (LFDA). version 5.0. MRAG Ltd., London UK
Komorita T, Shibanuma S, Yamada T, Kajihara R, Tsukuda M, Montani S (2009) Impact of low temperature during the winter on the mortality in the post-settlement period of the juvenile of short-neck clam, Ruditapes philippinarum, on the tidal flats in Hichirippu Lagoon, Hokkaido, Japan. Plankton Benthos Res 41:31–37. doi:10.1016/j.jembe.2009.10.018
Laudien J, Brey T, Arntz WE (2003) Population structure, growth and production of the surf clam Donax serra (Bivalvia, Donacidae) on two Namibian sandy beaches. Estuar Coas Shelf Sci 58:105–115. doi:10.1016/S0272-7714(03)00044-1
Matozzo V, Da Ros L, Ballarin L, Meneghetti F, Marin GM (2003) Functional responses of haemocytes in the clam Tapes philippinarum from the Lagoon of Venice: fishing impact and seasonal variations. Can J Fish Aquat Sci 60:949–958
Meneghetti F, Moschino V, Da Ros L (2004) Gametogenic cycle and variations in oocyte size of Tapes philippinarum from the Lagoon of Venice. Aquaculture 240:473–488. doi:10.1016/j.aquaculture.2004.04.011
Mingyun L, Xuelang X, Jian F, Peng Y (1989) The population dynamics of clam (Ruditapes philippinarum) and the measures for its propagation protection. Acta Ecologica Sinica 9:4
Patterson K (1992) Fisheries for small pelagic species: an empirical approach to management targets. Rev Fish Biol Fisher 2:321–338
Pauly D (1980) On the relationships between natural mortality, growth parameters and mean environmental temperature in 175 fish stocks. J Cons Int Explor Mer 39:175–192
Pauly D (1983) Length converted catch curves. A powerful tool for fisheries research in the tropics. (part II) ICLARM Fishbyte 2: 17–19
Pauly D (1984a) Length-converted catch curves: a powerful tool for fisheries research in the tropics (Part II). Fishbyte 2:17–19
Pauly D (1984b) Length-converted catch curves: a powerful tool for fisheries research in the tropics (Part III Conclusion). Fishbyte 2:8–10
Pauly D, Munro JL (1984) Once more on growth comparison in fish and invertebrates. Fishbyte 2:21–30
Peharda M, Popović Z, Ezgeta-Balić D, Vrgoč N, Puljas S, Frankić A (2013) Age and growth of Venus verrucosa (Bivalvia: Veneridae) in the eastern Adriatic Sea. Cah Biol Mar 54:281–286
Ponurovsky SK (2008) Population structure and growth of the Japanese littleneck clam, Ruditapes philippinarum in Amursky Bay, Sea of Japan. Russ J Mar Biol 34:329–332. doi:10.1134/S1063074008050106
Ponurovsky SK, Yakovlev YM (1992) The reproductive biology of the Japanese littleneck, Tapes philippinarum (A. Adams and Reeve, 1850) (Bivalvia: Veneridae). J Shellfish Res 11(2):265–277
Ren Y, Xu B, Guo Y, Yang M, Yang J (2008) Growth, mortality and reproduction of the transplanted Manila clam, Ruditapes philippinarum in Jiaozhou Bay. Aquatic Res 39:1759–1768. doi:10.1111/j.1365-2109.2008.02052.x
Riascos JM, Heilmayer O, Laudien J (2007) Population dynamics of the tropical bivalve Cardita affinis from Málaga Bay, Colombian Pacific related to La Niña 1999–2000. Helgoland Mar Res 62:63–71. doi:10.1007/s10152-007-0083-6
Richardson CA (2001) Molluscs as archives of environmental change. Oceanogr Mar Biol Annu Rev 39:103–164
Ricker WE (1973) Linear regression in fisheries research. J Fish Res Board Can 30:409–434
Robert R, Trut G, Laborde JL (1993) Growth, reproduction and gross biochemical composition of the Manila clam Ruditapes philippinarum in the Bay of Arcachon, France. Mar Biol 116:291–299
Rodriguez-Moscoso E, Pazo JP, Garcia A, Fernandez-Cortes F (1992) Reproductive cycle of Manila clam, Ruditapes philippinarum (Adams & Reeve 1850) in Ria of Vigo (NW Spain). Sci Mar 56:61–67
Rufino MM, Gaspar MB, Pereira AM, Maynou F, Monteiro CC (2010) Ecology of megabenthic bivalve communities from sandy beaches on the south coast of Portugal. Sci Mar 74:163–178. doi:10.3989/scimar.2010.74n1163
Sokal RR, Rohlf FJ (1987) Introduction to biostatistics, 2nd edn. W.H. Freeman and Company, New York
Sparre P, Venema SC (1992) Introduction to tropical fish stock assessment, Part 1. Manual. FAO Fish Tech Pap No. 306.1 Rev. 1. Rome FAO. p 376
Spillman CM, Hamilton DP, Imberger J (2009) Management strategies to optimize sustainable clam (Tapes philippinarum) harvests in Bambamarco Lagoon, Italy. Estuar Coas Shelf Sci 81:267–278. doi:10.1016/j.ecss.2008.11.003
Taylor CC (1958) Cod growth and temperature. J Cons Int Explor Mer 23:366–370
Toba M, Yamakawa H, Kobayashi Y, Sugiura Y, Honma K, Yamada H (2007) Observations on the maintenance mechanisms of metapopulations, with special reference to the early reproductive process of the Manila clam Ruditapes philippinarum (Adam & Reeve) in Tokyo Bay. J Shellfish Res 26:121–130. doi:10.2983/0730-8000(2007)26[121:OOTMMO]2.0.CO;2
Türkstat (2012) Fishery statistics. Turkish Statistical Institute, Ankara
Urban HJ, Riascos JM (2002) Estimating gonado-somatic indices in bivalves with fused gonads. J Shellfish Res 21:249–253
Walne PR, Mann R (1975) Growth and biochemical composition in Ostrea edulis and Crassostrea gigas. In: Proceedings of the ninth European marine biological symposium pp 587–607
Yap GW (1977) Population biology of the Japanese little-neck clam, Tapes philippinarum, in Kaneohe Bay, Oahu, Hawaiian Islands. Pac Sci 3:31
Zar JH (1984) Biostatistical analysis, 2nd edn. Prentice-Hall, Englewood Cliffs
Ministry of Food, Agriculture and Livestock, Çanakkale Provincial Directorate, 17100, Çanakkale, Turkey
Serhat Çolakoğlu
Ezine Vocational Schools, Canakkale Onsekiz Mart University, 17100, Çanakkale, Turkey
Mustafa Palaz
Correspondence to Serhat Çolakoğlu.
Communicated by H.-D. Franke.
Çolakoğlu, S., Palaz, M. Some population parameters of Ruditapes philippinarum (Bivalvia, Veneridae) on the southern coast of the Marmara Sea, Turkey. Helgol Mar Res 68, 539–548 (2014). https://doi.org/10.1007/s10152-014-0410-7
Revised: 18 August 2014
Issue Date: December 2014
Ruditapes philippinarum
Marmara Sea
The European Physical Journal C
January 2020 , 80:21 | Cite as
Modified cosmology models from thermodynamical approach
Chao-Qiang Geng
Yan-Ting Hsu
Jhih-Rong Lu
Lu Yin
Regular Article - Theoretical Physics
We apply the first law of thermodynamics to the apparent horizon of the universe with the power-law corrected and non-extensive Tsallis entropies rather than the Bekenstein–Hawking one. We examine the cosmological properties of the two entropy models by using the CosmoMC package. In particular, the first numerical study of the cosmological observables with the power-law corrected entropy is performed. We also show that the neutrino mass sum has a non-zero central value with a relaxed upper bound in the Tsallis entropy model compared with that in the \(\Lambda \)CDM one.
According to the current cosmological observations, our universe is experiencing a late time accelerating expansion. Although the \(\Lambda \)CDM model can describe the accelerating universe by introducing dark energy [1], it fails to solve the cosmological constant problem, related to the "fine-tuning" [2, 3] and "coincidence" [4, 5] puzzles. A lot of efforts have been made to understand these issues. For example, one can modify the gravitational theory to obtain viable cosmological models with dynamical dark energy to explain the accelerating universe [6].
On the other hand, one can reconstruct the Friedmann equations through the implications of thermodynamics. It has been shown that the Einstein equations can be derived by considering the Clausius relation for a local Rindler observer [7]. In particular, this idea has been applied to cosmology, where the Friedmann equations have been obtained by using the first law of thermodynamics on the horizon of the universe [8]. It has also been demonstrated that the modified Friedmann equations can be acquired from the thermodynamical approach by simply replacing the entropy-area relation with a proper one in a wide variety of gravitational theories [8, 9, 10, 11, 12, 13]. Thus, as long as there is a new entropy-area relation, thermodynamics gives us a way to determine the modified Friedmann equations without knowing the underlying gravitational theory. Furthermore, since the entropy-area relation obtained from a modified gravity theory can be used to extract the dark energy dynamics along with the modified Friedmann equations, it is reasonable to believe that even if we do not know the underlying theory of modified gravity, some modifications of the entropy relation will still give us additional information on the modified Friedmann equations as well as the dynamics of dark energy, which would differ from \(\Lambda \)CDM. As a result, we expect that the modification of the entropy is also relevant to the cosmological evolution.
It is known that a power-law corrected term from quantum entanglement can be included in the black hole entropy near its horizon [14]. Interestingly, one can apply it to cosmology by taking it as the entropy on the horizon of the universe. On the other hand, the universe is regarded as a non-extensive thermodynamical system, so the Boltzmann–Gibbs entropy should be generalized to a non-extensive quantity, the Tsallis entropy, while the standard one can be treated as a limit [15, 16, 17]. The Tsallis entropy has been widely discussed in the literature. In the entropic-cosmology scenario [18], the Tsallis entropy model predicts a decelerating and accelerating universe [19]. In addition, a number of works on Tsallis holographic dark energy have been proposed and investigated [20, 21, 22, 23, 24]. The Tsallis entropy has also been used in many different dark energy models, such as the Barboza–Alcaniz and Chevallier–Polarski–Linder parametric dark energy and Wang–Meng and Dalal vacuum decay models [25]. Moreover, it has been shown that modified cosmology from the first law of thermodynamics with varying-exponent Tsallis entropy can provide a description of both inflation and late-time acceleration with the same parameter choices [27]. In particular, the Tsallis entropy is proportional to a power of the horizon area, i.e. \(S_T \propto A^{\delta }\), when the universe is assumed to be a spherically symmetric system [28].
Although it is possible to modify the Friedmann equations by simply considering a fluid with an inhomogeneous equation of state of the corresponding form [29], we still adopt the thermodynamical approach of Ref. [30], in which the authors considered the first law of thermodynamics of the universe with fixed-exponent Tsallis entropy and showed that the cosmological evolution mimics that of \(\Lambda \)CDM and is in great agreement with Supernovae Type Ia observational data. In this paper, we examine the features of the modified Friedmann equations obtained by replacing the usual Bekenstein–Hawking entropy-area relation, \(S=A/4G\), with the power-law corrected and Tsallis entropies [14, 15, 16, 17, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 30], where G is the gravitational constant.
This paper is organized as follows. In Sect. 2, we consider the power-law corrected and Tsallis entropy models and derive the modified Friedmann equations and dynamical equation of state parameters by applying the first law of thermodynamics to the apparent horizon of the universe. In Sect. 3, we present the cosmological evolutions of the two models and compare them with those in \(\Lambda \)CDM. Finally, the conclusions are given in Sect. 4. The paper is written in units of \(c=\hbar =k_B=1\).
2 The models
We use the flat Friedmann–Lemaître–Robertson–Walker (FLRW) metric:
$$\begin{aligned} ds^2=-dt^2+a^{2}(t)\Big (dr^2+r^2 d\Omega ^2\Big ), \end{aligned}$$
where a(t) is the scale factor. The modified Friedmann equations can be constructed by considering the first law of thermodynamics in the apparent horizon of the universe and using the new entropy area relation rather than the Bekenstein–Hawking one. We concentrate on two models: power law corrected entropy (PLCE) and Tsallis entropy cosmological evolution (TECE) models.
2.1 Power law corrected entropy (PLCE) model
In the PLCE model, the entropy has the form [14]
$$\begin{aligned} S_{pl}= \frac{A}{4L_p^2}\bigg (1 - K_\nu A^{1-\frac{\nu }{2}}\bigg ), \end{aligned}$$
where \(\nu \) is a dimensionless constant parameter and \(K_\nu =\nu (4 \pi )^{(\nu -2)/2}(4-\nu )^{-1}r_c^{\nu -2}\) with \(r_c\) the crossover scale, A corresponds to the area of the system, and \(L_p\) represents the Planck length. With the method described in Ref. [30], one is able to extract the modified Friedmann equations:
$$\begin{aligned} H^2&=\frac{8\pi G}{3}(\rho _m +\rho _r + \rho _{DE}),\nonumber \\ {\dot{H}}&=-4\pi G(\rho _m +\rho _r+ \rho _{DE}+p_m +p_r+p_{DE}), \end{aligned}$$
where \(\rho _{DE}\) and \(p_{DE}\) are the dark energy density and pressure, given by
$$\begin{aligned} \rho _{DE}&= \frac{3}{8 \pi G}\frac{1}{r^{2-\nu }_c}\big (H^{\nu }-1\big ) + \frac{\Lambda }{8 \pi G}, \end{aligned}$$
$$\begin{aligned} p_{DE}&=\frac{-\nu }{8 \pi G}\frac{\dot{H}}{r^{2-\nu }_c}\big (H^{\nu }-1\big )- \frac{3}{8 \pi G}\frac{1}{r^{2-\nu }_c}\big (H^{\nu }-1\big ) \nonumber \\&-\frac{\Lambda }{8 \pi G}, \end{aligned}$$
respectively. To discuss the evolution of dark energy, it is convenient to define the equation of state parameter, \(w_{DE} \equiv p_{DE}/\rho _{DE}\), which is found to be
$$\begin{aligned} w_{DE} = -1 + \frac{-\nu \dot{H}H^{\nu -2}}{3(H^{\nu }-1) + \Lambda r^{2-\nu }_c}. \end{aligned}$$
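The expression above can be evaluated directly once H, \(\dot{H}\) and the model constants are specified; the Python sketch below is a plain transcription in normalised units, with purely illustrative parameter values rather than the fitted ones.

```python
def w_de_plce(H, Hdot, nu, Lam, r_c):
    """w_DE of the PLCE model, transcribed from the expression above.

    All quantities are taken in mutually consistent (normalised) units.
    """
    num = -nu * Hdot * H**(nu - 2.0)
    den = 3.0 * (H**nu - 1.0) + Lam * r_c**(2.0 - nu)
    return -1.0 + num / den

# Illustrative numbers only (not the best-fit values of this paper).
print(round(w_de_plce(H=1.2, Hdot=-0.5, nu=0.02, Lam=2.1, r_c=1.0), 4))
```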
2.2 Tsallis entropy cosmological evolution model
In the TECE model, we have [28]
$$\begin{aligned} S_T=\frac{\tilde{\alpha }}{4G}A^{\delta }, \end{aligned}$$
where A is the area of the system with dimension [\(L^2\)], \(\tilde{\alpha }\) is a positive constant with dimension [\(L^{2-2\delta }\)], and \(\delta \) denotes the non-additivity parameter. Similarly, by following the procedure in Ref. [30], we obtain
$$\begin{aligned} H^2&=\frac{8\pi G}{3}(\rho _m +\rho _r + \rho _{DE}),\nonumber \\ \dot{H}&=-4\pi G(\rho _m +\rho _r+ \rho _{DE}+p_m +p_r+p_{DE}), \end{aligned}$$
$$\begin{aligned} \rho _{DE}&=\frac{3}{8 \pi G}\bigg [\frac{\Lambda }{3}+ H^2\bigg (1-\alpha \frac{\delta }{2-\delta } H^{2(1-\delta )}\bigg )\bigg ], \end{aligned}$$
$$\begin{aligned} p_{DE}&=-\frac{1}{8 \pi G}\bigg [\Lambda +2 \dot{H}(1-\alpha \delta H^{2(1-\delta )})\nonumber \\&\quad +3H^2\bigg (1-\alpha \frac{\delta }{2-\delta }H^{2(1-\delta )}\bigg )\bigg ] \end{aligned}$$
where \(\alpha = (4 \pi )^{\delta -1}\tilde{\alpha }\), and \(\Lambda \) is a constant related to the present values of \(H_0, \rho _{m0}\) and \(\rho _{r0}\), given by
$$\begin{aligned} \Lambda&=\frac{3\alpha \delta }{2-\delta }H_0^{2(2-\delta )}-8\pi G(\rho _{m0} +\rho _{r0}). \end{aligned}$$
Thus, the equation of state parameter for the TECE model is evaluated to be
$$\begin{aligned} w_{DE} = \frac{p_{DE}}{\rho _{DE}} =-1 + \frac{2\dot{H}\big (\alpha \delta H^{2(1-\delta )}-1\big )}{3H^2\bigg (1-\frac{\alpha \delta }{2-\delta }H^{2(1-\delta )}\bigg )+\Lambda }. \end{aligned}$$
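Analogously, the TECE expression can be transcribed as follows (Python); the parameter values in the example are illustrative only.

```python
def w_de_tece(H, Hdot, alpha, delta, Lam):
    """w_DE of the TECE model, transcribed from the expression above."""
    x = alpha * delta * H**(2.0 * (1.0 - delta))      # alpha*delta*H^{2(1-delta)}
    num = 2.0 * Hdot * (x - 1.0)
    den = 3.0 * H**2 * (1.0 - x / (2.0 - delta)) + Lam
    return -1.0 + num / den

# Illustrative numbers only; with delta = 1 and alpha = 1 the numerator
# vanishes and w_DE = -1 is recovered, as in the Lambda-CDM limit.
print(round(w_de_tece(H=1.2, Hdot=-0.5, alpha=1.0, delta=1.002, Lam=0.7), 4))
```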
3 Cosmological evolutions
3.1 Power law corrected entropy model
Since \(\rho _{DE}\) and \(w_{DE}\) are determined by the Hubble parameter H(z), we use the Newton–Raphson method [31] to obtain the cosmological evolutions of the PLCE model.
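As a concrete, purely illustrative version of this step, the first modified Friedmann equation of the PLCE model can be solved for H at each redshift by Newton–Raphson iteration; the sketch below (Python) works in units with \(H_0 = 1\) and uses assumed density parameters, not the fitted ones.

```python
def solve_H_plce(z, Om=0.3, Or=1e-4, nu=0.02, r_c=1.0, tol=1e-12, max_iter=100):
    """Newton-Raphson solution of the PLCE Friedmann constraint for H(z).

    In units with H0 = 1 the constraint reads
        H^2 - (H^nu - 1)/r_c^(2-nu) - Lam/3 = Om*(1+z)^3 + Or*(1+z)^4,
    and Lam is fixed here so that H(0) = 1 (an illustrative choice).
    """
    Lam = 3.0 * (1.0 - Om - Or)
    rhs = Om * (1.0 + z)**3 + Or * (1.0 + z)**4
    H = 1.0 + z                      # simple initial guess
    for _ in range(max_iter):
        f = H**2 - (H**nu - 1.0) / r_c**(2.0 - nu) - Lam / 3.0 - rhs
        fp = 2.0 * H - nu * H**(nu - 1.0) / r_c**(2.0 - nu)
        step = f / fp
        H -= step
        if abs(step) < tol:
            break
    return H

print(round(solve_H_plce(0.0), 4), round(solve_H_plce(1.0), 4))
```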
Evolutions of the equation-of-state parameter \(w_{DE}\) in \(\Lambda \)CDM and PLCE models
Because the PLCE model reduces to \(\Lambda \)CDM when \(\nu = 0\), we choose \(\nu =\pm 0.02\) to compare the differences between the two models. We also take a larger value of \(\nu = 0.2\) to check the sensitivity to \(\nu \). The results in Fig. 1 show that \(w_{DE}\) neither reaches nor crosses \(-1\) for any non-zero value of \(\nu \). In addition, it stays almost constant in the early universe and only tends to \(-1\) for \(z<2\).
CMB power spectra of the TT mode in \(\Lambda \)CDM and PLCE models along with the Planck 2018 data
The change \(\Delta D_{\ell }^{TT}\) of the TT mode of CMB power spectra between PLCE and \(\Lambda \)CDM, where the legend is the same as Fig. 1
CMB TE power spectra in the \(\Lambda \)CDM and PLCE models along with the Planck 2018 data
In Fig. 2, we display the CMB power spectra in the \(\Lambda \)CDM and PLCE models along with the data from Planck 2018. Since the TT spectra of PLCE and \(\Lambda \)CDM are almost identical to the Planck 2018 data at high multipoles l, we focus on the differences between the two models and the data at \(l<100\), as depicted in Fig. 3. The TT power spectrum in the PLCE model for \(\nu > 0\) is larger than that of \(\Lambda \)CDM when \(l < 100\), with deviations that remain within the observational uncertainties.
For the TE mode, the spectra in PLCE for the different values of \(\nu \) are always close to that in \(\Lambda \)CDM as well as to the observational data of Planck 2018, as shown in Fig. 4. However, when we carefully compare the differences between the results in PLCE and \(\Lambda \)CDM in Fig. 5, we notice that those of PLCE are closer to the Planck 2018 data than \(\Lambda \)CDM.
The change \(\Delta D_{\ell }^{TE}\) of the TE mode of CMB power spectra between PLCE and \(\Lambda \)CDM, where the legend is the same as Fig. 4
Evolutions of the equation-of-state parameter \(w_{DE}\) in \(\Lambda \)CDM and TECE models
3.2 Tsallis entropy cosmological evolution model

Equation (2.7) of TECE reduces to the one in \(\Lambda \)CDM when \(\delta =\tilde{\alpha }=1\). In our study, we only focus on the effects when \(\delta \ne 1\), so we set \(\tilde{\alpha }=1\) and \(\delta =1+\xi \). In Fig. 6, we find that the equation of state \(w_{DE}\) behaves differently for different values of \(\xi \). In particular, it is larger (smaller) than \(-1\) when \(\xi \) is larger (smaller) than zero, without ever crossing \(-1\).
In Figs. 7 and 8, we see that the TT power spectra of TECE and \(\Lambda \)CDM differ noticeably at large scales. Note that there is a significant discrepancy between \(\Lambda \)CDM and the data at \(l\sim 20-27\). However, the spectrum of TECE for \(\xi =0.002\) at \(l\sim 20-27\) is below that in \(\Lambda \)CDM and closer to the observational data of Planck 2018. The shifts of the TE mode between TECE and \(\Lambda \)CDM are shown in Figs. 9 and 10.
Legend is the same as Fig. 2 but in the TECE model with a set of \(\xi \)
Table 1 The 35 H(z) data points used in the global fits (individual entries include, e.g., 69.0 ± 19.6, 117.0 ± 23.0 and 83.0 ± 8.0)
3.3 Global fits
We use the modified \(\mathbf{CAMB}\) and CosmoMC programs [32] to perform global cosmological fits for the PLCE and TECE models from the observational data with the MCMC method. The dataset includes the CMB temperature fluctuation data from Planck 2015 with TT, TE, EE, low-l polarization and CMB lensing from SMICA [45, 46, 47], the weak lensing (WL) data from CFHTLenS [48], and the BAO data from the 6dF Galaxy Survey [49] and BOSS [50]. In particular, we include 35 points of H(z) measurements in our fits, which are listed in Table 1. The \(\chi ^2\) fit is given by
$$\begin{aligned} {\chi ^2}={\chi ^2_{CMB}}+{\chi ^2_{WL}}+{\chi ^2_{BAO}}+{\chi ^2_{H(z)}}, \end{aligned}$$
$$\begin{aligned} \chi ^2_c = \sum _{i=1}^n \frac{(T_c(z_i) - O_c(z_i))^2}{E^i_c} \,, \end{aligned}$$
where the subscript "c" denotes the category of the data, n represents the number of data points in that category, \(T_c\) is the prediction from CAMB, and \(O_c\) (\(E_c\)) corresponds to the observational value (covariance). The priors of the various cosmological parameters are given in Table 2.
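A minimal sketch of the per-category \(\chi ^2\) above (Python/NumPy), reading E as the diagonal variance of each data point; the three points in the example are hypothetical, not entries of the actual dataset.

```python
import numpy as np

def chi2_category(model_vals, obs_vals, variances):
    """Diagonal chi^2 for one data category, following the formula above."""
    model_vals, obs_vals, variances = map(np.asarray, (model_vals, obs_vals, variances))
    return float(np.sum((model_vals - obs_vals) ** 2 / variances))

# Hypothetical example: observed values, errors (sigma) and model predictions.
obs = np.array([68.0, 85.0, 100.0])
err = np.array([10.0, 9.0, 12.0])
model = np.array([67.0, 88.0, 105.0])
print(round(chi2_category(model, obs, err**2), 3))
```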
Table 2 Priors for cosmological parameters in the PLCE and TECE models
PLCE model parameter \(\nu \): \(-0.025 \le \nu \le 1.0\)
TECE model parameter \(\xi \): \(-0.01 \le \xi \le 0.02\)
Baryon density: \(0.5 \le 100\Omega _bh^2 \le 10\)
CDM density: \(0.1 \le 100\Omega _ch^2 \le 99\)
Optical depth: \(0.01 \le \tau \le 0.8\)
Neutrino mass sum: \(0 \le \Sigma m_{\nu } \le 2\) eV
Sound horizon to angular diameter distance ratio: \(0.5 \le 100 \theta _{MC} \le 10\)
Scalar power spectrum amplitude: \(2 \le \ln ( 10^{10} A_s ) \le 4\)
Spectral index: \(0.8 \le n_s \le 1.2\)
One and two-dimensional distributions of \(\Omega _b h^2\), \(\Omega _c h^2\), \(\sum m_\nu \), \(\nu \), \(H_0\), and \(\sigma _8\) in the PLCE and \(\Lambda \)CDM models, where the contour lines represent 68\(\%\) and 95\(\%\) C.L., respectively
Legend is the same as Fig. 11 but for the TECE and \(\Lambda \)CDM models
Table 3 Fitting results for the PLCE and \(\Lambda \)CDM models, where the limits are given at 68\(\%\) and 95\(\%\) C.L., respectively (partially reproduced): \(\Omega _{b} h^{2}\): \(0.02237\pm 0.00014\), \(0.02237 \pm 0.00027\), \(0.02235^{+0.00028}_{-0.00027}\); \(\Omega _{c} h^{2}\): \(0.1172^{+0.0012}_{-0.0011}\), \(0.1173\pm 0.0012\), \(0.1173 \pm 0.0023\); \(100\theta _{MC}\): \(1.04101 \pm 0.00059\); \(\tau \): \(0.079^{+0.017}_{-0.019}\), \(0.078\pm 0.018\); \(\Sigma m_{\nu }\)/eV: \(< 0.0982\), \(< 0.183\); \(\nu \): \(0.0240^{+0.0110}_{-0.0085}\); \(H_0\): \(67.96\pm 0.56\), \(68.0 \pm 1.1\); the \(\sigma _8\) and \(\chi ^2_{best-fit}\) values are quoted in the text below.
Table 4 Fitting results for the TECE and \(\Lambda \)CDM models, where the limits are given at 68\(\%\) and 95\(\%\) C.L., respectively (partially reproduced): recoverable entries include \(0.1174 \pm 0.0025\), \(0.090 \pm 0.042\), \(68.05^{+0.60}_{-0.54}\) and \(68.1^{+1.1}_{-1.2}\); the fitted \(\xi \), \(\Sigma m_{\nu }\), \(H_0\) and \(\chi ^2\) values are quoted in the text.
In Fig. 11, we present our fitting results for PLCE (red) and \(\Lambda \)CDM (blue). Although the PLCE model has been discussed in the literature, this is the first time its numerical cosmological effects have been illustrated. In particular, we find that \(\nu \) \(=\) \(0.0240^{+0.0110}_{-0.0085}\) at 68\(\%\) C.L., which shows that PLCE and \(\Lambda \)CDM can be clearly distinguished. It is interesting to note that the value of \(\sigma _8=0.814^{+0.023}_{-0.026} \) (95\(\%\) C.L.) in PLCE is smaller than that of \(0.815^{+0.023}_{-0.025}\) (95\(\%\) C.L.) in \(\Lambda \)CDM. As shown in Table 3, the best-fit \(\chi ^2\) value in PLCE is 3017.12, which is also smaller than 3018.32 in \(\Lambda \)CDM. Although the cosmological observables for the best \(\chi ^2\) fit in PLCE do not deviate significantly from those in \(\Lambda \)CDM, this indicates that the PLCE model is closer to the observational data than \(\Lambda \)CDM.
Similarly, we show our results for TECE (red) and \(\Lambda \)CDM (blue) in Fig. 12. Explicitly, we obtain \(\xi \) = \(( 3.8\pm {2.7})\times 10^{-4}\) at 68\(\%\) C.L. In addition, the TECE model relaxes the limit on the total mass of the active neutrinos: we find \(\Sigma m_\nu \) \(< 0.317\) eV, compared with \(\Sigma m_\nu \) \(< 0.195\) eV in \(\Lambda \)CDM at 95\(\%\) C.L. Moreover, the value of \(H_0\) in TECE is \(68.42\pm {0.71}\) \((68.4\pm {1.4})\), which is larger than \(68.05^{+0.60}_{-0.54}\) \((68.1^{+1.1}_{-1.2})\) in \(\Lambda \)CDM at 68\(\%\) (95\(\%\)) C.L.
As shown in Table 4, the best fitted \(\chi ^2\) value in the TECE model is 3018.96, which is smaller than 3019.28 in \(\Lambda \)CDM model. Although the difference between the value of \(\chi ^2\) in TECE and \(\Lambda \)CDM is not significant, it still implies that the TECE model can not be ignored. Clearly, more considerations and discussions are needed in the future.
We have calculated the cosmological evolutions of \(\rho _{DE}\) and \(w_{DE}\) in the PLCE and TECE models. We have found that the EoS of dark energy in PLCE (TECE) does not cross \(-1\). We have shown that the CMB TE power spectrum of the PLCE model with a positive \(\nu \) is closer to the Planck 2018 data than that in \(\Lambda \)CDM, while the CMB TT spectrum in the TECE model has smaller values around \(l\sim 20-27\), which are lower than those in \(\Lambda \)CDM but close to the Planck 2018 data. By using the Newton method in the global fitting, we have obtained the first numerical result in the PLCE model, with \(\nu =0.0240^{+0.0110}_{-0.0085} \) at 68\(\%\) C.L., which can be clearly distinguished from \(\Lambda \)CDM. Our fitting results indicate that the PLCE model gives a smaller value of \(\sigma _8\) with a better \(\chi ^2\) value than \(\Lambda \)CDM. In the TECE model, we have obtained \(\xi =(3.8\pm 2.7)\times 10^{-4}\) and \(\Sigma m_\nu \) \(< 0.186\) eV at 68\(\%\) C.L., while \(H_0\) is closer to 70. The best-fit value of \(\chi ^2\) is 3018.96 in the TECE model, which is smaller than 3019.28 in \(\Lambda \)CDM. These results demonstrate that the TECE model deserves more attention and research in the future.
This work was supported in part by National Center for Theoretical Sciences and MoST (MoST-107-2119-M-007-013-MY3).
L. Amendola, S. Tsujikawa, Dark Energy: Theory and Observations (Cambridge University Press, Cambridge, 2015)
S. Weinberg, Rev. Mod. Phys. 61, 1 (1989)
S. Weinberg, Gravitation and Cosmology (Wiley, New York, 1972)
N. Arkani-Hamed, L.J. Hall, C.F. Kolda, H. Murayama, Phys. Rev. Lett. 85, 4434 (2000)
P.J.E. Peebles, B. Ratra, Rev. Mod. Phys. 75, 559 (2003)
E.J. Copeland, M. Sami, S. Tsujikawa, Int. J. Mod. Phys. D 15, 1753 (2006)
T. Jacobson, Phys. Rev. Lett. 75, 1260 (1995)
R.G. Cai, S.P. Kim, JHEP 0502, 050 (2005)
M. Akbar, R.G. Cai, Phys. Lett. B 635, 7 (2006)
M. Akbar, R.G. Cai, Phys. Rev. D 75, 084003 (2007)
M. Jamil, E.N. Saridakis, M.R. Setare, Phys. Rev. D 81, 023007 (2010)
Z.Y. Fan, H. Lu, Phys. Rev. D 91(6), 064009 (2015)
Y. Gim, W. Kim, S.H. Yi, JHEP 1407, 002 (2014)
S. Das, S. Shankaranarayanan, S. Sur, Phys. Rev. D 77, 064013 (2008)
C. Tsallis, J. Stat. Phys. 52, 479 (1988)
M.L. Lyra, C. Tsallis, Phys. Rev. Lett. 80, 53 (1998)
G. Wilk, Z. Wlodarczyk, Phys. Rev. Lett. 84, 2770 (2000)
D.A. Easson, P.H. Frampton, G.F. Smoot, Phys. Lett. B 696, 273 (2011)
N. Komatsu, S. Kimura, Phys. Rev. D 88, 083534 (2013)
E.M.C. Abreu et al., Physica A 441, 141 (2016)
S. Ghaffari et al., Eur. Phys. J. C 78, 706 (2018)
M.A. Zadeh, A. Sheykhi, H. Moradpour, K. Bamba, Eur. Phys. J. C 78, 940 (2018)
E.N. Saridakis, K. Bamba, R. Myrzakulov, F.K. Anagnostopoulos, JCAP 1812, 012 (2018)
M. Tavayef, A. Sheykhi, K. Bamba, H. Moradpour, Phys. Lett. B 781, 195 (2018)
E.M. Barboza et al., Physica A 436, 301 (2015)
M. Abdollahi Zadeh, A. Sheykhi, H. Moradpour, Mod. Phys. Lett. A 34, 1950086 (2019)
S. Nojiri, S.D. Odintsov, E.N. Saridakis, Eur. Phys. J. C 79(3), 242 (2019)
C. Tsallis, L.J.L. Cirto, Eur. Phys. J. C 73, 2487 (2013)
K. Bamba, S. Capozziello, S. Nojiri, S.D. Odintsov, Astrophys. Space Sci. 342, 155 (2012)
A. Lymperis, E.N. Saridakis, Eur. Phys. J. C 78, 993 (2018)
W.H. Press et al., Numerical Recipes 3rd Edition: The Art of Scientific Computing (Cambridge University Press, Cambridge, 2007)
A. Lewis, S. Bridle, Phys. Rev. D 66, 103511 (2002)
C. Zhang, H. Zhang, S. Yuan, T.J. Zhang, Y.C. Sun, Res. Astron. Astrophys. 14, 1221 (2014)
R. Jimenez, L. Verde, T. Treu, D. Stern, Astrophys. J. 593, 622 (2003)
J. Simon, L. Verde, R. Jimenez, Phys. Rev. D 71, 123001 (2005)
M. Moresco et al., JCAP 1208, 006 (2012)
E. Gaztanaga, A. Cabre, L. Hui, Mon. Not. R. Astron. Soc. 399, 1663 (2009)
D. Stern, R. Jimenez, L. Verde, M. Kamionkowski, S.A. Stanford, JCAP 1002, 008 (2010)
B.A. Reid et al., Mon. Not. R. Astron. Soc. 426, 2719 (2012)
M. Moresco, Mon. Not. R. Astron. Soc. 450, L16 (2015)
N.G. Busca et al., Astron. Astrophys. 552, A96 (2013)
Y. Hu, M. Li, Z. Zhang, arXiv:1406.7695 [astro-ph.CO]
A. Font-Ribera et al. [BOSS Collaboration], JCAP 1405, 027 (2014)
R. Adam et al. [Planck Collaboration], Astron. Astrophys. 594, A10 (2016)
N. Aghanim et al. [Planck Collaboration], Astron. Astrophys. 594, A11 (2016)
P.A.R. Ade et al. [Planck Collaboration], Astron. Astrophys. 594, A15 (2016)
C. Heymans et al., Mon. Not. R. Astron. Soc. 432, 2433 (2013)
F. Beutler et al., Mon. Not. R. Astron. Soc. 416, 3017 (2011)
L. Anderson et al. [BOSS Collaboration], Mon. Not. R. Astron. Soc. 441, 24 (2014)
Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Funded by SCOAP3
1.School of Fundamental Physics and Mathematical SciencesHangzhou Institute for Advanced Study, UCASHangzhouChina
2.Department of PhysicsNational Tsing Hua UniversityHsinchuTaiwan
3.Physics DivisionNational Center for Theoretical SciencesHsinchuTaiwan
Geng, CQ., Hsu, YT., Lu, JR. et al. Eur. Phys. J. C (2020) 80: 21. https://doi.org/10.1140/epjc/s10052-019-7476-y
Accepted 08 November 2019
DOI https://doi.org/10.1140/epjc/s10052-019-7476-y
Publisher Name Springer Berlin Heidelberg
EPJC is an open-access journal funded by SCOAP3 and licensed under CC BY 4.0 | CommonCrawl |
Vladimir S. Gerdjikov 1, , Georgi Grahovski 2, and Rossen Ivanov 3,
Institute for Nuclear Research and Nuclear Energy, Bulgarian academy of sciences, 72 Tsarigradsko chaussee, 1784 Sofia, Bulgaria
School of Electronic Engineering, Dublin City University, Glasnevin, Dublin 9
School of Mathematical Science, Dublin Institute of Technology, Kevin Street, Dublin 8
Received April 2011 Published January 2012
Non-holonomic deformations of integrable equations of the KdV hierarchy are studied by using expansions over the so-called ``squared solutions'' (squared eigenfunctions). Such deformations are equivalent to perturbed models with external (self-consistent) sources. In this regard, the KdV6 equation is viewed as a special perturbation of the KdV equation. Applying expansions over the symplectic basis of squared eigenfunctions, the integrability properties of the KdV hierarchy with generic self-consistent sources are analyzed. This allows one to formulate a set of conditions on the perturbation terms that preserve the integrability. The perturbation corrections to the scattering data and to the corresponding action-angle variables are studied. The analysis shows that although many nontrivial solutions of KdV equations with generic self-consistent sources can be obtained by the Inverse Scattering Transform (IST), there are solutions that, in principle, cannot be obtained via the IST. Examples are considered showing the complete integrability of KdV6 with perturbations that keep the eigenvalues time-independent. In another type of example, soliton solutions of the perturbed equations are presented in which the perturbed eigenvalue depends explicitly on time; such equations, however, are in general not completely integrable.
Keywords: soliton perturbations, inverse scattering method, KdV6 equation, self-consistent sources, KdV hierarchy.
Mathematics Subject Classification: Primary: 37K15, 37K40, 37K55; Secondary: 35P10, 35P25, 35P3.
Citation: Vladimir S. Gerdjikov, Georgi Grahovski, Rossen Ivanov. On the integrability of KdV hierarchy with self-consistent sources. Communications on Pure & Applied Analysis, 2012, 11 (4) : 1439-1452. doi: 10.3934/cpaa.2012.11.1439
V. A. Arkad'ev, A. K. Pogrebkov and M. K. Polivanov, Expansions with respect to squares, symplectic and Poisson structures associated with the Sturm-Liouville problem. I, Theor. Math. Phys., 72 (1987), 909-920. Google Scholar
V. A. Arkad'ev, A. K. Pogrebkov and M. K. Polivanov, Expansions with respect to squares, symplectic and Poisson structures associated with the Sturm-Liouville problem. II, Theor. Math. Phys., 75 (1988), 448-460. doi: 10.1007/BF01017483. Google Scholar
G. Borg, Eine Umkehrung der Sturm-Liouvilleschen eigenwertaufgabe. Bestimmung der differentialgleichung durch die eigenwerte, Acta Math., 78 (1946), 1-96, (German). doi: 10.1007/BF02421600. Google Scholar
F. Calogero, A. Degasperis, "Spectral Transform and Solitons Vol 1. Tools to Solve and Investigate Nonlinear Evolution Equations," Studies in Mathematics and its Applications 13 (Lecture Notes in Computer Science vol 144) Amsterdam: North-Holland (1982), p. 516. Google Scholar
R. Camassa and D. Holm, An integrable shallow water equation with peaked solitons, Phys. Rev. Lett., 71 (1993), 1661-1664 (E-print: patt-sol/9305002). doi: 10.1103/PhysRevLett.71.1661. Google Scholar
C. Claude, A. Latifi and J. P. Leon, Nonlinear resonant scattering and plasma instability: an integrable model, J. Math Phys., 32 (1991), 3321-3330. doi: 10.1063/1.529443. Google Scholar
A. Constantin, V. Gerdjikov and R. Ivanov, Inverse scattering transform for the Camassa-Holm equation, Inv. Problems, 22 (2006), 2197-2207 (E-print: nlin/0603019). doi: 10.1088/0266-5611/22/6/017. Google Scholar
A. Constantin, V. Gerdjikov and R. Ivanov, Generalized Fourier transform for the Camassa-Holm hierarchy, Inverse Problems, 23 (2007), 1565-1597 (E-print: arXiv:0707.2048). doi: 10.1088/0266-5611/23/4/012. Google Scholar
A. Constantin and D. Lannes, The hydrodynamical relevance of the Camassa-Holm and Degasperis-Procesi Equations, Arch. Rat. Mech. Anal., 192 (2009), 165-186 (E-print: arXiv:0709.0905). doi: 10.1007/s00205-008-0128-2. Google Scholar
G. Eilenberger, "Solitons: Mathematical Methods for Physicists," Springer Series in Solid-State Sciences. vol. 19, Springer-Verlag, Berlin, (1981). Google Scholar
L. D. Faddeev and L. A. Takhtajan, Poisson structure for the KdV equation, Lett. MAth. Phys., 10 (1985), 183-188. doi: 10.1007/BF00398156. Google Scholar
V. S. Gerdjikov, Generalised Fourier transforms for the soliton equations. Gauge-covariant formulation, Inv. Problems, 2 (1986), 51-74. doi: 10.1088/0266-5611/2/1/005. Google Scholar
V. S. Gerdjikov, The generalized Zakharov-Shabat system and the soliton perturbations, Theoret. and Math. Phys., 99 (1994), 593-598. doi: 10.1007/BF01016144. Google Scholar
V. S. Gerdjikov and M. I. Ivanov, Expansions over the "squared" solutions and the inhomogeneous nonlinear Schrödinger equation, Inv. Problems, 8 (1992), 831-847. doi: 10.1088/0266-5611/8/6/004. Google Scholar
V. S. Gerdjikov and E. Kh. Khristov, Evolution equations solvable by the inverse-scattering method. I. Spectral theory, Bulgarian J. Phys., 7 (1980), 28-41. (In Russian). On evolution equations solvable by the inverse scattering method. II. Hamiltonian structure and Bäcklund transformations Bulgarian J. Phys., 7 (1980), 119-133 (In Russian). Google Scholar
V. S. Gerdjikov, G. Vilasi and A. B. Yanovski, "Integrable Hamiltonian Hierarchies. Spectral and Geometric Methods," Lecture Notes in Physics, 748. Springer-Verlag, Berlin, 2008. doi: 10.1007/978-3-540-77054-1. Google Scholar
V. S. Gerdjikov and A. B. Yanovski, Completeness of the eigenfunctions for the Caudrey-Beals-Coifman system, J. Math. Phys., 35 (1994), 3687-3725. doi: 10.1063/1.530441. Google Scholar
G. G. Grahovski and R. I. Ivanov, Generalised Fourier transform and perturbations to soliton equations, Discr. Cont. Dyn. Syst. B, 12 (2009), 579-595 (E-print: arXiv:0907.2062). doi: 10.3934/dcdsb.2009.12.579. Google Scholar
P. Guha, Nonholonomic deformation of generalized KdV-type equations, J. Phys. A: Math. Theor., 42 (2009), 345201. doi: 10.1088/1751-8113/42/34/345201. Google Scholar
I. Iliev, E. Khristov and K. Kirchev, "Spectral Methods in Soliton Equations," Pitman Monographs and Surveys in Pure and Appl. Math., 73, Pitman, London, 1994. Google Scholar
R. I. Ivanov, Water waves and integrability, Philos. Trans. R. Soc. Lond. Ser. A: Math. Phys. Eng. Sci., 365 (2007), 2267-2280 (E-print: arXiv:0707.1839). doi: 10.1098/rsta.2007.2007. Google Scholar
R. S. Johnson, Camassa-Holm, Korteweg-de Vries and related models for water waves, J. Fluid. Mech., 457 (2002), 63-82. doi: 10.1017/S0022112001007224. Google Scholar
R. S. Johnson, On solutions of the Camassa-Holm equation, Proc. Roy. Soc. London A, 459 (2003), 1687-1708. doi: 10.1016/S0169-5983(03)00036-4. Google Scholar
A. Karasu-Kalkantli, A. Karasu, A. Sakovich, S. Sakovich and R. Turhan, A new integrable generalization of the KdV equation, J. Math. Phys., 49 (2008), 073516. doi: 10.1063/1.2953474. Google Scholar
V. I. Karpman and E. M. Maslov, Perturbation theory for solitons, Soviet Phys. JETP, 46 (1977), 537-559. Google Scholar
D. J. Kaup, A perturbation expansion for the Zakharov-Shabat inverse scattering transform, SIAM J. Appl. Math., 31 (1976), 121-133. doi: 10.1137/0131013. Google Scholar
D. J. Kaup, Closure of the squared Zakharov-Shabat eigenstates, J. Math. Anal. Appl., 54 (1976), 849-864. doi: 10.1016/0022-247X(76)90201-8. Google Scholar
D. J. Kaup, In "Significance of Nonlinearity in the Natural Science" (Kursunoglu, A. Perlmutter and L. F. Scott eds.), Plenum Press, p. 97, 1977. Google Scholar
D. J. Kaup and A. C. Newell, Solitons as particles, oscillators, and in slowly changing media: A singular perturbation theory, Proc. Roy. Soc., A361 (1978), 413-446. doi: 10.1098/rspa.1978.0110. Google Scholar
K. P. Kirchev and E. Kh. Hristov, Expansions connected with the products of the solutions of two regular Sturm-Liouville problems, Sibirsk. Mat. Zh., 21 (1980), 98-109 (Russian). Google Scholar
A. Kundu, Exact accelerating soliton in nonholonomic deformation of the KdV equation with two-fold integrable hierarchy, J. Phys. A: Math. Theor., 41 (2008), 495201. doi: 10.1088/1751-8113/41/49/495201. Google Scholar
A. Kundu, R. Sahadevan and L. Nalinidevi, Nonholonomic deformation of KdV and mKdV equations and their symmetries, hierarchies and integrability, J. Phys. A: Math. Theor., 42 (2009), 115213. doi: 10.1088/1751-8113/42/11/115213. Google Scholar
A. Kundu, Nonlinearizing linear equations to integrable systems including new hierarchies with nonholonomic deformations, J. Math. Phys., 50 (2009), 102702. doi: 10.1063/1.3204081. Google Scholar
A. Kundu, Two-fold integrable hierarchy of nonholonomic deformation of the derivative nonlinear Schröinger and the Lenells-Fokas equation, J. Math. Phys., 51 (2010), 022901. doi: 10.1063/1.3276447. Google Scholar
B. A. Kupershmidt, KdV6: an integrable system, Phys. Lett. A, 372 (2008), 2634-2639. doi: :10.1016/j.physleta.2007.12.019. Google Scholar
J. P. Leon, General evolution of the spectral transform from the $\partial$-approach, Phys. Lett, 123A (1987), 65-70. doi: 10.1016/0375-9601(87)90657-8. Google Scholar
J. P. Leon and A. Latifi, Solution of an initial-boundary value problem for coupled nonlinear waves, J. Phys. A: Math. Gen., 23 (1990), 1385-1403. doi: 10.1088/0305-4470/23/8/013. Google Scholar
J. P. Leon, Nonlinear evolutions with singular dispersion laws and forced systems, Phys. Lett, 144A (1990), 444-452. doi: 10.1016/0375-9601(90)90512-M. Google Scholar
J. Leon, Spectral transform and solitons for generalized coupled Bloch systems, J. Math. Phys., 29 (1988), 2012-2019. doi: 10.1063/1.527859. Google Scholar
V. K. Melnikov, Integration of the Korteweg-de Vries eqtion with source, Inverse Probl., 6 (1990), 233-246. doi: 10.1088/0266-5611/6/2/007. Google Scholar
V. K. Melnikov, Creation and annihilation of solitons in the system described by the Kortweg-de Vries equation with a self-consistent source, Inverse Probl., 6 (1990), 809-823. doi: 10.1088/0266-5611/6/5/010. Google Scholar
A. C. Newell, In "Solitons" (R. K. Bullough and P. J. Caudrey eds.), Springer Verlag, 1980. Google Scholar
S. P. Novikov, S. V. Manakov, L. P. Pitaevskii and V. E. Zakharov, "Theory of Solitons: the Inverse Scattering Method," Plenum, New York, 1984. Google Scholar
A. Ramani, B. Grammaticos and R. Willox, Bilinearization and solutions of the KdV6 equation, Anal. Appl., 6 (2008), 401-412. doi: 10.1142/S0219530508001249. Google Scholar
T. Valchev, On the Kaup-Kupershmidt equation. Completeness relations for the squared solutions, in "Ninth International Conference on Geometry, Integrability and Quantization, June 8-13 2007, Varna, Bulgaria" (eds. I. Mladenov and M. de Leon), SOFTEX, Sofia (2008), 308-319. Google Scholar
Jing Ping Wang, Extension of integrable equations, J. Phys. A: Math. Theor., 42 (2009), 362004. doi: 10.1088/1751-8113/42/36/362004. Google Scholar
A-M. Wazwaz, The integrable KdV6 equations: Multiple soliton solutions and multiple singular soliton solutions, Applied Mathematics and Computation, 204 (2008), 963-972. doi: 10.1016/j.amc.2008.08.007. Google Scholar
Y. Q. Yao and Y. B. Zeng, Integrable Rosochatius deformations of higher-order constrained flows and the soliton hierarchy with self-consistent source, J. Phys. A: Math. Theor., 41 (2008), 295205. doi: 10.1088/1751-8113/41/29/295205. Google Scholar
Y. Q. Yao and Y. B. Zeng, The bi-Hamiltonian structure and new solutions of KdV6 equation, Lett. Math. Phys., 86 (2008), 193-208. doi: 10.1007/s11005-008-0281-4. Google Scholar
V. Zakharov and L. Faddeev, Korteweg-de Vries equation is a completely integrable Hamiltonian system, Func. Anal. Appl., 5 (1971), 280-287 (English). Google Scholar
Y. B. Zeng, Y. J. Shao and W. Xue, Negaton and positon solutions of the soliton equation with self-consistent sources, J. Phys. A Math. Gen., 36 (2003), 5035-5043. doi: 10.1088/0305-4470/36/18/308. Google Scholar
Vladimir S. Gerdjikov Georgi Grahovski Rossen Ivanov | CommonCrawl |
Page 1 of 483 results
Joint Estimation of Image Representations and their Lie Invariants
Allen-Blanchette, Christine, Daniilidis, Kostas
arXiv.org Artificial Intelligence Dec-8-2020
The former is useful for tasks such as planning and control, and the latter for classification. The automatic extraction of this information is challenging because of the high-dimensionality and entangled encoding inherent to the image representation. This article introduces two theoretical approaches aimed at the resolution of these challenges. The approaches allow for the interpolation and extrapolation of images from an image sequence by joint estimation of the image representation and the generators of the sequence dynamics. In the first approach, the image representations are learned using probabilistic PCA [1]. The linear-Gaussian conditional distributions allow for a closed form analytical description of the latent distributions but assumes the underlying image manifold is a linear subspace. In the second approach, the image representations are learned using probabilistic nonlinear PCA which relieves the linear manifold assumption at the cost of requiring a variational approximation of the latent distributions. In both approaches, the underlying dynamics of the image sequence are modelled explicitly to disentangle them from the image representations. The dynamics themselves are modelled with Lie group structure which enforces the desirable properties of smoothness and composability of inter-image transformations.
artificial intelligence, neural network, representation, (16 more...)
arXiv.org Artificial Intelligence
Country: North America > United States > Pennsylvania (0.46)
Information Technology > Sensing and Signal Processing > Image Processing (1.00)
Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.93)
New Artificial Neural Networks To Use Graphene Memristors
#artificialintelligence Nov-26-2020, 08:06:07 GMT
Research in the field of traditional computing systems is slowing down, with new types of computing moving to the forefront now. A team of engineers from Pennsylvania State University (Penn State) in the U.S. has been working on creating a type of computing based on our brain's neural networks' systems all while using the brain's analog nature. The team has discovered that graphene-based memory resistors show promise for this new computing form. Their findings were recently published in Nature Communications. "We have powerful computers, no doubt about that, the problem is you have to store the memory in one place and do the computing somewhere else," said Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics.
artificial intelligence, machine learning, new artificial neural network, (4 more...)
Technology: Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
Machine Learning Helps Reduce Food Insecurity During COVID-19
The team leveraged advanced analytics tools to create cost-effective bus routes that allow nonprofit organizations to deliver meals to senior citizens, as well as K-12 students and families who would otherwise rely on schools for free meals. The machine learning tools identified ideal distribution locations to reach as many people as possible, three days a week. The program began in July, and nearly 6,000 meals are delivered each month. It has since expanded to include dinners that feed a family of four. Carnegie Mellon's Metro21: Smart City Institute leads the project alongside Allies for Children, United Way of Southwestern Pennsylvania and the Greater Pittsburgh Community Food Bank.
health & medicine, immunology, learning help reduce food insecurity, (3 more...)
Technology: Information Technology > Artificial Intelligence > Machine Learning (1.00)
On-the-fly Closed-loop Autonomous Materials Discovery via Bayesian Active Learning
Kusne, A. Gilad, Yu, Heshan, Wu, Changming, Zhang, Huairuo, Hattrick-Simpers, Jason, DeCost, Brian, Sarker, Suchismita, Oses, Corey, Toher, Cormac, Curtarolo, Stefano, Davydov, Albert V., Agarwal, Ritesh, Bendersky, Leonid A., Li, Mo, Mehta, Apurva, Takeuchi, Ichiro
arXiv.org Machine Learning Nov-10-2020
Active learning - the field of machine learning (ML) dedicated to optimal experiment design, has played a part in science as far back as the 18th century when Laplace used it to guide his discovery of celestial mechanics [1]. In this work we focus a closed-loop, active learning-driven autonomous system on another major challenge, the discovery of advanced materials against the exceedingly complex synthesis-processes-structure-property landscape. We demonstrate autonomous research methodology (i.e. autonomous hypothesis definition and evaluation) that can place complex, advanced materials in reach, allowing scientists to fail smarter, learn faster, and spend less resources in their studies, while simultaneously improving trust in scientific results and machine learning tools. Additionally, this robot science enables science-over-the-network, reducing the economic impact of scientists being physically separated from their labs. We used the real-time closed-loop, autonomous system for materials exploration and optimization (CAMEO) at the synchrotron beamline to accelerate the fundamentally interconnected tasks of rapid phase mapping and property optimization, with each cycle taking seconds to minutes, resulting in the discovery of a novel epitaxial nanocomposite phase-change memory material.
artificial intelligence, composition, us government, (17 more...)
arXiv.org Machine Learning
Genre: Research Report (0.46)
Education > Teaching Methods (0.92)
Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.69)
Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.64)
Estimation, Confidence Intervals, and Large-Scale Hypotheses Testing for High-Dimensional Mixed Linear Regression
Zhang, Linjun, Ma, Rong, Cai, T. Tony, Li, Hongzhe
arXiv.org Machine Learning Nov-6-2020
This paper studies the high-dimensional mixed linear regression (MLR) where the output variable comes from one of the two linear regression models with an unknown mixing proportion and an unknown covariance structure of the random covariates. Building upon a high-dimensional EM algorithm, we propose an iterative procedure for estimating the two regression vectors and establish their rates of convergence. Based on the iterative estimators, we further construct debiased estimators and establish their asymptotic normality. For individual coordinates, confidence intervals centered at the debiased estimators are constructed. Furthermore, a large-scale multiple testing procedure is proposed for testing the regression coefficients and is shown to control the false discovery rate (FDR) asymptotically. Simulation studies are carried out to examine the numerical performance of the proposed methods and their superiority over existing methods. The proposed methods are further illustrated through an analysis of a dataset of multiplex image cytometry, which investigates the interaction networks among the cellular phenotypes that include the expression levels of 20 epitopes or combinations of markers.
algorithm, artificial intelligence, health & medicine, (16 more...)
Industry: Health & Medicine (1.00)
Technology: Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning > Regression (1.00)
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data
Robey, Alexander, Hassani, Hamed, Pappas, George J.
While deep learning has resulted in major breakthroughs in many application domains, the frameworks commonly used in deep learning remain fragile to artificially-crafted and imperceptible changes in the data. In response to this fragility, adversarial training has emerged as a principled approach for enhancing the robustness of deep learning with respect to norm-bounded perturbations. However, there are other sources of fragility for deep learning that are arguably more common and less thoroughly studied. Indeed, natural variation such as lighting or weather conditions can significantly degrade the accuracy of trained neural networks, proving that such natural variation presents a significant challenge for deep learning. In this paper, we propose a paradigm shift from perturbation-based adversarial robustness toward model-based robust deep learning. Our objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data. Critical to our paradigm is first obtaining a model of natural variation which can be used to vary data over a range of natural conditions. Such models may be either known a priori or else learned from data. In the latter case, we show that deep generative models can be used to learn models of natural variation that are consistent with realistic conditions. We then exploit such models in three novel model-based robust training algorithms in order to enhance the robustness of deep learning with respect to the given model. Our extensive experiments show that across a variety of naturally-occurring conditions and across various datasets, deep neural networks trained with our model-based algorithms significantly outperform both standard deep learning algorithms as well as norm-bounded robust deep learning algorithms.
deep learning, natural variation, neural network, (20 more...)
Overview (0.87)
Research Report (0.67)
Education (1.00)
Health & Medicine (0.92)
Government > Military (0.67)
(3 more...)
USC leads massive new artificial intelligence study of Alzheimer's
#artificialintelligence Oct-28-2020, 16:20:06 GMT
A massive problem like Alzheimer's disease (AD) — which affects nearly 50 million people worldwide — requires bold solutions. New funding expected to total $17.8 million, awarded to the Keck School of Medicine of USC's Mark and Mary Stevens Neuroimaging and Informatics Institute (INI) and its collaborators, is one key piece of that puzzle.

The five-year National Institutes of Health (NIH)-funded effort, "Ultrascale Machine Learning to Empower Discovery in Alzheimer's Disease Biobanks," known as AI4AD, will develop state-of-the-art artificial intelligence (AI) methods and apply them to giant databases of genetic, imaging and cognitive data collected from AD patients. Forty co-investigators at 11 research centers will team up to leverage AI and machine learning to bolster precision diagnostics, prognosis and the development of new treatments for AD.

"Our team of experts in computer science, genetics, neuroscience and imaging sciences will create algorithms that analyze data at a previously impossible scale," said Paul Thompson, PhD, associate director of the INI and project leader for the new grant. "Collectively, this will enable the discovery of new features in the genome that influence the biological processes involved in Alzheimer's disease."

Predicting a diagnosis

The project's first objective is to identify genetic and biological markers that predict an AD diagnosis — and to distinguish between several subtypes of the disease. To accomplish this, the research team will apply sophisticated AI and machine learning methods to a variety of data types, including tens of thousands of brain images and whole genome sequences. The investigators then will relate these findings to the clinical progression of AD, including in patients who have not yet developed dementia symptoms. The researchers will train AI methods on large databases of brain scans to identify patterns that can help detect the disease as it emerges in individual patients.

"As we get older, each of us has a unique mix of brain changes that occur for decades before we develop any signs of Alzheimer's disease — changes in our blood vessels, the buildup of abnormal protein deposits and brain cell loss," said Thompson, who also directs INI's Imaging Genetics Center. "Our new AI methods will help us determine what changes are happening in each patient, as well as drivers of these processes in their DNA, that we can target with new drugs."

The team is even creating a dedicated "Drug Repurposing Core" to identify ways to repurpose existing drugs to target newly identified segments of the genome, molecules or neurobiological processes involved in the disease. "We predict that combining AI with whole genome data and advanced brain scans will outperform methods used today to predict Alzheimer's disease progression," Thompson said.

Advancing AI

The AI4AD effort is part of the "Cognitive Systems Analysis of Alzheimer's Disease Genetic and Phenotypic Data" and "Harmonization of Alzheimer's Disease and Related Dementias (AD/ADRD) Genetic, Epidemiologic, and Clinical Data to Enhance Therapeutic Target Discovery" initiatives from the NIH's National Institute on Aging. These initiatives aim to create and develop advanced AI methods and apply them to extensive and harmonized rich genomic, imaging and cognitive data. Collectively, the goals of AI4AD leverage the promise of machine learning to contribute to precision diagnostics, prognostication, and targeted and novel treatments.
Thompson and his USC team will collaborate with four co-principal investigators at the University of Pennsylvania, the University of Pittsburgh and the Indiana University School of Medicine. The researchers will also host regular training events at major AD neuroimaging and genetics conferences to help disseminate newly developed AI tools to investigators across the field. Research reported in this publication will be supported by the National Institute on Aging of the National Institutes of Health under Award Number U01AG068057. Also involved in the project are INI faculty members Neda Jahanshad and Lauren Salminen, as well as consortium manager Sophia Thomopoulos. — Zara Greenbaum
alzheimer, dementia, neurology, (20 more...)
North America > United States > Pennsylvania (0.35)
North America > United States > Indiana (0.35)
Health & Medicine > Therapeutic Area > Neurology > Dementia (0.78)
Health & Medicine > Therapeutic Area > Neurology > Alzheimer's Disease (0.51)
Learning Strategies in Decentralized Matching Markets under Uncertain Preferences
Dai, Xiaowu, Jordan, Michael I.
arXiv.org Machine Learning Oct-28-2020
We study two-sided decentralized matching markets in which participants have uncertain preferences. We present a statistical model to learn the preferences. The model incorporates uncertain state and the participants' competition on one side of the market. We derive an optimal strategy that maximizes the agent's expected payoff and calibrate the uncertain state by taking the opportunity costs into account. We discuss the sense in which the matching derived from the proposed strategy has a stability property. We also prove a fairness property that asserts that there exists no justified envy according to the proposed strategy. We provide numerical results to demonstrate the improved payoff, stability and fairness, compared to alternative methods.
agent, educational setting, game theory, (19 more...)
Education > Educational Setting > Higher Education (0.94)
Information Technology > Game Theory (1.00)
Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
Information Technology > Data Science > Data Mining (0.67)
Graph-based Reinforcement Learning for Active Learning in Real Time: An Application in Modeling River Networks
Jia, Xiaowei, Lin, Beiyu, Zwart, Jacob, Sadler, Jeffery, Appling, Alison, Oliver, Samantha, Read, Jordan
arXiv.org Artificial Intelligence Oct-26-2020
Effective training of advanced ML models requires large amounts of labeled data, which is often scarce in scientific problems given the substantial human labor and material cost to collect labeled data. This poses a challenge on determining when and where we should deploy measuring instruments (e.g., in-situ sensors) to collect labeled data efficiently. This problem differs from traditional pool-based active learning settings in that the labeling decisions have to be made immediately after we observe the input data that come in a time series. In this paper, we develop a real-time active learning method that uses the spatial and temporal contextual information to select representative query samples in a reinforcement learning framework. To reduce the need for large training data, we further propose to transfer the policy learned from simulation data which is generated by existing physics-based models. We demonstrate the effectiveness of the proposed method by predicting streamflow and water temperature in the Delaware River Basin given a limited budget for collecting labeled data. We further study the spatial and temporal distribution of selected samples to verify the ability of this method in selecting informative samples over space and time.
deep learning, neural network, river segment, (20 more...)
North America > United States > Delaware (0.34)
North America > United States > New York (0.34)
North America > United States > New Jersey (0.34)
Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.70)
Provable tradeoffs in adversarially robust classification
Dobriban, Edgar, Hassani, Hamed, Hong, David, Robey, Alexander
It is well known that machine learning methods can be vulnerable to adversarially-chosen perturbations of their inputs. Despite significant progress in the area, foundational open problems remain. Here we address several of these key questions. We derive exact and approximate Bayes-optimal robust classifiers for the important setting of two- and three-class Gaussian classification problems with arbitrary imbalance, for $\ell_2$ and $\ell_\infty$ adversaries. In contrast to classical Bayes-optimal classifiers, decisions here cannot be made pointwise and new theoretical approaches are needed. We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry (Cianci et al, 2011, Mossel and Neeman 2015), which, to our knowledge, have not yet been used in the area. Our results reveal tradeoffs between standard and robust accuracy that grow when data is imbalanced. We also show further foundational results, including an analysis of the loss landscape, classification calibration for convex losses in certain models, and finite sample rates for the robust risk.
artificial intelligence, classifier, neural network, (15 more...) | CommonCrawl |
In today's money, what was the value of a 1492 Spanish maravedí?
Satava (2007) estimates that the cost of Columbus's 1492 voyage was 1,765,734 maravedís.
currency money economic-history
$\begingroup$ It's interesting that payroll was only a small part of that "On the first voyage, the crew was paid as follows: Masters and pilots, 2000 maravedis per month; able seamen, 1000 maravedis per month; ordinary seamen and ship's boys, 666 maravedis per month. Total payroll was 250,180 per month." $\endgroup$
$\begingroup$ @Fizz why do you think that payroll was only a small part of that? 250k per month for a seven month voyage comes out to 1750k, which is pretty much all of the total estimate. $\endgroup$
– Peteris
$\begingroup$ @Peteris: I guess food was free then, then. $\endgroup$
$\begingroup$ @Fizz: I'm not sure who/what you are quoting from. The only reference mentioned so far in the above question and comments is Satava (2007). And in that paper, Exhibit 3 gives these estimates for the entire voyage (Aug 3, 1492 - March 5, 1493): "Salaries - Crew" = 252,000 and "Salaries - Officers" = 268,000. I can't find your quoted statement anywhere in Satava (2007). $\endgroup$
Walsh (1931):
The author of this work has translated maravedis into dollars of 1929 by reference to statistics on purchasing power in wheat, corn and other staples. Hence his opinion that the maravedi was worth about two American cents of 1929.
According to the BLS inflation calculator, \$1 in 1929 is about \$15 today (2020).
So, if we accept Walsh's estimate, then one maravedi in 1492 converts to about \$0.30 today.
(And if we also accept Satava's estimate, then Columbus's voyage cost 1,765,734 $\times$ \$0.30 $\approx$ \$529,720 in today's USD.)
$\begingroup$ $300 per month for a seaman then. I guess if you hired from India it's feasible even today. $\endgroup$
$\begingroup$ @Fizz IMHO - Phillipino. That is at least my understanding of the source of sailors today. Phillipino and Malay. $\endgroup$
– Stian Yttervik
$\begingroup$ @Fizz Columbus hire from India? Very funny. $\endgroup$
– Michael McFarlane
dtcm840's answer is about as good as you're going to do for a single actual number: it is clear, well-sourced, and almost certainly misleading. That's not their fault: all comparisons over time periods this far off are rendered meaningless by the very different markets & relative prices of commodities & labor between now and then. This question does not have an answer in the way you'd like.
I am suspicious of the Walsh source cited in dtcm840's answer: it used purchasing power based on staple foodstuffs as a basis for comparison; however, food simply cost a far greater amount relative to other goods in the early modern period. An estimate based on the price of grain commodities between 1490 and 1929 will wind up dramatically undervaluing non-food goods.
This may account for why a recent reproduction of the Santa Maria alone, if we use Walsh's conversion, cost about 7 times as much as Satava's estimate for Columbus' entire voyage. Obviously, modern materials, and safety considerations, and they were building a museum, etc. But a factor of 7 times, for only the construction of just one of the ships? Clearly something isn't quite comparable here.
The potential for confusion with a simple conversion rate becomes particularly obvious if you look at things like wages, or the purchasing power available to actual people. A woman employed nursing foundling children in the period would've been paid about 100-200 maravedis per month, which on the Walsh estimate works out to an annual salary of \$750. Obviously this is an extremely low-status and correspondingly low-paid job, but it still highlights the difference between period and modern labor markets.
And that's why you can't really answer the question: it will only lead to confusion to say "A maravedi is worth about thirty cents" without also noting that a person's yearly labor was worth maybe a couple grand. Even saying that, conversion into modern currency doesn't help us understand that economy, because so much of Europe (and the world) was living under the current UN international poverty line, as an inevitable consequence of the different productivity levels of their economy and ours.
That's without getting into the implication that an iPad would have been about 2750 maravedis.
If you want to understand how much Columbus' voyage cost, you're better off comparing the price of the voyage to either the other activities undertaken by the Spanish state at the time, or to the total crown revenues, or something of this nature. Even then, recordkeeping is pretty spotty, and requires a lot of estimation--and estimates vary wildly. The answer here would probably be another question (and might be more appropriate for the history stack). If you are interested in the finances of the Spanish crown, this seems to be worth a read--although it'd have to be a closer reading than the one I've given it. One chart suggests that the Spanish military expenditures in 1565 were on the order of 1.8 million ducats, which by the 375 mrs to ducat conversion factor would be 675 million mrs, at which point Columbus' voyage is a rounding error. (Of course, at the 30-cent conversion rate, that would suggest Phillip II had a \$200-million annual military budget, roughly the amount Hitler sent Franco--a comment notes this is well into the inflationary period caused by importing New World specie; but even if prices tripled over those 70 years, it's still a rounding error in one year's military expenditures).
For another way to try to understand the magnitude of the outlay for this voyage, Vasco da Gama, the Portuguese admiral who opened the circum-African sea route to India, was awarded a royal pension of 300,000 reis upon his return in 1499. On this source, in the 1480s one Portuguese real was worth about 96/100ths of a maravedi, meaning da Gama's pension was roughly 285,000 mrs. (Contemporary conversions between currencies aren't perfect, but much less fraught than trying to create exchange rates 500 years apart.) So Portugal, one-fifth the size of Spain, was ready to pay the guy every 6 years what Satava estimates it cost to finance Columbus' entire voyage, just as a reward for prior services rendered.
Tiercelet
$\begingroup$ This is not an answer and merely a very long comment. Also, 1565 was more than 70 years after 1492 and well into the (Spanish) Price Revolution. $\endgroup$
$\begingroup$ @user54743 I haven't spent a lot of time on economics SE to get a good feel for the zeitgeist, but generally this is acceptable on other stacks if the "answer" add significant value and is too long for a comment. $\endgroup$
– Jared Smith
$\begingroup$ In contrast to Jared, I would say this is absolutely an answer—"the kind of answer you are expecting doesn't and can't exist" is and must be a valid sort of answer, because sometimes that is the reality and there needs to be a way to say so in such cases. It is emphatically not a comment—it does not seek to clarify or improve the question. $\endgroup$
What are the differences between the older gold standard and the current fiat money standard?
Value of the ruble in year 1825
Was the average person no better off in 1800 than in 100,000 BC?
What is the actual numerical value of the velocity of money?
Calculating the amount paid for a loan in today's dollars
Why does fiat money have value? | CommonCrawl |
ROBOMECH Journal
Modification of body schema by use of extra robotic thumb
Noel Segura Meraz1,
Masafumi Sobajima2,
Tadayoshi Aoyama1 and
Yasuhisa Hasegawa1
ROBOMECH Journal 2018 5:3
Received: 22 September 2017
Accepted: 6 January 2018
In recent years, there has been great interest in the possibility of using artificial limbs as an extension of the human body as well as a replacement for lost limbs. In this paper, we develop a sixth finger system as an extension of the human body. We then investigate how an extra robotic thumb that works as a sixth finger and gives somatosensory feedback to the user modifies the body schema, and also how it affects the self-perception of existing limbs. The sixth robotic finger is controlled with the thumb of the opposite hand, and contact information is conveyed via electrostimulation to the tip of the thumb controlling the movement. We conducted reaching task experiments with and without visual information to evaluate the level of embodiment of the sixth robotic finger and the modification of the self-perception of the finger controlling the system. The experimental results indicate that not only is the sixth finger incorporated into the body schema of the user, but the body schema of the controlling finger is also modified; they also imply an ability of the brain to adapt to different scenarios and geometries of the body.
Artificial extra limbs
Proprioceptive feedback
Body schema modification
Sixth finger
Artificial limbs have long been used as prostheses, and it has been studied how strongly these artificial limbs are embodied in the user's self-perception of the body [1]. It has been shown that an increase in embodiment improves the performance and comfort of these artificial limbs [2]. In recent years, great interest has arisen in the possible use of artificial limbs not just as replacements for lost limbs, but also as extensions of the human body. Some research groups have studied the use of artificial limbs as extra limbs, and their possible applications [3, 4].
Extra limbs offer the possibility of increasing the workspace, dexterity and strength of the user while reducing fatigue [5]. However, extra limbs require new control strategies because they perform actions that the users have not experienced. Control strategies of extra limbs proposed previously can be divided into two approaches. The first approach is indirect control, in which the artificial limb is moved in synergy with other movements of the user [6] or by predicting the intended movement [7]. The second approach is direct control, in which the artificial limb mimics the motion of a similar limb of the user [8].
It has been shown that artificial limbs can be incorporated into the body schema of users when replacing lost limbs as prostheses [9], or as supernumerary limbs [10]. In particular, the use of somatosensory feedback increases the sense of embodiment [11] and the performance of grasping motions [12]. This embodiment means that the user can have an accurate idea of the position of the artificial limb without visual information.
One example of these possible extra limbs is a supernumerary finger [8]. Supernumerary fingers have been studied as a way to increase grasping range and the ability to perform activities with one hand [13]. They have also been studied to help the recovery of stroke patients with somatosensory feedback [14] and without it [15].
Although there are several approaches to controlling extra-limb devices, it has not been elucidated how these different control strategies affect the embodiment of the artificial limbs, or how they modify the body schema of the users [2]. In this paper, we focus on the effect of controlling a supernumerary limb by direct control from the motion of a similar limb with somatosensory feedback, and its effect on the body schema of the users.
We develop a sixth finger device system as an extension of the human body, a thumb motion capture device to control the movements of the device, and an electrical stimulation device to convey contact information. We then perform experiments using the sixth finger device with somatosensory feedback over a period of 1 month with varying rest times between sessions, with and without visual feedback. The sixth finger device is controlled by mirroring the movement of the thumb of the opposite hand, while somatosensory feedback of contact information is given via electrostimulation. From the experimental results, the performance on a reaching task and the modification of self-perception of the controlling finger after each session are evaluated. Finally, we discuss the ability of the brain to adapt to different scenarios and geometries of the body through these evaluations.
The notion of the body schema was first put forward by Holmes and Head in 1911 [4]. Humans can intuitively perceive the position and movement of their own body without looking, because experience about the body has been accumulated as a mental model over time. The body schema is the map of our own body generated from this previous experience. When we move or perceive our posture, the body schema is referenced unconsciously. The body schema has various properties [16]. Adaptability is one of these properties. For example, when we use tools, the body schema changes to take the tools into consideration [17]. We can use tools dexterously owing to this property. When we stop using these tools, the body schema is restored to its initial state. We speak of embodiment when the prolonged use of such tools writes them strongly into the body schema.
Supramodal representation is another important property of the body schema. The body schema integrates a variety of sensory feedback, usually called afferent input, such as the position sense and tactile sense obtained from peripheral nerves. This property makes it possible for us to identify the position of a stimulus without looking at it directly. Moreover, the body schema generates an efferent copy when we perform an activity [18]. The efferent copy is an expected image generated before we move, so we can anticipate the result of a movement before we actually move. The body schema associates the afferent input with the efferent copy, and the difference between them is used to update the body schema. The difference between the performed motion and the expected motion is inversely proportional to the accuracy of the efferent copy and the body schema.
Somatosensory feedback has been shown to be an important factor in the performance of different interfaces [19] and artificial limbs [20]. In this study, we provide pseudo-tactile feedback using electrical stimulation. The use of electrical stimulation has been shown to simulate contact and tactile feedback [21]. This pseudo afferent input increases the embodiment of the extra robotic thumb. As a result, a more accurate efferent copy will be generated, and the operability of the extra robotic thumb will be improved.
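As an illustration of the efferent-copy mechanism described above, the following minimal sketch shows a forward model that predicts the sensory outcome of a motor command, compares the prediction with the actual afferent feedback, and uses the prediction error to update the internal model. The linear model, gain values and learning rate are illustrative assumptions only, not a model taken from the works cited here.

```python
# Toy forward-model ("efferent copy") update: predict the sensory outcome of a motor
# command, compare with the actual afferent feedback, and adapt the internal model
# with the prediction error. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_gain = 1.3          # how the (unknown) body actually responds to a command
model_gain = 1.0         # internal model ("body schema") starts out inaccurate
learning_rate = 0.1

for trial in range(50):
    command = rng.uniform(-1.0, 1.0)                        # efferent motor command
    predicted = model_gain * command                        # efferent copy: predicted outcome
    actual = true_gain * command + rng.normal(scale=0.01)   # afferent feedback
    error = actual - predicted                              # mismatch drives adaptation
    model_gain += learning_rate * error * command

print(f"adapted model gain: {model_gain:.3f} (true gain: {true_gain})")
```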
Extra robotic thumb (ERT)
Figure 1 depicts the newly developed extra robotic thumb (ERT), a supernumerary limb device used to simulate an extra finger on the hand. The ERT attaches to the ulnar side of the palm to produce the feeling of having a second thumb on the hand. The ERT is positioned so that the robotic thumb mirrors the position of the real thumb on the opposite side of the hand.
The ERT is designed to resemble a thumb in its size, movement and degrees of freedom. It consists of three links and an attachment to the hand. The robotic thumb is made of ABS resin, and the total weight of the device is 61 g. The movable range of the robotic thumb is sufficient to touch the five fingertips of the hand. The Denavit–Hartenberg parameters of the kinematic chain are shown in Table 1. The base reference frame is located at skin level on the ulnar side of the palm. Three servomotors (JR Propo, DS318) are attached, one at each joint. The maximum torque of each servomotor is 1.8 kgf·cm, and they are controlled via an Mbed microcomputer. Using these servomotors, the ERT can produce up to 3.2 N of force in grasping actions. A pressure sensor (Optoforce, OMD-20-SE-40N) is installed at the tip of the finger in order to feed back contact force information to the user.
Extra robotic thumb
Table 1 Denavit–Hartenberg parameters of the extra robotic thumb [22] (column headings: \(\theta\), r (mm), \(\alpha\))
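To illustrate how joint-angle commands of the kind parameterized in Table 1 could be turned into servomotor commands, the sketch below maps a desired joint angle to an RC-servo pulse width. The 50 Hz frame and the 500-2500 microsecond pulse range are typical hobby-servo values assumed for illustration; they are not taken from the paper or from the DS318 specifications.

```python
# Sketch of turning a joint angle command into a hobby-servo PWM pulse width.
# The 50 Hz frame and the 500-2500 us pulse range are assumed, typical values,
# not specifications of the JR Propo DS318 used in the actual device.

def angle_to_pulse_us(angle_deg: float,
                      min_us: float = 500.0,
                      max_us: float = 2500.0,
                      travel_deg: float = 180.0) -> float:
    """Map a joint angle in [0, travel_deg] to a servo pulse width in microseconds."""
    angle_deg = max(0.0, min(travel_deg, angle_deg))  # clamp to the servo's range
    return min_us + (max_us - min_us) * angle_deg / travel_deg

# Example: three joint commands of the kind produced by the inverse kinematics step.
for name, theta in [("theta1", 30.0), ("theta2", 75.0), ("theta3", 120.0)]:
    print(f"{name}: {theta:6.1f} deg -> {angle_to_pulse_us(theta):7.1f} us pulse at 50 Hz")
```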
Thumb motion capture
The TMC is a device attached to the posterior side of the right hand and the right wrist, and it is used to capture the movements of the right thumb. Figure 2 depicts an overview of the device. This device is designed to follow the movements of the thumb and record the position of the tip of the thumb with respect to its base. It is designed so as not to interfere with the movements of the thumb.
The TMC consists of a base attached to the back of the hand and a 4-link mechanism that follows the movement of the thumb without perturbing it. The Denavit–Hartenberg parameters of the kinematic chain are shown in Table 2. The base reference frame is located at the carpometacarpal joint of the thumb. Each joint of the TMC device has a potentiometer to measure its angle, from which the position of the tip of the thumb with respect to the hand is calculated. The position of the tip of the thumb is then sent to the Mbed microcomputer that controls the movement of the ERT.
Table 2 Denavit–Hartenberg parameters of the thumb motion capture [22]
Electrical stimulation device
Surface electrical stimulation has been shown to be a reliable method for feeding back tactile information to users [21]. Electrodes can be very light and easily attached to the location where the stimulation is desired without interfering with the activity of the user. In this study, we attached the electrodes to the tip of the right thumb. Electrical stimulation activates several skin mechanoreceptors depending on the frequency and intensity of the stimulation. With the right parameters, a user can easily and comfortably discriminate between the presence and absence, or the strength, of the stimulation. The developed electrical stimulation system consists of an electric stimulator, a current amplifier, an I/O board, a signal multiplexor, a switching circuit, and electrodes, as shown in Fig. 3.
Human skin has four main types of mechanoreceptors: Pacinian corpuscles, Meissner's corpuscles, Merkel's discs, and Ruffini endings. This electrical stimulation system mainly stimulates Meissner's corpuscles, which play a role in perceiving tactile information. The frequency of the stimulation is 50 Hz, which effectively stimulates Meissner's corpuscles [23]. The maximum applied current is 10 mA, and stimulation is provided in pulses with a 1% duty cycle.
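The parameters above fix the pulse timing: at 50 Hz with a 1% duty cycle, each pulse lasts 200 microseconds and pulses repeat every 20 ms. The sketch below computes this timing and shows one plausible way of mapping the measured contact force to a current amplitude capped at 10 mA; the linear force-to-current mapping and the 3.2 N full-scale force are assumptions for illustration, not the calibration used in the actual system.

```python
# Pulse timing from the stated parameters (50 Hz, 1% duty cycle, 10 mA maximum),
# plus an assumed linear mapping from fingertip contact force to current amplitude.

FREQ_HZ = 50.0
DUTY = 0.01
MAX_CURRENT_MA = 10.0
FULL_SCALE_FORCE_N = 3.2          # assumed: the ERT's maximum grasping force

pulse_width_s = DUTY / FREQ_HZ     # 0.01 / 50 = 200 us per pulse
period_s = 1.0 / FREQ_HZ           # 20 ms between pulse onsets

def force_to_current_ma(force_n: float) -> float:
    """Map a fingertip contact force to a stimulation current (assumed linear map)."""
    force_n = max(0.0, min(FULL_SCALE_FORCE_N, force_n))
    return MAX_CURRENT_MA * force_n / FULL_SCALE_FORCE_N

print(f"pulse width: {pulse_width_s * 1e6:.0f} us every {period_s * 1e3:.0f} ms")
print(f"1.6 N contact -> {force_to_current_ma(1.6):.1f} mA")
```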
Control method
Figure 4 shows the whole control architecture of the system. To control the system, users attach the thumb motion capture to the right hand and the electrical stimulation electrodes to the tip of the right thumb.
Control diagram of the sixth finger system
The angle information of each joint is used to estimate the position (x, y, z) of the thumb tip with respect to its base using Eq. 1 and the Denavit–Hartenberg parameters of the TMC. This position is then used to move the extra robotic thumb in a similar fashion by calculating the angles (\(\theta _1, \theta _2, \theta _3\)) that solve Eq. 2 with the ERT parameters. Thus, the tip of the robotic thumb takes the same position relative to its base as the right-hand thumb tip does relative to its own base. In addition, contact force information is conveyed to the fingertip of the right thumb via electrical stimulation, so the user is aware of when the robotic thumb tip makes contact with something and of the force of this contact.
$$\tag{1} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \prod_{n=1}^{5} \begin{bmatrix} \cos \theta _n & -\sin \theta _n \cos \alpha _n & \sin \theta _n \sin \alpha _n & r_n \cos \theta _n \\ \sin \theta _n & \cos \theta _n \cos \alpha _n & -\cos \theta _n \sin \alpha _n & r_n \sin \theta _n \\ 0 & \sin \alpha _n & \cos \alpha _n & d_n \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

$$\tag{2} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix} = \prod_{n=1}^{3} \begin{bmatrix} \cos \theta _n & -\sin \theta _n \cos \alpha _n & \sin \theta _n \sin \alpha _n & r_n \cos \theta _n \\ \sin \theta _n & \cos \theta _n \cos \alpha _n & -\cos \theta _n \sin \alpha _n & r_n \sin \theta _n \\ 0 & \sin \alpha _n & \cos \alpha _n & d_n \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
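The following sketch illustrates the mirroring pipeline of Eqs. (1) and (2): the thumb-tip position is obtained by chaining standard Denavit–Hartenberg transforms, and the three ERT joint angles are then found numerically so that the robotic thumb tip reaches the same position relative to its own base. The DH parameter values below are placeholders (the real values are those of Tables 1 and 2), and the least-squares solver is only one possible way of inverting Eq. (2), not necessarily the method implemented on the Mbed microcomputer.

```python
# Forward kinematics by chaining Denavit-Hartenberg transforms (as in Eqs. (1)-(2)),
# followed by a numerical inversion of Eq. (2) for the three ERT joint angles.
# The DH rows below are placeholder values, not the paper's actual parameters.
import numpy as np
from scipy.optimize import least_squares

def dh_transform(theta, d, r, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, r * ct],
                     [st,  ct * ca, -ct * sa, r * st],
                     [0.0,      sa,       ca,       d],
                     [0.0,     0.0,      0.0,     1.0]])

def tip_position(joint_angles, dh_rows):
    """Chain the per-joint transforms and return the tip position (x, y, z)."""
    T = np.eye(4)
    for q, (theta_offset, d, r, alpha) in zip(joint_angles, dh_rows):
        T = T @ dh_transform(theta_offset + q, d, r, alpha)
    return T[:3, 3]

# Assumed ERT geometry, one (theta offset, d, r, alpha) row per joint (placeholders).
ERT_DH = [(0.0, 0.0, 30.0, np.pi / 2),
          (0.0, 0.0, 45.0, 0.0),
          (0.0, 0.0, 35.0, 0.0)]

def solve_ert_angles(target_xyz, initial_guess=(0.1, 0.3, 0.3)):
    """Numerically solve Eq. (2): find joint angles whose tip matches target_xyz."""
    residual = lambda q: tip_position(q, ERT_DH) - np.asarray(target_xyz, dtype=float)
    return least_squares(residual, x0=np.asarray(initial_guess, dtype=float)).x

# Demo: generate a reachable target with a forward pass, then recover joint angles for it.
captured_tip = tip_position([0.2, 0.4, 0.3], ERT_DH)   # stands in for the TMC measurement
angles = solve_ert_angles(captured_tip)
print("ERT joint angles (rad):", np.round(angles, 3))
print("reached tip (mm):", np.round(tip_position(angles, ERT_DH), 1))
```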
We conducted experiments on a finger-reaching task, with and without visual feedback of the movement of the ERT, in order to evaluate the level of embodiment of the ERT.
The task consists of asking the user to move the ERT to touch the tip of one of four fingers of the left hand (index, middle, ring or pinkie). The finger to touch is randomly selected and presented to the user on a screen located in front of the user (Fig. 5). Once the user moves the ERT to touch a finger, a new target is presented, and a hit is recorded if the user touched the correct finger, or a miss if any of the other three fingers was touched. One round lasts 30 s, and users were asked to try to obtain as many hits as comfortably possible during this time. Four male subjects (average age 24 years, variance 1.6) were divided into two groups randomly. The first group could see their left hand and the ERT during the experiment. The second group had their vision obstructed on the left side and were not able to see their left hand during the experiment. Both groups had their vision obstructed on the right side and were not able to see their right hand during the experiment, as shown in Figs. 6 and 7. Experiments were performed in sessions of 10 rounds over a period of one month with varying times between sessions. Since subjects were not able to see their right hand during the experiment, we measured any change in the perceived location of the right thumb by asking the subjects to point to the right thumb tip with the left index finger while their eyes were closed, before and after each session.
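The bookkeeping of one round can be sketched as follows: a target finger is drawn at random, shown to the subject, and every touch is logged as a hit or a miss until 30 s have elapsed. The timing, randomisation and touch detection below are simplified stand-ins for illustration, not the software actually used in the experiments.

```python
# Sketch of one 30-second round of the reaching task: draw a random target finger,
# log each touch as hit or miss, then draw a new target. The touch detection here is
# a random stand-in for the real sensor reading.
import random
import time

FINGERS = ["index", "middle", "ring", "pinkie"]
ROUND_DURATION_S = 30.0

def wait_for_touch():
    """Placeholder: block until the ERT tip touches a finger and return which one."""
    time.sleep(0.5)                      # stand-in for the subject's movement time
    return random.choice(FINGERS)        # stand-in for the real touch detection

def run_round():
    hits = misses = 0
    start = time.monotonic()
    target = random.choice(FINGERS)
    print("target:", target)
    while time.monotonic() - start < ROUND_DURATION_S:
        touched = wait_for_touch()
        if touched == target:
            hits += 1
        else:
            misses += 1
        target = random.choice(FINGERS)  # new target after every touch
        print("target:", target)
    return hits, misses

if __name__ == "__main__":
    h, m = run_round()
    print(f"round finished: {h} hits, {m} misses")
```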
Screen shot of the interface that the subjects see during experiment
First group condition, possible to see left hand
Second group condition, cannot see left hand
Experimental results of reaching task
The experiment was performed five times with rest times of 1 day between sessions, then four times with rest times of 1 week, and finally one last time after 1 month had passed since the last session, in order to see whether progress is affected or performance ability is forgotten over time.
Figure 8 shows the number of hits and misses of both groups. It is possible to see that the performance of both groups increases with practice and reaches a steady level after a couple of weeks. The performance of the group that was not allowed to see their left hand is slightly better by the end of the experiment: the reach count in the last session is 4% higher for the obstructed-vision group, which indicates a stronger sense of embodiment of the device.
Number of correct and incorrect fingers touched in each of the 10 rounds per session for both groups. In gray is marked the rest time between sessions
Experimental results of proprioceptive drift
The subjects were asked to point, as closely as possible, to the position of the right thumb tip with the left index finger while keeping their eyes closed. The subjects performed this activity before and after each session so that we could see whether the use of the sixth finger device had any effect on the self-perception of the subject.
Figure 9 shows the proprioceptive drift, that is, the difference between the distance from the pointed location to the tip of the thumb after each session and the corresponding distance before each session. The proprioceptive drift is always positive, meaning that the pointed location is always further from the tip of the thumb after each session. We believe the results are caused by modification of the subjects' body schema through relating the action of the right thumb to the movement of the sixth finger on the left hand. This may indicate a transfer of self-perception from the right thumb to the extra finger, analogous to the rubber hand illusion but actively created by the user's actions.
Proprioceptive drift, given by the difference between the distances from the pointed location (where the subject believes their right thumb tip to be) to the actual thumb tip before and after each session
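The drift measure itself reduces to a simple computation: the distance from the pointed location to the true thumb-tip location is taken before and after a session, and the drift is the after-minus-before difference. The coordinates below are made-up numbers used only to show the computation.

```python
# Drift = (pointing error after the session) - (pointing error before the session).
# The coordinates are invented for illustration; they are not data from the study.
import numpy as np

thumb_tip = np.array([0.0, 0.0, 0.0])          # true right-thumb-tip position (mm)
pointed_before = np.array([8.0, 5.0, 3.0])     # pointed location before the session (mm)
pointed_after = np.array([14.0, 7.0, 4.0])     # pointed location after the session (mm)

d_before = np.linalg.norm(pointed_before - thumb_tip)
d_after = np.linalg.norm(pointed_after - thumb_tip)
print(f"proprioceptive drift: {d_after - d_before:+.1f} mm")
```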
The experimental results of the reaching task indicate that the number of channels and the type of feedback are important factors in the level of embodiment that a supernumerary limb device can achieve in a set amount of time. From previous results [8] we know that the presence of somatosensory feedback is an important factor for the performance of this kind of device. We assume that the level of embodiment is directly related to the performance in reaching tasks via the accuracy of the efferent copy of the body schema of the user-device system. If the movements predicted by the feedforward model of the user match the real actions of the sixth finger, then the user will be able to reach more targets in a given time. In our experiments, the group without visual feedback from the sixth finger had a slightly better performance, suggesting that they relied less on visual information and more on tactile information, which would have helped in creating a better efferent copy of the sixth finger system.
The experimental results of the proprioceptive drift indicate that the use of direct control to drive a supernumerary limb affects not only the creation of a new body representation of the supernumerary limb, but also modifies the existing body representation of the controlling limb. In this case, the self-image of the position of the fingertip is modified during the course of the experiment, pointing to the plasticity of the brain to modify the existing body schema.
There is a slight correlation between performance and proprioceptive drift, which could indicate that the process of creating the new body representation of the sixth finger from the movement of the right thumb is related to the accuracy of the right thumb's body representation. This could be caused by the brain using the efferent information of the right thumb to create the efferent copy of the sixth finger rather than of its own movement, degrading the efferent copy of the real thumb.
The decrease in accuracy of the body representation of the right thumb is smaller than the increase in performance of the sixth finger, meaning that the brain is able to adapt to a more complex body schema, introducing new limbs without seriously sacrificing existing parts of the body representation. The limit of the complexity of the achievable body schema remains an open research question.
It has been shown that patients recovering from cervical injuries [24] or stroke [25] can regain part of the body schema lost due to injury or disease [26]. The use of extra robotic fingers to help patients recovering from stroke has also been studied [27]. Our results indicate that it might be possible to help these kinds of patients achieve a new body schema by sacrificing another part of the body schema, specifically a part that is no longer in use.
We presented a sixth finger system composed of an extra robotic thumb attached to the left hand, which simulates a second thumb on that hand and has a force sensor at its tip; a thumb motion capture device, which attaches to the right hand and captures the movements and position of the tip of the thumb on that hand in order to mirror its movements onto the extra robotic thumb; and an electrostimulation device that provides pseudo-tactile feedback from the force sensor at the tip of the robotic thumb to the tip of the right thumb.
We evaluated the performance of this sixth finger device with somatosensory feedback using a pointing task, comparing two groups of subjects: the subjects of the first group were allowed to see the sixth finger device during the test, and the subjects of the second group were not. The movement of the device was controlled by the thumb of the opposite hand, and the contact information was conveyed via electric stimulation from the tip of the extra robotic thumb to the tip of the finger controlling the movement. The experimental results show that the group without visual feedback had a better performance than the group with visual feedback; this indicates that the subjects of this group achieved a better embodiment of the system in their body schema.
In addition, we measured the proprioceptive drift of the controlling thumb by asking the user to point to the location of the tip of the right thumb before and after the experiment. The results show that the proprioceptive drift is always positive, meaning that the accuracy is always lower after the experiment. These results also indicate that the sixth finger is not just embodied, but that controlling the device in a direct manner also modifies the controlling thumb's schema; there is a slight correlation between performance and proprioceptive drift. The results also suggest that the brain is able to adapt to different scenarios and geometries of the body, even those with supernumerary limbs that were not present before, and that the body schema of existing limbs can be modified via somatosensory feedback. The body schema is modified in a way that resembles a transfer of embodiment from the controlling limb to the new supernumerary one, by using the efferent information from one movement to control a new limb and providing afferent information from the actions of the new limb.
Proper embodiment of artificial limbs means that users can control them without visual feedback, and embodiment of extra limbs increases their usefulness and usability as extensions of the human body. This technique can be used to extend the body schema of the user with the addition of supernumerary limbs, or to help recovering patients regain a lost body schema.
As future work, the effect of different kinds of interfaces on the embodiment of these kinds of systems should be compared. Furthermore, usability cases of these kinds of devices in daily life, work, or medical treatments could be studied. The level of embodiment achievable with extra robotic limbs whose geometry is not similar to the human body also remains to be researched.
YH and MS initiated the research and designed and performed the experiments. NSM and MS designed and built the devices. NSM performed the data analysis and the interpretation of the results and wrote the manuscript with help and review from TA. All authors read and approved the final manuscript.
The dataset supporting the conclusions of this article is included within the article and its additional files.
This work was supported by JSPS KAKENHI Grant Number JP17H05906.
Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan
Department of Micro-Nano Systems, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan
Murray CD (2004) An interpretative phenomenological analysis of the embodiment of artificial limbs. Disabil Rehabil 26(16):963–973
Murray CD (2008) "Embodiment and Prosthetics" in psychoprosthetics. Springer London, London, pp 119–129
Parietti F, Chan K, Hunter B, Asada H (2015) Design and control of supernumerary robotic limbs for balance augmentation. In: IEEE international conference on robotics and automation (ICRA), Seattle, USA, May 2015
Schaefer M, Heinze HJ, Rotte M (2009) My third arm: shifts in topography of the somatosensory homunculus predict feeling of an artificial supernumerary arm. Hum Brain Mapp 30:1413–1420
Llorens-Bonilla B, Parietti F, Asada HH (2012) Demonstration-based control of supernumerary robotic limbs. In: IEEE/RSJ international conference on intelligent robots and systems, 2012, IEEE, Vilamoura, pp 3936–3942
Wu F, Asada H (2014) Bio-artificial synergies for grasp posture control of supernumerary robotic fingers. In: Robotics: science and systems, MIT Press, Berkeley, pp 12–16
Wu FY, Asada HH (2016) Implicit and intuitive grasp posture control for wearable robotic fingers: a data-driven method using partial least squares. IEEE Trans Robot 32:176–186
Sobajima M, Sato Y, Xufeng W, Hasegawa Y (2015) Improvement of operability of extra robotic thumb using tactile feedback by electrical stimulation. MHS
Holmes NP, Spence C (2004) The body schema and the multisensory representation(s) of peripersonal space. Cognit Process 5(2):94–105. https://doi.org/10.1007/s10339-004-0013-3
Di Pino G, Maravita A, Zollo L, Guglielmelli E, Di Lazzaro V (2014) Augmentation-related brain plasticity. Front Syst Neurosci 8:109. https://doi.org/10.3389/fnsys.2014.00109
Mulvey MR, Fawkner HJ, Radford HE, Johnson MI (2012) Perceptual embodiment of prosthetic limbs by transcutaneous electrical nerve stimulation. Neuromodulation 15(1):42–46 (discussion 47)
Bicchi A, Salisbury JK, Dario P (1989) Augmentation of grasp robustness using intrinsic tactile sensing. In: Proceedings, 1989 international conference on robotics and automation, Scottsdale, AZ, pp 302–307
Prattichizzo D, Malvezzi M, Hussain I, Salvietti G (2014) The sixth-finger: a modular extra-finger to enhance human hand capabilities. In: The 23rd IEEE international symposium on robot and human interactive communication, Edinburgh, pp 993–998
Hussain I, Salvietti G, Meli L, Pacchierotti S, Prattichizzo D (2015) Using the robotic sixth finger and vibrotactile feedback for grasp compensation in chronic stroke patients. In: Proceedings IEEE/RAS-EMBS international conference on rehabilitation robotics (ICORR)
Salvietti G, Hussain I, Cioncoloni D, Taddei S, Rossi S, Prattichizzo D (2017) Compensating hand function in chronic stroke patients through the robotic sixth finger. IEEE Trans Neural Syst Rehabil Eng 25(2):142–150
Morasso P et al (2017) Revisiting the body-schema concept in the context of whole-body postural-focal dynamics. Front Hum Neurosci 9:83
Shokur S et al (2013) Expanding the primate body schema in sensorimotor cortex by virtual touches of an avatar. Proc Natl Acad Sci USA 110(37):15121–15126
Ramsey R, Cross ES, de Hamilton Antonia AF (2013) Supramodal and modality-sensitive representations of perceived action categories in the human brain. Exp Brain Res 230:345–357
Chatterjee A, Aggarwal V, Ramos A, Acharya S, Thakor NV (2007) A brain-computer interface with vibrotactile biofeedback for haptic information. J NeuroEng Rehabil 4(1):40
Bay JS (1989) Tactile shape sensing via single- and multifingered hands. In: Proceedings, 1989 international conference on robotics and automation, vol. 1, Scottsdale, AZ, pp 290–295
Hasegawa Y, Ozawa K (2014) Pseudo-somatosensory feedback about joints angle using electrode array. In: IEEE/SICE international symposium on system integration (SII), pp 644–649
Paul Richard (1981) Robot manipulators: mathematics, programming, and control: the computer control of robot manipulators. MIT Press, Cambridge
Purves D, Augustine GJ, Fitzpatrick D, et al., editors (2001) Neuroscience, 2nd edition. Sinauer Associates, Sunderland. Mechanoreceptors specialized to receive tactile information. https://www.ncbi.nlm.nih.gov/books/NBK10895/
Perreault EJ, Crago PE, Kirsch RF (2001) Postural arm control following cervical spinal cord injury. IEEE Trans Neural Syst Rehabil Eng 9(4):369–377
Volpe BT, Krebs HI, Hogan N (2001) Is robot-aided sensorimotor training in stroke rehabilitation a realistic option? Curr Opin Neurol 14(6):745–752
Webb J, Xiao ZG, Aschenbrenner KP (2012) Towards a portable assistive arm exoskeleton for stroke patient rehabilitation controlled through a brain computer interface. In: 4th IEEE RAS and EMBS international conference on biomedical robotics and biomechatronics, Rome, Italy, pp 1299–1304
Chiri A, Vitiello N, Giovacchini F, Roccella S, Vecchi F, Carrozza MC (2012) Mechatronic design and characterization of the index finger module of a hand exoskeleton for post-stroke rehabilitation. IEEE/ASME Trans Mech 17(5):884–894
Comparative analysis of differential gene expression analysis tools for single-cell RNA sequencing data
Tianyu Wang, Boyang Li, Craig E. Nelson & Sheida Nabavi (ORCID: orcid.org/0000-0002-5996-1020)
BMC Bioinformatics volume 20, Article number: 40 (2019)
The analysis of single-cell RNA sequencing (scRNAseq) data plays an important role in understanding the intrinsic and extrinsic cellular processes in biological and biomedical research. One significant effort in this area is the detection of differentially expressed (DE) genes. scRNAseq data, however, are highly heterogeneous and have a large number of zero counts, which introduces challenges in detecting DE genes. Addressing these challenges requires employing new approaches beyond the conventional ones, which are based on a nonzero difference in average expression. Several methods have been developed for differential gene expression analysis of scRNAseq data. To provide guidance on choosing an appropriate tool or developing a new one, it is necessary to evaluate and compare the performance of differential gene expression analysis methods for scRNAseq data.
In this study, we conducted a comprehensive evaluation of the performance of eleven differential gene expression analysis software tools, which are designed for scRNAseq data or can be applied to them. We used simulated and real data to evaluate the accuracy and precision of detection. Using simulated data, we investigated the effect of sample size on the detection accuracy of the tools. Using real data, we examined the agreement among the tools in identifying DE genes, the run time of the tools, and the biological relevance of the detected DE genes.
In general, agreement among the tools in calling DE genes is not high. There is a trade-off between true-positive rates and the precision of calling DE genes. Methods with higher true positive rates tend to show low precision due to their introducing false positives, whereas methods with high precision show low true positive rates due to identifying few DE genes. We observed that current methods designed for scRNAseq data do not tend to show better performance compared to methods designed for bulk RNAseq data. Data multimodality and abundance of zero read counts are the main characteristics of scRNAseq data, which play important roles in the performance of differential gene expression analysis methods and need to be considered in terms of the development of new methods.
Next generation sequencing (NGS) [1] technologies greatly promote research in genome-wide mRNA expression data. Compared with microarray technologies, NGS provides higher resolution data and more precise measurement of levels of transcripts for studying gene expression. Through downstream analysis of RNA sequencing (RNAseq) data, gene expression levels reveal the variability between different samples. Typically, in RNAseq data analysis, the expression value of a gene from one sample represents the mean of all expression values of the bulk population of cells. Although it is common to use expression values on such a bulk scale in certain situations [2,3,4], it is not sufficient to employ bulk RNAseq data for other biological research that involves, for example, studying circulating tumor cells [5] and stem cells. Consequently, analyzing gene expression values on the single-cell scale provides deep insight into the interplay between intrinsic cellular processes and stochastic gene expression in biological and biomedical research [6,7,8,9]. For example, single-cell data analysis is important in cancer studies, as differential gene expression analysis between different cells can help to uncover driver genes [10].
Tools developed for differential gene expression analysis on bulk RNAseq data, such as DESeq [11] and edgeR [12], can be applied to single-cell data [11,12,13,14,15,16,17,18,19,20]. Single-cell RNAseq (scRNAseq) data, however, have different characteristics from those of bulk RNAseq data that require the use of a new differential expression analysis definition, beyond the conventional definition of a nonzero difference in average expression. In scRNAseq data, due to the tiny number and low capture efficiency of RNA molecules in single cells [6], many transcripts tend to be missed during the reverse transcription. As a result, we may observe that some transcripts are highly expressed in one cell but are missed in another cell. This phenomenon is defined as a "drop-out" event [21]. Recent studies have shown that gene expression in a single cell is a stochastic process and that gene expression values in different cells are heterogeneous [22, 23], which results in multimodality in expression values in different cells. For example, cells from the same brain tissue or the same tumor [24] pose huge heterogeneity from cell to cell [24,25,26,27,28]. Even though they are from the same tissue, these cells are different in regard to cell types, biological functions, and response to drugs. Therefore, unlike bulk RNAseq data, scRNAseq data tend to exhibit an abundance of zero counts, a complicated distribution, and huge heterogeneity. Examples of distributions of scRNAseq expression values between two conditions are shown in Fig. 1. Consequently, the heterogeneity within and between cell populations manifests major challenges to the differential gene expression analysis in scRNAseq data.
Distributions of gene expression values of a total of 92 cells in two groups (ES and MEF) using real data show that scRNAseq data exhibit (a) different types of multimodality (DU, DP, DM, and DB) and (b) large amounts of zero counts. The x axis represents log-transformed expression values. To clearly show the multimodality of scRNAseq data, zero counts are removed from the distribution plots in (a)
To address the challenges of multimodal expression values and/or drop-out events, new strategies and models [21, 29,30,31,32,33,34,35,36,37] have been proposed for scRNAseq data. Single-cell differential expression (SCDE) [21] and model-based analysis of single-cell transcriptomics (MAST) [29] use a two-part joint model to address zero counts; one part corresponds to the normal observed genes, and the other corresponds to the drop-out events. Monocle2 [38] is updated from the previous Monocle [32] and employs census counts rather than normalized transcript counts as input to better normalize the counts and eliminate variability in single-cell experiments. A recent approach, termed scDD [39], considers four different modality situations for gene expression value distributions within and across biological conditions. DEsingle employs a zero-inflated negative binomial (ZINB) regression model to estimate the proportion of the real and drop-out zeros and classifies the differentially expressed (DE) genes into three categories. Recently, nonparametric methods, SigEMD [37], EMDomics [31], and D3E [33], have been proposed for differential gene expression analysis of heterogeneous data. Without modeling the distributions of gene expression values and estimating their parameters, these methods identify DE genes by employing a distance metric between the distributions of genes in two conditions.
A few studies have compared differential expression analysis methods for scRNAseq data. Jaakkola et al. [40] compared five statistical analysis methods for scRNAseq data, three of which are for bulk RNAseq data analysis. Miao et al. [41] evaluated 14 differential expression analysis tools, three of which are newly developed for scRNAseq data and 11 of which are old methods for bulk RNAseq data. A recent comparison study [42] assessed six differential expression analysis tools, four of which were developed for scRNAseq and two of which were designed for bulk RNAseq. In this study, we consider all differential gene expression analysis tools that have been developed for scRNAseq data as of October 2018 (SCDE [21], MAST [29], scDD [39], D3E [33], Monocle2 [38], SINCERA [34], DEsingle [36], and SigEMD [37]). We also consider differential gene expression analysis tools that are designed for heterogeneous expression data (EMDomics [31]) and are commonly used for bulk RNAseq data (edgeR [4], DESeq2 [43]).
The goal of this study is to reveal the limitations of the current tools and to provide insight and guidance in regard to choosing a tool or developing a new one. In this work, we discuss the computational methods used by these tools and comprehensively evaluate and compare the performance of the tools in terms of sensitivity, false discover rate, and precision. We use both simulated and real data to evaluate the performance of the above-noted tools. To generate more realistic simulated data, we model both multimodality and drop-out events in simulated data. Using gold standard DE genes in both simulated and real data, we evaluate the accuracy of detecting true DE genes. In addition, we investigate the agreement among the methods in identifying significantly DE genes. We also evaluate the effect of sample size on the performance of the tools, using simulated data, and compare the runtimes of the tools, using real data. Finally, we perform gene-set enrichment and pathway analysis to evaluate the biological functional relevance of the DE genes identified by each tool.
As of October 2018, we have identified eight software tools for differential expression analysis of scRNAseq data, which are designed specifically for such data [21, 29, 30, 33, 34, 36,37,38] (SCDE, MAST, scDD, D3E, Monocle2, SINCERA, DEsingle, and SigEMD). We also considered tools designed for bulk RNAseq data that are widely used [4, 43] (edgeR, and DESeq2) or can apply to multimodal data [31] (EMDomics). The general characteristics of the eleven tools are provided in Table 1. MAST, scDD, EMDomics, Monocle2, SINCERA, and SigEMD use normalized TPM/FPKM expression values as input, while SCDE, D3E, and DEsingle use read counts obtained from RSEM as input. D3E runs on Python, while all other methods are developed as an R package. In the following sections, we provide the details of the tools.
Table 1 Software tools for identifying DE genes using scRNAseq data
Differential gene expression analysis methods for scRNAseq data
Single-cell differential expression (SCDE)
SCDE [21] utilizes a mixture probabilistic model for gene expression values. The observed read counts of genes are modeled as a mixture of drop-out events by a Poisson distribution and amplification components by a negative binomial (NB) distribution:
$$ \left\{\begin{array}{c}{r}_c\sim NB(e)\ \mathrm{for}\ \mathrm{normal}\ \mathrm{amplified}\ \mathrm{genes}\\ {}{r}_c\sim Poisson\left({\lambda}_0\right)\ \mathrm{for}\ \mathrm{drop}-\mathrm{out}\ \mathrm{genes}\end{array}\ \right., $$
where e is the expected expression value in cells when the gene is amplified, and λ0 is always set to 0.1. The posterior probability of a gene expressed at level x in cell c based on observed rc and the fitted model Ωc is calculated by:
$$ p\left(x|{r}_c,{\Omega}_c\right)={p}_d(x){p}_{Poisson}\left(x|{r}_c\right)+\left(1-{p}_d(x)\right){p}_{NB}\left(x|{r}_c\right), $$
where pd is the probability of a drop-out event in cell c for a gene expressed at an average level x, and ppoisson(x| rc) and pNB(x|rc) are the probabilities of observing expression value rc in the cases of drop-out (Poisson) and successful amplification (NB) of a gene expressed at level x in cell c, respectively. Then, after the bootstrap step, the posterior probability of a gene expressed at level x in a subpopulation of cells S is determined as an expected value:
$$ {p}_s(x)=E\left[{\prod}_{c\in B}p\left(x|{r}_c,{\Omega}_c\right)\right], $$
where B is the bootstrap samples of S. Based on the posterior probabilities of gene expression values in cells S and G, pS(x) and pG(x), SCDE uses a fold expression difference f in gene g for the differential expression analysis between subgroups S and G, which is determined as:
$$ p(f)={\sum}_{x\in X}{p}_S(x){p}_G(x), $$
where X is the expression range of the gene g. An empirical p-value is determined to test the differential expression.
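To make the structure of this mixture concrete, the sketch below is an illustrative toy example, not the SCDE implementation: it evaluates the posterior weight of the drop-out component versus the amplified component for a single observed count, assuming a fixed prior drop-out probability, the Poisson rate of 0.1 mentioned above, and hypothetical negative binomial parameters.

```r
# Toy illustration of a two-component (drop-out vs. amplified) mixture,
# in the spirit of SCDE; all parameter values are made up for illustration.
dropout_posterior <- function(r_c, p_d = 0.3, lambda0 = 0.1,
                              mu = 50, size = 2) {
  # Likelihood of the observed count under each component
  lik_dropout   <- dpois(r_c, lambda = lambda0)        # drop-out (Poisson)
  lik_amplified <- dnbinom(r_c, mu = mu, size = size)  # amplified (NB)
  # Posterior probability that the observation is a drop-out event
  p_d * lik_dropout / (p_d * lik_dropout + (1 - p_d) * lik_amplified)
}

dropout_posterior(0)   # a zero count is most likely a drop-out
dropout_posterior(40)  # a large count is almost surely amplified
```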
Model-based analysis of single-cell transcriptomics (MAST)
MAST [29] proposes a two-part generalized linear model for differential expression analysis of scRNAseq data. One part models the rate of expression level, using logistic regression:
$$ logit\left(p\left({Z}_{ig}=1\right)\right)={X}_i{\beta}_g^D, $$
where Z = [Zig] indicates whether gene g is expressed in cell i.
The other part models the positive expression mean, using a Gaussian linear model:
$$ p\left({Y}_{ig}=y|{Z}_{ig}=1\right)=N\left({X}_i{\beta}_g^C,{\sigma}_g^2\right), $$
where Y = [yig] is the expression level of gene g in cell i observed when Zig = 1. The cellular detection rate (CDR) for each cell, defined as CDRi = (1/N)∑g=1..N Zig (where N is the total number of genes), is introduced as a column in the design matrix Xi of the logistic regression model and the Gaussian linear model. For the differential expression analysis, a test with asymptotic chi-square null distribution is utilized, and a false discovery rate (FDR) adjustment [44] is used to decide whether a gene is differentially expressed.
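As a rough illustration of this two-part (hurdle) idea, the base-R sketch below fits, for a single gene, a logistic regression on the detection indicator and a Gaussian linear model on the positive expression values, both with condition and a CDR-like covariate. It is a simplified stand-in for MAST's implementation; the data and variable names are hypothetical.

```r
# Minimal two-part (hurdle) model for one gene, in the spirit of MAST.
# 'expr' is log expression of one gene across cells, 'condition' is a factor,
# 'cdr' is the fraction of detected genes per cell; all simulated here.
set.seed(1)
n_cells   <- 100
condition <- factor(rep(c("A", "B"), each = n_cells / 2))
cdr       <- runif(n_cells, 0.2, 0.8)
expr      <- ifelse(runif(n_cells) < cdr,   # detection probability tied to CDR
                    rnorm(n_cells, mean = ifelse(condition == "B", 2.5, 1.5)),
                    0)

detected <- as.numeric(expr > 0)

# Part 1: discrete component (is the gene detected?)
fit_disc <- glm(detected ~ condition + cdr, family = binomial)

# Part 2: continuous component (positive expression values only)
dat_pos  <- data.frame(expr = expr[expr > 0],
                       condition = condition[expr > 0],
                       cdr = cdr[expr > 0])
fit_cont <- lm(expr ~ condition + cdr, data = dat_pos)

summary(fit_disc)$coefficients["conditionB", ]
summary(fit_cont)$coefficients
```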
Bayesian modeling framework (scDD)
scDD [39] employs a Bayesian modeling framework to identify genes with differential distributions and to classify them into four situations: 1—differential unimodal (DU), 2—differential modality (DM), 3—differential proportion (DP), and 4—both DM and DU (DB), as shown in Additional file 1: Figure S1. The DU situation is one in which each distribution is unimodal but the distributions across the two conditions have different means. The DP situation involves genes with expression values that are bimodally distributed. The bimodal distribution of gene expression values in each condition has two modes with different proportions, but the two modes across the two conditions are the same. DM and DB situations both include genes whose expression values follow a unimodal distribution in one condition but a bimodal distribution in the other condition. The difference is that, in the DM situation, one of the modes of the bimodal distribution is equal to the mode of the unimodal distribution, whereas in the DB situation, there is no common mode across the two distributions.
Let Yg be the expression value of gene g in a collection of cells. The non-zero expression values of gene g are modeled as a conjugate Dirichlet process mixture (DPM) model of normals, and the zero expression values of gene g are modeled using logistic regression as a separate distributional component:
$$ \left\{\begin{array}{c}\mathrm{nonzero}\ {Y}_g\sim \mathrm{conjugate}\ \mathrm{DPM}\ \mathrm{of}\ \mathrm{normals}\\ {}\mathrm{zero}\ {Y}_g\sim \mathrm{logistic}\ \mathrm{regression}\end{array}\right. $$
For detecting the DE genes, a Bayes factor for gene g is determined as:
$$ {BF}_g=\frac{f\left({Y}_g|{M}_{DD}\right)}{f\left({Y}_g|{M}_{ED}\right)}, $$
where f(Yg| MDD) is the predictive distribution of the observed expression value from gene g under a given hypothesis, MDD denotes the differential distribution hypothesis, and MED denotes the equivalent distribution hypothesis that ignores conditions. As there is no solution for the Bayes factor BFg, a closed form is calculated to present the evidence of whether a gene is differentially expressed:
$$ Scor{e}_g=\log \frac{f\left({Y}_g,{Z}_g|{M}_{DD}\right)}{f\left({Y}_g,{Z}_g|{M}_{ED}\right)}=\log \frac{f_{C1}\left({Y}_g^{C1},{Z}_g^{C1}\right)\,{f}_{C2}\left({Y}_g^{C2},{Z}_g^{C2}\right)}{f_{C1,C2}\left({Y}_g,{Z}_g\right)}, $$
where Zg is the vector of the mean and the variance for gene g, and C1 and C2 represent the two conditions.
EMDomics
EMDomics [31], a nonparametric method based on Earth Mover's Distance (EMD), is proposed to reflect the overall difference between two normalized distributions by computing the EMD score for each gene and estimating FDRs. Suppose P = {(p1,wp1),(p2,wp2)…(pm,wpm)} and Q = {(q1,wq1),(q2,wq2)… (qn,wqn)} are two signatures, where pi and qj are the centers of the histogram bins, and wpi and wqj are the weights of the histogram bins. The COST is defined as the sum of the products of the flow fij and the distance dij:
$$ COST\left(P,Q,F\right)={\sum}_{i=1}^m{\sum}_{j=1}^n{f}_{ij}{d}_{ij}, $$
where dij is the Euclidean distance between pi and qj, and fij is the amount of weight that needs to be moved between pi and qj. An optimization algorithm is used to find a flow F = [fij] between pi and qj that minimizes the COST. After that, the EMD score is calculated as the normalized minimum COST.
$$ EMD\left(P,Q\right)=\frac{\sum_{i=1}^m{\sum}_{j=1}^n{f}_{ij}{d}_{ij}}{\sum_{i=1}^m{\sum}_{j=1}^n{f}_{ij}} $$
A q-value, based on the permutations of FDRs, is introduced to describe the significance of the score for each gene.
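For one-dimensional expression values, the earth mover's distance can be approximated directly from the empirical CDFs of the two groups, and a permutation scheme gives an empirical null; this is the general idea behind the EMD score and its q-values. The sketch below is a simplified illustration with hypothetical data, not the EMDomics implementation.

```r
# 1-D earth mover's distance between two samples, approximated on a common grid
# as the area between their empirical CDFs, plus a simple permutation null.
emd_1d <- function(x, y, n_grid = 200) {
  grid <- seq(min(c(x, y)), max(c(x, y)), length.out = n_grid)
  sum(abs(ecdf(x)(grid) - ecdf(y)(grid))) * diff(range(grid)) / n_grid
}

set.seed(2)
group1 <- c(rnorm(40, 1), rnorm(20, 4))   # bimodal, hypothetical gene
group2 <- rnorm(60, 2)                    # unimodal

obs <- emd_1d(group1, group2)

# Permutation null: shuffle group labels and recompute the distance
pooled <- c(group1, group2)
perm   <- replicate(500, {
  idx <- sample(length(pooled), length(group1))
  emd_1d(pooled[idx], pooled[-idx])
})
p_value <- mean(perm >= obs)
c(emd = obs, p = p_value)
```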
Monocle2
Monocle2 [38] is an updated version of Monocle [32], a computational method used for cell type identification, differential expression analysis, and cell ordering. Monocle applies a generalized additive model, i.e., a generalized linear model in which the linear predictor depends on smooth functions of the predictor variables. The model relates a univariate response variable Y, which belongs to the exponential family, to some predictor variables, as follows:
$$ h\left(E(Y)\right)={\beta}_0+{f}_1\left({x}_1\right)+{f}_2\left({x}_2\right)+\dots +{f}_m\left({x}_m\right), $$
where h is the link function, such as identity or log function, Y is the gene expression level, xi is the predictor variable that expresses the cell categorical label, and fi is a nonparametric function, such as cubic splines or some other smoothing functions. Specifically, the gene expression level Y is modeled using a Tobit model:
$$ Y=\left\{\begin{array}{c}{Y}^{\ast }\ if\ {Y}^{\ast }>\lambda \\ {}\lambda\ if\ {Y}^{\ast}\le \lambda \end{array}\right., $$
where Y* is a latent variable that corresponds to predictor x, and λ is the detection threshold. For identifying DE genes, an approximate chi-square (χ2) likelihood ratio test is used.
In Monocle2, a census algorithm is used to estimate the relative transcript counts, which leads to an improvement of the accuracy compared with using the normalized read counts, such as TPM values.
Discrete distributional differential expression (D3E)
D3E [33] consists of four steps: 1—data filtering and normalization, 2—comparing distributions of gene expression values for DE gene analysis, 3—fitting a Poisson-Beta model, and 4—calculating the changes in parameters between paired samples for each gene. For the normalization, D3E uses the same algorithm as DESeq [11] and filters out genes that are not expressed in any cell. Then, the non-parametric Cramer-von Mises test or the Kolmogorov-Smirnov test is used to compare the distributions of expression values of each gene for identifying the DE genes. Alternatively, a parametric method, the likelihood ratio test, can be utilized after fitting a Poisson-Beta model:
$$ {\displaystyle \begin{array}{c} PB\left(n|\alpha, \beta, \gamma, \lambda \right)= Poisson\left(n|\frac{\gamma x}{\lambda}\right)\underset{x}{\bigwedge \limits } Beta\left(x|\alpha, \beta \right)\\ {}=\frac{\gamma^n{e}^{-\frac{\gamma }{\lambda }}\varGamma \left(\frac{\alpha }{\lambda }+\frac{\beta }{\lambda}\right)}{\lambda^n\varGamma \left(n+1\right)\varGamma \left(\frac{\alpha }{\lambda }+\frac{\beta }{\lambda }+n\right)\varGamma \left(\frac{\alpha }{\lambda}\right)}\varPhi \left(\frac{\alpha }{\lambda },\frac{\alpha }{\lambda }+\frac{\beta }{\lambda }+n,\frac{\gamma }{\lambda}\right)\end{array}}, $$
where n is the number of transcripts of a particular gene, α is the rate of promoter activation, β is the rate of promoter inactivation, γ is the rate of transcription when the promoter is in the active state, λ is the transcript degradation rate, and x is the auxiliary variable. The parameters α, β, and γ can be estimated by moments matching or Bayesian inference method, but λ should be known and assumed to be constant.
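The nonparametric option can be illustrated with a per-gene two-sample test of the read-count distributions. The sketch below uses the Kolmogorov–Smirnov test from base R on a hypothetical count matrix; it is only a rough analogue of D3E's distribution comparison step, not its implementation.

```r
# Per-gene Kolmogorov-Smirnov test between two groups of cells
# (a rough analogue of D3E's nonparametric distribution comparison).
set.seed(3)
counts <- matrix(rnbinom(200 * 60, mu = 5, size = 1), nrow = 200)  # genes x cells
rownames(counts) <- paste0("gene", 1:200)
group  <- rep(c("A", "B"), each = 30)

ks_p <- apply(counts, 1, function(g) {
  suppressWarnings(ks.test(g[group == "A"], g[group == "B"])$p.value)
})
fdr <- p.adjust(ks_p, method = "BH")
head(sort(fdr))
```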
SINCERA
SINCERA [34] is a computational pipeline for single cell downstream analysis that enables pre-processing, normalization, cell type identification, differential expression analysis, gene signature prediction, and key transcription factors identification. SINCERA calculates the p-value for each gene from two groups based on a statistical test to identify the DE genes. It provides two methods: one-tailed Welch's t-test for genes, assuming they are from two independent normal distributions, and the Wilcoxon rank sum test for small sample sizes. Last, the FDRs are adjusted, using the Benjamini and Hochberg method [44].
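A minimal version of this per-gene testing step is sketched below: Welch's t-test for larger groups, the Wilcoxon rank-sum test for small sample sizes, and Benjamini–Hochberg adjustment. This is only a sketch under the assumption of a log-expression matrix with genes in rows, not SINCERA's pipeline; the data and the small-sample cutoff are hypothetical.

```r
# SINCERA-style per-gene testing (sketch): Welch's t-test for larger groups,
# Wilcoxon rank-sum test for small sample sizes, followed by BH adjustment.
test_gene <- function(x, y, small_n = 5) {
  if (min(length(x), length(y)) <= small_n) {
    wilcox.test(x, y)$p.value
  } else {
    t.test(x, y)$p.value   # Welch's t-test (unequal variances) by default
  }
}

set.seed(4)
logexpr <- matrix(rnorm(100 * 40), nrow = 100,
                  dimnames = list(paste0("gene", 1:100), NULL))
group <- rep(c("A", "B"), each = 20)

pvals <- apply(logexpr, 1, function(g) test_gene(g[group == "A"], g[group == "B"]))
adj   <- p.adjust(pvals, method = "BH")
sum(adj < 0.05)
```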
edgeR
edgeR [4] is a negative binomial model-based method to determine DE genes. It uses a weighted trimmed mean of the log expression ratios to normalize the sequencing depth and gene length between the samples. Then, the expression data are used to fit a negative binomial model, whereby the mean μ and variance ν have the relationship ν = μ + αμ^2, and α is the dispersion factor. To estimate the dispersion factor, edgeR combines a common dispersion across all the genes, estimated by a likelihood function, and a gene-specific dispersion, estimated by the empirical Bayes method. Last, an exact test with FDR control is used to determine DE genes.
DESeq2
DESeq2 [43] is an advanced version of DESeq [11], which is also based on the negative binomial distribution. Compared with the DESeq, which uses a fixed normalization factor, the new version of DESeq2 allows the use of a gene-specific shrinkage estimation for dispersions. When estimating the dispersion, DESeq2 uses all of the genes with a similar average expression. The fold-change estimation is also employed to avoid identifying genes with small average expression values.
DEsingle
DEsingle [36] utilizes a ZINB regression model to estimate the proportion of the real and drop-out zeros in the observed expression data. The expression values of each gene in each population of cells are estimated by a ZINB model. The probability mass function (PMF) of the ZINB model for read counts of gene g in a group of cells is:
$$ {\displaystyle \begin{array}{c}P\left({N}_g=n|\theta, r,p\right)=\theta \bullet I\left(n=0\right)+\left(1-\theta \right)\bullet {f}_{NB}\left(r,p\right)\\ {}=\theta \bullet I\left(n=0\right)+\left(1-\theta \right)\bullet \left(\begin{array}{c}n+r-1\\ {}n\end{array}\right){p}^n{\left(1-p\right)}^r\end{array}}, $$
where θ is the proportion of constant zeros of gene g in the group of cells, I(n = 0) is an indicator function, fNB is the PMF of the NB distribution, r is the size parameter, and p is the probability parameter of the NB distribution. By testing the parameters (θ, r, and p) of the two ZINB models for the two groups of cells, the method classifies the DE genes into three categories: 1—different expression status (DEs), 2—differential expression abundance (DEa), and 3—general differential expression (DEg). DEs represents genes that show a significantly different proportion of cells with real zeros between the two groups (i.e., the θs are significantly different) but whose expression in the remaining cells shows no significant difference (i.e., r and p show no significant difference). DEa represents genes that show no significant difference in the proportion of real zeros but are significantly differentially expressed in the remaining cells. DEg represents genes that show both a significant difference in the proportion of real zeros and significant differential expression in the remaining cells.
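The ZINB probability mass function above can be written compactly with base R's negative binomial density. The sketch below evaluates it with hypothetical parameter values and compares the zero probability with and without zero inflation; note that dnbinom's prob argument corresponds to 1 − p in the notation above.

```r
# Zero-inflated negative binomial PMF, matching the formula above.
# In R, dnbinom(n, size = r, prob = q) = choose(n + r - 1, n) * q^r * (1 - q)^n,
# so the NB part with parameters (r, p) corresponds to prob = 1 - p.
dzinb <- function(n, theta, r, p) {
  theta * (n == 0) + (1 - theta) * dnbinom(n, size = r, prob = 1 - p)
}

# Hypothetical parameters for one gene in one group of cells
theta <- 0.4; r <- 2; p <- 0.7

dzinb(0, theta, r, p)              # probability of observing a zero
sum(dzinb(0:200, theta, r, p))     # should be ~1 (valid PMF)
dnbinom(0, size = r, prob = 1 - p) # zero probability without inflation
```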
SigEMD
SigEMD [37] employs logistic regression to identify the genes whose zero counts significantly affect the distribution of expression values, and Lasso regression to impute the zero counts of the identified genes. Then, for these identified genes, SigEMD employs EMD, similar to EMDomics, for differential analysis of the distributions of expression values including the zero values; for the remaining genes, it employs EMD on the distributions of expression values ignoring the zero values. The regression model and data imputation reduce the impact of large amounts of zero counts, and EMD enhances the sensitivity of detecting DE genes from multimodal scRNAseq data.
In this work, we used both simulated and real data to evaluate the performance of the differential expression analysis tools.
Simulated data
As we do not know exactly the true DE genes in real single-cell data, we used simulated data to compute the sensitivities and specificities of the eleven methods. Data heterogeneity (multimodality) and sparsity (large number of zero counts), which are the main characteristics of scRNAseq data, are modeled in simulated data. First, we generated 10 datasets, including simulated read counts in the form of log-transformed counts, across a two-condition problem by employing a simulation function from the scDD package [30] in R programing language [45]. For each condition, there were 75 single cells with 20,000 genes in each cell. Among the total 20,000 genes, 2000 genes were simulated with differential distributions, and 18,000 genes were simulated as non-DE genes. The 2000 DE genes were equally divided into four groups, corresponding to the DU, DP, DM, and DB scenarios (Additional file 1: Figure S1). Examples of these four situations from the real data are shown in Fig. 1a. From the 18,000 non-DE genes, 9000 genes were generated, using a unimodal NB distribution (EE scenario), and the other 9000 genes were simulated using a bimodal distribution (EP scenario). All of the non-DE genes had the same mode across the two conditions. Then, we simulated drop-out events by introducing large numbers of zero counts. To introduce zero counts, first, we built the cumulative distribution function (CDF) of the percentage of zeros of each gene, using the real data, FX(x). Then, in the simulated data for each gene, we randomly selected c (c~ FX(x)) cells from the total cells for half of the genes in each scenario and forced their expression values to zero (10,000 genes in total). Thus, the CDF of the percentage of zeros of each gene is similar between the simulated and real data (Additional file 1: Figure S2). This way, the distribution of the total counts in the simulated data is more similar to real data, which enables us to assess the true positives (TPs) and false positives (FPs) more accurately.
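The drop-out injection step can be sketched as follows: for each selected gene, a zero fraction is drawn from the empirical distribution of per-gene zero percentages in the real data, and that many randomly chosen cells are set to zero. The code below is a simplified illustration of this idea with hypothetical matrices, not the exact procedure used to generate our datasets.

```r
# Inject drop-out zeros into a simulated count matrix so that the per-gene
# zero fraction mimics the empirical distribution observed in real data.
set.seed(5)
real_counts <- matrix(rnbinom(500 * 92, mu = 3, size = 0.5), nrow = 500)   # hypothetical "real" data
sim_counts  <- matrix(rnbinom(500 * 150, mu = 20, size = 2), nrow = 500)   # hypothetical simulated data

# Empirical distribution of per-gene zero fractions in the real data
real_zero_frac <- rowMeans(real_counts == 0)

n_cells   <- ncol(sim_counts)
genes_sel <- sample(nrow(sim_counts), nrow(sim_counts) / 2)   # half of the genes

for (g in genes_sel) {
  frac       <- sample(real_zero_frac, 1)            # draw a zero fraction from the real data
  zero_cells <- sample(n_cells, round(frac * n_cells))
  sim_counts[g, zero_cells] <- 0
}

# Check that the simulated zero fractions now resemble the real ones
summary(rowMeans(sim_counts[genes_sel, ] == 0))
```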
We used the real scRNAseq dataset provided by Islam et al. [46] as the positive control dataset to compute TP rates. The dataset consists of 22,928 genes from 48 mouse embryonic stem cells and 44 mouse embryonic fibroblasts. The count matrix is available in the Gene Expression Omnibus (GEO) database with Accession No. GSE29087. To assess TPs, we used the previously published top 1000 DE genes, validated through qRT-PCR experiments [47], as a gold standard gene set [21, 40, 42].
We also used the dataset from Grün et al. [48] as the negative control dataset to assess FPs. We retrieved 80 pool-and-split samples that were obtained under the same condition from the GEO database with Accession No. GSE54695. By employing random sampling from the 80 samples, we generated 10 datasets to obtain the statistical characteristics of the results. For each generated dataset, we randomly selected 40 out of the 80 cells as one group and considered the remaining 40 cells as the other group [42]. Because all of the samples are under the same condition, there should be no DE genes in these 10 datasets.
In the preprocessing of the real datasets, we filtered out genes that are not expressed in all cells (zero read counts across all cells), and we used log-transformed transcript per millions (TPM) values as the input.
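A minimal version of this preprocessing, assuming a TPM matrix with genes in rows, is sketched below; the object names and values are hypothetical.

```r
# Filter genes with zero counts in all cells and log-transform TPM values.
set.seed(6)
tpm <- matrix(rexp(1000 * 92, rate = 0.1), nrow = 1000)   # hypothetical TPM matrix (genes x cells)
tpm[sample(length(tpm), length(tpm) * 0.6)] <- 0          # introduce many zeros

keep    <- rowSums(tpm > 0) > 0     # drop genes not expressed in any cell
log_tpm <- log2(tpm[keep, ] + 1)    # log-transformed TPM used as input
dim(log_tpm)
```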
Accuracy of identification of DE genes
Results from simulated data
We used simulated data to compute true sensitivities and precision of the tools for detecting DE genes. The receiver operating characteristic (ROC) curves, using the simulated data, are shown in Fig. 2. As can be seen in the figure, the tools show comparable areas-under-the-curve (AUC) values.
ROC curves for the eleven differential gene expression analysis tools using simulated data
The average true positive rates (TPRs, sensitivities), false positive rates (FPRs), precision, accuracy, and F1 score of the tools under the adjusted p-value of 0.05 are given in Table 2. We defined TPs as the truly called DE genes, and FPs as the genes that were called significant but were not true DE genes. Similarly, true negatives (TNs) were defined as genes that were not true DE and were not called significant, and false negatives (FNs) were defined as genes that were true DE but were not called significant. We computed TPRs as the number of TPs over the 2000 ground-truth DE genes, FPRs as the number of FPs genes over the 18,000 genes that are not differentially expressed, precision as the number of TPs over all of the detected DE genes, and accuracy as the sum of TPs and TNs over all of the 20,000 genes.
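These quantities follow directly from the confusion matrix between the set of called DE genes and the set of true DE genes; the sketch below computes them for hypothetical gene sets.

```r
# Confusion-matrix metrics for a set of called DE genes versus ground truth.
de_metrics <- function(called, truth, all_genes) {
  tp <- length(intersect(called, truth))
  fp <- length(setdiff(called, truth))
  fn <- length(setdiff(truth, called))
  tn <- length(all_genes) - tp - fp - fn
  c(TPR       = tp / (tp + fn),
    FPR       = fp / (fp + tn),
    precision = tp / (tp + fp),
    accuracy  = (tp + tn) / length(all_genes),
    F1        = 2 * tp / (2 * tp + fp + fn))
}

# Hypothetical example: 20,000 genes, 2000 true DE genes, ~2500 called DE genes
all_genes <- paste0("gene", 1:20000)
truth     <- all_genes[1:2000]
called    <- all_genes[c(1:1700, 3000:3800)]
round(de_metrics(called, truth, all_genes), 3)
```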
Table 2 Numbers of the detected DE genes, sensitivities, false positive rates, precisions, and accuracies of the nine tools using simulated data for an adjusted p-value or FDR of 0.05
As seen in Table 2, Monocle2 identified the greatest number of true DE genes but also introduced the greatest number of false DE genes, which results in a low identification accuracy, at 0.824. The nonparametric methods, EMDomics and D3E, identified more true DE genes compared to parametric methods (2465.8 and 1683.4 true DE genes, respectively). They also, however, introduced many FPs, resulting in lower accuracies (0.91 and 0.929, respectively) than did parametric methods. In contrast, tools with higher precisions, larger than 0.9 (MAST, SCDE, edgeR, and SINCERA), introduce lower numbers of FPs but identify lower numbers of TPs. Interestingly, F1 scores show that DESeq2 and edgeR, which are designed for traditional bulk RNAseq data, do not show poor performance compared to the tools that are designed for scRNAseq data. DEsingle and SigEMD performed the best in terms of accuracy and F1 score since they identified high TPs and did not introduce many FPs.
A bar plot of true detection rates of the eleven tools under the four scenarios for DE genes (i.e., DU, DM, DP, and DB) and the two scenarios for non-DE genes (i.e., EP and EE), are shown in Fig. 3. As shown in the figure, all of the methods could achieve a TPR near to or larger than 0.5 for the DU and DM scenarios, where there is no multimodality (DU scenario) or the level of multimodality is low (DM scenario). For scenarios with a high level of multimodality (DP and DB), however, some of the tools, except EMDomics, Monocle2, DESeq2, D3E, DEsingle, and SigEMD, perform poorly. In the DP scenario, only EMDomics and Monocle2 exhibited TPRs larger than 0.5, and SCDE fails for this multimodal scenario. Similarly, for the DB scenario, Monocle2, DESeq2, and DEsingle have a TPR larger than 0.5; however, MAST and SINCERA completely fail. SigEMD exhibited a TPR around 0.5 for both DP and DB scenarios. DEsingle performed the best for the DB scenario but exhibited a low TPR for the DP scenario. We showed the TPRs and true negative rates, using the simulated data with and without large numbers of zeros separately in Additional file 1: Figures S3 and S4. All of the tools have a better performance for the four scenarios when there are not large numbers of zero counts. We also showed the ROC curve for the data with and without large numbers of zeros in Additional file 1: Figures S5 and S6.
True detection rates for different scenarios of DE genes and non-DE genes using simulated data. a true positive rates for DE genes under DU, DP, DM, DB scenarios b true negative genes for non-DE genes under EP and EE scenarios
It is important to notice that, even though simulated data contain multimodality and zero counts, they cannot capture the real multimodality and zero count behaviors of real data. Therefore, as seen in the following, we evaluated the detection accuracy of detecting DE genes, using real data.
Results from positive control real data
We used the positive control real dataset to evaluate the accuracy of the identification of DE genes. We employed the validated 1000 genes as a gold standard gene set. We defined true detected DE genes as DE genes that are called by the tools and are among the 1000 gold standard DE genes. The number of detected DE genes and the number of true detected DE genes over the 1000 gold standard genes (defined as sensitivity) for each tool, using an FDR or adjusted p-value of 0.05, are given in Table 3.
Table 3 Number of detected DE genes, and sensitivities of the eleven tools using positive control real data for an adjusted p-value or FDR of 0.05
The tools can be ranked in three levels based on their sensitivities: Monocle2, EMDomics, SINCERA, D3E, and DEsingle rank in the first level, with sensitivities more than 0.7; edgeR, DESeq2, and SigEMD rank in the second level, with sensitivities between 0.4 and 0.7; and SCDE, scDD, and MAST rank in the third level with sensitivities below 0.4. The methods that show better sensitivities, however, also called more than 7000 genes as significantly DE genes. In Fig. 4, the blue bars show the intersection between the gold standard genes and the DE genes called by the methods (true detected DE genes), whereas the yellow bars show the number of significantly DE genes that are not among the gold standard genes.
Tools' total numbers of detected significantly DE genes with the p-value or FDR threshold of 0.05 and their overlaps with the 1000 gold standard genes
We need to note that we do not have all of the true positive DE genes for the positive control dataset. The 1000 gold standard genes are a subset of DE genes from the dataset that are validated through qRT-PCR experiments [47]. In addition, the datasets that we used in this study have been generated under similar conditions as those of the positive control datasets; however, they are not from the same assay and experiment. Therefore, the results we present here provide information about sensitivities to some degree.
Results in negative control real data
Because all of the real true DE genes in the positive control real dataset are unknown, we can test only the TPs, using the 1000 gold standard genes, but not the FPs. To validate the FPs, we applied the methods to 10 datasets with two groups, randomly sampled from the negative control real dataset. Because cells in the two groups are from the same condition, we expect the methods to not identify any DE gene. Using an FDR or adjusted p-value of 0.05, MAST, SCDE, edgeR, and SINCERA did not call any gene as a DE gene, as we expected, whereas DEsingle, scDD, DESeq2, SigEMD, D3E, EMDomics, and Monocle2 identified 4, 5, 19, 50, 160, 733, and 917 significantly DE genes, respectively, out of 7277 genes on average over the 10 datasets. The number of detected DE genes and FPRs are shown in Table 4. EMDomics and Monocle2, which showed the best sensitivities using the positive control dataset, introduce the most FPs.
Table 4 Number of the detected DE genes and false positive rates of the eleven tools using negative control real data for an adjusted p-value or FDR of 0.05
Agreement among the methods in identifying DE genes
In general, agreement among all of the tools is very low. Considering the top 1000 DE genes detected by the eleven tools in the positive control real data, there are only 92 common DE genes across all of the tools. Of these 92 DE genes, only 41 intersect with the gold standard 1000 DE genes.
We investigated how much the tools agreed with each other on identifying DE genes by examining the number of identified DE genes that were common across a pair of tools, which we called common DE genes. First, we ranked genes by their adjusted p-values or FDRs, and then we selected the top 1000 DE genes. We defined pairwise agreement as the number of common DE genes identified by a pair of tools. The numbers of common DE genes between pairs of tools are between 770 and 1753 for simulated data (Additional file 1: Figure S7), and 142 and 856 for real data (Fig. 5). We observed that the methods do not have high pairwise agreement in either the simulated data or the real data.
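The pairwise agreement computation amounts to taking, for each tool, the 1000 genes with the smallest adjusted p-values and counting the size of the intersection for every pair of tools; a sketch with hypothetical p-value vectors and tool names follows.

```r
# Pairwise agreement: overlap of the top 1000 genes (smallest adjusted p-values)
# between every pair of tools; 'pval_list' is a hypothetical named list of
# adjusted p-value vectors, one per tool, indexed by gene name.
set.seed(7)
genes     <- paste0("gene", 1:8000)
tools     <- c("toolA", "toolB", "toolC")
pval_list <- setNames(lapply(tools, function(t)
  setNames(runif(length(genes)), genes)), tools)

top_sets <- lapply(pval_list, function(p) names(sort(p))[1:1000])

agreement <- outer(tools, tools, Vectorize(function(a, b)
  length(intersect(top_sets[[a]], top_sets[[b]]))))
dimnames(agreement) <- list(tools, tools)
agreement
```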
Numbers of pairwise common DE genes tested by top 1000 genes in real data
In addition, we used significantly DE genes under a p-value or FDR threshold of 0.05 to investigate the pairwise agreement among the tools. The pairwise agreement varies from 432 to 7934 for the real data (Fig. 6) and from 444.8 to 1878 for the simulated data (Additional file 1: Figure S8). In the real data, MAST identified fewer significantly DE genes under the 0.05 cut-off adjusted p-value, but the majority of its significantly DE genes overlapped with the significantly DE genes from other tools.
Numbers of pairwise common DE genes tested by adjusted p-value< 0.05 in real data
Effect of sample size
We investigated the effect of sample size on detecting DE genes in terms of TPR, FPR, precision, and accuracy, using the simulated data. Precision was defined as TP/(TP + FP) and accuracy as (TP + TN)/(TP + TN + FP + FN). We generated eight cases: 10 cells, 30 cells, 50 cells, 75 cells, 100 cells, 200 cells, 300 cells, and 400 cells for each condition. We noticed that the number of identified DE genes and the TPRs of detection under a default FDR or adjusted p-value (< 0.05) tend to increase when the sample size increases from 10 to 400 (Fig. 7) for all tools.
Effect of sample size (number of cells) on detecting DE genes. The sample size is in horizontal axis, from 10 to 400 cells in each condition. Effect of sample size on a TPR, b FPR, c accuracy (=(TP + TN)/(TP + FP + TN + FN)), and precision (=TP/(TP + FP)). A threshold of 0.05 is used for FDR or adjusted p-value
The results show that sample size is very important, as the tools' precision increases significantly by increasing the sample size from 10 to 75. The FPRs tend to be steady when the sample size is > 75, except for DEsingle. DEsingle works well for a large number of zero counts in a larger dataset. These results also show that Monocle2, EMDomics, DESeq2, DEsingle, and SigEMD can achieve TPRs near 100% by increasing the sample size, while the other methods cannot. Monocle2, EMDomics, DESeq2, and D3E, however, introduce FPs (FPR > 0.05%), whereas FPRs for other methods are very low (close to zero). All of the tools similarly perform poorly for a sample size of < 30. When the sample size exceeded 75 in each condition, the tools achieved better accuracy in detection.
Enrichment analysis of real data
To examine whether the identified DE genes are meaningful to biological processes, we conducted gene set enrichment analysis through the "Investigate Gene Sets" function of the web-based GSEA software tool (http://www.broadinstitute.org/gsea/msigdb/annotate.js). We investigated the KEGG GENES database (KEGG; contains 186 gene sets) from the Molecular Signatures Database (MSigDB) for the gene set enrichment analysis (FDR threshold of 0.05). We used the same number of identified DE genes (top n = 300 genes) of each tool as the input for KEGG pathway enrichment analysis. The results are shown in Table 5. We observed that the 300 top-ranked DE genes identified by nonparametric methods (EMDomics and D3E) were enriched for more KEGG pathways compared to other methods. We also used a box plot to compare the FDRs of the top 10 most significant gene sets enriched by the top-ranked DE genes from the tools (Additional file 1: Figure S9). It can be observed that pathways enriched by the top-ranked DE genes from edgeR and Monocle2 have the highest strength. The 10 top-ranked KEGG pathways for the eleven tools are listed in Additional file 1: Tables S1 to S11.
Table 5 Number of KEGG gene sets and GO terms enriched by the top 300 DE genes identified by each tool under an FDR threshold of 0.05
We also used DAVID (https://david.ncifcrf.gov/summary.jsp) for the Gene Ontology Process enrichment analysis of the 300 top-ranked DE genes identified by each tool. The numbers of gene ontology (GO) terms under a cutoff FDR of 0.05 are shown in Table 5. Top DE genes identified by EMDomics, D3E, Monocle, and DESeq2 are enriched in more KEGG pathways and/or GO terms compared to those of other tools.
Finally, although the quantitative values of terms recovered from gene set enrichment analysis are informative with regard to the relative statistical power of these tools in calling biologically meaningful genes, very different gene lists can result in very similar quantitative performance values. To perform a qualitative assessment of the biological relevance of the differentially expressed gene lists recovered by each tool, we ranked the performance of each tool in recovering stem cell-relevant GO terms from the 300 top-ranked DE genes. Each gene list was subjected to gene set enrichment against the Biological Process portion of the Gene Ontology, and all significantly enriched terms were recovered. The results of the Gene Ontology Process enrichment analysis of the 300 top-ranked DE genes and the list of the 300 top-ranked genes for each tool are given in Additional file 2. Significant GO terms with the negative log transform of their q-values for each tool are given in Additional file 3. To consolidate closely related processes recovered in this step, we subjected each list of GO terms to word and phrase significance analysis, using word cloud analysis, whereby negative log-transformed q-values are considered as frequencies. The phrase significance of each tool, in the form of word clouds, is shown in Additional file 1: Figures S10–S20, and the word significance, in the form of word clouds, is shown in Additional file 1: Figures S21–S31. In these plots, the font size represents the significance of the word/phrase. This provides a readily interpretable visualization of the biologically relevant GO terms.
Several stem cell biologists were then asked to rank the performance of each algorithm in terms of its ability to recover the GO terms most relevant to the experiment that provides the real dataset used in this study. Each algorithm was scored on a 1–3 scale, with 3 as the best recovery of biologically relevant terms and phrases; then, the scores for terms and phrases were added to give an overall performance score from 2 to 6 (Table 6). As expected, many of these tools recovered, at high significance, several terms strongly related to stem cell biology, including development, differentiation, morphogenesis, multicellular, and adhesion as well as many others. Interestingly, scDD and SCDE failed to recover stem cell-relevant terms at high significance. Instead, these approaches appeared to yield terms and phrases related to cellular housekeeping processes. Monocle2 and MAST performed the best at recovering stem cell-relevant terms. Following them, EMDomics, DESeq2, D3E, DEsingle, SigEMD, edgeR, SINCERA all performed well. This result strongly suggests not only that the methods used for identifying DE genes may yield non-overlapping and quantitatively different gene sets but that some methods are much better at extracting biologically relevant gene sets from the data.
Table 6 Scores from word and phrase significance analysis of each tool to recover biologically relevant terms and phrases
We compared the runtimes of the eleven tools (Table 7). Except for D3E, which was implemented in Python, all of the tools were implemented in R (Table 1). The runtime was computed using a personal computer, an iMac with a 3.1 GHz CPU and up to 8 gigabytes of memory. The average runtime (over 10 runs) of each tool, using the positive control dataset, is shown in Table 7. SINCERA has the lowest time cost because it employs a simple t-test. edgeR has the lowest time cost among the model-based and nonparametric methods. MAST, Monocle2, and DESeq2 run fast (less than 5 min), as MAST and Monocle2 use linear regression methods, and DESeq2 uses a negative binomial model for identifying DE genes. scDD takes longer, as it needs time to classify DE genes into different modalities. The nonparametric methods, SigEMD, EMDomics, and D3E, take more time compared to the model-based methods because they need to compute the distance between two distributions for each gene. We note that D3E has two running modes: it takes about 40 min under the simple mode and about 30 h under the more accurate mode.
Table 7 Average runtime of identifying DE genes in real data by each tool
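As a rough illustration of how such average runtimes can be obtained, the sketch below times a placeholder function ten times; `run_de_tool` is a hypothetical wrapper around one of the DE methods and is not part of any real package API.

```python
import time

def average_runtime(run_de_tool, counts, groups, n_runs=10):
    """Return the mean wall-clock time (seconds) over n_runs calls."""
    elapsed = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_de_tool(counts, groups)  # hypothetical callable wrapping one tool
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / n_runs
```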
As shown in Fig. 1, scRNAseq expression data are multimodal, with a high number of zero counts that make differential expression analysis challenging. In this study, we conducted a comprehensive evaluation of the performance of eleven software tools for single cell differential gene expression analysis: SCDE, MAST, scDD, EMDomics, D3E, Monocle2, SINCERA, edgeR, DESeq2, DEsingle, and SigEMD. Using simulated data and real scRNAseq data, we compared the accuracy of the tools in identifying DE genes, agreement among the tools in detecting DE genes, and time consumption of the tools. We also examined the enrichment of the identified DE genes by running pathway analysis and GO analysis for the real data.
Detection accuracy
In general, the eleven methods behave differently in terms of calling truly significantly DE genes. The tools that show higher sensitivity also tend to show lower precision. Among all of the tools, DEsingle and SigEMD, which are designed for scRNAseq data, tend to show a better trade-off between TPRs and precision.
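The sensitivity/precision trade-off discussed here can be computed from a simple confusion matrix; the helper below is an illustrative sketch that assumes boolean vectors marking the truly DE genes and the genes called DE by a tool.

```python
import numpy as np

def de_metrics(true_de, called_de):
    """TPR (sensitivity), FPR and precision from boolean gene-level calls."""
    true_de = np.asarray(true_de, dtype=bool)
    called_de = np.asarray(called_de, dtype=bool)
    tp = np.sum(true_de & called_de)
    fp = np.sum(~true_de & called_de)
    fn = np.sum(true_de & ~called_de)
    tn = np.sum(~true_de & ~called_de)
    tpr = tp / (tp + fn) if (tp + fn) else float("nan")
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    precision = tp / (tp + fp) if (tp + fp) else float("nan")
    return {"TPR": tpr, "FPR": fpr, "precision": precision}
```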
All of the tools perform well when there is no multimodality or only a low level of multimodality. They also all perform better when the sparsity (the proportion of zero counts) is lower. For data with a high level of multimodality, methods that consider the behavior of each individual gene, such as DESeq2, EMDomics, Monocle2, DEsingle, and SigEMD, show better TPRs. This is because EMDomics and SigEMD use a nonparametric method to compute the distance between two distributions and can capture the multimodality; DEsingle models dropout events well by using a zero-inflated negative binomial model to estimate the proportions of real and drop-out zeros in the expression values; Monocle2 uses a census algorithm to estimate the relative transcript counts for each gene instead of using normalized read counts, such as TPM values; and DESeq2 uses a gene-specific shrinkage estimation of the dispersion parameter to fit a negative binomial model to the read counts. If the level of multimodality is low, however, SCDE, MAST, and edgeR can provide higher precision.
Agreement among the methods
The overall agreement in finding DE genes among all of the tools is low. We used the top 1000 DE genes identified by each of the eleven tools (ranked by p-values) and the significantly DE genes at a significance threshold of 0.05 to identify the common DE genes across the tools and between pairs of tools. The DE genes identified by DESeq2, EMDomics, D3E, Monocle2, SINCERA, DEsingle, and SigEMD show higher pairwise agreement, whereas the model-based methods SCDE and scDD show less pairwise agreement with the other tools. No single tool is clearly superior for identifying DE genes using single cell sequencing datasets. The tools use different methods with different strengths and limitations for calling DE genes. The sequencing data are also very noisy. The methods treat zero counts, multimodality, and noise differently, resulting in low agreement among them. Some tools work well when the drop-out events are not significant, and some when data multimodality is not significant. For instance, scDD aims at characterizing different patterns of differential distributions; however, handling a large number of zero counts in the expression values is a challenging task for this tool.
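A sketch of the kind of pairwise-agreement computation described above is given below; it assumes a dict mapping each tool name to a pandas DataFrame of per-gene results with a "pvalue" column, which is an assumed format rather than any tool's actual output.

```python
from itertools import combinations
import pandas as pd

def top_genes(result_df, n=1000):
    """Top-n genes ranked by p-value (the index holds gene identifiers)."""
    return set(result_df.nsmallest(n, "pvalue").index)

def pairwise_agreement(results, n=1000):
    """Count shared top-n genes for every pair of tools."""
    rows = []
    for tool_a, tool_b in combinations(results, 2):
        shared = len(top_genes(results[tool_a], n) & top_genes(results[tool_b], n))
        rows.append({"tool_a": tool_a, "tool_b": tool_b, "shared_top_genes": shared})
    return pd.DataFrame(rows)
```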
Sample size effect
All of the tools perform better when there are more samples in each condition. TPRs improve significantly when the sample size increases from 10 to 75, but the improvements slow down for sample sizes greater than 100; and for sample sizes of 300 and larger, there are almost no changes in TPRs and FPRs. Monocle2, EMDomics, DESeq2, DEsingle, and SigEMD can achieve a TPR close to 100% by increasing the sample size. DEsingle works well for a larger number of zero counts or a small number of samples. When the number of zero counts is low and the number of samples is large, its model cannot capture the dropout events well.
Enrichment analysis
As expected, top-ranked DE genes of many of these tools are enriched for GO terms strongly related to stem cell biology. scDD and SCDE, however, failed to recover stem cell-relevant terms at high significance. Instead, they appeared to yield GO terms related to cellular housekeeping processes. This result suggests that model-based single cell DE analysis methods that do not consider multimodality do not perform well in extracting biologically relevant gene sets from the data.
In conclusion, the identification of DE genes using scRNAseq data remains challenging. Tools developed for scRNAseq data focus on handling zero counts or multimodality, but not both. In general, the methods that can capture multimodality (nonparametric methods) perform better than the model-based methods designed for handling zero counts. However, a model-based method that can model the drop-out events well can perform better in terms of true positive and false positive rates. We observed that methods developed specifically for scRNAseq data do not show significantly better performance compared to the methods designed for bulk RNAseq data, and that methods that consider the behavior of each individual gene (rather than all genes) in calling DE genes outperform the other tools. The lack of agreement in finding DE genes by these tools and their limitations in detecting true DE genes and biologically relevant gene sets indicate the need for developing more precise methods for differential expression analysis of scRNAseq data. Multimodality, heterogeneity, and sparsity (many zero counts) are the main characteristics of scRNAseq data that all need to be addressed when developing new methods.
AUC:
Area under curve
DB:
Both DM and DU
DM:
Differential modality
DP:
Differential proportion
DPM:
Dirichlet process mixture
DU:
Differential unimodal
FDRs:
False discovery rates
FN:
False negative
FP:
False positive
FPR:
False positive rate
GAMs:
Generalized additive models
NB:
Negative binomial
RNAseq:
RNA sequencing
ROC:
Receiver operating characteristic
scRNAseq:
Single Cell RNA sequencing
TMM:
Trimmed mean of M-values
TN:
True negative
TNR:
True negative rate
TP:
True positive
TPM:
Transcript per million
TPR:
True positive rate
ZINB:
Zero-inflated negative binomial
This study was supported by a grant from the National Institutes of Health (NIH, R00LM011595, PI: Nabavi).
The single cell RNAseq data are publicly available in Gene Expression Omnibus, accession number GSE29087 and GSE54695.
Computer Science and Engineering Department, University of Connecticut, Storrs, CT, USA
Tianyu Wang
Department of Molecular & Cell Biology, University of Connecticut, Storrs, CT, USA
Boyang Li
Department of Molecular & Cell Biology, The Institute for Systems Genomics, CLAS, University of Connecticut, Storrs, CT, USA
Craig E. Nelson
Computer Science and Engineering Department, The Institute for Systems Genomics, University of Connecticut, Storrs, CT, USA
Sheida Nabavi
SN and TW designed the study. TW, BL, and CN implemented the analysis. SN, CN and TW wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Sheida Nabavi.
No ethics approval was required for the study.
Additional file 1: Supplementary materials (Supplementary Tables S1-S11, Supplementary Figures S1-S31). (DOCX 2225 kb)
Additional file 2: Results of the Gene Ontology Process enrichment analysis of the 300 top-ranked DE genes and the list of the 300 top-ranked genes for each tool. (XLSX 115 kb)
Additional file 3: Significant GO terms with the negative log transform of their q-values for each tool. (CSV 24 kb)
Wang, T., Li, B., Nelson, C.E. et al. Comparative analysis of differential gene expression analysis tools for single-cell RNA sequencing data. BMC Bioinformatics 20, 40 (2019). https://0-doi-org.brum.beds.ac.uk/10.1186/s12859-019-2599-6
Single-cell
RNAseq
Sequence analysis (methods) | CommonCrawl |
Mechanism of formation of peak of mass spectrum of isobutane at m/z = 27
Here is the mass spectrum of isobutane:
http://webbook.nist.gov/cgi/cbook.cgi?Spec=C75285&Index=0&Type=Mass
Clearly the peak at $\mathrm{m/z} = 27$ corresponds to the species $\ce{C2H3+}$, the "protonated acetylene".
I am interested in possible mechanisms for its formation from the molecular ion $\ce{t-BuH+}$.
organic-chemistry reaction-mechanism mass-spectrometry
DHMO
$\begingroup$ Electron ionization fragmentation spectra, unlike say CID spectra, can be reasonably predicted computationally for small molecules. This capability is only a few years old and is a major testament to how computational chemistry has advanced in the last few years. Here's a recent review. pubs.acs.org/doi/full/10.1021/acs.jpca.6b02907 $\endgroup$ – Curt F. Jan 16 '17 at 5:38
Some of the first mass spectrometrists to see this ion were Stevenson and Hipple in 1942.1 They were puzzled by it.
Fifteen years later, Rosenstock and Melton explained the source of this ion in an exhaustive study.2 First, the molecular ion $\ce{C4H10+}$ dissociates into fragments via five possible pathways, each with its own activation energy. See Table VIII in their paper, which I reproduce below.
None of these primary dissociation pathways gives rise to $\ce{C2H3+}$. Instead, it arises from a variety of processes from a subset (the metastable subset) of these five primary transitions. These processes are shown in Tables X and XI of their paper.
All of the reactions that give rise to $\ce{C2H3+}$ are probably happening to some extent, but the lowest-energy route there for isobutane (right column) appears to be via $\ce{C2H5+ -> C2H3+ + H2}$ (only 1.9 eV). However, $\ce{C2H5+}$ itself requires more energy to form than $\ce{C3H7+}$ ($m/z=43$), so the route $\ce{C3H7+ -> C2H3+ + CH4}$ may also occur as well.
The general conclusions of their paper are worth quoting:
The basic concept of the quasi-equilibrium theory is that mass spectra are formed by a series of competing consecutive unimolecular decomposition of excited ions. The results discussed above lend strong support to this concept. It was found that of several possible reactions of a given ion, the metastable transition was associated with the reaction having the least activation energy. The general rate equation appropriate for these systems is the product of a frequency factor and a function of the excitation energy $E$ and activation energy $\epsilon$. When the value of $E$ is such that the rate constant is of the order of $10^6~\mathrm{sec^{-1}}$, the rate constant is extremely sensitive to the value of $\epsilon$. The result is that a second reaction having an $\epsilon$ higher by more than a few tenths of an eV will have a considerably smaller rate constant, even if the frequency factor is larger by a factor of a thousand. Thus, in general, the correlation can be carried out in terms of activation energies alone.
D. P. Stevenson, J. A. Hipple, Jr., J. Am. Chem. Soc. 1942, 64 (7), 1588–1594.
H. M. Rosenstock, C. E. Melton, J. Chem. Phys. 1957, 26 (2), 314-322.
Curt F.
December 2020, 20:70
A good beginning: study protocol for a group-randomized trial to investigate the effects of sit-to-stand desks on academic performance and sedentary time in primary education
A. (Lex) E. Q. van Delden
Guido P. H. Band
Joris P. J. Slaets
Study protocol
Sedentary behavior is associated with health risks and academic under-achievement in children. Still, children spend a large part of their waking hours sitting at a desk at school. Recent short-term studies demonstrated the potential of sit-to-stand desks to reduce sitting time in primary education. The program of "A Good Beginning" was conceived to assess the long-term effects of sit-to-stand desks on sitting time in primary education, and to examine how sit-to-stand desks versus regular desks relate to academic performance, and measures of executive functioning, health and wellbeing. The present paper describes the design of this group-randomized trial, which started in 2017 and will be completed in 2019.
Children of two grade-three groups (age 8–9) following regular primary education in Leiden, The Netherlands, were recruited. A coin toss determined which group is the experimental group; the other group is the control group. All children in the experimental group received sit-to-stand desks. They are invited and motivated to reduce sedentary time at school, however, it is their own choice to sit or stand. Children in the control group use regular desks. Otherwise, both groups receive regular treatment. Outcomes are assessed at baseline (T0) and at five follow-up sessions (T1-T5) alternately in winter and summer seasons over three academic years. Primary outcome measures are academic performance, and the proportion of sitting time at school, measured with a 3D accelerometer. Secondary outcome measures are a number of measures related to executive functioning (e.g., N-back task for working memory), health (e.g., height and weight for BMI), and wellbeing (e.g., KIDSCREEN-52 for Quality of Life).
A Good Beginning is a two-and-a-half-year research program, which aims to provide a better understanding of the long-term effects of sit-to-stand desks on sedentary time at school and the relation between sitting time reduction and academic performance, executive functioning, health and wellbeing. The findings may serve as useful information for policy making and practical decision making for school and classroom environments.
The program of "A Good Beginning" is registered at the Netherlands Trial Register (NTR, https://www.trialregister.nl), number NL6166, registration date 24 November 2016.
Sedentary behavior · Primary school · Children · Health · Academic performance · Cognition · Quality of life · Intervention · Sit-to-stand desks
BSFSC
Bristol Stool Form Scale for Children
CCMO
Dutch Central Committee on Research Involving Human Subjects (Centrale Commissie Mensgebonden Onderzoek)
CITO
Central Institute for Test Development (Centraal Instituut voor Toets Ontwikkeling)
FFT
Fish Flanker Test
SRT
20-m Shuttle Run Test
WMO
Medical Research Involving Human Subjects Act
Following the advances in technology in the past century, people nowadays spend the largest part of their waking hours in a sitting position [1, 2]. In adults, sedentary behavior is associated with serious health risks, such as obesity, cardiovascular disease, diabetes, and reduced cardiorespiratory fitness [3]. Although less consistent than in adults, sedentary behavior is also related to health risks in children [4, 5]. In addition, children's sedentary behavior is associated with academic under-achievement [4, 5].
Children spend a large part of their waking hours at school sitting [6, 7, 8], in a classroom that is designed for sitting. Children's sedentary behavior, implicitly learned in school, may continue into adolescence [7, 8, 9, 10, 11] and adulthood [9, 10, 12]. Consequently, from childhood onwards, sedentary behavior becomes the rule, rather than the exception. Therefore, the school setting, in particular, is the place to reduce children's sedentary time and promote standing and active behavior, which may also prevent excessive sedentary behavior in later age. Given the fact that the environment has a very strong influence on behavior [13, 14, 15], sit-to-stand desks have the potential to invite and seduce children to sit less and to promote standing and active behavior.
On the one hand, reduced sitting time [16, 17, 18, 19, 20, 21] and increased energy expenditure [22, 23, 24] are relevant physical benefits of desks that promote standing reported in previous studies. On the other hand, concerns about the use of such desks have been expressed by teachers, and the long-term impact on academic performance is unclear [25]. In short-term studies, academic performance did not seem to suffer [26]. Moreover, better working memory capabilities [24], and attention and task focus [27] (i.e., the mechanisms suggested to underlie the relation between executive function and academic performance [28]) are advantages that come with the use of desks that promote standing. Hence, academic performance (and cognitive skills) may not suffer from, and may even improve with, less sitting and more standing. However, all these benefits have only been reported in short-term studies with a duration of a year or (in most studies, much) less, which may suffer from effects of novelty and season. Students' enthusiasm to sit less may wane when the novelty of the sit-to-stand desks wears off (cf. [29]), and children are more activity prone in summer than in winter [30].
The program of "A Good Beginning" entails a group-randomized trial in which the merits of sit-to-stand desks in the primary school classroom are investigated over a two-and-a-half-year period, beyond the effect of novelty and controlling for season. To this end, two grade-three groups (students aged 8–9 years) were recruited. In The Netherlands, grade-three students are the oldest students to recruit for a study of this duration. In the final year of primary school, sixth grade, the curricular activities differ notably from the activities in the curriculum of grades one to five. Children in one group, the experimental group, received sit-to-stand desks for the entire study. Students in the control group use regular, seated desks. The results of the program are expected by the beginning of 2020.
The primary aim of the program of "A Good Beginning" is to assess possible harm inflicted on academic performance as an adverse event of long-term implementation of sit-to-stand desks in the primary school classroom. Based on previous findings in shorter term studies [24, 25, 26, 27, 31], we expect that the sit-to-stand desks are proper alternatives to regular, seated desks without negative effects on academic performance, and possibly with positive effects. Secondly, in terms of effectiveness, the program aims to assess the long-term effect of sit-to-stand desks on sedentary time. Based on earlier findings, a reduction in sedentary time may be expected.
Furthermore, to gain a broader view on the long term effects, cognitive skills and indicators of health and wellbeing are investigated in relation to sedentary time. Cognitive skills relevant to academic performance, such as working memory, planning, inhibition, and cognitive flexibility, may be influenced by sedentary time [24, 27, 28]. Inactivity and sedentary behavior are negatively associated with wellbeing [32, 33, 34], physical fitness [4, 5], and strength [4], and positively associated with childhood obesity [4, 5], (cf. [35]), constipation [36, 37], and a higher risk of insomnia and sleep disturbance [4, 38]. Moreover, short sleep duration and sedentary behavior together are associated with childhood obesity [39], while executive function appears to mediate between sleep duration and sedentary behavior [40].
The outcomes of this study will provide a better understanding of the effects of the classroom environment on academic performance, sedentary time, cognition, health, and wellbeing. The findings may serve as useful information for policy making and practical decision making with regard to school and classroom environments, as well as for future long term efficacy trials on a larger scale. With this study protocol, together with the ethical approval and trial registration, we wish to contribute to transparency, reduce publication bias, and improve reproducibility. This study protocol prevents unnecessary duplication of research, and indicates when to expect the results and findings.
The program of "A Good Beginning" has received ethical approval from the Dutch Central Committee on Research Involving Human Subjects (CCMO, https://english.ccmo.nl, number NL60159.000.17). This study is conducted according to the principles of the Declaration of Helsinki [41] and in accordance with the Medical Research Involving Human Subjects Act (WMO) [42]. Risks associated with participation and physical and psychological discomfort are very small to negligible. There are no risks related to the use of sit-stand desks other than, or additional to, the use of regular desks. A SPIRIT checklist covering all recommended trial protocol items is provided as Additional file 1.
This is a prospective, two-armed, group-randomized trial (see Fig. 1). The trial started in 2017 and will be completed in 2019. Outcomes are assessed at baseline (T0; May 2017) and at five follow-up sessions (T1-T5) with an interval of approximately 6 months, alternately in summer (i.e., July 2017, 2018 and 2019) and winter (i.e., January/February 2018 and 2019) seasons over three academic years. After the baseline assessment, which took place after informed consent, a coin toss (by AEQ in the presence of the teachers and the deputy director) determined which group is the experimental group; the other group is the control group. Most tests during the assessment periods take place at school; activity tracking and keeping diaries (also) take place outside the school.
Flow-chart of the program of "A Good Beginning"
Blinding is not possible for children, teachers and parents/caregivers. Attempts are made to keep assessors blinded. However, working with children in this age range makes it difficult to uphold the blinding of assessors. Nevertheless, the influence of assessors on most outcomes, and at least on the primary outcomes, is considered minimal. The assessors are present neither at the time of measuring sitting time nor when academic performance is tested. The assessors are trained junior researchers from Leyden Academy on Vitality and Ageing (Leyden Academy) and the Cognitive Psychology Unit of the Institute of Psychology at Leiden University. Under primary and secondary outcome variables we indicate for which outcome assessments the assessors (and teachers) are present for supervision.
Each year the study is evaluated by the principal investigators, the involved teachers, the school (deputy) director, and members of the Parent-Teacher Association. During this meeting, the progress of the study, as well as safety issues related to testing and the use of the sit-stand desks are discussed.
Students of two grade-three groups (aged 8–9 years) following regular primary education were recruited from a school in Leiden, The Netherlands, whose director initiated this study. At this school there were four grade-three groups. The common group size is 26 to 30 students. The deputy director of the school selected two groups in consultation with the teachers, based on the willingness of the teachers to be involved in the study. All children in both groups were asked to participate. The allocation was only established after baseline assessment (which was after informed consent). In order to be eligible to participate in this study, a student had to meet all of the following criteria:
Follow regular primary education in third grade
Have a signed Informed Consent form to participate; given the age, parents/caregivers should sign the Informed Consent form for their child but the student will be asked to sign one as well
Be physically able to stand without any serious health issues or injuries; note that a student who is normally physically able to stand, but temporarily unable to do so because of recovery from a temporary injury or trauma, can still participate in this study
Students who object to participate are not included in the study. Examples of objection are (signals of) fear, sadness, and anger. We expected to recruit around 20 students per group. Students who did not participate from the start of the study, but wish to do so at a later moment, are also enrolled at the next moment of testing. Those who terminate their participation before the end of the study are asked for the reason(s) why. Demographics of students that in fact are recruited (and for whom we received written informed consent from themselves and their parents/caregivers) are presented in Table 1.
Table 1 Participants' age and sex
Intervention group (n = 19); control group (n = 19)
Years of age, median [range]: 9.0 [8.5 to 10.2]
Girls, n (%)
Sample size, power, and non-inferiority
In relation to possible harm to academic performance as an adverse event of long term use of sit-to-stand desks, we will investigate (non-)inferiority, rather than the effectiveness of sit-to-stand desks on measures of academic performance. For this, the 95% confidence interval (CI) and a margin of non-inferiority (the maximum acceptable extent of non-inferiority of an experimental treatment) are relevant. For the following calculations we have used the expected 20 students per group, and the distribution data of academic performance provided by CITO [https://www.cito.com]. Note that the data of the school participating in this study are similar to the data provided by CITO:
We assume $n_{\mathrm{exp}} = n_{\mathrm{con}} = 20$ and equal variance.
[Calculation table showing the standard error of the difference (SE diff), the 95% CI of the difference (95% CI diff), and the difference as a percentage of the mean (% mean); values not reproduced here.]
This means that, for non-inferiority to be accepted when μ = 0 or μ > 0, a difference between the mean scores of the two groups of at most around 10% will be tolerated. If μ < 0 (i.e., the mean score of the experimental group is more than 10% lower than the mean score of the control group), non-inferiority will not be accepted.
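The kind of calculation behind this margin can be sketched as follows; the standard deviation is a placeholder rather than the actual CITO distribution data, so the resulting numbers are purely illustrative.

```python
import math
from scipy import stats

n_exp = n_con = 20
sd = 10.0  # hypothetical common standard deviation of the academic test score

se_diff = sd * math.sqrt(1 / n_exp + 1 / n_con)  # SE of the mean difference
df = n_exp + n_con - 2
t_crit = stats.t.ppf(0.975, df)                  # two-sided 95% CI
ci_half_width = t_crit * se_diff                 # margin used to judge non-inferiority
print(f"SE diff = {se_diff:.2f}, 95% CI half-width = {ci_half_width:.2f}")
```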
With regard to the proportion of sitting time (i.e., the second primary outcome), we look at the effectiveness of sit-to-stand desks. For these calculations we used the results from the study by Clemes et al. [17]. We used the pooled statistics of the follow-up assessment of the control groups in this study to calculate the 95% CI. Based on these results, chances are small that in this study, with an expected 20 participants in each group, a significant difference in sitting time will be found between groups at the end of the study (i.e., the α-approach). The lower and upper limits of the 95% CI (H0) are 59.9 and 70.2 respectively, with an expected mean of 60.3 for the experimental group. However, we may fail to detect a difference when actually there is a difference (i.e., the β-approach). Following similar differences between experimental and control groups in the Clemes et al. [17] study, this probability of failing to detect a difference when actually there is a difference is more than 50% (β = 0.54; power = 0.46).
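The β-approach can likewise be illustrated with a standard two-sample power calculation; the effect size below is an assumed standardized difference, not a value reported by Clemes et al., so the resulting power is only indicative.

```python
from statsmodels.stats.power import TTestIndPower

effect_size = 0.6  # assumed Cohen's d for the sitting-time difference
power = TTestIndPower().power(
    effect_size=effect_size, nobs1=20, alpha=0.05, ratio=1.0
)
print(f"Power with 20 children per group: {power:.2f}")
```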
In the experimental classroom, newly developed sit-to-stand desks replaced the regular desks. These sit-to-stand desks are called Adjust-Table Basic, and are manufactured and provided by Presikhaaf Schoolmeubelen, Arnhem, The Netherlands (https://www.schoolmeubelen.com). The desks have been designed to be easy to operate by young children. They are operated by a lever, which releases a gas spring lock. The gas spring allows very low-effort raising and lowering of the desktop. The desktops of the sit-to-stand desks have the same dimensions as the desktops of the regular desks (i.e., 500 × 700 × 18 mm). Height can be infinitely adjusted between 75 and 120 cm.
Each student in the experimental group received such a sit-to-stand desk, including the students that do not participate in the study. They will keep their sit-to-stand desks until they go to secondary school. The teacher of the experimental group also received a sit-to-stand desk (for adults). The students in the experimental group are not obligated to stand; rather, they are invited to stand, first by the mere opportunity to stand offered by the sit-to-stand desks (cf. affordance: the qualities or properties of an object that define its possible uses or make clear how it can or should be used), and second by the teacher who functions as a role model. For instructions, however, the teacher can order all children to sit for a proper view of the (digital) blackboard at the front of the classroom. The students in the control group use their regular desks, as does their teacher.
Primary outcome variables
Primary outcome measures are academic performance and the proportion of sitting time at school. Academic performance is assessed with the standardized and norm-referenced CITO test battery [43]. The standard procedure for most schools in The Netherlands is to assess academic performance twice each year with the CITO test battery. The scores of arithmetic, orthography, and reading comprehension are used in this study. Academic performance is assessed shortly before the week that sitting time is measured.
Sitting time is measured with an Activ8® Professional activity tracker [44, 45]. The Activ8® Professional (30 × 32 × 10 mm) is a 3D accelerometer that classifies postures and activities when worn on the upper leg: lying, sitting, standing, walking, cycling, and running. As with most other activity trackers, the Activ8® Professional is not able to validly identify all categories [28]. In youths, it can validly distinguish basic postures and activities (i.e., good to excellent validity), but has difficulty in distinguishing standing from other movements in complex activities; in complex activities, standing is often underestimated and detected as walking [44]. Each assessment (i.e., twice each year), an activity tracker is fixed on the upper leg of all participating students with a skin-friendly, waterproof, and transparent dressing (Tegaderm™, 3 M), halfway between the hip and knee. The activity tracker is worn for a school week during each assessment period: 24 h each day for five consecutive days from Monday morning till Friday afternoon. The recording interval is set at 10 s. At the end of the week data are collected via the Activ8® Professional recording tool.
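A minimal sketch of how the primary outcome could be derived from such epoch-level recordings is given below; the column names and activity labels are assumptions rather than the actual Activ8® export format, and the school-day window ignores the earlier Wednesday finish for brevity.

```python
import pandas as pd

def school_sitting_proportion(epochs: pd.DataFrame) -> float:
    """Proportion of school-time 10-s epochs classified as sitting.

    epochs: one row per epoch with a 'timestamp' column (datetime) and an
    'activity' column with labels such as 'sitting', 'standing', 'walking'.
    """
    indexed = epochs.set_index("timestamp").sort_index()
    school = indexed.between_time("08:45", "15:00")
    weekdays = school[school.index.dayofweek < 5]  # Monday-Friday only
    return float((weekdays["activity"] == "sitting").mean())
```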
Secondary outcome variables
At each assessment period additional outcomes are measured following the week after sitting time is measured. Secondary outcomes are proportion of time spent in other postures and activities than sitting at school; proportion of sitting time and proportion of time spent in other postures and activities while awake outside school hours; cognitive skills; indicators of health; and indicators of wellbeing. Secondary outcome data, which may be influenced by the use of sit-to-stand desks [4, 19, 46, 47, 48, 49, 50, 51], will be analyzed in relation to (changes in) sitting time.
The proportion of time spent in postures (i.e., lying, standing) and activities (i.e., walking, cycling and running) other than sitting at school are extracted from the Activ8® recordings. Additionally, to compare the proportion of time spent in postures and activities at school and outside school hours, the proportion of time spent in the different postures and activities outside school hours are also extracted. The times used for this are between 7:00 AM and 8:45 AM (the beginning of the school day), and between 3:00 PM (the end of the school day; 12:30 PM on Wednesdays) and 10:00 PM. Furthermore, the Activ8® recordings will also be used to compare posture and behavior with the wake up and sleep times recorded in the sleep diary.
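Continuing the sketch above, the out-of-school waking windows named here can be selected in the same way (again treating Wednesdays like the other days for simplicity):

```python
import pandas as pd

def out_of_school_epochs(epochs: pd.DataFrame) -> pd.DataFrame:
    """Epochs falling in the 07:00-08:45 and 15:00-22:00 windows."""
    indexed = epochs.set_index("timestamp").sort_index()
    morning = indexed.between_time("07:00", "08:45")
    evening = indexed.between_time("15:00", "22:00")
    return pd.concat([morning, evening]).sort_index()
```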
Computer tests are used to study four dimensions of executive functioning that are related to and may underlie academic achievements: (I) working memory, (II) planning, (III) inhibition, and (IV) cognitive flexibility [52, 53]. All tests are presented using Inquisit 4 Computer Software [54] on a 15.6 in. ASUS N551 J 64-bit laptop computer screen. Completion of the four tests together takes between 30 to 60 min, dependent on the performance on each task. Children are administered the tests individually in a separate room under supervision of an assessor.
Handling working memory load is tested with the N-Back task [55, 56]. The N-Back task is a widely used working memory task that has face validity [57], although other types of validity, such as concurrent and convergent validity, are debatable [57, 58, 59]. In this modified version students are presented with a sequence of pictures (e.g., monkey, scissors, umbrella, chicken, cupcake), instead of letters. Pictures have been used in N-Back tasks before when testing with children [60, 61]. The task consists of indicating when the current picture matches the one from n steps before in the sequence. The load factor n is adjusted to make the task more or less difficult. In this study we use three levels: n = 1, n = 2, and n = 3. The first measure we use is the number of correct responses for each level. A high number of correct responses reflects a good handling of working memory load. The second measure is the mean response time for each level, which indicates the level of control over working memory processes [61, 62].
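A toy scorer for the two N-Back measures named here (number of correct responses and mean response time per load level) might look like the following; the trial-table format is an assumption and does not correspond to actual Inquisit output.

```python
import pandas as pd

def nback_summary(trials: pd.DataFrame) -> pd.DataFrame:
    """Summarize an assumed trial table with 'level', 'correct', 'rt_ms' columns."""
    return trials.groupby("level").agg(
        n_correct=("correct", "sum"),
        mean_rt_ms=("rt_ms", "mean"),
    )
```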
Planning is measured with the Tower of London task [63, 64]. This test starts with two pictures on the screen. Both pictures show a board with three vertical pegs of varying length, and three colored beads (i.e., red, blue and green). The first peg can hold three beads, the second two beads, and the third one bead. One picture shows the goal state and the other is the workspace board, where participants can rearrange the beads in the least number of moves from the start constellation to the goal state. There are twelve trials with increasing difficulty to complete. The first measure that is used is the total number of excess moves a participant makes beyond the minimum required moves. A low number of excess moves reflects good planning. The second measure of interest is the mean solution time. A low solution time reflects good efficiency on the task.
Inhibition is measured with an Inquisit version of the Fish Flanker test (FFT) [65]. The FFT is a basic derivative of the flanker visual filtering test that was developed by Eriksen and Eriksen [66]. The FFT is widely used to test one specific kind of response inhibition: the resistance to distractor interference. This refers to the ability to efficiently ignore irrelevant visual distractor information while processing target stimuli. A trial consists of a picture with five identical fishes. One fish in the center and two fishes on each side of the center fish. The direction the center fish is facing (i.e., direction of the head of the fish) determines what button should be pressed, left or right. All neighboring fishes move in the same direction as the center fish in congruent trials, or in the opposite direction in incongruent trials. Trials are interpreted as false in three cases: (I) when the wrong button is pressed; (II) when the response time to a trial exceeds 3000 milliseconds (an inattentive error); or (III) when the response time is shorter than 200 milliseconds (early response). The measures of interest are the number of erroneous responses and the mean response time, each registered for both the congruent and the incongruent trials. Both a low error rate and a low response time reflect good resistance to distractor interference.
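The three error rules for the Fish Flanker test translate directly into a small check; the input format is an assumption for illustration only.

```python
def flanker_trial_is_error(pressed_key: str, correct_key: str, rt_ms: float) -> bool:
    """Error if the wrong key was pressed, or the response was >3000 ms or <200 ms."""
    return pressed_key != correct_key or rt_ms > 3000 or rt_ms < 200
```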
Cognitive flexibility is measured with a digital version of the Wisconsin Card Sorting Task [67]. In this task a student is presented with cards that vary in the presented shapes on them, the color of the shapes, and the number of shapes. The student is to match the upper "example" card on the screen with one of four cards presented below the example card by clicking with the computer mouse on one of them. The sorting rule in effect is the dimension to which the correct choice matches the example card, and the student should find out what sorting rule is in effect. By using the feedback given to the student when making a correct or a false choice the student can theoretically find the sorting rule in effect after two trials. There are three categories, each containing four different values: (I) color, (II) form, and (III) number. Each category is presented twice. In our modified version, the criterion for successful completion of a category is applying the correct sorting rule four times in succession. After a streak of four correct responses, the sorting rule changes without the student being informed that it will. The maximum number of trials is set at 128. The measure of interest is the number of perseverative errors. This is the number of errors where the student continues to sort cards according to the same rule despite negative feedback. A low number of perseverative errors reflects good cognitive flexibility.
Indicators of health are measured with a number of tests and diaries. The 20-m Shuttle Run Test (SRT) is used to measure physical fitness [68]. The SRT is a widely used and validated test [69]. It is a simple, easy-to-administer test, and a large number of individuals can be tested simultaneously. The SRT consists of stages of continuous, incremental speed running between two lines 20 m apart. The initial speed is set at 8.5 km/h and increases by 0.5 km/h per minute. Audio signals indicate the speed. The test ends when a student fails to reach the end lines concurrent with the audio signals on two consecutive occasions. The reached stage is recorded. The assessment is carried out by an assessor together with the physical exercise teacher.
Hand dynamometry is used to measure grip strength. A hand dynamometer is a valid and reliable tool for measuring upper body strength and hand function [70]. Each student is tested with a calibrated Jamar® hydraulic dynamometer (J.A. Preston Corporation, Clifton, NJ). The dynamometer is set at the second handle position, and the standardized testing position for measuring grip strength is used [71]. The students are allowed a total of four attempts; twice with each hand. The highest score (in kg) for each hand is recorded by the assessor.
Lower body power is assessed by the assessor with the vertical jump, as first described by Sargent in 1921 [72]. The vertical jump is a practical, low-cost, reliable [73], and valid [74] test for two-legged explosive power. First the standing reach height is measured: the student stands straight beside a wall and reaches up with the hand closest to the wall, while keeping the feet flat on the ground. The point of the fingertips is recorded using a wall mounted measuring tape (Seca 206; Seca gmbh & co. kg., Hamburg, Germany). Then the student jumps vertically from a flat footed position and tries to touch the wall as high as she or he can. The highest point of three attempts is recorded. The distance to the nearest 0.5 cm between the standing reach height and the highest vertical jump height is the recorded score.
Each student's weight (kg) and height (cm) is measured with light clothes and without shoes by the assessor. Weight is measured to the nearest 0.1 kg with a validated digital scale (Omron BF511; Omron Corp, Kyoto, Japan) [54]. Height is measured to the nearest 0.5 cm using a wall mounted measuring tape (Seca 206; Seca gmbh & co. kg., Hamburg, Germany). When measuring height, the student stands on a level floor with heels together; weight evenly distributed; heels, buttocks and shoulders against the wall; and arms loosely hanging at the sides with palms facing the thighs. BMI is calculated using the following formula: body mass/height² (kg/m²). The scale used to measure body weight also allows bioelectrical impedance analysis of the body composition [75]. The student stands barefoot on metal footpads and grasps a handle with arms extended in front of her/his chest. Percentages of total body fat and total body muscle are measured with eight electrodes (two under each foot and two in each hand) and recorded.
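For completeness, the BMI formula quoted above as a small helper (weight in kilograms, height in centimetres):

```python
def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / height_m ** 2
```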
A diary including the Dutch version of the Bristol Stool Form Scale for children (BSFSC) is used to report stool [76]. The BSFSC is a valid and reliable method for children to keep a record of their stool [77]. At the start of the same week that the students wear their activity tracker, they receive their BSFSC. For five consecutive school days students write down their stool form (five categories ranging from "separate hard lumps like nuts" to "watery, no solid pieces"), stool frequency, occasions of abdominal pain and pain during defecation, and whether they use medication for their stool.
In the same week that students keep a stool diary, they also keep a sleep diary. The diary is based on the Consensus Sleep Diary [78], which has become the de-facto sleep diary in sleep research, but has not yet been validated for children. With the sleep diary students keep track of the time they go to bed, fall asleep at night, get out of bed in the morning, and how often they wake up at night. They also rate their sleep and how well-rested they are, and report whether medication is needed to sleep. This diary is also used to confirm two transitions between inactivity and activity (i.e., the times of falling asleep and waking up) as recorded with the activity tracker. Parents are asked to remind the students to fill in both diaries.
Indicators of wellbeing are measured with questionnaires, which are administered in the classroom under supervision of the teacher. A faces scale is used to measure happiness [79, 80]. The student answers questions by indicating one face on a five-point smiley-face scale ranging from a green, happy face to a red, sad face. All six questions address the student's feeling of happiness in general or in a specific situation (e.g., "How do you feel at the moment?" and "How do you usually feel at school?"). This questionnaire is not validated.
The KIDSCREEN-52 is a questionnaire to measure quality of life (QoL) in children and adolescents [81]. This is a reliable and valid, generic instrument that is used throughout many countries in the world [82]. The KIDSCREEN questionnaire consists of 52 items to be answered by the student. Students' QoL is assessed in ten dimensions: physical well-being; psychological well-being; moods and emotions; self-perception; autonomy; relations with parents and home life; social support and peers; school environment; social acceptance (bullying); and financial resources. The items are five-point Likert scales to assess either the frequency (never to always) of certain behaviors/feelings or the intensity of an attitude (not at all to extremely). The recall period for each item is 1 week.
Satisfaction with the school environment is assessed with a short questionnaire. Eight items address the students' satisfaction with the school building, their classroom, and their school furniture (e.g., "I am satisfied with how our classroom looks." and "The furniture in our classroom is looking good."). Students rate to what extent they agree or disagree on a four-point Likert-scale. These eight items are selected from the Dutch Quality Indicator in Primary Education (Kwaliteitsmeter Primair Onderwijs; Van Beekveld & Terpstra, Hoorn, The Netherlands); a commonly used, however non-validated, student questionnaire to rate the quality of a school.
Records of adherence and adverse events are also kept. Adherence is defined as the percentage of students in the experimental group who keep participating in the study measured at every test week. Adverse events are defined as any undesirable experience occurring to a subject during the study, whether or not considered related to the investigational product, testing procedures or experimental intervention. All adverse events reported spontaneously by the student or observed by a teacher, parent or assessor are recorded.
A complete list of primary and secondary outcome measures and supervision is provided as Additional file 2. Regarding supervision, assessors are present during computer testing, SRT, hand dynamometry, vertical jump, and height, weight, and body composition assessments. Teachers are present when students fill out questionnaires, and parents help the students to keep record of diaries.
Data are handled confidentially and anonymously. Each student's data are stored under a unique eight-digit code that is not related to the student's name, initials or birth date. The key to the codes remains at the school with the director. All research data (i.e., data with codes) are stored on a secured server computer at the Leyden Academy. The school (code key) and the Leyden Academy (data) are physically separated. The handling of data complies with the General Data Protection Regulation [83]. The data will be stored for 10 years. In all reporting no codes will be reported.
The database will be managed in Microsoft Excel 2016 and IBM SPSS Statistics, Version 23.0. The results of the Activ8® Professional activity tracker are collected with the Activ8® Professional recording tool; the computer test data from the executive functioning tests are first collected with the Inquisit 4 Computer Software. Data collected on paper forms are checked and data entry is performed by the Data Manager at Leyden Academy. All authors (AEQ, GPH, JPJ) are given access to the full dataset.
The statistical analysis will be conducted in SPSS. A two-tailed significance level of .05 will be used for all tests.
Summary statistics for the baseline measures and participant demographics for both groups will first be examined. Dependent on the variable, the means, medians or frequencies will be compared between both groups using a two-sample T-test, a Mann-Whitney U test or chi-square test, as appropriate.
Outcome data obtained from all students will be included in the data analysis, and data will be analyzed as per group allocation. In our statistical analysis the longitudinal and multilevel structure of the data is acknowledged and will be taken into account in achievement growth modeling. For this, mixed modeling will be used, because this specifically allows for the analysis of repeated measures and nested data. Furthermore, missing data are better handled in mixed modeling. We will first determine the growth (longitudinal changes) for the different outcome measures on an individual and on a group level. In determining the proportion of variance explained by group allocation (i.e., sit-to-stand or regular desks) for sitting time as well as for academic performance (i.e., the primary outcome measures), explanatory variables such as age, gender and baseline outcomes will be added to the model. We will do this both on an individual and on a group level, also for a number of secondary outcome measures. We will test for significance of the effect of group allocation on a group level.
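The growth-modeling strategy described here could, outside SPSS, be sketched with a linear mixed model along the following lines; the synthetic data, variable names, and model formula are illustrative assumptions rather than the study's prespecified model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Tiny synthetic long-format dataset standing in for the real repeated measures.
rng = np.random.default_rng(0)
n_students, n_times = 38, 6
long_data = pd.DataFrame({
    "student_id": np.repeat(np.arange(n_students), n_times),
    "time": np.tile(np.arange(n_times), n_students),
    "group": np.repeat(rng.integers(0, 2, n_students), n_times),
    "age": np.repeat(rng.normal(9.0, 0.5, n_students), n_times),
})
long_data["sitting_proportion"] = (
    0.65
    - 0.02 * long_data["group"] * long_data["time"]
    + rng.normal(0, 0.05, len(long_data))
)

# Random intercept and slope for time per student; group allocation, time and
# age as fixed effects (baseline score and sex could be added the same way).
model = smf.mixedlm(
    "sitting_proportion ~ time * group + age",
    data=long_data,
    groups=long_data["student_id"],
    re_formula="~time",
)
print(model.fit().summary())
```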
Sedentary behavior is associated with academic under-achievement and health risks in children [4, 5]. Still, children spend a large part of their waking hours sitting at a desk at school [6]. The program of "A Good Beginning" was conceived to assess the long-term effects of sit-to-stand desks on academic performance and sitting time in primary education, and to examine how sit-to-stand desks versus regular desks relate to measures of executive functioning, health and wellbeing. The paper presents the design of this group-randomized trial.
The program of "A Good Beginning" is expected to provide a significant contribution to the understanding of the effects of sit-to-stand desks on sedentary time at school and academic performance. The program is contingent upon the fact that stakeholders learn to understand the detrimental effects of long-term inactivity and excessive sedentary behavior, also at a young age. Given that children spend a large part of their waking hours at school sitting at a desk [6], and the fact that the environment has a very strong influence on behavior [13, 14, 15], the school setting, in particular, is the place to reduce children's sedentary time. Accordingly, this study is relevant and needed to objectively guide policy making and practical decision making with regard to school and classroom environments.
Recent short-term studies have demonstrated relevant benefits of using sit-to-stand desks with regard to sitting time reduction [16, 17, 18, 19, 20], without negatively affecting academic performance [26]. Moreover, in their study with grade one students (6 to 7 years old), Blake et al. found a positive effect of the use of desks that promote standing on attention and focus, as reported by the teachers [24]. In the study by Dornhecker et al. second to fourth grade students were observed by trained research assistants [26]. Based on these observations, they conclude that students with desks that promote standing exhibited greater levels of academic engagement (i.e., activities such as answering a question, raising a hand, participating in active discussion) than students with regular desks. However, Koepp et al. did not find a significant difference in teacher reported concentration between sixth graders that used desks that promote standing and those that used regular desks [31]. The program of "A Good Beginning" is a unique project that goes beyond the short-term view by covering two-and-a-half years, with assessments twice a year in winter and summer seasons. Consequently, it will not suffer from effects of novelty, and effects of season are taken into account. The program specifically allows insight into the combination of assessments of objective sedentary time and academic performance over a longer period of time, which is of particular interest to educators.
The study also has some limitations. First, the sample size is most likely too small to detect a significant difference in sitting time. The downside is that a small sample size limits population estimation and extrapolation of findings. In addition, a possible true effect may be masked by a relatively large variance in our sample (this is the reason why we calculated the probability of failing to detect a difference). On the upside, we use a repeated-measures design instead of the pre-post design that was used in the study by Clemes et al. [17], which increases the power to some extent. However, given that the primary aim was to assess possible harm to academic performance, the effect of sit-to-stand desks on sedentary time reduction (and the related sample size calculation based on the results of the study by Clemes et al. [17]) was considered subordinate to the investigation of possible harm to academic performance (and the related non-inferiority calculation). Given these circumstances, we accepted a small sample size, which is around half the sample size of the study by Clemes et al. [17], and we will treat the outcomes with caution.
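To illustrate what "the probability of failing to detect a difference" means in practice, the sketch below computes the Type II error rate (one minus power) for a simple two-group comparison. The effect size is an assumed, illustrative value, and the calculation ignores the clustered design, so it does not reproduce the study's actual power analysis.

```python
# Illustrative Type II error calculation for a two-group comparison.
# The effect size below is an assumption for illustration only, and the
# calculation ignores clustering due to group randomization.
from statsmodels.stats.power import TTestIndPower

def prob_missing_effect(effect_size: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Return 1 - power, i.e., the probability of failing to detect the effect."""
    power = TTestIndPower().power(effect_size=effect_size,
                                  nobs1=n_per_group,
                                  ratio=1.0,
                                  alpha=alpha)
    return 1.0 - power

# Example: a medium effect (d = 0.5) with 19 children per group.
print(prob_missing_effect(0.5, 19))  # roughly 0.7, i.e., a true effect is easily missed
```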
An additional limitation follows from the group randomization, which is inevitable in studies with school classes. Group randomization has a negative effect on the power of the study. With the proposed statistical analysis (i.e., mixed modeling), we will be able to determine the extent of the negative effect of this design on the power of the study and discuss the consequences.
Furthermore, this study involves only one school. Potentially limited external validity and problems with blinding are common limitations in single-center (in this case, single-school) studies, which may also apply to this study. We will take these limitations into consideration in our analyses and findings.
The program of "A Good Beginning" started in 2017 and results are expected by the beginning of 2020. The results of this study may provide relevant information to set up a larger scale efficacy trial, possibly with children of different ages (also in secondary education), more schools with different populations, and a protocol focusing primarily on sedentary time reduction.
Supplementary information accompanies this paper at https://doi.org/10.1186/s12889-019-8135-9.
This program is funded by the Municipality of Leiden, The Netherlands. We thank Eef Pothast and Leonard de Groot for their contributions to the development of the computer tests and associated analysis tools, and for their help during the first couple of assessments. We are grateful to Wim de Goei and Ronald Burgers for developing new sit-to-stand desks based on the students' and our preferences. We are also grateful to Henk Lardee and Richard van den Berg for the school's commitment to the program. Finally, we would like to thank all the children and teachers involved for their participation in this project.
Public disclosure and publication policy
The results of this study will be disclosed unreservedly, regardless of positive or negative results. Findings will be written in articles and offered to international peer-reviewed scientific journals for publication. In addition, a report in Dutch will be composed, which will first be offered to the participating children, their parents/caregivers, and the school. This report and the published scientific articles will be publicly available on the website of the Leyden Academy on Vitality and Ageing (www.leydenacademy.nl).
AEQ is the principal investigator of the "A Good Beginning" program. AEQ conceived the study and is responsible for the study design, execution and funding. AEQ drafted and revised the manuscript. All authors contributed to the protocol design. GPH contributed to the protocol design and was involved in developing the computer tests for executive functioning. GPH critically reviewed the manuscript. JPJ contributed to the protocol design. JPJ critically reviewed the manuscript. All authors read and approved the final manuscript.
The Municipality of Leiden funds this study with grant number 4661165. Presikhaaf Schoolmeubelen provided 30 sit-to-stand desks to the school. The Municipality of Leiden and Presikhaaf Schoolmeubelen have no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.
The program of "A Good Beginning", including information letters and informed consent forms, has received ethical approval from the Dutch Central Committee on Research Involving Human Subjects (CCMO, https://english.ccmo.nl, number NL60159.000.17).
Before the testing started (and before the groups were allocated), the parents/caregivers, children, and teachers were informed. An information evening was organized for parents/caregivers and teachers; information was provided by the principal investigator and questions were answered. The children were informed in class, also by the principal investigator. Furthermore, children as well as parents/caregivers received an information letter (separate documents for children and parents/caregivers) and the contact details of the independent expert. Together with the letters, forms were sent to ask for signed consent; these were also separate documents for children and parents/caregivers. Hence, each individual child and her/his parents/caregivers had to sign for informed consent before the child was enrolled in the study. Ten days were given to consider the decision to participate. Children who did not participate from the start of the study, but wished to do so at a later moment, were also enrolled at the next moment of testing. We have received written parental consent, as well as written consent from each child, for the 38 children that were enrolled in the study (i.e., 19 in each group). Students in the intervention group who did not wish to participate still received a sit-to-stand desk (and it is up to them whether they use its functions), but they are not involved in any testing.
The "gedragscode gezondheidsonderzoek",31 as well as the "code of conduct relating to expressions of objection by minors participating in medical research"32 are followed in this study. The grounds on which a child is deemed to object are (signals of) fear, sadness, or anger. In case (signals of) fear, sadness, and anger are expressed, parents/caregivers and teachers are asked to report this to the principal investigator.
12889_2019_8135_MOESM1_ESM.docx (20 kb)
Additional file 1. SPIRIT Checklist.
Additional file 2. Outcome Measures.
Janssen X, Basterfield L, Parkinson KN, Pearce MS, Reilly JK, Adamson AJ, et al. Objective measurement of sedentary behavior: impact of non-wear time rules on changes in sedentary time. BMC Public Health. 2015;15:504.
McVeigh JA, Winkler EA, Howie EK, Tremblay MS, Smith A, Abbott RA, et al. Objectively measured patterns of sedentary time and physical activity in young adults of the Raine study cohort. Int J Behav Nutr Phys Act. 2016;13:41.
de Rezende LF, Rodrigues Lopes M, Rey-López JP, Matsudo VK, Luiz OC. Sedentary behavior and health outcomes: an overview of systematic reviews. PLoS One. 2014;9(8):e105620.
Carson V, Hunter S, Kuzik N, Gray CE, Poitras VJ, Chaput JP, et al. Systematic review of sedentary behaviour and health indicators in school-aged children and youth: an update. Appl Physiol Nutr Metab. 2016;41:S240–65.
Tremblay MS, LeBlanc AG, Kho ME, Saunders TJ, Larouche R, Colley RC, et al. Systematic review of sedentary behaviour and health indicators in school-aged children and youth. Int J Behav Nutr Phys Act. 2011;8:98.
Mooses K, Mägi K, Riso E-M, Kalma M, Kaasik P, Kull M. Objectively measured sedentary behaviour and moderate and vigorous physical activity in different school subjects: a cross-sectional study. BMC Public Health. 2017;17:108.
Contardo Ayala AM, Salmon J, Dunstan DW, Arundell L, Parker K, Timperio A. Longitudinal changes in sitting patterns, physical activity, and health outcomes in adolescents. Children (Basel). 2018;6(1):2.
Janssen X, Mann KD, Basterfield L, Parkinson KN, Pearce MS, Reilly JK, et al. Development of sedentary behavior across childhood and adolescence: longitudinal analysis of the Gateshead millennium study. Int J Behav Nutr Phys Act. 2016;13:1–10.
Biddle SJH, Pearson N, Ross GM, Braithwaite R. Tracking of sedentary behaviours of young people: a systematic review. Prev Med. 2010;51:345–51.
Ortega FB, Konstabel K, Pasquali E, Ruiz JR, Hurtig-Wennlöf A, Mäestu J, et al. Objectively measured physical activity and sedentary time during childhood, adolescence and young adulthood: a cohort study. PLoS One. 2013;8(4):e60871.
Pearson N, Haycraft E, P Johnston J, Atkin AJ. Sedentary behaviour across the primary-secondary school transition: a systematic review. Prev Med. 2017;94:40–7.
Hayes G, Dowd KP, MacDonncha C, Donnelly AE. Tracking of physical activity and sedentary behavior from adolescence to young adulthood: a systematic literature review. J Adolesc Health. 2019;65(4):446–54.
Dannenberg AL, Frumkin H, Jackson RJ, editors. Making healthy places: designing and building for health, well-being, and sustainability. Washington DC: Island Press; 2011.
Kahneman D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2011.
Kopcakova J, Dankulincova Veselska Z, Madarasova Geckova A, Bucksch J, Nalecz H, Sigmundova D, et al. Is a perceived activity-friendly environment associated with more physical activity and fewer screen-based activities in adolescents? Int J Environ Res Public Health. 2017;14(1):39.
Aminian S, Hinckson EA, Stewart T. Modifying the classroom environment to increase standing and reduce sitting. Build Res Inf. 2015;43(5):631–45.
Clemes SA, Barber SE, Bingham DD, Ridgers ND, Fletcher E, Pearson N, et al. Reducing children's classroom sitting time using sit-to-stand desks: findings from pilot studies in UK and Australian primary schools. J Public Health (Oxf). 2016;38(3):526–33.
Contardo Ayala AM, Salmon J, Timperio A, Sudholz B, Ridgers ND, Sethi P, et al. Impact of an 8-month trial using height-adjustable desks on children's classroom sitting patterns and markers of cardio-metabolic and musculoskeletal health. Int J Environ Res Public Health. 2016;13(12):1227.
Ee J, Parry S, Oliveira BI, McVeigh JA, Howie E, Straker L. Does a classroom standing desk intervention modify standing and sitting behaviour and musculoskeletal symptoms during school time and physical activity during waking time? Int J Environ Res Public Health. 2018;15(8):1668.
Hinckson EA, Aminian S, Ikeda E, Stewart T, Oliver M, Duncan S, et al. Acceptability of standing workstations in elementary schools: a pilot study. Prev Med. 2013;56:82–5.
Silva DR, Minderico CS, Pinto F, Collings PJ, Cyrino ES, Sardinha LB. Impact of a classroom standing desk intervention on daily objectively measured sedentary behavior and physical activity in youth. J Sci Med Sport. 2018;21(9):919–24.
Benden ME, Blake JJ, Wendel ML, Huber JC Jr. The impact of stand-biased desks in classrooms on calorie expenditure in children. Am J Public Health. 2011;101:1433–6.
Benden ME, Zhao H, Jeffrey CE, Wendel ML, Blake JJ. The evaluation of the impact of a stand-biased desk on energy expenditure and physical activity for elementary school students. Int J Environ Res Public Health. 2014;11:9361–75.
Blake JJ, Benden ME, Wendel ML. Using stand/sit workstations in classrooms: lessons learned from a pilot study in Texas. J Public Health Manag Pract JPHMP. 2012;18:412–5.
Hinckson E, Salmon J, Benden M, Clemes SA, Sudholz B, Barber SE, et al. Standing classrooms: research and lessons learned from around the world. Sports Med. 2016;46(7):977–87.
Dornhecker M, Blake JJ, Benden M, Zhao H, Wendel M. The effect of stand-biased desks on academic engagement: an exploratory study. Int J Health Promot Educ. 2015;53:271–80.
Mehta RK, Shortz AE, Benden ME. Standing up for learning: a pilot investigation on the neurocognitive benefits of stand-biased school desks. Int J Environ Res Public Health. 2015;13:59.
Bergman Nutley S, Söderqvist S. How is working memory training likely to influence academic performance? Current evidence and methodological considerations. Front Psychol. 2017;8:69.
Shin G, Feng Y, Jarrahi MH, Gafinowitz N. Beyond novelty effect: a mixed-methods exploration into the motivation for long-term activity tracker use. JAMIA Open. 2019;2(1):62–72.
Silva P, Santos R, Welk G, Mota J. Seasonal differences in physical activity and sedentary patterns: the relevance of the PA context. J Sports Sci Med. 2011;10(1):66–72.
Koepp GA, Snedden BJ, Flynn L, Puccinelli D, Huntsman B, Levine JA. Feasibility analysis of standing desks for sixth graders. Infant Child Adolesc Nutr. 2012;4(2):89–92.
Asare M. Sedentary behaviour and mental health in children and adolescents: a meta-analysis. J Child Adolesc Behav. 2015;3:259.
Hoare E, Milton K, Foster C, Allender S. The associations between sedentary behaviour and mental health among adolescents: a systematic review. Int J Behav Nutr Phys Act. 2016;13:108.
Ussher MH, Owen CG, Cook DG, Whincup PH. The relationship between physical activity, sedentary behaviour and psychological wellbeing among adolescents. Soc Psychiatry Psychiatr Epidemiol. 2007;42(10):851–6.
Biddle SJ, García Bengoechea E, Wiesner G. Sedentary behaviour and adiposity in youth: a systematic review of reviews and analysis of causality. Int J Behav Nutr Phys Act. 2017;14(1):43.
Rajindrajith S, Devanarayana NM, Crispus Perera BJ, Benninga MA. Childhood constipation as an emerging public health problem. World J Gastroenterol. 2016;22(30):6864–75.
Yamada M, Sekine M, Tatsuse T. Psychological stress, family environment, and constipation in Japanese children: the Toyama birth cohort study. J Epidemiol. 2019;29(6):220–6.
Yang SY, McCracken LM, Moss-Morris R. Psychological treatments for chronic pain in East and Southeast Asia: a systematic review. Int J Behav Med. 2016;23(4):473–84.
Must A, Parisi SM. Sedentary behavior and sleep: paradoxical effects in association with childhood obesity. Int J Obes (Lond). 2009;33(Suppl 1):S82–6.
Warren C, Riggs N, Pentz MA. Executive function mediates prospective relationships between sleep duration and sedentary behavior in children. Prev Med. 2016;91:82–8.
Declaration of Helsinki. World Medical Association. 2018. https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/. Accessed 1 July 2019.
Wet medisch-wetenschappelijk onderzoek met mensen. https://wetten.overheid.nl/BWBR0009408/2019-04-02. Rijksoverheid. Accessed 1 July 2019.
Hollenberg J, van der Lubbe M, Sanders P. Toetsen op school: Primair onderwijs: CITO; 2017. https://www.cito.nl/-/media/files/kennis-en-innovatie-onderzoek/toetsen-op-school/cito_toetsen_op_school_po.pdf?la=nl-NL. Accessed 1 July 2019
Lankhorst K, van den Berg-Emons RJ, Bussmann JBJ, Horemans HLD, de Groot JF. A novel tool for quantifying and promoting physical activity in youths with typical development and youths who are ambulatory and have motor disability. Phys Ther. 2019;99:354–63.
Valkenet K, Veenhof C. Validity of three accelerometers to investigate lying, sitting, standing and walking. PLoS One. 2019;14(5):e0217545.
Biddle SJ, Asare M. Physical activity and mental health in children and adolescents: a review of reviews. Br J Sports Med. 2011;45(11):886–95.
Minges KE, Chao AM, Irwin ML, Owen N, Park C, Whittemore R, et al. Classroom standing desks and sedentary behavior: a systematic review. Pediatrics. 2016;137:1–18.
Sherry AP, Pearson N, Clemes SA. The effects of standing desks within the school classroom: a systematic review. Prev Med Rep. 2016;3:338–47.
Suchert V, Hanewinkel R, Isensee B. Sedentary behavior and indicators of mental health in school-aged children and adolescents: a systematic review. Prev Med. 2015;76:48–57.
Verburgh L, Königs M, Scherder EJA, Oosterlaan J. Physical exercise and executive functions in preadolescent children, adolescents and young adults: a meta-analysis. Br J Sports Med. 2014;48(12):973–9.
Wick K, Faude O, Manes S, Zahner L, Donath L. I can stand learning: a controlled pilot intervention study on the effects of increased standing time on cognitive function in primary school children. Int J Environ Res Public Health. 2018;15:356.
Visu-Petra L, Cheie L, Benga O, Miclea M. Cognitive control goes to school: the impact of executive functions on academic performance. Procedia Soc Behav Sci. 2011;11:240–4.
Finn AS, Kraft MA, West MR, Leonard JA, Bish CE, Martin RE, et al. Cognitive skills, student achievement tests, and schools. Psychol Sci. 2014;25(3):736–44.
Inquisit 4 [Computer software]. (2015). Retrieved from https://www.millisecond.com.
Jaeggi S, Studer-Luethi B, Buschkuehl M, Su Y, Jonides J, Perrig W. The relationship between n-back performance and matrix reasoning - implications for training and transfer. Intelligence. 2011;38(6):625–35.
Owen AM, McMillan KM, Laird AR, Bullmore E. N-back working memory paradigm: a meta-analysis of normative functional neuroimaging studies. Hum Brain Mapp. 2005;25(1):46–59.
Kane MJ, Conway ARA, Miura TK, Colflesh GJH. Working memory, attention control, and the N-back task: a question of construct validity. J Exp Psychol Learn Mem Cogn. 2007;33(3):615–22.
Jaeggi SM, Buschkuehl M, Perrig WJ, Meier B. The concurrent validity of the N-back task as a working memory measure. Memory. 2010;18(4):394–412.
Miller KM, Price CC, Okun MS, Montijo H, Bowers D. Is the n-back task a valid neuropsychological measure for assessing working memory? Arch Clin Neuropsychol. 2009;24(7):711–7.
Ciesielski KT, Lesnik PG, Savoy RL, Grant EP, Ahlfors SP. Developmental neural networks in children performing a categorical N-Back task. Neuroimage. 2006;33(3):980–90.
Katz B, Jaeggi S, Buschkuehl M, Stegman A, Shah P. Differential effect of motivational features on training improvements in school-based cognitive training. Front Hum Neurosci. 2014;8:242.
McCabe DP, Roediger HL, McDaniel MA, Balota DA, Hambrick DZ. The relationship between working memory capacity and executive functioning: evidence for a common executive attention construct. Neuropsychology. 2010;24(2):222–43.
Unterrainer JM, et al. Planning abilities and the tower of London: is this task measuring a discrete cognitive function? J Clin Exp Neuropsychol. 2004;26(6):846–56.
Krikorian R, Bartok J, Gay N. Tower of London procedure: a standard method and developmental data. J Clin Exp Neuropsychol. 1994;16:840–50.
Christ SE, Kester LE, Bodner KE, Miles JH. Evidence for selective inhibitory impairment in individuals with autism spectrum disorder. Neuropsychology. 2011;25:690–701.
Eriksen B, Eriksen C. Effects of noise letters upon the identification of a target letter in a non-search task. Percept Psychophys. 1974;16:143–9.
Berg EA. A simple objective technique for measuring flexibility in thinking. J Gen Psychol. 1948;39:15–22.
Léger LA, Mercier D, Gadoury C, Lambert J. The multistage 20 metre shuttle run test for aerobic fitness. J Sports Sci. 1988;6(2):93–101.
Castro-Piñero J, Artero EG, España-Romero V, Ortega FB, Sjöström M, Suni J, et al. Br J Sports Med. 2010;44(13):934–43.
Van Den Beld WA, Van Der Sanden GAC, Sengers RCA, Verbeek ALM, Gabreëls FJM. Validity and reproducibility of the Jamar dynamometer in children aged 4–11 years. Disabil Rehabil. 2006;28:1303–9.
Mathiowetz V, Weber K, Volland G, Kashman N. Reliability and validity of grip and pinch strength evaluation. J Hand Surg. 1984;9A:222–6.
Sargent DA. The physical test of a man. Am Phys Ed Rev. 1921;26:188–94.
Ayán-Pérez C, Cancela-Carral JM, Lago-Ballesteros J, Martínez-Lemos I. Reliability of Sargent jump test in 4- to 5-year-old children. Percept Mot Skills. 2017;124(1):39–57.
de Salles PG, Vasconcellos FV, de Salles GF, Fonseca RT, Dantas EH. Validity and reproducibility of the Sargent jump test in the assessment of explosive strength in soccer players. J Hum Kinet. 2012;33:115–21.
Bosy-Westphal A, Later W, Hitze B, Sato T, Kossel E, Gluer CC, et al. Accuracy of bioelectrical impedance consumer devices for measurement of body composition in comparison to whole body magnetic resonance imaging and dual X-ray absorptiometry. Obes Facts. 2008;1(6):319–24.
Chumpitazi BP, Lane MM, Czyzewski DI, Weidler EM, Swank PR, Shulman RJ. Creation and initial evaluation of a stool form scale for children. J Pediatr. 2010;157(4):594–7.
Lane MM, Czyzewski DI, Chumpitazi BP, Shulman RJ. Reliability and validity of a modified Bristol stool form scale for children. J Pediatr. 2011;159(3):437–41.
Carney CE, Buysse DJ, Ancoli-Israel S, et al. The consensus sleep diary: standardizing prospective sleep self-monitoring. Sleep. 2012;35(2):287–302.
Van Vaalen M. Geluk op de basisschool. Tijdschr Orthop. 2011;50:100–4.
Andrews FM, Crandall R. The validity of measures of self-reported well-being. Soc Indic Res. 1976;3(1):1–19.
Ravens-Sieberer U, Gosch A, Rajmil L, Erhart M, Bruil J, Duer W, et al. KIDSCREEN-52 quality-of-life measure for children and adolescents. Expert Rev Pharmacoecon Outcomes Res. 2005;5(3):353–64.
Ravens-Sieberer U, Gosch A, Rajmil L, Erhart M, Bruil J, Power M, et al. The KIDSCREEN-52 quality of life measure for children and adolescents: psychometric results from a cross-cultural survey in 13 European countries. Value Health. 2008;11(4):645–58.
General Data Protection Regulation. European Commission. https://ec.europa.eu/commission/priorities/justice-and-fundamental-rights/data-protection/2018-reform-eu-data-protection-rules_en. Accessed 1 July 2019.
© The Author(s). 2020
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
1.Leyden Academy on Vitality and AgeingLeidenThe Netherlands
2.Department of Public Health and Primary CareLeiden University Medical CenterLeidenThe Netherlands
3.Cognitive Psychology UnitInstitute of Psychology, Leiden UniversityLeidenThe Netherlands
4.Leiden Institute for Brain and Cognition (LIBC)LeidenThe Netherlands
5.Center for Geriatric MedicineUniversity Medical Center GroningenGroningenThe Netherlands
van Delden, A.E.Q., Band, G.P.H. & Slaets, J.P.J. BMC Public Health (2020) 20: 70. https://doi.org/10.1186/s12889-019-8135-9
Received 29 July 2019
Accepted 30 December 2019
Publisher Name BioMed Central | CommonCrawl |
Nootropics are also sought out by consumers because of their ability to enhance mood and relieve stress and anxiety. Nootropics like bacopa monnieri and L-theanine are backed by research as stress-relieving options. Lion's mane mushroom is also well-studied for its ability to boost nerve growth factor, thereby leading to a balanced and bright mood [14].
Another class of substances with the potential to enhance cognition in normal healthy individuals is the class of prescription stimulants used to treat attention-deficit/hyperactivity disorder (ADHD). These include methylphenidate (MPH), best known as Ritalin or Concerta, and amphetamine (AMP), most widely prescribed as mixed AMP salts consisting primarily of dextroamphetamine (d-AMP), known by the trade name Adderall. These medications have become familiar to the general public because of the growing rates of diagnosis of ADHD children and adults (Froehlich et al., 2007; Sankaranarayanan, Puumala, & Kratochvil, 2006) and the recognition that these medications are effective for treating ADHD (MTA Cooperative Group, 1999; Swanson et al., 2008).
The general cost of fish oil made me interested in possible substitutes. Seth Roberts uses exclusively flaxseed oil or flaxseed meal, and this seems to work well for him with subjective effects (eg. noticing his Chinese brands seemed to not work, possibly because they were unrefrigerated and slightly rancid). It's been studied much less than fish oil, but omega acids are confusing enough in general (is there a right ratio? McCluskey's roundup gives the impression claims about ratios may have been overstated) that I'm not convinced ALA is a much inferior replacement for fish oil's mixes of EPA & DHA.
On 8 April 2011, I purchased from Smart Powders (20g for $8); as before, some light searching seemed to turn up SP as the best seller given shipping overhead; it was on sale and I planned to cap it so I got 80g. This may seem like a lot, but I was highly confident that theanine and I would get along since I already drink so much tea and was a tad annoyed at the edge I got with straight caffeine. So far I'm pretty happy with it. My goal was to eliminate the physical & mental twitchiness of caffeine, which subjectively it seems to do.
Another moral concern is that these drugs — especially when used by Ivy League students or anyone in an already privileged position — may widen the gap between those who are advantaged and those who are not. But others have inverted the argument, saying these drugs can help those who are disadvantaged to reduce the gap. In an interview with the New York Times, Dr. Michael Anderson explains that he uses ADHD (a diagnosis he calls "made up") as an excuse to prescribe Adderall to the children who really need it — children from impoverished backgrounds suffering from poor academic performance.
(On a side note, I think I understand now why modafinil doesn't lead to a Beggars in Spain scenario; BiS includes massive IQ and motivation boosts as part of the Sleepless modification. Just adding 8 hours a day doesn't do the world-changing trick, no more than some researchers living to 90 and others to 60 has lead to the former taking over. If everyone were suddenly granted the ability to never need sleep, many of them would have no idea what to do with the extra 8 or 9 hours and might well be destroyed by the gift; it takes a lot of motivation to make good use of the time, and if one cannot, then it is a curse akin to the stories of immortals who yearn for death - they yearn because life is not a blessing to them, though that is a fact more about them than life.)
Sleep itself is an underrated cognition enhancer. It is involved in enhancing long-term memories as well as creativity. For instance, it is well established that during sleep memories are consolidated-a process that "fixes" newly formed memories and determines how they are shaped. Indeed, not only does lack of sleep make most of us moody and low on energy, cutting back on those precious hours also greatly impairs cognitive performance. Exercise and eating well also enhance aspects of cognition. It turns out that both drugs and "natural" enhancers produce similar physiological changes in the brain, including increased blood flow and neuronal growth in structures such as the hippocampus. Thus, cognition enhancers should be welcomed but not at the expense of our health and well being.
This mental stimulation is what increases focus and attention span in the user. The FDA-approved uses of modafinil include excessive sleepiness and shift work sleep disorder, and it is also prescribed for narcolepsy and obstructive sleep apnea. Modafinil is not FDA approved for the treatment of ADHD, yet many medical professionals feel it is a suitable Adderall alternative.
Not all drug users are searching for a chemical escape hatch. A newer and increasingly normalized drug culture is all about heightening one's current relationship to reality—whether at work or school—by boosting the brain's ability to think under stress, stay alert and productive for long hours, and keep track of large amounts of information. In the name of becoming sharper traders, medical interns, or coders, people are taking pills typically prescribed for conditions including ADHD, narcolepsy, and Alzheimer's. Others down "stacks" of special "nootropic" supplements.
A synthetic derivative of Piracetam, aniracetam is believed to be the second most widely used nootropic in the Racetam family, popular for its stimulatory effects because it enters the bloodstream quickly. It was initially developed for memory and learning, and many anecdotal reports also claim that it increases creativity. However, studies in healthy adult mice show no effect on their cognitive functioning.
Coconut oil was recommended by Pontus Granström on the Dual N-Back mailing list for boosting energy & mental clarity. It is fairly cheap (~$13 for 30 ounces) and tastes surprisingly good; it has a very bad reputation in some parts, but seems to be in the middle of a rehabilitation. Seth Roberts's Buttermind experiment found no mental benefits to coconut oil (and benefits to eating butter), but I wonder.
Organizations, and even entire countries, are struggling with "always working" cultures. Germany and France have adopted rules to stop employees from reading and responding to email after work hours. Several companies have explored banning after-hours email; when one Italian company banned all email for one week, stress levels dropped among employees. This is not a great surprise: A Gallup study found that among those who frequently check email after working hours, about half report having a lot of stress.
Or in other words, since the standard deviation of my previous self-ratings is 0.75 (see the Weather and my productivity data), a mean rating increase of >0.39 on the self-rating. This is, unfortunately, implying an extreme shift in my self-assessments (for example, 3s are ~50% of the self-ratings and 4s ~25%; to cause an increase of 0.25 while leaving 2s alone in a sample of 23 days, one would have to push 3s down to ~25% and 4s up to ~47%). So in advance, we can see that the weak plausible effects for Noopept are not going to be detected here at our usual statistical levels with just the sample I have (a more plausible experiment might use 178 pairs over a year, detecting down to d>=0.18). But if the sign is right, it might make Noopept worthwhile to investigate further. And the hardest part of this was just making the pills, so it's not a waste of effort.
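As a rough illustration of the sample size arithmetic mentioned above (178 pairs to detect d >= 0.18), the sketch below solves a paired t-test power analysis for the required number of pairs. The significance level and power used are conventional defaults rather than the write-up's exact settings, so the result only approximates the quoted figure.

```python
# Solve a paired/one-sample t-test power analysis for the number of pairs
# needed to detect a small standardized effect. Alpha and power are assumed
# conventional defaults, not necessarily the settings behind the 178 figure.
from statsmodels.stats.power import TTestPower

def pairs_needed(effect_size: float, power: float = 0.8, alpha: float = 0.05) -> float:
    return TTestPower().solve_power(effect_size=effect_size,
                                    power=power,
                                    alpha=alpha,
                                    alternative="two-sided")

print(pairs_needed(0.18))
# Roughly 240-250 pairs under these defaults; a one-sided test or lower power
# brings the number closer to the ~178 quoted above.
```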
"In 183 pages, Cavin Balaster's new book, How to Feed A Brain provides an outline and plan for how to maximize one's brain performance. The "Citation Notes" provide all the scientific and academic documentation for further understanding. The "Additional Resources and Tips" listing takes you to Cavin's website for more detail than could be covered in 183 pages. Cavin came to this knowledge through the need to recover from a severe traumatic brain injury and he did not keep his lessons learned to himself. This book is enlightening for anyone with a brain. We all want to function optimally, even to take exams, stay dynamic, and make positive contributions to our communities. Bravo Cavin for sharing your lessons learned!"
Scientists found that the drug can disrupt the way memories are stored. This ability could be invaluable in treating trauma victims to prevent associated stress disorders. The research has also triggered suggestions that licensing these memory-blocking drugs may lead to healthy people using them to erase memories of awkward conversations, embarrassing blunders and any feelings for that devious ex-girlfriend.
Government restrictions and the difficulty of getting approval for various medical devices are expected to impede market growth. Stringent approval requirements from regulatory authorities, together with the high cost of smart pills, challenge the growth of the smart pills market. However, the demand for speedy diagnosis and improving reimbursement policies are likely to reveal market opportunities.
After 7 days, I ordered a kg of choline bitartrate from Bulk Powders. Choline is standard among piracetam-users because it is pretty universally supported by anecdotes about piracetam headaches, has support in rat/mice experiments27, and also some human-related research. So I figured I couldn't fairly test piracetam without some regular choline - the eggs might not be enough, might be the wrong kind, etc. It has a quite distinctly fishy smell, but the actual taste is more citrus-y, and it seems to neutralize the piracetam taste in tea (which makes things much easier for me).
OptiMind - It is one of the best Nootropic supplements available and brought to you by AlternaScript. It contains six natural Nootropic ingredients derived from plants that help in overall brain development. All the ingredients have been clinically tested for their effects and benefits, which has made OptiMind one of the best brain pills that you can find in the US today. It is worth adding to your Nootropic Stack.
Another important epidemiological question about the use of prescription stimulants for cognitive enhancement concerns the risk of dependence. MPH and d-AMP both have high potential for abuse and addiction related to their effects on brain systems involved in motivation. On the basis of their reanalysis of NSDUH data sets from 2000 to 2002, Kroutil and colleagues (2006) estimated that almost one in 20 nonmedical users of prescription ADHD medications meets criteria for dependence or abuse. This sobering estimate is based on a survey of all nonmedical users. The immediate and long-term risks to individuals seeking cognitive enhancement remain unknown.
The next morning, four giant pills' worth of the popular piracetam-and-choline stack made me... a smidge more alert, maybe? (Or maybe that was just the fact that I had slept pretty well the night before. It was hard to tell.) Modafinil, which many militaries use as their "fatigue management" pill of choice, boasts glowing reviews from satisfied users. But in the United States, civilians need a prescription to get it; without one, they are stuck using adrafinil, a precursor substance that the body metabolizes into modafinil after ingestion. Taking adrafinil in lieu of coffee just made me keenly aware that I hadn't had coffee.
So with these 8 results in hand, what do I think? Roughly, I was right 5 of the days and wrong 3 of them. If not for the sleep effect on #4, which is - in a way - cheating (one hopes to detect modafinil due to good effects), the ratio would be 5:4 which is awfully close to a coin-flip. Indeed, a scoring rule ranks my performance at almost identical to a coin flip: -5.49 vs -5.5419. (The bright side is that I didn't do worse than a coin flip: I was at least calibrated.)
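For context, a scoring-rule comparison of this kind can be reproduced in miniature as follows. The guesses in the snippet are invented for illustration, and the exact scoring variant behind the -5.49 figure is not specified here, so this is only a sketch of a logarithmic score.

```python
# Minimal logarithmic scoring rule: sum of log-probabilities assigned to the
# outcomes that actually occurred. Scores are negative; closer to zero is better.
# The predictions below are invented for illustration only.
import math

def log_score(predictions, outcomes):
    """predictions: probabilities assigned to 'this was a modafinil day'; outcomes: 1/0."""
    return sum(math.log(p if o == 1 else 1.0 - p)
               for p, o in zip(predictions, outcomes))

guesses  = [0.6, 0.7, 0.5, 0.4, 0.6, 0.5, 0.3, 0.6]   # hypothetical confidences
truth    = [1,   1,   0,   1,   0,   1,   0,   1]     # hypothetical actual days
coinflip = [0.5] * len(truth)

print(log_score(guesses, truth), log_score(coinflip, truth))
# Eight pure coin-flip guesses score 8 * ln(0.5) ~= -5.545, the same scale as
# the numbers quoted above.
```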
Either way, if more and more people use these types of stimulants, there may be a risk that we will find ourselves in an ever-expanding neurological arms race, argues philosophy professor Nicole Vincent. But is this necessarily a bad thing? No, says Farahany, who sees the improvement in cognitive functioning as a social good that we should pursue. Better brain functioning would result in societal benefits, she argues, "like economic gains or even reducing dangerous errors."
A key ingredient of Noehr's chemical "stack" is a stronger racetam called Phenylpiracetam. He adds a handful of other compounds considered to be mild cognitive enhancers. One supplement, L-theanine, a natural constituent in green tea, is claimed to neutralise the jittery side-effects of caffeine. Another supplement, choline, is said to be important for experiencing the full effects of racetams. Each nootropic is distinct and there can be a lot of variation in effect from person to person, says Lawler. Users semi-anonymously compare stacks and get advice from forums on sites such as Reddit. Noehr, who buys his powder in bulk and makes his own capsules, has been tweaking chemicals and quantities for about five years, accumulating more than two dozen jars of substances along the way. He says he meticulously researches anything he tries, buys only from trusted suppliers and even blind-tests the effects (he gets his fiancée to hand him either a real or inactive capsule).
There are seven primary classes used to categorize smart drugs: Racetams, Stimulants, Adaptogens, Cholinergics, Serotonergics, Dopaminergics, and Metabolic Function Smart Drugs. Despite considerable overlap and no clear border in the brain and body's responses to these substances, each class manifests its effects through a different chemical pathway within the body.
The ethics of cognitive enhancement have been extensively debated in the academic literature (e.g., Bostrom & Sandberg, 2009; Farah et al., 2004; Greely et al., 2008; Mehlman, 2004; Sahakian & Morein-Zamir, 2007). We do not attempt to review this aspect of the problem here. Rather, we attempt to provide a firmer empirical basis for these discussions. Despite the widespread interest in the topic and its growing public health implications, there remains much researchers do not know about the use of prescription stimulants for cognitive enhancement.
For illustration, consider amphetamines, Ritalin, and modafinil, all of which have been proposed as cognitive enhancers of attention. These drugs exhibit some positive effects on cognition, especially among individuals with lower baseline abilities. However, individuals of normal or above-average cognitive ability often show negligible improvements or even decrements in performance following drug treatment (for details, see de Jongh, Bolt, Schermer, & Olivier, 2008). For instance, Randall, Shneerson, and File (2005) found that modafinil improved performance only among individuals with lower IQ, not among those with higher IQ. [See also Finke et al 2010 on visual attention.] Farah, Haimm, Sankoorikal, & Chatterjee 2009 found a similar nonlinear relationship of dose to response for amphetamines in a remote-associates task, with low-performing individuals showing enhanced performance but high-performing individuals showing reduced performance. Such ∩-shaped dose-response curves are quite common (see Cools & Robbins, 2004)
"Where can you draw the line between Red Bull, six cups of coffee and a prescription drug that keeps you more alert," says Michael Schrage of the MIT Center for Digital Business, who has studied the phenomenon. "You can't draw the line meaningfully - some organizations have cultures where it is expected that employees go the extra mile to finish an all-nighter. "
Smart pill technologies are primarily utilized for dairy products, soft drinks, and water, catering in diverse shapes and sizes to various consumers. The rising preference for easy-to-carry liquid foods is expected to boost the demand for these packaging cartons, thereby fueling market growth. The changing lifestyle of consumers, coupled with the convenience of carton packaging, is projected to propel the market. In addition, such packaging has an edge over glass and plastic in terms of environmental friendliness and recyclability of the material, which mitigates wastage and reduces product cost. Thus, the aforementioned factors are expected to drive smart pills technology market growth over the projected period.
70 pairs is 140 blocks; we can drop to 36 pairs or 72 blocks if we accept a power of 0.5/50% chance of reaching significance. (Or we could economize by hoping that the effect size is not 3.5 but maybe twice the pessimistic guess; a d=0.5 at 50% power requires only 12 pairs of 24 blocks.) 70 pairs of blocks of 2 weeks, with 2 pills a day requires (70 \times 2) \times (2 \times 7) \times 2 = 3920 pills. I don't even have that many empty pills! I have <500; 500 would supply 250 days, which would yield 18 2-week blocks which could give 9 pairs. 9 pairs would give me a power of:
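The power figure this calculation leads up to can be sketched with a paired (one-sample) t-test power analysis. A one-sided test is assumed here because that appears consistent with the 12-pairs-at-50%-power remark above, and the effect size d = 0.5 is the illustrative value mentioned in the text rather than a measured one.

```python
# Power available from 9 before/after pairs at an assumed effect size d = 0.5.
# The one-sided alternative and the effect size are assumptions for illustration.
from statsmodels.stats.power import TTestPower

def power_for_pairs(n_pairs: int, effect_size: float, alpha: float = 0.05) -> float:
    return TTestPower().power(effect_size=effect_size,
                              nobs=n_pairs,
                              alpha=alpha,
                              alternative="larger")

print(power_for_pairs(9, 0.5))  # roughly 0.4 under these assumptions
```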
There are also premade 'stacks' (or formulas) of cognitive enhancing superfoods, herbals or proteins, which pre-package several beneficial extracts for a greater impact. These types of cognitive enhancers are more 'subtle' than the pharmaceutical alternative with regards to effects, but they work all the same. In fact, for many people, they work better than smart drugs as they are gentler on the brain and produce fewer side-effects.
Capsule Connection sells 1000 00 pills (the largest pills) for $9. I already have a pill machine, so that doesn't count (a sunk cost). If we sum the grams per day column from the first table, we get 9.75 grams a day. Each 00 pill can take around 0.75 grams, so we need 13 pills. (Creatine is very bulky, alas.) 13 pills per day for 1000 days is 13,000 pills, and 1,000 pills is $9 so we need 13 units and 13 times 9 is $117.
It's basic economics: the price of a good must be greater than cost of producing said good, but only under perfect competition will price = cost. Otherwise, the price is simply whatever maximizes profit for the seller. (Bottled water doesn't really cost $2 to produce.) This can lead to apparently counter-intuitive consequences involving price discrimination & market segmentation - such as damaged goods which are the premium product which has been deliberately degraded and sold for less (some Intel CPUs, some headphones etc.). The most famous examples were railroads; one notable passage by French engineer-economist Jules Dupuit describes the motivation for the conditions in 1849:
Hall, Irwin, Bowman, Frankenberger, & Jewett (2005): sample: large public university undergraduates (N = 379); lifetime prevalence of use: 13.7%; 27% reported use during finals week, 12% use when partying, 15.4% use before tests, and 14% believed stimulants have a positive effect on academic achievement in the long run; sources of stimulants: M = 2.06 (SD = 1.19) for purchasing stimulants from other students and M = 2.81 (SD = 1.40) for having been given stimulants by other students (b).
Theanine can also be combined with caffeine, as the two work in synergy to increase memory, reaction time, and mental endurance. The best part about theanine is that it is one of the safest nootropics and is readily available in the form of capsules. A natural option would be to use an excellent green tea brand whose tea is grown in the shade, because theanine is then abundantly present in it.
Integrity & Reputation: Go with a company that sells more than just a brain formula. If a company is just selling this one item, buyer beware! It is an indication that it is just trying to capitalize on a trend and make a quick buck. Also, if a website selling a brain health formula does not have a highly visible 800# for customer service, you should walk away.
Each nootropic comes with a recommended amount to take. This is almost always based on a healthy adult male with an average weight and 'normal' metabolism. Nootropics (and many other drugs) are almost exclusively tested on healthy men. If you are a woman, older, smaller or in any other way not the 'average' man, always take into account that the quantity could be different for you.
Turning to analyses related specifically to the drugs that are the subject of this article, reanalysis of the 2002 NSDUH data by Kroutil and colleagues (2006) found past-year nonmedical use of stimulants other than methamphetamine by 2% of individuals between the ages of 18 and 25 and by 0.3% of individuals 26 years of age and older. For ADHD medications in particular, these rates were 1.3% and 0.1%, respectively. Finally, Novak, Kroutil, Williams, and Van Brunt (2007) surveyed a sample of over four thousand individuals from the Harris Poll Online Panel and found that 4.3% of those surveyed between the ages of 18 and 25 had used prescription stimulants nonmedically in the past year, compared with only 1.3% between the ages of 26 and 49.
Null results are generally less likely to be published. Consistent with the operation of such a bias in the present literature, the null results found in our survey were invariably included in articles reporting the results of multiple tasks or multiple measures of a single task; published single-task studies with exclusively behavioral measures all found enhancement. This suggests that some single-task studies with null results have gone unreported. The present mixed results are consistent with those of other recent reviews that included data from normal subjects, using more limited sets of tasks or medications (Advokat, 2010; Chamberlain et al., 2010; Repantis, Schlattmann, Laisney, & Heuser, 2010).
3 days later, I'm fairly miserable (slept poorly, had a hair-raising incident, and a big project was not received as well as I had hoped), so well before dinner (and after a nap) I brew up 2 wooden-spoons of Malaysia Green (olive-color dust). I drank it down; tasted slightly better than the first. I was feeling better after the nap, and the kratom didn't seem to change that.
Methylphenidate – a benzylpiperidine that has shown cognitive effects (e.g., on working memory, episodic memory, inhibitory control, aspects of attention, and planning latency) in healthy people.[21][22][23] It may also improve task saliency and performance on tedious tasks.[25] At above-optimal doses, methylphenidate has off-target effects that decrease learning.[26]
New England Dynamics and Number Theory Seminar
Seminar Information
Schedule of Talks
Titles and Abstracts
Speaker: Emmanuel Breuillard
Title: A subspace theorem for manifolds
Abstract: Schmidt's subspace theorem is a fundamental result in diophantine approximation and a natural generalization of Roth's celebrated theorem. In this talk I will discuss a geometric understanding of this theorem that blends homogeneous dynamics and geometric invariant theory. Combined with the Kleinbock-Margulis quantitative non-divergence estimates this yields a natural generalization of the subspace theorem to systems of linear forms that depend nicely on a parameter. I will also present several applications and consequences of the main result. Joint work with Nicolas de Saxcé.
Speaker: Yotam Smilansky
Title: Multiscale substitution tilings
Abstract: Multiscale substitution tilings are a new family of tilings of Euclidean space that are generated by multiscale substitution rules. Unlike the standard setup of substitution tilings, which is a basic object of study within the aperiodic order community and includes examples such as the Penrose and the pinwheel tilings, multiple distinct scaling constants are allowed, and the defining process of inflation and subdivision is a continuous one. Under a certain irrationality assumption on the scaling constants, this construction gives rise to a new class of tilings, tiling spaces and tiling dynamical systems, which are intrinsically different from those that arise in the standard setup. In the talk I will describe these new objects and discuss various structural, geometrical, statistical and dynamical results. Based on joint work with Yaar Solomon.
Speaker: Samantha Fairchild
Title: Counting social interactions for discrete subsets of the plane
Abstract: Given a discrete subset V in the plane, how many points would you expect there to be in a ball of radius 100? What if the radius is 10,000? Due to the results of Fairchild and forthcoming work with Burrin, when V arises as orbits of non-uniform lattice subgroups of SL(2,R), we can understand the asymptotic growth rate, with error terms, of the number of points of V in a broad family of sets. A crucial aspect of these and similar arguments is understanding how to count pairs of saddle connections with certain properties determining the interactions between them, like having a fixed determinant or having another point in V nearby. We will focus on a concrete case used to state the theorem and highlight the proof strategy. We will also discuss some ongoing work and ideas which advertise the generality and strength of this argument.
Speaker: Nattalie Tamam
Title: Effective equidistribution of horospherical flows in infinite volume
Abstract: We want to provide effective information about averages of orbits of the horospherical subgroup acting on a hyperbolic manifold of infinite volume. We start by presenting the setting and results for manifolds with finite volume. Then, discuss the difficulties that arise when studying the infinite volume setting, and the measures that play a crucial role in it. This is joint work with Jacqueline Warren.
Speaker: Douglas Lind
Title: Decimation Limits of Algebraic Actions
Abstract: This is intended to be an expository talk using simple examples to illustrate what's going on, and so will (hopefully) be a gentle introduction to these topics. Given a polynomial in d commuting variables we can define an algebraic action of ℤ^d by commuting automorphisms of a compact subgroup of 𝕋^(ℤ^d). Restricting the coordinates of points in this group to finite-index subgroups of ℤ^d gives other algebraic actions, defined by polynomials whose support grows polynomially and whose coefficients grow exponentially. But by "renormalizing" we can obtain a limiting object that is a concave function on ℝ^d with interesting properties, e.g. its maximum value is the entropy of the action. For some polynomials this function also arises in statistical mechanics models as the "surface tension" of a random surface via a variational principle. In joint work with Arzhakova, Schmidt, and Verbitskiy, we establish this limiting behavior, and identify the limit in terms of the Legendre transform of the Ronkin function of the polynomial. The proof is based on Mahler's estimates on polynomial coefficients using Mahler measure, and an idea used by Boyd to prove that Mahler measure is continuous in the coefficients of the polynomial. Refinements of convergence questions involve diophantine issues that I will discuss, together with some open problems.
Speaker: Mishel Skenderi
Title: Small values at integer points of generic subhomogeneous functions
Abstract: This talk will be based on joint work with Dmitry Kleinbock that has been motivated by several recent papers (among them, those of Athreya-Margulis, Bourgain, Ghosh-Gorodnik-Nevo, Kelmer-Yu). Given a certain sort of group $G$ and certain sorts of functions $f: \mathbb{R}^n \to \mathbb{R}$ and $\psi : \mathbb{R}^n \to \mathbb{R}_{>0},$ we obtain necessary and sufficient conditions so that for Haar-almost every $g \in G,$ there exist infinitely many (respectively, finitely many) $v \in \mathbb{Z}^n$ for which $|(f \circ g)(v)| \leq \psi(\|v\|),$ where $\|\cdot\|$ is an arbitrary norm on $\mathbb{R}^n.$ We also give a sufficient condition in the setting of uniform approximation. As a consequence of our methods, we obtain generalizations to the case of vector-valued (simultaneous) approximation with no additional effort. In our work, we use probabilistic results in the geometry of numbers that go back several decades to the work of Siegel, Rogers, and W. Schmidt; these results have recently found new life thanks to a 2009 paper of Athreya-Margulis.
Date: 6 November 2020
Speaker: Byungchul Cha
Title: Intrinsic Diophantine Approximation of circles
Abstract: Let $S^1$ be the unit circle in $\mathbb{R}^2$ centered at the origin and let $Z$ be a countable dense subset of $S^1$, for instance, the set $Z = S^1(\mathbb{Q})$ of all rational points in $S^1$. We give a complete description of an initial discrete part of the Lagrange spectrum of $S^1$ in the sense of intrinsic Diophantine approximation. This is an analogue of the classical result of Markoff in 1879, where he characterized the most badly approximable real numbers via the periods of their continued fraction expansions. Additionally, we present similar results for a few different subsets $Z$ of $S^1$. This is joint work with Dong Han Kim.
Speaker: Jacqueline Warren
Title: Joining classification and factor rigidity in infinite volume
Abstract: For a group acting on two spaces, a joining of these systems is a measure on the product space that is invariant under the diagonal action and projects to the original measures on each space. Joinings are a powerful tool in ergodic theory, and joinings for the horocycle flow were classified by Ratner in the finite volume setting, with many interesting applications. In this talk, I will discuss some of these applications and present joining classification for horospherical flows in the infinite volume setting, as well as a key factor rigidity theorem that is used in the proof. This talk is intended to be accessible to graduate students.
Speaker: Shahriar Mirzadeh
Title: On the dimension drop conjecture for diagonal flows on the space of lattices
Abstract: Consider the set of points in a homogeneous space X=G/Gamma whose g_t orbit misses a fixed open set. It has measure zero if the flow is ergodic. It has been conjectured that this set has Hausdorff dimension strictly smaller than the dimension of X. This conjecture is proved when X is compact or when it has real rank 1. In this talk we will prove the conjecture for probably the most important example of the higher rank case, namely: G=SL(m+n, R), Gamma=SL(m+n,Z), and g_t = diag(exp(t/m), …, exp(t/m), exp(-t/n), …, exp(-t/n)). We can also use our main result to produce new applications to Diophantine approximation. This project is joint work with Dmitry Kleinbock.
Date: 4 December 2020
Speaker: Osama Khalil
Title: Large centralizers and counting integral points on affine varieties
Abstract: Duke-Rudnick-Sarnak and Eskin-McMullen initiated the use of ergodic methods to count integral points on affine homogeneous varieties. They reduced the problem to one of studying limiting distributions of translates of periods of reductive groups on homogeneous spaces. The breakthrough of Eskin, Mozes and Shah provided a rather complete understanding of this question in the case the reductive group has a "small centralizer" inside the ambient group. In this talk, we describe work in progress giving new results on the equidistribution of generic translates of closed orbits of semisimple groups with "large centralizers". The key new ingredient is an algebraic description of a partial compactification (for lack of a better word) of the set of intermediate groups which act as obstructions to equidistribution. This allows us to employ tools from geometric invariant theory to study the avoidance problem.
Speaker: Anthony Sanchez
Title: Gaps of saddle connection directions for some branched covers of tori
Abstract: Holonomy vectors of translation surfaces provide a geometric generalization for higher genus surfaces of (primitive) integer lattice points. The counting and distribution properties of holonomy vectors on translation surfaces have been studied extensively. In this talk, we consider the following question: How random are the holonomy vectors of a translation surface? We motivate the gap distribution of slopes of holonomy vectors as a measure of randomness and compute the gap distribution for the class of translation surfaces given by gluing two identical tori along a slit. No prior background on translation surfaces or gap distributions will be assumed.
Faster algorithm solving minesweeper puzzles (1)
Minesweeper is something that I'm always addicted to. This is obvious given that this blog contains multiple entries talking about different aspects of this absolute classic. Last time, I talked about solving minesweeper using logical deduction generalised as a satisfiability problem. That was quite a long time ago -- when I was playing mienfield. 3 years after that, I am now addicted to another minesweeper game called Minesweeper: Collector, which you can find on the Google Play store.
Yes of course. Using ILP (integer linear programming) or even SAT (satisfiability) to solve minesweeper is not a very smart idea. Since we are working with equations and we know an integral solution exists, we can simply employ linear algebra to make our life easier.
It is a bit hard to define how useful a deduction is just by solving all the equations. For instance, if we can conclude from 100 equations that the space of possible solutions has dimension 68, then a faster way to solve the puzzle would probably be to make an educated guess in terms of probability. However, probability is not something that linear algebra can handle easily, so we simply look for any definite deduction here. The most desirable outcome from the algorithm is that we can deduce the values of some particular grids.
For the classic rectangular minesweeper we can think of the whole puzzle as a matrix $M\in \mathbb{R} ^{m\times n}$, and set up variables $v_{11},...,v_{mn}$. Of course the same idea applies to other minesweeper shapes [hexagonal etc.]; it is just the naming/coordinate setup that differs. The variables are supposed to be binary, but this is a technicality that we must handle later.
If a certain grid [for example row $s$ column $t$] is known but its neighbours are not all revealed, then the grid yields an equation denoted by $E_{st}$. It has the form
$E_{st} ~:~ \sum _{\substack{|p-s|\leq 1,\ |q-t|\leq 1 \\ 1\leq p \leq m,\ 1\leq q \leq n}} \delta _{pq}v_{pq} = c_{st}$
where $c_{st}$ is the number of mines surrounding grid $(s,t)$, and $\delta _{pq} = 1$ if the grid $(p,q)$ is concealed and 0 otherwise. The aim, of course, is to identify the mines [i.e. solve for $v$] and receive more hints, eventually solving the whole puzzle.
Here is our first algorithm:
Algorithm 1.
Input: A partially solved minesweeper matrix $M\in \mathbb{R}^{m\times n}$, provided that a solution exists
Output: Any definite deduction
- Convert the puzzle into linear equations
- RREF and solve $M$
- Interpret the solution
- Return any conclusion
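To make this concrete, here is a minimal Python sketch of Algorithm 1 (my own illustration, not code from the original post). It assumes the board is given as a 2D list with None marking concealed grids and integers marking revealed hints, performs the row reduction exactly with fractions.Fraction, and reads off each reduced row with a simple bound argument: a row forces all of its variables whenever the right-hand side equals the smallest or largest value the left-hand side can attain over 0/1 assignments. Newly forced values are substituted back and the process repeats until nothing new is determined.

```python
from fractions import Fraction

def build_equations(board):
    """Turn a partially revealed board into augmented rows [coefficients | RHS].
    `board` is a 2D list: None marks a concealed grid, an integer marks a hint."""
    m, n = len(board), len(board[0])
    concealed = [(i, j) for i in range(m) for j in range(n) if board[i][j] is None]
    index = {cell: k for k, cell in enumerate(concealed)}
    rows = []
    for i in range(m):
        for j in range(n):
            if board[i][j] is None:
                continue
            row = [Fraction(0)] * (len(concealed) + 1)
            for p in range(max(0, i - 1), min(m, i + 2)):
                for q in range(max(0, j - 1), min(n, j + 2)):
                    if board[p][q] is None:
                        row[index[(p, q)]] = Fraction(1)
            if any(row[:-1]):              # keep only hints with concealed neighbours
                row[-1] = Fraction(board[i][j])
                rows.append(row)
    return rows, concealed

def rref(rows):
    """Plain Gauss-Jordan elimination over the rationals."""
    rows = [r[:] for r in rows]
    if not rows:
        return rows
    pivot = 0
    for col in range(len(rows[0]) - 1):
        pr = next((r for r in range(pivot, len(rows)) if rows[r][col] != 0), None)
        if pr is None:
            continue
        rows[pivot], rows[pr] = rows[pr], rows[pivot]
        piv = rows[pivot][col]
        rows[pivot] = [x / piv for x in rows[pivot]]
        for r in range(len(rows)):
            if r != pivot and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot])]
        pivot += 1
    return rows

def deduce(reduced, concealed):
    """One pass of the bound argument: a row is decisive when its RHS equals the
    minimum or maximum value its left-hand side can reach over 0/1 assignments."""
    forced = {}
    for row in reduced:
        coeffs, rhs = row[:-1], row[-1]
        lo = sum(c for c in coeffs if c < 0)   # all negative-coefficient variables set to 1
        hi = sum(c for c in coeffs if c > 0)   # all positive-coefficient variables set to 1
        if rhs == lo or rhs == hi:
            for k, c in enumerate(coeffs):
                if c > 0:
                    forced[concealed[k]] = 1 if rhs == hi else 0
                elif c < 0:
                    forced[concealed[k]] = 0 if rhs == hi else 1
    return forced

def solve(board):
    """Row-reduce, read off forced grids, substitute, and repeat until stable."""
    rows, concealed = build_equations(board)
    known = {}
    while True:
        fresh = {c: v for c, v in deduce(rref(rows), concealed).items() if c not in known}
        if not fresh:
            return known               # cell -> 1 (mine) or 0 (safe)
        known.update(fresh)
        for row in rows:               # substitute the newly forced values
            for cell, val in fresh.items():
                k = concealed.index(cell)
                row[-1] -= row[k] * val
                row[k] = Fraction(0)

if __name__ == "__main__":
    # The 1-2 combination board discussed below: None = concealed grid.
    board = [[None] * 8,
             [None, 2, 1, 1, 2, 2, 1, None]]
    print(solve(board))   # mines at v_2, v_5, v_6; safe at v_3, v_4, v_7, v_8, v_10
```

On the 1-2 combination worked out below, this reproduces the hand deduction: every grid is determined except the two leftmost concealed ones, exactly the one-parameter family left over after the RREF.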
For example we look at a classic 1-2 combination
$\begin{pmatrix}
\square & \square & \square & \square & \square & \square & \square & \square \\
\square & 2 & 1 & 1 & 2 & 2 & 1 & \square
\end{pmatrix}$
that should give you a deduction like
$\begin{pmatrix}
\square & X & O & O & X & X & O & O \\
\square & 2 & 1 & 1 & 2 & 2 & 1 & O
\end{pmatrix}$
[where X represents a mine and O is safe]. Let $v_1,...,v_{10}$ be the variables, named from left to right, top to bottom. Look at the matrix reduction:
$\begin{pmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 2\\
0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 2\\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 2\\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1
\end{pmatrix}
\xrightarrow[~]{rref}
\begin{pmatrix}
1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 & 1\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & -1 & 1\\
0 & 0 & 1 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & -1 & 1\\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 1
\end{pmatrix}$
What a mess! But we can still interpret the solution.
From row 3 we have $v_3+v_7+v_8+v_{10} = 0$; then by non-negativity they are all zero. By removing columns 3, 7, 8, 10 and row 3 we instantly get $v_2 = v_5 = v_6 = 1$, $v_4 = 0$. Now the solution space is given by:
$v = (0,1,0,0,1,1,0,0,1,0)^t + t(1,0,0,0,0,0,0,0,-1,0)^t$ where $t \in \mathbb{R}$, or simply $t \in \left\{ 0,1\right\}$.
as expected. However the deduction is so messy that it cannot be automated in an obvious way. Things get worse if the solution space is more complicated. Look at this simple example:
$\begin{pmatrix}
\square & \square & \square & \square & \square & \square \\
\square & 1 & 1 & 2 & 2 & \square
\end{pmatrix}$
that immediately yields the following [this should be really obvious for any experienced player!]
$\begin{pmatrix}
\square & O & \square & \square & X & \square \\
\square & 1 & 1 & 2 & 2 & \square
\end{pmatrix}$
Assign the names $v_1,...,v_8$ as before and reduce the matrix:
$\begin{pmatrix}
1 & 1 & 1 & 0 & 0 & 0 & 1 & 0 & 1\\
0 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 1 & 1 & 0 & 0 & 0 & 2\\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 2
\end{pmatrix}
\xrightarrow[~]{rref}
\begin{pmatrix}
1 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 2\\
0 & 1 & 0 & 0 & -1 & 0 & 0 & 0 & -1\\
0 & 0 & 1 & 0 & 0 & -1 & 0 & -1 & 0\\
0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 2
\end{pmatrix}$
The only thing we can deduce comes from the second row. Since we know $v_2,v_5\in \left\{ 0,1\right\}$ we must have $v_2 = 0$, $v_5 = 1$ as predicted, and this is basically all we know.
The trouble here is that the deduction relies heavily on facts that we threw away when modelling the problem as a system of linear equations. Of course rearranging the columns [hence changing the RREF] could solve this problem, but recognising the right rearrangement of columns is basically equivalent to knowing which grids can be definitely determined [or at least almost definitely - when the solution space of that grid has a very low dimension], which is the fundamental aim.
To improve efficiency, instead of adding conditionals that check whether an equation is useful for deduction, we can change the field that we are working over. Rings wouldn't work [and that is why ILP is bad], but we can look at fields like $\mathbb{F}_2$ or $\mathbb{F}_7$! This will be addressed in the next entry.
Labels: Algorithms, game, maths
Trauma Team review
Well, not a very recent game, is it? Some random recommendation from YouTube caught my eye, and I subsequently watched through the whole stream. Back when I was stuck on gaming consoles [before I went into MUGs] I completed the first game of the Trauma Center series, almost perfecting every operation, then took on Second Opinion, but shaky hands definitely make the game superbly hard on the Wii, and I gave up halfway.
Developing the same story line probably stops working after one or two attempts. On one hand, the repetitive nature of the operations somehow pushes the developers to raise the difficulty bar in the sequels. Players did not give this a warm reception -- the precision, speed or continuity necessary to clear the later stages uses the full functionality of the NDS pad, a bar that only a small number of games ever reached. [The Wii's better, but not by a lot, either.] Raising the difficulty not only goes beyond what gamers can do but also makes the game more unreasonable. The story basically entered a dead end - GUILT, along with its origin and associated diseases, is pretty much cured. Developing something like that takes years, as mentioned in the first game, and the way it was spread out means that it is not easy to create another pandemic anymore. It just makes no sense to try to extend this linear plot.
But the whole story does not have to stop there. Just look at Phoenix Wright - the introduction of a new character adds another layer of complexity to the whole story [in terms of mathematics it is like adding an independent vector to a subspace...right?]. Imagine that your starting pitchers are getting old and can't be SPs anymore. The introduction of a new SP definitely delivers a different game, but it's bittersweet finding your old player not only adapting to a relief role but also helping the noobs get through the game. To me, Trauma Center quite successfully replicates this strategy.
Derek and Angie have relieved once or twice already [so they turned into...coaches?!? That makes sense lol.], so Naomi is a natural choice. Her character hasn't changed much since her debut, but she was put in an environment with much more interaction, and that exposes quite a different side of her. What a pity that the lil' guy failed to impress her enough to earn a starting spot at the very end.
Each of the individual stories is pretty interesting too, but I feel like getting a story done in, like, 6 chapters is too rushed. Don't forget that you have the main story pushing you from behind, so everything is just too quick for those characters to make a change [except for Gabriel, who received a real shocker]. Fortunately the ending is kind of complete, so I'm leaving the game happily.
Dividing the whole game into different kinds of puzzles brings another problem: each type of puzzle has its own significant weakness, and putting all the puzzles together is enough to annoy every single player. Operations are extremely easy and generous from beginning to end - the difficulty never escalates, even with the limited number of operations; endoscopic operations are even more repetitive because the opponent [virus/wounds] never 'fights back', and that is extremely unrealistic. Who would have a full kit equipped on the endoscope? The only thing I can recall from Hank's operation is the super combo where I couldn't tell where all the counts came from; Maria's is a nice attempt, but the constraint never really weighs on the player. On the other hand, I found the diagnostic and forensic puzzles extremely long and clumsy. It is good that they really collect realistic evidence in detail [that stands out from other detective games], but the progress is simply too slow because you cannot reach some obvious conclusion just because the game does not allow you to. The frequent scene switching is also kind of annoying. Why am I going from my office to the scene just to collect 2 pieces of evidence, decide that it's sufficient, then go back? I know all of it is logical, but redundant logical induction isn't really bringing any wow factor to the story.
Some other elements of the game are worth mentioning as well. The graphics are nice - the American-comic-styled slides handle the pacing well and give enough detail [and easter eggs] given the limited budget [well you know, it's Atlus after all...], and I definitely like the Japanese-styled artwork. Credit must be given to the [Japanese] producers who merged the two together well. The music's average, I would say, not as grand and intense as in the previous games. A remix of O Fortuna would be a great punch to the old players' stomachs, but the producers probably decided not to do that because everything is relaxing and easy here :P
Overall, this is a game that is worth a look, especially if you've played similar stuff in the past. It's probably too old to buy, but streamed videos are worth a shot [say, Karin & Omega's Channel?], and something can be expected from the sequel, if there ever is one -- kind of doubtful because it's been 6 years since the last one.
Labels: game
2011, 8(1): 223-238. doi: 10.3934/mbe.2011.8.223
A perspective on the 2009 A/H1N1 influenza pandemic in Mexico
Rodolfo Acuña-Soto 1, Luis Castañeda-Davila 1, and Gerardo Chowell 2
Departamento de Microbiología y Parasitología, Facultad de Medicina, Universidad Nacional Autónoma de México, Delegación Coyoacán, México D.F. 04510, Mexico
Mathematical, Computational & Modeling Sciences Center, School of Human Evolution and Social Change, Arizona State University, Box 872402, Tempe, AZ 85287
Received June 2010 Revised September 2010 Published January 2011
In this article, we provide a chronological description of the 2009 H1N1 influenza pandemic in Mexico from the detection of severe respiratory disease among young adults in central Mexico and the identification of the novel swine-origin influenza virus to the response of Mexican public health authorities with the swift implementation of the National Preparedness and Response Plan for Pandemic Influenza. Furthermore, we review some features of the 2009 H1N1 influenza pandemic in Mexico in relation to the devastating 1918-1920 influenza pandemic and discuss opportunities for the application of mathematical modeling in the transmission dynamics of pandemic influenza. The value of historical data in increasing our understanding of past pandemic events is highlighted.
Keywords: Pandemic influenza; reproduction number; epidemic model; H1N1 influenza; metapopulation model.
Mathematics Subject Classification: Primary: 58F15, 58F17; Secondary: 53C3.
Citation: Rodolfo Acuňa-Soto, Luis Castaňeda-Davila, Gerardo Chowell. A perspective on the 2009 A/H1N1 influenza pandemic in Mexico. Mathematical Biosciences & Engineering, 2011, 8 (1) : 223-238. doi: 10.3934/mbe.2011.8.223
Astrophysics > Solar and Stellar Astrophysics
arXiv:2112.13852 (astro-ph)
[Submitted on 27 Dec 2021]
Title:SunnyNet: A neural network approach to 3D non-LTE radiative transfer
Authors:Bruce A. Chappell, Tiago M. D. Pereira
Abstract: Context. Computing spectra from 3D simulations of stellar atmospheres when allowing for departures from local thermodynamic equilibrium (non-LTE) is computationally very intensive. Aims. We develop a machine learning based method to speed up 3D non-LTE radiative transfer calculations in optically thick stellar atmospheres. Methods. Making use of a variety of 3D simulations of the solar atmosphere, we trained a convolutional neural network, SunnyNet, to learn the translation from LTE to non-LTE atomic populations. Non-LTE populations computed with an existing 3D code were considered as the true values. The network was then used to predict non-LTE populations for other 3D simulations, and synthetic spectra were computed from its predicted non-LTE populations. We used a six-level model atom of hydrogen and H$\alpha$ spectra as test cases. Results. SunnyNet gives reasonable predictions for non-LTE populations with a dramatic speedup of about 10$^5$ times when running on a single GPU and compared to existing codes. When using different snapshots of the same simulation for training and testing, SunnyNet's predictions are within 20-40% of the true values for most points, which results in average differences of a few percent in H$\alpha$ spectra. Predicted H$\alpha$ intensity maps agree very well with existing codes. Most importantly, they show the telltale signs of 3D radiative transfer in the morphology of chromospheric fibrils. The results are not as reliable when the training and testing are done with different families of simulations. SunnyNet is open source and publicly available.
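As a rough, purely illustrative sketch of the kind of mapping described in the abstract (this is not the published SunnyNet architecture; the layer sizes, names, and training step below are assumptions made for illustration), a small 3D convolutional network in PyTorch could translate log LTE populations of a six-level model atom into log non-LTE populations using a mean-squared-error loss:

```python
# Illustrative only: a toy LTE -> non-LTE population translator.
# Architecture, sizes, and names are assumptions, not SunnyNet itself.
import torch
import torch.nn as nn

N_LEVELS = 6  # six-level model atom of hydrogen, as in the abstract

class PopulationNet(nn.Module):
    def __init__(self, n_levels: int = N_LEVELS, hidden: int = 64):
        super().__init__()
        # Atomic levels are treated as channels; convolutions act on the spatial cube.
        self.net = nn.Sequential(
            nn.Conv3d(n_levels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(hidden, n_levels, kernel_size=1),
        )

    def forward(self, log_lte: torch.Tensor) -> torch.Tensor:
        # Shape: (batch, n_levels, nz, ny, nx), populations kept in log space.
        return self.net(log_lte)

def train_step(model, optimizer, log_lte, log_nlte):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(log_lte), log_nlte)
    loss.backward()
    optimizer.step()
    return loss.item()
```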
Comments: 14 pages, 14 figures, accepted for publication in A&A
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Instrumentation and Methods for Astrophysics (astro-ph.IM)
Cite as: arXiv:2112.13852 [astro-ph.SR]
(or arXiv:2112.13852v1 [astro-ph.SR] for this version)
Journal reference: A&A 658, A182 (2022)
Related DOI: https://doi.org/10.1051/0004-6361/202142625
From: Tiago Pereira [view email]
[v1] Mon, 27 Dec 2021 19:00:01 UTC (4,185 KB)
SoCG 2016
Boston, June 14-18, 2016
Rooms: (Tue. - Fri.)
A (for Auditorium). On ground floor, directly ahead of entrance.
Used for all single-track sessions, track A of SoCG talks, and YRF talks.
B (for Basement). One floor down, adjacent to elevators / staircase.
Used for track B of SoCG talks and workshops.
C (for Classroom). One floor up; nearest room to elevators / staircase.
Used for track C of workshops.
D (for Demo). Adjacent to C, on 2nd floor.
Used for Multimedia presentations.
(Note: each presentation will also be featured in lobby at specific times)
Detailed daily schedules
Printed copies will be available at the registration desk.
Tuesday June 14
13:15 Lunch reception [+]
Location: Jaharis Center courtyard (across street from SoCG)
Please bring ID.
In case of bad weather, we may move to the SoCG coffee break room.
13:35 Welcome
14:00 Multimedia Preview [+]
Interactive Geometric Algorithm Visualization in a Browser
Lynn Asselin, Kirk P. Gardner, and Donald R. Sheehy
Geometric Models for Musical Audio Data
Paul Bendich, Ellen Gasparovic, John Harer, and Christopher Tralie
Visualizing Scissors Congruence
Satyan L. Devadoss, Ziv Epstein, and Dmitriy Smirnov
Visualization of Geometric Spanner Algorithms
Mohammad Farshi and Seyed Hossein Hosseini
Path Planning for Simple Robots using Soft Subdivision Search
Ching-Hsiang Hsu, John Paul Ryan, and Chee Yap
Exploring Circle Packing Algorithms
Kevin Pratt, Connor Riley, and Donald R. Sheehy
The Explicit Corridor Map: Using the Medial Axis for Real-Time Path Planning and Crowd Simulation
Wouter van Toll, Atlas F. Cook IV, Marc J. van Kreveld, and Roland Geraerts
High Dimensional Geometry of Sliding Window Embeddings of Periodic Videos
Christopher J. Tralie
Introduction to Persistent Homology
Matthew L. Wright
Session 1A (Chair: Joseph O'Rourke)
Session 1B (Chair: Matias Korman)
14:20 The Planar Tree Packing Theorem [DOI] [+]
Markus Geyer, Michael Hoffmann, Michael Kaufmann, Vincent Kusters and Csaba Toth
Packing graphs is a combinatorial problem where several given graphs are being mapped into a common host graph such that every edge is used at most once. In the planar tree packing problem we are given two trees T1 and T2 on n vertices and have to find a planar graph on n vertices that is the edge-disjoint union of T1 and T2. A clear exception that must be made is the star which cannot be packed together with any other tree. But according to a conjecture of García et al. from 1997 this is the only exception, and all other pairs of trees admit a planar packing. Previous results addressed various special cases, such as a tree and a spider tree, a tree and a caterpillar, two trees of diameter four, two isomorphic trees, and trees of maximum degree three. Here we settle the conjecture in the affirmative and prove its general form, thus making it the planar tree packing theorem. The proof is constructive and provides a polynomial time algorithm to obtain a packing for two given nonstar trees. An Efficient Randomized Algorithm for Higher-Order Abstract Voronoi Diagrams [DOI] [+]
Cecilia Bohler, Rolf Klein and Chih-Hung Liu
Given a set of n sites in the plane, the order-k Voronoi diagram is a planar subdivision such that all points in a region share the same k nearest sites.The order-k Voronoi diagram arises for the k-nearest-neighbor problem, and there has been a lot of work for point sites in the Euclidean metric. In this paper, we study order-k Voronoi diagrams defined by an abstract bisecting curve system that satisfies several practical axioms, and thus our study covers many concrete order-k Voronoi diagrams. We propose a randomized incremental construction algorithm that runs in O(k(n-k) log2n + n log3n) steps, where O(k(n-k)) is the number of faces in the worst case. Due to those axioms, this result applies to disjoint line segments in the Lp norm, convex polygons of constant size, points in the Karlsruhe metric, and so on. In fact, this kind of run time with a polylog factor to the number of faces was only achieved for point sites in the L1 or Euclidean metric before.
14:40 Degree Four Plane Spanners: Simpler and Better [DOI] [+]
Iyad Kanj, Ljubomir Perkovic and Duru Turkoglu
Let P be a set of n points embedded in the plane, and let C be the complete Euclidean graph whose point-set is P. Each edge in C between two points p, q is realized as the line segment [pq], and is assigned a weight equal to the Euclidean distance |pq|. In this paper, we show how to construct in O(nlg{n}) time a plane spanner of C of maximum degree at most 4 and of stretch factor at most 20. This improves a long sequence of results on the construction of bounded degree plane spanners of C. Our result matches the smallest known upper bound of 4 by Bonichon et al. on the maximum degree while significantly improving their stretch factor upper bound from 156.82 to 20. The construction of our spanner is based on Delaunay triangulations defined with respect to the equilateral-triangle distance, and uses a different approach than that used by Bonichon et al. Our approach leads to a simple and intuitive construction of a well-structured spanner, and reveals useful structural properties of the Delaunay triangulations defined with respect to the equilateral-triangle distance. The Farthest-point Geodesic Voronoi Diagram of Points on the Boundary of a Simple Polygon
[DOI] [+]
Eunjin Oh, Luis Barba and Hee-Kap Ahn
Given a set of sites (points) in a simple polygon, the farthest-point geodesic Voronoi diagram partitions the polygon into cells, at most one cell per site, such that every point in a cell has the same farthest site with respect to the geodesic metric. We present an O((n+m)loglogn)-time algorithm to compute the farthest-point geodesic Voronoi diagram for m sites lying on the boundary of a simple n-gon.
15:00 On Visibility Representations of Non-planar Graphs [DOI] [+]
Therese Biedl, Giuseppe Liotta and Fabrizio Montecchiani
A rectangle visibility representation (RVR) of a graph consists of an assignment of axis-aligned rectangles to vertices such that for every edge there exists a horizontal or vertical line of sight between the rectangles assigned to its endpoints. Testing whether a graph has an RVR is known to be NP-hard. In this paper, we study the problem of finding an RVR under the assumption that an embedding in the plane of the input graph is fixed and we are looking for an RVR that reflects this embedding. We show that in this case the problem can be solved in polynomial time for general embedded graphs and in linear time for 1-plane graphs (i.e., embedded graphs having at most one crossing per edge). The linear time algorithm uses a precise list of forbidden configurations, which extends the set known for straight-line drawings of 1-plane graphs. These forbidden configurations can be tested for in linear time, and so in linear time we can test whether a 1-plane graph has an RVR and either compute such a representation or report a negative witness. Finally, we discuss some extensions of our study to the case when the embedding is not fixed but the RVR can have at most one crossing per edge. Qualitative Symbolic Perturbation [DOI] [+]
Olivier Devillers, Menelaos Karavelas and Monique Teillaud
In a classical Symbolic Perturbation scheme, degeneracies are handled by substituting some polynomials in epsilon for the inputs of a predicate. Instead of a single perturbation, we propose to use a sequence of (simpler) perturbations. Moreover, we look at their effects geometrically instead of algebraically; this allows us to tackle cases that were not tractable with the classical algebraic approach.
15:20 Inserting Multiple Edges into a Planar Graph
Markus Chimani and Petr Hlineny
Let G be a connected planar (but not yet embedded) graph and F a set of additional edges not in G. The multiple edge insertion problem (MEI) asks for a drawing of G+F with the minimum number of pairwise edge crossings, such that the subdrawing of G is plane. An optimal solution to this problem is known to approximate the crossing number of the graph G+F. Finding an exact solution to MEI is NP-hard for general F, but linear time solvable for the special case of |F|=1 [Gutwenger et al, SODA 2001/Algorithmica] and polynomial time solvable when all of F are incident to a new vertex [Chimani et al, SODA 2009]. The complexity for general F but with constant k=|F| was open, but algorithms both with relative and absolute approximation guarantees have been presented [Chuzhoy et al, SODA 2011], [Chimani-Hlineny, ICALP 2011]. We show that the problem is fixed parameter tractable (FPT) in k for biconnected G, or if the cut vertices of G have bounded degrees. We give the first exact algorithm for this problem; it requires only O(|V(G)|) time for any constant k. Fixed Points of the Restricted Delaunay Triangulation Operator [DOI] [+]
Marc Khoury and Jonathan Richard Shewchuk
The restricted Delaunay triangulation can be conceived as an operator that takes as input a k-manifold (typically smooth) embedded in Rd and a set of points sampled with sufficient density on that manifold, and produces as output a k-dimensional triangulation of the manifold, the input points serving as its vertices. What happens if we feed that triangulation back into the operator, replacing the original manifold, while retaining the same set of input points? If k = 2 and the sample points are sufficiently dense, we obtain another triangulation of the manifold. Iterating this process, we soon reach an iteration for which the input and output triangulations are the same. We call this triangulation a fixed point of the restricted Delaunay triangulation operator. With this observation, and a new test for distinguishing ``critical points'' near the manifold from those near its medial axis, we develop a provably good surface reconstruction algorithm for R3 with unusually modest sampling requirements. We develop a similar algorithm for constructing a simplicial complex that models a 2-manifold embedded in a high-dimensional space Rd, also with modest sampling requirements (especially compared to algorithms that depend on sliver exudation). The latter algorithm builds a non-manifold representation similar to the flow complex, but made solely of Delaunay simplices. The algorithm avoids the curse of dimensionality: its running time is polynomial, not exponential, in d.
15:40 Strongly Monotone Drawings of Planar Graphs
Stefan Felsner, Alexander Igamberdiev, Philipp Kindermann, Boris Klemz, Tamara Mchedlidze and Manfred Scheucher
A straight-line drawing of a graph is a monotone drawing if for each pair of vertices there is a path which is monotonically increasing in some direction, and it is called a strongly monotone drawing if the direction of monotonicity is given by the direction of the line segment connecting the two vertices. We present algorithms to compute crossing-free strongly monotone drawings for some classes of planar graphs; namely, 3-connected planar graphs, outerplanar graphs, and 2-trees. The drawings of 3-connected planar graphs are based on primal-dual circle packings. Our drawings of outerplanar graphs depend on a new algorithm that constructs strongly monotone drawings of trees which are also convex. For irreducible trees, these drawings are strictly convex. Incremental Voronoi Diagrams [DOI] [+]
Luis Barba, Stefan Langerman, Sarah R. Allen and John Iacono
We study the amortized number of combinatorial changes (edge insertions and removals) needed to update the graph structure of the Voronoi diagram VD(S) (and several variants thereof) of a set S of n sites in the plane as sites are added to the set. To that effect, we define a general update operation for planar graphs that can be used to model the incremental construction of several variants of Voronoi diagrams as well as the incremental construction of an intersection of halfspaces in R3. We show that the amortized number of edge insertions and removals needed to add a new site to the Voronoi diagram is O(n1/2). A matching Omega(n1/2) combinatorial lower bound is shown, even in the case where the graph representing the Voronoi diagram is a tree. This contrasts with the O(log(n)) upper bound of Aronov et al. [Aronov et al., in proc. of LATIN, 2006] for farthest-point Voronoi diagrams in the special case where points are inserted in clockwise order along their convex hull. We then present a semi-dynamic data structure that maintains the Voronoi diagram of a set S of n sites in convex position. This data structure supports the insertion of a new site p (and hence the addition of its Voronoi cell) and finds the asymptotically minimal number K of edge insertions and removals needed to obtain the diagram of S U (p) from the diagram of S, in time O(K polylog n) worst case, which is O(n1/2 polylog n) amortized by the aforementioned combinatorial result. The most distinctive feature of this data structure is that the graph of the Voronoi diagram is maintained explicitly at all times and can be retrieved and traversed in the natural way; this contrasts with other known data structures supporting nearest neighbor queries. Our data structure supports general search operations on the current Voronoi diagram, which can, for example, be used to perform point location queries in the cells of the current Voronoi diagram in O(log n) time, or to determine whether two given sites are neighbors in the Delaunay triangulation.
16:10 Coffee break
Session 2A (Chair: Sergio Cabello)
Session 2B (Chair: Erin Chambers)
16:30 Hyperplane Separability and Convexity of Probabilistic Point Sets [DOI] [+]
Martin Fink, John Hershberger, Nirman Kumar and Subhash Suri
We describe an O(nd) time algorithm for computing the exact probability that two d-dimensional probabilistic point sets are linearly separable, for any fixed d >= 2. A probabilistic point in d-space is the usual point, but with an associated (independent) probability of existence. We also show that the d-dimensional separability problem is equivalent to a (d+1)-dimensional convex hull membership problem, which asks for the probability that a query point lies inside the convex hull of n probabilistic points. Using this reduction, we improve the current best bound for the convex hull membership by a factor of n [Agarwal et al., ESA, 2014]. In addition, our algorithms can handle "input degeneracies" in which more than k+1 points may lie on a k-dimensional subspace, thus resolving an open problem in [Agarwal et al., ESA, 2014]. Finally, we prove lower bounds for the separability problem via a reduction from the k-SUM problem, which shows in particular that our O(n2) algorithms for 2-dimensional separability and 3-dimensional convex hull membership are nearly optimal. Polynomial-sized Topological Approximations Using the Permutahedron [DOI] [+]
Aruni Choudhary, Michael Kerber and Sharath Raghvendra
Classical methods to model topological properties of point clouds, such as the Vietoris-Rips complex, suffer from the combinatorial explosion of complex sizes. We propose a novel technique to approximate a multi-scale filtration of the Rips complex with improved bounds for size: precisely, for n points in Rd, we obtain a O(d)-approximation with at most n2O(d log k) simplices of dimension k or lower. In conjunction with dimension reduction techniques, our approach yields a O(polylog (n))-approximation of size nO(1) for Rips filtrations on arbitrary metric spaces. This result stems from high-dimensional lattice geometry and exploits properties of the permutahedral lattice, a well-studied structure in discrete geometry. Building on the same geometric concept, we also present a lower bound result on the size of an approximate filtration: we construct a point set for which every (1+epsilon)-approximation of the Cech filtration has to contain nOmega(log log n) features, provided that epsilon < 1/(log1+cn for c in (0,1).
16:50 On the Separability of Stochastic Geometric Objects, with Applications [DOI] [+]
Jie Xue, Yuan Li and Ravi Janardan
In this paper, we study the linear separability problem for stochastic geometric objects under the well-known unipoint/multipoint uncertainty models. Let S=SR U SB be a given set of stochastic bichromatic points, and define n = min{|SR|, |SB|} and N = max{|SR|, |SB|}. We show that the separable-probability (SP) of S can be computed in O(nNd-1) time for d >= 3 and O(min{nN log N, N2}) time for d=2, while the expected separation-margin (ESM) of S can be computed in O(nNd) time for d >= 2. In addition, we give an Omega(nNd-1) witness-based lower bound for computing SP, which implies the optimality of our algorithm among all those in this category. Also, a hardness result for computing ESM is given to show the difficulty of further improving our algorithm. As an extension, we generalize the same problems from points to general geometric objects, i.e., polytopes and/or balls, and extend our algorithms to solve the generalized SP and ESM problems in O(nNd) and O(nNd+1) time, respectively. Finally, we present some applications of our algorithms to stochastic convex-hull related problems. Eliminating Higher-Multiplicity Intersections, II. The Deleted Product Criterion in the r-Metastable Range [DOI] [+]
Isaac Mabillard and Uli Wagner
Motivated by Tverberg-type problems in topological combinatorics and by classical results about embeddings (maps without double points), we study the question whether a finite simplicial complex K can be mapped into Rd without higher-multiplicity intersections. We focus on conditions for the existence of almost r-embeddings, i.e., maps f: K -> Rd such that the intersection of f(sigma_1), ..., f(sigma_r) is empty whenever sigma_1,...,sigma_r are pairwise disjoint simplices of K. Generalizing the classical Haefliger-Weber embeddability criterion, we show that a well-known necessary deleted product condition for the existence of almost r-embeddings is sufficient in a suitable r-metastable range of dimensions: If r d > (r+1) dim K + 2 then there exists an almost r-embedding K-> Rd if and only if there exists an equivariant map of the r-fold deleted product of K to the sphere Sd(r-1)-1. This significantly extends one of the main results of our previous paper (which treated the special case where d=rk and dim K=(r-1)k, for some k> 2), and settles an open question raised there.
17:10 Separating a Voronoi Diagram via Local Search [DOI] [+]
V.S.P. Vijay Bhattiprolu and Sariel Har-Peled
Given a set P of n points in Rd , we show how to insert a set Z of O(n1-1/d) additional points, such that P can be broken into two sets P1 and P2 , of roughly equal size, such that in the Voronoi diagram V(P u Z), the cells of P1 do not touch the cells of P2 ; that is, Z separates P1 from P2 in the Voronoi diagram (and also in the dual Delaunay triangulation). In addition, given such a partition (P1 , P2 ) of P , we present an approximation algorithm to compute a minimum size separator realizing this partition. We also present a simple local search algorithm that is a PTAS for approximating the optimal Voronoi partition. Delaunay Triangulations on Orientable Surfaces of Low Genus [DOI] [+]
Mikhail Bogdanov, Monique Teillaud and Gert Vegter
Earlier work on Delaunay triangulation of point sets on the 2D flat torus, which is locally isometric to the Euclidean plane, was based on lifting the point set to a locally isometric 9-sheeted covering space of the torus. Under mild conditions the Delaunay triangulation of the lifted point set, consisting of 9 copies of the input set, projects to the Delaunay triangulation of the input set. We improve and generalize this work. First we present a new construction based on an 8-sheeted covering space, which shows that eight copies suffice for the standard flat torus. Then we generalize this construction to the context of compact orientable surfaces of higher genus, which are locally isometric to the hyperbolic plane. We investigate more thoroughly the Bolza surface, homeomorphic to a sphere with two handles, both because it is the hyperbolic surface with lowest genus, and because triangulations on the Bolza surface have applications in various fields such as neuromathematics and cosmological models. While the general properties (existence results of appropriate covering spaces) show similarities with the results for the flat case, explicit constructions and their proofs are much more complex, even in the case of the apparently simple Bolza surface. One of the main reasons is the fact that two hyperbolic translations do not commute in general. To the best of our knowledge, the results in this paper are the first ones of this kind. The interest of our contribution lies not only in the results, but most of all in the construction of covering spaces itself and the study of their properties.
17:30 On Variants of k-means Clustering [DOI] [+]
Sayan Bandyapadhyay and Kasturi Varadarajan
Clustering problems often arise in fields like data mining and machine learning. Clustering usually refers to the task of partitioning a collection of objects into groups with similar elements, with respect to a similarity (or dissimilarity) measure. Among the clustering problems, k-means clustering in particular has received much attention from researchers. Despite the fact that k-means is a well studied problem, its status in the plane is still open. In particular, it is unknown whether it admits a PTAS in the plane. The best known approximation bound achievable in polynomial time is 9+epsilon. In this paper, we consider the following variant of k-means. Given a set C of points in Rd and a real f > 0, find a finite set F of points in Rd that minimizes the quantity f*|F|+sum_{p in C} min_{q in F} {||p-q||}2. For any fixed dimension d, we design a PTAS for this problem that is based on local search. We also give a ``bi-criterion'' local search algorithm for k-means which uses (1+epsilon)k centers and yields a solution whose cost is at most (1+epsilon) times the cost of an optimal k-means solution. The algorithm runs in polynomial time for any fixed dimension. The contribution of this paper is two-fold. On the one hand, we are able to handle the square of distances in an elegant manner, obtaining a near-optimal approximation bound. This leads us towards a better understanding of the k-means problem. On the other hand, our analysis of local search might also be useful for other geometric problems. This is important considering that little is known about the local search method for geometric approximation. Structure and Stability of the 1-Dimensional Mapper [DOI] [+]
Mathieu Carrière and Steve Oudot
Given a continuous function f:X->R and a cover I of its image by intervals, the Mapper is the nerve of a refinement of the pullback cover f-1(I). Despite its success in applications, little is known about the structure and stability of this construction from a theoretical point of view. As a pixelized version of the Reeb graph of f, it is expected to capture a subset of its features (branches, holes), depending on how the interval cover is positioned with respect to the critical values of the function. Its stability should also depend on this positioning. We propose a theoretical framework relating the structure of the Mapper to that of the Reeb graph, making it possible to predict which features will be present and which will be absent in the Mapper given the function and the cover, and for each feature, to quantify its degree of (in-)stability. Using this framework, we can derive guarantees on the structure of the Mapper, on its stability, and on its convergence to the Reeb graph as the granularity of the cover I goes to zero.
17:50 Testing Convexity of Figures Under the Uniform Distribution [DOI] [+]
Piotr Berman, Meiram Murzabulatov and Sofya Raskhodnikova
We consider the following basic geometric problem: Given epsilon in (0,1/2), a 2-dimensional figure that consists of a black object and a white background is epsilon-far from convex if it differs in at least an epsilon fraction of the area from every figure where the black object is convex. How many uniform and independent samples from a figure that is epsilon-far from convex are needed to detect a violation of convexity with probability at least 2/3? This question arises in the context of designing property testers for convexity. Specifically, a (1-sided error) tester for convexity gets samples from the figure, labeled by their color; it always accepts if the black object is convex; it rejects with probability at least 2/3 if the figure is epsilon-far from convex. We show that Theta(epsilon^{-4/3}) uniform samples are necessary and sufficient for detecting a violation of convexity in an epsilon-far figure and, equivalently, for testing convexity of figures with 1-sided error. Our testing algorithm runs in time O(epsilon^{-4/3}) and thus beats the Omega(epsilon^{-3/2}) sample lower bound for learning convex figures under the uniform distribution from the work of Schmeltz (Data Structures and Efficient Algorithms, 1992). It shows that, with uniform samples, we can check if a set is approximately convex much faster than we can find an approximate representation of a convex set.
Convergence Between Categorical Representations of Reeb Space and Mapper [DOI] [+]
Elizabeth Munch and Bei Wang
The Reeb space, which generalizes the notion of a Reeb graph, is one of the few tools in topological data analysis and visualization suitable for the study of multivariate scientific datasets. First introduced by Edelsbrunner et al., it compresses the components of the level sets of a multivariate mapping and obtains a summary representation of their relationships. A related construction called mapper, and a special case of the mapper construction called the Joint Contour Net have been shown to be effective in visual analytics. Mapper and JCN are intuitively regarded as discrete approximations of the Reeb space, however without formal proofs or approximation guarantees. An open question has been proposed by Dey et al. as to whether the mapper construction converges to the Reeb space in the limit. In this paper, we are interested in developing the theoretical understanding of the relationship between the Reeb space and its discrete approximations to support its use in practical data analysis. Using tools from category theory, we formally prove the convergence between the Reeb space and mapper in terms of an interleaving distance between their categorical representations. Given a sequence of refined discretizations, we prove that these approximations converge to the Reeb space in the interleaving distance; this also helps to quantify the approximation quality of the discretization at a fixed resolution.
20:45 Social event: The Black Rose Irish pub. [+]
Address: 160 State st., Boston 02109
Walking from SoCG: 1.4km, 18 min.
Subway from SoCG: orange line towards Oak Grove. Stop at State St. Walk East on State st.
Wednesday June 15
Chair: Monique Teillaud Session 3B
Chair: David Mount
9:20 A Quasilinear-Time Algorithm for Tiling the Plane Isohedrally with a Polyomino [DOI] [+]
Stefan Langerman and Andrew Winslow
A plane tiling consisting of congruent copies of a shape is isohedral provided that for any pair of copies, there exists a symmetry of the tiling mapping one copy to the other. We give an O(n log^2 n)-time algorithm for deciding if a polyomino with n edges can tile the plane isohedrally. This improves on the O(n^18)-time algorithm of Keating and Vince and generalizes recent work by Brlek, Provençal, Fédou, and the second author.
Weak 1/r-nets for Moving Points [DOI] [+]
Alexandre Rok and Shakhar Smorodinsky
In this paper, we extend the weak 1/r-net theorem to a kinetic setting where the underlying set of points is moving polynomially with bounded description complexity. We establish that one can find a kinetic analog N of a weak 1/r-net of cardinality O(r^{d(d+1)/2} log^d r) whose points are moving with coordinates that are rational functions with bounded description complexity. Moreover, each member of N has one polynomial coordinate.
9:40 Congruence Testing of Point Sets in 4-space
Heuna Kim and Günter Rote
We give a deterministic O(n log n)-time algorithm to decide if two n-point sets in 4-dimensional Euclidean space are the same up to rotations and translations. It has been conjectured that O(n log n) algorithms should exist for any fixed dimension. The best algorithms in d-space so far are a deterministic algorithm by Brass and Knauer [Int. J. Comput. Geom. Appl., 2000] and a randomized Monte Carlo algorithm by Akutsu [Comp. Geom., 1998]. They take time O(n^2 log n) and O(n^{3/2} log n) respectively in 4-space. Our algorithm exploits many geometric structures and properties of 4-dimensional space.
New Lower Bounds for epsilon-nets [DOI] [+]
Nabil Mustafa, Andrey Kupavskiy and Janos Pach
Following groundbreaking work by Haussler and Welzl (1987), the use of small epsilon-nets has become a standard technique for solving algorithmic and extremal problems in geometry and learning theory. Two significant recent developments are: (i) an upper bound on the size of the smallest epsilon-nets for set systems, as a function of their so-called shallow-cell complexity (Chan, Grant, Konemann, and Sharpe); and (ii) the construction of a set system whose members can be obtained by intersecting a point set in R^4 by a family of half-spaces such that the size of any epsilon-net for them is at least (1/(9*epsilon)) log (1/epsilon) (Pach and Tardos). The present paper completes both of these avenues of research. We (i) give a lower bound, matching the result of Chan et al., and (ii) generalize the construction of Pach and Tardos to half-spaces in R^d, for any d >= 4, to show that the general upper bound of Haussler and Welzl for the size of the smallest epsilon-nets is tight.
10:45 Invited Talk: Daniela Rus, Toward Pervasive Robots [+]
The digitization of practically everything coupled with the mobile Internet, the automation of knowledge work, and advanced robotics promises a future with democratized use of machines and wide-spread use of robots and customization. However, pervasive use of robots remains a hard problem. Where are the gaps that we need to address in order to advance toward a future where robots are common in the world and they help reliably with physical tasks? What is the role of geometric reasoning along this trajectory?
In this talk I will discuss challenges toward pervasive use of robots and recent developments in geometric algorithms for customizing robots. I will focus on a suite of geometric algorithms for automatically designing, fabricating, and tasking robots using a print-and-fold approach. I will also describe how geometric reasoning can play a role in creating robots more capable of reasoning in the world. By enabling on-demand creation of programmable robots, we can begin to imagine a world with one robot for every physical task.
11:35 Best Paper: The Number of Holes in the Union of Translates of a Convex Set in Three Dimensions [DOI] [+]
Boris Aronov, Otfried Cheong, Michael Gene Dobbins and Xavier Goaoc
We show that the union of translates of a convex body in three dimensional space can have a cubic number of holes in the worst case, where a hole in a set is a connected component of its complement. This refutes a 20-year-old conjecture. As a consequence, we also obtain improved lower bounds on the complexity of motion planning problems and of Voronoi diagrams with convex distance functions.
11:50 YRF Preview (for Wednesday's talks)
Chair: Jean Cardinal Session 4B
Chair: Kevin Verbeek
12:15 Who Needs Crossings? Hardness of Plane Graph Rigidity [DOI] [+]
Zachary Abel, Erik D. Demaine, Martin L. Demaine, Sarah Eisenstat, Jayson Lynch and Tao Schardl
We exactly settle the complexity of graph realization, graph rigidity, and graph global rigidity as applied to three types of graphs: "globally noncrossing" graphs, which avoid crossings in all of their configurations; matchstick graphs, with unit-length edges and where only noncrossing configurations are considered; and unrestricted graphs (crossings allowed) with unit edge lengths (or in the global rigidity case, edge lengths in {1,2}). We show that all nine of these questions are complete for the class Exists-R, defined by the Existential Theory of the Reals, or its complement Forall-R; in particular, each problem is (co)NP-hard. One of these nine results - that realization of unit-distance graphs is Exists-R-complete - was shown previously by Schaefer (2013), but the other eight are new. We strengthen several prior results. Matchstick graph realization was known to be NP-hard (Eades & Wormald 1990, or Cabello et al. 2007), but its membership in NP remained open; we show it is complete for the (possibly) larger class Exists-R. Global rigidity of graphs with edge lengths in {1,2} was known to be coNP-hard (Saxe 1979); we show it is Forall-R-complete. The majority of the paper is devoted to proving an analog of Kempe's Universality Theorem - informally, "there is a linkage to sign your name" - for globally noncrossing linkages. In particular, we show that any polynomial curve phi(x,y)=0 can be traced by a noncrossing linkage, settling an open problem from 2004. More generally, we show that the nontrivial regions in the plane that may be traced by a noncrossing linkage are precisely the compact semialgebraic regions. Thus, no drawing power is lost by restricting to noncrossing linkages. We prove analogous results for matchstick linkages and unit-distance linkages as well.
Approximating Dynamic Time Warping and Edit Distance for a Pair of Point Sequences [DOI] [+]
Pankaj Agarwal, Kyle Fox, Jiangwei Pan and Rex Ying
We present the first subquadratic algorithms for computing similarity between a pair of point sequences in Rd, for any fixed d > 1, using dynamic time warping (DTW) and edit distance, assuming that the point sequences are drawn from certain natural families of curves. In particular, our algorithms compute (1 + eps)-approximations of DTW and ED in near-linear time for point sequences drawn from k-packed or k-bounded curves, and subquadratic time for backbone sequences. Roughly speaking, a curve is k-packed if the length of its intersection with any ball of radius r is at most kr, and it is k-bounded if the sub-curve between two curve points does not go too far from the two points compared to the distance between the two points. In backbone sequences, consecutive points are spaced at approximately equal distances apart, and no two points lie very close together. Recent results suggest that a subquadratic algorithm for DTW or ED is unlikely for an arbitrary pair of point sequences even for d = 1. The commonly used dynamic programming algorithms for these distance measures reduce the problem to computing a minimum-weight path in a grid graph. Our algorithms work by constructing a small set of rectangular regions that cover the grid vertices. The weights of vertices inside each rectangle are roughly the same, and we develop efficient procedures to compute the approximate minimum-weight paths through these rectangles.
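For readers unfamiliar with the distance measure above, the standard quadratic dynamic program for DTW (the baseline that these subquadratic results improve upon for special curve classes) can be sketched as follows; the distance function and the tiny example sequences are placeholders.

    import math

    def dtw(a, b):
        # Classic O(|a|*|b|) dynamic program for dynamic time warping: cell
        # (i, j) holds the cheapest warping of the first i points of a onto the
        # first j points of b, i.e. a minimum-weight path in the grid graph.
        n, m = len(a), len(b)
        INF = float("inf")
        D = [[INF] * (m + 1) for _ in range(n + 1)]
        D[0][0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = math.dist(a[i - 1], b[j - 1])
                D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
        return D[n][m]

    print(dtw([(0, 0), (1, 0), (2, 0)], [(0, 0.1), (1, -0.1), (1.5, 0), (2, 0)]))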
12:35 Configurations of Lines in 3-space and Rigidity of Planar Structures [DOI] [+]
Orit E. Raz
Let L be a sequence (l1,l2,...,ln) of n lines in C^3. We define the *intersection graph* GL=([n],E) of L, where [n]:={1,..., n}, and with {i,j} in E if and only if i != j and the corresponding lines li and lj intersect, or are parallel (or coincide). For a graph G=([n],E), we say that a sequence L is a *realization* of G if G is a subgraph of GL. One of the main results of this paper is to provide a combinatorial characterization of graphs G=([n],E) that have the following property: For every *generic* realization L of G, that consists of n pairwise distinct lines, we have GL=Kn, in which case the lines of L are either all concurrent or all coplanar. The general statements that we obtain about lines, apart from their independent interest, turn out to be closely related to the notion of graph rigidity. The connection is established via the so-called Elekes--Sharir framework, which allows us to transform the problem into an incidence problem involving lines in three dimensions. By exploiting the geometry of contacts between lines in 3D, we can obtain alternative, simpler, and more precise characterizations of the rigidity of graphs.
On Computing the Fréchet Distance Between Surfaces [DOI] [+]
Amir Nayyeri and Hanzhong Xu
We describe two (1+epsilon)-approximation algorithms for computing the Fréchet distance between two homeomorphic piecewise linear surfaces R and S of genus zero and total complexity n, with Fréchet distance delta. (1) A 2^{O((n + (Area(R)+Area(S))/(epsilon.delta)^2)^2)} time algorithm if R and S are composed of fat triangles (triangles with angles larger than a constant). (2) An O(D/(epsilon.delta)^2) n + 2^{O(D^4/(epsilon^4.delta^2))} time algorithm if R and S are polyhedral terrains over [0,1]^2 with slope at most D. Although the Fréchet distance between curves has been studied extensively, very little is known for surfaces. Our results are the first algorithms (both for surfaces and terrains) that are guaranteed to terminate in finite time. Our latter result, in particular, implies a linear time algorithm for terrains of constant maximum slope and constant Fréchet distance.
12:55 Crossing Number is Hard for Kernelization [DOI] [+]
Petr Hlineny and Marek Derňár
The graph crossing number problem, cr(G)<=k, asks for a drawing of a graph G in the plane with at most k edge crossings. Although this problem is in general notoriously difficult, it is fixed-parameter tractable for the parameter k [Grohe, STOC 2001]. This suggests a closely related question of whether this problem has a polynomial kernel, meaning whether every instance of cr(G)<=k can be reduced in polynomial time to an equivalent instance of size polynomial in k (and independent of |G|). We answer this question in the negative. Along the proof we show that the tile crossing number problem of twisted planar tiles is NP-hard, which has been an open problem for some time, too, and then employ the complexity technique of cross-composition. Our result holds already for the special case of graphs obtained from planar graphs by adding one edge.
Grouping Time-varying Data for Interactive Exploration [DOI] [+]
Arthur van Goethem, Marc Van Kreveld, Maarten Löffler, Bettina Speckmann and Frank Staals
We present algorithms and data structures that support the interactive analysis of the grouping structure of one-, two-, or higher-dimensional time-varying data while varying all defining parameters. Grouping structures characterise important patterns in the temporal evolution of sets of time-varying data. We follow Buchin et al. [JoCG 2015] who define groups using three parameters: group-size, group-duration, and inter-entity distance. We give upper and lower bounds on the number of maximal groups over all parameter values, and show how to compute them efficiently. Furthermore, we describe data structures that can report changes in the set of maximal groups in an output-sensitive manner. Our results hold in R^d for fixed d.
14:30 Lunch: Hei La Moon dim sum restaurant [+]
Location: 88 Beach St, Boston, MA 02111
(5 minute walk from SoCG: exit building, turn left, walk on Harrison until Beach, turn right, pass gate, cross large avenue, restaurant is on the left)
Multimedia room open through lunch and the afternoon
16:00 A: Young Researchers Forum B: Workshop 1:
5th Mini Symposium on Computational Topology, Part I [Web] [+]
Organizers: Justin Curry, Pawel Dlotko, Michael Lesnick and Clement Maria
Software solutions for geometric and topological understanding of high dimensional data. This part focuses on the state of the art implementations of algorithms for computing topological features in data analysis. Existing software for statistics in topology (e.g, TDA R-package, persistence landscape toolbox, kernel for persistence) will also be showcased. C: Workshop 2:
Geometric Computing on Uncertain Data [Web] [+]
Organizers: Pankaj Agarwal, Nirman Kumar, Ben Raichel and Subhash Suri
There is a growing need for geometric algorithms that can gracefully operate under data uncertainty. The sources of data uncertainty can vary widely, from measurement noise to missing information and strategic randomness, among others. A number of researchers within computational geometry have recently explored a variety of data uncertainty models and problem-specific approaches, demonstrating a breadth of interest and scope. The research, however, is still in a state of infancy, and ripe for a broader exchange of ideas. The goal of the workshop is to provide a forum for computational geometers interested in this topic to learn about the current state of the art, stimulate discussions about new directions and challenges, and to foster collaborations.
14:30 -- Universal Guards: Guarding all Polygonalizations of a Point Set in the Plane [+]
Sándor Fekete, Qian Li, Joseph Mitchell and Christian Scheffer
The Art Gallery Problem (AGP) seeks to find the fewest guards to see all of a given domain; in its classic combinatorial variant (posed by Victor Klee), it asks for the number of guards that always suffice and are sometimes necessary to guard any simple $n$-gon: the answer is the well known $\lfloor n/3\rfloor$. While Klee's question was posed about guarding an $n$-vertex {\em simple polygon}, a related question about {\em point sets} was posed at the 2014 Goodman-Pollack Fest (NYU, November 2014): Given a set $S$ of $n$ points in the plane, how many guards always suffice to guard any simple polygon with vertex set $S$? A set of guards that guard every polygonalization of $S$ is said to be a set of {\em universal guards} for the point set. The question is how many universal guards are always sufficient, and sometimes necessary, for any set of $n$ points? We give the first set of results on universal guarding. We focus here on the case in which guards must be placed at a subset (the {\em guarded points}) of the input set $S$ and thus will be vertex guards for any polygonalization of~$S$.
14:45 -- All-Pairs Shortest Paths in Unit Disk Graphs in Slightly Subquadratic Time [+]
Timothy Chan and Dimitrios Skrepetos
In this paper we study the all-pairs shortest paths problem in (unweighted) unit disk graphs. The previous best solution for this problem required O(n^2 log n) time, by running the O(n log n)-time single-source shortest path algorithm of Cabello and Jejčič (2015) from every source vertex, where n is the number of vertices. We not only manage to eliminate the logarithmic factor, but also obtain the first (slightly) subquadratic algorithm for the problem, running in O(n^2 sqrt(log log n / log n)) time. Our algorithm computes an implicit representation of all the shortest paths, and, in the same amount of time, can also compute the diameter of the graph.
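For context, a minimal (and far slower) baseline for the same problem simply builds the unit disk graph explicitly and runs a BFS from every vertex; this only makes the problem statement concrete and does not reflect the algorithm of the talk. The radius and the sample points are placeholders.

    from collections import deque
    import math

    def unit_disk_apsp(points, radius=1.0):
        # Build the unweighted unit disk graph by checking all pairs, then run
        # a BFS from every vertex: O(n^2 + n*m) time, far from subquadratic.
        n = len(points)
        adj = [[] for _ in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(points[i], points[j]) <= radius:
                    adj[i].append(j)
                    adj[j].append(i)
        dist = [[-1] * n for _ in range(n)]   # -1 marks unreachable pairs
        for s in range(n):
            dist[s][s] = 0
            queue = deque([s])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if dist[s][v] == -1:
                        dist[s][v] = dist[s][u] + 1
                        queue.append(v)
        return dist

    print(unit_disk_apsp([(0, 0), (0.8, 0), (1.6, 0), (5, 5)]))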
15:00 -- Improved Bounds on the Growth Constant of Polyiamonds [+]
Gill Barequet and Mira Shalah
In this paper we provide improved lower and upper bounds on the asymptotic growth constant of polyiamonds, proving that it is between 2.8424 and 3.6050.
15:15 -- Approximate Range Counting Revisited [+]
Saladi Rahul
This work presents several new results on approximate range counting. For a given query, if the actual count is $k$, then the data structures in this paper output a value, $\tau$, lying in the range $[(1-\vare)k,(1+\vare)k]$. The key results are the following: [A] A new technique for efficiently solving any approximate range counting problem is presented. This technique can be viewed as an enhancement of Aronov and Har-Peled's technique [{\it SIAM Journal of Computing, 2008}]; key reasons being the following: (1) The new technique is sensitive to the value of $k$: As an application, this work presents a structure for approximate halfspace range counting in $\IR^d, d\geq 4$ which occupies $O(n)$ space and solves the query in $O((n/k)^{1-1/\lfloor d/2\rfloor})$ time. When $k=\Theta(n)$, then the query time is $O(1)$. (2) The new technique handles colored range searching problems: As an application, the orthogonal colored range counting problem is solved. Existing structures for exact counting use $O(n^d)$ space to answer the query in $O(polylog n)$ query time. Improving these bounds substantially would require improving the best exponent of matrix multiplication. Therefore, if one is willing for an approximation, an attractive result is obtained: an $O(n polylog n)$ space data structure and an $O(polylog n)$ query time algorithm. [B] An optimal solution for approximate rectangle stabbing counting problems in $\IR^2$. This is achieved by a non-trivial reduction to planar point location. [C] Finally, an efficient solution is obtained for $3$-sided orthogonal colored range counting. The result is obtained by a non-trivial combination of two different types of random sampling techniques and a reduction to non-colored range searching problem.
15:30 -- Exact and Approximation Algorithms for Time-Window TSP [+]
Su Jia, Jie Gao and Joseph Mitchell
We study a version of the time-window traveling salesman problem (TWTSP): given a speed bound B for a robot and a set of cities each having a time window during which it must be visited, compute a shortest path for the robot that visits all cities within their respective time windows, if possible to do so.
15:45 -- Recognition of the Spherical Laguerre Voronoi Diagram [+]
Supanut Chaidee and Kokichi Sugihara
We construct an algorithm for judging whether or not a given tessellation on a sphere is a spherical Laguerre Voronoi diagram. This algorithm is based on the properties of polyhedra corresponding to the spherical Laguerre Voronoi diagram and their transformation in projective spaces.
14:30 - 14:50
Yasu Hiraoka
Topological Data Analysis on Materials Science: Statistical Characterization of Glass Transition
Gunnar Carlsson
Topological Modeling of Complex Data
David Spivak
Pixel Matrices for Big, Messy, Real-World Data [+]
Natural, real-world data does not come equipped with mathematical equations that it satisfies. A data set is just a collection of observed relationships, so it is naturally messy and probabilistic. When combining and analyzing these observed relationships, it is destructive, yet common practice, to first "normalize" the data by choosing models, fitting curves, removing outliers, or estimating parameters. It would often be preferable to forgo the clean-up step and simply combine the data sets directly. Pixel matrices are an applied category-theoretic technique for doing just that. We will explain how pixel matrices offer a massively parallelizable method for approximating the solution set for systems of nonlinear equations, and how the same idea can be applied when working with messy data.
14:30 - 15:00
Maarten Löffler
Where are we Going? Uncertainty in Motion
Jeff Phillips
Coresets for Uncertain Data
Yuan Li
On the Arrangement of Stochastic Lines in R^2
18:00 A: Young Researchers Forum B: Workshop 1 (continued)
5th Mini Symposium on Computational Topology, Part I C: Workshop 2 (continued)
Geometric Computing on Uncertain data
16:30 -- Computing the Expected Area of an Induced Triangle [+]
Vissarion Fisikopoulos, Frank Staals and Constantinos Tsirogiannis
Consider the following problem: given a set P of n points in the plane, compute the expected area of a triangle induced by P, that is, a triangle whose vertices are selected uniformly at random from the points in P. This problem is a special case of computing the expected area of the convex hull of k points, selected uniformly at random from P. These problems are important in computing the functional diversity in Ecology. We present a simple exact algorithm for the problem that computes the expected triangle area in O(n^2 log n) time, and extends to the case of computing the area of the convex hull of a size k subset. Additionally, we present a (1\pm\epsilon)-approximation algorithm for the case in which the ratio \rho between the furthest pair distance and the closest pair distance of the points in P is bounded. With high probability (whp.) our algorithm computes an answer in the range [(1-\epsilon)A,(1+\epsilon)A], where A is the true expected triangle area, in O(\frac{1}{\epsilon^{8/3}} \rho^4 n^{5/3} \log^{4/3} n) expected time.
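The quantity being computed can be made concrete with a brute-force O(n^3) reference implementation (the talk's algorithm is much faster); the point set in the example is a placeholder.

    import itertools

    def expected_triangle_area(points):
        # Exact expected area of a triangle whose three vertices are chosen
        # uniformly at random from `points`, by averaging over all C(n,3)
        # triples with the shoelace formula.
        def area(p, q, r):
            return abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])) / 2.0
        triples = list(itertools.combinations(points, 3))
        return sum(area(*t) for t in triples) / len(triples)

    print(expected_triangle_area([(0, 0), (1, 0), (0, 1), (1, 1)]))   # -> 0.5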
16:45 -- A Geometric Approach to k-SUM [+]
Jean Cardinal, John Iacono and Aurélien Ooms
It is known that $k$-SUM can be solved using $\tilde{O}(n^{\lceil\frac{k}{2}\rceil})$ time and linear queries (here the notation $\tilde{O}$ ignores polylogarithmic factors). On the other hand, there is a point location algorithm due to Meiser that shows the existence of $\tilde{O}(n^4)$-depth algebraic computation trees for $k$-SUM. By streamlining Meiser's algorithm, we prove $k$-SUM can be solved using $\tilde{O}(n^3)$ expected linear queries in $\tilde{O}(n^{\lceil k/2 \rceil+8})$ expected time. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8) to the best known algorithm but for the first time also with the number of queries on the input a polynomial that is independent of $k$. The new algorithms we present rely heavily on fundamental tools from computational geometry: $\varepsilon$-nets and cuttings. A preprint is available on arXiv.
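As background only: the simplest member of the k-SUM family is 3SUM, and the classical sort-and-scan solution below runs in O(n^2) time. It is not the linear-query algorithm of the abstract, just a concrete reference point; the sample input is arbitrary.

    def three_sum(nums, target=0):
        # Classic O(n^2) two-pointer solution to 3SUM: return a triple of
        # indices whose values sum to `target`, or None if no such triple exists.
        order = sorted(range(len(nums)), key=lambda i: nums[i])
        n = len(nums)
        for a in range(n - 2):
            lo, hi = a + 1, n - 1
            while lo < hi:
                s = nums[order[a]] + nums[order[lo]] + nums[order[hi]]
                if s == target:
                    return order[a], order[lo], order[hi]
                if s < target:
                    lo += 1
                else:
                    hi -= 1
        return None

    print(three_sum([3, -1, 7, -6, 4]))   # -> (3, 1, 2): -6 + (-1) + 7 == 0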
17:00 -- Network Optimization on Partitioned Pairs of Points [+]
Tyler Speaker Mayer, Gui Citovsky, Joseph Mitchell, Esther Arkin, Aritra Banik, Paz Carmi, Matthew Katz and Su Jia
Given a set $S = \bigcup_{i=1}^{n}\{p_i, q_i\}$ of $n$ pairs of points in a metric space, we study the problem of computing, what we call, a {\em feasible} partition $S = S_1 \cup S_2$ such that $p_i \in S_1$ if and only $q_i \in S_2 \: \forall \: i$. The partition should optimize the cost of a {\em pair} of networks, one built on $S_1$, and one built on $S_2$. In this work we consider the network structures to be matchings, minimum spanning trees (MSTs), traveling salesman tours (TSP tours), or their bottleneck equivalents. Let $f(X)$ be some network structure computed on point set $X$ and let $\lambda(f(X))$ be the bottleneck edge of that network. For each of the aforementioned network structures we consider the objective of (1) minimizing $|f(S_1)| + |f(S_2)|$, (2) minimizing $max\{|f(S_1)|,|f(S_2)|\}$, or (3) minimizing $max\{|\lambda(f(S_1))|, |\lambda(f(S_2))|\}$. Here, $|\cdot|$ denotes the sum of the edge lengths. We show several hardness results and an $O(1)$ approximation for every objective considered. Our results are summarized in Table \ref{results_table} and full details can be found in \cite{arkin2016Network}.
17:15 -- Computing the Planar Slope Number [+]
Udo Hoffmann
The planar slope number of a planar graph G is defined as the minimal number of slopes that is required for a straight line drawing of G. We show that determining the planar slope number is hard in the existential theory of the reals. We point out consequences for drawings that minimize the planar slope number.
16:30 - 16:50
Michael Kerber
Geometry Helps to Compare Persistence Diagrams
Clement Maria
Zigzag Persistent Homology: Algorithms, Software and Applications
Pawel Dlotko
Geometry Understanding in Higher Dimensions, the Gudhi Library
16:30 - 17:00
Nancy Amato
Dealing with Uncertainty in Sampling-Based Motion Planning
Guy Rosman
Using Coresets for Video Summaries
Don Sheehy
Sampling Uncertain Manifolds
21:00 Social event: Skywalk observatory reception [+]
Location: Prudential Center, 52nd floor
Walking from SoCG: under 2km. Follow Kneeland / Stuart street all the way.
Subway from SoCG:
Green line E from Boylston towards Heath. Stop at Prudential.
Orange line towards Forest Hills. Stop at Back Bay.
Walk from NEU: 10 min. on Huntington Ave.
Thursday June 16
Chair: Yusu Wang Session 5B
Chair: Marc van Kreveld
9:20 Avoiding the Global Sort: A Faster Contour Tree Algorithm [DOI] [+]
Benjamin Raichel and C. Seshadhri
We revisit the classical problem of computing the contour tree of a scalar field f:M to R, where M is a triangulated simplicial mesh in R^d. The contour tree is a fundamental topological structure that tracks the evolution of level sets of f and has numerous applications in data analysis and visualization. All existing algorithms begin with a global sort of at least all critical values of f, which can require (roughly) Omega(n log n) time. Existing lower bounds show that there are pathological instances where this sort is required. We present the first algorithm whose time complexity depends on the contour tree structure, and avoids the global sort for non-pathological inputs. If C denotes the set of critical points in M, the running time is roughly O(sum_{v in C} log l_v), where l_v is the depth of v in the contour tree. This matches all existing upper bounds, but is a significant asymptotic improvement when the contour tree is short and fat. Specifically, our approach ensures that any comparison made is between nodes that are either adjacent in M or in the same descending path in the contour tree, allowing us to argue strong optimality properties of our algorithm. Our algorithm requires several novel ideas: partitioning M in well-behaved portions, a local growing procedure to iteratively build contour trees, and the use of heavy path decompositions for the time complexity analysis.
Applications of Incidence Bounds in Point Covering Problems [DOI] [+]
Ingo van Duijn, Edvin Berglin, Peyman Afshani and Jesper Sindahl Nielsen
In the Line Cover problem a set of n points is given and the task is to cover the points using either the minimum number of lines or at most k lines. In Curve Cover, a generalization of Line Cover, the task is to cover the points using curves with d degrees of freedom. Another generalization is the Hyperplane Cover problem where points in d-dimensional space are to be covered by hyperplanes. All these problems have kernels of polynomial size, where the parameter is the minimum number of lines, curves, or hyperplanes needed. First we give a non-parameterized algorithm for both problems in O*(2^n) (where the O*(.) notation hides polynomial factors of n) time and polynomial space, beating a previous exponential-space result. Combining this with incidence bounds similar to the famous Szemeredi-Trotter bound, we present a Curve Cover algorithm with running time O*((Ck/log k)^((d-1)k)), where C is some constant. Our result improves the previous best times O*((k/1.35)^k) for Line Cover (where d=2), O*(k^(dk)) for general Curve Cover, as well as a few other bounds for covering points by parabolas or conics. We also present an algorithm for Hyperplane Cover in R^3 with running time O*((Ck^2/log^(1/5) k)^k), improving on the previous time of O*((k^2/1.3)^k).
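To make the parameterized Line Cover problem concrete, here is a simple exponential-time branching solver; it is unrelated to the incidence-bound techniques of the paper. The branching rule: some solution line through the first point either covers it alone or also passes through a second point, and branching over that choice is sound and complete. The example grid is a placeholder.

    def collinear(p, q, r):
        # True iff the three points lie on a common line (cross product test).
        return (q[0]-p[0])*(r[1]-p[1]) == (q[1]-p[1])*(r[0]-p[0])

    def line_cover(points, k):
        # Decide whether `points` can be covered by at most k lines.
        points = list(points)
        if not points:
            return True
        if k == 0:
            return False
        p, rest = points[0], points[1:]
        if line_cover(rest, k - 1):          # a line spent on p alone
            return True
        for q in rest:                        # the line through p and q
            remaining = [r for r in rest if not collinear(p, q, r)]
            if line_cover(remaining, k - 1):
                return True
        return False

    grid = [(x, y) for x in range(3) for y in range(3)]
    print(line_cover(grid, 3), line_cover(grid, 2))   # True False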
9:40 Finding Non-orientable Surfaces in 3-manifolds [DOI] [+]
Benjamin A. Burton, Arnaud De Mesmay and Uli Wagner
We investigate the complexity of finding an embedded non-orientable surface of Euler genus g in a triangulated 3-manifold. This problem occurs both as a natural question in low-dimensional topology, and as a first non-trivial instance of embeddability of complexes into 3-manifolds. We prove that the problem is NP-hard, thus adding to the relatively few hardness results that are currently known in 3-manifold topology. In addition, we show that the problem lies in NP when the Euler genus g is odd, and we give an explicit algorithm in this case.
On the Number of Maximum Empty Boxes Amidst n Points [DOI] [+]
Adrian Dumitrescu and Minghui Jiang
We revisit the following problem (along with its higher dimensional variant): Given a set S of n points inside an axis-parallel rectangle U in the plane, find a maximum-area axis-parallel sub-rectangle that is contained in U but contains no points of S. (I) We prove that the number of maximum-area empty rectangles amidst n points in the plane is O(n log n 2^{alpha(n)}), where alpha(n) is the extremely slowly growing inverse of Ackermann's function. The previous best bound, O(n^2), is due to Naamad, Lee, and Hsu (1984). (II) For any d at least 3, we prove that the number of maximum-volume empty boxes amidst n points in R^d is always O(n^d) and sometimes Omega(n^{floor(d/2)}). This is the first superlinear lower bound derived for this problem. (III) We discuss some algorithmic aspects regarding the search for a maximum empty box in R^3. In particular, we present an algorithm that finds a (1-epsilon)-approximation of the maximum empty box amidst n points in O(epsilon^{-2} n^{5/3} log^2 n) time.
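A small brute-force reference for the planar problem, with U taken to be the unit square and under the convention that points on a rectangle's boundary do not count as contained: every maximal empty rectangle has each vertical side on the boundary of U or passing through a point, so it suffices to enumerate slabs and take the largest vertical gap inside each. This is only an illustration; it is unrelated to the counting bounds and the approximation algorithm above.

    def max_empty_rectangle(points):
        # Maximum area of an axis-parallel empty rectangle inside [0,1]^2.
        xs = sorted({0.0, 1.0} | {p[0] for p in points})
        best = 0.0
        for i, left in enumerate(xs):
            for right in xs[i + 1:]:
                # points strictly inside the slab block vertical extension
                ys = sorted([0.0, 1.0] + [p[1] for p in points if left < p[0] < right])
                gap = max(b - a for a, b in zip(ys, ys[1:]))
                best = max(best, (right - left) * gap)
        return best

    print(max_empty_rectangle([(0.5, 0.5)]))   # -> 0.5 (e.g. the left half of the square)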
10:00 Efficient Algorithms to Decide Tightness [DOI] [+]
Bhaskar Bagchi, Benjamin A. Burton, Basudeb Datta, Nitin Singh and Jonathan Spreer
Tightness is a generalisation of the notion of convexity: a space is tight if and only if it is "as convex as possible", given its topological constraints. For a simplicial complex, deciding tightness has a straightforward exponential time algorithm, but more efficient methods to decide tightness are only known in the trivial setting of triangulated surfaces. In this article, we present a new polynomial time procedure to decide tightness for triangulations of 3-manifolds -- a problem which previously was thought to be hard. In addition, for the more difficult problem of deciding tightness of 4-dimensional combinatorial manifolds, we describe an algorithm that is fixed parameter tractable in the treewidth of the 1-skeletons of the vertex links. Finally, we show that simpler treewidth parameters are not viable: for all non-trivial inputs, we show that the treewidths of both the 1-skeleton and the dual graph must grow too quickly for a standard treewidth-based algorithm to remain tractable.
Coloring Points with Respect to Squares [DOI] [+]
Eyal Ackerman, Balázs Keszegh and Mate Vizer
We consider the problem of 2-coloring geometric hypergraphs. Specifically, we show that there is a constant m such that any finite set S of points in the plane can be 2-colored such that every axis-parallel square that contains at least m points from S contains points of both colors. Our proof is constructive, that is, it provides a polynomial-time algorithm for obtaining such a 2-coloring. By affine transformations this result immediately applies also when considering homothets of a fixed parallelogram.
10:20 On Expansion and Topological Overlap [DOI] [+]
Dominic Dotterrer, Tali Kaufman and Uli Wagner
We give a detailed and easily accessible proof of Gromov's Topological Overlap Theorem. Let X be a finite simplicial complex or, more generally, a finite polyhedral cell complex of dimension d. Informally, the theorem states that if X has sufficiently strong higher-dimensional expansion properties (which generalize edge expansion of graphs and are defined in terms of cellular cochains of X) then X has the following topological overlap property: for every continuous map X -> R^d there exists a point p in R^d whose preimage intersects a positive fraction mu > 0 of the d-cells of X. More generally, the conclusion holds if R^d is replaced by any d-dimensional piecewise-linear (PL) manifold M, with a constant mu that depends only on d and on the expansion properties of X, but not on M.
Anchored Rectangle and Square Packings [DOI] [+]
Kevin Balas, Adrian Dumitrescu and Csaba Toth
For points p1,...,pn in the unit square [0,1]^2, an anchored rectangle packing consists of interior-disjoint axis-aligned empty rectangles r1,...,rn in [0,1]^2 such that point pi is a corner of the rectangle ri (that is, ri is anchored at pi) for i=1,...,n. We show that for every set of n points in [0,1]^2, there is an anchored rectangle packing of area at least 7/12-O(1/n), and for every n, there are point sets for which the area of every anchored rectangle packing is at most 2/3. The maximum area of an anchored square packing is always at least 5/32 and sometimes at most 7/27. The above constructive lower bounds immediately yield constant-factor approximations, of 7/12 -epsilon for rectangles and 5/32 for squares, for computing anchored packings of maximum area in O(n log n) time. We prove that a simple greedy strategy achieves a 9/47-approximation for anchored square packings, and 1/3 for lower-left anchored square packings. Reductions to maximum weight independent set (MWIS) yield a QPTAS and a PTAS for anchored rectangle and square packings in n^{O(1/epsilon)} and exp(poly(log (n/epsilon))) time, respectively.
Chair: Éric Colin de Verdière Session 6B
Chair: Takeshi Tokuyama
11:10 All-Pairs Minimum Cuts in Near-Linear Time for Surface-Embedded Graphs [DOI] [+]
Glencora Borradaile, David Eppstein, Christian Wulff-Nilsen and Amir Nayyeri
For an undirected n-vertex graph G with non-negative edge-weights, we consider the following type of query: given two vertices s and t in G, what is the weight of a minimum st-cut in G? We solve this problem in preprocessing time O(n log^3 n) for graphs of bounded genus, giving the first sub-quadratic time algorithm for this class of graphs. Our result also improves by a logarithmic factor a previous algorithm by Borradaile, Sankowski and Wulff-Nilsen (FOCS 2010) that applied only to planar graphs. Our algorithm constructs a Gomory-Hu tree for the given graph, providing a data structure with space O(n) that can answer minimum-cut queries in constant time. The dependence on the genus of the input graph in our preprocessing time is 2^{O(g^2)}.
Approximating Convex Shapes with Respect to Symmetric Difference Under Homotheties [DOI] [+]
Juyoung Yon, Sang Won Bae, Siu-Wing Cheng, Otfried Cheong and Bryan Wilkinson
The symmetric difference is a robust operator for measuring the error of approximating one shape by another. Given two convex shapes P and C, we study the problem of minimizing the volume of their symmetric difference under all possible scalings and translations of C. We prove that the problem can be solved by convex programming. We also present a combinatorial algorithm for convex polygons in the plane that runs in O((m+n) log^3(m+n)) expected time, where n and m denote the number of vertices of P and C, respectively.
11:30 Shortest Path Embeddings of Graphs on Surfaces [DOI] [+]
Alfredo Hubard, Vojtěch Kaluža, Arnaud de Mesmay and Martin Tancer
The classical theorem of Fáry states that every planar graph can be represented by an embedding in which every edge is represented by a straight line segment. We consider generalizations of Fáry's theorem to surfaces equipped with Riemannian metrics. In this setting, we require that every edge is drawn as a shortest path between its two endpoints and we call an embedding with this property a shortest path embedding. The main question addressed in this paper is whether given a closed surface S, there exists a Riemannian metric for which every topologically embeddable graph admits a shortest path embedding. This question is also motivated by various problems regarding crossing numbers on surfaces. We observe that the round metrics on the sphere and the projective plane have this property. We provide flat metrics on the torus and the Klein bottle which also have this property. Then we show that for the unit square flat metric on the Klein bottle there exists a graph without shortest path embeddings. We show, moreover, that for large g, there exist graphs G embeddable into the orientable surface of genus g, such that with large probability a random hyperbolic metric does not admit a shortest path embedding of G, where the probability measure is proportional to the Weil-Petersson volume on moduli space. Finally, we construct a hyperbolic metric on every orientable surface S of genus g, such that every graph embeddable into S can be embedded so that every edge is a concatenation of at most O(g) shortest paths.
Random Sampling with Removal [DOI] [+]
Bernd Gärtner, Johannes Lengler and May Szedlak
Random sampling is a classical tool in constrained optimization. Under favorable conditions, the optimal solution subject to a small subset of randomly chosen constraints violates only a small subset of the remaining constraints. Here we study the following variant that we call random sampling with removal: suppose that after sampling the subset, we remove a fixed number of constraints from the sample, according to an arbitrary rule. Is it still true that the optimal solution of the reduced sample violates only a small subset of the constraints? The question naturally comes up in situations where the solution subject to the sampled constraints is used as an approximate solution to the original problem. In this case, it makes sense to improve cost and volatility of the sample solution by removing some of the constraints that appear most restricting. At the same time, the approximation quality (measured in terms of violated constraints) should remain high. We study random sampling with removal in a generalized, completely abstract setting where we assign to each subset R of the constraints an arbitrary set V(R) of constraints disjoint from R; in applications, V(R) corresponds to the constraints violated by the optimal solution subject to only the constraints in R. Furthermore, our results are parametrized by the dimension d, i.e., we assume that every set R has a subset B of size at most d with the same set of violated constraints. This is the first time this generalized setting is studied. In this setting, we prove matching upper and lower bounds for the expected number of constraints violated by a random sample, after the removal of k elements. For a large range of values of k, the new upper bounds improve the previously best bounds for LP-type problems, which moreover had only been known in special cases. We show that this bound on special LP-type problems, can be derived in the much more general setting of violator spaces, and with very elementary proofs.
11:50 Minimum Cycle and Homology Bases of Surface Embedded Graphs [DOI] [+]
Glencora Borradaile, Erin Wolf Chambers, Kyle Fox and Amir Nayyeri
We study the problems of finding a minimum cycle basis (a minimum weight set of cycles that form a basis for the cycle space) and a minimum homology basis (a minimum weight set of cycles that generates the 1-dimensional Z_2-homology classes) of an undirected graph embedded on an orientable surface of genus g. The problems are closely related, because the minimum cycle basis of a graph contains its minimum homology basis, and the minimum homology basis of the 1-skeleton of any graph is exactly its minimum cycle basis. For the minimum cycle basis problem, we give a deterministic O(n^omega + 2^{2g} n^2)-time algorithm. The best known existing algorithms for surface embedded graphs are those for general sparse graphs: an O(n^omega) time Monte Carlo algorithm [Amaldi et. al., ESA '09] and a deterministic O(n^3) time algorithm [Mehlhorn and Michail, TALG '09]. For the minimum homology basis problem, we give an O(g^3 n log n)-time algorithm, improving on existing algorithms for many values of g and n.
Max-sum Diversity via Convex Programming
Alfonso Cevallos, Friedrich Eisenbrand and Rico Zenklusen
Diversity maximization is an important concept in information retrieval, computational geometry and operations research. Usually, it is a variant of the following problem: Given a ground set, constraints, and a function f that measures diversity of a subset, the task is to select a feasible subset S such that f(S) is maximized. The sum-dispersion function f(S) which is the sum of the pairwise distances in S, is in this context a prominent diversification measure. The corresponding diversity maximization is the "max-sum" or "sum-sum" diversification. Many recent results deal with the design of constant-factor approximation algorithms of diversification problems involving sum-dispersion function under a matroid constraint. In this paper, we present a PTAS for the max-sum diversity problem under a matroid constraint for distances d(.,.) of negative type. Distances of negative type are, for example, metric distances stemming from the L2 and L1 norms, as well as the cosine or spherical, or Jaccard distance which are popular similarity metrics in web and image search. Our algorithm is based on techniques developed in geometric algorithms like metric embeddings and convex optimization. We show that one can compute a fractional solution of the usually non-convex relaxation of the problem which yields an upper bound on the optimum integer solution. Starting from this fractional solution, we employ a deterministic rounding approach which only incurs a small loss in terms of objective, thus leading to a PTAS. This technique can be applied to other previously studied variants of the max-sum dispersion function, including combinations of diversity with linear-score maximization, improving over the previous constant-factor approximation algorithms.
12:10 Untangling Planar Curves [DOI] [+]
Hsien-Chih Chang and Jeff Erickson
Any generic closed curve in the plane can be transformed into a simple closed curve by a finite sequence of local transformations called homotopy moves. We prove that simplifying a planar closed curve with n self-crossings requires Theta(n^{3/2}) homotopy moves in the worst case. Our algorithm improves the best previous upper bound O(n^2), which is already implicit in the classical work of Steinitz; the matching lower bound follows from the construction of closed curves with large *defect*, a topological invariant of generic closed curves introduced by Aicardi and Arnold. This lower bound also implies that Omega(n^{3/2}) degree-1 reductions, series-parallel reductions, and Delta-Y transformations are required to reduce any planar graph with treewidth Omega(sqrt{n}) to a single edge, matching known upper bounds for rectangular and cylindrical grid graphs. Finally, we prove that Omega(n^2) homotopy moves are required in the worst case to transform one non-contractible closed curve on the torus to another; this lower bound is tight if the curve is homotopic to a simple closed curve.
Finding Global Optimum for Truth Discovery: Entropy Based Geometric Variance [DOI] [+]
Hu Ding, Jing Gao and Jinhui Xu
Truth Discovery is an important problem arising in data analytics related fields such as data mining, database, and big data. It concerns finding the most trustworthy information from a dataset acquired from a number of unreliable sources. Due to its importance, the problem has been extensively studied in recent years and a number of techniques have already been proposed. However, all of them are of heuristic nature and do not have any quality guarantee. In this paper, we formulate the problem as a high dimensional geometric optimization problem, called Entropy based Geometric Variance. Relying on a number of novel geometric techniques (such as Log-Partition and Modified Simplex Lemma), we further discover new insights to this problem. We show, for the first time, that the truth discovery problem can be solved with guaranteed quality of solution. Particularly, we show that it is possible to achieve a (1+eps)-approximation within nearly linear time under some reasonable assumptions. We expect that our algorithm will be useful for other data related applications.
12:25 YRF Preview (for Thursday's talks)
14:00 Lunch
5th Mini Symposium on Computational Topology,
Part II [Web] [+]
Applications and algorithms for computational topology. This part focuses on practical problems in which topology is used in (large scale) engineering and life sciences computations. This includes granular and material science, fluid dynamics, electromagnetism, neurobiology and surrounding areas. We will also present recent algorithmic developments and applications of computational topology. C: Workshop 4:
Tutorials on Ricci flow and optimal transportation [Web] [+]
Organizers: David Xianfeng Gu and Na Lei
This tutorial has two parts.
Part I is on (surface) Ricci flow. Surface Ricci flow gives the solution to a highly non-linear metric PDE: Yamabe's equation. The computational algorithms are based on convex optimization, and the theoretical framework has been applied in the graphics, vision, geometric modeling, networking, and medical imaging fields.
Part II is on optimal transportation. The optimal mass transportation map transforms one probability measure into another in the most economical way. This tutorial introduces a variational framework to solve the problem. The algorithm is closely related to upper envelopes, power Voronoi diagrams, and power Delaunay triangulations in classical computational geometry.
14:00 -- On Minimum Area Homotopies [+]
Brittany Fasy, Selcuk Karakoc and Carola Wenk
In this work, we will define the minimum homotopy area for closed curves and we will give a method to calculate it. The minimum homotopy area between two simple curves was defined by E. Chambers and Y. Wang, who gave an algorithm to compute it. This work is a generalization of theirs. Our method first calculates the area for a class of curves, namely self-overlapping curves, and then breaks a general curve into self-overlapping sub-curves. We will also give some properties of a minimum homotopy, one that minimizes the area.
14:15 -- Homology Localization by Hierarchical Blowups [+]
Ahmed Abdelrazek
Topological descriptors such as the generators of homology groups are very useful in the analysis of complex data sets. It is often desired to find the smallest such generators to help localize the interesting features. One interpretation of localization utilizes a covering of the underlying space and computes generators contained within these covers. A similar construction was later used to compute persistence homology for smaller subsets in parallel before gluing the results. In this presentation, we describe a more efficient version of this construction and discuss how it can be used to find generators within a large class of subspaces.
14:30 -- Certified Homology Inference [+]
Nicholas Cavanna, Kirk Gardner and Donald Sheehy
The goal of homology inference is to compute the shape of a space from a finite point set sampled near it. Given such a sample, one may want to know when we can reliably infer the homology of the space in question. Naturally, this requires making assumptions on the sample as well as the underlying space. Niyogi, Smale, and Weinberger showed that one can infer the homology of a smooth manifold from finite points chosen uniformly at random. Chazal and Lieutier relaxed this to include non-smooth bounded spaces in R^d via the so-called weak feature size. Both assume good samples in the sense that there must be a sampled point within epsilon of every point in the space. Vin de Silva and Robert Ghrist showed how to certify if a point set contains a good sample of a shrunken version of a space assuming one can compute the distance from a point to the boundary. We show when and how these approaches can be combined in order to provide a computable inference of the homology of the domain. We do so on more general spaces in R^d, only assuming a lower bound on the weak feature size of a compact, locally contractible domain, and that we can compute the distance to the boundary.
14:45 -- Generalized Coverage in Homological Sensor Networks [+]
Kirk Gardner, Nicholas Cavanna and Don Sheehy
In their seminal work on homological sensor networks, de Silva and Ghrist showed the surprising fact that it's possible to certify the coverage of a coordinate-free sensor network even with very minimal knowledge of the space to be covered [2]. We give a new, simpler proof of the de Silva-Ghrist Topological Coverage Criterion (TCC) that eliminates any assumptions about the smoothness of the boundary of the underlying space, allowing the results to be applied to much more general problems. The new proof factors the geometric, topological, and combinatorial aspects of this approach. This allows us to extend the TCC to support k-coverage, in which the domain is covered by k sensors, and weighted coverage, in which sensors have varying sensing radii.
15:00 -- Local Structures for Approximating Rips Filtrations [+]
(Vietoris-)Rips filtrations are important structures used to infer topological properties of metric spaces. Unfortunately, they pose a significant computational challenge, particularly when the data has high dimension. We present a new technique to $O(\sqrt{d})$-approximate the topological information carried by the Rips filtration, with at most $n2^{O(d\log d)}$ $d$-simplices per scale in the filtration.
15:15 -- Homology Preserving Simplification for Top-based Representations [+]
Federico Iuricich, Riccardo Fellegara and Leila De Floriani
Topological features provide global quantitative and qualitative information about a shape, such as the number of connected components, and the number of holes and tunnels. This information is especially important in high-dimensional data analysis, where pure geometric tools are usually not sufficient. When dealing with simplicial homology, the size of the simplicial complex is the major concern, since all the algorithms available in the literature are mainly affected by the number of simplices of the complex. The classical approach for computing the homology of a simplicial complex is based on the Smith Normal Form (SNF) reduction applied to the boundary matrices describing the boundary maps. This method is theoretically valid in any dimension, but it has intrinsic limitations linked to the size of matrices and to the high complexity of reduction algorithms. Other methods have been proposed for improving the performance of SNF reduction, but the only scalable approach still remains the simplification of the original complex in order to reduce its size without changing its homology. Edge contraction has long been a tool of choice for simplifying simplicial complexes. It has been used in computer graphics and visualization and more recently in topological data analysis for reducing the size of higher dimensional simplicial complexes. Edge contraction on its own does not preserve the homological information, but a check, called the link condition, has been developed for verifying whether the contraction of an edge preserves homology or not. In our work, we consider the efficient definition of a simplification algorithm that combines an efficient top-based representation for a simplicial complex with an edge contraction simplification procedure. Our contribution is: (i) the definition of a dimension-independent edge contraction and of a link condition for top-based representations; (ii) the implementation of an efficient algorithm for computing and simplifying a d-simplicial complex on a specific top-based representation, the Stellar tree, and (iii) an experimental comparison with respect to the state-of-the-art data structure for performing edge contractions, the Skeleton Blockers.
14:00 - 14:20
Tamal Dey
SimBa: A Tool for Approximating the Persistence of Rips Filtrations Efficiently
Matthew Wright
Visualizing Multidimensional Persistent Homology
Jisu Kim
R Package TDA for Statistical Inference on Topological Data Analysis
14:00 - 14:45
David Gu
Theories of Discrete Ricci Flow
Jie Gao
Applications of Discrete Ricci Flow
Part II C: Workshop 4 (continued)
Tutorials on Ricci Flow and Optimal Transportation
16:00 -- Classification of Normal Curves on a Tetrahedron [+]
Clément Maria and Jonathan Spreer
In this article, we give a combinatorial classification of all normal curves drawn on the boundary of a tetrahedron. We characterise normal curves in terms of intersection numbers with the edges of the tetrahedron.
16:15 -- Barycentric Coordinate Neighbourhoods in Riemannian Manifolds [+]
Ramsay Dyer, Gert Vegter and Mathijs Wintraecken
We quantify conditions that ensure that a signed measure on a Riemannian manifold has a well defined centre of mass. We then use this result to quantify the extent of a neighbourhood on which the Riemannian barycentric coordinates of a set of $n+1$ points on an $n$-manifold provide a true coordinate chart, i.e., the barycentric coordinates provide a diffeomorphism between a neighbourhood of a Euclidean simplex, and a neighbourhood containing the points on the manifold.
16:30 -- Spectral Properties of Distance Matrices of High Dimensional Mixtures [+]
Jean-Daniel Boissonnat, David Cohen-Steiner and Alba Chiara De Vitis
We use spectral analysis of the distance matrices of high dimensional mixtures to learn a mixture of distributions. Our approach focuses on high-dimensional mixtures and uses concentration of measure. It applies to any distribution with concentration properties.
16:45 -- Towards the Analysis of Multivariate Data Based on Discrete Morse Theory [+]
Sara Scaramuccia, Federico Iuricich, Claudia Landi and Leila De Floriani
We propose a new algorithm for computing a Forman gradient on a CW-complex on which a vector-valued function is defined. We prove that our algorithm is equivalent to the state-of-the-art algorithms, i.e., it retrieves a Forman gradient compatible with the multidimensional persistent homology induced by the vector-valued function, being at the same time faster and more compact.
17:00 -- Transforming Hierarchical Trees on Metric Spaces [+]
Mahmoodreza Jahanseir and Donald Sheehy
We show how a simple metric hierarchical tree called a cover tree transforms into a more complex one called a net-tree in linear time. We also propose two linear time algorithms to make a trade-off between depth and the degree of nodes in cover trees.
17:15 -- Almost All Even Yao-Yao Graphs Are Spanners [+]
Wei Zhan and Jian Li
In this abstract we show that, for any integer $k\geq 42$, the Yao-Yao graph $YY_{2k}$ is a $t_k$-spanner, with stretch factor $t_k=4.27+O(k^{-1})$ when $k$ tends to infinity. Our result generalizes the best known result, which asserts that all $YY_{6k}$ are spanners for $k$ large enough [Bauer and Damian, SODA'13].
16:00 - 16:20
Yongjin Lee
Topological Data Analysis of Nanoporous Materials Genome Using Pore-Geometry Recognition Technique
Ellen Gasparovic
Multi-Scale Modeling for Stratified Space Data
Pablo Camara
Topological Methods for Molecular Phylogenetics
16:00 - 16:45
Feng Luo
Theory of Optimal Mass Transportation
Na Lei
Applications of Optimal Mass Transportation
19:00 Business meeting
Friday June 17
Chair: Wolfgang Mulzer Session 7B
Chair: Jeff Phillips
9:20 Dimension Reduction Techniques for Lp (1<p<2), with Applications [DOI] [+]
Yair Bartal and Lee-Ad Gottlieb
For Euclidean space (L2), there exists the powerful dimension reduction transform of Johnson and Lindenstrauss [Conf. in modern analysis and probability, AMS 1984], with a host of known applications. Here, we consider the problem of dimension reduction for all Lp spaces 1<p<2. Although strong lower bounds are known for dimension reduction in L1, Ostrovsky and Rabani [JACM 2002] successfully circumvented these by presenting an L1 embedding that maintains fidelity in only a bounded distance range, with applications to clustering and nearest neighbor search. However, their embedding techniques are specific to L1 and do not naturally extend to other norms. In this paper, we apply a range of advanced techniques and produce bounded range dimension reduction embeddings for all of 1<p<2, thereby demonstrating that the approach initiated by Ostrovsky and Rabani for L1 can be extended to a much more general framework. We also obtain improved bounds in terms of the intrinsic dimensionality. As a result we achieve improved bounds for proximity problems including snowflake embeddings and clustering. On the Complexity of Minimum-Link Path Problems [DOI] [+]
Irina Kostitsyna, Maarten Löffler, Valentin Polishchuk and Frank Staals
We revisit the minimum-link path problem: Given a polyhedral domain and two points in it, connect the points by a polygonal path with minimum number of edges. We consider settings where the min-link path's vertices or edges can be restricted to lie on the boundary of the domain, or can be in its interior. Our results include bit complexity bounds, a novel general hardness construction, and a polynomial-time approximation scheme. We fully characterize the situation in 2D, and provide first results in dimensions 3 and higher for several versions of the problem. Concretely, our results resolve several open problems. We prove that computing the minimum-link diffuse reflection path, motivated by ray tracing in computer graphics, is NP-hard, even for two-dimensional polygonal domains with holes. This has remained an open problem [Ghosh et al. 2012] despite a large body of work on the topic. We also resolve the open problem from [Mitchell et al. 1992] mentioned in the handbook [Goodman and O'Rourke, 2004] (see Chapter 27.5, Open problem 3) and The Open Problems Project [Demaine et al. TOPP] (see Problem 22): "What is the complexity of the minimum-link path problem in 3-space?" Our results imply that the problem is NP-hard even on terrains (and hence, due to discreteness of the answer, there is no FPTAS unless P=NP), but admits a PTAS.
9:40 Simultaneous Nearest Neighbor Search [DOI] [+]
Piotr Indyk, Robert Kleinberg, Sepideh Mahabadi and Yang Yuan
Motivated by applications in computer vision and databases, we introduce and study the Simultaneous Nearest Neighbor Search (SNN) problem. Given a set of data points, the goal of SNN is to design a data structure that, given a *collection* of queries, finds a *collection* of close points that are *compatible* with each other. Formally, we are given k query points Q=q1,...,qk, and a compatibility graph G with vertices in Q, and the goal is to return data points p1,...,pk that minimize (i) the weighted sum of the distances from qi to pi and (ii) the weighted sum, over all edges (i,j) in the compatibility graph G, of the distances between pi and pj. The problem has several applications in computer vision and databases, where one wants to return a set of *consistent* answers to multiple related queries. Furthermore, it generalizes several well-studied computational problems, including Nearest Neighbor Search, Aggregate Nearest Neighbor Search and the 0-extension problem. In this paper we propose and analyze the following general two-step method for designing efficient data structures for SNN. In the first step, for each query point qi we find its (approximate) nearest neighbor point p'i; this can be done efficiently using existing approximate nearest neighbor structures. In the second step, we solve an off-line optimization problem over sets q1,...,qk and p'1,...,p'k; this can be done efficiently given that k is much smaller than n. Even though p'1,...,p'k might not constitute the optimal answers to queries q1,...,qk, we show that, for the unweighted case, the resulting algorithm satisfies a O(log k/log log k)-approximation guarantee. Furthermore, we show that the approximation factor can be in fact reduced to a constant for compatibility graphs frequently occurring in practice, e.g., 2D grids, 3D grids or planar graphs. Finally, we validate our theoretical results by preliminary experiments. In particular, we show that the *empirical approximation factor* provided by the above approach is very close to 1. Recognizing Weakly Simple Polygons [DOI] [+]
Hugo Akitaya, Greg Aloupis, Jeff Erickson and Csaba Toth
We present an O(n log n)-time algorithm that determines whether a given planar n-gon is weakly simple. This improves upon an O(n^2 log n)-time algorithm by [Chang, Erickson, and Xu, SODA, 2015]. Weakly simple polygons are required as input for several geometric algorithms. As such, how to recognize simple or weakly simple polygons is a fundamental question.
10:00 Two Approaches to Building Time-Windowed Geometric Data Structures [DOI] [+]
Timothy M. Chan and Simon Pratt
Given a set of geometric objects each associated with a time value, we wish to determine whether a given property is true for a subset of those objects whose time values fall within a query time window. We call such problems time-windowed decision problems, and they have been the subject of much recent attention, for instance studied by Bokal, Cabello, and Eppstein [SoCG 2015]. In this paper, we present new approaches to this class of problems that are conceptually simpler than Bokal et al.'s, and also lead to faster algorithms. For instance, we present algorithms for preprocessing for the time-windowed 2D diameter decision problem in O(n log n) time and the time-windowed 2D convex hull area decision problem in O(n alpha(n) log n) time (where alpha is the inverse Ackermann function), improving Bokal et al.'s O(n log2 n) and O(n log n loglog n) solutions respectively. Our first approach is to reduce time-windowed decision problems to a generalized range successor problem, which we solve using a novel way to search range trees. Our other approach is to use dynamic data structures directly, taking advantage of a new observation that the total number of combinatorial changes to a planar convex hull is near linear for any FIFO update sequence, in which deletions occur in the same order as insertions. We also apply these approaches to obtain the first O(n polylog n) algorithms for the time-windowed 3D diameter decision and 2D orthogonal segment intersection detection problems. Subexponential Algorithms for Rectilinear Steiner Tree and Arborescence Problems [DOI] [+]
Fedor Fomin, Sudeshna Kolay, Daniel Lokshtanov, Fahad Panolan and Saket Saurabh
A rectilinear Steiner tree for a set T of points in the plane is a tree which connects T using horizontal and vertical lines. In the Rectilinear Steiner Tree problem, input is a set T of n points in the Euclidean plane (R2) and the goal is to find an rectilinear Steiner tree for T of smallest possible total length. A rectilinear Steiner arborescence for a set T of points and root r in T is a rectilinear Steiner tree S for T such that the path in S from r to any point t in T is a shortest path. In the Rectilinear Steiner Arborescence problem the input is a set T of n points in R2, and a root r in T, the task is to find an rectilinear Steiner arborescence for T, rooted at r of smallest possible total length. In this paper, we give the first subexponential time algorithms for both problems. Our algorithms are deterministic and run in 2^{O(sqrt{n}log n)} time.
10:20 Faster Algorithms for Computing Plurality Points [DOI] [+]
Mark de Berg, Joachim Gudmundsson and Mehran Mehr
Let V be a set of n points in R^d, which we call voters, where d is a fixed constant. A point p in R^d is preferred over another point p' in R^d by a voter v in V if dist(v,p) < dist(v,p'). A point p is called a plurality point if it is preferred by at least as many voters as any other point p'. We present an algorithm that decides in O(n log n) time whether V admits a plurality point in the L2 norm and, if so, finds the (unique) plurality point. We also give efficient algorithms to compute the smallest subset W of V such that V - W admits a plurality point, and to compute a so-called minimum-radius plurality ball. Finally, we consider the problem in the personalized L1 norm, where each point v in V has a preference vector (w_1(v), ..., w_d(v)) and the distance from v to any point p in R^d is given by sum_{i=1}^d w_i(v) · |x_i(v) - x_i(p)|. For this case we can compute in O(n^{d-1}) time the set of all plurality points of V. When all preference vectors are equal, the running time improves to O(n).
A Lower Bound on Opaque Sets [DOI] [+]
Akitoshi Kawamura, Sonoko Moriyama, Yota Otachi and Janos Pach
It is proved that the total length of any set of countably many rectifiable curves, whose union meets all straight lines that intersect the unit square U, is at least 2.00002. This is the first improvement on the lower bound of 2 by Jones in 1964. A similar bound is proved for all convex sets U other than a triangle.
10:40 Tight Lower Bounds for Data-Dependent Locality-Sensitive Hashing [DOI] [+]
Alexandr Andoni and Ilya Razenshteyn
We prove a tight lower bound for the exponent rho for data-dependent Locality-Sensitive Hashing schemes, recently used to design efficient solutions for the c-approximate nearest neighbor search. In particular, our lower bound matches the bound of rho<= 1/(2c-1)+o(1) for the L1 space, obtained via the recent algorithm from [Andoni-Razenshteyn, STOC'15]. In recent years it emerged that data-dependent hashing is strictly superior to the classical Locality-Sensitive Hashing, when the hash function is data-independent. In the latter setting, the best exponent has been already known: for the L1 space, the tight bound is rho=1/c, with the upper bound from [Indyk-Motwani, STOC'98] and the matching lower bound from [O'Donnell-Wu-Zhou, ITCS'11]. We prove that, even if the hashing is data-dependent, it must hold that rho>=1/(2c-1)-o(1). To prove the result, we need to formalize the exact notion of data-dependent hashing that also captures the complexity of the hash functions (in addition to their collision properties). Without restricting such complexity, we would allow for obviously infeasible solutions such as the Voronoi diagram of a dataset. To preclude such solutions, we require our hash functions to be succinct. This condition is satisfied by all the known algorithmic results. Finding the Maximum Subset with Bounded Convex Curvature [DOI] [+]
Mikkel Abrahamsen and Mikkel Thorup
We describe an algorithm for solving an important geometric problem arising in computer-aided manufacturing. When machining a pocket in a solid piece of material such as steel using a rough tool in a milling machine, sharp convex corners of the pocket cannot be done properly, but have to be left for finer tools that are more expensive to use. We want to determine a tool path that maximizes the use of the rough tool. Mathematically, this boils down to the following problem. Given a simply-connected set of points P in the plane such that the boundary of P is a curvilinear polygon consisting of n line segments and circular arcs of arbitrary radii, compute the maximum subset Q of P consisting of simply-connected sets where the boundary of each set is a curve with bounded convex curvature. A closed curve has bounded convex curvature if, when traversed in counterclockwise direction, it turns to the left with curvature at most 1. There is no bound on the curvature where it turns to the right. The difference in the requirement to left- and right-curvature is a natural consequence of different conditions when machining convex and concave areas of the pocket. We devise an algorithm to compute the unique maximum such set Q. The algorithm runs in O(n log n) time and uses O(n) space. For the correctness of our algorithm, we prove a new generalization of the Pestov-Ionin Theorem. This is needed to show that the output Q of our algorithm is indeed maximum in the sense that if Q' is any subset of P with a boundary of bounded convex curvature, then Q' is a subset of Q.
Chair: Micha Sharir Session 8B
Chair: Nina Amenta
11:30 Peeling and Nibbling the Cactus: Subexponential-Time Algorithms for Counting Triangulations and Related Problems [DOI] [+]
Dániel Marx and Tillmann Miltzow
Given a set of n points S in the plane, a triangulation T of S is a maximal set of non-crossing segments with endpoints in S. We present an algorithm that computes the number of triangulations on a given set of n points in time n^{ (11+ o(1)) sqrt{n} }, significantly improving the previous best running time of O(2^n n^2) by Alvarez and Seidel [SoCG 2013]. Our main tool is identifying separators of size O(sqrt{n}) of a triangulation in a canonical way. The definition of the separators is based on the decomposition of the triangulation into nested layers ("cactus graphs"). Based on the above algorithm, we develop a simple and formal framework to count other non-crossing straight-line graphs in n^{O(sqrt{n})} time. We demonstrate the usefulness of the framework by applying it to counting non-crossing Hamilton cycles, spanning trees, perfect matchings, 3-colorable triangulations, connected graphs, cycle decompositions, quadrangulations, 3-regular graphs, and more.
On the Combinatorial Complexity of Approximating Polytopes [DOI] [+]
Sunil Arya, Guilherme D. da Fonseca and David Mount
Approximating convex bodies succinctly by convex polytopes is a fundamental problem in discrete geometry. A convex body K of diameter diam(K) is given in Euclidean d-dimensional space, where d is a constant. Given an error parameter eps > 0, the objective is to determine a polytope of minimum combinatorial complexity whose Hausdorff distance from K is at most eps diam(K). By combinatorial complexity we mean the total number of faces of all dimensions of the polytope. A well-known result by Dudley implies that O(1/eps^{(d-1)/2}) facets suffice, and a dual result by Bronshteyn and Ivanov similarly bounds the number of vertices, but neither result bounds the total combinatorial complexity. We show that there exists an approximating polytope whose total combinatorial complexity is O-tilde(1/eps^{(d-1)/2}), where O-tilde conceals a polylogarithmic factor in /eps. This is an improvement upon the best known bound, which is roughly O(1/eps^{d-2}). Our result is based on a novel combination of both new and old ideas. First, we employ Macbeath regions, a classical structure from the theory of convexity. The construction of our approximating polytope employs a new stratified placement of these regions. Second, in order to analyze the combinatorial complexity of the approximating polytope, we present a tight analysis of a width-based variant of Barany and Larman's economical cap covering, which may be of independent interest. Finally, we use a deterministic variation of the witness-collector technique (developed recently by Devillers et al.) in the context of our stratified construction.
11:50 An Improved Lower Bound on the Number of Triangulations [DOI] [+]
Oswin Aichholzer, Victor Alvarez, Thomas Hackl, Alexander Pilz, Bettina Speckmann and Birgit Vogtenhuber
Upper and lower bounds for the number of geometric graphs of specific types on a given set of points in the plane have been intensively studied in recent years. For most classes of geometric graphs it is now known that point sets in convex position minimize their number. However, it is still unclear which point sets minimize the number of geometric triangulations; the so-called double circles are conjectured to be the minimizing sets. In this paper we prove that any set of n points in general position in the plane has at least Omega(2.631^n) geometric triangulations. Our result improves the previously best general lower bound of Omega(2.43^n) and also covers the previously best lower bound of Omega(2.63^n) for a fixed number of extreme points. We achieve our bound by showing and combining several new results, which are of independent interest: (1) Adding a point on the second convex layer of a given point set (of 7 or more points) at least doubles the number of triangulations. (2) Generalized configurations of points that minimize the number of triangulations have at most n/2 points on their convex hull. (3) We provide tight lower bounds for the number of triangulations of point sets with up to 15 points. These bounds further support the double circle conjecture.
Dynamic Streaming Algorithms for Epsilon-Kernels [DOI] [+]
Timothy M. Chan
Introduced by Agarwal, Har-Peled, and Varadarajan [J. ACM, 2004], an epsilon-kernel of a point set is a coreset that can be used to approximate the width, minimum enclosing cylinder, minimum bounding box, and solve various related geometric optimization problems. Such coresets form one of the most important tools in the design of linear-time approximation algorithms in computational geometry, as well as efficient insertion-only streaming algorithms and dynamic (non-streaming) data structures. In this paper, we continue the theme and explore dynamic streaming algorithms (in the so-called turnstile model). Andoni and Nguyen [SODA 2012] described a dynamic streaming algorithm for maintaining a (1+epsilon)-approximation of the width using O(polylog U) space and update time for a point set in [U]d for any constant dimension d and any constant epsilon>0. Their sketch, based on a "polynomial method", does not explicitly maintain an epsilon-kernel. We extend their method to maintain an epsilon-kernel, and at the same time reduce some of logarithmic factors. As an application, we obtain the first randomized dynamic streaming algorithm for the width problem (and related geometric optimization problems) that supports k outliers, using poly(k, log U) space and time.
12:55 Invited Talk: Jacob Fox, Discrete Geometry, Algebra, and Combinatorics [+]
Many problems in discrete and computational geometry can be viewed as finding patterns in graphs or hypergraphs which arise from geometry or algebra. Famous Ramsey, Turán, and Szemerédi-type results prove the existence of certain patterns in graphs and hypergraphs under mild assumptions. We survey recent results which show much stronger/larger patterns for graphs and hypergraphs that arise from geometry or algebra. We further discuss whether the stronger results in these settings are due to geometric, algebraic, combinatorial, or topological properties of the graphs.
Workshop in Honor of Bernard Chazelle's 60th Birthday [Web]
14:40 Pankaj Agarwal, Opening Remarks
15:20 David Dobkin, Bernard's Research in the 70's [+]
As Bernard's thesis advisor, I introduced him to the field of computational geometry when he was a first year graduate student. In this talk, I survey the situation at that time, his early results and further research they have inspired.
16:00 Micha Sharir, From 60 BC to BC 60: Computational Geometry and Bernard (and me) [+]
In the talk I will review some significant milestones in Bernard Chazelle's work during the late 1980s and the 1990s, including his work on cuttings, arrangements, range searching, discrepancy, and some other topics. I will mix the scientific content with nostalgic anecdotes that highlight lighter aspects of Chazelle's otherwise deep, fundamental, and highly influential contributions to geometry.
17:10 Ken Clarkson, The Thrill Goes On: Bernard Yesterday and Today [+]
From the excruciatingly difficult to the achingly elegant, Bernard Chazelle's work on algorithms, especially geometric or natural ones, has been profoundly influential. I'll sketch a few examples that have been inspiring to me, including 1-dimensional range queries, low-stabbing spanning trees, high-order Voronoi diagram construction, deterministic constructions, and the s-energy of a system.
18:00 Presentation by former students of Bernard
20:00 BC60 evening reception [+]
Saturday June 18, SoCG + STOC, Hyatt Regency in Cambridge, MA
10:45 Workshop A:
1st International Workshop on Geometry and Machine Learning [Web] [+]
Organizers: Jonathan Lenchner, Eli Packer, Jeff M. Phillips and Jinhui Xu
Computational geometry plays a crucial and natural role in interacting with machine learning. Since geometric algorithms often come with quality guaranteed solutions, they are of critical importance in formalizing the effectiveness of various techniques and in developing new ones. On the other hand, problems in machine learning serve as important motivation for work central to geometric algorithms. This workshop is intended to provide a forum for those working in the fields of computational geometry, algorithms, machine learning and the various theoretical and algorithmic challenges to promote their interplay. As a joint STOC/SoCG workshop, we hope researchers who normally frequent only one of STOC or SoCG, but work in geometric algorithms for machine learning, will converge together sharing their insights and developments.
Workshop B:
Spanners: Graphs and Geometry [Web] [+]
Organizers: Shiri Chechik, Michael Dinitz, and Virginia Vassilevska Williams
Spanners, i.e. subgraphs which approximately preserve distances, have been studied in both graph and geometric settings for over 25 years. While there has been some limited interaction between researchers working in graph-theoretic settings and researchers working in geometric settings, work in the two communities has generally proceeded in parallel rather than in cooperation. The goal of this workshop is to bring together researchers interested in spanners (and other aspects of distance compression) in order to summarize and explore the state of the art, transition definitions and techniques between the communities, and explore questions about spanners and related objects that are of interest to both communities.
Workshop C:
Algorithms on Topologically Restricted Graphs [Web] [+]
Organizers: Ken-ichi Kawarabayashi and Anastasios Sidiropoulos
The workshop will focus on recent algorithmic advancements on problems on topologically restricted graphs, such as planar, surface-embedded, small treewidth, minor-free graphs, and graphs of small crossing number. The goal will be to emphasize developments that are of importance in the areas of fixed parameter tractability, approximation algorithms, topological graph theory, and metric embeddings. The program will attempt to highlight developments that are of importance in all of these areas, and emphasize technical connections between them.
Workshop D:
Tutorial: Hardness of Learning [Web] [+]
Organizer: Amit Daniely
Proving hardness of learning problems is a key challenge in Valiant's PAC learning model. As reductions from NP-hard problems do not seem to apply in this context, this area evolved somewhat separately. Traditionally, hardness of learning was proved under cryptographic assumptions. Such assumptions imply that it is hard to learn log-depth circuits, intersections of halfspaces, and more. More recently a new technique was developed for proving hardness of learning based on hardness on average of Constraint Satisfaction Problems like K-SAT and K-XOR. In particular, such assumptions imply that already very simple classes, like DNF formulas, are hard to learn. The tutorial will cover most central hardness of learning techniques and results, with emphasis on the aforementioned recent progress.
Piotr Indyk
Invited Talk: New Algorithms for Similarity Search in High Dimensions
Chao Chen
High Dimensional Mode Estimation via Graphical Models
Mickael Buchet, Tamal K. Dey, Jiayuan Wang, and Yusu Wang
Declutter and Resample: Towards Parameter Free Denoising
9:00 - 9:15
Virginia Vassilevska-Williams
Shay Solomon
A non-Geometric Approach to Geometric Spanners [+]
A geometric (1 + epsilon)-spanner for a point set S in the plane is a sparse subgraph of the complete Euclidean graph corresponding to S, which preserves all pairwise distances to within a factor of 1 + epsilon. This definition can be extended to higher-dimensional Euclidean spaces, and more generally, to metric spaces of low intrinsic dimension.
Geometric spanners have been intensively studied since the mid-80s, with applications ranging from compact routing and distance oracles to robotics and machine learning. In many of these applications, the spanners should achieve additional properties besides sparseness, in particular small weight, degree and (hop-)diameter. Understanding the inherent tradeoffs between these parameters is a fundamental challenge in this area.
In this talk I will discuss novel non-geometric techniques to geometric spanners, and demonstrate their effectiveness in solving several long-standing open questions. The main message of my talk is somewhat surprising -- the right approach to geometric spanners is often non-geometric.
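As a concrete companion to the definition of a geometric (1 + epsilon)-spanner above, here is a small Python sketch of the classical path-greedy construction, which keeps a candidate edge only when the current spanner does not already t-approximate it. This is only an illustration of the standard geometric construction, not the non-geometric techniques of the talk; all names are chosen for the example.

```python
import heapq
import math
from itertools import combinations

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def spanner_distance(adj, n, s, t):
    """Dijkstra over the edges currently in the spanner."""
    d = [math.inf] * n
    d[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > d[u]:
            continue
        if u == t:
            return du
        for v, w in adj[u]:
            if du + w < d[v]:
                d[v] = du + w
                heapq.heappush(pq, (du + w, v))
    return d[t]

def greedy_spanner(points, t):
    """Path-greedy t-spanner: scan pairs by increasing distance and
    add an edge only if the spanner does not yet t-approximate it."""
    n = len(points)
    adj = [[] for _ in range(n)]
    edges = []
    for u, v in sorted(combinations(range(n), 2),
                       key=lambda e: dist(points[e[0]], points[e[1]])):
        d_uv = dist(points[u], points[v])
        if spanner_distance(adj, n, u, v) > t * d_uv:
            adj[u].append((v, d_uv))
            adj[v].append((u, d_uv))
            edges.append((u, v))
    return edges

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
print(greedy_spanner(pts, t=1.5))
```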
Fabrizio Grandoni
New Fault-Tolerant Preservers and Spanners [+]
In this paper we study the existence and computation of sparse (pairwise) f-fault-tolerant (f-FT) preservers and additive spanners, i.e. subgraphs that preserve (exactly or with some small additive error) the distances between given pairs of nodes, under the presence of f edge or vertex failures. We present a variety of results in this setting, including:
(1) The first subquadratic upper bounds on the size of single-source FT preservers in directed and undirected unweighted graphs for any number of faults f. Previously such preservers were known only for f\leq 2. Our preservers can also be used to build the first subquadratic-size 2-additive f-FT spanners in undirected unweighted graphs.
(2) A surprising lower-bound construction showing that, for example, any O(n^{2-\epsilon})-size spanner needs to have additive stretch at least Omega(\epsilon f), even to approximately preserve distances for a single pair of nodes in undirected unweighted graphs. This matches asymptotically known upper bounds holding for (all pairs!) f-FT spanners.
(3) Motivated by the above lower bound, we study the case of single-pair preservers in weighted, directed and undirected, graphs. For f\geq 2, we give a strong Omega(n^2) lower bound for the sparsity of any f-FT preserver, that holds for both the directed and the undirected case. Thus, the only non-trivial results possible are in the setting f=1. For undirected weighted graphs we show that for f=1, O(n) edges are both necessary and sufficient, and we prove an extension of this theorem that shows tight bounds for any number of node pairs. For directed graphs, we show that the 1-FT single-pair preserver problem is equivalent to the pairwise preserver problem in the non-faulty setting where Theta(n) pairs are to be preserved, thus implying that the 1-FT preserver sparsity is between Omega(n^{4/3}) and O(n^{3/2}).
Joint work with Greg Bodwin, Merav Parter, and Virginia Vassilevska Williams
Prosenjit Bose
On Spanning Properties of Various Delaunay Graphs [+]
A geometric graph G is a graph whose vertices are points in the plane and whose edges are line segments weighted by the Euclidean distance between their endpoints. In this setting, a t-spanner of G is a connected spanning subgraph G' with the property that for every pair of vertices x, y, the shortest path from x to y in G' has weight at most t ≥ 1 times the weight of the shortest path from x to y in G. The parameter t is commonly referred to as the spanning ratio or the stretch factor. Among the many beautiful properties that Delaunay graphs possess, a constant spanning ratio is one of them. We provide an overview of various results concerning the spanning ratio among other properties of different types of Delaunay graphs and their subgraphs.
9:00 - 9:30
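To make the notion of spanning ratio concrete, the sketch below (assuming NumPy and SciPy are available) builds a classical Euclidean Delaunay triangulation of random points and measures its empirical stretch factor; it is only an illustration and does not touch the p-gon-distance Delaunay graphs discussed in the talk.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
pts = rng.random((200, 2))

tri = Delaunay(pts)
n = len(pts)
W = lil_matrix((n, n))
for simplex in tri.simplices:            # each simplex is a triangle (i, j, k)
    for a in range(3):
        i, j = simplex[a], simplex[(a + 1) % 3]
        w = np.linalg.norm(pts[i] - pts[j])
        W[i, j] = W[j, i] = w

G = shortest_path(W.tocsr(), directed=False)   # graph distances in the triangulation
E = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # Euclidean distances
np.fill_diagonal(E, 1.0)                       # avoid division by zero on the diagonal
stretch = G / E
np.fill_diagonal(stretch, 0.0)
# Empirical ratio for this sample; the true worst-case constant is known to lie
# roughly between 1.59 and 2.
print("empirical spanning ratio:", stretch.max())
```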
Petr Hlineny
Toroidal Grid Minors, Embedding Stretch, and Crossing Number [+]
We introduce a new embedding density parameter for graphs embedded on orientable surfaces, called the stretch, and approximately relate this parameter to the size of the largest possible (and nontrivial) toroidal grid which is a minor of the graph. The approximation bounds depend only on the genus and the maximum degree. We show how to efficiently compute the stretch of a given embedding and how the stretch relates to the (planar) crossing number of the embedded graph.
Daniel Lokshtanov
Subexponential Parameterized Algorithms for Planar and Apex-minor-free Graphs via Low Treewidth Pattern Covering [+]
We prove the following theorem. Given a planar graph $G$ and an integer $k$, it is possible in polynomial time to randomly sample a subset $A$ of vertices of $G$ with the following properties:
--- $A$ induces a subgraph of $G$ of treewidth $O(\sqrt{k}\log k)$, and
--- for every connected subgraph $H$ of $G$ on at most $k$ vertices, the probability that $A$ covers the whole vertex set of $H$ is at least $(2^{O(\sqrt{k}\log^2 k)}\cdot n^{O(1)})^{-1}$, where $n$ is the number of vertices of $G$.
Together with standard dynamic programming techniques for graphs of bounded treewidth, this result gives a versatile technique for obtaining (randomized) subexponential parameterized algorithms for problems on planar graphs, usually with running time bound $2^{O(\sqrt{k} \log^2 k)} n^{O(1)}$. The technique can be applied to problems expressible as searching for a small, connected pattern with a prescribed property in a large host graph; examples of such problems include Directed $k$-Path, Weighted $k$-Path, Vertex Cover Local Search, and Subgraph Isomorphism, among others. Up to this point, it was open whether these problems can be solved in subexponential parameterized time on planar graphs, because they are not amenable to the classic technique of bidimensionality. Furthermore, all our results hold in fact on any class of graphs that exclude a fixed apex graph as a minor, in particular on graphs embeddable in any fixed surface.
Based on a joint work with Fedor V. Fomin, Daniel Marx, Marcin Pilipczuk, Micha Pilipczuk, and Saket Saurabh
Amir Nayyeri
Minimum Cuts on Surface Embedded Graphs [+]
The algorithmic problem of computing minimum cuts in graphs has been the focus of many studies in the past few decades. In planar graphs, nearly linear time algorithms have been discovered for computing an st-minimum cut, a global minimum cut, and the Gomory-Hu tree (that represents all minimum cuts). All these algorithms rely heavily on the fact that in planar graphs the dual of a cut is a cycle. Generalizing these algorithms has been challenging, as this duality ceases to exist on surfaces of positive genus.
In this talk, I will describe algorithms for computing minimum cuts on genus g surfaces. The running times of all these algorithms are of the form $O(f(g)\, n\, \mathrm{polylog}\, n)$. The key insight is computing short cycles in all homology classes and combining them to construct the dual of the minimum cut. I will describe ideas for computing minimum homologous cycles and assembling them.
I will explain methods from three different papers that are results of joint work with Glencora Borradaile, Jeff Erickson, David Eppstein, Kyle Fox, Christian Wulff-Nilsen.
1. Introduction
The PAC model. What is known about basic PAC problems
2. Proving Hardness: Learning vs. Computation
Inapplicability of NP-hardness techniques. Boosting vs. Hardness of Approximation
3. Hardness Under Cryptographic assumptions
4. Hardness Under Average Case assumptions
12:10 Invited Talk: Santosh Vempala, The Interplay of Sampling and Optimization in High Dimension [+]
In the first part of this talk, we show how simulated annealing can be used for both logconcave sampling and convex optimization by varying parameter settings. The resulting algorithm for optimization can be viewed as an interior-point method needing only a membership oracle and achieving the same worst-case iteration complexity as the best possible barriers. In the second part, we present a faster sampling algorithm for polytopes which achieves subquadratic mixing time. The advantage of the algorithm, called geodesic gliding, comes from using non-Euclidean geometries where effects of the boundary are milder. Roughly speaking, geodesic gliding is an efficient discrete-time simulation of a stochastic differential equation on a Riemannian manifold whose metric is defined by the Hessian of a convex function.
The talk is based on joint works with Ben Cousins, Adam Kalai, Yin Tat Lee and Laci Lovasz.
15:00 Invited Talk: Timothy Chan, Computational Geometry, from Low to High Dimensions [+]
Classical exact algorithms in computational geometry usually have run times that are exponential in the dimension. Recently, slightly subquadratic results have been found for some basic problems, such as offline orthogonal range search and offline Hamming nearest neighbor search, even when the dimension goes above logarithmic, surprisingly. I will survey these recent findings (including some work in progress and a new joint work with Josh Alman and Ryan Williams).
Along the way, we will see how old geometric divide-and-conquer ideas (variants of range trees and kd trees) can help solve nongeometric problems, such as all-pairs shortest paths, Boolean matrix multiplication, and 0-1 integer linear programming. In the opposite direction, we will also see how nongeometric techniques (fast matrix multiplication, circuit complexity, probabilistic polynomials, and Chebyshev polynomials) can help computational geometry. The latter techniques also give new results on offline approximate nearest neighbor search.
18:00 Workshop A (continued)
1st International Workshop on Geometry and Machine Learning
Workshop B (continued)
Spanners: Graphs and Geometry
Workshop C (continued)
Algorithms on Topologically Restricted Graphs
Workshop E:
Geometric Representations of Graphs [Web] [+]
Organizers: Steven Chaplick and Piotr Micek
The study of graphs via geometric representations is a classic topic in both computational geometry and the theory of computing. In recent years, a lot of progress has been made in the study of geometric intersection graphs. Some notable solved problems include Scheinerman's conjecture, Erdos's problem on \chi-boundedness of segment graphs, and the hardness of the clique problem for segment graphs. This workshop aims to gather researchers working on these topics in order to exchange the ideas for further research. It will focus more on open problems and possible approaches rather than particular known results. Some potential topics include: computational complexity of hard geometric problems; algebraic approaches to topological drawings (e.g, Hanani-Tutte); constrained graph embedding problems; colorings of geometric intersection graphs; and approximation algorithms in geometric settings.
Sanjoy Dasgupta
Invited talk: The Geometry of Interactive Learning [+]
In the usual setup of supervised learning, there is little interaction between human and machine: a human being labels a data set and then vanishes from the picture; and at some later time, a machine is started up, given that data, and told to find a good classifier.
"Interactive learning" refers to scenarios in which the human engages with the machine while learning is actually taking place. This can happen in many ways, for example:
1. The machine might ask for labels of specific, highly informative points that are chosen adaptively, rather than requiring everything to be labeled in advance.
2. If prompted, the human might indicate relevant features, for instance by highlighting a few words within a document that are highly indicative of its label.
3. For tasks that are traditionally considered unsupervised, such as clustering or embedding, an iterative refinement process can be designed in which the human keeps giving feedback on the current structure until it is finally satisfactory.
I will describe a general protocol for interactive learning that includes such scenarios and that admits generic learning algorithms. The central question I'll consider is: how much interaction is needed to learn well? Can interaction significantly reduce the total human effort in the learning process?
This is largely an open area of research, but it is clear that the key determiner of "interaction complexity" is the geometry of the class of structures being learned. The quantities of interest are quite different from traditional parameters associated with learning, like VC dimension.
Kush R. Varshney and Karthikeyan Natesan Ramamurthy
Persistent Homology of Classifier Decision Boundaries
Justin Eldridge, Mikhail Belkin, and Yusu Wang
Consistent Estimation of the Graphon Cluster Tree
Hu Ding, Yu Liu, Lingxiao Huang, and Jian Li
A Geometric Approach for K-Means Clustering with Distributed Dimensions
Cyril J. Stark
Global Completability of Matrix Factorizations
15:30 - 16:00
Shiri Chechik
Near-Optimal Light Spanners [+]
A spanner H of a weighted undirected graph G is a "sparse" subgraph that approximately preserves distances between every pair of vertices in G. We refer to H as a k-spanner (or as a spanner with stretch k) of G for some parameter k if the distance in H between every vertex pair is at most a factor k bigger than in G. Two main measures of the sparseness of a spanner are the size (number of edges) and the total weight (the sum of weights of the edges in the spanner). It is well-known that for any positive integer k, one can efficiently construct a (2k-1)-spanner of G with O(n^{1+1/k}) edges where n is the number of vertices [Althöfer et al. 93]. This size-stretch tradeoff is conjectured to be optimal based on a girth conjecture of Erdös. However, the current state of the art for the second measure is not yet optimal.
In this talk I am going to discuss a construction for spanners with near optimal bounds with respect to the stretch, the weight and the number of edges.
Based on joint work with Christian Wulff-Nilsen.
Ljubomir Perkovic
Spanners via p-gon Distance Delaunay Triangulations [+]
What is the smallest maximum degree that can always be achieved for a plane (i.e., with no crossing edges) spanner of a complete Euclidean graph? This is a fundamental question and the Delaunay triangulation, a well-known plane spanner, is a natural starting point for attempts to answer it. Delaunay triangulations have, however, proven difficult to work with. Recent progress on answering the question has relied instead on Delaunay triangulations defined using alternative distance functions. Rather than using the standard Euclidean distance based on a circle, those Delaunay triangulations are defined using distances based on (regular) p-gons.
Delaunay triangulations defined using p-gon distances have local structural properties that make them amenable to analysis. In addition to their use in bounding the degree of plane spanners, they have been used in developing geometric routing schemes. They may also help in tackling the question of determining the exact spanning ratio (i.e., stretch factor) of Delaunay triangulations. Over the past 30 years, there has been intense interest in computing the exact spanning ratio of classic (circle distance) Delaunay triangulations. Despite that, until recently the only type of Delaunay triangulation for which the answer has been known is the triangle distance Delaunay triangulation (Chew '89). It is likely that work on the spanning ratio of p-gon distance Delaunay triangulations will build the bridge leading to a better understanding of the spanning ratio of classic Delaunay triangulations.
In this talk, I will review a selection of recent work that rely on p-gon distance Delaunay triangulations and highlight some of their properties as well as techniques that have been developed.
Greg Bodwin
Understanding Additive Spanners via Distance Preservers [+]
An Additive Spanner of a graph is a sparse subgraph that preserves all pairwise distances within +k error. A Distance Preserver of a graph is a sparse subgraph that exactly preserves pairwise distances within a small set P of node pairs. There have been several recent successful lines of research that have converted existing knowledge of distance preservers into a new understanding of additive spanners. In this talk, we will discuss some of the results and techniques used in this line of attack. In particular, we will overview the recent result (to appear in STOC '16) that there are no additive spanner constructions with +n^{o(1)} error and n^{4/3 - eps} edges (which is proved by reduction to distance preserver lower bounds). We will also survey some of the major results and open questions in distance preservers research, and demonstrate further relationships between these problems and some of the major remaining questions in additive spanners research.
Wolfgang Mulzer
Efficient Spanner Construction for Directed Transmission Graphs [+]
Let P be a set of n points in the plane, each with an associated radius r(p) > 0. The transmission graph G for P has vertex set P and a directed edge from p to q if and only if q lies in the ball with radius r(p) around p.
Let t > 1. A t-spanner H for G is a sparse subgraph such that for any two vertices p and q connected by a path of length l in G, there is a path of length at most tl from p to q in H. Given G implicitly as points with radii, we show how to compute a t-spanner for G in time O(n (log n + log Psi)), where Psi is the ratio of the largest and smallest radius in P. Furthermore, we extend this construction to be independent of Psi at the expense of a polylogarithmic overhead in the running time. As a bonus, it turns out that the properties of our spanner also allow us to compute a BFS tree for G and any given start vertex s in the same time.
Based on joint work with Haim Kaplan, Liam Roditty and Paul Seiferth.
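For concreteness, the transmission graph defined above can be built naively in quadratic time as in the Python sketch below; the point of the talk is precisely how to avoid this brute-force step when constructing a t-spanner, so this is an illustration of the definition only.

```python
import math

def transmission_graph(points, radii):
    """Directed edge p -> q iff q lies in the ball of radius r(p) around p."""
    n = len(points)
    edges = []
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(points[i], points[j]) <= radii[i]:
                edges.append((i, j))
    return edges

pts   = [(0, 0), (2, 0), (5, 0)]
radii = [3.0, 1.0, 6.0]
print(transmission_graph(pts, radii))
# [(0, 1), (2, 0), (2, 1)] -- note that the edge set is not symmetric
```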
Michael Dinitz
Approximating Spanners [+]
This talk will survey recent work on approximating spanners, in which we consider the computational problem of finding the "best" spanner given an input graph. This is in contrast to the study of "universal" bounds, i.e. bounds on spanner size/quality which hold universally for all (or a defined subclass) of graphs. We will see that in some cases we can give approximation bounds even when universal bounds do not exist, and when universal bounds do exist we can sometimes optimize to find even better spanners (if they exist).
15:30 - 16:00
Eric Colin de Verdiere
Multicuts in Planar and Bounded-Genus Graphs with Bounded Number of Terminals [+]
Given an undirected, edge-weighted graph G together with pairs of vertices, called pairs of terminals, the minimum multicut problem asks for a minimum-weight set of edges such that, after deleting these edges, the two terminals of each pair belong to different connected components of the graph. Relying on topological techniques, we provide a polynomial-time algorithm for this problem in the case where G is embedded on a fixed surface of genus g (e.g., when G is planar) and has a fixed number t of terminals. The running time is a polynomial of degree $O(\sqrt{g^2+gt})$ in the input size.
In the planar case, our result corrects an error in an extended abstract by Bentz [Int. Workshop on Parameterized and Exact Computation, 109--119, 2012]. The minimum multicut problem is also a generalization of the multiway cut problem, a.k.a. multiterminal cut problem; even for this special case, no dedicated algorithm was known for graphs embedded on surfaces.
Robert Krauthgamer
Cutting Corners Cheaply, or How to Remove Steiner Points [+]
I will show how the Steiner Point Removal (SPR) problem can always be solved with polylogarithmic distortion, which answers a question posed by Chan, Xia, Konjevod, and Richa (2006). Specifically, for every edge-weighted graph G=(V,E,w) and a subset of terminals $T\subset V$, there is a graph only on the terminals, denoted $G'=(T,E',w')$, which is a minor of $G$ and the shortest-path distance between any two terminals is approximately equal in $G'$ and in $G$, namely within factor $O(\log^5 |T|)$. Our existence proof is constructive and gives a randomized polynomial-time algorithm.
Joint work with Lior Kamma and Huy L. Nguyen
Philip Klein
Approximation Schemes for Planar Graphs: A Survey of Methods [+]
In addressing an NP-hard problem in combinatorial optimization, one way to cope is to use an {\em approximation scheme}, an algorithm that, for any given \epsilon>0, produces a solution whose value is within a 1+\epsilon factor of optimal. For many problems on graphs, obtaining such accurate approximations is NP-hard if the input is allowed to be any graph but is tractable if the input graph is required to be planar.
Research on polynomial-time approximation schemes for optimization problems in planar graphs goes back to the pioneering work of Lipton and Tarjan (1977) and Baker (1983). Since then, however, the scope of problems amenable to approximation has broadened considerably. In this talk I will outline some of the approaches used, especially those that have led to recent results.
Chandra Chekuri
Constant Congestion Routing of Symmetric Demands [+]
Routing problems in directed graphs such as the maximum edge-disjoint paths problem appear quite intractable. There is a polynomial-factor lower bound on the approximation ratio that can be achieved even if constant congestion is allowed. We consider the case of symmetric demands, where a demand pair s-t requires a directed path from s to t and a directed path from t to s. This variation appears tractable. We discuss two recent positive results on this problem, including routing in planar graphs. One important motivation to consider symmetric demands is the connection to directed treewidth and directed grid-minors. The goal of the talk is to highlight the connection and mention the open problems.
Based on joint work with Alina Ene and Marcin Pilipczuk.
Jean Cardinal
Geometric Representations of Graphs and the Existential Theory of the Reals [+]
Deciding the validity of sentences in the existential theory of the reals (ETR) amounts to checking the existence of a real solution to systems of polynomial equalities and inequalities. In the late eighties, Mnëv stated a theorem that led to a proof that the stretchability problem for pseudoline arrangements was ETR-complete. Since then, many more natural problems in computational geometry have been proven ETR-complete, in particular in the field of graph drawing and geometric graphs. We will describe the foundation of those proofs, some well-known results in this vein, and recent additions to the list.
Radoslav Fulek
Strong Hanani-Tutte for Radial Drawings [+]
A drawing of a graph G is radial if the vertices of G are placed on concentric circles C_1,..., C_k with common center c, and edges are drawn radially: every edge intersects every circle centered at c at most once. G is radial planar if it has a radial embedding, that is, a crossing-free radial drawing. If the vertices of G are ordered or partitioned into ordered levels (as they are for leveled graphs), we require that the assignment of vertices to circles corresponds to the given ordering or leveling. A pair of edges e and f in a graph is independent if e and f do not share a vertex. We show that a graph G is radial planar if G has a radial drawing in which every two independent edges cross an even number of times; the radial embedding has the same leveling as the radial drawing. In other words, we establish the variant of the Hanani-Tutte theorem for radial planarity. Our result implies a very simple polynomial-time algorithm for radial planarity based on solving a system of linear equations over Z_2. This is a joint work with M. Pelsmajer and M. Schaefer.
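The final step mentioned above, solving a linear system over Z_2, can be illustrated with a generic GF(2) elimination routine such as the sketch below. Setting up the actual equations for radial planarity is the substance of the paper and is not reproduced here; the encoding and names are invented for the example.

```python
def solve_gf2(rows, rhs):
    """Gaussian elimination over GF(2).
    rows[i] is an int used as a bitmask of coefficients (bit j = variable x_j);
    rhs[i] is 0 or 1. Returns one solution as a list of bits, or None if inconsistent."""
    m = len(rows)
    nvars = max((r.bit_length() for r in rows), default=0)
    rows, rhs = rows[:], rhs[:]
    pivot_of = {}
    r = 0
    for col in range(nvars):
        piv = next((i for i in range(r, m) if (rows[i] >> col) & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        rhs[r], rhs[piv] = rhs[piv], rhs[r]
        for i in range(m):                    # eliminate this column everywhere else
            if i != r and (rows[i] >> col) & 1:
                rows[i] ^= rows[r]
                rhs[i] ^= rhs[r]
        pivot_of[col] = r
        r += 1
    if any(rows[i] == 0 and rhs[i] == 1 for i in range(m)):
        return None                           # inconsistent system
    x = [0] * nvars                           # free variables set to 0
    for col, row in pivot_of.items():
        x[col] = rhs[row]
    return x

# x0 + x1 = 1, x1 + x2 = 0, x0 + x2 = 1 over GF(2)
print(solve_gf2([0b011, 0b110, 0b101], [1, 0, 1]))  # -> [1, 0, 0]
```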
Csaba Tóth
Flexible Contact Representations [+]
Contact graphs are easy to handle when such a representation is unique or severely constrained. We summarize recent results on highly flexible representations. Two models are considered: body-and-hinge frameworks and contact graphs of convex bodies in the plane. We show that it is strongly NP-hard to decide (1) whether a given body-and-hinge framework is realizable when the bodies are convex polygons and their contact graph is a tree; (2) whether a given tree is the contact graph of interior-disjoint unit disks in the plane. The main challenge in both cases is that any realization has a high degree of freedom, which raises several open problems about the realization space of contact representations.
Bartosz Walczak
Coloring Geometric Intersection Graphs [+]
Steven Chaplick
Open Problem Session
Design of a Gravity-Fed Hydrodynamic Testing Tunnel
The purpose of this project is to determine the feasibility of a water tunnel designed to meet certain constraints. The project goals are to tailor a design for a given location, and to produce a repeatable design sizing and shape process for specified constraints. The primary design goals include a 1 m/s flow velocity in a 30cm x 30cm test section for 300 seconds. Secondary parameters, such as system height, tank height, area contraction ratio, and roof loading limits, may change depending on preference, location, or environment. The final chosen configuration is a gravity fed design with six major components: the reservoir tank, the initial duct, the contraction nozzle, the test section, the exit duct, and the variable control exit nozzle. Important sizing results include a minimum water weight of 60,000 pounds, a system height of 7.65 meters, a system length of 6 meters (not including the reservoir tank), a large shallow reservoir tank width of 12.2 meters, and height of 0.22 meters, and a control nozzle exit radius range of 5.25 cm to 5.3 cm. Computational fluid dynamic simulation further supports adherence to the design constraints but points out some potential areas for improvement in dealing with flow irregularities. These areas include the bends in the ducts, and the contraction nozzle. Despite those areas recommended for improvement, it is reasonable to conclude that the design and process fulfill the project goals.
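As a quick sanity check on the figures quoted above, the minimum water weight follows directly from the stated test-section size, flow velocity, and run time. The sketch below is an illustrative back-of-the-envelope calculation (not the thesis's sizing procedure) and assumes fresh water at 1000 kg/m^3; it reproduces a figure consistent with the roughly 60,000-pound minimum.

```python
# Required water volume for a 0.30 m x 0.30 m test section at 1 m/s for 300 s.
area_m2   = 0.30 * 0.30          # test-section cross section, m^2
velocity  = 1.0                  # m/s
duration  = 300.0                # s
rho_water = 1000.0               # kg/m^3 (assumed fresh water)

volume_m3 = area_m2 * velocity * duration   # 27 m^3 must pass through the test section
mass_kg   = rho_water * volume_m3           # ~27,000 kg
weight_lb = mass_kg * 2.20462               # ~59,500 lb, consistent with ~60,000 lb minimum
print(round(volume_m3, 1), round(mass_kg), round(weight_lb))
```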
Zykan, Brandt Davis Healy (Author)
Wells, Valana (Thesis director)
Middleton, James (Committee member)
Galaxy evolution with hybrid methods
I combine, compare, and contrast the results from two different numerical techniques (grid vs. particle methods) studying multi-scale processes in galaxy and structure formation. I produce a method for recreating identical initial conditions for one method from those of the other, and explore methodologies necessary for making these two methods as consistent as possible. With this, I first study the impact of streaming velocities of baryons with respect to dark matter, present at the epoch of reionization, on the ability of small halos to accrete gas at high redshift. With the inclusion of this streaming velocity, I find that the central density profile of halos is reduced and overall gas condensation is delayed, and I infer a delay in the eventual formation of stars.
I then combine the two numerical methods to study starburst outflows as they interact with satellite halos. This process leads to shocks catalyzing the formation of molecular coolants that lead to bursts in star formation, a process that is better captured in grid methods. The resultant clumps of stars are removed from their initial dark matter halo, resemble precursors to modern-day globular clusters, and their formation may be observable with upcoming telescopes.
Finally, I perform two simulation suites, comparing each numerical method's ability to model the impact of energetic feedback from accreting black holes at the core of giant clusters. With these comparisons I show that black hole feedback can maintain a hot diffuse medium while limiting the amount of gas that can condense into the interstellar medium, reducing the central star formation by up to an order of magnitude.
Richardson, Mark Lawrence Albert (Author)
Scannapieco, Evan (Thesis advisor)
Rhoads, James (Committee member)
Scowen, Paul (Committee member)
Timmes, Frank (Committee member)
Young, Patrick (Committee member)
A fast fluid simulator using smoothed-particle hydrodynamics
This document presents a new implementation of the Smoothed Particle Hydrodynamics algorithm using DirectX 11 and DirectCompute. The main goal of this document is to present to the reader an alternative solution to the widely studied problem of fluid simulation. Most other solutions have been implemented using the NVIDIA CUDA framework; however, the proposed solution in this document uses the Microsoft general-purpose computing on graphics processing units API. The implementation allows for the simulation of a large number of particles in a real-time scenario. The solution presented here uses the Smoothed Particle Hydrodynamics algorithm to calculate the forces within the fluid; this algorithm provides a Lagrangian approach that discretizes the Navier-Stokes equations into a set of particles. Our solution uses DirectCompute compute shaders to evaluate each particle using the multithreading and multi-core capabilities of the GPU, increasing the overall performance. The solution then describes a method for extracting the fluid surface using the Marching Cubes method and the programmable interfaces exposed by the DirectX pipeline. In particular, this document presents a method for using the Geometry Shader stage to generate the triangle mesh defined by the Marching Cubes method. The implementation results show the ability to simulate over 64K particles at rates of 900 and 400 frames per second, excluding and including the Marching Cubes surface-reconstruction step, respectively.
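To give a flavor of the SPH density step described above, here is a minimal CPU-side Python sketch using the common poly6 smoothing kernel. It is illustrative only: the thesis implementation runs in DirectCompute shaders and also handles pressure, viscosity, and Marching Cubes surface extraction, none of which appear here.

```python
import numpy as np

def sph_density(positions, masses, h):
    """Density at each particle using the poly6 kernel
    W(r, h) = 315 / (64 * pi * h^9) * (h^2 - r^2)^3 for r < h, else 0."""
    coeff = 315.0 / (64.0 * np.pi * h**9)
    diff = positions[:, None, :] - positions[None, :, :]       # pairwise displacements
    r2 = np.einsum('ijk,ijk->ij', diff, diff)                  # squared distances
    w = np.where(r2 < h * h, coeff * (h * h - r2) ** 3, 0.0)   # kernel weights
    return w @ masses                                          # rho_i = sum_j m_j W(|x_i - x_j|, h)

pos = np.random.default_rng(1).random((500, 3)) * 0.1   # 500 particles in a 10 cm box
rho = sph_density(pos, masses=np.full(500, 0.001), h=0.02)
print(rho.mean())
```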
Figueroa, Gustavo (Author)
Farin, Gerald (Thesis advisor)
Wang, Yalin (Committee member)
Effects of dynamic material strength on hydrodynamic instability and damage evolution in shock loaded copper
Characterization and modeling of deformation and failure in metallic materials under extreme conditions, such as the high loads and strain rates found under shock loading due to explosive detonation and high-velocity impacts, are extremely important for a wide variety of military and industrial applications. When a shock wave causes stress in a material that exceeds the elastic limit, plasticity and eventually spallation occur in the material. The process of spall fracture, which in ductile materials stems from strain localization, void nucleation, growth and coalescence, can be caused by microstructural heterogeneity. The analysis of void nucleation performed on a microstructurally explicit simulation of spall damage evolution in multicrystalline copper identified triple junctions as the preferred sites for incipient damage nucleation, revealing that 75% of them had at least two grain boundaries with misorientation angles between 20-55°. The analysis suggested that the nature of the boundaries connecting at a triple junction is an indicator of their tendency to localize spall damage. The results also showed that damage propagated preferentially into one of the high-angle boundaries after voids nucleated at triple junctions. Recently, the Rayleigh-Taylor Instability (RTI) and the Richtmyer-Meshkov Instability (RMI) have been used to deduce dynamic material strength at very high pressures and strain rates. The RMI is used in this work since its slower linear growth rate allows the use of precise diagnostics such as Transient Imaging Displacement Interferometry (TIDI). The Preston-Tonks-Wallace (PTW) model is used to study the effects of dynamic strength on the behavior of samples with a fed-thru RMI, induced via direct laser drive on a perturbed surface, on the stability of the shock front, and on the dynamic evolution of the amplitudes and velocities of the perturbation imprinted on the back (flat) surface by the perturbed shock front. Simulation results clearly showed that the amplitude of the hydrodynamic instability increases with a decrease in strength and vice versa, and that the amplitude of the perturbed shock front produced by the fed-thru RMI is also affected by strength in the same way, which provides an alternative to amplitude measurements to study strength effects under dynamic conditions. Simulation results also indicate the presence of second harmonics in the surface perturbation after a certain time, which were also affected by the material strength.
Gautam, Sudrishti (Author)
Peralta, Pedro (Thesis advisor)
Oswald, Jay (Committee member)
Solanki, Kiran (Committee member)
Evolution, Disruption, and Composition of Galactic Outflows Around Starburst Galaxies
The interaction between galaxies and the surrounding gas plays a key role in galaxy formation and evolution. Feedback processes driven by star formation and active galactic nuclei facilitate the exchange of mass and energy between the galaxy and the circumgalactic medium through inflowing and outflowing gas. These outflows have a significant impact on the star formation rate and metallicity of the galaxy. Observations of outflows have provided evidence that these outflows are multi-phase in nature, identifying both low energy ions such as Mg II and C III and high energy ions such as O VI. The underlying physics maintaining the two phases as well as the ionization mechanism for these phases remains unclear. In order to better understand galactic outflows, hydrodynamic simulations are used to study the evolution of wind-cloud interactions. In this work, I carried out a suite of magnetohydrodynamic simulations to characterize the influence of magnetic fields on the evolution and lifetime of cold clouds. I found magnetic fields either provided little improvement to cloud stability over other influences such as radiative cooling or accelerated cloud disruption by pushing cloud material in the direction orthogonal to the wind and magnetic fields. To investigate the ionization mechanism of the material within outflows I first considered estimating the column densities of various ions within wind-cloud simulations with the post-processing tool Trident. Under the assumption of ionization equilibrium, the simulations did not reproduce the observed absorption profiles demonstrating the need for a more detailed treatment of the ionization processes. I then performed a new set of simulations with the non-equilibrium chemistry solver, MAIHEM. The column densities produced in the non-equilibrium model alter the evolution of the cloud and highlight the increased ionization along the boundary of the cloud.
Blough-Swingen, J'Neil (Author)
Groppi, Christopher (Committee member)
Borthakur, Sanchayeeta (Committee member)
Mauskopf, Phillip (Committee member)
Anomalous Chiral Plasmas in the Hydrodynamic Regime
Chiral symmetry and its anomalous and spontaneous breaking play an important role
in particle physics, where, among other things, they explain the origin of the pion
and the hadron mass hierarchy.
Despite its microscopic origin, chirality may also lead to observable effects
in macroscopic physical systems -- relativistic plasmas made of chiral
(spin-$\frac{1}{2}$) particles.
Such plasmas are called \textit{chiral}.
The effects include non-dissipative currents in external fields that could be present
even in quasi-equilibrium, such as the chiral magnetic (CME) and separation (CSE)
effects, as well as a number of inherently chiral collective modes
called the chiral magnetic (CMW) and vortical (CVW) waves.
Applications of chiral plasmas are truly interdisciplinary, ranging from
hot plasma filling the early Universe, to dense matter in neutron stars,
to electronic band structures in Dirac and Weyl semimetals, to quark-gluon plasma
produced in heavy-ion collisions.
The main focus of this dissertation is a search for traces of chiral physics
in the spectrum of collective modes in chiral plasmas.
I start from relativistic chiral kinetic theory and derive
first- and second-order chiral hydrodynamics.
Then I establish key features of an equilibrium state that describes many
physical chiral systems and use it to find the full spectrum of collective modes
in high-temperature and high-density cases.
Finally, I consider in detail the fate of the two inherently chiral waves, namely
the CMW and the CVW, and determine their detection prospects.
The main results of this dissertation are the formulation of a fully covariant
dissipative chiral hydrodynamics and the calculation of the spectrum of collective
modes in chiral plasmas.
It is found that the dissipative effects and dynamical electromagnetism play
an important role in most cases.
In particular, it is found that both the CMW and the CVW are heavily damped by the usual
Ohmic dissipation in charged plasmas and the diffusion effects in neutral plasmas.
These findings prompt a search for new physical observables in heavy-ion collisions,
as well as a revision of potential applications of chiral theories in
cosmology and solid-state physics.
Rybalka, Denys (Author)
Shovkovy, Igor (Thesis advisor)
Lunardini, Cecilia (Committee member)
Timmes, Francis (Committee member)
Vachaspati, Tanmay (Committee member)
Determining effect of small variable force on planetary perihelion precession
Is there an analytical technique for determining the effect of a small variable transverse acceleration upon the rate of apsidal precession (strictly not a precession but a rotation of the line of apsides) of a planet orbiting the Sun in a 2D plane according to the Newtonian law of gravity?
I have modelled such effects in a reiterative computer model and would like to verify those measurements.
The transverse acceleration formula is
$$A_t = \frac{K}{c^2}\,V_r\,V_t\,A_r.$$
Where:-
c is speed of light,
K is a dimensionless constant with magnitude between 0 and 3 (it may be positive or negative), such that $K/c^2 \ll 1$.
Ar is the acceleration of the planet towards the Sun due to Newtonian gravitational influence of the Sun, ($Ar = GM/r^2$).
Vr is radial component of planet velocity relative to the Sun (+ = motion away from the Sun)
Vt is transverse component of planet velocity relative to the Sun (+ = direction of planet forward motion along its orbital path). Vectorially Vt = V - Vr where V is the total instantaneous velocity vector of the planet relative to the Sun.
Assume planet mass is small relative to the Sun
No other bodies are in the system
All motions and accelerations are confined to the two-dimensional plane of the orbit.
The reason why this is interesting to me is that a value of K = +3 in my computer model produces anomalous (non-Newtonian) periapse rotation rates within about 1% of those predicted by General Relativity, and within a few percent of those observed by astronomers (Le Verrier, updated by Newcomb).
Formula (Einstein, 1915) for the GR-derived periapse rotation (radians per orbit), from http://en.wikipedia.org/wiki/Apsidal_precession: $$ \omega=\frac{24\pi^3 a^2}{T^2 c^2 (1-e^2)} $$
I have accepted Walter's answer. Not only did he answer the original question (Is there a technique...?), but his analysis also produces a formula which confirms the computer-simulated effects of the transverse acceleration formula (for K = 3) and which (unexpectedly to me) is essentially equivalent to the Einstein 1915 formula.
from Walter's Summary (in Walter's answer below):-
: (from first-order perturbation analysis) semi-major axis and eccentricity are unchanged, but the direction of periapse rotates in the plane of the orbit at rate $$ \omega=\Omega \frac{v_c^2}{c^2} \frac{K}{1-e^2}, $$ where $\Omega$ is the orbital frequency and $v_c=\Omega a$ with $a$ the semi-major axis. Note that (for $K=3$) this agrees with the general relativity (GR) precession rate at order $v_c^2/c^2$ (given by Einstein 1915).
gravity eccentric-orbit
steveOw
steveOwsteveOw
$\begingroup$ Are you still seeking an answer? $\endgroup$
– Walter
$\begingroup$ @Walter. Yes I am. I have asked similar question at physics.stackexchange.com/questions/123685/… but no solid answer received yet. $\endgroup$
– steveOw
$\begingroup$ @Walter. I also asked at math.stackexchange.com/questions/866836/…. $\endgroup$
$\begingroup$ Yes, there are approximate analytical methods (perturbation theory), valid in the limit of $K\ll1$. Perhaps you can clarify your question a bit. What's the direction of the transverse acceleration (I understand 'transverse' to mean perpendicular to the instantaneous velocity, but it's not clear whether the acceleration is in the plane of the orbit or perpendicular or a mixture). $\endgroup$
$\begingroup$ There is a difference between your question here and that on mathematics (and physics): here the transverse acceleration is proportional to the radial acceleration and $K$ is a dimensionless number, there the radial acceleration has no effect on the transverse acceleration and $K$ must be an acceleration (though you talk about a 'number'). $\endgroup$
You may want to use perturbation theory. This only gives you an approximate answer, but allows for analytic treatment. Your force is considered a small perturbation to the Keplerian elliptic orbit and the resulting equations of motion are expanded in powers of $K$. For linear perturbation theory, only terms linear in $K$ are retained. This simply leads to integrating the perturbation along the unperturbed original orbit. Writing your force as a vector, the perturbing acceleration is $$ \boldsymbol{a} = K \frac{GM}{r^2c^2}v_r\boldsymbol{v}_t $$ with $v_r=\boldsymbol{v}{\cdot}\hat{\boldsymbol{r}}$ the radial velocity ($\boldsymbol{v}\equiv\dot{\boldsymbol{r}}$) and $\boldsymbol{v}_t=(\boldsymbol{v}-\hat{\boldsymbol{r}}(\boldsymbol{v}{\cdot}\hat{\boldsymbol{r}}))$ the rotational component of velocity (the full velocity minus the radial velocity). Here, the dot above denotes a time derivative and a hat the unit vector.
Now, it depends on what you mean by 'effect'. Let's work out the changes of the orbital semimajor axis $a$, eccentricity $e$, and direction of periapse.
To summarise the results below: semi-major axis and eccentricity are unchanged, but the direction of periapse rotates in the plane of the orbit at rate $$ \omega=\Omega \frac{v_c^2}{c^2} \frac{K}{1-e^2}, $$ where $\Omega$ is the orbital frequency and $v_c=\Omega a$ with $a$ the semi-major axis. Note that (for $K=3$) this agrees with the general relativity (GR) precession rate at order $v_c^2/c^2$ (given by Einstein 1915 but not mentioned in the original question).
change of semimajor axis
From the relation $a=-GM/2E$ (with $E=\frac{1}{2}\boldsymbol{v}^2-GMr^{-1}$ the orbital energy) we have for the change of $a$ due to an external (non-Keplerian) acceleration $$ \dot{a}=\frac{2a^2}{GM}\boldsymbol{v}{\cdot}\boldsymbol{a}. $$ Inserting $\boldsymbol{a}$ (note that $\boldsymbol{v}{\cdot}\boldsymbol{v}_t=h^2/r^2$ with angular momentum vector $\boldsymbol{h}\equiv\boldsymbol{r}\wedge\boldsymbol{v}$), we get $$ \dot{a}=\frac{2a^2Kh^2}{c^2}\frac{v_r}{r^4}. $$ Since the orbit average $\langle v_r f(r)\rangle=0$ for any function $f$ (see below), $\langle\dot{a}\rangle=0$.
change of eccentricity
From $\boldsymbol{h}^2=(1-e^2)GMa$, we find $$ e\dot{e}=-\frac{\boldsymbol{h}{\cdot}\dot{\boldsymbol{h}}}{GMa}+\frac{h^2\dot{a}}{2GMa^2}. $$ We already know that $\langle\dot{a}\rangle=0$, so only need to consider the first term. Thus, $$ e\dot{e}=-\frac{(\boldsymbol{r}\wedge\boldsymbol{v}){\cdot}(\boldsymbol{r}\wedge\boldsymbol{a})}{GMa} =-\frac{r^2\;\boldsymbol{v}{\cdot}\boldsymbol{a}}{GMa} =-\frac{Kh^2}{ac^2}\frac{v_r}{r^2}, $$ where I have used the identity $(\boldsymbol{a}\wedge\boldsymbol{b}){\cdot}(\boldsymbol{c}\wedge\boldsymbol{d}) =\boldsymbol{a}{\cdot}\boldsymbol{c}\;\boldsymbol{b}{\cdot}\boldsymbol{d}- \boldsymbol{a}{\cdot}\boldsymbol{d}\;\boldsymbol{b}{\cdot}\boldsymbol{c}$ and the fact $\boldsymbol{r}{\cdot}\boldsymbol{a}_p=0$. Again $\langle v_r/r^2\rangle=0$ and hence $\langle\dot{e}\rangle=0$.
change of the direction of periapse
The eccentricity vector $ \boldsymbol{e}\equiv\boldsymbol{v}\wedge\boldsymbol{h}/GM - \hat{\boldsymbol{r}} $ points (from the centre of gravity) in the direction of periapse, has magnitude $e$, and is conserved under the Keplerian motion (validate all that as an exercise!). From this definition we find its instantaneous change due to external acceleration $$ \dot{\boldsymbol{e}}= \frac{\boldsymbol{a}\wedge(\boldsymbol{r}\wedge\boldsymbol{v}) +\boldsymbol{v}\wedge(\boldsymbol{r}\wedge\boldsymbol{a})}{GM} =\frac{2(\boldsymbol{v}{\cdot}\boldsymbol{a})\boldsymbol{r} -(\boldsymbol{r}{\cdot}\boldsymbol{v})\boldsymbol{a}}{GM} =\frac{2K}{c^2}\frac{h^2v_r\boldsymbol{r}}{r^4} -\frac{K}{c^2}\frac{v_r^2\boldsymbol{v}_t}{r} $$ where I have used the identity $\boldsymbol{a}\wedge(\boldsymbol{b}\wedge\boldsymbol{c})=(\boldsymbol{a}{\cdot}\boldsymbol{c})\boldsymbol{b}-(\boldsymbol{a}{\cdot}\boldsymbol{b})\boldsymbol{c}$ and the fact $\boldsymbol{r}{\cdot}\boldsymbol{a}=0$. The orbit averages of these expression are considered in the appendix below. If we finally put everything together, we get $ \dot{\boldsymbol{e}}=\boldsymbol{\omega}\wedge\boldsymbol{e} $ with [corrected again] $$ \boldsymbol{\omega}=\Omega K \frac{v_c^2}{c^2} (1-e^2)^{-1}\, \hat{\boldsymbol{h}}. $$ This is a rotation of periapse in the plane of the orbit with angular frequency $\omega=|\boldsymbol{\omega}|$. In particular $\langle e\dot{e}\rangle=\langle\boldsymbol{e}{\cdot}\dot{\boldsymbol{e}}\rangle=0$ in agreement with our previous finding.
Don't forget that due to our usage of first-order perturbation theory these results are only strictly true in the limit $K(v_c/c)^2\to0$. At second order in perturbation theory, however, $a$ and/or $e$ may change. In your numerical experiments, you should find that the orbit-averaged changes of $a$ and $e$ are either zero or scale more strongly than linearly with the perturbation amplitude $K$.
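A minimal numerical cross-check of these formulae (a sketch with illustrative parameters, not the OP's model): integrate the two-body problem plus the perturbing acceleration $\boldsymbol{a}$ defined above with a fixed-step RK4 scheme, track the direction of the osculating eccentricity vector, and compare the measured apsidal rotation rate with $\omega=\Omega K v_c^2/[c^2(1-e^2)]$. Units with $GM=1$ and $c=30$ are assumed purely for convenience; the two printed rates should agree closely, limited by the finite step size and the first-order nature of the perturbation analysis.

```python
import numpy as np

GM, K, c = 1.0, 3.0, 30.0                 # illustrative units: (v_c/c)^2 ~ 1e-3
a, e = 1.0, 0.3
r = np.array([a * (1.0 - e), 0.0])        # start at perihelion
v = np.array([0.0, np.sqrt(GM / a * (1 + e) / (1 - e))])

def accel(r, v):
    rn = np.linalg.norm(r)
    rhat = r / rn
    vr = np.dot(v, rhat)                  # radial velocity component
    vt = v - vr * rhat                    # transverse velocity vector
    return -GM / rn**2 * rhat + (K / c**2) * (GM / rn**2) * vr * vt

def deriv(y):
    return np.concatenate([y[2:], accel(y[:2], y[2:])])

def rk4(y, dt):
    k1 = deriv(y); k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2); k4 = deriv(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def periapse_angle(y):
    r, v = y[:2], y[2:]
    e_vec = ((np.dot(v, v) - GM / np.linalg.norm(r)) * r - np.dot(r, v) * v) / GM
    return np.arctan2(e_vec[1], e_vec[0])

T = 2.0 * np.pi * np.sqrt(a**3 / GM)
n_orbits, steps_per_orbit = 10, 3000
dt = T / steps_per_orbit
y = np.concatenate([r, v])
w0 = periapse_angle(y)
for _ in range(n_orbits * steps_per_orbit):
    y = rk4(y, dt)
w1 = periapse_angle(y)

measured = (w1 - w0) / (n_orbits * T)
Omega, vc2 = np.sqrt(GM / a**3), GM / a
predicted = Omega * K * vc2 / (c**2 * (1.0 - e**2))
print(measured, predicted)                # the two rates should agree closely
```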
disclaimer No guarantee that the algebra is correct. Check it!
Appendix: orbit averages
Orbit averages of $v_rf(r)$ with an arbitrary (but integrable) function $f(r)$ can be directly calculated for any type of periodic orbit. Let $F(r)$ be the antiderivative of $f(r)$, i.e. $F'\!=f$, then the orbit average is: $$ \langle v_r f(r)\rangle = \frac{1}{T}\int_0^T v_r(t)\,f\!\left(r(t)\right) \mathrm{d}t = \frac{1}{T} \left[F\left(r(t)\right)\right]_0^T = 0 $$ with $T$ the orbital period.
For the orbit averages required in $\langle\dot{\boldsymbol{e}}\rangle$, we must dig a bit deeper. For a Keplerian elliptic orbit $$ \boldsymbol{r}=a\left((\cos\eta-e)\hat{\boldsymbol{e}}+\sqrt{1-e^2}\sin\eta\,\hat{\boldsymbol{k}}\right)\qquad\text{and}\qquad r=a(1-e\cos\eta) $$ with eccentricity vector $\boldsymbol{e}$ and $\hat{\boldsymbol{k}}\equiv\hat{\boldsymbol{h}}\wedge\hat{\boldsymbol{e}}$ a vector perpendicular to $\boldsymbol{e}$ and $\boldsymbol{h}$. Here, $\eta$ is the eccentric anomaly, which is related to the mean anomaly $\ell$ via $ \ell=\eta-e\sin\eta, $ such that $\mathrm{d}\ell=(1-e\cos\eta)\mathrm{d}\eta$ and an orbit average becomes $$ \langle\cdot\rangle = (2\pi)^{-1}\int_0^{2\pi}\cdot\;\mathrm{d}\ell = (2\pi)^{-1}\int_0^{2\pi}\cdot\;(1-e\cos\eta)\mathrm{d}\eta. $$ Taking the time derivative (note that $\dot{\ell}=\Omega=\sqrt{GM/a^3}$ the orbital frequency) of $\boldsymbol{r}$, we find for the instantaneous (unperturbed) orbital velocity $$ \boldsymbol{v}=v_c\frac{\sqrt{1-e^2}\cos\eta\,\hat{\boldsymbol{k}}-\sin\eta\,\hat{\boldsymbol{e}}}{1-e\cos\eta} $$ where I have introduced $v_c\equiv\Omega a=\sqrt{GM/a}$, the speed of the circular orbit with semimajor axis $a$. From this, we find the radial velocity $v_r=\hat{\boldsymbol{r}}{\cdot}\boldsymbol{v}=v_c e\sin\eta(1-e\cos\eta)^{-1}$ and the rotational velocity $$ \boldsymbol{v}_t = v_c\frac{\sqrt{1-e^2}(\cos\eta-e)\,\hat{\boldsymbol{k}}-(1-e^2)\sin\eta\,\hat{\boldsymbol{e}}}{(1-e\cos\eta)^2}. $$
With these, we have [corrected again] $$ \left\langle \frac{h^2v_r\boldsymbol{r}}{r^4}\right\rangle = \Omega v_c^2\,\hat{\boldsymbol{k}}\, \frac{e(1-e^2)^{3/2}}{2\pi}\int_0^{2\pi}\frac{\sin^2\!\eta}{(1-e\cos\eta)^4}\mathrm{d}\eta =\frac{\Omega v_c^2e}{2(1-e^2)}\hat{\boldsymbol{k}} \\ \left\langle \frac{v_r^2\boldsymbol{v}_t}{r}\right\rangle = \Omega v_c^2\, \hat{\boldsymbol{k}}\, \frac{e^2(1-e^2)^{1/2}}{2\pi}\int_0^{2\pi}\frac{\sin^2\!\eta(\cos\eta-e)}{(1-e\cos\eta)^4}\mathrm{d}\eta=0, $$ in particular, the components in direction $\hat{\boldsymbol{e}}$ average to zero. Thus [corrected again] $$\left\langle 2\frac{h^2v_r\boldsymbol{r}}{r^4}-\frac{v_r^2\boldsymbol{v}_t}{r}\right\rangle =\frac{\Omega v_c^2e\,\hat{\boldsymbol{k}}}{(1-e^2)} $$
WalterWalter
$\begingroup$ Comments are not for extended discussion; this conversation has been moved to chat. $\endgroup$
– called2voyage ♦
Not the answer you're looking for? Browse other questions tagged gravity eccentric-orbit or ask your own question.
How do you calculate the effects of precession on elliptical orbits?
On gravitational wave radiation and arrangement of galaxies post- big bang | CommonCrawl |
doi: 10.3934/mfc.2021018
Sharp upper bounds on the maximum $M$-eigenvalue of fourth-order partially symmetric nonnegative tensors
Yuyan Yao and Gang Wang
School of Management Science, Qufu Normal University, Rizhao Shandong, 276800, China
*Corresponding author: Gang Wang
Received June 2021 Revised August 2021 Early access September 2021
Fund Project: The authors were supported by the Natural Science Foundation of Shandong Province (ZR2020MA025), the Natural Science Foundation of China (12071250) and High Quality Curriculum of Postgraduate Education in Shandong Province (SDYKC20109)
$ M $-eigenvalues of partially symmetric nonnegative tensors play important roles in nonlinear elastic material analysis and in the entanglement problem of quantum physics. In this paper, we establish two upper bounds for the maximum $ M $-eigenvalue of partially symmetric nonnegative tensors, which improve some existing results. Numerical examples are given to verify the efficiency of the obtained results.
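As a rough way to experiment with maximum $M$-eigenvalues numerically, the sketch below alternately maximizes $\mathcal{A}xxyy$ over the unit vectors $x$ and $y$, each step solving a small symmetric matrix eigenproblem. This is only a heuristic in the spirit of the practical power-type method of Wang, Qi and Zhang for computing the largest $M$-eigenvalue (cited in the reference list below): it is not that algorithm, nor a check of the bounds proved in this paper, it may stop at a local maximizer, and the random partially symmetric test tensor is purely illustrative.

```python
import numpy as np

def max_m_eigenvalue(A, iters=200, tol=1e-10, seed=0):
    """Alternating maximization of A x x y y over unit vectors x, y.
    A has shape (m, m, n, n) with a_ijkl = a_jikl = a_ijlk (nonnegative entries)."""
    m, _, n, _ = A.shape
    rng = np.random.default_rng(seed)
    x = rng.random(m); x /= np.linalg.norm(x)
    y = rng.random(n); y /= np.linalg.norm(y)
    lam = np.einsum('ijkl,i,j,k,l->', A, x, x, y, y)
    for _ in range(iters):
        M = np.einsum('ijkl,k,l->ij', A, y, y)        # M(y)_ij = sum_kl a_ijkl y_k y_l
        x = np.linalg.eigh((M + M.T) / 2)[1][:, -1]   # leading eigenvector
        N = np.einsum('ijkl,i,j->kl', A, x, x)        # N(x)_kl = sum_ij a_ijkl x_i x_j
        y = np.linalg.eigh((N + N.T) / 2)[1][:, -1]
        new = np.einsum('ijkl,i,j,k,l->', A, x, x, y, y)
        if abs(new - lam) < tol:
            return new, x, y
        lam = new
    return lam, x, y

# Toy usage: a random nonnegative tensor with the required partial symmetries.
rng = np.random.default_rng(1)
A = rng.random((3, 3, 3, 3))
A = (A + A.transpose(1, 0, 2, 3)) / 2
A = (A + A.transpose(0, 1, 3, 2)) / 2
print(max_m_eigenvalue(A)[0])
```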
Keywords: Partially symmetric nonnegative tensors, maximum $ M $-eigenvalue, upper bounds.
Mathematics Subject Classification: Primary: 15A18, 15A42; Secondary: 15A69.
Citation: Yuyan Yao, Gang Wang. Sharp upper bounds on the maximum $M$-eigenvalue of fourth-order partially symmetric nonnegative tensors. Mathematical Foundations of Computing, doi: 10.3934/mfc.2021018
H. Che, H. Chen and Y. Wang, On the $M$-eigenvalue estimation of fourth-order partially symmetric tensors, J. Ind. Manag. Optim., 16 (2020), 309-324. doi: 10.3934/jimo.2018153.
S. Chirita, A. Danescu and M. Ciarletta, On the strong ellipticity of the anisotropic linearly elastic materials, J. Elasticity, 87 (2007), 1-27. doi: 10.1007/s10659-006-9096-7.
B. Dacorogna, Necessary and sufficient conditions for strong ellipticity for isotropic functions in any dimension, Discrete Contin. Dyn. Syst. Ser. B., 1 (2001), 257-263. doi: 10.3934/dcdsb.2001.1.257.
G. Dahl, J. M. Leinaas, J. Myrheim and E. Ovrum, A tensor product matrix approximation problem in quantum physics, Linear Algebra Appl., 420 (2007), 711-725. doi: 10.1016/j.laa.2006.08.026.
W. Ding, L. Qi and Y. Wei, $M$-tensors and nonsingular $M$-tensors, Linear Algebra Appl., 439 (2013), 3264-3278. doi: 10.1016/j.laa.2013.08.038.
W. Ding, J. Liu, L. Qi and H. Yan, Elasticity $M$-tensors and the strong ellipticity condition, Appl. Math. Comput., 373 (2020), 124982. doi: 10.1016/j.amc.2019.124982.
A. Doherty, P. Parillo and M. Spedalieri, Distinguishing separable and entangled states, Phys. Rev. Lett., 88 (2002), 187904. doi: 10.1103/PhysRevLett.88.187904.
D. Han, H. Dai and L. Qi, Conditions for strong ellipticity of anisotropic elastic materials, J. Elasticity, 97 (2009), 1-13. doi: 10.1007/s10659-009-9205-5.
C. Li, Y. Li and X. Kong, New eigenvalue inclusion sets for tensors, Numer. Linear Algebra Appl., 21 (2014), 39-50. doi: 10.1002/nla.1858.
C. Li and Y. Li, An eigenvalue localization set for tensors with applications to determine the positive (semi-) definiteness of tensors, Linear Multilinear Algebra, 64 (2016), 587-601. doi: 10.1080/03081087.2015.1049582.
S. Li and Y. Li, Bounds for the $M$-spectral radius of a fourth-order partially symmetric tensor, J. Inequal. Appl., 218 (2018), 7pp. doi: 10.1186/s13660-018-1610-5.
L. Qi, H. Dai and D. Han, Conditions for strong ellipticity and $M$-eigenvalues, Front. Math. China., 4 (2009), 349-364. doi: 10.1007/s11464-009-0016-6.
L. Qi and Z. Luo, Tensor Analysis: Spectral Theory and Special Tensors, SIAM, 2017.
C. Sang, A new Brauer-type $Z$-eigenvalue inclusion set for tensors, Numer. Algorithms, 80 (2019), 781-794. doi: 10.1007/s11075-018-0506-2.
J. Walton and J. Wilber, Sufficient conditions for strong ellipticity for a class of anisotropic materials, Int. J. Non-Linear Mech., 38 (2003), 411-455. doi: 10.1016/S0020-7462(01)00066-X.
G. Wang, G. Zhou and L. Caccetta, $Z$-eigenvalue inclusion theorems for tensors, Discrete Contin. Dyn. Syst. Ser. B., 22 (2017), 187-198. doi: 10.3934/dcdsb.2017009.
G. Wang, Y. Wang and Y. Wang, Some Ostrowski-type bound estimations of spectral radius for weakly irreducible nonnegative tensors, Linear Multilinear Algebra, 68 (2020), 1817-1834. doi: 10.1080/03081087.2018.1561823.
G. Wang, L. Sun and L. Liu, $M$-eigenvalues-based sufficient conditions for the positive definiteness of fourth-order partially symmetric tensors, Complexity, 2020 (2020), 2474278.
G. Wang and Y. Zhang, $Z$-eigenvalue exclusion theorems for tensors, J. Ind. Manag. Optim., 16 (2020), 1987-1998. doi: 10.3934/jimo.2019039.
G. Wang, L. Sun and X. Wang, Sharp bounds of the minimum $M$-eigenvalue of elasticity $Z$-tensors and identifying strong ellipticity, J. Appl. Anal. Comput., 11 (2021), 2114-2130.
Y. Wang, L. Qi and X. Zhang, A practical method for computing the largest $M$-eigenvalue of a fourth-order partially symmetric tensor, Numer. Linear Algebra Appl., 16 (2009), 589-601. doi: 10.1002/nla.633.
G. Zhou, L. Qi and S. Wu, Efficient algorithms for computing the largest eigenvalue of a nonnegative tensor, Front. Math. China, 8 (2013), 155-168. doi: 10.1007/s11464-012-0268-4.
G. Zhou, G. Wang, L. Qi and M. Alqahtani, A fast algorithm for the spectral radii of weakly reducible nonnegative tensors, Numer. Linear Algebra Appl., 25 (2018), e2134. doi: 10.1002/nla.2134.
L. Zubov and A. Rudev, On necessary and sufficient conditions of strong ellipticity of equilibrium equations for certain classes of anisotropic linearly elastic materials, ZAMM Z. Angew. Math. Mech., 96 (2016), 1096-1102. doi: 10.1002/zamm.201500167.
Table 1. Upper bounds on the maximum $M$-eigenvalue $ \rho_M(\mathcal{A}) $ obtained by different results.
Theorem 1 of [11] $ \rho_M(\mathcal{A})\leq 12.6843 $
Theorem 3.1 of [1] $ \rho_M(\mathcal{A})\leq 15.7124 $
Theorem 3.2 $ \rho_M(\mathcal{A})\leq 8.1096 $
Theorem 1 of [11] $ \rho_M(\mathcal{A})\leq 5.3333 $
Theorem 3.1 of [1] $ \rho_M(\mathcal{A})\leq 4.7889 $
Haitao Che, Haibin Chen, Yiju Wang. On the M-eigenvalue estimation of fourth-order partially symmetric tensors. Journal of Industrial & Management Optimization, 2020, 16 (1) : 309-324. doi: 10.3934/jimo.2018153
Haitao Che, Haibin Chen, Guanglu Zhou. New M-eigenvalue intervals and application to the strong ellipticity of fourth-order partially symmetric tensors. Journal of Industrial & Management Optimization, 2021, 17 (6) : 3685-3694. doi: 10.3934/jimo.2020139
Yining Gu, Wei Wu. Partially symmetric nonnegative rectangular tensors and copositive rectangular tensors. Journal of Industrial & Management Optimization, 2019, 15 (2) : 775-789. doi: 10.3934/jimo.2018070
Chaoqian Li, Yaqiang Wang, Jieyi Yi, Yaotang Li. Bounds for the spectral radius of nonnegative tensors. Journal of Industrial & Management Optimization, 2016, 12 (3) : 975-990. doi: 10.3934/jimo.2016.12.975
Chong Wang, Gang Wang, Lixia Liu. Sharp bounds on the minimum $M$-eigenvalue and strong ellipticity condition of elasticity $Z$-tensors-tensors. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021205
Zhen Wang, Wei Wu. Bounds for the greatest eigenvalue of positive tensors. Journal of Industrial & Management Optimization, 2014, 10 (4) : 1031-1039. doi: 10.3934/jimo.2014.10.1031
Jun He, Guangjun Xu, Yanmin Liu. Some inequalities for the minimum M-eigenvalue of elasticity M-tensors. Journal of Industrial & Management Optimization, 2020, 16 (6) : 3035-3045. doi: 10.3934/jimo.2019092
Guimin Liu, Hongbin Lv. Bounds for spectral radius of nonnegative tensors using matrix-digragh-based approach. Journal of Industrial & Management Optimization, 2021 doi: 10.3934/jimo.2021176
Juan Meng, Yisheng Song. Upper bounds for Z$ _1 $-eigenvalues of generalized Hilbert tensors. Journal of Industrial & Management Optimization, 2020, 16 (2) : 911-918. doi: 10.3934/jimo.2018184
Gang Wang, Yiju Wang, Yuan Zhang. Brualdi-type inequalities on the minimum eigenvalue for the Fan product of M-tensors. Journal of Industrial & Management Optimization, 2020, 16 (5) : 2551-2562. doi: 10.3934/jimo.2019069
Xifu Liu, Shuheng Yin, Hanyu Li. C-eigenvalue intervals for piezoelectric-type tensors via symmetric matrices. Journal of Industrial & Management Optimization, 2021, 17 (6) : 3349-3356. doi: 10.3934/jimo.2020122
Qilong Zhai, Ran Zhang. Lower and upper bounds of Laplacian eigenvalue problem by weak Galerkin method on triangular meshes. Discrete & Continuous Dynamical Systems - B, 2019, 24 (1) : 403-413. doi: 10.3934/dcdsb.2018091
Nur Fadhilah Ibrahim. An algorithm for the largest eigenvalue of nonhomogeneous nonnegative polynomials. Numerical Algebra, Control & Optimization, 2014, 4 (1) : 75-91. doi: 10.3934/naco.2014.4.75
Yaotang Li, Suhua Li. Exclusion sets in the Δ-type eigenvalue inclusion set for tensors. Journal of Industrial & Management Optimization, 2019, 15 (2) : 507-516. doi: 10.3934/jimo.2018054
Gang Wang, Guanglu Zhou, Louis Caccetta. Z-Eigenvalue Inclusion Theorems for Tensors. Discrete & Continuous Dynamical Systems - B, 2017, 22 (1) : 187-198. doi: 10.3934/dcdsb.2017009
Alexandre Girouard, Iosif Polterovich. Upper bounds for Steklov eigenvalues on surfaces. Electronic Research Announcements, 2012, 19: 77-85. doi: 10.3934/era.2012.19.77
Yining Gu, Wei Wu. New bounds for eigenvalues of strictly diagonally dominant tensors. Numerical Algebra, Control & Optimization, 2018, 8 (2) : 203-210. doi: 10.3934/naco.2018012
Wen Li, Wei-Hui Liu, Seak Weng Vong. Perron vector analysis for irreducible nonnegative tensors and its applications. Journal of Industrial & Management Optimization, 2021, 17 (1) : 29-50. doi: 10.3934/jimo.2019097
Hua Chen, Hong-Ge Chen. Estimates the upper bounds of Dirichlet eigenvalues for fractional Laplacian. Discrete & Continuous Dynamical Systems, 2022, 42 (1) : 301-317. doi: 10.3934/dcds.2021117
Chaoqian Li, Yajun Liu, Yaotang Li. Note on $ Z $-eigenvalue inclusion theorems for tensors. Journal of Industrial & Management Optimization, 2021, 17 (2) : 687-693. doi: 10.3934/jimo.2019129
The Development of Algebra - 2
Article by Leo Rogers
Published February 2011.
The first part of this brief history of algebra focussed on the important practical origins of the problems that led to the procedures we have for solving equations, and the ways in which the problems were visualised as manipulation of geometrical shapes.
This second part shows how the visual images slowly give way to literal representations, abbreviations, and finally, in the 17th century, to a more developed algebraic symbolism close to what we use in schools today [see: Note 1].
1. Mediaeval Algebra
The expansion of the Arab Empire into Asia Minor (Modern Turkey) at the end of the 11th century led to a series of Crusades to recapture the Holy Land. The major period of these military and political disturbances lasted until the end of the 14th century. In 1340 the 'Black Death' spread into Western Europe killing some 40% of the population by about 1370. In spite of these upheavals, exchange of ideas and translations of Arab scholarship were brought to Europe. After 1450 the printing press enabled many people to read the Latin translations of Arab and Greek science.
Mediaeval Algebra in Western Europe was first learnt from the works of al-Khowarizmi, Abu Kamil and Fibonacci. The algebra consisted of simple linear and quadratic equations and a few cubic equations, together with the methods for solving them; rules for operating with positive and negative numbers, finding squares, cubes and their roots; the rule of False Position (see History of Algebra Part 1 ) and the Rule of Three (simple proportion). These methods were applied to business and legal problems. There was some justification of the solution using diagrams, but mostly it was a matter of 'memorising the rules' and applying them to standard problems.
Leonardo Fibonacci (1170 - 1250)
Well known for his collection of mathematical techniques [see: Note 2] and the promotion of the Hindu numeral system in the Liber Abaci of 1202, he also wrote Flos , a book where he shows that the root of the cubic equation $10x + 2x^2 + x^3 = 20$ can neither be a rational number, nor the square root of a rational number [see: Note 3].
In his Book of Squares 1225 he turned much of Euclid's geometrical work into arithmetic, developing new ideas from Arab science, and other techniques used by the 'Abacus Masters' who taught commercial arithmetic [see: Note 4]. He organised the rules as a series of logical propositions and supported his arguments using proportional triangles, squares and rectangles, which as we have seen, are perfectly general.
Proposition 1 shows how the sum of the odd numbers always makes a perfect square. In our notation, the substance of Leonardo's argument is:$$(1+3+5+7)+9 = 4^2 + 9 = 5^2 = (1+3+5+7+9)$$
Proposition 2 states "Any square number exceeds the square before it by the sum of the roots." Leonardo's first example is simple:
$$5^2 - 4^2 = 25 - 16 \mbox{, which is } 9 \mbox {, the sum of } 5 \mbox { and } 4\mbox {, which are the roots of }25 \mbox{ and } 16.$$
Proposition 10 finds the sum of a sequence of square numbers: $$6(1^2 + 2^2 + 3^2 + \cdots + n^2) = n(n + 1)(2n + 1)$$
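As a quick modern check of Proposition 10 (not Leonardo's own example), take $n = 4$: $$6(1^2 + 2^2 + 3^2 + 4^2) = 6 \times 30 = 180 = 4 \times 5 \times 9 = n(n + 1)(2n + 1)$$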
Proposition 19 shows how to find numbers $A$, $B$, and $C$ such that: $$B^2 - B = A^2 \mbox { and } B^2 + B = C^2$$
These and many other numerical relationships helped mathematicians of the 15th and 16th centuries to develop techniques for solving quadratic and cubic equations.
Jordanus de Nemore (1225 - 1260)
The mediaeval student's attitude towards solving equations was quite different to ours. Today we arrange the algebra to isolate the unknown and we make $x$ the 'subject' of the equation. In Mediaeval times, the student worked with the known numbers in order to find the 'root'. Jordanus' book, De Numeris Datis (Concerning given numbers) was written about 1250 and is considered to be the first advanced algebra written in Western Europe since Diophantos . It offers a generalised treatment of quadratic, simultaneous and proportional equations. For solving quadratics, the procedure earlier described by Abu Kamil (c850 - c930) as 'completing the square' was simplified, Part 1 (Section 7 Case 5) , and for a problem like 'A square and 10 of its roots equal 39' you make a square, and attach to two of its sides rectangles whose length is half the number of roots, and proceed to fill in the empty space with another square.
In the diagram, since the rectangles are $5$ roots long, the dotted square must have area $25$. But the total of the square and roots is given as $39$, so the completed square is $39 + 25 = 64$, its side is $8$, and the side of the unknown square must be $8 - 5 = 3$.
The 'Datis ' shows how, by analysis of basic geometrical properties and using letters of the alphabet to represent numerical relationships, it is possible to establish generalised knowledge. Jordanus then illustrates each case with numerical examples.
For example, in Book IV Proposition 6, he shows that "If the ratio of two numbers and the sum of their squares is known, then each of the numbers is known." [see: Note 6].
In modern symbols, he proceeds like this:$$\mbox{Given, }x : y = a \mbox { and } x^2 + y^2 = b$$
Let $d$ be the square of $x$, and $c$ the square of $y$, and let $d + c$ be known , so we have:
$$x : y = a, ~~~~~~~x^2 = d, ~~~~~~~y^2 = c, ~~\mbox {and so } ~~d + c = b $$
Now the ratio of $d$ to $c$ is the square of the ratio of $x$ to $y$, so $$\frac{d}{c}=\frac{x^2}{y^2}=a^2 ~~~~~~~~\frac{d}{c}y^2=b-y^2 ~~~~~~~~ \left(\frac{d}{c}+1\right)y^2=b~~~~~~~~ \mbox {and so }y=\sqrt{\frac{b}{\left(a^2+1\right)}}$$
$$ \text{For example, if }\frac{x}{y}=2 \mbox { and } x^2+y^2=500 \mbox { then }y=\sqrt{\frac{500}{\left(4+1\right)}} = 10\mbox {, and }x=20 ~~~~~~~~~~~~~~~$$
The work of Jordanus was a significant change in the way equations were tackled. Much more emphasis was given to the number relationships that were based on proportional reasoning. It was the understanding of these relationships that was so important to Francois Viete later in the sixteenth century.
2. Early Renaissance Algebra
Nicolas Chuquet was described as an 'algoriste' and his manuscript on Le Triparty en la Science des Nombres (1484) was known only to a few of his contemporaries. 'Triparty' means three parts, and the first section was on Numbers and their operations; the second on Surds $\left(3+\sqrt{5}\right)$ and Roots of Surds $\sqrt{\left(3+\sqrt{5}\right)}$; and the third on Algebra, where he invented special symbols for the unknown, squares and cubes up to the fourth power, and a system of indices which included $x^0 = 1$. He also used the first letters of operations like p for plus and m for minus. His algebra was developed as a series of general methods and in this work negative numbers appear as coefficients, exponents and solutions to problems. His rules for solving arithmetic problems also used zero and negative numbers. Unfortunately his work was little known, and not published until 1880, but his ideas reappear in the early 17th century.
Luca Pacioli (1445 - 1517)
Pacioli was a contemporary of Chuquet, and famous for the Summa de Arithmetica, geometria, proportioni et proportionibus (1494) and the Divina Proportione (1509). Pacioli's works were important popular collections of current practical mathematics and were more useful for passing on known techniques and problems than for any original contributions. Pacioli is also famous for publishing the first description of double-entry book-keeping, where negative numbers had an obvious practical significance as debts or losses, and numerous works on accounting soon appeared in other languages, obviously copied from Pacioli [see: Note 7].
3. Late Renaissance and Early Modern Algebra
Girolamo Cardano (1501 - 1576)
Cardano earned his living as a doctor and by casting horoscopes; he wrote on probability and published other books but his importance for us rests on his Artis Magnae sive de Regulis Algebraicis Liber Unus (1545) "Of the Great Art, or the First Book on the Rules of Algebra" the 'Ars Magna' as it is often called [see: Note 8].
While the methods for solving quadratic problems were well known as a collection of geometrically based proportional relations and arithmetical algorithms, a unified and general approach was still not commonly available.
The big unsolved problem of the time was finding solutions to cubic equations. By this time mathematicians had identified about 13 different cases of cubic equations which included various combinations of cubes, squares and numbers [see: Note 9]. The Ars Magna contains the proportional methods and rules that had been developed by mathematicians before him, his own work was also a significant contribution, and he acknowledged the discoveries of his contemporaries.
Existing methods for solving cubic equations relied on finding substitutions to reduce them to quadratics; many of these tricks could only be applied to special cases. For example this is the first problem in Chapter XXV:
When the cube is equal to the first power and a constant, divide the coefficient of x into two parts such that the sum of each multiplied by the square root of the other is half the constant of the equation. The roots of these two parts added together make the value of $x$.
Example: $x^3 = 10x + 24$
Solution: "Ten divides into two parts, $9$ and $1$, either of which multiplied by the square root of the other makes $9$ and $3$, the sum of which is $12$, one half of $24$. Therefore R $9$ plus R $1$ which are $3$ and $1$ added together, produce $4$, the value of x."[see: Note 10]
Throughout the book, every rule was written in Latin; the only notations used were p for plus and m for minus and an elaborate capital R for radix, to indicate square roots.
The elaborate capital R (for radix) was invented by Regiomontanus (1436 - 1476); the familiar root sign $\sqrt{}$ did not come into common use until about 1630.
Discovering Imaginary Numbers
In Chapter XXXVII Cardano discusses the use of negative numbers in calculations, and in Rule II appears his first use of negative square roots.
The problem is to 'divide $10$ into two parts whose product is $40$'
His method is exactly the same as the 'Babylonian Algorithm' shown in Part 1 (Teachers' notes 5) and if we think of the problem as: $ x(10 - x) = 40$, we have a quadratic equation $x^2 + 40 = 10x$ (squares and numbers equal roots) with $10$ as the coefficient of $x$ which we divide in half, and proceed with the algorithm .
Cardano gives two solutions
$5$p : R : m $15$ and $5$m : R : m $15~~$. [see: Note 11]
The product of these two results is, in fact $40$, and he demonstrates how to solve four other problems that give negative square roots. He was clearly mystified by these 'imaginary' square roots of negative numbers, and this is still a problem for many who meet them for the first time.
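In modern notation (not Cardano's), the check that the product is indeed $40$ is a difference of squares: $$\left(5+\sqrt{-15}\right)\left(5-\sqrt{-15}\right) = 25 - (-15) = 40, \qquad \left(5+\sqrt{-15}\right)+\left(5-\sqrt{-15}\right) = 10,$$ so the two parts really do sum to $10$ and multiply to $40$.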
4. Introducing the 17th Century
Francois Viete (1540-1603)
In his two major works, In Artem Analyticem Isagoge (Introduction to the Analytic Art, 1591) and De Potestatum Resolutione (On the Numerical Resolution of Powers 1600) Viete made probably the most important contributions to the development of algebra at this time.
Viete used a consistent symbolic notation with vowels A, E, I, O, U and Y for unknown quantities, and given terms by the letters B, G, D and other consonants.
Addition and Subtraction used the symbols + and -. For Multiplication he used the word 'in' and for Division used the fraction bar. So $$\frac{\mbox {B in C}}{\mbox {AQ}} \mbox { meant } \frac{\mbox {BC}}{A^2}$$
Powers were N (for numerus - a pure number), Q (quadratus - a square) C (cubus - a cube) further powers were expressed by combinations of these symbols, so QQ was a fourth power, CQ, a fifth power and so on.
For Roots he used the symbol $L$ (for Latus - a side) and sometimes the $R$ symbol . So $L9$ meant the square root of $9$, and $LC 27$ meant the cube root of $27$.
All his equations were homogeneous - that means the dimensions of all the terms in the equation had to be the same.
In the equation $~~~~AQ + AB = Z Q, ~~~~~AQ$ and $ZQ$ represent squares, and AB a rectangle.
Some of these notational ideas were already being used; for example, Johannes Scheubel (1494 - 1570) writing in 1551 used special combinations of symbols for powers of the unknown:
The English mathematician, Robert Recorde used the same notation in his book [see: Note 12] on algebra in 1557 but describes the powers of numbers in this way:
Using his literal notation, Viete exposed the structural aspects of polynomial equations and gave solution methods for whole classes of equations.
Viete's solution of quadratic equations used the three proportional triangles in a semi-circle shown in Part 1 (Section 6. Greek Geometry)
The equation: $ A$ quadratus $+ AB = Z $ quadratus or $ (A^2 + AB = Z^2) $, in Viete's symbols is $AQ + AB = Z Q$, which can be written as $A(A+B)=ZZ$ and rearranged as equal proportions becomes: $$ \frac{A}{Z}=\frac{Z}{(A+B)}$$
In the diagram there are three lines of increasing magnitude, $FC, FD$ and $FB$.
$FC$ is $A$, $~~~~~FD$ is $Z~~$ and $FB$ is $A+B$
For three magnitudes in proportion, the well-known rule is:
'the product of the extremes is equal to the square of the mean'.
The lines $A$ and $(A+B)$ are the extremes and $Z$ is the mean .
For the equation $A^2 + 10A = 144, ~~~~~Z = 12 ~~~$ and $~~~A : 12 = 12 : (A+10)$ so we have to find a number $A$, so that the ratios are equal. The three numbers are $8, 12$ and $18$.
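As a quick check of Viete's example in modern notation: with $A = 8$ the three numbers $8$, $12$, $18$ are indeed in continued proportion, since $8 \times 18 = 144 = 12^2$, and $A^2 + 10A = 64 + 80 = 144$ as required.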
The Fundamental Theorem of Algebra
Thomas Harriot (c1560 - 1621) and Albert Girard (1595-1632)
By this time, many mathematicians were working on similar problems and developing their own notations. The most important of these were Thomas Harriot, an English mathematician and explorer, and Albert Girard, a Dutch Army Engineer. Harriot's work remains mainly in manuscript form, even today. His Artis Analyticae Praxis (The Practice of the Analytical Art) only appeared in 1631, well after his death, but his papers show that he had developed a sophisticated notation, almost like the one we use today, using $aa$ for $a^2, aaa$ for $a^3$ etc., and through this realised that multiplying brackets like $(b - a)(c - a)(d - a)$ led to a clear relationship between the roots and coefficients of an equation.
The first explicit statement of the idea that every polynomial equation of degree $n$ has $n$ roots appeared in 1629 in Girard's L'Invention Nouvelle en L'Algebre (A New Discovery in Algebra). Here, Girard states his basic theorem,
"Every algebraic equation .... admits of as many solutions as the denomination of the highest quantity indicates ..."
Girard gave examples, but did not show how he derived his theorem, and did not clearly account for 'imaginary' roots. The search for a general proof occupied mathematicians for many years to come.
Rene Descartes (1596 - 1650)
Even though many improvements in notation had begun to expose the structure of algebraic equations, and begun to transform the representation of the problems from geometric images to written expressions, mathematicians still used geometry as a way of demonstrating the 'truth' and generality of the algebra.
In 1637 he published his 'Discours de la Methode', a work on 'how to undertake investigation to make ideas clear, and to seek for truth in the sciences'. As an example of this method he included La Geometrie, which states on the first page that,
"Any problem in geometry can easily be reduced to such terms that a knowledge of the length of certain straight lines is sufficient for its construction."
Descartes demonstrates how this may be done:
The equation is $z^2 \varpropto az+bb$ (the symbol $\varpropto$ is the usual transcription of Descartes' own sign for equality). Construct $LM$ and $LN$ so that $LM = b$ and $LN = \frac{a}{2}$
The unknown $z$, is $OM$. In triangle$NLM$, $$ NM^2 =\left(\frac{a}{2} \right)^2 + bb ~~\mbox { and }~~NM = \sqrt{\left(\frac{a}{2} \right)^2 + bb}~~~ \mbox{. So } z=\frac{a}{2}+\sqrt{\left(\frac{a}{2} \right)^2 + bb}.$$
However, Descartes still appealed to a theorem from Euclid Book II Proposition 6 to justify his solution to the reader.
The rectangle contained by $OM$ and $PM$, plus the square on $NP$, is equal in area to the square on $NM$. ($N$ bisects $OP$.) The area of the green rectangle is the same as the areas of the red rectangles.
$$ OM.PM + NP^2 = NM^2 $$
Here is Descartes' original description of the problem:
5. Notation and Representation [see: Note 13]
By the middle of the 17th Century the representation of elementary algebraic problems and relations looked much as it does today. The major factors influencing change were the printing press that provided wider communication of ideas, and the slow appreciation of the similarity in the structure of the algorithms brought about by the changes in notation. Most of this happened in the period from 1500 to 1650; by then the standard notation had become generally accepted.
There were no clear stages in this process. Some historians proposed a 'rhetorical' stage where all the problems are written out in words, sometimes in very complicated language; a 'syncopated' stage with a mixture of words and symbols; and a final 'symbolic' stage where the mathematics consists only of symbols. But this is not the case, even today when you look at a text, words are still there. Another aspect was the technical language. Translating from Greek and Arabic into Latin, and then into the common language of a country, new words were invented to describe new ideas, and were then taken over by others. For example, the German for 'thing' or the unknown, was 'Die Coss', the title of a book by Michael Stifel (1487 - 1567). So, in England, algebra became known as the Cossic Art. Borrowing from the French, the pentagon was known as a cinqangle and so on. There are many more examples among the writings of English mathematicians of the 16th and 17th centuries.
Representation of the objects, relationships, operations, and the structure of processes together with the evolution of the printing press, were the most important aspects which aided the development of mathematics during this period.
The signs for Addition and Subtraction first appear in print in 1526, and the Equals sign appears in 1557. The cross X for Multiplication is later, about 1628; the Colon (: ) for Division in 1633 and the Obelus ($\div$) in 1659. The signs for inequality , > and < are first used in 1631.
Square and other roots start as the capital R in about 1465, becoming our usual sign $\sqrt{}$ by 1630; Powers were first expressed by whole number indices in 1484 and had become widely accepted by 1637, and negative indices also appear in 1484, but fractional indices not until 1676.
The symbols for the unknown and for constants varied greatly. All kinds of signs and combinations of signs were initially used. Jordanus used letters to replace numbers as a sign of generality; Viete was the first to use vowels (A, E, I, O, U) for the unknown and consonants (B, C, D, ...) for known quantities; and Descartes introduced the convention of letters at the end of the alphabet (x, y, z) for unknowns and at the beginning (a, b, c) for known quantities, which is what we use today.
N.B. Teachers' notes related to the history of algebra discussed here can be found by clicking on the 'Notes' tab at the top of this article .
It is important to note that there were many people throughout this period from the 11th to the 17th century who made significant contributions to the story of the development of algebra, and have not been mentioned here. Further information can be found by consulting the references.
We are very lucky that copies of Fibonacci's books have survived. The Liber Abaci (1202 ), Practica Geometriae (1220), Flos and the Book of Squares both produced in 1225 tell us a great deal about mathematics in the early Mediaeval period.
Fibonacci's approximate result is correct to nine decimal places. This equation was solved by Omar Khayyam (1048 - 1122) using the intersection of a circle with a hyperbola.
The Abacus Schools were training courses for merchants in commercially useful arithmetic, but they also included 'puzzle problems'. Fibonacci has been mistakenly seen as the father of the Abacus Schools, but they existed well before his time.
The Arithmetica of Diophantos (c200 - c284) had been translated and developed by the Arabs and was available in Latin at this time.
I have taken this example from the translation by Barnabas Hughes, pages 167-168.
It is interesting to contrast the social circumstances and the quality of the work of Chuquet and Pacioli. Chuquet had access to many mathematical works, and made the most of his opportunities in developing original ideas. However he was relatively isolated, hardly went outside his home city of Lyon, and his manuscript work was not printed at the time. Pacioli on the other hand was born into a commercial milleu in Italy, was known to two famous artists, Piero della Francesca and Leone Battista Alberti, was a tutor to the sons of powerful people in Venice and Rome, and had his books printed. This is not to denigrate Pacioli, but only to point out that different circumstances and the printing press played a large part in their fortunes.
Most recent histories of mathematics give versions of the story of the solution of equations in 15th and 16th century Italy. Chapter 4 of John Derbyshire's Unknown Quantity gives a good account of the convolutions surrounding Cardan's work.
We know that a cubic equation has three roots that are real or imaginary according to whether the graph cuts the x-axis in three places, touches the axis at a minimum, or cuts it only once. Clearly, these concepts were not available to Cardano.
The substitution 'trick' comes from the works of Fibonacci and Jordanus: in our notation, if $x^3=ax+N$, let $a=f+g $ and let $ f \sqrt{g}+g\sqrt{f}=\frac{N}{2}$ then $x=\sqrt{f}+\sqrt{g}$
$5+\sqrt{-15}$ and $5-\sqrt{-15}$
Robert Recorde The Whetstone of Witte 1557. Another interesting aspect of the evolution of mathematical understanding is the development of a universal technical language. In England, many words were taken over from French (moitie meaning half) and German (zenzike meaning square). This is where the strange z symbol comes from.
For websites with more detailed information on the development of notation see the list of References.
a) General sources covering the mathematics of the Middle Ages and Renaissance:
Boyer, C. B. (1968) A History of Mathematics . London. John Wiley. A popular book with many reprints. Chapters XV and XVI cover the Middle Ages and the Renaissance.
Cajori, F. A. (2007) A History of Mathematical Notations 2 Vols. . This is the principal source book for information in this area. Originally published in 1928/9 by Open Court, there is a new paperback edition available from Amazon at £ 17 for each volume. A bargain for your college library.
Derbyshire, J. (2008) The Unknown Quantity. London. Atlantic Books. Now in paperback at £9.99, this is a popular story of the problem of finding the 'thing' of ancient algebra up to the 20th Century. The Introduction and Part 1 up to page 94 cover the ideas in both parts of this NRICH account. There are useful sections giving reasonably straightforward explanations of the mathematics involved.
Katz, Victor, J. (1999) (Second Edition, Corrected) A History of Mathematics : An Introduction Harlow, England. Addison-Wesley The best and most comprehensive and up-to-date general history of mathematics available. Chapter 9 'Algebra in the Renaissance' (pages 342 - 384) covers most of the material in this article.
Kline, M. (1972) Mathematical Thought from Ancient to Modern Times. Oxford. O.U.P. Before Katz, this was the best available and has been reprinted a number of times. Chapters 11 to 13 on the Renaissance and its mathematics are still very useful.
b) More specialised sources available in translation:
Girolamo Cardano (1545) The Great Art, or the Rules of Algebra. (Translated by R. Witmer) 1968. M.I.T. Press.
Nicolas Chuquet, Renaissance Mathematician (1985) Translated by Graham Flegg, Cynthia Hay and Barbara Moss of Le Triparty en la Science des Nombres . Lancaster Reidel Publishing Company.
Rene Descartes (1637) La Geometrie (Translated by D.E. Smith and Marcia Latham) 1954. This Dover edition of the Geometry is still available. The English translation is on one page, and a facsimile of the original French on the facing page. The French is not too difficult, and the notation for a quadratic equation virtually the same as today.
Leonardo Fibonacci The Book of Squares (Translated by L.E. Sigler ) 1987) London. Academic Press.
Jordanus de Nemore De Numeris Datis (Translated by Barnabas Hughes) 1981 University of California Press.
Recorde, Robert (1557) The Whetstone of Witte From the original English Text.
Johannes Scheubel (1551) Algebrae Compendiosa From the Original Latin Text
Francois Viete (1591) The Analytic Art (Translated by R. Witmer) 1983 Kent State University Press
The 'MACTUTOR ' is the most comprehensive easily navigable website for the History of Mathematics.
http://www-history.mcs.st-and.ac.uk/history/
Here you can find the biographies of the mathematicians mentioned in this article, including some detail of the relevant mathematics involved.
The earliest use of various mathematical symbols can be found at:
http://jeff560.tripod.com/mathsym.html
NRICH Links
Mathematical symbols:
http://nrich.maths.org/public/viewer.php?obj_id=2549
Girard stated the Fundamental Theorem of Algebra, but the proof for all kinds of polynomials was difficult to achieve.
Proof: A Brief Historical Survey:
The search for general solutions to polynomial equations led to the development of Galois Theory.
Introduction to Galois Theory:
Here is an Algebra Problem from 1525.
Rudolff's Problem:
http://nrich.maths.org/public/viewer.php?obj_id=278
Diophantos' mathematics was translated by the Arabs in the 10th century. Many of his problems appeared in the work of Fibonacci and Jordanus
Diophantine N-tuples:
The first diagram in this article represents some ideas that are also found in Fibbonacci's Book of Squares
Picturing Pythagorean Triples:
This is a modern treatment of the 'Imaginary' numbers discovered by Cardano
What are Complex Numbers? | CommonCrawl |
Cluster B personality disorders and its associated factors among psychiatric outpatients in Southwest Ethiopia: institutional-based cross-sectional study
Muzeyen Jemal1,
Worknesh Tessema2 &
Liyew Agenagnew2
Diagnosis of co-occurring personality disorders in psychiatric patients, particularly the highly comorbid cluster B personality disorders, is clinically important because of their association with the duration, recurrence, and outcome of the comorbid disorders. This study aimed to assess the prevalence of cluster B personality disorders and associated factors among psychiatric outpatients at Jimma Medical Center.
An institution-based cross-sectional study was conducted among 404 patients with mental illnesses at Jimma Medical Center from July 15 to September 14, 2021. A systematic random sampling method was used to recruit the participants. The Personality Diagnostic Questionnaire 4 (PDQ-4) was used to assess the prevalence of cluster B personality disorders through face-to-face interviews. Data were entered into Epi Data Version 4.6 and exported to SPSS Version 26 for analysis. Logistic regression analysis was done, and variables with a p-value less than 0.05 at a 95% confidence interval in the final fitted model were declared independent predictors of cluster B personality disorders.
Among the 401 respondents (response rate 99.3%), slightly less than one-fourth (23.19%, N = 93) were found to have cluster B personality disorders. Being unable to read and write (AOR = 3.28, 95%CI = 1.43–7.51), unemployment (AOR = 2.32, 95%CI = 1.19–4.49), a diagnosis of depressive (AOR = 3.72, 95%CI = 1.52–9.10) or bipolar-I disorder (AOR = 2.94, 95%CI = 1.37–6.29), longer duration of illness (AOR = 2.44, 95%CI = 1.33–4.47), multiple relapses (AOR = 2.21, 95%CI = 1.18–4.15), a family history of mental illness (AOR = 2.05, 95%CI = 1.17–3.62), recent cannabis use (AOR = 4.38, 95%CI = 1.61–11.95), recent use of alcohol (AOR = 2.86, 95%CI = 1.34–6.10), starting to use substances at an earlier age (AOR = 4.42, 95%CI = 1.51–12.96), and a history of suicide attempt (AOR = 2.24, 95%CI = 1.01–4.96) were the factors significantly associated with cluster B personality disorders.
The prevalence of cluster B personality disorders was high among mentally ill outpatients. It is therefore important for mental health professionals working in outpatient departments to screen for cluster B personality disorders as part of their routine activities, particularly in patients who have mood disorders, a longer duration of illness, multiple relapses, a family history of mental illness, a history of suicide attempt, or current use of alcohol and cannabis.
Personality disorders (PDs) are defined as "an enduring pattern of inner experience and behavior that deviates markedly from the expectations of the individual's culture, is pervasive and inflexible, has an onset in adolescence or early adulthood, it is stable over time, and leads to distress or impairment". It is categorized into three clusters (A, B, and C), personality changes due to another medical condition, other specified personality disorders, and unspecified personality disorders. Cluster B or dramatic cluster consists of 4 subtypes, which are antisocial, borderline, histrionic, and narcissistic PD [1]. They are excessively demanding, manipulative, emotionally unstable, interpersonally inappropriate, and may attempt to create relationships that cross professional boundaries that place physicians in difficult or compromising positions [2].
Cluster B PDs are the most common personality disorders in clinical settings and are characterized by severe functional impairments, substantial treatment utilization, and a high mortality rate by suicide, which is 50 times higher than the rate in the general population [3, 4]. People with these disorders present with psychosocial functioning problems, suicidal behaviors, and more psychiatric comorbidities, especially with other personality disorders, substance misuse, and other axis I conditions [5,6,7]. Patients with comorbid axis-I and cluster B PDs present with an earlier onset; more severe suicide attempts, hospitalizations, and self-harm behaviors; and more impairment in functioning than patients with axis-I disorders alone [3, 8]. Many studies have reported that factors such as unemployment, mood disorders, a family history of mental illness, the number of relapses and admissions, substance abuse, and suicidal behaviors are directly associated with cluster B PDs [9,10,11], while factors such as age and educational level are inversely related [7].
Lastly, cluster B PDs are chronic conditions associated with a multitude of medical and social problems [12] and are becoming increasingly common in mental health services, the judicial system, and prison settings [13]. Therefore, diagnosing a co-occurring personality disorder in psychiatric patients with another disorder is clinically important because of its association with the duration, recurrence, and outcome of the comorbid disorders [14].
Studies of the frequency and correlates of PDs should be replicated in clinical populations where the disorder and comorbidity rates are higher to provide the clinicians with information that has more direct clinical uses [15, 16]. Cluster B personality disorders are the most frequent among outpatients [16, 17]; have the highest prevalence of any co-occurrence with other mental illnesses (83.8%) with a predominance of mood disorders (48.8%) [17].
Globally, the overall prevalence estimate of cluster B personality disorders among mentally ill outpatients was 23% [18], ranging from 9.8% [19] to 66.7% [20]. These disorders influence the prognosis, costs, and response to treatment of many clinical syndromes; increase patient morbidity and mortality; and are significantly associated with impairments in global functioning, cognition, and social interaction [19, 21, 22]. Moreover, personality disorders are a predisposing factor for many other psychiatric disorders, including substance use disorders, suicide, mood disorders, impulse-control disorders, eating disorders, and anxiety disorders [21].
Despite the importance of diagnosing these disorders, clinicians give them little attention in their daily activities [3] and are sometimes reluctant to diagnose them [23], especially in developing countries. Moreover, since PD is by its nature ego-syntonic [21], most patients who present for treatment do not complain about it to their clinician. As a result, it is underdiagnosed and receives very little consideration in treatment, despite its substantial contribution to social and functional impairment, to adverse outcomes such as more prolonged hospitalizations and more episodes of illness, and to high direct costs through heavy utilization of healthcare systems.
Almost all of the studies done in this area are from developed countries, and most of them were done on subclinical populations. Thus, detection and treatment of these disorders among psychiatric outpatients is of far-reaching significance for minimizing adverse outcomes and reducing the associated mortality and morbidity, especially in developing countries, where data are currently unavailable for the study area. Recognizing the magnitude of the problem is important for designing early and appropriate interventions and for giving health professionals insight into the prevalence of cluster B PDs, which is vital for identifying treatment needs and for the provision of psychiatric services. It will also help clinicians understand patients with this problem and improve their quality of life by addressing their problems accordingly. Moreover, it will lay the groundwork for further studies and add to the limited body of literature on the prevalence of cluster B PDs in the developing region. Thus, the overall aim of this study was to assess cluster B PDs and associated factors among mentally ill patients attending outpatient treatment at Jimma Medical Center (JMC).
This institution-based cross-sectional study was conducted from July 15 to September 14, 2021, at JMC. JMC is located in Jimma town, Oromia regional state, 352 km southwest of Addis Ababa. It is one of the oldest governmental hospitals, established in 1937 during the Italian occupation for the service of their soldiers. Since then, it has been running as a public hospital under the Ministry of Health under different names, currently "JMC", and provides services to about 15 million people in southwest Ethiopia. The psychiatric clinic of JMC was established in 1996, next to Amanuel Mental Health Specialized Hospital. Currently, more than 1000 patients attend follow-up treatment at the outpatient department (OPD) each month. Officially, the psychiatric clinic has 60 beds for inpatient services and 4 OPDs.
All patients with mental illnesses attending outpatient treatment at JMC were the source population of the study, while those who were available during data collection were the study population. Patients with mental illnesses aged 18 years and above were included, whereas those who were acutely disturbed and unable to communicate well were excluded.
Sample size determination
The sample size was determined using the single population proportion formula, with the assumptions of a 95% confidence interval, a 5% margin of error, and a proportion (p) of cluster B PDs of 50%.
Where n = minimum required sample size
$$n=\frac{\left(z_{\alpha/2}\right)^{2}pq}{d^{2}}=\frac{(1.96)^{2}\times 0.5\times 0.5}{(0.05)^{2}}=\frac{0.9604}{0.0025}\approx 384$$
By adding a 5% non-response rate, the final sample size was n = 404.
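Purely as an illustration (not part of the original analysis), the calculation above can be reproduced in a few lines of Python; the inputs (Z = 1.96, p = 0.5, d = 0.05, 5% non-response) are the ones stated in the text.

import math

z = 1.96   # critical value for a 95% confidence interval
p = 0.5    # assumed proportion of cluster B PDs
d = 0.05   # margin of error

n = (z ** 2) * p * (1 - p) / d ** 2   # single population proportion formula -> 384.16
n_final = math.ceil(n * 1.05)         # add a 5% non-response allowance -> 404

print(round(n), n_final)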
Sampling techniques and procedures
Systematic random sampling was used to select a representative sample. The sampling interval was obtained by dividing the total number of patients expected to visit the outpatient department within the two-month period by the final sample size: K = 2000/404 ≈ 5. Thus, every fifth patient was selected using the order of the registration book. The registration book is the book in which every patient's name and medical record number are recorded immediately after arrival, before any clinical service is given. Data were collected through face-to-face interviews using pre-tested, interviewer-administered questionnaires. Four BSc psychiatry professionals were employed for the two-month data collection period and were supervised by one mental health specialist. Study participants were identified by the data collectors by reviewing the patients' registration book; data were then collected from the selected study participants.
Data collection instruments
The presence of cluster B personality disorder was measured using the Personality Diagnostic Questionnaire (PDQ-4+). It is a self-report tool with a true–false format; each item reflects a single DSM diagnostic criterion. In addition, a brief structured interview, the clinical significance scale, follows the self-report and either confirms or rejects the diagnosis for each PD scoring at or above the threshold. This interview directly reflects the principal DSM-IV/5 general criteria for PDs, assessing whether: (a) the trait is enduring (criterion D of the DSM); (b) it is present in the absence of a psychopathological state, the effects of a substance, or any medical condition (criteria E and F); and (c) it leads to distress or impairment (criterion C) [24]. It has proven to have suitable psychometric properties both in its original version and in its adaptations to other languages and cultures, and in clinical and non-clinical samples [25,26,27,28,29]. Its sensitivity ranges from 0.5 (histrionic PD) to 1 (antisocial PD) and its specificity from 0.90 (borderline PD) to 0.98 (histrionic and narcissistic PD) [30], and the diagnostic agreement (kappa) between the PDQ-4+ and the SCID-II was moderate (0.43) [25]. The reliability coefficient in this study was 0.93.
The level of social support was measured by the Oslo-3 social support scale. The sum score is categorized into three broad categories of social support: 3–8 poor social support, 9–11 moderate social support, and 12–14 strong social support [31]. Cronbach's alpha in this study was 0.83. The questionnaire also covered a range of topics including socio-demographic factors, clinical factors, substance use, and risky behaviors (questions assessing suicidal thought (passive and active), suicide attempt, homicidal thought, and homicidal attempt). Diagnoses of other mental illnesses were obtained from the patients' charts.
Data quality control
The questionnaire was first prepared in English and translated into Afaan Oromo and Amharic by two of the authors and language experts, with back-translation to English by other language experts and mental health specialists who were not familiar with the purpose of the study. Variations in meaning and translation differences were resolved through a focus-group discussion. Training was given to the four data collectors. A pre-test was conducted (5% of the sample size, n = 21) at Shenen Gibe general hospital, which is around 15 km from JMC. Regular supervision and support were provided to the data collectors by the supervisor and the principal investigator. Data were checked for completeness and consistency on a daily basis during the data collection period.
Operational definitions
Personality disorder
If an individual fulfilled the DSM-5 diagnostic threshold for a specific PD on the PDQ-4 measurement, confirmed by its clinical significance scale, the individual was considered to have a personality disorder [24].
Cluster B PD
If an individual was positive for at least one of four (borderline, antisocial, histrionic, and narcissistic) PDs [24].
Substance use
Ever (lifetime) and current (within the past three months) use of any psychoactive substance [32].
Social support
A score of 3–8 indicates poor, 9–11 moderate, and 12–14 strong social support on the Oslo-3 scale [31].
Data processing and analysis
Data were entered into Epi Data version 4.6 and analyzed with the Statistical Package for the Social Sciences (SPSS) version 26. Descriptive analysis was done using frequencies, percentages, means, and standard deviations. The self-reported PDQ-4+ scales were scored using the DSM-5 thresholds, and the clinical significance scale interview confirmed the diagnosis for screened-positive disorders, leading to dichotomous present/absent outcomes.
All variables were entered into a bivariate logistic regression to identify associated factors of cluster B PDs among people with a psychiatric disorder, and variables with a p-value < 0.25 were considered candidates for multivariable logistic regression analysis. In multivariable logistic regression analysis using a backward method, variables with a p-value less than 0.05 at a 95% confidence interval were considered statistically significant. Finally, the test for model fitness was done using the Hosmer–Lemeshow model test. The multicollinearity of the independent variables was checked by the variance inflation factor (VIF).
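Purely as an illustrative sketch (the actual analysis was done in SPSS, and the file and column names below are hypothetical), the screening and multivariable steps described above could be reproduced along these lines in Python with statsmodels; backward elimination and the Hosmer–Lemeshow test are omitted for brevity.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# One row per respondent; 'cluster_b' is the 0/1 outcome, the other
# columns are numerically coded predictors (hypothetical names).
df = pd.read_csv("pdq4_data.csv")
outcome = "cluster_b"
predictors = [c for c in df.columns if c != outcome]

# Bivariate screening: keep predictors with p < 0.25
candidates = []
for var in predictors:
    fit = sm.Logit(df[outcome], sm.add_constant(df[[var]])).fit(disp=0)
    if fit.pvalues[var] < 0.25:
        candidates.append(var)

# Multicollinearity check via variance inflation factors
X = sm.add_constant(df[candidates])
vif = {v: variance_inflation_factor(X.values, i + 1) for i, v in enumerate(candidates)}

# Multivariable model; AOR = exp(coefficient) with 95% CI
final = sm.Logit(df[outcome], X).fit(disp=0)
aor, ci = np.exp(final.params), np.exp(final.conf_int())
summary = pd.DataFrame({"AOR": aor, "CI_low": ci[0], "CI_high": ci[1], "p": final.pvalues})
print(summary)
print(vif)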
Sociodemographic characteristics of respondents
Of the 404 patients approached for an interview, a total of 401 participated in this study, giving a response rate of 99.3%. The mean age of the respondents was 34.69 (SD = ± 10.94) years. Details of the demographic characteristics of the study participants are presented in (Table 1).
Table 1 Socio-demographic characteristics of psychiatric outpatients in Jimma medical center, Southwest Ethiopia
Clinical related characteristics of respondents
The largest group of study participants (40.1%, N = 161) had a diagnosis of major depressive disorder, followed by schizophrenia (32.4%, N = 130). The mean duration of illness was 101.29 (SD = ± 73.4) months and the mean age at onset of illness was 26.53 (SD = ± 8.28) years. The mean duration of treatment was 86.92 (SD = ± 75.4) months and the mean number of admissions was 1.39 (SD = ± 0.49). The mean length of hospital stay for those admitted was 1.25 (SD = ± 0.44) months and the mean number of relapses was 1.54 (SD = ± 0.49). More than one-third (35.4%, N = 142) of the respondents had a family history of mental illness.
Substance use characteristic of respondents
The lifetime prevalence of alcohol, Khat, tobacco, and cannabis use among respondents was 16.2% (N = 65), 52.1% (N = 209), 16.5% (N = 66), and 6.2% (N = 25), respectively. About 8% (N = 32), 32% (N = 131), 8.5% (N = 34), and 9% (N = 36) of respondents were current users of alcohol, Khat, tobacco, and cannabis, respectively, and more than half of them (57.9%, N = 71) started to use substances before age 17.
Risky behaviors and social support status of respondents
Among the study participants, 47.1% (N = 189) had a history of passive suicidal thought, 33.9% (N = 136) active suicidal thought, 16% (N = 64) a suicide attempt, 19.2% (N = 77) homicidal thought, and 12.5% (N = 50) a homicidal attempt in their lifetime. Regarding social support, 44.6% (N = 179) of respondents reported poor social support according to the Oslo-3 social support scale.
Prevalence of cluster B personality disorder
Of all respondents, 93 (23.19%, 95%CI = 19–27) had a cluster B personality disorder as measured by the PDQ-4+ with its clinical significance scale. The frequencies of the individual cluster B personality disorders were 35 (8.7%, 95%CI = 6–12) for borderline, 29 (7.2%, 95%CI = 5–10) for antisocial, 26 (6.5%, 95%CI = 4–9) for narcissistic, and 13 (3.2%, 95%CI = 2–5) for histrionic personality disorder.
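As a quick illustrative check only (not part of the paper's analysis), the reported 95% confidence interval of roughly 19-27% for the overall prevalence is consistent with a normal-approximation interval for 93 cases out of 401:

import math

n, cases = 401, 93
p = cases / n                       # 0.2319
se = math.sqrt(p * (1 - p) / n)     # standard error of the proportion
lower, upper = p - 1.96 * se, p + 1.96 * se
print(f"{100*p:.2f}% (95% CI {100*lower:.1f}-{100*upper:.1f})")  # about 23.19% (19.1-27.3)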
Factors associated with cluster B personality disorders
Of the candidate variables from the bivariate analysis, educational status, a diagnosis of major depressive or bipolar-I disorder, duration of illness, number of relapses, family history of mental illness, suicide attempt, current use of alcohol and cannabis, and earlier age at starting substance use were found to be statistically significant in the multivariable analysis.
Participants who were unable to read and write were more than three times (adjusted odds ratio (AOR) = 3.28, 95%CI = 1.43–7.51) more likely to have a cluster B personality disorder than those with an educational status of college and above. Respondents who had no occupation were more than two times (AOR = 2.32, 95%CI = 1.19–4.49) more likely to have the disorder than their counterparts. Major depressive and bipolar-I disorder patients were nearly four times (AOR = 3.72, 95%CI = 1.52–9.10) and almost three times (AOR = 2.94, 95%CI = 1.37–6.29) more likely to have cluster B personality disorders than schizophrenia patients, respectively. Likewise, patients with a longer duration of illness (above the mean) and many relapses (above the mean) were more than twice as likely to have cluster B PDs as their counterparts (AOR = 2.44, 95%CI = 1.33–4.47 and AOR = 2.21, 95%CI = 1.18–4.15, respectively). Patients with a family history of mental illness were two times more likely to have a cluster B PD than those without (AOR = 2.05, 95%CI = 1.17–3.62), and cluster B PD was more than twice as likely to be present among those with a history of suicide attempt (AOR = 2.24, 95%CI = 1.01–4.96). Patients currently using alcohol were almost three times (AOR = 2.86, 95%CI = 1.34–6.10) more likely to have the disorder than non-users. Cluster B personality disorder was more than four times as likely to be present among respondents currently using cannabis (AOR = 4.38, 95%CI = 1.61–11.95) and among those who started to use substances early (before age 17) (AOR = 4.42, 95%CI = 1.51–12.96) (Table 2).
Table 2 Bivariate and multivariable analysis of factors associated with cluster B personality disorders among psychiatric outpatients in Jimma medical center, Southwest Ethiopia
In this study, the prevalence of cluster B PD among the respondents was found to be 23.19% (N = 93, 95%CI = 19–27). The finding is in agreement with studies conducted in Turkey, the Netherlands, and Italy, which reported prevalences of cluster B PD of 23.2%, 24.5%, and 25.8%, respectively [33,34,35].
The figure is higher than in studies conducted in Kenya, Rhode Island, and China, which reported prevalences of cluster B PD of 17.6%, 13%, and 9.8%, respectively [9, 16, 36]. The difference might be due to the instruments used: those studies used structured clinical interviews, which are known to report lower frequencies than self-report screening tools such as ours (PDQ-4+). Another issue that might explain the disparity is the study population: the Kenyan study was conducted among admitted patients, and the Chinese study recruited from both psychiatric and psycho-counseling clinics, in contrast to ours, which recruited only from the psychiatric outpatient department.
The prevalence in our study was lower than in studies conducted in Canada and Oxford, which reported prevalences of cluster B PDs of 32% and 28.8%, respectively [37, 38]. The difference is likely due to differences in the study participants: only alcohol use disorder patients participated in the Canadian study and only deliberate self-harm patients in the Oxford study. It might also be due to the difference in the tools used: structured interviews for DSM-IV PDs in Canada and the personality assessment schedule in Oxford.
Borderline PD was found to be two times more prevalent among females and antisocial PD three times more prevalent among males, while there was no significant sex difference for histrionic and narcissistic PDs. This is in agreement with a study conducted in Kenya among admitted patients [9]. Among the respondents, 4 (1%) had comorbid borderline and antisocial, 2 (0.5%) borderline and histrionic, and 2 (0.5%) borderline and narcissistic personality disorders. This indicates that borderline PD was comorbid with all other disorders within the cluster, which is supported by studies from different countries documenting that borderline PD is the PD most often comorbid with other PDs [39, 40].
Regarding factors associated with cluster B PD, respondents who could not read and write were more than three times more likely to have the disorder than those who had attended college and above. The finding is in line with a study from the USA which reported that educational status is inversely related to cluster B PDs [7]. Refusal to go to school, early dropout, and low educational attainment among those with cluster B PD could be the explanation. Respondents without an occupation were more than two times more likely to have the disorder than those with one. This finding is consistent with a study from the USA which reported a positive relationship between unemployment and the disorder [7]. In this study, cluster B PD was almost four and nearly three times more likely to be present among major depressive and bipolar-I disorder patients, respectively, than among schizophrenia patients. The finding is supported by a study from Kenya which found mood disorder to be the disorder most often comorbid with PD (46%) [9] and a study conducted in China which revealed that cluster B PD was more common among patients with affective disorders (12.2%) than among schizophrenia patients (3.8%) [10]. Respondents with a longer duration of illness (101.3 months and above) were more than two times more likely to have the disorder than their counterparts. The earlier onset of symptoms, obstacles to treatment such as non-adherence due to impaired interpersonal functioning, and poorer response to treatment among those with a comorbid PD could be the reason [35, 41,42,43].
In this study, the disorder was more than two times more likely to be present among respondents with multiple relapses. The finding is in agreement with studies from France and the Netherlands, which revealed that patients with comorbid PDs experienced more relapses than those without [11, 44]. Respondents with a family history of mental illness were two times more likely to have cluster B PD than those without. This is similar to the Kenyan study, which reported that a family history of mental illness was significantly associated with both positive and negative PD scores [9]. The reason might be that almost all psychiatric illnesses, including personality disorders, are genetically influenced and run in families [21].
The disorder was more than two times more likely to be present among participants with a history of suicide attempts than among their counterparts. The finding is supported by a study conducted in Japan which reported a greater number of suicide attempts among those with cluster B PD [45] and a study from Oxford which found suicide attempts to be more common among depressed patients with comorbid borderline PD than among depressed patients without this comorbidity [37]. It is also supported by the multisite collaborative longitudinal study, which reported that 12.5% of respondents with the disorder attempted suicide within three years of follow-up [46], and a study conducted in Turkey which documented that a history of suicide attempts was significantly more common in patients with a comorbid cluster B personality disorder [35]. Even though it was not associated with the prevalence of cluster B PDs, the frequency of homicidal attempts was high (12.5%) in this study, which is not a common finding in the outpatient department. This might be because JMC provides services to prisoners from the Jimma zonal correctional institute, as there was no psychiatric clinic providing services for patients within the compound, unlike in developed countries. However, at the beginning of 2022 (after this study was conducted), a psychiatric clinic was established in the correctional institute after long-standing efforts by the Jimma University psychiatry department staff.
Current users of alcohol were almost three times more likely to have the disorder than non-users. This is in agreement with studies conducted in Japan and the USA which documented a high frequency of alcoholism among those with cluster B PDs [7, 45]. The disorder was more than four times more likely to be found among participants who were current users of cannabis. The finding is supported by a study from Kenya which reported a significant association between cluster B PDs and cannabis use (31%) [9], a study conducted in Connecticut, southeastern USA, which reported the highest rate of recent cannabis use among individuals with PDs [47], and a study from Turkey which revealed that the frequency of cannabis use among participants with the disorder was 67% [48]. Additionally, our findings indicated that participants who started to use substances at an earlier age (before age 17) were more than four times more likely to have the disorder than those who started later. The reason might be common etiologic processes with early expression of impaired impulse control and affective dysregulation [49]. Other possible reasons include the age of onset of personality disorders in adolescence/early adulthood, a time of gaining independence from family/caregivers and of trying new experiences such as substance use despite its consequences; the type of defense mechanism most often used by this group, namely acting out; and their inability to conform to social norms [21]. A high proportion (52.1%) of the respondents in this study were Khat users, even though Khat use was not significantly associated with the prevalence of cluster B PD (p-value = 0.15, AOR = 1.96, 95%CI = 0.79–4.84). This high frequency could be due to the fact that southwest Ethiopia is one of the areas where the Khat plant is widely grown and Khat chewing is strongly encouraged culturally.
It is important to note that this study has some limitations. Firstly, the PDQ-4+ has not been validated in the languages of the study population, even though we found it adequate after a pretest. Additionally, some questions, such as those about the childhood caregiver, were assessed retrospectively over several years, so there is a chance of recall bias. Another limitation of the study is that the severity of substance use was not assessed.
This study revealed that the prevalence of cluster B personality disorders was high among mentally ill outpatients. A diagnosis of a mood disorder, longer duration of illness, multiple relapses, a family history of mental illness, a history of suicide attempts, recent use of alcohol and cannabis, and an early age at starting substance use were significantly associated with cluster B PDs. Therefore, it is important to give more emphasis to screening for comorbid cluster B PDs as a routine daily activity, to provide PD-oriented psychotherapy, and to engage and retain these patients in treatment, which helps to improve the course and treatment of the other disorder that patients typically identify as their chief complaint.
The datasets used for analysis in this study are not publicly available due to the privacy of our respondents and because they contain additional information that was not included in this paper, but they are available from the corresponding author on reasonable request.
COR:
Crude odds ratio
DSM:
Diagnostic and statistical manual of mental disorders
JMC:
Jimma medical center
OPD:
Outpatient department
PDs:
Personality disorders
PDQ-4+:
Personality disorders questionnaire four-plus
SCID-II:
Structured clinical interview for DSM-IV personality disorder
Arlington VA, Jeffrey Akaka MD, Carol A. Bernstein, Crowley AS, Everett J, Geller MP. Diagnostic and statistical manual of mental disorders, fifth edition. American Psychiatric Association; 2013. p. 645.
Randy K, Ward M. Assessment and management of personality disorders. Am Fam Physician. 2004;70(8):1505–12.
Gunderson JG, Mcglashan TH, Dyck IR, Stout RL, Ph D, Bender DS, et al. Functional impairment in patients with schizotypal, borderline, avoidant, or obsessive-compulsive personality disorder. Am J Psychiatry. 2002;159:276–83.
Oldham JM, Gabbard GO, Soloff P, Spiegel D, Stone M, Phillips KA, et al. Treatment of Patients With borderline personality disorder. APA Pract Guidel. 2005;107:1–82.
Health NCC for M, Unit RC of PR and T. Borderline personality disorder : The nice guideline on treatment and management. British Library Cataloguing-in-Publication Data; 2009. p. 433–44.
Hasin DS, Stinson FS, Ogburn E, Grant BF. Prevalence, correlates, disability, and comorbidity of DSM-IV alcohol abuse and dependence in the United States. Arch gen psychiatry. 2007;64(7):830–42.
Lenzenweger MF, Lane MC, Loranger AW, Kessler RC. DSM-IV personality disorders in the national comorbidity survey replication. Biol Psychiatry. 2008;62(6):553–64.
Apfelbaum GS, Regalado P, Herman L, Teitelbaum J, Gagliesi P. Comorbidity between bipolar disorder and cluster B personality disorders as indicator of affective dysregulation and clinical severity. Actas esp Psiquiatr. 2013;41(5):269–78.
Thuo J, Ndetei DM, Maru H, Kuria M. The prevalence of personality disorders in a Kenyan inpatient sample. J Pers Disord. 2008;22(2):217–20.
Wei Y, Zhang T, Chow A, Tang Y, Xu L, Dai Y, et al. Co-morbidity of personality disorder in schizophrenia among psychiatric outpatients in China. BMC Psychiatry. 2016;16:224.
Quilty LC, De FF, Rolland J, Kennedy SH. Dimensional personality traits and treatment outcome in patients with major depressive disorder. J Affect Disord. 2008;108:241–50.
Moran P. The epidemiology of personality disorder. Soc psychiatry psychiatry epidemiol. 1999;34:231–42.
O'Brien M, Mortimer L, Singleton N, Meltzer H. Psychiatric morbidity among women prisoners in England and Wales. International review of psychiatry. 2003;15(1–2):153–7.
Alnaes RTS. Personality and personality disorders predict development and relapses of major depression. Acta Psychintr Scand. 1997;95:336–42.
Berkson J. Limitations of the application of fourfold table analysis to hospital data. Int J Epidemiol. 2014;43(2):511–5.
Zimmerman M, Rothschild L, Ph D. The prevalence of DSM-IV personality disorders in psychiatric outpatients. Am J psychiatry. 2005;162(10):1911–8.
Shea MT, Pagano ME, Morey LC, Stout RL. Associations in the course of personality disorders and axis I disorders over time. J Abnorm Psychol. 2004;113(4):499–508.
Bezerra-filho S, Galva A, Studart P, Rocha MV. Personality disorders in euthymic bipolar patients : a systematic review. Rev Bras Psiquiatr. 2015;37:162–7.
Zhang T, Chow A, Wang L, Dai Y, Xiao Z. Role of childhood traumatic experience in personality disorders in China. Compr Psychiatry. 2012;53(6):829–36. https://doi.org/10.1016/j.comppsych.2011.10.004 (Available from).
Zheng Y, Severino F, Hui L, Wu H, Wang J, Zhang T. Co-morbidity of DSM-IV personality disorder in major depressive disorder among psychiatric outpatients in China. Front Psychiatry. 2019;10:1–9.
Benjamin James Sadock, M.D, Virginia Alcott Sadock, M.D, Pedro Ruiz MD. Kaplan & Sadock's behavioral sciences/clinical Psychiatry, eleventh edition. American Psychiatric Association; 2015. p. 1594.
Santana GL, Coelho BM, Wang Y, Porto D, Filho C, Viana MC, et al. The epidemiology of personality disorders in the Sao Paulo megacity general population. PLoS One. 2018;13(4):1–20. https://doi.org/10.1371/journal.pone.0195581 (Available from).
Hillman JL, Stricker G, Zweig RA. Clinical psychologists' judgments of older adult patients with character pathology. Prof Psychol Res Pract. 1997;28(2):179–83.
Hyler D. Personality Questionnaire Developed Pdq -4+. www.pdq4.com. Am J Psychiatry. 1994;8(212):1–12.
Abdin E, Subramaniam M. Validity of the personality diagnostic questionnarrie — 4 ( PDQ-4 + ) among mentally ill prison inmates in Singapore. J Pers Disord. 2011;25(6):834–41.
Calvo N, Gutiérrez F, Casas M. Diagnostic agreement between the personality diagnostic questionnaire-4 + ( PDQ-4 + ) and its clinical significance scale. Psicothema. 2013;25(4):427–32.
Fossati A, Porro FV, Maffei C, Borroni S. Are the DSM-IV personality disorders related to mindfulness ? An Italian study on clinical participants. J ofclinical Psychol. 2012;68(6):672–83.
Bottesi G, Novara C, Ghisi M, Lang M, Sanavio E. The Millon clinical multiaxial inventory-III (MCMI-III) and the personality diagnostic questionnaire-4+ (PDQ-4+) in a mixed Italian psychiatric sample. Nova Science Publisher INC; 2008. p. 1–29.
Wilberg T, Dammen T, Friis S. Comparing personality diagnostic questionnaire-4+ with longitudinal, expert, all data (LEAD) standard diagnoses in a sample with a high prevalence of axis I and axis II disorders. Compr Psychiatry. 2000;41(4):295–302.
Calvo N, Gutiérrez F, Andión Ó, Caseras X, Torrubia R, Casas M. Psychometric properties of the Spanish version of the self-report personality diagnostic questionnaire-4 + ( PDQ-4 + ) in psychiatric outpatients. Psicothema. 2012;24(1):156–60.
Kocalevent RD, Berg L, Beutel ME, Hinz A, Zenger M, Härter M, et al. Social support in the general population: Standardization of the Oslo social support scale (OSSS-3). BMC Psychol. 2018;6(31):1–8.
WHO ASSIST working group. The alcohol, smoking and substance involvement screening test (ASSIST): development, reliability and feasibility. Addiction. 2010;97:1–74.
Berghuis H, Kamphuis JH. Core features of personality disorder: differentiating general personality dysfunctioning from personality traits. J Personal Disord. 2012;26(28):1–13.
Nicolò G, Semerari A, Lysaker PH, Dimaggio G, Conti L. Alexithymia in personality disorders : Correlations with symptoms and interpersonal functioning. Psychiatry Res. 2011;1016:1–7. https://doi.org/10.1016/j.psychres.2010.07.046 (Available from).
Mustafa Ozkan AA. Comorbidity of personality disorders in subjects with panic disorder: which personality disorders increase clinical severity. Dicle tıp Derg. 2003;30:102–11.
Zhang T, Wang L, Good MJD, Good BJ, Chow A, Dai Y, Yu J, Zhang H, Xiao Z. Prevalence of personality disorders using two diagnostic systems in psychiatric outpatients in Shanghai China. Soc Psychiatry Psychiatr epidemiol. 2012;47(9):1409–17.
Haw C, Ton KHAW, Houston K, Townsend E. Psychiatric and personality disorders in deliberate self-harm patients. Br J psychiatry. 2001;178:48–54.
Eugenia Zikos, MD, Frcpc; Kathryn J Gill, PhD; Dara A Charney, MDCM F. Personality disorders among alcoholic outpatients: Prevalence and course in treatment. Can J Psychiatry. 2010;55(2):65–73.
Th M, Cm G, Ae S, Jg G, Mt S. The collaborative longitudinal personality disorders study : baseline axis I / II and II / II diagnostic co-occurrence. Acta Psychiatr Scand. 2000;102(2).
Wang L, Ross CA, Zhang T, Dai Y, Zhang H, Tao M, Qin J, Chen J, He Y, Zhang M, Xiao Z. Frequency of borderline personality disorder among psychiatric outpatients in Shanghai. J Pers Disord. 2012;26(3):393–401.
Mulder RT. Reviews and overviews personality pathology and treatment Outcome in major depression. Am J Psychiatry. 2002;159(3):359–71.
Newton-howes G, Tyrer P. Personality disorder and the outcome of depression : meta-analysis of published studies. Br J psychiatry. 2006;188:13–6.
Gerhardstein KR, Griffin PT, Hormes JM, Diagnostic T, Disorders M, Edition F, et al. Personality disorders lead to risky behavior, treatment obstacles. HIV Clin. 2015;23(2):6–7.
Spijker J, Graaf RD, Oldehinkel AJ, Nolen WA, Ormel J. Are the vulnerability effects of personality and psychosocial functioning on depression accounted for by subthreshold symptoms? Depress Anxiety. 2007;24(11):472–8.
Matsunaga H, Kiriike N, Nagata T, Yamagami S. Personality disorders in patients with eating disorders in Japan. Int J eat disord. 1998;23(1):399–408.
Pagano ME, Shea MT, Grilo CM, Gunderson JG. Recent life events preceding suicide attempts in a personality disorder Sample: Findings from the collaborative longitudinal personality disorders study. J Consult Clin Psychol. 2012;73(1):99–105.
Mueser KT, Crocker AG, Frisman LB, Drake RE, Covell NH, Essock SM. Conduct disorder and antisocial personality disorder in persons with severe psychiatric and substance use disorders. Schizophr Bull. 2006;32(4):626–36.
Tümer Ö, Blazer D. Substance use disorders in men with Antisocial personality disorder : A Study in Turkish sample. Subst Use Misuse. 2006;41:1167–74.
Sher KJ, Trull TJ. Substance use disorder and personality disorder. Curr Psychiatry Rep. 2002;4:25–9.
We would like to thank Jimma University for approving our research and facilitating the data collection process. Also, we would like to extend our appreciation to the participants of the study for their commitment to respond to our questions.
The research was conducted with financial funding from Mettu University.
College of Health and Medical Science, Department of Psychiatry, Mettu University, Mettu, Ethiopia
Muzeyen Jemal
Institute of Health, Faculty of Medicine, Department of Psychiatry, Jimma University, Jimma, Ethiopia
Worknesh Tessema & Liyew Agenagnew
Worknesh Tessema
Liyew Agenagnew
Jemal designed the study, supervised data collection, analyzed the data, and drafted the manuscript; Agenagnew supervised data collection, analyzed the data, and critically reviewed the manuscript; Tessema was involved in data analysis, and reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Muzeyen Jemal.
The study was approved by the Institutional Review Board (IRB) of Jimma University and the study was conducted in accordance with Helsinki's declaration. The aims of the study were clearly explained to study participants. The data was collected after informed consent was given and written consent was obtained from each participant. Assurance of the maintenance of confidentiality and anonymity was also given. Appropriate measurements for Covid-19 prevention were taken during the data collection period to secure data collectors and participants.
The authors declared that there is no conflict of interest in this work.
Jemal, M., Tessema, W. & Agenagnew, L. Cluster B personality disorders and its associated factors among psychiatric outpatients in Southwest Ethiopia: institutional-based cross-sectional study. BMC Psychiatry 22, 500 (2022). https://doi.org/10.1186/s12888-022-04143-3
Cluster B personality disorders
Outpatients | CommonCrawl |
IIT JAM MS 2021 Question Paper | Set A | Problems & Solutions
This post discusses the solutions to the problems from IIT JAM Mathematical Statistics (MS) 2021 Question Paper - Set A. You can find solutions in video or written form.
Note: This post is getting updated. Stay tuned for solutions, videos, and more.
IIT JAM Mathematical Statistics (MS) 2021 Problems & Solutions (Set A)
The value of the limit
\lim_{n \rightarrow \infty} \sum_{k=0}^{n}\binom{2n}{k} \frac{1}{4^{n}}
Options-
$\frac{1}{4}$
$1$
Answer: $\frac{1}{2}$
Video Solution
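Quick check (not the official solution): by the symmetry $\binom{2n}{k}=\binom{2n}{2n-k}$ we have
$$\sum_{k=0}^{n}\binom{2n}{k}=\frac{1}{2}\left(4^{n}+\binom{2n}{n}\right),$$
and since $\binom{2n}{n}/4^{n}\rightarrow 0$ as $n\rightarrow\infty$, the limit equals $\frac{1}{2}$.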
If the series $\sum_{n=1}^{\infty} a_{n}$ converges absolutely, then which of the following series diverges?
$\sum_{n=1}^{\infty}\left|a_{2 n}\right|$
$\sum_{n=1}^{\infty}\left(a_{n}\right)^{3}$
$\sum_{n=2}^{\infty}\left(\frac{1}{(\ln n)^{2}}+a_{n}\right)$
$\sum_{n=1}^{\infty} \frac{a_{n}+a_{n+1}}{2}$
Answer: $\sum_{n=2}^{\infty}\left(\frac{1}{(\ln n)^{2}}+a_{n}\right)$
Let $X$ be a $U(0,1)$ random variable and let $Y=X^{2}$. If $\rho$ is the correlation coefficient between the random variables $X$ and $Y$, then $48 \rho^{2}$ is equal to
1.$48$;
4.$35$.
Answer: $45$
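Quick check (not the official solution): for $X\sim U(0,1)$, $Cov(X,X^{2})=\frac{1}{4}-\frac{1}{2}\cdot\frac{1}{3}=\frac{1}{12}$, $Var(X)=\frac{1}{12}$ and $Var(X^{2})=\frac{1}{5}-\frac{1}{9}=\frac{4}{45}$, so
$$\rho^{2}=\frac{(1/12)^{2}}{(1/12)(4/45)}=\frac{15}{16},\qquad 48\rho^{2}=45.$$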
Let $\{X_{n}\}_{n \geq 1}$ be a sequence of independent and identically distributed random variables with probability density function
f(x)=\begin{cases} 1, & \text{if } 0<x<1 \\ 0, & \text{otherwise} \end{cases}
Then. the value of the limit
\lim_{n \rightarrow \infty} P\left(-\frac{1}{n}\sum_{i=1}^{n}\ln X_{i} \leq 1+\frac{1}{\sqrt{n}}\right)
Options -
$0$;
$\Phi(2)$;
$\frac{1}{2}$.
Answer: $\Phi(1)$
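Brief sketch (not the official solution): $-\ln X_{i}\sim\text{Exp}(1)$ with mean $1$ and variance $1$, so the event can be rewritten as $\frac{\sum_{i=1}^{n}(-\ln X_{i})-n}{\sqrt{n}}\leq 1$, and by the central limit theorem the probability tends to $\Phi(1)$.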
Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a function defined by
f(x)=x^{7}+5 x^{3}+11 x+15, x \in \mathbb{R}
Then, which of the following statements is TRUE?
$f$ is onto but NOT one-one
$f$ is one-one but NOT onto
$f$ is both one-one and onto
$f$ is neither one-one nor onto
Answer: $f$ is both one-one and onto
There are three urns, labeled. Urn $1$ , Urn $2$ and Urn $3$ . Urn $1$ contains $2$ white balls and $2$ black balls, Urn $2$ contains $1$ white ball and $3$ black balls and Urn $3$ contains $3$ white balls and $1$ black ball. Consider two coins with probability of obtaining head in their single trials as $0.2$ and $0.3 .$ The two coins are tossed independently once, and an urn is selected according to the following scheme:
Urn $1$ is selected if $2$ heads are obtained: Urn $3$ is selected if $2$ tails are obtained; otherwise Urn $2$ is
selected. A ball is then drawn at random from the selected urn. Then
$P($ Urn 1 is selected $\mid$ the ball drawn is white $)$ is equal to
$\frac{12}{109}$
$\frac{1}{18}$
$\frac{6}{109}$
Answer: $\frac{6}{109}$
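Quick check by Bayes' theorem (not the official solution): $P(\text{Urn 1})=0.2\times 0.3=0.06$, $P(\text{Urn 3})=0.8\times 0.7=0.56$, $P(\text{Urn 2})=0.38$, with $P(W\mid\text{Urn 1})=\frac{1}{2}$, $P(W\mid\text{Urn 2})=\frac{1}{4}$, $P(W\mid\text{Urn 3})=\frac{3}{4}$, so
$$P(\text{Urn 1}\mid W)=\frac{0.06\times 0.5}{0.06\times 0.5+0.38\times 0.25+0.56\times 0.75}=\frac{0.03}{0.545}=\frac{6}{109}.$$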
Let $X$ be a random variable with probability density function
f(x)=\frac{1}{2} e^{-|x|}, \quad-\infty<x<\infty
Then, which of the following statements is FALSE?
$E\left(|X| \sin \left(\frac{X}{|X|}\right)\right)=0$
$E(X|X|)=0$
$E\left(|X| \sin ^{2}\left(\frac{X}{|X|}\right)\right)=0$
$E\left(X|X|^{2}\right)=0$
Answer: $E\left(|X| \sin ^{2}\left(\frac{X}{|X|}\right)\right)=0$
\lim _{n \rightarrow \infty}\left(\left(1+\frac{1}{n}\right)\left(1+\frac{2}{n}\right) \cdots\left(1+\frac{n}{n}\right)\right)^{\frac{1}{n}}
$\frac{3}{e}$
$e$
Answer: $\frac{4}{e}$
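Brief sketch (not the official solution): taking logarithms gives the Riemann sum
$$\frac{1}{n}\sum_{k=1}^{n}\ln\left(1+\frac{k}{n}\right)\rightarrow\int_{0}^{1}\ln(1+x)\,dx=2\ln 2-1,$$
so the limit is $e^{2\ln 2-1}=\frac{4}{e}$.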
Let $M$ be a $3 \times 3$ real matrix. Let $\left(\begin{array}{l}1 \\ 2 \\ 3\end{array}\right),\left(\begin{array}{l}1 \\ 1 \\ 1\end{array}\right)$ and $\left(\begin{array}{c}0 \\ -1 \\ \alpha\end{array}\right)$ be the eigenvectors of $M$ corresponding to three distinct eigenvalues of $M$, where $\alpha$ is a real number. Then, which of the following is NOT a possible value of $\alpha$?
1.$1$
2 .$-2$
Answer: $-2$
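Quick check (not the official solution): eigenvectors corresponding to distinct eigenvalues must be linearly independent, and
$$\det\begin{pmatrix}1&1&0\\2&1&-1\\3&1&\alpha\end{pmatrix}=(\alpha+1)-(2\alpha+3)=-\alpha-2,$$
which vanishes exactly when $\alpha=-2$, so $\alpha=-2$ is not possible.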
Problem 10
\lim _{x \rightarrow 0} \frac{e^{-3 x}-e^{x}+4 x}{5(1-\cos x)}
is equal to
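The options and answer for this question are missing above; for the record, expanding $e^{-3x}$ and $e^{x}$ to second order gives a numerator $4x^{2}+O(x^{3})$ and a denominator $\frac{5}{2}x^{2}+O(x^{4})$, so the limit works out to $\frac{8}{5}=1.6$ (please verify independently).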
Consider a sequence of independent Bernoulli trials with probability of success in each trial as $\frac{1}{3}$. The probability that three successes occur before four failures is equal to
1.$\frac{179}{841}$
4.$\frac{179}{1215}$
Answer: $\frac{179}{1215}$
S=\sum_{k=1}^{\infty}(-1)^{k-1} \frac{1}{k}\left(\frac{1}{4}\right)^{k} \text { and } T=\sum_{k=1}^{\infty} \frac{1}{k}\left(\frac{1}{5}\right)^{k}
1.$5 S-4 T=0$
2.$S-T=0$
$16 S-25 T=0$
$4 S-5 T=0$
Answer: $S-T=0$
IIT JAM 2021 - Problem 13
Let $a_{1}=5$ and define recursively
a_{n+1}=3^{\frac{1}{4}}\left(a_{n}\right)^{\frac{3}{4}}, \quad n \geq 1
$\{a_{n}\}$ is monotone decreasing, and $\lim_{n \rightarrow \infty} a_{n}=3$
$\{a_{n}\}$ is decreasing, and $\lim_{n \rightarrow \infty} a_{n}=0$
$\{a_{n}\}$ is non-monotone, and $\lim_{n \rightarrow \infty} a_{n}=3$
$\{a_{n}\}$ is monotone increasing, and $\lim_{n \rightarrow \infty} a_{n}=3$
Answer: $\{a_{n}\}$ is monotone decreasing, and $\lim_{n \rightarrow \infty} a_{n}=3$
Let $E_{1}, E_{2}$ and $E_{3}$ be three events such that $P\left(E_{1}\right)=\frac{4}{5}, P\left(E_{2}\right)=\frac{1}{2}$ and $P\left(E_{3}\right)=\frac{9}{10}$
Then, which of the following statements is FALSE?
$P\left(E_{1} \cup E_{2} \cup E_{3}\right) \geq \frac{9}{10}$
$P\left(E_{1} \cup E_{2}\right) \geq \frac{4}{5}$
$P\left(E_{2} \cap E_{3}\right) \leq \frac{1}{2}$
$P\left(E_{1} \cap E_{2} \cap E_{3}\right) \leq \frac{1}{6}$
Answer: $P\left(E_{1} \cap E_{2} \cap E_{3}\right) \leq \frac{1}{6}$
Let $E_{1}, E_{2}, E_{3}$ and $E_{4}$ be four events such that
P\left(E_{i} \mid E_{4}\right)=\frac{2}{3}, i=1,2,3 ; P\left(E_{i} \cap E_{j}^{c} \mid E_{4}\right)=\frac{1}{6}, i, j=1,2,3 ; i \neq j \text { and } P\left(E_{1} \cap E_{2} \cap E_{3}^{c} \mid E_{4}\right)=\frac{1}{6}
Then, $P\left(E_{1} \cup E_{2} \cup E_{3} \mid E_{4}\right)$ is equal to
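The answer for this question is not stated above; for the record, $P(E_{i}\cap E_{j}\mid E_{4})=\frac{2}{3}-\frac{1}{6}=\frac{1}{2}$ for $i\neq j$ and $P(E_{1}\cap E_{2}\cap E_{3}\mid E_{4})=\frac{1}{2}-\frac{1}{6}=\frac{1}{3}$, so by inclusion–exclusion the value works out to $3\cdot\frac{2}{3}-3\cdot\frac{1}{2}+\frac{1}{3}=\frac{5}{6}$ (please verify independently).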
Let $X$ be a random variable having the probability density function
f(x)=\begin{cases}
e^{-x}, & x>0 \\
0, & x \leq 0
\end{cases}.
Define $Y=[X]$, where $[X]$ denotes the largest integer not exceeding $X$. Then, $E\left(Y^{2}\right)$ is equal to
1.$\frac{e+1}{(e-1)^{2}}$
$\frac{(e+1)^{2}}{(e-1)^{2}}$
$\frac{e(e+1)^{2}}{e-1}$
$\frac{e(e+1)}{e-1}$
Answer: $\frac{e+1}{(e-1)^{2}}$
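Brief sketch (not the official solution): $P(Y=k)=e^{-k}(1-e^{-1})$ for $k=0,1,2,\ldots$, so with $q=e^{-1}$,
$$E(Y^{2})=\sum_{k=0}^{\infty}k^{2}(1-q)q^{k}=\frac{q(1+q)}{(1-q)^{2}}=\frac{e+1}{(e-1)^{2}}.$$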
Let $X$ be a continuous random variable with distribution function
0, & \text { if } x<0 \\
a x^{2}, & \text { if } 0 \leq x<2 \\
1, & \text { if } x \geq 2
for some real constant $a$. Then, $E(X)$ is equal to
2 .$ \frac{4}{3}$
Answer: $ \frac{4}{3}$
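Quick check (not the official solution): continuity of the distribution function at $x=2$ forces $4a=1$, i.e. $a=\frac{1}{4}$, so the density is $f(x)=\frac{x}{2}$ on $(0,2)$ and $E(X)=\int_{0}^{2}\frac{x^{2}}{2}\,dx=\frac{4}{3}$.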
Let $X_{1}, X_{2}, \ldots, X_{n}(n \geq 2)$ be a random sample from $U(\theta-5, \theta+5),$ where $\theta \in(0, \infty)$ is unknown. Let $T=\max \{X_{1}, X_{2}, \ldots, X_{n}\}$ and $U=\min \{X_{1}, X_{2}, \ldots, X_{n}\} .$ Then, which of the following statements is TRUE?
$U+8$ is an MLE of $\theta$
$\frac{T+U}{2}$ is the unique $\mathrm{MLE}$ of $\theta$
MLE of $\frac{1}{\theta}$ does NOT exist
$\frac{2}{T+U}$ is an $\mathrm{MLE}$ of $\frac{1}{\theta}$
Answer: $\frac{2}{T+U}$ is an $\mathrm{MLE}$ of $\frac{1}{\theta}$
Consider the problem of testing $H_{0}: X \sim f_{0}$ against $H_{1}: X \sim f_{1}$ based on a sample of size 1 , where
$f_{0}(x)=\begin{cases}1, 0 \leq x \leq 1 \\ 0, \text { otherwise }\end{cases}.$ and $f_{1}(x)=\begin{cases}2-2 x , 0 \leq x \leq 1 \\ 0, \text { otherwise }\end{cases}$.
Then, the probability of Type II error of the most powerful test of size $\alpha=0.1$ is equal to
Answer: 0.81
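Brief sketch (not the official solution): the likelihood ratio $\frac{f_{1}(x)}{f_{0}(x)}=2-2x$ is decreasing in $x$, so the most powerful test rejects $H_{0}$ for small $x$; size $0.1$ gives the rejection region $x<0.1$, and the probability of Type II error is
$$P_{f_{1}}(X\geq 0.1)=1-\left(2(0.1)-(0.1)^{2}\right)=0.81.$$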
Let $X$ and $Y$ be random variables having chi-square distributions with 6 and 3 degrees of freedom, respectively. Then, which of the following statements is TRUE?
$P(X<6)>P(Y<6)$
$P(X>0.7)>P(Y>0.7)$
$P(X>3)<P(Y>3)$
$P(X>0.7)<P(Y>0.7)$
Answer: $P(X>0.7)>P(Y>0.7)$
Let $f: \mathbb{R}^{2} \rightarrow \mathbb{R}$ be a function defined by
$f(x, y)=\begin{cases}\frac{y^{3}}{x^{2}+y^{2}}, & (x, y) \neq(0,0) \\ 0, & (x, y)=(0,0)\end{cases}$.
Let $f_{x}(x, y)$ and $f_{y}(x, y)$ denote the first order partial derivatives of $f(x, y)$ with respect to $x$ and $y$,
respectively, at the point $(x, y)$. Then, which of the following statements is FALSE?
$f$ is NOT differentiable at (0,0)
$f_{y}(0,0)$ exists and $f_{y}(x, y)$ is continuous at (0,0)
$f_{y}(x, y)$ exists and is bounded at every $(x, y) \in \mathbb{R}^{2}$
$f_{x}(x, y)$ exists and is bounded at every $(x, y) \in \mathbb{R}^{2}$
Answer: $f_{y}(0,0)$ exists and $f_{y}(x, y)$ is continuous at (0,0)
Let $(X, Y)$ be a random vector with joint moment generating function
M\left(t_{1}, t_{2}\right)=\frac{1}{\left(1-\left(t_{1}+t_{2}\right)\right)\left(1-t_{2}\right)}, \quad-\infty<t_{1}<\infty,-\infty<t_{2}<\min \{1,1-t_{1}\}
Let $Z=X+Y$. Then, $Var(Z)$ is equal to
Answer: 5
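Quick check (not the official solution): the MGF of $Z$ is $M(t,t)=\frac{1}{(1-2t)(1-t)}$, so the cumulant generating function is $K(t)=-\ln(1-2t)-\ln(1-t)$ and
$$Var(Z)=K''(0)=4+1=5.$$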
Let $X_{1}, X_{2}, \ldots, X_{n}$ be a random sample from an exponential distribution with probability density function
f(x ; \theta)=\begin{cases}
\theta e^{-\theta x}, x>0 \\
0, \text { otherwise }
\end{cases}
where $\theta \in(0, \infty)$ is unknown. Let $\alpha \in(0,1)$ be fixed and let $\beta$ be the power of the most powerful test of size $\alpha$ for testing $H_{0}: \theta=1$ against $H_{1}: \theta=2$.
Consider the critical region
$R=\{\left(x_{1}, x_{2}, \ldots, x_{n}\right) \in \mathbb{R}^{n} ; \sum_{i=1}^{n} x_{i}>\frac{1}{2} \chi_{2 n}^{2}(1-\alpha)\}$
where for any $\gamma \in(0,1), \chi_{2 n}^{2}(\gamma)$ is a fixed point such that $P\left(\chi_{2 n}^{2}>\chi_{2 n}^{2}(\gamma)\right)=\gamma .$ Then, the
critical region $R$ corresponds to the
1. most powerful test of size $\beta$ for testing $H_{0}: \theta=2$ against $H_{1}: \theta=1$
2. most powerful test of size $\alpha$ for testing $H_{0}: \theta=1$ against $H_{1}: \theta=2$
3. most powerful test of size $1-\beta$ for testing $H_{0}: \theta=2$ against $H_{1}: \theta=1$
4. most powerful test of size $1-\alpha$ for testing $H_{0}: \theta=2$ against $H_{1}: \theta=1$
Answer: most powerful test of size $\alpha$ for testing $H_{0}: \theta=2$ against $H_{1}: \theta=1$ [No Option]
Consider three coins having probabilities of obtaining head in a single trial as $\frac{1}{4}, \frac{1}{2}$ and $\frac{3}{4}$, respectively. A player selects one of these three coins at random (each coin is equally likely to be selected). If the player tosses the selected coin five times independently, then the probability of obtaining two tails in five tosses is equal to
$\frac{125}{384}$
Answer: $\frac{85}{384}$
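Quick check (not the official solution): conditioning on the selected coin,
$$P(\text{two tails})=\frac{1}{3}\binom{5}{2}\left[\left(\frac{3}{4}\right)^{2}\left(\frac{1}{4}\right)^{3}+\left(\frac{1}{2}\right)^{5}+\left(\frac{1}{4}\right)^{2}\left(\frac{3}{4}\right)^{3}\right]=\frac{10}{3}\cdot\frac{9+32+27}{1024}=\frac{85}{384}.$$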
For $a \in \mathbb{R}$, consider the system of linear equations
$\begin{array}{ll}a x+a y & =a+2 \\ x+a y+(a-1) z & =a-4 \\ a x+a y+(a-2) z & =-8\end{array}$
in the unknowns $x, y$ and $z$. Then, which of the following statements is TRUE?
The given system has a unique solution for $a=-2$
The given system has a unique solution for $a=1$
The given system has infinitely many solutions for $a=-2$
The given system has infinitely many solutions for $a=2$
Answer: The given system has a unique solution for $a=-2$
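Quick check (not the official solution): subtracting the first row from the third, the coefficient determinant is
$$\det\begin{pmatrix}a&a&0\\1&a&a-1\\a&a&a-2\end{pmatrix}=(a-2)(a^{2}-a)=a(a-1)(a-2),$$
which is non-zero at $a=-2$, so the system has a unique solution there.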
Let $X$ and $Y$ be independent $N(0,1)$ random variables and $Z=\frac{|X|}{|Y|} .$ Then, which of the
following expectations is finite?
$E(Z)$
$E\left(\frac{1}{Z \sqrt{Z}}\right)$
$E(Z \sqrt{Z})$
$E\left(\frac{1}{\sqrt{Z}}\right)$
Answer: $E\left(\frac{1}{\sqrt{Z}}\right)$
Let $\{X_{n}\}_{n>1}$ be a sequence of independent and identically distributed $N(0,1)$ random variables.
\lim_{n \rightarrow \infty} P\left(\frac{\sum_{i=1}^{n} X_{i}^{4}-3 n}{\sqrt{32 n}} \leq \sqrt{6}\right)
$\Phi(\sqrt{2})$
$\Phi(1)$
Answer: $\Phi(\sqrt{2})$
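Brief sketch (not the official solution): $E(X_{i}^{4})=3$ and $Var(X_{i}^{4})=E(X_{i}^{8})-9=105-9=96$, so by the central limit theorem $\frac{\sum_{i=1}^{n}X_{i}^{4}-3n}{\sqrt{96n}}$ is asymptotically $N(0,1)$; the event rescales to this quantity being at most $\sqrt{6}\cdot\frac{\sqrt{32n}}{\sqrt{96n}}=\sqrt{2}$, giving $\Phi(\sqrt{2})$.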
Let $X$ be a continuous random variable having the moment generating function
M(t)=\frac{e^{t}-1}{t}, \quad t \neq 0
Let $\alpha=P\left(48 X^{2}-40 X+3>0\right)$ and $\beta=P\left((\ln X)^{2}+2 \ln X-3>0\right)$.
Then, the value of $\alpha-2 \ln \beta$ is equal to
$\frac{10}{3}$
Answer: $\frac{19}{3}$
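Brief sketch (not the official solution): the MGF identifies $X\sim U(0,1)$. The roots of $48x^{2}-40x+3$ are $\frac{1}{12}$ and $\frac{3}{4}$, so $\alpha=\frac{1}{12}+\frac{1}{4}=\frac{1}{3}$; and $(\ln X+3)(\ln X-1)>0$ together with $\ln X<0$ forces $X<e^{-3}$, so $\beta=e^{-3}$ and $\alpha-2\ln\beta=\frac{1}{3}+6=\frac{19}{3}$.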
Let $X_{1}, X_{2}, \ldots, X_{n}(n \geq 3)$ be a random sample from Poisson $(\theta),$ where $\theta \in(0, \infty)$ is unknown and
T=\sum_{i=1}^{n} X_{i}
Then, the uniformly minimum variance unbiased estimator of $e^{-2 \theta} \theta^{3}$
is $\quad \frac{T}{n}\left(\frac{T}{n}-1\right)\left(\frac{T}{n}-2\right)\left(1-\frac{2}{n}\right)^{T-3}$
is $\frac{T(T-1)(T-2)(n-2)^{T-3}}{n^{T}}$
does NOT exist
is $e^{-\frac{2 T}{n}\left(\frac{T}{n}\right)^{3}}$
Answer: $\frac{T(T-1)(T-2)(n-2)^{T-3}}{n^{T}}$
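Quick check (not the official solution): with $T\sim\text{Poisson}(n\theta)$, $E\left[T(T-1)(T-2)z^{T-3}\right]=(n\theta)^{3}e^{n\theta(z-1)}$; taking $z=\frac{n-2}{n}$ gives $E\left[\frac{T(T-1)(T-2)(n-2)^{T-3}}{n^{T}}\right]=\theta^{3}e^{-2\theta}$, and since $T$ is complete and sufficient, this unbiased estimator is the UMVUE.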
Let $\{a_{n}\}_{n \geq 1}$ be a sequence of real numbers such that $a_{n} \geq 1$, for all $n \geq 1$. Then, which of the following conditions imply the divergence of $\{a_{n}\}_{n \geq 1} ?$
$\sum_{n=1}^{\infty} b_{n}$ converges, where $b_{1}=a_{1}$ and $b_{n}=a_{n+1}-a_{n},$ for all $n>1$
$\{\sqrt{a_{n}}\}_{n \geq 1}$ converges
$\lim_{n \rightarrow \infty} \frac{a_{2n+1}}{a_{2n}}=\frac{1}{2}$
$\{a_{n}\}_{n} \geq 1$ is non-increasing
Answer: $\lim_{n \rightarrow \infty} \frac{a_{2n+1}}{a_{2n}}=\frac{1}{2}$
Some Useful Links:
IIT JAM MS (Set B) 2021 Question Paper - Problems and Solutions
IIT JAM MS (Set C) 2021 Question Paper - Problems and Solutions
How to Prepare for IIT JAM Statistics?
Know about the learning Paths
Our Statistics Program | CommonCrawl |
Back to Conference View
4 Nov 2019, 08:00 → 8 Nov 2019, 13:00 Australia/Adelaide
Paul Douglas Jackson (University of Adelaide), Waseem Kamleh (University of Adelaide)
Welcome to the 24th International Conference on Computing in High-Energy and Nuclear Physics. The CHEP conference series addresses the computing, networking and software issues for the world's leading data‐intensive science experiments that currently analyse hundreds of petabytes of data using worldwide computing resources.
CHEP 2019 will be held in Adelaide, South Australia, between Monday-Friday 4-8 November 2019. The venue for the CHEP 2019 conference is the Adelaide Convention Centre, conveniently located on the picturesque banks of the Torrens Lake in the heart of the city.
The optional pre-conference WLCG & HSF workshop will be held at the North Terrace campus of the University of Adelaide between Saturday-Sunday 2-3 November 2019.
More details about the conference can be found at the conference website: CHEP2019.org
Monday, 4 November
Registration 1h Foyer F
Plenary: Welcome / DUNE / Belle II Hall G
Convener: Simone Campana (CERN)
Welcome 30m
Collaborative computing needs for DUNE 30m
Speaker: Heidi Marie Schellman (Oregon State University (US))
Belle II 30m
Speaker: David Dossett (University of Melbourne)
Break 30m
Track 1 – Online and Real-time Computing: Data acquisition (DAQ) Riverbank R5
Convener: Chunhua Li
A novel centralized slow control and board management solution for ATCA blades based on the Zynq Ultrascale+ System-on-Chip 15m
Data acquisition systems (DAQ) for high energy physics experiments utilize complex FPGAs to handle unprecedentedly high data rates. This is especially true in the first stages of the processing chain. Developing and commissioning these systems becomes more complex as additional processing intelligence is placed closer to the detector, in a distributed way directly on the ATCA blades; on the other hand, sophisticated slow control is also desirable. In this contribution, we introduce a novel solution for ATCA based systems, which combines the IPMI, a Linux based slow-control software, and an FPGA for custom slow-control tasks in one single Zynq Ultrascale+ (US+) System-on-Chip (SoC) module.
The Zynq US+ SoC provides FPGA logic, high-performance ARM-A53 multi-core processors and two ARM-R5 real-time capable processors. The ARM-R5 cores are used to implement the IPMI/IPMC functionality and communicate via backplane with the shelf manager at power-up. The ARM-R5 are also connected to the power supply (via PMBus), to voltage and current monitors, to clock generators and jitter cleaners (via I2C, SPI). Once full power is enabled from the crate, a Linux based operating system starts on the ARM-A53 cores. The FPGA is used to implement some of the low-level interfaces, including IPBus, or glue-logic. The SoC is the central entry point to the main FPGAs on the motherboard via IPMB and TCP/IP based network interfaces. The communication between the Zynq US+ SoC and the main FPGAs uses the AXI chip-to-chip protocol via MGT pairs keeping infrastructure requirements in the main FPGAs to a minimum.
Speaker: Oliver Sander (KIT - Karlsruhe Institute of Technology (DE))
DAQling: an open source data acquisition framework 15m
The data acquisition (DAQ) software for most applications in high energy physics is composed of common building blocks, such as a networking layer, plug-in loading, configuration, and process management. These are often re-invented and developed from scratch for each project or experiment around specific needs. In some cases, time and available resources can be limited and make development requirements difficult or impossible to meet.
Moved by these premises, our team developed an open-source lightweight C++ software framework called DAQling, to be used as the core for the DAQ systems of small and medium-sized experiments and collaborations.
The framework offers a complete DAQ ecosystem, including communication layer based on the widespread ZeroMQ messaging library, configuration management based on the JSON format, control of distributed applications, extendable operational monitoring with web-based visualization, and a set of generic utilities. The framework comes with minimal dependencies, and provides automated host and build environment setup based on the Ansible automation tool. Finally, the end-user code is wrapped in so-called "Modules", that can be loaded at configuration time, and implement specific roles.
Several collaborations already chose DAQling as the core for their DAQ systems, such as FASER, RD51, and NA61. We will present the framework and project-specific implementations and experiences.
Speaker: Enrico Gamberini (CERN)
CHEP_DAQling_gamberini.pdf
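DAQling itself is a C++ framework; purely to illustrate the ZeroMQ-plus-JSON pattern described in the abstract above, the following Python sketch (using pyzmq) wires a publisher and a subscriber together from a small JSON configuration. All names and values are invented for the illustration.

```python
# Minimal sketch of a ZeroMQ publish/subscribe data path driven by a JSON
# configuration, in the spirit of the communication and configuration layers
# described above. DAQling itself is C++; everything here is illustrative.
import json
import threading
import time

import zmq  # pyzmq

config = json.loads('{"endpoint": "tcp://127.0.0.1:5556", "topic": "fragments"}')

def publisher(ctx):
    sock = ctx.socket(zmq.PUB)
    sock.bind(config["endpoint"])
    time.sleep(0.5)  # give the subscriber time to connect and subscribe
    for i in range(5):
        payload = json.dumps({"seq": i, "size": 1024}).encode()
        sock.send_multipart([config["topic"].encode(), payload])
    sock.close()

def subscriber(ctx):
    sock = ctx.socket(zmq.SUB)
    sock.connect(config["endpoint"])
    sock.setsockopt(zmq.SUBSCRIBE, config["topic"].encode())
    for _ in range(5):
        topic, payload = sock.recv_multipart()
        print(topic.decode(), json.loads(payload))
    sock.close()

ctx = zmq.Context()
t = threading.Thread(target=subscriber, args=(ctx,))
t.start()
publisher(ctx)
t.join()
ctx.term()
```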
DUNE DAQ R&D integration in ProtoDUNE Single-Phase at CERN 15m
The DAQ system of ProtoDUNE-SP successfully proved its design principles and met the requirements of the beam run of 2018. The technical design of the DAQ system for the DUNE experiment has major differences compared to the prototype due to different requirements and the environment. The single-phase prototype at CERN is the major integration facility for R&D aspects of the DUNE DAQ system. This covers the exploration of additional data processing capabilities and optimization of the FELIX system, which is the chosen TPC readout solution for the DUNE Single Phase supermodules. One of the fundamental differences is that DUNE DAQ relies on self-triggering. Therefore, real-time processing of the data stream for hit and trigger primitive finding is essential for the requirement of continuous readout, where Intel AVX register instructions are used for better performance. The supernova burst trigger requires a large and fast buffering technique, where 3D XPoint persistent memory solutions are evaluated and integrated. In order to maximize resource utilization of the FELIX hosting servers, the elimination of the 100Gb network communication stack is desired. This implies the design and development of a single-host application layer, which is a fundamental element of the self-triggering chain.
This paper discusses the evaluation and integration of these developments for the DUNE DAQ, in the ProtoDUNE environment.
Speaker: Roland Sipos (CERN)
RolandSipos-DUNE-DAQ-RnD-CHEP2019.pdf
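The production trigger-primitive finding described above is implemented in C++ with Intel AVX intrinsics; the following NumPy sketch only illustrates the underlying idea of scanning a continuous ADC stream for above-threshold activity. The waveform, pedestal estimate and threshold are invented.

```python
# Illustrative, vectorized hit finding on a simulated ADC waveform.
# The production DUNE code uses C++ with AVX intrinsics; this NumPy sketch
# only demonstrates the pedestal-subtraction + threshold idea.
import numpy as np

rng = np.random.default_rng(42)
waveform = rng.normal(loc=500.0, scale=5.0, size=100_000)   # fake ADC samples
waveform[20_000:20_050] += 60.0                             # injected "hit"

pedestal = np.median(waveform)                # crude pedestal estimate
threshold = pedestal + 5 * waveform.std()     # hypothetical 5-sigma threshold

above = waveform > threshold
# Find rising and falling edges of contiguous above-threshold regions.
edges = np.flatnonzero(np.diff(above.astype(np.int8)))
starts, ends = edges[::2] + 1, edges[1::2] + 1
for s, e in zip(starts, ends):
    print(f"hit: start={s} width={e - s} peak={waveform[s:e].max():.1f}")
```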
FELIX: commissioning the new detector interface for the ATLAS trigger and readout system 15m
After the current LHC shutdown (2019-2021), the ATLAS experiment will be required to operate in an increasingly harsh collision environment. To maintain physics performance, the ATLAS experiment will undergo a series of upgrades during the shutdown. A key goal of this upgrade is to improve the capacity and flexibility of the detector readout system. To this end, the Front-End Link eXchange (FELIX) system has been developed. FELIX acts as the interface between the data acquisition; detector control and TTC (Timing, Trigger and Control) systems; and new or updated trigger and detector front-end electronics. The system functions as a router between custom serial links from front end ASICs and FPGAs to data collection and processing components via a commodity switched network. The serial links may aggregate many slower links or be a single high bandwidth link. FELIX also forwards the LHC bunch-crossing clock, fixed latency trigger accepts and resets received from the TTC system to front-end electronics. FELIX uses commodity server technology in combination with FPGA-based PCIe I/O cards. FELIX servers run a software routing platform serving data to network clients. Commodity servers connected to FELIX systems via the same network run the new multi-threaded Software Readout Driver (SW ROD) infrastructure for event fragment building, buffering and detector-specific processing to facilitate online selection. This presentation will cover the design of FELIX, the SW ROD, and the results of the installation and commissioning activities for the full system in summer 2019.
Speaker: William Panduro Vazquez (Royal Holloway, University of London)
FELIX CHEP 2019 WPV.pdf
Integration of custom DAQ Electronics in a SCADA Framework 15m
LHCb is one of the 4 experiments at the LHC accelerator at CERN. During the upgrade phase of the experiment, several new electronic boards and Front End chips that perform the data acquisition for the experiment will be added by the different sub-detectors. These new devices will be controlled and monitored via a system composed of GigaBit Transceiver (GBT) chips that manage the bi-directional slow control traffic to the Slow Control Adapter(s) (SCA) chips. The SCA chips provide multiple field buses to interface the new electronics devices (I2C, GPIO, etc). These devices will need to be integrated in the Experiment Control System (ECS) that drives LHCb. A set of tools was developed that provides an easy integration of the control and monitoring of the devices in the ECS. A server (GbtServ) provides the low level communication layer with the devices via the several user buses in the SCA chip and exposes a control interface to the experiment SCADA (WinCC OA); the fwGbt component provides the interface between the SCADA and the GbtServ; and the fwHw component is a tool that allows the abstraction of the device models into the ECS. Using the graphical user interfaces or XML files describing the structure and registers of the devices, fwHw creates the necessary model of the hardware as a data structure in the SCADA. It then allows the control and monitoring of the defined registers by name, without the need to know the details of the underlying hardware. The fwHw tool also provides the facility of defining and applying recipes - named sets of configurations - which can be used to easily configure the hardware according to specific needs.
Speaker: Luis Granado Cardoso (CERN)
20191104_CHEP2019_Integration_of_custom_DAQ_Electronics_in_a_SCADA_Framework.pdf
Zero-deadtime processing in beta spectroscopy for measurement of the non-zero neutrino mass 15m
The Project 8 collaboration seeks to measure, or to bound more tightly, the mass of the electron antineutrino by applying a novel spectroscopy technique to precision measurement of the tritium beta-decay spectrum. For the current, lab-bench-scale, phase of the project, a single digitizer produces 3.2 GB/s of raw data. An onboard FPGA uses digital down conversion to extract three 100 MHz wide (roughly 1.6 keV) frequency regions of interest, and transmits both the time and frequency domain representation of each region over a local network connection for a total of six streams, each at 200 MB/s. Online processing uses the frequency-domain representation to implement a trigger based on excesses of power within a narrow frequency band and extended in time. When the trigger condition is met, the corresponding time-domain data are saved, reducing the total data volume and rate while allowing for more sophisticated offline event reconstruction. For the next phase of the experiment, the channel count will increase by a factor of sixty. Each channel will receive signals from the full source volume, but at amplitudes below what is detectable. Phase-shifted combinations of all channels will coherently enhance the amplitude of signals from a particular sub-volume to detectable levels, and a tiling set of such phase shifts will allow the entire source volume to be scanned for events. We present the online processing system which has successfully been deployed for the current, single-channel, phase. We also present the status and design for a many-channel platform.
Speaker: Benjamin LaRoque
CHEP2019_LaRoque_zero_deadtime.pdf
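As a toy illustration of the power-excess trigger idea described above (not the collaboration's processing code), the following NumPy sketch flags analysis windows whose power in a narrow frequency band exceeds a noise-only expectation; the sampling rate, band edges and threshold factor are all invented.

```python
# Toy frequency-domain trigger: flag time windows whose power in a narrow
# frequency band exceeds a noise-only expectation. All numbers are
# illustrative, not Project 8 parameters.
import numpy as np

rng = np.random.default_rng(0)
fs = 1.0e6                      # toy sampling rate [samples/s]
n_window = 4096                 # samples per analysis window
n_windows = 64

t = np.arange(n_window) / fs
data = rng.normal(0.0, 1.0, size=(n_windows, n_window))
data[40] += 0.5 * np.sin(2 * np.pi * 123.0e3 * t)   # weak tone in one window

spectra = np.abs(np.fft.rfft(data, axis=1)) ** 2    # power spectrum per window
freqs = np.fft.rfftfreq(n_window, d=1.0 / fs)

band = (freqs > 120.0e3) & (freqs < 126.0e3)        # narrow band of interest
band_power = spectra[:, band].sum(axis=1)

# Trigger if the band power exceeds the median by a hypothetical factor.
threshold = 1.5 * np.median(band_power)
triggered = np.flatnonzero(band_power > threshold)
print("triggered windows:", triggered)
```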
Track 2 – Offline Computing: ML Reconstruction & PID Riverbank R6
Convener: Teng Jian Khoo (Universite de Geneve (CH))
A Graph Neural Network Approach for Neutrino Signal Reconstruction from LarTPC Raw Data 15m
The Deep Underground Neutrino Experiment (DUNE) is an international effort to build the next-generation neutrino observatory to answer fundamental questions about the nature of elementary particles and their role in the universe. Integral to DUNE is the process of reconstruction, where the raw data from Liquid Argon Time Projection Chambers (LArTPC) are transformed into products that can be used for physics analysis. Experimental data is currently obtained from a prototype of DUNE (ProtoDUNE) that is built as a full scale engineering prototype and uses a beam of charged particles, rather than a neutrino beam, to test the detector response. The reconstruction software consumes on average 35% of the computational resources at Fermilab. For DUNE it is expected that reconstruction will play a far greater, and computationally more expensive, role, as signal activity will be significantly reduced upon deployment of the neutrino beam. Consequently, identifying signals within the raw data will be a much harder task. Alternative approaches to neutrino signal reconstruction must be investigated in anticipation of DUNE. Machine learning approaches for reconstruction are being investigated, but currently, no end-to-end solution exists. As part of an end-to-end reconstruction solution, we propose an approach using Graph Neural Networks (GNN) to identify signals (i.e. hits) within the raw data. In particular, since the raw data of LArTPCs are both spatial and temporal in nature, Graph Spatial-Temporal Networks (GSTNs), capable of capturing dependency relationships among hits, are promising models. Our solution can be deployed for both online (trigger-level) and offline reconstruction. In this work, we describe the methodology of GNNs (and GSTNs in particular) for neutrino signal reconstruction and the preliminary results.
Speaker: Federico Carminati (CERN)
GraphNNDune.pdf GraphNNDune.pptx
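For readers unfamiliar with the building block behind GNNs and GSTNs, the following plain-PyTorch sketch shows a single message-passing layer over a toy hit graph. It is a generic illustration, not the architecture studied in this talk; the feature set and graph are invented.

```python
# Minimal message-passing layer over a hit graph, in plain PyTorch.
# Generic illustration of the GNN building block; shapes and features invented.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())
        self.upd = nn.Sequential(nn.Linear(in_dim + out_dim, out_dim), nn.ReLU())

    def forward(self, x, edge_index):
        src, dst = edge_index                        # each of shape [n_edges]
        m = self.msg(torch.cat([x[src], x[dst]], dim=1))
        agg = torch.zeros(x.size(0), m.size(1), device=x.device)
        agg.index_add_(0, dst, m)                    # sum messages per receiving hit
        return self.upd(torch.cat([x, agg], dim=1))

# Toy graph: 5 hits with (wire, time, charge) features and 4 directed edges.
x = torch.rand(5, 3)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])
layer = MessagePassingLayer(in_dim=3, out_dim=8)
hit_embeddings = layer(x, edge_index)
print(hit_embeddings.shape)   # torch.Size([5, 8])
```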
Particle Reconstruction with Graph Networks for irregular detector geometries 15m
We use Graph Networks to learn representations of irregular detector geometries and perform on them typical tasks such as cluster segmentation or pattern recognition. Thanks to the flexibility and generality of the graph architecture, this kind of network can be applied to detectors of arbitrary geometry, representing each detector element through a unique identification (e.g., its physical position) and its readout value, and embedding the elements as vertices in a graph. We apply this idea to tasks related to calorimetry and tracking in LHC-like conditions, investigating original graph architectures to optimise performance and memory footprint.
Speaker: Maurizio Pierini (CERN)
Pierini.pdf
Interaction networks for jet characterisation at the LHC 15m
We study the use of interaction networks to perform tasks related to jet reconstruction. In particular, we consider jet tagging for generic boosted-jet topologies, tagging of large-momentum H$\to$bb decays, and anomalous-jet detection. The achieved performance is compared to state-of-the-art deep learning approaches based on Convolutional or Recurrent architectures. Unlike these approaches, Interaction Networks allow one to reach state-of-the-art performance without making assumptions on the underlying data (e.g., detector geometry or resolution, particle ordering criterion, etc.). Given their flexibility, Interaction Networks provide an interesting possibility for deployment-friendly deep learning algorithms for the LHC experiments.
Speaker: Jean-Roch Vlimant (California Institute of Technology (US))
vlimant_CHEP19_JEDI_Nov19.pdf
A deep neural network method for analyzing the CMS High Granularity Calorimeter (HGCAL) events 15m
For the High Luminosity LHC, the CMS collaboration made the ambitious choice of a high granularity design to replace the existing endcap calorimeters. The thousands of particles coming from the multiple interactions create showers in the calorimeters, depositing energy simultaneously in adjacent cells. The data are analogous to a 3D gray-scale image that must be properly reconstructed.
In this talk we will investigate how to localize and identify the thousands of showers in such events with a Deep Neural Network model. This problem is well known in the computer vision domain, where it belongs to the challenging class of "object detection" tasks, which are significantly harder than "mere" image classification/regression because of their mixed goals: the cluster/pattern identification (cluster type), its localization (bounding box), and the object segmentation (mask) in the scene.
Our project has many similarities with those treated in industry but adds several technological challenges, such as the 3D treatment. We will present the Mask R-CNN model, which has already proven its efficiency in industry (for 2D images), and how we extended it to tackle 3D HGCAL data. To conclude, we will present the first results of this challenge.
Speaker: Gilles Grasseau (Centre National de la Recherche Scientifique (FR))
CHEP-19-HGCAL2D-v2.pdf
GRAAL: A novel package to reconstruct data of triple-GEM detectors 15m
Micro-Pattern Gas Detectors (MPGDs) are the new frontier among gas tracking systems. Among them, triple Gas Electron Multiplier (triple-GEM) detectors are widely used. In particular, cylindrical triple-GEM (CGEM) detectors can be used as inner tracking devices in high energy physics experiments. In this contribution, a new offline software package called GRAAL (Gem Reconstruction And Analysis Library) is presented: the digitization, reconstruction and alignment algorithms, and the analysis of the data collected with APV-25 and TIGER ASICs within the GRAAL framework, are reported. An innovative cluster reconstruction method based on the charge centroid, the micro-TPC approach and their merge is discussed, and the detector performance is evaluated experimentally for both planar triple-GEM and CGEM prototypes.
Speaker: Riccardo Farinelli (Universita e INFN, Ferrara (IT))
20191024_GRAAL_v4.pdf
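The charge-centroid part of the cluster reconstruction can be illustrated in a detector-agnostic way: the cluster position is the charge-weighted mean of the fired strip positions. The following NumPy sketch is not GRAAL code; the pitch and charges are invented.

```python
# Charge-centroid position of a strip cluster: weighted mean of the strip
# positions using the measured charges as weights. Detector-agnostic
# illustration, not GRAAL code; pitch and charges are invented.
import numpy as np

pitch_mm = 0.65                                      # hypothetical strip pitch
strip_ids = np.array([118, 119, 120, 121, 122])      # strips in one cluster
charges = np.array([12.0, 85.0, 140.0, 77.0, 9.0])   # ADC counts per strip

positions = strip_ids * pitch_mm
centroid = np.average(positions, weights=charges)

# A simple spread estimate from the charge-weighted RMS of the cluster.
rms = np.sqrt(np.average((positions - centroid) ** 2, weights=charges))
print(f"cluster centroid = {centroid:.3f} mm, charge-weighted RMS = {rms:.3f} mm")
```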
Particle identification algorithms for the Panda Barrel DIRC 15m
The innovative Barrel DIRC (Detection of Internally Reflected Cherenkov light) counter will provide hadronic particle identification (PID) in the central region of the PANDA experiment at the new Facility for Antiproton and Ion Research (FAIR), Darmstadt, Germany. This detector is designed to separate charged pions and kaons with at least 3 standard deviations for momenta up to 3.5 GeV/c covering the polar angle range of 22-140 degree.
An array of microchannel plate photomultiplier tubes is used to detect the location and arrival time of the Cherenkov photons with a position resolution of 2 mm and time precision of about 100 ps. Two reconstruction algorithms have been developed to make optimum use of the observables and to determine the performance of the detector. The "geometrical reconstruction" performs PID by reconstructing the value of the Cherenkov angle and using it in a track-by-track maximum likelihood fit. This method mostly relies on the position of the detected photons in the reconstruction, while the "time imaging" utilizes both position and time information, and directly performs the maximum likelihood fit using probability density functions determined analytically or from detailed simulations.
Geant4 simulations and data from particle beams were used to optimize both algorithms in terms of PID performance and reconstruction speed. We will present the current status of development and discuss the advantages of each algorithm.
Speaker: Dr Roman Dzhygadlo (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
chep19_416_rdzhygadlo.pdf
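The track-by-track maximum likelihood fit described above can be illustrated schematically by comparing summed per-photon log-likelihoods under the pion and kaon hypotheses. The sketch below uses Gaussian Cherenkov-angle PDFs with an invented per-photon resolution and refractive index; it is not the PANDA Barrel DIRC reconstruction.

```python
# Toy track-by-track likelihood PID: compare summed photon log-likelihoods
# under the pion and kaon hypotheses. Gaussian Cherenkov-angle PDFs and all
# numbers are illustrative, not the PANDA Barrel DIRC reconstruction.
import numpy as np
from scipy.stats import norm

def expected_cherenkov_angle(p_gev, mass_gev, n=1.47):
    beta = p_gev / np.sqrt(p_gev**2 + mass_gev**2)
    return np.arccos(1.0 / (n * beta))

p = 3.0                                   # track momentum [GeV/c]
theta_pi = expected_cherenkov_angle(p, 0.1396)
theta_k = expected_cherenkov_angle(p, 0.4937)
sigma = 0.009                             # invented per-photon angle resolution [rad]

rng = np.random.default_rng(1)
measured = rng.normal(theta_k, sigma, size=30)    # photons from a true kaon

lnL_pi = norm.logpdf(measured, loc=theta_pi, scale=sigma).sum()
lnL_k = norm.logpdf(measured, loc=theta_k, scale=sigma).sum()
verdict = "kaon" if lnL_k > lnL_pi else "pion"
print(f"ln L(K) - ln L(pi) = {lnL_k - lnL_pi:.1f} -> {verdict}")
```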
Track 3 – Middleware and Distributed Computing: Workload Management & Cost Model Riverbank R3
Convener: Catherine Biscarat (L2I Toulouse, IN2P3/CNRS (FR))
The DIRAC interware: current, upcoming and planned capabilities and technologies 15m
Efficient access to distributed computing and storage resources is mandatory for the success of current and future High Energy and Nuclear Physics Experiments. DIRAC is an interware to build and operate distributed computing systems. It provides a development framework and a rich set of services for the Workload, Data and Production Management tasks of large scientific communities. A single DIRAC installation provides a complete solution for the distributed computing of one, or more than one collaboration. The DIRAC Workload Management System (WMS) provides a transparent, uniform interface for managing computing resources. The DIRAC Data Management System (DMS) offers all the necessary tools to ensure data handling operations: it supports transparent access to storage resources based on multiple technologies, and is easily expandable. Distributed Data management can be performed, also using third party services, and operations are resilient with respect to failures. DIRAC is highly customizable and can be easily extended. For these reasons, a vast and heterogeneous set of scientific collaborations have adopted DIRAC as the base for their computing models. Users from different experiments can interact with the system in different ways, depending on their specific tasks, expertise level and previous experience using command line tools, python APIs or Web Portals. The requirements of the diverse DIRAC user communities and hosting infrastructures triggered multiple developments to improve the system usability: examples include the adoption of industry standard authorization and authentication infrastructure solutions, the management of diverse computing resources (cloud, HPC, GPGPU, etc.), the handling of high-intensity work and data flows, but also advanced monitoring and accounting using no-SQL based solutions and message queues. This contribution will highlight DIRAC's current, upcoming and planned capabilities and technologies.
Speaker: Federico Stagni (CERN)
DIRAC_CHEP2019.pdf
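For orientation, submitting work through the DIRAC Python API typically looks like the sketch below; the exact imports and method signatures can differ between DIRAC versions and community extensions, so treat the names as indicative rather than definitive.

```python
# Indicative sketch of submitting a job through the DIRAC Python API.
# Exact imports and method signatures can vary between DIRAC versions and
# community extensions; consult the DIRAC documentation for specifics.
from DIRAC.Core.Base import Script
Script.parseCommandLine()  # initialises the DIRAC client configuration

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("chep2019-example")
job.setExecutable("/bin/echo", arguments="hello from DIRAC")
job.setCPUTime(3600)

dirac = Dirac()
result = dirac.submitJob(job)
if result["OK"]:
    print("Submitted job with ID", result["Value"])
else:
    print("Submission failed:", result["Message"])
```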
Automated and Distributed Monte Carlo Generation for GlueX 15m
MCwrapper is a set of systems that manages the entire Monte Carlo production workflow for GlueX and provides standards for how that Monte Carlo is produced. MCwrapper was designed to be able to utilize a variety of batch systems in a way that is relatively transparent to the user, thus enabling users to quickly and easily produce valid simulated data at home institutions worldwide. Additionally, MCwrapper supports an autonomous system that takes users' project submissions via a custom web application. The system then atomizes the project into individual jobs, matches these jobs to resources, and monitors the jobs' status. The entire system is managed by a database which tracks almost all facets of the system, from user submissions to the individual jobs themselves. Users can interact with their submitted projects online via a dashboard or, in the case of testing failure, can modify their project requests from a link contained in an automated email. Beginning in 2018 the GlueX Collaboration began to utilize the Open Science Grid (OSG) to handle the bulk of its simulation tasks; these tasks are currently being performed on the OSG automatically via MCwrapper. This talk will outline the entire system of MCwrapper, its use cases, and the unique challenges facing the system.
Speaker: Thomas Britton (JLab)
CHEP_2019_MCwrapper.odp CHEP_2019_MCwrapper.pdf
Production processing and workflow management software evaluation in the DUNE collaboration 15m
The Deep Underground Neutrino Experiment (DUNE) will be the world's foremost neutrino detector when it begins taking data in the mid-2020s. Two prototype detectors, collectively known as ProtoDUNE, have begun taking data at CERN and have accumulated over 3 PB of raw and reconstructed data since September 2018. Particle interactions within liquid argon time projection chambers are challenging to reconstruct, and the collaboration has set up a dedicated Production Processing group to perform centralized reconstruction of the large ProtoDUNE datasets as well as to generate large-scale Monte Carlo simulation. Part of the production infrastructure includes workflow management software and monitoring tools that are necessary to efficiently submit and monitor the large and diverse set of jobs needed to meet the experiment's goals. We will give a brief overview of DUNE and ProtoDUNE, describe the various types of jobs within the Production Processing group's purview, and discuss the software and workflow management strategies that are currently in place to meet existing demand. We will conclude with a description of our requirements in a workflow management software solution and our planned evaluation process.
Speaker: Dr Kenneth Richard Herner (Fermi National Accelerator Laboratory (US))
DUNE_Prod_CHEP2019.pdf
Evolution of the CMS Global Submission Infrastructure for the HL-LHC Era 15m
Efforts in distributed computing of the CMS experiment at the LHC at CERN are now focusing on the functionality required to fulfill the projected needs for the HL-LHC era. Cloud and HPC resources are expected to be dominant relative to resources provided by traditional Grid sites, being also much more diverse and heterogeneous. Handling their special capabilities or limitations and maintaining global flexibility and efficiency, while also operating at scales much higher than the current capacity, are the major challenges being addressed by the CMS Submission Infrastructure team. This contribution will discuss the risks to the stability and scalability of the CMS HTCondor infrastructure extrapolated to such a scenario, thought to be derived mostly from its growing complexity, with multiple Negotiators and schedulers flocking work to multiple federated pools. New mechanisms for enhanced customization and control over resource allocation and usage, mandatory in this future scenario, will be also presented.
Speaker: Antonio Perez-Calero Yzquierdo (Centro de Investigaciones Energéti cas Medioambientales y Tecno)
CMS_SubInf_Evol_CHEP2019.pdf
Efficient Iterative Calibration on the Grid using iLCDirac 15m
Software tools for detector optimization studies for future experiments need to be efficient and reliable. One important ingredient of the detector design optimization concerns the calorimeter system. Every change of the calorimeter configuration requires a new set of overall calibration parameters, which in turn requires a new calorimeter calibration to be done. An efficient way to perform calorimeter calibration is therefore essential in any detector optimization tool set.
In this contribution, we present the implementation of a calibration system in iLCDirac, which is an extension of the DIRAC grid interware. Our approach provides more direct control over the grid resources to reduce overhead of file download and job initialisation, and provides more flexibility during the calibration process. The service controls the whole chain of a calibration procedure, collects results from finished iterations and redistributes new input parameters among worker nodes. A dedicated agent monitors the health of running jobs and resubmits them if needed. Each calibration has an up-to-date backup which can be used for recovery in case of any disruption in the operation of the service.
As a use case, we will present a study of optimization of the calorimetry system of the CLD detector concept for FCC-ee, which has been adopted from the CLICdet detector model. The detector has been simulated with the DD4hep package and calorimetry performance have been studied with the particle flow package PandoraPFA.
Speaker: Andre Sailer (CERN)
191104_ilcdirac_calibration_chep19.pdf
New developments in cost modeling for the LHC computing 15m
The increase in the scale of LHC computing during Run 3 and Run 4 (HL-LHC) will certainly require radical changes to the computing models and the data processing of the LHC experiments. The working group established by WLCG and the HEP Software Foundation to investigate all aspects of the cost of computing and how to optimise them has continued producing results and improving our understanding of this process. In particular, experiments have developed more sophisticated ways to calculate their resource needs, and we have a much more detailed process to calculate infrastructure costs. This includes studies on the impact of HPC and GPU based resources on meeting the computing demands. We have also developed and perfected tools to quantitatively study the performance of experiment workloads, and we are actively collaborating with other activities related to data access, benchmarking and technology cost evolution. In this contribution we present our recent developments and results and outline the directions of future work.
Speaker: Andrea Sciabà (CERN)
Cost model CHEP2019.pdf Cost model CHEP2019.pptx
Track 4 – Data Organisation, Management and Access: community input, experiments and new perspectives Riverbank R8
Convener: Xavier Espinal (CERN)
ServiceX – A Distributed, Caching, Columnar Data Delivery Service 15m
We will describe a component of the Intelligent Data Delivery Service being developed in collaboration with IRIS-HEP and the LHC experiments. ServiceX is an experiment-agnostic service to enable on-demand data delivery specifically tailored for nearly-interactive vectorized analysis. This work is motivated by the data engineering challenges posed by HL-LHC data volumes and the increasing popularity of python and Spark-based analysis workflows.
ServiceX gives analyzers the ability to query events by dataset metadata. It uses containerized transformations to extract just the data required for the analysis. This operation is collocated with the data lake to avoid transferring unnecessary branches over the WAN. Simple filtering operations are supported to further reduce the amount of data transferred.
Transformed events are cached in a columnar datastore to accelerate delivery of subsequent similar requests. ServiceX will learn commonly related columns and automatically include them in the transformation to increase the potential for cache hits by other users.
Selected events are streamed to the analysis system using an efficient wire protocol that can be readily consumed by a variety of computational frameworks. This reduces time-to-insight for physics analysis by delegating to ServiceX the complexity of event selection, slimming, reformatting, and streaming.
Speaker: Benjamin Galewsky
BenGalewskyCHEP2019.pdf BenGalewskyCHEP2019.pptx
OSiRIS: A Distributed Storage and Networking Project Update 15m
We will report on the status of the OSiRIS project (NSF Award #1541335, UM, IU, MSU and WSU) after its fourth year. OSiRIS is delivering a distributed Ceph storage infrastructure coupled together with software-defined networking to support multiple science domains across Michigan's three largest research universities. The project's goal is to provide a single scalable, distributed storage infrastructure that allows researchers at each campus to work collaboratively with other researchers across campus or across institutions. The NSF CC*DNI DIBBs program which funded OSiRIS is seeking solutions to the challenges of multi-institutional collaborations involving large amounts of data and we are exploring the creative use of Ceph and networking to address those challenges.
We will present details on the current status of the project and its various science domain users and use-cases. In the presentation we will cover the various design choices, configuration, tuning and operational challenges we have encountered in providing a multi-institutional Ceph deployment interconnected by a monitored, programmable network fabric. We will conclude with our plans for the final year of the project and its longer term outlook.
Speaker: Shawn Mc Kee (University of Michigan (US))
CHEP2019-OSiRIS.pdf
Distributed data management on Belle II 15m
The Belle II experiment started taking physics data in March 2019, with an estimated dataset of order 60 petabytes expected by the end of operations in the mid-2020s. Originally designed as a fully integrated component of the BelleDIRAC production system, the Belle II distributed data management (DDM) software needs to manage data across 70 storage elements worldwide for a collaboration of nearly 1000 physicists. By late 2018, this software required significant performance improvements to meet the requirements of physics data taking and was seriously lacking in automation. Rucio, the DDM solution created by ATLAS, was an obvious alternative but required tight integration with BelleDIRAC and a seamless yet non-trivial migration. This contribution describes the work done on both DDM options, the current status of the software running successfully in production and the problems associated with trying to balance long-term operations cost against short term risk.
Speaker: Siarhei Padolski (BNL)
CHEP_Padolski.pdf
Jiskefet, a bookkeeping application for ALICE 15m
A new bookkeeping system called Jiskefet is being developed for A Large Ion Collider Experiment (ALICE) during Long Shutdown 2, to be in production until the end of LHC Run 4 (2029).
Jiskefet unifies two functionalities. The first is gathering, storing and presenting metadata associated with the operations of the ALICE experiment. The second is tracking the asynchronous processing of the physics data.
It will replace the existing ALICE Electronic Logbook and AliMonitor, allowing for a technology refresh and the inclusion of new features based on the experience collected during Run 1 and Run 2.
The front end leverages widely used web technologies such as TypeScript and NodeJS and adapts to various clients such as tablets, mobile devices and other screens. The back end includes a Swagger based REST API and a relational database.
This paper will describe the current status of the development, the initial experience in detector standalone commissioning setups and the future plans. It will also describe the organization of the work done by various student teams who work on Jiskefet in sequential and parallel semesters and how continuity is guaranteed by using guidelines on coding, documentation and development.
Speaker: Marten Teitsma (Amsterdam University of Applied Sciences (NL))
chep2019_jiskefet_presentation.pdf
Development of the JUNO Conditions Data Management System 15m
(On behalf of the JUNO collaboration)
The JUNO (Jiangmen Underground Neutrino Observatory) experiment is designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters with an unprecedented energy resolution of 3% at 1 MeV. It is composed of a 20 kton liquid scintillator central detector equipped with 18000 20" PMTs and 25000 3" PMTs, a water pool with 2000 20" PMTs, and a top tracker. Conditions data, coming from calibration and detector monitoring, are heterogeneous: different types of conditions data have different write rates, data formats and data volumes. The JUNO conditions data management system (JCDMS) is being developed to treat all these heterogeneous conditions data homogeneously, in order to provide easy management and access with both RESTful API and web interfaces and to support good scalability and maintainability for long-term running. We will present the status and development of JCDMS including the data model, workflows, interfaces, data caching and performance of the system.
Speaker: Prof. Xingtao Huang (Shandong University)
CHEP2019_Database_v3.pdf
Evaluation of the ATLAS model for remote access to database resident information for LHC Run 3 15m
The ATLAS model for remote access to database resident information relies upon a limited set of dedicated and distributed Oracle database repositories complemented with the deployment of Frontier system infrastructure on the WLCG. ATLAS clients with network access can get the database information they need dynamically by submitting requests to a squid server in the Frontier network which provides results from its cache or passes new requests along the network to launchpads co-located at one of the Oracle sites (the master Oracle database at CERN or one of the Tier 1 Oracle database replicas). Since the beginning of LHC Run 1, the system has evolved in terms of client, squid, and launchpad optimizations but the distribution model has remained fundamentally unchanged.
On the whole, thanks to its overall redundancy, the system has been broadly successful in providing data to clients with relatively few disruptions, even while site databases were down. At the same time, its quantitative performance characteristics, such as the global throughput of the system, the load distribution between sites, and the constituent interactions that make up the whole, were largely unknown. More recently, information has been collected from launchpad and squid logs into an Elastic Search repository, which has enabled a wide variety of studies of various aspects of the system.
This presentation will describe dedicated studies of the data collected in Elastic Search over the previous year to evaluate the efficacy of the distribution model. Specifically, we will quantify any advantages that the redundancy of the system offers, as well as related aspects such as the geographical dependence of wait times seen by clients in getting a response to their requests. These studies are essential so that during LS2 (the long shutdown between LHC Run 2 and Run 3), we can adapt the system in preparation for the expected increase in the system load in the ramp up to Run 3 operations.
Speaker: Elizabeth Gallas (University of Oxford (GB))
191104_CHEP2019_ATLAS_DB_Dist_final.pdf
Track 5 – Software Development: General frameworks and schedulers Riverbank R2
Convener: Dr Martin Ritter (LMU / Cluster Universe)
A software framework for FCC studies: status and plans 15m
The Future Circular Collider (FCC) is designed to provide unprecedented luminosity and unprecedented centre-of-mass energies. The physics reach and potential of the different FCC options - $e^+e^-$, $pp$, $e^-p$ - have been studied and documented in dedicated Conceptual Design Reports (CDRs) published at the end of 2018.
Conceptual detector designs have been developed for such studies and tested with a mixture of fast and full simulations. These investigations have been conducted using a common software framework called FCCSW.
In this presentation, after summarising the improvements implemented in FCCSW to achieve the results included in the CDRs, we will present the current development plans to support the continuation of the physics potential and detector concept optimization studies in view of future strategic decisions, in particular for the electron-positron machine.
Speaker: Gerardo Ganis (CERN)
chep2019_FCC-software.pdf
LArSoft and Future Framework Directions at Fermilab 15m
The diversity of the scientific goals across HEP experiments necessitates unique bodies of software tailored for achieving particular physics results. The challenge, however, is to identify the software that must be unique, and the code that is unnecessarily duplicated, which results in wasted effort and inhibits code maintainability.
Fermilab has a history of supporting and developing software projects that are shared among HEP experiments. Fermilab's scientific computing division currently expends effort in maintaining and developing the LArSoft toolkit, used by liquid argon TPC experiments, as well as the event-processing framework technologies used by LArSoft, CMS, DUNE, and the majority of Fermilab-hosted experiments. As computing needs for DUNE and the HL-LHC become clearer, the computing models are being rethought. This talk will focus on Fermilab's plans for addressing the evolving software landscape as it relates to LArSoft and the event-processing frameworks, and how commonality among experiment software can be achieved while still supporting customizations necessary for a given experiment's physics goals.
Speaker: Christopher Jones (Fermi National Accelerator Lab. (US))
chep2019-larsoft-knoepfel.pdf
ALFA: A framework for building distributed applications 15m
The ALFA framework is a joint development between ALICE Online-Offline and FairRoot teams. ALFA has a distributed architecture, i.e. a collection of highly maintainable, testable, loosely coupled, independently deployable processes.
ALFA allows the developer to focus on building single-function modules with well-defined interfaces and operations. The communication between the independent processes is handled by the FairMQ transport layer. FairMQ offers multiple implementations of its abstract data transport interface: it integrates popular data transport technologies like ZeroMQ and nanomsg, but also provides shared memory and RDMA transports (based on libfabric) for high-throughput, low-latency applications. Moreover, FairMQ allows a single process to use multiple different transports at the same time.
FairMQ based processes can be controlled and orchestrated via different systems by implementing the corresponding plugin. However, ALFA also delivers the Dynamic Deployment System (DDS) as an independent set of utilities and interfaces, providing a dynamic distribution of different user processes on any Resource Management System (RMS) or on a laptop.
ALFA is already being tested and used by different experiments in different stages of data processing, as it offers easy integration of heterogeneous hardware and software. Examples of ALFA usage in different stages of event processing will be presented: in detector read-out, in online reconstruction, and in the purely offline world of detector simulations.
Speaker: Mohammad Al-Turany (GSI - Helmholtzzentrum fur Schwerionenforschung GmbH (DE))
alfa_chep2019.pdf alfa_chep2019.pptx
Using OpenMP for HEP Framework Algorithm Scheduling 15m
The OpenMP standard is the primary mechanism used at high performance computing facilities to allow intra-process parallelization. In contrast, many HEP specific software (such as CMSSW, GaudiHive, and ROOT) make use of Intel's Threading Building Blocks (TBB) library to accomplish the same goal. In this talk we will discuss our work to compare TBB and OpenMP when used for scheduling algorithms to be run by a HEP style data processing framework (i.e. running hundreds of interdependent algorithms at most once for each event read from the detector). This includes both scheduling of different algorithms to be run concurrently as well as scheduling concurrent work within one algorithm. As part of the discussion we present an overview of the OpenMP threading model. We also explain how we used OpenMP when creating a simplified HEP-like processing framework. Using that simplified framework, and a similar one written using TBB, we will present performance comparisons between TBB and different compiler versions of OpenMP.
OpenMP CHEP 2019.pdf
Configuration and scheduling of the LHCb trigger application 15m
The high-level trigger (HLT) of LHCb in Run 3 will have to process 5 TB/s of data, which is about two orders of magnitude larger compared to Run 2. The second stage of the HLT runs asynchronously to the LHC, aiming for a throughput of about 1 MHz. It selects analysis-ready physics signals by O(1000) dedicated selections totaling O(10000) algorithms to achieve maximum efficiency. This poses two problems: correct configuration of the application and low-overhead execution of individual algorithms and evaluation of the decision logic.
A python-based system for configuring the data and control flow of the Gaudi-based application, including all components, is presented. It is designed to be user-friendly by using functions for modularity and removing indirection layers employed previously in Run 2. Robustness is achieved by fully eliminating global state and instead building the data flow graph in a functional manner while keeping configurability of the full call stack.
A prototype of the second HLT stage comprising all recent features including a new scheduling algorithm, a faster data store and the above mentioned configuration system is benchmarked, demonstrating the performance of the framework with the expected application complexity.
Speaker: Niklas Nolte (CERN / Technische Universitaet Dortmund (DE))
CHEP_configuration_talk (1).pdf
Allen: A software framework for the GPU High Level Trigger 1 of LHCb 15m
As part of the LHCb detector upgrade in 2021, the hardware-level trigger will be removed, coinciding with an increase in luminosity. As a consequence, about 40 Tbit/s of data will be processed in a full-software trigger, a challenge that has prompted the exploration of alternative hardware technologies. Allen is a framework that permits concurrent many-event execution targeting many-core architectures. We present the core infrastructure of this R&D project developed in the context of the LHCb Upgrade I. Data transmission overhead is hidden with a custom memory manager, and GPU resource usage is maximized employing a deterministic scheduler. Our framework is extensible and covers the control flow and data dependency requirements of the LHCb High Level Trigger 1 algorithms. We discuss the framework design, performance and integration aspects of a full realization of a GPU High Level Trigger 1 in LHCb.
Speaker: Daniel Hugo Campora Perez (Universidad de Sevilla (ES))
dcampora_chep2019.pdf
Track 6 – Physics Analysis: Analysis Tools & Methods Hall G
Convener: Ross Young (University of Adelaide)
zfit: scalable pythonic fitting 15m
Statistical modelling is a key element for High-Energy Physics (HEP) analysis. Currently, most of this modelling is performed with the ROOT/RooFit toolkit which is written in C++ and provides Python bindings which are only loosely integrated into the scientific Python ecosystem. We present zfit, a new alternative to RooFit, written in pure Python. Built on top of TensorFlow (a modern, high level computing library for massive computations), zfit provides a high level interface for advanced model building and fitting. It is also designed to be extendable in a very simple way, allowing the usage of cutting-edge developments from the scientific Python ecosystem in a transparent way. In this talk, the main features of zfit are introduced, and its extension to data analysis, especially in the context of HEP experiments, is discussed.
Speaker: Jonas Eschle (Universitaet Zuerich (CH))
LOOKUP_zfit_CHEP19_Jonas_Eschle.pdf zfit_CHEP19_Jonas_Eschle.pdf zfit on github
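A minimal unbinned fit with zfit, closely following the package's documented quickstart, looks like the sketch below; argument names may differ slightly between releases.

```python
# Minimal unbinned maximum-likelihood fit with zfit, closely following the
# package quickstart; argument names may differ slightly between releases.
import numpy as np
import zfit

obs = zfit.Space("x", limits=(-5, 5))

mu = zfit.Parameter("mu", 0.2, -1.0, 1.0)
sigma = zfit.Parameter("sigma", 1.2, 0.1, 5.0)
gauss = zfit.pdf.Gauss(mu=mu, sigma=sigma, obs=obs)

data_np = np.random.normal(0.0, 1.0, size=10_000)
data = zfit.Data.from_numpy(obs=obs, array=data_np)

nll = zfit.loss.UnbinnedNLL(model=gauss, data=data)
minimizer = zfit.minimize.Minuit()
result = minimizer.minimize(nll)
print(result.params)
```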
Machine Learning with ROOT/TMVA 15m
ROOT provides, through TMVA, machine learning tools for data analysis at HEP experiments and beyond. In this talk, we present recently included features in TMVA and the strategy for future developments in the diversified machine learning landscape. Focus is put on fast machine learning inference, which enables analysts to deploy their machine learning models rapidly on large scale datasets. The new developments are paired with newly designed C++ and Python interfaces supporting modern C++ paradigms and full interoperability in the Python ecosystem.
We also present a new deep learning implementation for convolutional neural networks using the cuDNN library for GPUs. We show benchmarking results in terms of training time and inference time, comparing with other machine learning libraries such as Keras/TensorFlow.
Speaker: Stefan Wunsch (KIT - Karlsruhe Institute of Technology (DE))
CHEP 2019_ Machine Learning with ROOT_TMVA.pdf
Evolution of web-based analysis for Machine Learning and LHC Experiments: power of integrating storage, interactivity and collaboration with JupyterLab in SWAN 15m
SWAN (Service for Web-based ANalysis) is a CERN service that allows users to perform interactive data analysis in the cloud, in a "software as a service" model. The service is a result of the collaboration between IT Storage and Databases groups and EP-SFT group at CERN. SWAN is built upon the widely-used Jupyter notebooks, allowing users to write - and run - their data analysis using only a web browser. SWAN is a data analysis hub: users have immediate access to user storage CERNBox, entire LHC data repository on EOS, software (CVMFS) and computing resources, in a pre-configured, ready-to-use environment. Sharing of notebooks is fully integrated with CERNBox and users can easily access their notebook projects on all devices supported by CERNBox.
In the first quarter of 2019 we have recorded more than 1300 individual users of SWAN, with a majority from all four LHC experiments. Integration of SWAN with CERN Spark clusters is at the core of the new controls data logging system for the LHC. Every month new users discover SWAN through tutorials on data analysis and machine learning.
The SWAN service evolves, driven by the users' needs. In the future SWAN will provide access to GPUs, to the more powerful interface of JupyterLab - which replaces Jupyter notebooks - and to a more configurable, easier to use and more shareable way of setting the software environment of Projects and notebooks.
This presentation will update the HEP community with the status of this effort and its future direction, together with the general evolution of SWAN.
Speaker: Jakub Moscicki (CERN)
SWAN-CHEP2019.pdf
The F.A.S.T. toolset: Using YAML to make tables out of trees 15m
The Faster Analysis Software Taskforce (FAST) is a small, European group of HEP researchers that have been investigating and developing modern software approaches to improve HEP analyses. We present here an overview of the key product of this effort: a set of packages that allows a complete implementation of an analysis using almost exclusively YAML files. Serving as an analysis description language (ADL), this toolset builds on top of the evolving technologies from the Scikit-HEP and IRIS-HEP projects as well as industry-standard libraries such as Pandas and Matplotlib. Data processing starts with event-level data (the trees) and can proceed by adding variables, selecting events, performing complex user-defined operations and binning data, as defined in the YAML description. The resulting outputs (the tables) are stored as Pandas dataframes or FlatBuffers defined by Aghast, which can be programmatically manipulated. The F.A.S.T. tools can then convert these into plots or inputs for fitting frameworks. No longer just a proof-of-principle, these tools are now being used in CMS analyses, the LUX-ZEPLIN experiment, and by students on several other experiments. In this talk we will showcase these tools through examples, highlighting how they address the different experiments' needs, and compare them to other similar approaches.
Speaker: Benjamin Krikler (University of Bristol (GB))
191104_ The FAST-HEP toolkit @ CHEP19.pdf
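Independent of the F.A.S.T. packages' own YAML interface, the "trees to tables" idea boils down to adding variables, selecting events and binning them into aggregated dataframes. The following generic pandas sketch mimics that concept only; it does not use the F.A.S.T. API and all variables are invented.

```python
# Generic "trees to tables" illustration with pandas: add variables, select
# events and aggregate into a binned summary table. This mimics the concept
# only; the F.A.S.T. packages drive the equivalent steps from YAML files.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
events = pd.DataFrame({
    "muon_pt": rng.exponential(30.0, size=50_000),
    "muon_eta": rng.uniform(-2.4, 2.4, size=50_000),
    "weight": rng.normal(1.0, 0.05, size=50_000),
})

# "Add variables" and "select events" steps.
events["abs_eta"] = events["muon_eta"].abs()
selected = events[events["muon_pt"] > 20.0]

# "Bin data" step: a table of weighted counts in (pt, |eta|) bins.
pt_bins = pd.cut(selected["muon_pt"], bins=[20, 30, 50, 100, np.inf])
eta_bins = pd.cut(selected["abs_eta"], bins=[0, 0.8, 1.5, 2.4])
table = (selected.groupby([pt_bins, eta_bins], observed=False)["weight"]
                 .agg(["count", "sum"]))
print(table)
```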
High-dimensional data visualisation with the grand tour 15m
In physics we often encounter high-dimensional data, in the form of multivariate measurements or of models with multiple free parameters. The information encoded is increasingly explored using machine learning, but is not typically explored visually. The barrier tends to be visualising beyond 3D, but systematic approaches for this exist in the statistics literature. I will use examples from particle and astrophysics to show how we can use the "grand tour" for such multidimensional visualisations, for example to explore grouping in high dimension and for visual identification of multivariate outliers. I will then discuss the idea of projection pursuit, i.e. searching the high-dimensional space for "interesting" low dimensional projections, and illustrate how we can detect complex associations between multiple parameters.
Speaker: Ursula Laa (Monash University)
Faster RooFitting: Automated Parallel Computation of Collaborative Statistical Models 15m
RooFit is the statistical modeling and fitting package used in many experiments to extract physical parameters from reduced particle collision data. RooFit aims to separate particle physics model building and fitting (the users' goals) from their technical implementation and optimization in the back-end. In this talk, we outline our efforts to further optimize the back-end by automatically running major parts of user models in parallel on multi-core machines.
A major challenge is that RooFit allows users to define many different types of models, with different types of computational bottlenecks. Our automatic parallelization framework must then be flexible, while still reducing run-time by at least an order of magnitude, preferably more.
We have performed extensive benchmarks and identified at least three bottlenecks that will benefit from parallelization. To tackle these and possible future bottlenecks, we designed a parallelization layer that allows us to parallelize existing classes with minimal effort, but with high performance and retaining as much of the existing class's interface as possible.
The high-level parallelization model is a task-stealing approach. Our multi-process approach uses ZeroMQ socket-based communication. Preliminary results show speed-ups of factor 2 to 20, depending on the exact model and parallelization strategy.
We will integrate our parallelization layer into RooFit in such a way that impact on the end-user interface is minimal. This constraint, together with new features introduced in a concurrent RooFit project on vectorization and dataflow redesign, warrants a redesign of the RooFit internal classes for likelihood evaluation and other test statistics. We will briefly outline the implications of this for users.
Speaker: Dr Carsten Daniel Burgard (Nikhef National institute for subatomic physics (NL))
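For context, RooFit already exposes a coarse-grained multi-process option, NumCPU, which splits the likelihood calculation over data partitions; the work described above generalises this idea. The PyROOT sketch below shows only the existing interface, not the new parallelization layer.

```python
# Minimal PyROOT/RooFit fit using the existing NumCPU option, which splits
# the likelihood calculation over data partitions in separate processes.
# The new parallelization layer described above goes beyond this mechanism.
import ROOT

x = ROOT.RooRealVar("x", "x", -10, 10)
mean = ROOT.RooRealVar("mean", "mean", 0.0, -1.0, 1.0)
sigma = ROOT.RooRealVar("sigma", "sigma", 1.0, 0.1, 5.0)
gauss = ROOT.RooGaussian("gauss", "gauss", x, mean, sigma)

data = gauss.generate(ROOT.RooArgSet(x), 500000)   # toy dataset
fit_result = gauss.fitTo(data, ROOT.RooFit.NumCPU(4), ROOT.RooFit.Save())
fit_result.Print("v")
```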
Track 7 – Facilities, Clouds and Containers: Monitoring and benchmarking Riverbank R7
Convener: Oksana Shadura (University of Nebraska Lincoln (US))
Machine Learning-based Anomaly Detection of Ganglia Monitoring data in HEP Data Center 15m
The IHEP local cluster is a middle-sized HEP data center which consists of 20,000 CPU slots, hundreds of data servers, 20 PB of disk storage and 10 PB of tape storage. Once data taking of the JUNO and LHAASO experiments starts, the data volume processed at this center will approach 10 PB per year. At the current cluster scale, anomaly detection is a non-trivial task in daily maintenance. Traditional methods such as static thresholding of performance metrics or keyword searching in system logs require expertise in specific software systems and are not easy to transplant. Moreover, these methods cannot easily adapt to changes in workloads and hardware configurations. Anomalies are data points which are either different from the majority of others or different from the expectation of a reliable prediction model in a time series. With a sufficient training dataset, machine learning-based anomaly detection methods which leverage these statistical characteristics can largely avoid the disadvantages of traditional methods. The Ganglia monitoring system at IHEP collects billions of timestamped monitoring data points from the cluster every year, providing sufficient data samples to train machine learning models. In this presentation, we first describe a generic anomaly detection framework developed to facilitate different detection tasks. It handles common tasks such as data sample building, retagging and visualization, model calling, deviation measurement and performance measurement in machine learning-based anomaly detection methods. Then, for the mass storage system, we developed and trained a spatial anomaly detection model based on the Isolation Forest algorithm and a time series anomaly detection model based on LSTM recurrent neural networks to validate our idea. An initial performance comparison of our methods and traditional methods will be provided at the end of the presentation.
Speaker: Ms Juan Chen (IHEP)
chep2019_chenjuan.pdf
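As a self-contained illustration of the Isolation Forest approach on monitoring-like metrics, the sketch below uses synthetic data standing in for Ganglia metrics; it is not the IHEP framework.

```python
# Spatial anomaly detection on monitoring-like metrics with Isolation Forest.
# Synthetic data standing in for Ganglia metrics; not the IHEP framework.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
# Normal behaviour: (cpu_load, io_wait, net_mbps) for 2000 server samples.
normal = np.column_stack([
    rng.normal(0.6, 0.1, 2000),
    rng.normal(0.05, 0.02, 2000),
    rng.normal(300, 40, 2000),
])
# A few anomalous samples: saturated I/O wait and collapsed throughput.
anomalous = np.array([[0.95, 0.6, 20.0], [0.1, 0.5, 5.0]])
samples = np.vstack([normal, anomalous])

model = IsolationForest(n_estimators=200, contamination="auto", random_state=0)
model.fit(samples)

scores = model.decision_function(samples)     # lower score = more anomalous
flagged = np.argsort(scores)[:5]
print("most anomalous sample indices:", flagged)
print("predicted labels of injected anomalies:", model.predict(anomalous))
```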
Anomaly detection using Unsupervised Machine Learning for Grid computing site operation 15m
A Grid computing site consists of various services including Grid middleware, such as the Computing Element, the Storage Element and so on. Ensuring safe and stable operation of the services is a key role of site administrators. Logs produced by the services provide useful information for understanding the status of the site. However, it is a time-consuming task for site administrators to monitor and analyze the service logs every day. Therefore, a support framework (gridalert), which detects anomalous logs and alerts site administrators, has been developed using Machine Learning techniques.
Typical classifications using Machine Learning require pre-defined labels. It is difficult to collect a large amount of anomalous logs to build a Machine Learning model that covers all possible pre-defined anomalies. Therefore, Unsupervised Machine Learning based on clustering algorithms is used in gridalert to detect anomalous logs. Several clustering algorithms, such as k-means, DBSCAN and Isolation Forest, and their parameters have been compared in order to maximize the performance of the anomaly detection for Grid computing site operations. The gridalert framework has been deployed to the Tokyo Tier2 site, which is one of the Worldwide LHC Computing Grid sites, and is used in operation. In this presentation, studies of Machine Learning algorithms for the anomaly detection and our operational experience with gridalert will be reported.
Speaker: Tomoe Kishimoto (University of Tokyo (JP))
Tomoe_CHEP2019.pdf
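A minimal sketch of clustering-based log anomaly detection in the spirit described above: vectorise log messages with TF-IDF and let DBSCAN mark sparse outliers as noise. The log lines are synthetic and this is not the gridalert implementation.

```python
# Clustering-based anomaly detection on service log lines: vectorise the
# messages with TF-IDF and let DBSCAN mark sparse outliers as noise (-1).
# Synthetic logs for illustration; this is not the gridalert implementation.
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

logs = (
    ["INFO transfer finished for file %d" % i for i in range(50)]
    + ["INFO job %d submitted to batch queue" % i for i in range(50)]
    + ["ERROR gridftp timeout contacting storage element se01"]  # rare anomaly
)

features = TfidfVectorizer().fit_transform(logs).toarray()
labels = DBSCAN(eps=0.5, min_samples=5, metric="cosine").fit_predict(features)

for line, label in zip(logs, labels):
    if label == -1:                      # DBSCAN labels outliers as -1
        print("anomalous log line:", line)
```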
Using HEP experiment workflows for the benchmarking and accounting of computing resources 15m
The benchmarking and accounting of CPU resources in WLCG has been based on the HEP-SPEC06 (HS06) suite for over a decade. HS06 is stable, accurate and reproducible, but it is an old benchmark and it is becoming clear that its performance and that of typical HEP applications have started to diverge. After evaluating several alternatives for the replacement of HS06, the HEPIX benchmarking WG has chosen to focus on the development of a HEP-specific suite based on actual software workloads of the LHC experiments, rather than on a standard industrial benchmark like the new SPEC CPU 2017 suite.
This presentation will describe the motivation and implementation of this new benchmark suite, which is based on container technologies to ensure portability and reproducibility. This approach is designed to provide a better correlation between the new benchmark and the actual production workloads of the experiments. It also offers the possibility to separately explore and describe the independent architectural features of different computing resource types, which is expected to be increasingly important with the growing heterogeneity of the HEP computing landscape. In particular, an overview of the initial developments to address the benchmarking of non-traditional computing resources such as HPCs and GPUs will also be provided.
Speaker: Andrea Valassi (CERN)
20191104-CHEP2019-BMK-AV-v005.pdf 20191104-CHEP2019-BMK-AV-v005.pptx
WLCG Networks: Update on Monitoring and Analytics 15m
WLCG relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The OSG Networking Area, in partnership with WLCG, is focused on being the primary source of networking information for its partners and constituents. It was established to ensure sites and experiments can better understand and fix networking issues, while providing an analytics platform that aggregates network monitoring data with higher level workload and data transfer services. This has been facilitated by the global network of the perfSONAR instances that have been commissioned and are operated in collaboration with WLCG Network Throughput Working Group. An additional important update is the inclusion of the newly funded NSF project SAND (Service Analytics and Network Diagnosis) which is focusing on network analytics.
In this talk we'll describe the current state of the network measurement and analytics platform and summarise the activities taken by the working group and our collaborators, focusing mainly on the throughput issues that have been reported and resolved during the recent period with the help of the perfSONAR network. We will also cover the updates on the higher level services that were developed to help bring the perfSONAR network to its full potential. This includes the progress being made in providing higher level analytics, alerting and alarming from the rich set of network metrics we are gathering. Finally, we will discuss and propose potential R&D areas related to improving network throughput in general, as well as preparing the infrastructure for the foreseen major changes in the way the network will be provisioned and operated in the future.
Speaker: Pedro Andrade (CERN)
CHEP 2019 OSG_WLCG Networking Update.pdf
WLCG Dashboards with Unified Monitoring 15m
Monitoring of the CERN Data Centres and the WLCG infrastructure is now largely based on the MONIT infrastructure provided by CERN IT. This is the result of the migration from several old in-house developed monitoring tools into a common monitoring infrastructure based on open source technologies such as Collectd, Flume, Kafka, Spark, InfluxDB, Grafana and others. The MONIT infrastructure relies on CERN IT services (OpenStack, Puppet, Gitlab, DBOD, etc) and covers the full range of monitoring tasks: metrics and logs collection, alarms generation, data validation and transport, data enrichment and aggregation (where applicable), dashboards visualisation, reports generation, etc. This contribution will present the different services offered by the MONIT infrastructure today, highlight the main monitoring use cases from the CERN Data Centres, WLCG, and Experiments, and analyse the last years' experience of moving from legacy well-established custom monitoring tools into a common open source-based infrastructure.
2019-11-04_MONIT_CHEP2019.pdf
Large Elasticsearch cluster management 15m
The Centralised Elasticsearch Service at CERN runs the infrastructure to provide Elasticsearch clusters for more than 100 different use cases. This contribution presents how the infrastructure is managed, covering resource distribution, instance creation, cluster monitoring and user support. It describes the components that have been identified as critical in order to share resources and minimize the number of clusters and machines needed to run the service. In particular, all the automation for the instance configuration, including index template management, backups and Kibana settings, will be explained in detail.
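As a minimal illustration of the kind of programmatic index-template management mentioned above (the endpoint, template name, index pattern and mapping below are placeholders, not the production CERN configuration), the official Elasticsearch Python client could be used as follows:

```python
# Minimal sketch of programmatic index-template management with the official
# Elasticsearch Python client. Assumptions: the endpoint, template name, index
# pattern and mapping are placeholders, not the production CERN configuration.
from elasticsearch import Elasticsearch

es = Elasticsearch(["https://es-cluster.example.org:9200"])

template_body = {
    "index_patterns": ["demo-logs-*"],            # indices this template applies to
    "settings": {"number_of_shards": 3, "number_of_replicas": 1},
    "mappings": {
        "properties": {
            "timestamp": {"type": "date"},
            "host": {"type": "keyword"},
            "message": {"type": "text"},
        }
    },
}

# Install (or update) the template so that newly created indices pick it up.
es.indices.put_template(name="demo-logs", body=template_body)
```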
Speaker: Pablo Saiz (CERN)
CERN_16-9.pdf CERN_16-9.pptx
Track 8 – Collaboration, Education, Training and Outreach: Collaborative tools Riverbank R1
Convener: Tibor Simko (CERN)
Challenges and opportunities when migrating CERN e-mail system to open source 15m
E-mail service is considered a critical collaboration system. We will share our experience regarding the technical and organizational challenges of migrating 40 000 mailboxes from Microsoft Exchange to a free and open-source software solution: Kopano.
Speaker: Pawel Grzywaczewski (CERN)
03.11_-_Migration_of_CERN_e-mail_system_to_open_source.pdf 03.11_-_Migration_of_CERN_e-mail_system_to_open_source.pptx
Experience finding MS Project Alternatives at CERN 15m
As of March 2019, CERN is no longer eligible for academic licences of Microsoft products. For this reason, CERN IT started a series of task forces to respond to the evolving requirements of the user community with the goal of reducing as much as possible the need for Microsoft licensed software. This exercise was an opportunity to understand better the user requirements for all office applications. Here we focus on MS Project, the dominant PC-based project management software, which has been used at CERN for many years. There were over 1500 installations at CERN when the task force started, with a heterogeneous pool of users in terms of required functionality, area of work and expertise. This paper will present an evaluation of users' needs and whether they could be fulfilled with cheaper and less advanced solutions for project management and scheduling. Moreover, selected alternatives, their deployment and lessons learned will be described in more detail. Finally, it will present the approach on how to communicate, train and migrate users to the proposed solutions.
Speakers: Maria Alandes Pradillo (CERN), Sebastian Bukowiec (CERN)
MALT_CHEP2019.pdf
eXtreme monitoring: CERN video conference system and audio-visual IoT device infrastructure 15m
This talk presents the approach chosen to monitor, firstly, the world-wide video conference server infrastructure and, secondly, the wide diversity of audio-visual devices that make up the audio-visual conference room ecosystem at CERN.
The CERN video conference system is a complex ecosystem used by most HEP institutes, as well as by Swiss universities through SWITCH. As a proprietary platform, Vidyo in its on-premise version offers very limited monitoring. In order to improve support to our user community, and to give service managers and video conference supporters a better understanding of the Vidyo platform, a set of tools to monitor the system has been developed, keeping in mind simplicity, flexibility, maintainability and cost efficiency, and reusing as much as possible technologies offered by IT services: Elasticsearch stack, InfluxDB, OpenShift, Kubernetes, OpenStack, etc. The result is a set of dashboards that greatly simplify access to the information required by the CERN IT helpdesk and service managers and that can be provided to the users. Most of the components developed are open source [1,2], and could be reused for services facing similar problems.
With the arrival of IP devices in the Audio-Visual and Conferencing (AVC) equipment, it becomes feasible to develop an agnostic solution for monitoring this IoT jungle (video encoders, videoconference codecs, screens, projectors, microphones, clocks, ...). After trying existing commercial monitoring products with no real success, CERN is now developing an open-source solution to effectively monitor and operate the AVC ecosystem using existing open-source components and central services provided by the IT department: node-red/mqtt, telegraf/influxdb/grafana, beats/logstash/elasticsearch/kibana, openshift, etc.
[1] https://github.com/CERNCDAIC/aggsvidyo
[2] https://github.com/CERNCDAIC/resthttpck
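A minimal sketch of an MQTT-to-InfluxDB bridge in the spirit of the collection flow described above (in production this role is played by node-red/telegraf; the broker host, topic layout and measurement names here are assumptions):

```python
# Minimal sketch of an MQTT -> InfluxDB bridge for AV device metrics.
# Assumptions: broker host, topic layout and measurement names are illustrative;
# the production setup described above relies on node-red/telegraf instead.
import json
import paho.mqtt.client as mqtt
from influxdb import InfluxDBClient

influx = InfluxDBClient(host="influxdb.example.org", port=8086, database="avc")

def on_message(client, userdata, msg):
    # Topics are assumed to look like avc/<room>/<device>/status with a JSON payload.
    _, room, device, _ = msg.topic.split("/")
    payload = json.loads(msg.payload)
    influx.write_points([{
        "measurement": "device_status",
        "tags": {"room": room, "device": device},
        "fields": {"online": int(payload.get("online", 0)),
                   "temperature": float(payload.get("temperature", 0.0))},
    }])

client = mqtt.Client()
client.on_message = on_message
client.connect("mqtt.example.org", 1883)
client.subscribe("avc/#")
client.loop_forever()
```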
Speaker: Ruben Domingo Gaspar Aparicio (CERN)
eXtreme monitoring.pdf
Development of a Versatile, Full-Featured Search Functionality for Indico 15m
Indico, CERN's popular open-source tool for event management, is in widespread use among facilities that make up the HEP community. It is extensible through a robust plugin architecture that provides features such as search and video conferencing integration. In 2018, Indico version 2 was released with many notable improvements, but without a full-featured search functionality that could be implemented easily outside of CERN. At both Fermi and Brookhaven National Laboratories, the user community viewed the lack of this popular feature as a significant impediment to deployment of the new software. In the meantime, CERN embarked upon a major redesign of their core search service, one that would also necessitate a rewrite of the Indico search interface. Seeing this pressing need, the two US labs decided to collaborate, with assistance from the CERN development team, on a project to develop the requisite search functionality for the larger user community. The resulting design exploits the simplified schema defined in the new CERN Search micro-service, based on Invenio and Elasticsearch, while still providing a flexible path to implementation for alternative backend search services. It is intended to provide a software package that can be installed easily and used out of the box, by anyone at any site. This presentation will discuss the design choices and architectural challenges, and provide an overview of the deployment and use of these new plugins.
Speaker: Penelope Constanta (Fermilab)
CHEP2019_indicoCollaboration.pdf CHEP2019_indicoCollaboration.pdf CHEP2019_indicoCollaboration.pptx
Evolution of the CERNBox platform to support collaborative applications and MALT 15m
CERNBox is the CERN cloud storage hub for more than 16000 users at CERN. It allows synchronising and sharing files on all major desktop and mobile platforms (Linux, Windows, MacOSX, Android, iOS) providing universal, ubiquitous, online- and offline access to any data stored in the CERN EOS infrastructure. CERNBox also provides integration with other CERN services for big science: visualisation tools, interactive data analysis and real-time collaborative editing.
Over the last two years, CERNBox has evolved from a pure cloud sync and share platform into a collaborative service, to support new applications such as DrawIO for diagrams and organigrams sketching, OnlyOffice and Collabora Online for document editing, and DXHTML Gantt for project management, as alternatives to traditional desktop applications. Moving to open-source applications has the advantage of reducing licensing costs and enables easier integration within the CERN infrastructure.
Leveraging the large and diverse set of applications at CERN, we propose a bring-your-own-application model where user groups can easily integrate their specific web-based applications into all available CERNBox workflows. We report on our experience managing such integrations and the applicable use cases, also in a broader scientific context where an emerging community including other institutes and SMEs is evolving the standards for sync & share storage.
Speaker: Hugo Gonzalez Labrador (CERN)
20190528_CHEP-CBOXMALT.pdf
Operating the Belle II Collaborative Services and Tools 15m
Collaborative services are essential for any experiment. They help to integrate global virtual communities by allowing members to share and exchange relevant information. Typical examples are public and internal web pages, wikis, mailing list services, issue tracking systems, and services for meeting organization and documents.
After reviewing their collaborative services with respect to security, reliability, availability, and scalability, the Belle II collaboration decided in 2016 to migrate services and tools into the existing IT infrastructure at DESY. So far missing services were added and workflows adapted. As a new development, a membership management system which serves the needs of a global collaboration, today with 968 scientists from more than 113 institutions in 25 countries all around the world, was put into operation in the beginning of 2018.
Almost all essential services of a living collaboration were subject to modifications, some of them with major or complete changes. Moreover, an already productive collaboration had to give up accustomed systems and adapt to new ones with new look-and-feels and more restrictive security rules.
In the contribution to CHEP2019 we will briefly review the planning and realization of the migration process and thoroughly discuss the experiences we gained while supporting the daily work of the Belle II collaboration.
Speaker: Thomas Kuhr (Ludwig Maximilians Universitat (DE))
chep2019_belle2_talk-554.pdf
Track 9 – Exascale Science: HPC facilities Riverbank R4
Convener: Fabio Hernandez (IN2P3 / CNRS computing centre)
Extension of the INFN Tier-1 on a HPC system 15m
The INFN Tier-1 located at CNAF in Bologna (Italy) is a major center of the WLCG e-Infrastructure, supporting the 4 major LHC collaborations and more than 30 other INFN-related experiments.
After multiple tests towards elastic expansion of CNAF compute power via Cloud resources (provided by Azure, Aruba and in the framework of the HNSciCloud project), but also building on the experience gained with the production-quality extension of the Tier-1 farm on remote owned sites, the CNAF team, in collaboration with experts from the ATLAS, CMS, and LHCb experiments, has been working to put into production an integrated HTC+HPC solution with the PRACE CINECA centre, located near Bologna. This extension will be implemented on the Marconi A2 partition, equipped with Intel Knights Landing (KNL) processors. A number of technical challenges were faced and solved in order to successfully run on low-RAM nodes, as well as to overcome the closed environment (network, access, software distribution, … ) that HPC systems deploy with respect to standard GRID sites. We show preliminary results from a large-scale integration effort, using resources secured via the successful PRACE grant N. 2018194658, for 30 million KNL core hours.
Speaker: Tommaso Boccali (Universita & INFN Pisa (IT))
CHEP 2019 Adelaide.pdf
Large-scale HPC deployment of Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) 15m
The NSF-funded Scalable CyberInfrastructure for Artificial Intelligence and Likelihood Free Inference (SCAILFIN) project aims to develop and deploy artificial intelligence (AI) and likelihood-free inference (LFI) techniques and software using scalable cyberinfrastructure (CI) built on top of existing CI elements. Specifically, the project has extended the CERN-based REANA framework, a cloud-based data analysis platform deployed on top of Kubernetes clusters that was originally designed to enable analysis reusability and reproducibility. REANA is capable of orchestrating extremely complicated multi-step workflows, and uses Kubernetes clusters both for scheduling and distributing container-based workloads across a cluster of available machines and for instantiating and monitoring the concrete workloads themselves.
This work describes the challenges and development efforts involved in extending REANA and the components that were developed in order to enable large scale deployment on High Performance Computing (HPC) resources.
Using the Virtual Clusters for Community Computation (VC3) infrastructure as a starting point, we implemented REANA to work with a number of differing workload managers, including both high performance and high throughput, while simultaneously removing REANA's dependence on Kubernetes support at the workers level. Performance results derived from running AI/LFI training workflows on a variety of large HPC sites will be presented.
Speaker: Mike Hildreth (University of Notre Dame (US))
CHEP_2019_Hildreth.pdf CHEP_2019_Hildreth.pdf
EuroEXA: an innovative and scalable FPGA-based system for extreme scale computing in Europe 15m
Nowadays, a number of technology R&D activities have been launched in Europe, trying to close the gap with traditional HPC providers like the USA and Japan and, more recently, with emerging ones like China.
The EU HPC strategy, funded through the EuroHPC initiative, rests on two different pillars: the first one targets the procurement and hosting of two or three commercial pre-Exascale systems, in order to provide the HPC user community with world-class computing systems; the second one aims at boosting industry-research collaboration in order to design a new generation of Exascale systems mainly based on European technology.
In this framework, analysis and validation of the HPC-enabling technologies is a very critical task and the FETHPC H2020 EuroEXA project (https://euroexa.eu) is prototyping a medium size (but scalable to extreme level) computing platform as proof-of-concept of an EU-designed HPC system.
EuroEXA exploits FPGA devices, with their ensemble of both standard and custom high-performance interfaces, DSP blocks for task acceleration and a huge number of user-assigned logic cells. FPGA adoption allows us to design innovative European IPs such as application-tailored acceleration hardware (for high performance in the computing node) and a low-latency, high-throughput custom network (for scalability).
The EuroEXA computing node is based on a single module hosting Xilinx UltraScale+ FPGAs for application code acceleration hardware, control and network implementation, and, in a later phase, even a new project-designed, ARM-based, low power multi-core chip.
EuroEXA interconnect is an FPGA-based hierarchical hybrid network characterized by direct topology at "blade" level (16 computing nodes on a board) and a custom switch, implementing a mix of full-crossbar and Torus topology, for interconnection with the upper levels.
EuroEXA will also introduce a new high density liquid-cooling technology for blade system and a new multirack modular assembly based on standard shipping containers in order to provide an effective solution for moving, placing and operating large scale EuroEXA system.
A complete and system-optimized programming software stack is under design and a number of scientific, engineering and AI-oriented applications are used to co-design, benchmark and validate the EuroEXA hardware/software solutions.
In this talk, we will introduce the main project motivations and goals, its positioning within the EuroHPC landscape, the status of hardware and software development and the possible synergies with HEP computing requirements.
Speaker: Piero Vicini (Sapienza Universita e INFN, Roma I (IT))
EuroEXA_CHEP2019.pdf EuroEXA_CHEP2019.pptx
Using HEP workloads to optimize requirements for procurement of a future HPC facility 15m
The Dutch science funding organization NWO is in the process of drafting requirements for the procurement of a future high-performance compute facility. To investigate the requirements for this facility to potentially support high-throughput workloads in addition to traditional high-performance workloads, a broad range of HEP workloads are being functionally tested on the current facility. The requirements obtained from this pilot will be presented, together with technical issues solved and requirements on HPC and HEP software and support.
Speaker: Roel Aaij (Nikhef National institute for subatomic physics (NL))
raaij_HOSS_v2.pdf
Deep Learning for HEP on HPC at NERSC 15m
We present recent work in supporting deep learning for particle physics and cosmology at NERSC, the US Dept. of Energy mission HPC center. We describe infrastructure and software to support both large-scale distributed training across (CPU and GPU) HPC resources and for productive interfaces via Jupyter notebooks. We also detail plans for accelerated hardware for deep learning in the future HPC machines at NERSC, 'Perlmutter' and beyond. We demonstrate these capabilities with a characterisation of the emerging deep learning workload running at NERSC. We also present use of these resources to implement specific cutting-edge applications including conditional Generative Adversarial Networks for particle physics and dark-matter cosmology simulations and bayesian inference via probabilistic programming for LHC analyses.
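As a generic, hedged sketch of the kind of data-parallel distributed training run on such HPC resources (the toy model, random data and launch environment are placeholders; this is not the NERSC software stack itself):

```python
# Minimal, generic sketch of data-parallel training with PyTorch
# DistributedDataParallel. Assumptions: the toy model and random data stand in
# for real physics workloads, and the launch environment (one process per
# node/GPU, MASTER_ADDR etc. set by the batch system) is provided externally.
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="gloo")      # "nccl" would be used on GPU nodes
    rank = dist.get_rank()

    model = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                                torch.nn.Linear(128, 2))
    model = DDP(model)                           # gradients are all-reduced across ranks
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(100):
        x = torch.randn(32, 64)                  # stand-in for a real mini-batch
        y = torch.randint(0, 2, (32,))
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        if rank == 0 and step % 20 == 0:
            print(f"step {step}: loss {loss.item():.3f}")

if __name__ == "__main__":
    main()
```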
Speaker: Steven Farrell (Lawrence Berkeley National Lab (US))
CHEP2019-NERSC-DL.pdf
Lunch 1h 30m
Track 1 – Online and Real-time Computing: Monitoring and control systems Riverbank R5
AliECS: a New Experiment Control System for the ALICE Experiment 15m
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020, which includes a new computing system called O² (Online-Offline). To ensure the efficient operation of the upgraded experiment and of its newly designed computing system, a reliable, high performance, and automated experiment control system is being developed. The ALICE Experiment Control System (AliECS) is a distributed system based on state of the art cluster management and microservices which have recently emerged in the distributed computing ecosystem. Such technologies will allow the ALICE collaboration to benefit from a vibrant and innovating open source community. This communication describes the AliECS architecture. It provides an in-depth overview of the system's components, features, and design elements, as well as its performance. It also reports on the experience with AliECS as part of ALICE Run 3 detector commissioning setups.
Speaker: Teo Mrnjavac (CERN)
aliecs-deck.pdf
DAQExpert - the service to increase CMS data-taking efficiency 15m
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at the LHC is a complex system responsible for the data readout, event building and recording of accepted events. Its proper functioning plays a critical role in the data-taking efficiency of the CMS experiment. In order to ensure high availability and to recover promptly in the event of hardware or software failure of the subsystems, an expert system, the DAQ Expert, has been developed. It aims at improving the data-taking efficiency, reducing human error in the operations and minimising the on-call expert demand. Introduced in the beginning of 2017, it assists the shift crew and the system experts in recovering from operational faults, streamlining the post mortem analysis and, at the end of Run 2, triggering fully automatic recoveries without human intervention. DAQ Expert analyses the real-time monitoring data originating from the DAQ components and the high-level trigger, updated every few seconds. It pinpoints the data flow problem and recovers it automatically or after operator approval. We analyse the CMS downtime in the 2018 run, focusing on what was improved with the introduction of automated recoveries; we present the challenges and the design involved in transforming expert knowledge into automated recovery jobs. Furthermore, we demonstrate the web-based ReactJS interfaces that ensure an effective cooperation between the human operators in the control room and the automated recovery system. We report on the operational experience with automated recoveries.
Speaker: Maciej Szymon Gladki (University of Warsaw (PL))
DAQExpert CHEP2019 (13).pdf
ATLAS Operational Monitoring Data Archival and Visualization 15m
The Information Service (IS) is an integral part of the Trigger and Data Acquisition (TDAQ) system of the ATLAS experiment at the Large Hadron Collider (LHC) at CERN. The IS allows online publication of operational monitoring data, and it is used by all sub-systems and sub-detectors of the experiment to constantly monitor their hardware and software components including more than 25000 applications running on more than 3000 computers. The Persistent Back-End for the ATLAS Information System (P-BEAST) service stores all raw operational monitoring data for the lifetime of the experiment and provides programming and graphical interfaces to access them including Grafana dashboards and notebooks based on the CERN SWAN platform. During the ATLAS data taking sessions (for the full LHC Run 2 period) P-BEAST acquired data at an average information update rate of 200 kHz and stored 20 TB of highly compacted and compressed data per year. This paper reports how over six years the P-BEAST became an essential piece of the experiment operations including details of the challenging requirements, the fails and successes of the various attempted implementations, the new types of monitoring data and the results of the time-series database technologies evaluations for the improvements during next LHC Run 3.
Speaker: Igor Soloviev (University of California Irvine (US))
ATL-COM-DAQ-2019-179.pdf
Data quality monitors of vertex detectors at the start of the Belle II experiment 15m
The Belle II experiment features a substantial upgrade of the Belle detector and will operate at the SuperKEKB energy-asymmetric $e^+ e^-$ collider at KEK in Tsukuba, Japan. The accelerator successfully completed the first phase of commissioning in 2016 and the Belle II detector saw its first electron-positron collisions in April 2018. Belle II features a newly designed silicon vertex detector based on double-sided strip and DEPFET pixel detectors. A subset of the vertex detector was operated in 2018 to determine background conditions (Phase 2 operation); installation of the full detector was completed early in 2019 and the experiment starts full data taking.
This talk will report on the final arrangement of the silicon vertex detector part of Belle II with focus on on-line and off-line monitoring of detector conditions and data quality, design and use of diagnostic and reference plots, and integration with the software framework of Belle II. Data quality monitoring plots will be discussed with a focus on simulation and acquired cosmic and collision data.
Speakers: Peter Kodys (Charles University), Peter Kodys (Charles University (CZ))
CHEP2019_DQMBelleII_Kodys_169ratio.pdf
Scalable monitoring data processing for the LHCb software trigger 15m
The LHCb high level trigger (HLT) is split in two stages. HLT1 is synchronous with collisions delivered by the LHC and writes its output to a local disk buffer, which is asynchronously processed by HLT2. Efficient monitoring of the data being processed by the application is crucial to promptly diagnose detector or software problems. HLT2 consists of approximately 50000 processes and 4000 histograms are produced by each process. This results in 200 million histograms that need to be aggregated for each of up to a hundred data taking intervals that are being processed simultaneously. This paper presents the multi-level hierarchical architecture of the monitoring infrastructure put in place to achieve this. Network bandwidth is minimised by sending histogram increments and only exchanging metadata when necessary, using a custom lightweight protocol based on boost::serialize. The transport layer is implemented with ZeroMQ, which supports IPC and TCP communication, queue handling, asynchronous request/response and multipart messages. The persistent storage to ROOT is parallelized in order to cope with data arriving from a hundred of data taking intervals being processed simultaneously by HLT2. The performance and the scalability of the current system are presented. We demonstrate the feasibility of such an approach for the HLT1 use case, where real-time feedback and reliability of the infrastructure are crucial. In addition, a prototype of a high-level transport layer based on the stream-processing platform Apache Kafka is shown, which has several advantages over the lower-level ZeroMQ solution.
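A minimal pyzmq sketch of the increment-publishing idea described above (the real implementation is C++ with boost::serialize and its own message layout; the JSON payload and endpoint below are purely illustrative):

```python
# Minimal sketch of publishing histogram *increments* over ZeroMQ, in the spirit
# of the infrastructure described above. Assumptions: the production system is
# C++ (boost::serialize) with its own message layout; the JSON payload and the
# aggregator endpoint here are illustrative only.
import json
import zmq

context = zmq.Context()
sock = context.socket(zmq.PUSH)          # worker process -> aggregator
sock.connect("tcp://aggregator.example.org:5555")

def publish_increment(run_interval, histo_name, bin_deltas):
    """Send only the bins that changed since the last publication."""
    header = {"interval": run_interval, "histogram": histo_name}
    sock.send_multipart([
        json.dumps(header).encode(),          # metadata part
        json.dumps(bin_deltas).encode(),      # {bin_index: delta} increments
    ])

# Example: three bins of 'hlt2/track_chi2' were incremented in interval 1042.
publish_increment(1042, "hlt2/track_chi2", {3: 12, 4: 7, 9: 1})
```

Sending increments rather than full histograms keeps the per-message size proportional to the activity in the interval, which is what makes the bandwidth minimisation described above possible.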
Speaker: Stefano Petrucci (University of Edinburgh, CERN)
petrucci-chep.pdf
The ALICE data quality control system 15m
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020, which includes a new computing system called O² (Online-Offline). The raw data input from the ALICE detectors will then increase a hundredfold, up to 3.4 TB/s. In order to cope with such a large amount of data, a new online-offline computing system, called O2, will be deployed.
One of the key software components of the O2 system will be the data Quality Control (QC) that replaces the existing online Data Quality Monitoring and offline Quality Assurance. It involves the gathering, the analysis by user-defined algorithms and the visualization of monitored data, in both the synchronous and asynchronous parts of the O2 system.
This paper presents the architecture and design, as well as the latest and upcoming features, of the ALICE O2 QC. In particular, we review the challenges we faced developing and scaling the object merging software, the trending and correlation infrastructure and the repository management. We also discuss the ongoing adoption of this tool amongst the ALICE collaboration and the measures taken to develop, in synergy with their respective teams, efficient monitoring modules for the detectors.
Speaker: Piotr Konopka (CERN, AGH University of Science and Technology (PL))
2019_11_4QC-CHEP.pdf
Track 2 – Offline Computing: ML and generative simulation Riverbank R6
Convener: Chiara Ilaria Rovelli (Sapienza Universita e INFN, Roma I (IT))
Fast Simulations at LHCb 15m
The LHCb detector at the LHC is a single-arm forward spectrometer dedicated to the study of $b-$ and $c-$hadron states. During Runs 1 and 2, the LHCb experiment collected a total of 9 fb$^{-1}$ of data, corresponding to the largest charmed hadron dataset in the world and providing unparalleled datasets for studies of CP violation in the $B$ system, hadron spectroscopy and rare decays, not to mention heavy ion and fixed target datasets. The LHCb detector is currently undergoing an upgrade of nearly all parts of the detector to cope with the increased luminosity of Run 3 and beyond. Simulation for the analyses of such datasets is paramount, but is prohibitively slow in generation and reconstruction due to the sheer number of simulated decays needed to match the collected datasets. In this talk, we explore the suite of fast simulations which LHCb has employed to meet the needs of Run 3 and beyond, including the reuse of the underlying event and parameterized simulations, and the possibility of porting the framework to multithreaded environments.
Speaker: Adam Davis (University of Manchester (GB))
davis_fast_simulations_at_lhcb_v3.pdf
Fast simulation methods in ATLAS: from classical to generative models 15m
The ATLAS physics program relies on very large samples of GEANT4 simulated events, which provide a highly detailed and accurate simulation of the ATLAS detector. However, this accuracy comes with a high price in CPU, and the sensitivity of many physics analyses is already limited by the available Monte Carlo statistics and will be even more so in the future. Therefore, sophisticated fast simulation tools are developed. In Run-3 we aim to replace the calorimeter shower simulation for most samples with a new parametrized description of longitudinal and lateral energy deposits, including machine learning approaches, to achieve a fast and accurate description. Looking further ahead, prototypes are being developed using cutting edge machine learning approaches to learn the appropriate calorimeter response, which are expected to improve modeling of correlations within showers. Two different approaches, using Variational Auto-Encoders (VAEs) or Generative Adversarial Networks (GANs), are trained to model the shower simulation. Additional fast simulation tools will replace the inner detector simulation, as well as digitization and reconstruction algorithms, achieving up to two orders of magnitude improvement in speed. In this talk, we will describe the new tools for fast production of simulated events and an exploratory analysis of the deep learning methods.
Speaker: Johnny Raine (Universite de Geneve (CH))
20191104-FCSDNNCaloSim-CHEP_final.pdf
Learning high-level structures in HEP data with novel Deep Auto-Regressive Networks for Fast Simulation 15m
In High Energy Physics, simulation activity is a key element for the evaluation of theoretical models and for detector design choices. The increase in the luminosity of particle accelerators leads to a higher computational cost when dealing with the orders-of-magnitude increase in collected data. Thus, novel methods for speeding up simulation procedures (FastSimulation tools) are being developed with the help of Deep Learning. For this task, unsupervised learning is performed on a given training HEP dataset, with generative models employed to render samples from the same distribution.
A novel Deep Learning architecture is proposed in this research, based on autoregressive connections to model the simulation output by decomposing the event distribution as a product of conditionals. The aim is for the network to be able to capture nonlinear, long-range correlations and input-varying dependencies with tractable, explicit probability densities. The report analyses the benefits of employing autoregressive models in comparison with previously proposed models and their ability to generalise when attempting to fit multiple data distributions. The training dataset contains different simplified calorimeter simulations obtained with the Geant4 toolkit (such as PbWO4, W/Si). Finally, testing procedures and results for network performance are developed and analysed.
Speaker: Ioana Ifrim (CERN)
Novel_Deep_Autoregressive_Networks_for_Fast_Simulation_Ioana_Ifrim_Keynote(2).pdf
Particle shower simulation in high granularity calorimeters using 3 dimensional convolutional Generative Adversarial Networks 15m
The future need for simulated events for the LHC experiments and their High Luminosity upgrades is expected to increase dramatically. As a consequence, research on new fast simulation solutions, based on Deep Generative Models, is very active and initial results look promising.
We have previously reported on a prototype that we have developed, based on 3 dimensional convolutional Generative Adversarial Network, to simulate particle showers in high-granularity calorimeters.
As an example of future high-granularity geometries, we have chosen the electromagnetic calorimeter design developed in the context of the Linear Collider Detector studies, characterised by 25 layers and 5 mm x 5 mm cell granularity.
The training dataset is simulated using Geant4 and the DDSim framework.
In this talk we will present improved results on a more realistic simulation of different particles (electrons and pions) characterized by variable energy and incident trajectory. Detailed validation studies, comparing our results to Geant4 Monte Carlo simulation, show very good agreement for high level physics quantities (such as energy shower shapes) and detailed calorimeter response (single cell response) over a large energy range. In particular, we will show how increasing the network representational power, introducing physics-based constraints and a transfer-learning approach to the training process improve the agreement to Geant4 results over a large energy range. Initial studies on a network optimisation based on the implementation of Genetic Algorithms will also be discussed.
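A minimal Keras sketch of a 3D-convolutional generator of the kind described above (the latent size, layer widths and the 25x25x25 output shape are illustrative only and do not reproduce the actual network):

```python
# Minimal Keras sketch of a 3D-convolutional GAN generator for calorimeter
# showers. Assumptions: latent size, layer widths and the 25x25x25 output shape
# are illustrative and do not reproduce the network described in the talk.
from tensorflow.keras import layers, Model

def build_generator(latent_dim=200):
    z = layers.Input(shape=(latent_dim,))
    x = layers.Dense(5 * 5 * 5 * 64, activation="relu")(z)
    x = layers.Reshape((5, 5, 5, 64))(x)
    # Upsample 5x5x5 -> 25x25x25 with a transposed 3D convolution.
    x = layers.Conv3DTranspose(32, kernel_size=4, strides=5, padding="same",
                               activation="relu")(x)
    x = layers.Conv3D(16, kernel_size=3, padding="same", activation="relu")(x)
    # Single-channel output: energy deposited in each calorimeter cell.
    out = layers.Conv3D(1, kernel_size=3, padding="same", activation="relu")(x)
    return Model(z, out, name="shower_generator")

generator = build_generator()
generator.summary()
```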
Speaker: Ricardo Brito Da Rocha (CERN)
GANsimulation_T2_draft0.pdf
Generative Adversarial Networks for LHCb Fast Simulation 15m
LHCb is one of the major experiments operating at the Large Hadron Collider at CERN. The richness of the physics program and the increasing precision of the measurements in LHCb lead to the need for ever larger simulated samples. This need will increase further when the upgraded LHCb detector starts collecting data in LHC Run 3. Given the computing resources pledged for the production of Monte Carlo simulated events in the next years, the use of fast simulation techniques will be mandatory to cope with the expected dataset size. In LHCb, generative models, which are nowadays widely used for computer vision and image processing, are being investigated in order to accelerate the generation of showers in the calorimeter and the high-level response of the Cherenkov detector. We demonstrate that this approach provides high-fidelity results along with a significant speed increase and discuss possible implications of these results. We also present an implementation of this algorithm in the LHCb simulation software, together with validation tests.
Speaker: Fedor Ratnikov (Yandex School of Data Analysis (RU))
GAN_FastsimLHCb_CHEP_191104_v2.pdf
Generation of Belle II pixel detector background data with a GAN 15m
Belle II uses a Geant4-based simulation to determine the detector response to the generated decays of interest. A realistic detector simulation requires the inclusion of noise from beam-induced backgrounds. This is accomplished by overlaying random trigger data on the simulated signal. To have statistically independent Monte-Carlo events, a high number of random trigger events is desirable. However, the size of the background events, in particular the part from the pixel vertex detector (PXD), is so large that it is infeasible to record, store, and overlay the same amount as simulated signal events. Our approach to overcome the limitation imposed on the simulation by storage resources is to use a Wasserstein generative adversarial network to generate PXD background data. A challenge is the high resolution of 250x768 pixels for each of the 40 sensors in total, with correlations between them. We will present the current status of this approach and assess its quality based on tracking performance studies.
BelleII_GAN.pdf
Track 3 – Middleware and Distributed Computing: HTC Sites and Related Services Riverbank R3
Convener: Stefan Roiser (CERN)
Lightweight site federation for CMS support 15m
There is a general trend in WLCG towards the federation of resources, aiming for increased simplicity, efficiency, flexibility, and availability. Although general, VO-agnostic federation of resources between two independent and autonomous resource centres may prove arduous, a considerable amount of flexibility in resource sharing can be achieved, in the context of a single WLCG VO, with a relatively simple approach. We have demonstrated this for PIC and CIEMAT, the Spanish Tier-1 and Tier-2 sites for CMS (separated by 600 km, ~10 ms latency), by making use of the existing CMS xrootd federation (AAA) infrastructure and profiting from the common CE/batch technology used by the two centres (HTCondor). This work describes how compute slots are shared between the two sites with transparent and efficient access to the input data irrespective of its location. This approach allows the capacity of a site to be dynamically increased with idle execution slots from the remote site. Our contribution also includes measurements for diverse CMS workflows comparing performance between local and remote execution. In addition to enabling increased flexibility in the use of the resources, this lightweight approach can be regarded as a benchmark to explore future potential scenarios, where storage resources would be concentrated in a reduced number of sites.
Speaker: Antonio Delgado Peris (Centro de Investigaciones Energéti cas Medioambientales y Tecno)
CHEP2019_Antonio_Delgado_lightweight_federation_cms_final.pdf
Provision and use of GPU resources for distributed workloads via the Grid 15m
The Queen Mary University of London WLCG Tier-2 Grid site has been providing GPU resources on the Grid since 2016. GPUs are an important modern tool to assist in data analysis. They have historically been used to accelerate computationally expensive but parallelisable workloads using frameworks such as OpenCL and CUDA. However, more recently their power in accelerating machine learning, using libraries such as TensorFlow and Caffe, has come to the fore and the demand for GPU resources has increased. Significant effort is being spent in high energy physics to investigate and use machine learning to enhance the analysis of data. GPUs may also provide part of the solution to the compute challenge of the High Luminosity LHC. The motivation for providing GPU resources via the Grid is presented. The installation and configuration of the SLURM batch system together with Compute Elements (CREAM and ARC) for use with GPUs is shown. Real-world use cases are presented and the successes and issues observed will be discussed. Recommendations, informed by our experiences, and our future plans will also be given.
Speaker: Dr Daniel Peter Traynor (Queen Mary University of London (GB))
cvhep2019gpu.pdf
A Lightweight Job Submission Frontend and its Toolkits – HepJob 15m
In a HEP computing center, at least one batch system is used. As an example, at IHEP we have used three batch systems: PBS, HTCondor and Slurm. After running PBS as the local batch system for 10 years, we replaced it with HTCondor (for HTC) and Slurm (for HPC). During that period, problems came up on both the user and admin sides.
On the user side, the new batch systems bring a set of new commands, which users have to learn and remember. In particular, some users have to use both HTCondor and Slurm at the same time. Furthermore, HTCondor and Slurm provide more functions, which means a more complicated usage model compared to the simple PBS commands.
On the admin side, HTCondor gives more freedom to users, which becomes a problem for admins. Admins have to find solutions for many problems: preventing users from requesting resources they are not allowed to use, checking whether the required attributes are correct, deciding which site is requested (Slurm cluster, remote sites, virtual machine sites), etc.
To address these requirements, HepJob was developed. HepJob provides a set of simple commands to users: hep_sub, hep_q, hep_rm, etc. In the submission procedure, HepJob checks all the attributes and ensures they are correct; assigns the proper resources to users (the user and group info is obtained from the management database); routes jobs to the targeted site; and goes through the remaining steps, as illustrated by the sketch below.
Users can start with HepJob very easily, and admins can take many preventive actions in HepJob.
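A minimal sketch of the dispatching idea behind a hep_sub-style command (the group-to-scheduler mapping, the custom attribute and the generated scheduler options are assumptions, not the actual HepJob implementation):

```python
# Minimal sketch of a hep_sub-style frontend that validates user attributes and
# routes the job to the appropriate scheduler. Assumptions: the group mapping,
# the custom ClassAd attribute and the generated options are illustrative; the
# real HepJob consults a management database and supports more targets.
import subprocess
import tempfile

GROUP_TO_SCHEDULER = {"juno": "htcondor", "lqcd": "slurm"}   # assumed mapping

def hep_sub(script, group, cores=1, walltime="24:00:00"):
    scheduler = GROUP_TO_SCHEDULER.get(group)
    if scheduler is None:
        raise ValueError(f"group '{group}' is not allowed to submit here")

    if scheduler == "htcondor":
        # Generate a minimal HTCondor submit description and pass it to condor_submit.
        submit = (f"executable = {script}\n"
                  f"request_cpus = {cores}\n"
                  f'+JobGroup = "{group}"\n'
                  "queue\n")
        with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
            f.write(submit)
            submit_file = f.name
        subprocess.run(["condor_submit", submit_file], check=True)
    else:
        subprocess.run(["sbatch", f"--cpus-per-task={cores}",
                        f"--time={walltime}", script], check=True)

hep_sub("run_simulation.sh", group="juno", cores=4)
```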
Speaker: Xiaowei Jiang (IHEP, Institute of High Energy Physics, Chinese Academy of Sciences)
CHEP_2019_HepJob_Xiaowei.pdf
Dirac-based solutions for JUNO production system 15m
The Jiangmen Underground Neutrino Observatory (JUNO) is a multipurpose neutrino experiment, which plans to take about 2 PB of raw data each year starting from 2021. The experiment data will be stored at IHEP, with another copy in Europe (at the CNAF, IN2P3 and JINR data centers). MC simulation tasks are expected to be arranged and operated through a distributed computing system to share the effort among data centers. The paper will present the design of the JUNO distributed computing system based on DIRAC to meet the requirements of the JUNO workflow and dataflow among data centers according to the JUNO computing model. The production system to seamlessly manage the JUNO MC simulation workflow and dataflow together is designed within the DIRAC transformation framework, in which data flows among data centers for production groups are managed on top of the DIRAC data management infrastructure, which uses the DFC as File Catalogue, the request manager to interface with FTS, and the transformation system to manage bundles of files. The muon simulation with optical photons, which has huge memory and CPU time requirements, will be the most challenging part. Therefore, multicore support and GPU federation are considered in the system to meet this challenge. The functional and performance tests to evaluate the prototype system will also be presented in the paper.
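A minimal sketch of submitting a single simulation job through the DIRAC client API (the executable, CPU time and output SE names are placeholders; the production system described above drives this through the DIRAC transformation system rather than individual submissions):

```python
# Minimal sketch of submitting a JUNO-style simulation job through the DIRAC
# client API. Assumptions: the executable, CPU time and output SE name are
# illustrative; production runs via the DIRAC transformation system instead.
from DIRAC.Core.Base.Script import parseCommandLine
parseCommandLine()                       # initialise the DIRAC client environment

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("juno_detsim_example")
job.setExecutable("run_detsim.sh", arguments="--events 1000")  # hypothetical script
job.setCPUTime(86400)
job.setOutputData(["detsim_output.root"], outputSE="IHEP-STORM")  # assumed SE name

result = Dirac().submitJob(job)
print("Submission result:", result)
```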
Speaker: Xiaomei Zhang (Chinese Academy of Sciences (CN))
Dirac-based solutions for JUNO production system-final.pdf Dirac-based solutions for JUNO production system-final.pptx
Scalable processing for storage events and automation of scientific workflows 15m
Low latency, high throughput data processing in distributed environments is a key requirement of today's experiments. Storage events facilitate synchronisation with external services where the widely adopted request-response pattern does not scale because of polling as a long-running activity. We discuss the use of an event broker and stream processing platform (Apache Kafka) for storage events, with respect to automated scientific workflows starting from file system events (dCache, GPFS) as triggers for data processing and placement.
In a brokered delivery, the broker provides the infrastructure for routing generated events to consumer services. A client connects to the broker system and subscribes to streams of storage events which consist of data transfer records for files being uploaded, downloaded and deleted. This model is complemented by direct delivery using W3C's Server-Sent Events (SSE) protocol. We also address the shaping of a security model, where authenticated clients are authorised to read dedicated subsets of events.
On the compute side, the messages feed into event-driven work-flows, either user supplied software stacks or solutions based on open-source platforms like Apache Spark as analytical framework and Apache OpenWhisk for Function-as-a-Service (FaaS) and more general computational microservices. Building on cloud application templates for scalable analysis platforms, desired services can be dynamically provisioned on DESY's on-premise OpenStack cloud as well as in commercial hybrid cloud environments. Moreover, this model supports also the integration of data management tools like Rucio to address data locality e.g. to move files subsequent to processing by event-driven work-flows.
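As a minimal sketch of the brokered-delivery pattern described above (the broker address, topic name and event schema are assumptions, not the production dCache/GPFS event format), a consumer that triggers a workflow on completed uploads could look like this:

```python
# Minimal sketch of a consumer subscribing to a stream of storage events and
# triggering a processing function. Assumptions: broker address, topic name and
# the JSON event schema are illustrative, not the production event format.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "storage-events",                                  # assumed topic name
    bootstrap_servers=["kafka.example.org:9092"],
    group_id="workflow-trigger",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

def launch_workflow(path):
    print(f"submitting processing workflow for {path}")   # placeholder action

for event in consumer:
    record = event.value
    # React only to completed uploads; ignore downloads and deletions.
    if record.get("action") == "upload" and record.get("status") == "ok":
        launch_workflow(record["path"])
```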
Speaker: Michael Schuh (Deutsches Elektronen-Synchrotron DESY)
CHEP2019_scalable_event_processing_DESY_Schuh_final.pdf McGuffin.txt
Track 4 – Data Organisation, Management and Access: distributed computing software and beyond Riverbank R8
Convener: Brian Paul Bockelman (University of Nebraska Lincoln (US))
An Information Aggregation and Analytics System for ATLAS Frontier 15m
ATLAS event processing requires access to centralized database systems where information about calibrations, detector status and data-taking conditions is stored. This processing is done on more than 150 computing sites on a world-wide computing grid, which are able to access the database using the squid-Frontier system. Some processing workflows have been found to overload the Frontier system due to the Conditions data model currently in use, specifically because some of the Conditions data requests have been found to have a low caching efficiency. The underlying cause is that requests which are non-identical as far as the cache is concerned are actually retrieving a much smaller number of unique payloads. While ATLAS is undertaking an adiabatic transition during LS2 and Run-3 from the current COOL Conditions data model to a new data model called CREST for Run 4, it is important to identify the problematic Conditions queries with low caching efficiency and work with the detector subsystems to improve the storage of such data within the current data model. For this purpose ATLAS put together an information aggregation and analytics system. The system is based on data aggregated from the squid-Frontier logs using the Elasticsearch technology. This talk describes the components of this analytics system, from the server application based on Flask/Celery to the user interface, and how we use Spark SQL functionalities to filter data for making plots, store the caching efficiency results in a PostgreSQL database and finally deploy the package via a Docker container.
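To illustrate the kind of Spark SQL filtering and aggregation used to spot low-caching-efficiency queries (the input path and field names such as `cached` and `payload_hash` are assumptions, not the actual schema of the aggregated squid-Frontier data), a sketch could be:

```python
# Minimal PySpark sketch of ranking queries by cache hit ratio.
# Assumptions: input path and field names are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("frontier-caching").getOrCreate()

logs = spark.read.json("hdfs:///frontier/aggregated/2019/*.json")

caching = (logs
           .groupBy("query")
           .agg(F.count("*").alias("requests"),
                F.sum(F.col("cached").cast("int")).alias("cache_hits"),
                F.countDistinct("payload_hash").alias("unique_payloads"))
           .withColumn("hit_ratio", F.col("cache_hits") / F.col("requests"))
           .filter(F.col("hit_ratio") < 0.5)       # keep poorly cached queries
           .orderBy(F.desc("requests")))

caching.show(20, truncate=False)
```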
Speaker: Andrea Formica (Université Paris-Saclay (FR))
Frontier-Analytics-CHEP2019.pdf
High Performance Data Format for CLAS12 15m
With increasing data volumes from Nuclear Physics experiments, the requirements on data storage and access are changing. To keep up with large data sets, new data formats are needed for efficient processing and analysis of the data. Frequently, experiment data goes through stages from data acquisition to reconstruction and data analysis, and is converted from one format to another, wasting CPU cycles.
In this work we present the High Performance Output (HIPO) data format developed for the CLAS12 experiment at Jefferson National Laboratory. It was designed to fit the needs of data acquisition and high-level data analysis and to avoid data format conversions at different stages of data processing. The new format was designed to store different event topologies from reconstructed data in tagged form for efficient access by different analysis groups. In centralized data skimming applications the HIPO data format significantly outperforms standard data formats used in Nuclear and High Energy Physics (ROOT) and industry-standard formats such as Apache Avro and Apache Parquet.
Speaker: Dr Gagik Gavalian (Jefferson Lab)
HIPO4-CHEP-2019-Gavalian.pdf HIPO4-CHEP-2019-Gavalian.pptx
XRootD 5.0.0: encryption and beyond 15m
For almost 10 years now, XRootD has been very successful at facilitating the data management of the LHC experiments. Being the foundation and main component of numerous solutions employed within the WLCG collaboration (like EOS and DPM), XRootD grew into one of the most important storage technologies in the High Energy Physics (HEP) community. With the latest major release (5.0.0), the XRootD framework brought not only architectural improvements and functional enhancements, but also a TLS-based, secure version of the xroot/root data access protocol (a prerequisite for supporting access tokens).
In this contribution we explain the xroots/roots protocol mechanics and focus on the implementation of the encryption component engineered to ensure low latencies and high throughput. We also give an overview of other developments finalized in release 5.0.0 (extended attributes support, verified close, etc.), and finally, we discuss what else is on the horizon.
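As a minimal sketch of client-side access over the encrypted protocol (the host and path are placeholders, and accepting roots:// URLs requires both client and server to be built and configured with TLS support, i.e. XRootD 5), the XRootD Python bindings could be used as follows:

```python
# Minimal sketch of reading a file over the TLS-protected xroots protocol with
# the XRootD Python bindings. Assumptions: host and path are placeholders.
from XRootD import client
from XRootD.client.flags import OpenFlags

with client.File() as f:
    status, _ = f.open("roots://xrootd.example.org:1094//store/data/file.root",
                       OpenFlags.READ)
    if not status.ok:
        raise RuntimeError(status.message)
    status, data = f.read(offset=0, size=1024)   # read the first kilobyte
    print(f"read {len(data)} bytes")
```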
Speaker: Andrew Bohdan Hanushevsky (SLAC National Accelerator Laboratory (US))
XR5-CHEP.pdf XR5-CHEP.pptx
FTS improvements for LHC Run-3 and beyond 15m
The File Transfer Service (FTS), developed at CERN and in production since 2014, has become a fundamental component of the LHC experiments' workflows.
Starting from the beginning of 2018, with the participation in the EU project Extreme Data Cloud (XDC) [1] and the activities carried out in the context of the DOMA TPC [2] and QoS [3] working groups, a series of new developments and improvements has been planned and performed, also taking into account the requirements from the experiments.
This talk will mainly focus on the support for OpenID Connect and the QoS integration via CDMI as output of the XDC project.
The integration with OpenID Connect is also following the direction of the future Authentication and Authorisation Infrastructure (AAI) for WLCG experiments.
The service scalability enhancements, the support for Xrootd and HTTP TPC and the first experiences with 'non-gridftp' transfers via FTS between WLCG production sites will also be described, with an emphasis on performance comparison.
The service enhancements are meeting the requirements for LHC Run-3 and facilitating the adoption for other HEP and non-HEP communities.
[1] http://www.extreme-datacloud.eu/
[2] https://twiki.cern.ch/twiki/bin/view/LCG/ThirdPartyCopy
[3] https://twiki.cern.ch/twiki/bin/view/LCG/QoS
Speaker: Edward Karavakis (CERN)
FTS_CHEP2019.pdf FTS_CHEP2019.pptx
XRootD and Object Store: A new paradigm 15m
The XRootD software framework is essential for data access at WLCG sites. The WLCG community is exploring and expanding XRootD functionality. This presents a particular challenge at the RAL Tier-1, as the Echo storage service is a Ceph-based erasure-coded object store. External access to Echo uses gateway machines which run GridFTP and XRootD servers. This paper will describe how third-party copy, WebDav and additional authentication protocols have been added to these XRootD servers. This allows ALICE to use Echo, as well as preparing for the eventual phase-out of GridFTP.
Local jobs access Echo via XCaches on every worker node. Remote jobs are increasingly accessing data via XRootD on Echo. For CMS jobs this is via their AAA service. For ATLAS, who are consolidating their storage at fewer sites, jobs are increasingly accessing data remotely. This paper describes the continuing work to optimise both types of data access by testing different caching methods, including remotely configured XCaches (using SLATE) running on the RAL OpenStack cloud infrastructure.
Speaker: Katy Ellis (Science and Technology Facilities Council STFC (GB))
XrootD_Echo_CHEP.pdf
Using WLCG data management software to support other communities 15m
When the LHC started data taking in 2009, the data rates were unprecedented for the time and forced the WLCG community to develop a range of tools for managing their data across many different sites. A decade later, other science communities are finding their data requirements have grown far beyond what they can easily manage and are looking for help. The RAL Tier-1's primary mission has always been to provide resources for the LHC experiments, although 10% is set aside for non-LHC experiments. In the last 2 years the Tier-1 has received additional funding to support other scientific communities and now provides over 5 PB of disk and 13 PB of tape storage to them.
RAL has run an FTS service for the LHC experiments for several years. Smaller scientific communities have used this for moving large volumes of data between sites but have frequently reported difficulties. In the past RAL also provided an LFC service for managing files stored on the Grid. The RAL Tier-1 should provide a complete data management solution for these communities, and it was therefore decided to set up a Rucio instance to do this.
In April 2018 a Rucio instance was set up at RAL which has so far been used by the AENEAS project. RAL is providing the development effort to allow a single Rucio instance to support multiple experiments. RAL also runs a DynaFed service which provides an authentication and authorization layer in front of the RAL S3 storage service.
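A minimal sketch of what experiment-facing use of such a Rucio instance could look like (the scope, dataset name and RSE expression are placeholders for whatever a community such as AENEAS would actually use):

```python
# Minimal sketch of registering a dataset and a replication rule through the
# Rucio client API. Assumptions: scope, dataset name and RSE name are placeholders.
from rucio.client import Client

rucio = Client()

scope, dataset = "user.jdoe", "example.dataset.2019"

# Create a dataset DID that the experiment's files can be attached to.
rucio.add_dataset(scope=scope, name=dataset)

# Ask Rucio to maintain one replica of the dataset on the RAL Echo endpoint
# (the RSE name here is an assumption).
rucio.add_replication_rule(dids=[{"scope": scope, "name": dataset}],
                           copies=1,
                           rse_expression="RAL-ECHO")
```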
Speaker: Ian Collier (Science and Technology Facilities Council STFC (GB))
CHEP2019Rucio.pdf CHEP2019Rucio.pptx
Track 5 – Software Development: Parallelism, accelerators and heterogenous computing Riverbank R2
Convener: Dr Felice Pantaleo (CERN)
Concurrent data structures in the ATLAS offline software 15m
In preparation for Run 3 of the LHC, the ATLAS experiment is modifying its offline software to be fully multithreaded. An important part of this is data structures that can be efficiently and safely concurrently accessed from many threads. A standard way of achieving this is through mutual exclusion; however, the overhead from this can sometimes be excessive. Fully lockless implementations are known for some data structures; however, they are typically complex, and the overhead they require can sometimes be larger than that required for locking implementations. An interesting compromise is to allow lockless access only for reading but not for writing. This often allows the data structures to be much simpler, while still giving good performance for read-mostly access patterns. This talk will show some examples of this strategy in data structures used by the ATLAS offline software. It will also give examples of synchronization strategies inspired by read-copy-update, as well as helpers for memoizing values in a multithreaded environment.
Speaker: Scott Snyder (Brookhaven National Laboratory (US))
snyder.pdf
MarlinMT - parallelising the Marlin framework 15m
Marlin is the event processing framework of the iLCSoft ecosystem. Originally developed for the ILC more than 15 years ago, it is now widely used, e.g. by CLICdp, CEPC and many test beam projects such as Calice, LCTPC and EU-Telescope. While Marlin is lightweight and flexible, it was originally designed for sequential processing only. With MarlinMT we have now evolved Marlin for parallel processing of events on multi-core architectures based on multi-threading. We report on the necessary developments and issues encountered, within Marlin as well as with the underlying LCIO EDM. A focus will be put on the new parallel event processing (PEP) scheduler. We conclude with first performance estimates, such as application speedup and memory profiling, based on parts of the ILD reconstruction chain that have been ported to MarlinMT.
Speaker: Remi Ete (DESY)
CHEP2019_REte_MarlinMT.pdf
Bringing heterogeneity to the CMS software framework 15m
The advent of computing resources with co-processors, for example Graphics Processing Units (GPU) or Field-Programmable Gate Arrays (FPGA), for use cases like the CMS High-Level Trigger (HLT) or data processing at leadership-class supercomputers imposes challenges for the current data processing frameworks. These challenges include developing a model for algorithms to offload their computations to the co-processors as well as keeping the traditional CPU busy doing other work. The CMS data processing framework, CMSSW, implements multithreading using Intel's Threading Building Blocks (TBB) library, which utilizes tasks as concurrent units of work. In this talk we will discuss a generic mechanism to interact effectively with non-CPU resources that has been implemented in CMSSW. In addition, configuring such a heterogeneous system is challenging. In CMSSW an application is configured with a configuration file written in the Python language. The algorithm types are part of the configuration. The challenge therefore is to unify the CPU and co-processor settings while allowing their implementations to be separate. We will explain how we solved these challenges while minimizing the necessary changes to the CMSSW framework. We will also discuss, with a concrete example, how algorithms offload work to NVIDIA GPUs using the CUDA API directly.
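Since CMSSW applications are configured in Python, a heterogeneous setup can be expressed directly in the configuration. The sketch below is an assumption-laden illustration (the module label and plugin names are invented, and the SwitchProducerCUDA import path reflects public CMSSW documentation but may differ between releases), not the configuration presented in the talk:

```python
# Minimal sketch of a CMSSW-style configuration declaring one logical module
# with both a CPU and a CUDA implementation. Assumptions: producer labels and
# plugin names are invented; the import path may vary between releases.
import FWCore.ParameterSet.Config as cms
from HeterogeneousCore.CUDACore.SwitchProducerCUDA import SwitchProducerCUDA

process = cms.Process("DEMO")

# One logical producer, two implementations: the framework is expected to pick
# the 'cuda' branch when a GPU is available and fall back to 'cpu' otherwise.
process.exampleClusters = SwitchProducerCUDA(
    cpu=cms.EDProducer("ExampleClusterProducerCPU"),        # hypothetical plugin
    cuda=cms.EDProducer("ExampleClusterProducerFromCUDA"),  # hypothetical plugin
)

process.path = cms.Path(process.exampleClusters)
```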
Speaker: Oliver Gutsche (Fermi National Accelerator Lab. (US))
CHEP2019_wide.pdf
Heterogeneous reconstruction: combining an ARM processor with a GPU 15m
As the mobile ecosystem has demonstrated, ARM processors and GPUs promise to deliver higher compute efficiency with lower power consumption. One interesting platform to experiment with architectures different from a traditional x86 machine is the NVIDIA AGX Xavier SoC, which pairs an 8-core 64-bit ARM processor with a Volta-class GPU with 512 CUDA cores. The CMS reconstruction software was ported to run on the ARM architecture, and there is an ongoing effort to rewrite some of the most time-consuming algorithms to leverage NVIDIA GPUs. In this presentation we will explore the challenges of running the CMS reconstruction software on a small embedded device, and compare its compute performance and power consumption with those of a traditional x86 server.
Speaker: Andrea Bocci (CERN)
Heterogeneous reconstruction: combining ARM processors with GPUs.pdf
GPU Usage in ATLAS Reconstruction and Analysis 15m
With GPUs and other kinds of accelerators becoming ever more accessible, and High Performance Computing Centres all around the world using them ever more, ATLAS has to find the best way of making use of such accelerators in much of its computing.
Tests with GPUs -- mainly with CUDA -- have been performed in the past in the experiment. At that time the conclusion was that it was not advantageous for the ATLAS offline and trigger software to invest time and money into GPUs. However, as the usage of accelerators has become cheaper and simpler in recent years, their re-evaluation in ATLAS's offline software is warranted.
We will show code designs and performance results of using OpenCL, OpenACC and CUDA to perform calculations using the ATLAS offline/analysis (xAOD) Event Data Model. We compare the performance and flexibility of these different offload methods, and show how different memory management setups affect our ability to offload different types of calculations to a GPU efficiently, so that an overall throughput increase can be achieved even without highly optimising our reconstruction code specifically for GPUs.
Speaker: Attila Krasznahorkay (CERN)
GPU Usage in ATLAS Reconstruction and Analysis 2019.11.04..pdf
GPU-based Offline Clustering Algorithm for the CMS High Granularity Calorimeter 15m
The future upgraded High Luminosity LHC (HL-LHC) is expected to deliver about 5 times higher instantaneous luminosity than the present LHC, producing pile-up up to 200 interactions per bunch crossing. As a part of its phase-II upgrade program, the CMS collaboration is developing a new end-cap calorimeter system, the High Granularity Calorimeter (HGCAL), featuring highly-segmented hexagonal silicon sensors (0.5-1.1 cm2) and scintillators (4-30 cm2), totalling more than 6 million channels in comparison to about 200k channels for the present CMS endcap calorimeters. For each event, the HGCAL clustering algorithm needs to reduce more than 100k hits into ~10k clusters, while keeping the fine shower structure. The same algorithm must reject pileup for further shower reconstruction. Due to the high pileup in the HL-LHC and high granularity, HGCAL clustering is confronted with an unprecedented surge of computation load. This motivates the concept of high-throughput heterogeneous computing in HGCAL offline clustering. Here we introduce a fully-parallelizable density-based clustering algorithm running on GPUs. It uses a tile-based data structure as input for a fast query of neighbouring cells and achieves an O(n) computational complexity. Within the CMS reconstruction framework, clustering on GPUs demonstrates at least a 10x throughput increase compared to current CPU-based clustering.
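To make the tile-based, density-based approach concrete, the following is a minimal pure-Python sketch of the idea (the actual HGCAL algorithm runs on GPUs and differs in detail): hits are binned into tiles so that neighbour queries only scan the adjacent tiles, a local density is computed for each hit, and each hit is attached to its nearest higher-density neighbour so that local density maxima seed the clusters. All names, tile sizes and cut values are illustrative.

import math
from collections import defaultdict

TILE = 5.0   # tile size, in the same (arbitrary) units as the hit positions
DC = 5.0     # distance cut for the density estimate (illustrative value)

def build_tiles(hits):
    """Bin hit indices into a dict keyed by integer tile coordinates."""
    tiles = defaultdict(list)
    for i, (x, y, _e) in enumerate(hits):
        tiles[(int(x // TILE), int(y // TILE))].append(i)
    return tiles

def neighbours(hits, tiles, i):
    """Indices of hits in the 3x3 block of tiles around hit i."""
    x, y, _ = hits[i]
    tx, ty = int(x // TILE), int(y // TILE)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in tiles.get((tx + dx, ty + dy), []):
                yield j

def cluster(hits):
    tiles = build_tiles(hits)

    def dist(i, j):
        return math.hypot(hits[i][0] - hits[j][0], hits[i][1] - hits[j][1])

    # 1) local density: energy of nearby hits, weighted by closeness
    rho = [sum(hits[j][2] * (1.0 - dist(i, j) / DC)
               for j in neighbours(hits, tiles, i) if dist(i, j) < DC)
           for i in range(len(hits))]
    # 2) link each hit to its nearest higher-density neighbour
    parent = list(range(len(hits)))
    for i in range(len(hits)):
        best = None
        for j in neighbours(hits, tiles, i):
            if rho[j] > rho[i] and (best is None or dist(i, j) < dist(i, best)):
                best = j
        if best is not None:
            parent[i] = best
    # 3) follow links up to the local density maxima, which seed the clusters
    def root(i):
        while parent[i] != i:
            i = parent[i]
        return i
    return [root(i) for i in range(len(hits))]

if __name__ == "__main__":
    hits = [(1, 1, 2.0), (2, 1, 1.0), (2, 2, 0.5), (40, 40, 3.0), (41, 40, 1.0)]
    print(cluster(hits))  # -> [0, 0, 0, 3, 3]: two clusters, seeded by hits 0 and 3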
Speaker: Mr Ziheng Chen (Northwestern University)
chep2019.pdf
Track 6 – Physics Analysis: New Physics studies Hall G
Convener: Maurizio Pierini (CERN)
Signal versus Background Interference in $H^+\to t\bar b$ Signals for MSSM Benchmark Scenarios 15m
In this talk I will present an investigation into sizeable interference effects between a heavy charged Higgs boson signal produced via $gg\to t\bar b H^-$ (+ c.c.) followed by the decay $H^-\to b\bar t$ (+ c.c.) and the irreducible background given by $gg\to t\bar t b \bar b$ topologies at the Large Hadron Collider (LHC). I will show how such effects could spoil current $H^\pm$ searches where signal and background are normally treated separately. The reason for this is that a heavy charged Higgs boson can have a large total width, in turn enabling such interferences, altogether leading to very significant alterations, both at the inclusive and exclusive level, of the yield induced by the signal alone. This therefore implies that currently established LHC searches for such wide charged Higgs bosons require modifications. This is shown quantitatively using two different benchmark configurations of the minimal realisation of Supersymmetry, wherein such $H^\pm$ states naturally exist.
Speaker: Riley Patrick (The University of Adelaide)
interference.pdf
GAMBIT: The Global and Modular BSM Inference Tool 15m
GAMBIT is a modular and flexible framework for performing global fits to a wide range of theories for new physics. It includes theory and analysis calculations for direct production of new particles at the LHC, flavour physics, dark matter experiments, cosmology and precision tests, as well as an extensive library of advanced parameter-sampling algorithms. I will present the GAMBIT software framework and give a brief overview of the main physics results it has produced so far.
Speaker: Pat Scott (The University of Queensland)
PS_CHEP.pdf
Searching for dark matter signatures in 20 years of GPS atomic clock data 15m
Despite the overwhelming cosmological evidence for the existence of dark matter, and the considerable effort of the scientific community over decades, there is no evidence for dark matter in terrestrial experiments.
The GPS.DM observatory uses the existing GPS constellation as a 50,000 km-aperture sensor array, analysing the satellite and terrestrial atomic clock data for exotic physics signatures. In particular, the collaboration searches for evidence of transient variations of fundamental constants correlated with the Earth's galactic motion through the dark matter halo.
The initial results of the search lead to an orders-of-magnitude improvement in constraints on certain models of dark matter [1].
I will discuss the initial results and future prospects, including the method used for processing the data, and the "GPS simulator" and dark-matter signal generator we built to test these methods [2].
[1] B. M. Roberts, G. Blewitt, C. Dailey, M. Murphy, M. Pospelov, A. Rollings, J. Sherman, W. Williams, and A. Derevianko, Nat. Commun. 8, 1195 (2017).
[2] B. M. Roberts, G. Blewitt, C. Dailey, and A. Derevianko, Phys. Rev. D 97, 083009 (2018).
Speaker: Benjamin Roberts (The University of Queensland)
Roberts-CHEP-2019.pdf
Exploring the new physics in 2HDM 15m
In this talk, we discuss the implications for new physics in the Two Higgs Doublet Model (2HDM) under various experimental constraints. As part of the work of the GAMBIT group, we use the global fit method to constrain the parameter space, look for hints of new physics, and make predictions for further studies.
In our global fit, we include the constraints from LEP, the LHC (SM-like Higgs boson searches), the theoretical requirements (unitarity, perturbativity, and vacuum stability), various flavour physics constraints (the radiative B decay $B \to X_s \gamma$, rare fully leptonic B decays $B \to \mu^+\mu^-$, etc.) and the muon g-2 anomaly.
After the 7-parameter global fit, we study the result in detail, analysing the effects of the individual constraints and identifying which constraints are most powerful for constraining parameters and discovering new particles. For the Type-II 2HDM, we find that $\lambda_2$ is sensitive to the LHC SM-like Higgs boson search results. Our final results will be displayed in the commonly considered $\tan \beta$ - $\cos(\beta-\alpha)$ and $m_A$ - $\tan \beta$ planes.
Speaker: Dr wei su (University of Adelaide)
chep-weisu.pdf
Monte Carlo event generator with model-independent new physics effect for B->K(*)ll decays 15m
At high-luminosity flavor factory experiments such as the Belle II experiment, it is expected that new physics effects can be found and new physics models constrained thanks to the high statistics and the many available observables. In such analyses, a global analysis of the many observables with a model-independent approach is important. One difficulty in such a global analysis is that new physics could affect the numerical results obtained by experiments assuming the Standard Model, because of changes in the reconstructed kinematical distributions used in the event selection and in the fitting to obtain the number of signal and background events. Therefore, it is also important to prepare an event generator including the new physics effects for the Monte Carlo simulation of the detector response, to estimate and account for these effects properly in the global analysis.
In this work, we present the development of an event generator for B->K(*)ll decays including new physics effects in a model-independent way, parametrized with the Wilson coefficients. We implement the decay model using the EvtGen [https://evtgen.hepforge.org/] framework so that it can be used in the analysis software frameworks of the B physics experiments. For the theoretical calculation of the new physics effects we consider the EOS [https://eos.github.io/] library and other possible calculations. We report the results obtained with the developed event generator and its application in the global analysis.
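As a toy illustration of how an event generator can include new-physics effects through the Wilson coefficients, the sketch below draws q^2 values by accept-reject sampling from a differential rate whose interference term depends on shifts of C9 and C10; the rate function is a deliberately simplified stand-in (in the real generator the calculation would come from EOS or a comparable theory code), and all shapes and numbers are illustrative only.

import random

# Illustrative-only "differential rate" in q^2 for a B -> K(*) l l type decay.
# The real shape would come from a theory library such as EOS; here the rate
# simply depends on shifts of the Wilson coefficients C9 and C10.
def toy_rate(q2, dC9=0.0, dC10=0.0):
    c9, c10 = 4.2 + dC9, -4.2 + dC10      # SM-like reference values (toy)
    phase_space = max(0.0, (q2 - 1.0) * (19.0 - q2))
    return phase_space * (c9 * c9 + c10 * c10 + 0.3 * c9 * q2)

def generate(n_events, dC9=0.0, dC10=0.0, q2min=1.0, q2max=19.0, seed=1):
    """Accept-reject sampling of q^2 according to the toy rate."""
    rng = random.Random(seed)
    ceiling = max(toy_rate(q2min + i * 0.01, dC9, dC10)
                  for i in range(int((q2max - q2min) / 0.01) + 1))
    events = []
    while len(events) < n_events:
        q2 = rng.uniform(q2min, q2max)
        if rng.uniform(0.0, ceiling) < toy_rate(q2, dC9, dC10):
            events.append(q2)
    return events

if __name__ == "__main__":
    sm = generate(5000)
    shifted = generate(5000, dC9=-1.0)    # a new-physics shift in C9
    print("mean q2, SM-like :", sum(sm) / len(sm))
    print("mean q2, shifted :", sum(shifted) / len(shifted))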
Speaker: Koji Hara (KEK)
20191104_CHEP_KHara_16_9.pdf
Ways of Seeing: Finding BSM physics at the LHC 15m
Searches for beyond-Standard Model physics at the LHC have thus far not uncovered any evidence of new particles, and this is often used to state that new particles with low mass are now excluded. Using the example of the supersymmetric partners of the electroweak sector of the Standard Model, I will present recent results from the GAMBIT collaboration that show that there is plenty of room for low mass solutions based on the LHC data. I will then present a variety of methods for designing new LHC analyses that can successfully target those solutions.
Speaker: Martin John White (University of Adelaide (AU))
CHEP-2019.pdf
Track 7 – Facilities, Clouds and Containers: Cloud computing Riverbank R7
Convener: Christoph Wissing (Deutsches Elektronen-Synchrotron (DE))
The DODAS experience on EGI Federated Cloud 15m
The EGI Cloud Compute service offers a multi-cloud IaaS federation that brings together research clouds as a scalable computing platform for research, accessible with OpenID Connect Federated Identity. The federation is not limited to single sign-on; it also introduces features to facilitate the portability of applications across providers: i) a common VM image catalogue with VM image replication to ensure these images are available at providers whenever needed; ii) a GraphQL information discovery API to understand the capacities and capabilities available at each provider; and iii) integration with orchestration tools (such as Infrastructure Manager) to abstract the federation and facilitate using heterogeneous providers. EGI also monitors the correct functioning of every provider and collects usage information across the whole infrastructure.
DODAS (Dynamic On Demand Analysis Service) is an open-source Platform-as-a-Service tool, which allows users to deploy software applications over heterogeneous and hybrid clouds. DODAS is one of the so-called Thematic Services of the EOSC-hub project and it instantiates on-demand container-based clusters offering a high level of abstraction to users, allowing them to exploit distributed cloud infrastructures with very limited knowledge of the underlying technologies.
This work presents a comprehensive overview of the DODAS integration with the EGI Cloud Federation, reporting on the experience of integrating with the CMS experiment's submission infrastructure.
Speaker: Daniele Spiga (Universita e INFN, Perugia (IT))
CHEP2019-DODAS_EGI.pdf
CloudScheduler V2: Distributed Cloud Computing in the 21st century 15m
The cloudscheduler VM provisioning service has been running production jobs for ATLAS and Belle II for many years using commercial and private clouds in Europe, North America and Australia. Initially released in 2009, version 1 is a single Python 2 module implementing multiple threads to poll resources and jobs, and to create and destroy virtual machines. The code is difficult to scale, maintain or extend and lacks many desirable features, such as status displays, multiple user/project management, robust error analysis and handling, and time series plotting, to name just a few examples. To address these shortcomings, our team has re-engineered the cloudscheduler VM provisioning service from the ground up. The new version, dubbed cloudscheduler version 2 or CSV2, is written in Python 3, runs on any modern Linux distribution, and uses current supporting applications and libraries. The system is composed of multiple, independent Python 3 modules communicating with each other through a central MariaDB (version 10) database. It features both graphical (web browser) and command line user interfaces and supports multiple users/projects with ease. Users have the ability to manage and monitor their own cloud resources without the intervention of a system administrator. The system is scalable, extensible, and maintainable. It is also far easier to use and is more flexible than its predecessor. We present the design, highlight the development process which utilizes unit tests, and show encouraging results from our operational experience with thousands of jobs and worker nodes. We also present our experience with containers for running workloads, code development and software distribution.
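The following sketch illustrates the general pattern of independent modules coordinating through a central database, as described above: a scheduler module reads the queue of idle jobs and the registered clouds from the database and decides how many VMs to request from each. It uses SQLite in place of MariaDB, and the table and column names are invented for the illustration rather than taken from CSV2.

import sqlite3

# Stand-in for the central database (MariaDB in the real system); the schema
# and column names below are invented for this illustration.
def setup(conn):
    conn.executescript("""
        CREATE TABLE clouds (name TEXT, max_vms INTEGER, running_vms INTEGER);
        CREATE TABLE jobs   (id INTEGER, state TEXT);
        INSERT INTO clouds VALUES ('cloud-a', 10, 4), ('cloud-b', 20, 18);
    """)
    conn.executemany("INSERT INTO jobs VALUES (?, 'idle')",
                     [(i,) for i in range(12)])

def schedule(conn):
    """Decide how many new VMs to request from each cloud for the idle jobs."""
    idle = conn.execute("SELECT COUNT(*) FROM jobs WHERE state='idle'").fetchone()[0]
    requests = {}
    for name, max_vms, running in conn.execute(
            "SELECT name, max_vms, running_vms FROM clouds ORDER BY name"):
        free = max(0, max_vms - running)
        booted = min(free, idle)
        if booted:
            requests[name] = booted
            idle -= booted
    return requests

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    setup(conn)
    # e.g. {'cloud-a': 6, 'cloud-b': 2} for 12 idle jobs and the capacities above
    print(schedule(conn))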
Speaker: Randall Sobie (University of Victoria (CA))
Sobie-CS-CHEP.pdf
Increasing interoperability for research clouds: CS3APIs for connecting sync&share storage, applications and science environments 15m
Cloud Services for Synchronization and Sharing (CS3) have become increasingly popular in the European Education and Research landscape in recent years. Services such as CERNBox, SWITCHdrive, CloudStor and many more have become indispensable in everyday work for scientists, engineers and administrators.
CS3 services represent an important part of the EFSS market segment (Enterprise File Sync and Share). According to the report at the last CS3 2019 Rome conference, 25 sites provide a service to a total of 395 thousand researchers and educators around the globe (in Europe and Australia, China, US, Brazil, South Africa and Russia), serving 2.7 billion files (corresponding to 11.5 PB of storage). CS3 provides easily accessible sync&share services with intuitive and responsive user interfaces.
Although these services are becoming popular because of their intuitive interface for sharing and synchronization of data, availability on all platforms (mobile, desktop and web) and capabilities to adapt to different user scenarios such as offline work, the commercially developed sync&share platforms are not sufficiently integrated with research services, tools and applications. This lack of integration currently forms a major bottleneck for European collaborative research communities. In addition, the services operated by the several European providers in CS3 are currently too fragmented.
The CS3 APIs are a set of APIs that make research clouds based on sync and share technology interoperable. The APIs are designed to decrease the burden of porting an application developed for one EFSS service to another, and also provide a standard way to connect the sync and share platform with existing and new storage repositories over a well-defined metadata control protocol. These interconnections increase the cohesion between services to create an easily accessible and integrated science environment that facilitates research activities across institutions without fragmented silos based on ad-hoc solutions.
We report on our experience designing the protocol and the reference implementation (REVA), and its future evolution to reduce the fragmentation in the pan-European research network.
20190528_CHEP-CS3APIS .pdf
Pre-Commercial Procurement: R&D as a Service for the European Open Science Cloud 15m
The use of commercial cloud services has gained popularity in research environments. Not only is it a flexible solution for adapting computing capacity to researchers' needs, it also provides access to the newest functionalities on the market. In addition, most service providers offer cloud credits, enabling researchers to explore innovative architectures before procuring them at scale. Yet, the economical and contractual aspects linked to the production use of commercial clouds are often overlooked, preventing researchers from reaping their full benefits.
CERN, in collaboration with leading European research institutes, has launched several initiatives to bridge this gap. Completed in 2018, the HNSciCloud Pre-Commercial Procurement (PCP) project successfully developed a European hybrid cloud platform to pioneer the convergence and deployment of commercial cloud, high-performance computing and big-data capabilities for scientific research. Leveraging many of the lessons learned from HNSciCloud, the OCRE project - Open Clouds for Research Environments - started in January 2019 in order to accelerate commercial cloud adoption in the European research community. In parallel, the ARCHIVER PCP project - Archiving and Preservation for Research Environments - will develop hybrid and scalable solutions for archiving and long-term preservation of scientific data whilst ensuring that research groups retain stewardship of their datasets.
With a total procurement budget exceeding €18 million, these initiatives are setting best practices for effective and sustainable procurement of commercial cloud services for research activities. These are highly relevant as, in the wider context of the European Open Science Cloud (EOSC), the engagement of commercial providers is considered fundamental to contribute to the creation of a sustainable, technologically advanced environment with open services for data management, analysis and re-use across disciplines, with transparent costing models.
In this contribution, we will detail the outcomes of the HNSciCloud PCP project, expand on the objectives of the subsequent OCRE and ARCHIVER projects and provide a vision for the role of the private sector within the EOSC.
Speaker: Marion Devouassoux (CERN)
Presentation CHEP_Pre-Commercial Procurement_ R&D-as-a-Service for the European Open Science Cloud (EOSC)_Marion Devouassoux.pdf
Dynamic integration of distributed, Cloud-based HPC and HTC resources using JSON Web Tokens and the INDIGO IAM Service 15m
In the last couple of years, we have been actively developing the Dynamic On-Demand Analysis Service (DODAS) as an enabling technology to deploy container-based clusters over any Cloud infrastructure with almost zero effort. The DODAS engine is driven by high-level templates written in the TOSCA language, which abstract away the complexity of many configuration details. DODAS is particularly suitable for harvesting opportunistic computing resources; this is why several scientific communities have already integrated their computing use cases into DODAS-instantiated clusters, automating the instantiation, management and federation of HTCondor batch systems.
The increasing demand, availability and utilization of HPC by and for multidisciplinary user communities often mandates the ability to transparently integrate, manage and mix HTC and HPC resources.
In this paper, we discuss our experience extending and using DODAS to connect HPC and HTC resources in the context of a distributed Italian regional infrastructure involving multiple sites and communities. In this use case, DODAS automatically generates HTCondor-based clusters on-demand, dynamically and transparently federating sites that may also include HPC resources managed by SLURM; DODAS allows user workloads to make opportunistic and automated use of both HPC and HTC resources, thus effectively maximizing and optimizing resource utilization.
We also report on our experience of using and federating HTCondor batch systems exploiting the JSON Web Token capabilities introduced in recent HTCondor versions, replacing the traditional X509 certificates in the whole chain of workload authorization. In this respect we also report on how we integrated HTCondor using OAuth with the INDIGO IAM service.
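To illustrate the token-based authorization flow mentioned above, the snippet below decodes the payload of a JSON Web Token with the Python standard library and applies a minimal acceptance policy to its claims (issuer, expiry, scopes). It deliberately skips signature verification, and the claim values are invented examples rather than real INDIGO IAM or HTCondor output.

import base64
import json
import time

def jwt_claims(token):
    """Decode the (unverified) payload of a JWT: header.payload.signature."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def acceptable(claims, expected_issuer, required_scope):
    """Minimal policy check: issuer, expiry and scope (illustrative only)."""
    return (claims.get("iss") == expected_issuer
            and claims.get("exp", 0) > time.time()
            and required_scope in claims.get("scope", "").split())

if __name__ == "__main__":
    # A hand-built example token payload (not a real IAM token).
    payload = {"iss": "https://iam.example.org/", "sub": "pilot",
               "exp": int(time.time()) + 3600, "scope": "compute.read compute.create"}
    fake = ".".join([
        base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("="),
        base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("="),
        "",
    ])
    claims = jwt_claims(fake)
    print(claims["sub"], acceptable(claims, "https://iam.example.org/", "compute.create"))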
Speaker: Stefano Dal Pra (Universita e INFN, Bologna (IT))
CHEP19-CnafParma.pdf
Characterizing network paths in and out of the Clouds 15m
Cloud computing is becoming mainstream, with funding agencies moving beyond prototyping and starting to fund production campaigns, too. An important aspect of any production computing campaign is data movement, both incoming and outgoing. And while the performance and cost of VMs are relatively well understood, the network performance and cost are not.
We thus embarked on a network characterization campaign, documenting traceroutes, latency and throughput in various regions of Amazon AWS, Microsoft Azure and Google GCP Clouds, both between Cloud resources and major DTNs in the Pacific Research Platform, including OSG data federation caches in the network backbone, and inside the clouds themselves. We also documented the incurred cost while doing so.
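The sketch below shows, in spirit, the kind of measurement loop used in such a campaign: it records TCP connect latency to a list of endpoints and estimates the throughput of a single HTTP download. The endpoint names and URL are placeholders; a real campaign would use dedicated DTNs and tools such as traceroute or iperf rather than this simplified probe.

import socket
import time
import urllib.request

ENDPOINTS = [("example.com", 443), ("example.org", 443)]   # placeholder hosts
TEST_URL = "https://example.com/"                          # placeholder object

def connect_latency_ms(host, port, samples=5):
    """Median TCP connect time to host:port, in milliseconds."""
    times = []
    for _ in range(samples):
        t0 = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            times.append((time.perf_counter() - t0) * 1000.0)
    return sorted(times)[len(times) // 2]

def download_throughput_mbps(url):
    """Approximate throughput of a single HTTP(S) download, in Mbit/s."""
    t0 = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        nbytes = len(resp.read())
    return nbytes * 8 / 1e6 / (time.perf_counter() - t0)

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        print(f"{host}:{port} latency ~ {connect_latency_ms(host, port):.1f} ms")
    print(f"throughput ~ {download_throughput_mbps(TEST_URL):.1f} Mbit/s")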
Along the way we discovered that network paths were often not what the major academic network providers thought they were, and we helped them in improving the situation, thus improving peering between academia and commercial cloud.
In this talk we present the observed results, both during the initial test runs and the latest state of the art, as well as explain what it took to get there.
Speaker: Igor Sfiligoi (UCSD)
CHEP19_Networks.pdf CHEP19_Networks.pptx
Track 8 – Collaboration, Education, Training and Outreach: Education Riverbank R1
Convener: Marzena Lapka (CERN)
"Physics of Data", an innovative master programme in Physics 15m
Most of the challenges set by modern physics endeavours are related to the management, processing and analysis of massive amounts of data. As stated in a recent Nature editorial (The thing about data, Nature Physics volume 13, page 717, 2017), "the rise of big data represents an opportunity for physicists. To take full advantage, however, they need a subtle but important shift in mindset". All this calls for a substantial change in the way future physicists are taught: statistics and probability, information theory, machine learning as well as scientific computing and hardware setups should be the pillars of the education of a new generation of physics students. This is what "Physics of Data", an innovative master programme launched in fall 2018 by the University of Padua, aims at. This contribution summarises its actual implementation, describing the educational methods (all focused on "hands-on" activities and research projects) and reporting on the brilliant results obtained by the first enrolled students.
Speaker: Marco Zanetti (Universita e INFN, Padova (IT))
CHEP19_PoD_UniPD.pdf CHEP19_PoD_UniPD.pptx
Unplugged Computing for Children 15m
The number of women in technical computing roles in the HEP community hovers at around 15%. At the same time there is a growing body of research to suggest that diversity, in all its forms, brings positive impact on productivity and wellbeing. These aspects are directly in line with many organisations' values and missions, including CERN. Although proactive efforts to recruit more women in our organisations and institutes may help, the percentage of female applicants in candidate pools is similarly low and limits the potential for change. Factors influencing the career choice of girls have been identified to start as early as primary school and are closely tied to encouragement and exposure. It is the hope of various groups in the HEP community that, by intervening early, there may be a change in demographics over the years to come. During 2019, the Women in Technology Community at CERN developed two workshops for 6-9 year olds, which make the fundamental concepts of computer science accessible to young people with no prior experience and minimal assumed background knowledge. The immediate objectives were to demystify computer science, and to allow the children to meet a diverse set of role models from technical fields through our volunteer tutors. The workshops have been run for International Women's Day and Girls in ICT Day, and a variation will be incorporated into the IT contribution to CERN's Open Day in September 2019 (where both boys and girls will participate). We will present an overview of the statistics behind our motivation, describe the content of the workshops, results and lessons learnt and the future evolution of such activities.
Speaker: Hannah Short (CERN)
Slides Unplugged Computing for Children - Final.pdf
Delivering a machine learning course on HPC resources 15m
In recent years proficiency in data science and machine learning (ML) has become one of the most requested skills for jobs in both industry and academia. Machine learning algorithms typically require large sets of data to train the models and extensive usage of computing resources, both for training and inference. Especially for deep learning algorithms, training performance can be dramatically improved by exploiting Graphical Processing Units (GPU). The needed skill set for a data scientist is therefore extremely broad, and ranges from knowledge of ML models to distributed programming on heterogeneous resources. While most of the available training resources focus on ML algorithms and tools such as TensorFlow, we designed a course for doctoral students where model training is tightly coupled with the underlying technologies that can be used to dynamically provision resources. Throughout the course, students have access to OCCAM, an HPC facility at the University of Torino, managed using container-based cloud-like technologies, where Computing Applications are run on Virtual Clusters deployed on top of the physical infrastructure.
Task scheduling over OCCAM resources is managed by an orchestration layer (such as Mesos or Kubernetes), leveraging Docker containers to define and isolate the runtime environment. The Virtual Clusters developed to execute ML workflows are accessed through a web interface based on JupyterHub. When a user authenticates on the Hub, a notebook server is created as a containerized application. A set of libraries and helper functions is provided to execute a parallelized ML task by automatically deploying a Spark driver and several Spark execution nodes as Docker containers. This solution automates the delivery of the software stack required by a typical ML workflow and enables scalability by allowing the execution of ML tasks, including training, over commodity (i.e. CPUs) or high-performance (i.e. GPUs) resources distributed over different hosts across a network.
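A minimal sketch of the kind of parallelized task students run on the Virtual Clusters is shown below: a small hyper-parameter scan distributed over Spark executors. It assumes a working pyspark installation with scikit-learn available on the executors; the dataset and parameter grid are illustrative.

from pyspark.sql import SparkSession

def evaluate(c_value):
    """Train and evaluate one configuration; runs on a Spark executor."""
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    x, y = load_digits(return_X_y=True)
    score = cross_val_score(SVC(C=c_value, gamma="scale"), x, y, cv=3).mean()
    return c_value, float(score)

if __name__ == "__main__":
    spark = SparkSession.builder.appName("param-scan").getOrCreate()
    grid = [0.1, 1.0, 10.0, 100.0]                      # illustrative grid
    results = spark.sparkContext.parallelize(grid, len(grid)).map(evaluate).collect()
    for c_value, score in sorted(results):
        print(f"C={c_value:<6} accuracy={score:.3f}")
    spark.stop()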
Speaker: Federica Legger (Universita e INFN Torino (IT))
CHEP19 - ML course.pdf
The iTHEPHY project and its software platform: enhancing remote teacher-student collaboration 15m
iTHEPHY is an ERASMUS+ project which aims at developing innovative student-centered Deeper Learning Approaches (DPA) and Project-Based teaching and learning methodologies for HE students, contributing to increasing the internationalization of physics master courses. In this talk we will introduce the iTHEPHY project status and the main goals attained, with a focus on the web-based virtual environment developed to support the groups of students and the teams of teachers during their DPA learning and teaching activities: the iTHEPHY DPA Platform. The iTHEPHY DPA platform will be described in detail, focusing on the methodologies and technologies which enabled us to deliver a modular, user-friendly, open-source, scalable and reusable platform that can be adopted by other communities in a straightforward way. The presentation will describe the work carried out to integrate some well-established tools like Moodle, Redmine, BigBlueButton, Rocketchat, Jitsi, Sharelatex and INDIGO-DataCloud IAM. Some aspects of the containerization of services in the Cloud will also be covered. Finally, some reflections on the sustainability of the software platform delivered will be presented.
Speaker: Gianluca Peco (Universita e INFN, Bologna (IT))
chep-ithephy-final.pdf chep-ithephy-final.pptx
International Particle Physics Masterclasses - Current development to expand scope and global reach 15m
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and outreach for particle physics. IPPOG's flagship activity is the International Particle Physics Masterclass programme, which provides secondary students with access to particle physics data using dedicated visualisation and analysis software. Students meet scientists, learn about particle physics, accelerators and detectors, perform physics measurements and search for new phenomena, then compare results in an end-of-day videoconference with other classes. The most recent of these events was held from 7 March to 16 April 2019 with thousands of students participating in 332 classes held in 239 institutes from 54 countries around the world. We report on the evolution of Masterclasses in recent years, in both physics and computing scope, as well as in global reach.
Speaker: Farid Ould-Saada (University of Oslo (NO))
IPPOG IMC - CHEP2019.pdf
Track 9 – Exascale Science: Strategies by experiments and organizations Riverbank R4
Convener: Wei Yang (SLAC National Accelerator Laboratory (US))
High Performance Computing for High Luminosity LHC 15m
High Performance Computing (HPC) centers are the largest facilities available for science. They are centers of expertise for computing scale and local connectivity and represent unique resources. The efficient usage of HPC facilities is critical to the future success of production processing campaigns of all Large Hadron Collider (LHC) experiments. A substantial amount of R&D investigation is being performed in order to harness the power provided by such machines. HPC facilities are early adopters of heterogeneous accelerated computing architectures, which represent a challenge and an opportunity. The adoption of accelerated heterogeneous architectures has the potential to dramatically increase the performance of specific workflows and algorithms. In this presentation we will discuss R&D work on using alternative architectures, both in collaboration with industry through CERN openlab and with the DEEP-EST project, a European consortium to build a prototype modular HPC infrastructure at the exa-scale. We will present the work on a proof-of-concept container platform and batch integration for workload submissions to access HPC testbed resources for data intensive science applications. As strategic computing resources, HPC centers are often isolated with tight network security, which represents a challenge for data delivery and access. We will close by summarizing the requirements and challenges for data access, through the Data Organization Management and Access (DOMA) project of the WLCG. Facilitating data access is critical to the adoption of HPC centers for data intensive science.
Speaker: Maria Girone (CERN)
HPC_CHEP2019MG.pdf HPC_CHEP2019MG.pptx
CMS Strategy for HPC resource exploitation 15m
High Energy Physics (HEP) experiments will enter a new era with the start of the HL-LHC program, when the required computing capacity will surpass current capacities by large factors. Looking ahead to this scenario, funding agencies from participating countries are encouraging the HEP collaborations to consider the rapidly developing High Performance Computing (HPC) international infrastructures as a means to satisfy at least a fraction of the foreseen HEP processing demands. Moreover, considering that HEP needs have usually been covered by facilities that are cost-optimized rather than performance-optimized, employing HPC centers would also allow access to more advanced resources. HPC systems are highly non-standard facilities, custom-built for use cases largely different from CMS demands, namely the processing of real and simulated particle collisions which can be analyzed individually without any correlation. The utilization of these systems by HEP experiments would not be trivial, as each HPC center is different, increasing the level of complexity from the CMS integration and operations perspectives. Additionally, while CMS data reside on a distributed, highly-interconnected storage infrastructure, HPC systems are in general not meant for accessing large data volumes residing outside the facility. Finally, the allocation policies for these resources are quite different from the current usage of pledged resources deployed at CMS-supporting Grid sites. This contribution will report on the CMS strategy developed to make effective use of HPC resources, involving a closer collaboration between CMS and HPC centers in order to further understand and subsequently overcome the present obstacles. Progress in the necessary technical and operational adaptations being made in CMS computing will be described.
CMS-HPC_Strategy_CHEP2019.pdf
Modeling of the CMS HL-LHC computing system 15m
The High-Luminosity LHC will provide an unprecedented data volume of complex collision events. The desire to keep as many of the "interesting" events as possible for investigation by analysts implies a major increase in the scale of compute, storage and networking infrastructure required for HL-LHC experiments. An updated computing model is required to facilitate the timely publication of accurate physics results from HL-LHC data samples. This talk discusses the study of the computing requirements for CMS during the era of the HL-LHC. We will discuss how we have included requirements beyond the usual CPU, disk and tape estimates made by LHC experiments during Run 2, such as networking and tape read/write rate requirements. We will show how Run 2 monitoring data has been used to make choices towards a HL-LHC computing model. We will illustrate how changes to the computing infrastructure or analysis approach can impact total resource needs and cost. Finally, we will discuss the approach and status of the CMS process for evolving its HL-LHC computing model based on modeling and other factors.
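As a flavour of the kind of modelling involved, the toy calculation below scales yearly CPU, disk and tape needs from a handful of per-event inputs; every number is an invented placeholder used only to show how per-event costs propagate to totals, not a CMS estimate.

# Toy HL-LHC resource model: every number below is an invented placeholder,
# used only to show how per-event costs propagate to yearly totals.
params = {
    "events_per_year": 5e10,        # recorded events per year
    "cpu_sec_per_event": 50.0,      # reconstruction CPU time per event
    "aod_kb_per_event": 200.0,      # analysis-format size kept on disk (kB)
    "raw_kb_per_event": 2000.0,     # raw size kept on tape (kB)
    "mc_ratio": 1.5,                # simulated events per data event
    "disk_replicas": 2,
}

def yearly_needs(p):
    events = p["events_per_year"] * (1 + p["mc_ratio"])
    cpu_core_years = events * p["cpu_sec_per_event"] / (365 * 24 * 3600)
    disk_pb = events * p["aod_kb_per_event"] * p["disk_replicas"] / 1e12
    tape_pb = p["events_per_year"] * p["raw_kb_per_event"] / 1e12
    return cpu_core_years, disk_pb, tape_pb

if __name__ == "__main__":
    cpu, disk, tape = yearly_needs(params)
    print(f"CPU  ~ {cpu:,.0f} core-years")
    print(f"Disk ~ {disk:,.0f} PB")
    print(f"Tape ~ {tape:,.0f} PB")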
Speaker: David Lange (Princeton University (US))
chep_lange.pdf
Integrating LHCb workflows on HPC resources: status and strategies 15m
High Performance Computing (HPC) supercomputers are expected to play an increasingly important role in HEP computing in the coming years. While HPC resources are not necessarily the optimal fit for HEP workflows, computing time at HPC centers on an opportunistic basis has already been available to the LHC experiments for some time, and it is also possible that part of the pledged computing resources will be offered as CPU time allocations at HPC centers in the future. The integration of the experiment workflows to make the most efficient use of HPC resources is therefore essential.
This presentation will describe the work that has been necessary to integrate LHCb workflows at HPC sites. This has required addressing two types of challenges: in the distributed computing area, for efficiently submitting jobs, accessing the software stacks and transferring data files; and in the software area, for optimising software performance on hardware architectures that differ significantly from those traditionally used in HEP. The talk will cover practical experience for the deployment of Monte Carlo generation and simulation workflows at the HPC sites available to LHCb. It will also describe the work achieved on the software side to improve the performance of these applications using parallel multi-process and multi-threaded approaches.
LHCb-HPCs.pdf
Enabling ATLAS big data processing on Piz Daint at CSCS 15m
Predictions of the LHC computing requirements for Run 3 and Run 4 (HL-LHC) over the course of the next 10 years show a considerable gap between required and available resources, assuming budgets will globally remain flat at best. This will require some radical changes to the computing models for the data processing of the LHC experiments. Concentrating computational resources in fewer, larger and more efficient centres should increase the cost-efficiency of the operation and, thus, of the data processing. Large scale general purpose HPC centres could play a crucial role in such a model. We report on the technical challenges and solutions adopted to enable the processing of the ATLAS experiment data on the European flagship HPC Piz Daint at CSCS, now acting as a pledged WLCG Tier-2 centre. As the transition of the Tier-2 from classic to HPC resources has been finalised, we also report on performance figures over two years of production running and on efforts for a deeper integration of the HPC resource within the ATLAS computing framework at different tiers.
Speaker: Francesco Giovanni Sciacca (Universitaet Bern (CH))
ATLAS-Daint-CHEP2019.pdf
Plenary: Computational astrophysics / Diversity Hall G
Convener: James Zanotti (University of Adelaide)
Square Kilometer Array computing - from SDP to SRC 30m
Speaker: Minh Huynh
SKA_SDP_SRC_CHEP_Huynh_upload.pdf SKA_SDP_SRC_CHEP_Huynh_upload.pptx
Evaluating Rucio outside ATLAS: Common experiences from Belle II, CMS, DUNE, SKA, and LIGO 30m
For many scientific projects, data management is an increasingly complicated challenge. The number of data-intensive instruments generating unprecedented volumes of data is growing and their accompanying workflows are becoming more complex. Their storage and computing resources are heterogeneous and are distributed at numerous geographical locations belonging to different administrative domains and organizations. These locations do not necessarily coincide with the places where data is produced nor where data is stored, analyzed by researchers, or archived for safe long-term storage. To fulfill these needs, the data management system Rucio has been developed to allow the high-energy physics experiment ATLAS to manage its large volumes of data in an efficient and scalable way.
But ATLAS is not alone, and several diverse scientific projects have started evaluating, adopting, and adapting the Rucio system for their own needs. As the Rucio community has grown, many improvements have been introduced, customisations have been added, and many bugs have been fixed. Additionally, new dataflows have been investigated and operational experiences have been documented. In this article we collect and compare the common successes, pitfalls, and oddities which arose in the evaluation efforts of multiple diverse experiments, and compare them with the ATLAS experience. This includes the high-energy physics experiments CMS and Belle II, the neutrino experiment DUNE, as well as the LIGO and SKA astronomical observatories.
Speaker: Mario Lassnig (CERN)
Rucio CHEP19.pdf
Alert systems, from astrophysics experiments to telescopes 30m
Speaker: Mansi Kasliwal (California Institute of Technology)
Kasliwal_Plenary_Adelaide_CHEP2019_v3.pdf Kasliwal_Plenary_Adelaide_CHEP2019_v3.pptx
Diversity & Inclusivity 30m
Speaker: Lyn Beazley
Gaining diversity the short and long games.pptx
Diversity & Inclusivity at CHEP 2019 10m
Speaker: Waseem Kamleh (University of Adelaide)
Diversity & Inclusivity - Events.pdf
Social Event: Welcome Reception Foyer F
Tuesday, 5 November
Plenary: Real-time computing / Future strategy Hall G
Convener: Xiaomei Zhang (Chinese Academy of Sciences (CN))
Real-time data analysis model at the LHC and connections to other experiments / fields 30m
Speaker: Arantza De Oyanguren Campos (Univ. of Valencia and CSIC (ES))
Arantza_Oyanguren_RTA_CHEP19.pdf
ALICE continuous readout and data reduction strategy for Run3 30m
The ALICE experiment was originally designed as a relatively low-rate experiment, in particular given the limitations of the Time Projection Chamber (TPC) readout system using MWPCs. This will no longer be the case for LHC Run 3, scheduled to start in 2021.
After the LS2 upgrades, including a new silicon tracker and a GEM-based readout for the TPC, ALICE will operate at a peak Pb-Pb collision rate of 50 kHz.
To cope with this rate at least the TPC will be operated in continuous mode and all collisions will be read out, compressed and written to permanent storage without any trigger selection.
The First Level Processor (FLP) farm will receive continuous raw data at a rate of 3.4 TB/s and send 10-20 ms packets to the Event Processing Nodes (EPN) at a rate of about 640 GB/s.
The EPNs will perform data reconstruction and compression in a quasi-streaming fashion and will send the results for archival at a rate of 100 GB/s.
Here we present the details of this synchronous stage of ALICE data processing.
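The quoted rates can be cross-checked with a few lines of arithmetic, as in the sketch below, which simply combines the numbers given above (3.4 TB/s into the FLP layer, 10-20 ms packets at about 640 GB/s towards the EPNs, 100 GB/s to storage) into per-timeframe data volumes and overall reduction factors.

# Back-of-the-envelope check of the ALICE O2 data-flow numbers quoted above.
flp_in_tb_s = 3.4          # detector data into the FLP layer
epn_in_gb_s = 640.0        # FLP -> EPN after first-stage compression
storage_gb_s = 100.0       # EPN -> permanent storage
tf_length_ms = (10, 20)    # time-frame packet length

for ms in tf_length_ms:
    tf_gb = epn_in_gb_s * ms / 1000.0
    print(f"{ms:2d} ms time frame  -> ~{tf_gb:.0f} GB shipped to one EPN")

print(f"FLP compression  : ~{flp_in_tb_s * 1000 / epn_in_gb_s:.1f}x")
print(f"EPN compression  : ~{epn_in_gb_s / storage_gb_s:.1f}x")
print(f"overall reduction: ~{flp_in_tb_s * 1000 / storage_gb_s:.0f}x")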
Speaker: Ruben Shahoyan (CERN)
chep19_rs_final.pdf
Future of software and computing in the scope of the European Strategy for Particle Physics 30m
Speaker: Xinchou Lou (Chinese Academy of Sciences (CN))
CHEP2019-XCLOU.pdf CHEP2019-XCLOU.pptx
Track 1 – Online and Real-time Computing: Trigger farms and networks Riverbank R5
Convener: Steven Schramm (Universite de Geneve (CH))
Assessment of the ALICE O2 readout servers 15m
The ALICE Experiment at CERN LHC (Large Hadron Collider) is undertaking a major upgrade during LHC Long Shutdown 2 in 2019-2020. The raw data input from the detector will then increase a hundredfold, up to 3.4 TB/s. In order to cope with such a large throughput, a new Online-Offline computing system, called O2, will be deployed.
The FLP servers (First Layer Processor) are the readout nodes hosting the CRU (Common Readout Unit) cards in charge of transferring the data from the detector links to the computer memory. The data then flows through a chain of software components until it is shipped over network to the processing nodes.
In order to select a suitable platform for the FLP, it is essential that the hardware and the software are tested together. Each candidate server is therefore equipped with multiple readout cards (CRU), one InfiniBand 100G Host Channel Adapter, and the O2 readout software suite. A series of tests is then run to ensure the readout system is stable and fulfils the data throughput requirement of 42 Gb/s (the highest output data rate of an FLP equipped with 3 CRUs).
This paper presents the software and firmware features developed to evaluate and validate different candidates for the FLP servers. In particular we describe the data flow from the CRU firmware generating data, up to the network card where the buffers are sent over the network using RDMA. We also discuss the testing procedure and the results collected on different servers.
Speaker: Filippo Costa (CERN)
Assessment of the ALICE O2 readout servers_final.pdf Assessment of the ALICE O2 readout servers_final.pptx
CMS Event-Builder Performance on State-of-the-Art Hardware 15m
We report on performance measurements and optimizations of the event-builder software for the CMS experiment at the CERN Large Hadron Collider (LHC). The CMS event builder collects event fragments from several hundred sources. It assembles them into complete events that are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. We use a test system with 16 dual-socket Skylake-based computers interconnected with 100 Gbps Infiniband and Ethernet networks. The main challenge is the demanding message rate and memory performance required of the event-builder node to fully exploit the network capabilities. Each event-builder node has to process several TCP/IP streams from the detector backends at an aggregated bandwidth of 100 Gbps, distribute event fragments to other event-builder nodes at the first level trigger rate of 100 kHz, verify and build complete events using fragments received from all other nodes, and finally make the complete events available to the HLT processors. The achievable performance on today's hardware and different system architectures is described. Furthermore, we compare native Infiniband with RoCE (RDMA over Converged Ethernet). We discuss the required optimizations and highlight some of the problems encountered. We conclude with an outlook on the baseline CMS event-builder design for the LHC Run 3 starting in 2021.
Speaker: Remi Mommsen (Fermi National Accelerator Lab. (US))
Mommsen_EvBperformance.pdf
Results from the CBM mini-FLES Online Computing Cluster Demonstrator 15m
The Compressed Baryonic Matter (CBM) experiment is currently under construction at the GSI/FAIR accelerator facility in Darmstadt, Germany. In CBM, all event selection is performed in a large online processing system, the "First-level Event Selector" (FLES). The data are received from the self-triggered detectors at an input-stage computer farm designed for a data rate of 1 TByte/s. The distributed input interface will be realized using custom FPGA-based PCIe add-on cards, which preprocess and index the incoming data streams. The data is then transferred to an online processing cluster of several hundred nodes, which will be located in the shared Green-IT data center on campus.
Employing a time-based container data format to decontextualize the time-stamped signal messages from the detectors, data segments of specific time intervals can be distributed on the farm and processed independently. Timeslice building, the continuous process of collecting the data of a time interval simultaneously from all detectors, places a high load on the network and requires careful scheduling and management. Optimizing the design of the online data management includes minimizing copy operations of data in memory, using DMA/RDMA wherever possible, reducing data interdependencies, and employing large memory buffers to limit the critical network transaction rate.
As a demonstrator for the future FLES system, the mini-FLES system has been set up and is currently in operation at the GSI/FAIR facility. Designed as a vertical slice of the full system, it contains a fraction of all foreseen components. It is used to verify the developed hardware and software architecture and includes an initial version of a FLES control system. As part of the mini-CBM experiment of FAIR Phase-0, it is also the central data acquisition and online monitoring system of a multi-detector setup for physics data taking. This presentation will give an overview of the mini-FLES system of the CBM experiment and discuss its performance. The presented material includes latest results from operation in several recent mini-CBM campaigns at the GSI/FAIR SIS18.
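The following sketch illustrates timeslice building in its simplest form: each detector input delivers a stream of time-stamped data segments, and the builder gathers, from every input, all segments falling in a given time interval into one independently processable timeslice. The names, data layout and interval length are illustrative and ignore details such as overlap regions and streaming operation.

from collections import defaultdict

TS_LENGTH = 1000  # timeslice length in arbitrary time units (illustrative)

def build_timeslices(streams):
    """streams: {input_name: [(t_start, payload), ...]} sorted by t_start.
    Returns {timeslice_index: {input_name: [payload, ...]}}."""
    timeslices = defaultdict(lambda: defaultdict(list))
    for name, segments in streams.items():
        for t_start, payload in segments:
            timeslices[t_start // TS_LENGTH][name].append(payload)
    return timeslices

if __name__ == "__main__":
    streams = {
        "sts": [(t, f"sts-{t}") for t in range(0, 3000, 250)],
        "tof": [(t, f"tof-{t}") for t in range(0, 3000, 400)],
    }
    for index, contributions in sorted(build_timeslices(streams).items()):
        counts = {k: len(v) for k, v in contributions.items()}
        print(f"timeslice {index}: {counts}")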
Speaker: Jan de Cuveland (Johann-Wolfgang-Goethe Univ. (DE))
2019-11-05_CHEP_FLES_02.pdf
Performance of Belle II High Level Trigger in the First Physics Run 15m
The Belle II experiment is a new generation B-factory experiment at KEK in Japan aiming at the search for New Physics in a huge sample of B-meson decays. The commissioning of the accelerator and detector for the first physics run started in March this year. The Belle II High Level Trigger (HLT) is fully working in the beam run. The HLT is currently operated with 1600 cores clustered in 5 units of 16 processing servers, which is 1/4 of the full configuration.
In each unit, event-by-event parallel processing is implemented using IPC-based ring buffers with event transport over network socket connections. Load balancing is automatically ensured by the ring buffers. In each processing server with 20 cores, parallel processing is implemented using a multi-process approach to run the same code for different events without requiring special care. The copy-on-write fork of processes efficiently reduces the memory consumption.
The event selection is done using the same offline code in two steps. The first is the track finding and the calorimeter clustering, and the rough event selection is performed to discard off-vertex background events efficiently. The full event reconstruction is performed for the selected events and they are classified in multiple categories. Only the events in the categories of interest are finally sent out to the storage. The live data quality monitoring is also performed on HLT.
For the selected events, the reconstructed tracks are extrapolated to the surface of the pixel detector (PXD) and fed back live to the readout electronics for real-time data reduction by sending only the associated hits.
Accelerator studies to increase the luminosity are still ongoing, and physics data taking is performed in the time shared with them. During data taking, the background rate sometimes becomes high and the L1 trigger rate reaches close to 10 kHz, which is 1/3 of the maximum design rate. Under these conditions, the performance of the Belle II HLT is discussed, with a detailed report on the various problems encountered and their fixes.
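The copy-on-write, multi-process pattern described above can be illustrated with the standard-library sketch below (Linux-oriented, since it relies on fork): a large read-only structure is set up once, worker processes are forked so that they share it, and events are distributed through a queue that naturally balances the load. This is a generic illustration, not the actual HLT code.

import multiprocessing as mp

CALIBRATION = list(range(1_000_000))   # large read-only data, shared via fork

def worker(in_queue, out_queue):
    """Process events until a None sentinel arrives (same code in every process)."""
    while True:
        event = in_queue.get()
        if event is None:
            break
        # toy "reconstruction": look up a constant and apply a selection
        passed = (event + CALIBRATION[event % len(CALIBRATION)]) % 3 == 0
        out_queue.put((event, passed))

if __name__ == "__main__":
    mp.set_start_method("fork")          # copy-on-write sharing of CALIBRATION
    in_q, out_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(in_q, out_q)) for _ in range(4)]
    for p in procs:
        p.start()
    n_events = 1000
    for event in range(n_events):
        in_q.put(event)
    for _ in procs:
        in_q.put(None)                   # one sentinel per worker
    selected = sum(1 for _ in range(n_events) if out_q.get()[1])
    for p in procs:
        p.join()
    print(f"selected {selected} of {n_events} events")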
Speaker: Prof. Ryosuke Itoh (KEK)
Belle2-HLT-Itoh.pdf
Design of the data distribution network for the ALICE Online-Offline (O2) facility 15m
ALICE (A Large Ion Collider Experiment), one of the large LHC experiments, is currently undergoing a significant upgrade. Increase in data rates planned for LHC Run3, together with triggerless continuous readout operation, requires a new type of networking and data processing infrastructure.
The new ALICE O2 (online-offline) computing facility consists of two types of nodes: First Level Processors (FLP), containing custom PCIe cards to receive data from the detectors, and Event Processing Nodes (EPN), compute-dense nodes equipped with GPGPUs for fast online data compression. The FLPs first buffer the detector data for a time interval into SubTimeFrame (STF) objects. All corresponding STFs from the FLPs are then aggregated into a TimeFrame (TF) object, located on a designated EPN node where it can be processed. The data distribution network connects FLP and EPN nodes, enabling efficient TimeFrame aggregation and providing a high quality of service.
We present design details of the data distribution network tailored to the requirements of the ALICE O2 facility based on the InfiniBand HDR technology. Further, we will show a scheduling algorithm for TimeFrame distribution from FLP to EPN nodes, which evenly utilizes all available processing capacity and avoids creating long-term network congestion.
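A simplified version of such a scheduling policy is sketched below: each new TimeFrame is assigned to the EPN with the smallest amount of outstanding work, which spreads the load evenly across nodes. The node names and the load measure are illustrative, and the sketch ignores the fact that real EPN load also decreases as processing completes.

import heapq
import random

def schedule_timeframes(timeframe_sizes_gb, epn_names):
    """Greedy least-loaded assignment of TimeFrames to EPN nodes."""
    # heap of (outstanding_gb, epn_name); heapq keeps the least-loaded on top
    load = [(0.0, name) for name in epn_names]
    heapq.heapify(load)
    assignment = []
    for tf_id, size_gb in enumerate(timeframe_sizes_gb):
        outstanding, name = heapq.heappop(load)
        assignment.append((tf_id, name))
        heapq.heappush(load, (outstanding + size_gb, name))
    return assignment, sorted(load, key=lambda x: x[1])

if __name__ == "__main__":
    random.seed(0)
    sizes = [random.uniform(8.0, 12.0) for _ in range(20)]   # ~10 GB TimeFrames
    assignment, final_load = schedule_timeframes(sizes, [f"epn{i:02d}" for i in range(4)])
    for tf_id, name in assignment[:5]:
        print(f"TF {tf_id} -> {name}")
    for outstanding, name in final_load:
        print(f"{name}: {outstanding:.1f} GB outstanding")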
Speaker: Gvozden Neskovic (Johann-Wolfgang-Goethe Univ. (DE))
2019-11-05_CHEP19_rev1.pdf
Network simulation of a 40 MHz event building system for the LHCb experiment 15m
The LHCb experiment will be upgraded in 2021 and a new trigger-less readout system will be implemented. In the upgraded system, both event building (EB) and event selection will be performed in software for every collision produced in every bunch-crossing of the LHC. In order to transport the full data rate of 32 Tb/s we will use state of the art off-the-shelf network technologies, e.g. InfiniBand EDR.
The full event building system will require around 500 nodes interconnected via a non-blocking topology; because of the size of the system, it is very difficult to test at production scale before the actual procurement. We therefore resort to network simulations as a powerful tool for finding the optimal configuration. We developed an accurate low-level description of an InfiniBand-based network with event-building-like traffic.
We will present a full scale simulation of a possible implementation of the LHCb EB network.
Speaker: Flavio Pisani (Universita e INFN, Bologna (IT))
fpisani_chep2019.pdf
Track 2 – Offline Computing: G4 and simulation frameworks Riverbank R6
Convener: Chris Pinkenburg
Geant4 electromagnetic physics progress 15m
The Geant4 electromagnetic (EM) physics sub-packages are an important component of LHC experiment simulations. During LHC Long Shutdown 2 these packages are under intensive development, and in this work we report progress for the new Geant4 version 10.6. These developments include modifications that speed up EM physics computations, improve EM models, extend the set of models, and extend the validation of EM physics. Results of EM tests and benchmarks will be discussed in detail.
Speaker: Ivana Hrivnacova (Centre National de la Recherche Scientifique (FR))
G4EmCHEP2019_v6.pdf
Dealing with High Background Rates in Simulations of the STAR Heavy Flavor Tracker 15m
The STAR Heavy Flavor Tracker (HFT) has enabled a rich physics program, providing important insights into heavy quark behavior in heavy ion collisions. Acquiring data during the 2014 through 2016 runs at the Relativistic Heavy Ion Collider (RHIC), the HFT consisted of four layers of precision silicon sensors. Used in concert with the Time Projection Chamber (TPC), the HFT enables the reconstruction and topological identification of tracks arising from charmed hadron decays. The ultimate understanding of the detector efficiency and resolution demands large quantities of high quality simulations, accounting for the precise alignment of sensors, and the detailed response of the detectors and electronics to the incident tracks. The background environment presented additional challenges, as simulating the significant rates from pileup events accumulated during the long integration times of the tracking detectors could have quickly exceeded the available computational resources, and the relative contributions from different sources were unknown. STAR has long addressed these issues by embedding simulations into background events directly sampled during data taking at the experiment. This technique has the advantage of providing a completely realistic picture of the dynamic background environment while introducing minimal additional computational overhead compared to simulation of the primary collision alone, thus scaling to any luminosity. We will discuss how STAR has applied this technique to the simulation of the HFT, and will show how the careful consideration of misalignment of precision detectors and calibration uncertainties results in the detailed reproduction of basic observables, such as track projection to the primary vertex. We will further summarize the experience and lessons learned in applying these techniques to heavy-flavor simulations and discuss recent results.
Speaker: Jason Webb (Brookhaven National Lab)
CHEP-2019-STAR-Embedding-Final.pdf CHEP-2019-STAR-Embedding.pdf
A VecGeom navigator plugin for Geant4 15m
VecGeom is a geometry modeller library with hit-detection features as needed by particle detector simulation at the LHC and beyond. It was incubated by a Geant R&D initiative, motivated by the goal of combining the code of Geant4 and ROOT/TGeo into a single, more maintainable piece of software within the EU-AIDA program.
So far, VecGeom has mainly been used by LHC experiments as a geometry primitive library called from Geant4, where it has had a very positive impact on CPU time due to its faster algorithms for complex primitives.
In this contribution, we turn to a discussion of how VecGeom can be used as the navigation library in Geant4 in order to benefit from both the fast geometry primitives and its vectorized navigation module. We investigate whether this integration provides the expected speed improvements, on top of those obtained from the geometry primitives. We discuss and benchmark the application of such a VecGeom navigator plugin to Geant4 for the ALICE use case and show paths towards usage by other experiments.
Lastly, an update on the general developments of VecGeom is given.
This includes a review of how developments in VecGeom can further benefit from interfacing with external ray-tracing kernels such as Intel-Embree.
Speaker: Sandro Christian Wenzel (CERN)
CHEP19_Adelaide_SWENZEL_final.pdf
Integration and Performance of New Technologies in the CMS Simulation 15m
The HL-LHC and the corresponding detector upgrades for the CMS experiment will present extreme challenges for the full simulation. In particular, increased precision in models of physics processes may be required for accurate reproduction of particle shower measurements from the upcoming High Granularity Calorimeter. The CPU performance impacts of several proposed physics models will be discussed. There are several ongoing research and development efforts to make efficient use of new computing architectures and high performance computing systems for simulation. The integration of these new R&D products in the CMS software framework and corresponding CPU performance improvements will be presented.
Speaker: Kevin Pedro (Fermi National Accelerator Lab. (US))
CMS simulation technology CHEP2019.pdf
Status of JUNO simulation software 15m
The JUNO (Jiangmen Underground Neutrino Observatory) experiment is a multi-purpose neutrino experiment designed to determine the neutrino mass hierarchy and precisely measure oscillation parameters. It is composed of a 20 kton liquid scintillator central detector equipped with 18000 20'' PMTs and 25000 3'' PMTs, a water pool with 2000 20'' PMTs, and a top tracker. Monte Carlo simulation is a fundamental tool for optimizing the detector design, tuning reconstruction algorithms, and performing physics studies. The status of the JUNO simulation software will be presented, including the generator interface, detector geometry, physics processes, MC truth, pull-mode electronics simulation and background mixing. This contribution will also present the latest updates of the JUNO simulation software, including the upgrade of Geant4 from 9.4 to 10.4 and a comparison of their performance. The previous electronics simulation algorithm only worked for the central detector; a new electronics simulation package has been designed to enable joint simulation of all sub-detectors by using the Task/sub-Task/Algorithm/Tool scheme provided by the SNiPER framework. The full simulation of optical photons in a large liquid scintillator volume is CPU intensive, especially for cosmic muons, atmospheric neutrinos and proton decay events. For proton decay, users are only interested in events with an energy deposition between 100 MeV and 700 MeV, at least one Michel electron, and a Michel electron energy larger than 10 MeV. Only 10% of the simulated proton decay events meet these requirements. We improved the simulation procedure to run the full optical photon simulation only on pre-selected events, saving a large amount of computing resources. A pre-simulation without optical photon simulation is carried out first, with all the Geant4 steps and other necessary MC truth information saved. A pre-selection based on the MC truth information then determines which events need to be fully simulated with optical photon processes activated, using the saved G4Steps as input. This simulation procedure and the related interfaces can also be used for MPI- or GPU-based full simulation of muon events.
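The proton-decay pre-selection described above can be expressed as a simple predicate on the MC truth of the pre-simulation, along the lines of the sketch below; the event-record layout is invented for illustration, while the cut values (100-700 MeV deposited energy, at least one Michel electron above 10 MeV) are the ones quoted in this abstract.

def needs_full_simulation(event):
    """Decide from pre-simulation MC truth whether to run optical photons.
    The `event` layout is illustrative: deposited energy in MeV and a list of
    Michel-electron energies in MeV."""
    e_dep = event["edep_mev"]
    michel = event["michel_electron_energies_mev"]
    return (100.0 <= e_dep <= 700.0
            and len(michel) > 0
            and max(michel) > 10.0)

if __name__ == "__main__":
    events = [
        {"edep_mev": 450.0, "michel_electron_energies_mev": [35.0]},   # keep
        {"edep_mev": 450.0, "michel_electron_energies_mev": []},       # drop
        {"edep_mev": 950.0, "michel_electron_energies_mev": [35.0]},   # drop
        {"edep_mev": 300.0, "michel_electron_energies_mev": [5.0]},    # drop
    ]
    selected = [i for i, ev in enumerate(events) if needs_full_simulation(ev)]
    print(f"full optical simulation for events {selected} "
          f"({len(selected)}/{len(events)})")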
Speaker: Dr Ziyan Deng (Institute of High Energy Physics)
dengzy_JUNO_detsim_CHEP2019.pdf
System simulations for the ALICE ITS detector upgrade 15m
The ALICE experiment at the CERN LHC will feature several upgrades for run 3, one of which is a new inner tracking system (ITS). The ITS upgrade is currently under development and commissioning. The new ITS will be installed during the ongoing long shutdown 2.
The specification for the ITS upgrade calls for event rates of up to 100 kHz for Pb-Pb, and 400 kHz pp, which is two orders of magnitude higher than the existing system. The seven layers of ALPIDE pixel sensor chips significantly improve tracking with a total of 24120 pixel chips. This is a vast improvement over the existing inner tracker with six layers, of which only the two innermost layers were pixel sensors.
A number of factors will have an impact on the performance and readout efficiency of the upgraded ITS in Run 3. These are not limited to operating conditions such as run type and event rate; a number of sensor configuration parameters also have an effect, for instance the strobe length and the choice of sensor operating mode (triggered or continuous).
To that end we have developed a simplified simulation model of the readout hardware in the ALPIDE and ITS, using the SystemC library for system level modeling in C++. This simulation model is three orders of magnitude faster than a normal HDL simulation of the chip, and facilitates simulations of an increased number of events for a large portion of the detector.
In this paper we present simulation results where we have been able to quantify detector performance under different running conditions. The results are used for system configuration as well as ongoing development of the readout electronics.
Speaker: Simon Voigt Nesbo (Western Norway University of Applied Sciences (NO))
alice_its_simulations.pdf
Track 3 – Middleware and Distributed Computing: HPCs and Related Services Riverbank R3
Convener: James Letts (Univ. of California San Diego (US))
Harnessing the power of supercomputers using the PanDA Pilot 2 in the ATLAS Experiment 15m
The unprecedented computing resource needs of the ATLAS experiment have motivated the Collaboration to become a leader in exploiting High Performance Computers (HPCs). To meet the requirements of HPCs, the PanDA system has been equipped with two new components; Pilot 2 and Harvester, that were designed with HPCs in mind. While Harvester is a resource-facing service which provides resource provisioning and workload shaping, Pilot 2 is responsible for payload execution on the resource.
The presentation focuses on Pilot 2, which is a complete rewrite of the original PanDA Pilot used by ATLAS and other experiments for well over a decade. Pilot 2 has a flexible and adaptive design that allows for plugins to be defined with streamlined workflows. In particular, it has plugins for specific hardware infrastructures (HPC/GPU clusters) as well as for dedicated workflows defined by the needs of an experiment.
Examples of dedicated HPC workflows are discussed, in which the Pilot either uses an MPI application for fine-grained event-level processing under the control of the Harvester service, or acts as an MPI application itself and runs a set of jobs as an ensemble.
In addition to describing the technical details of these workflows, results are shown from its deployment on Cori (NERSC), Theta (ALCF), Titan and Summit (OLCF).
Speaker: Paul Nilsson (Brookhaven National Laboratory (US))
Nilsson - Harnessing the power of supercomputers using the PanDA Pilot 2 in the ATLAS Experiment - CHEP 2019 slides.pdf
IceProd Supercomputer Mode: How IceCube Production Runs on Firewalled Clusters 15m
For the past several years, IceCube has embraced a central, global overlay grid of HTCondor glideins to run jobs. With guaranteed network connectivity, the jobs themselves transferred data files, software, logs, and status messages. Then we were given access to a supercomputer, with no worker node internet access. As the push towards HPC increased, we had access to several of these machines, but no easy way to use them. So we went back to the basics of running production jobs, staging data in and out and running offline jobs on the local queue. But we made sure it still integrated directly with our dataset management and file metadata systems, to not lose everything we had gained in recent years.
Speaker: David Schultz (University of Wisconsin-Madison)
IceProd Supercomputer.pdf
Scheduling, deploying and monitoring 100 million tasks 15m
The SKA will enable the production of full polarisation spectral line cubes at a very high spatial and spectral resolution. Performing a back-of-the-envelope estimate gives you the incredible amount of around 75-100 million tasks to run in parallel to perform a state-of-the-art faceting algorithm (assuming that it would spawn off just one task per facet, which is not the case). This simple estimate formed the basis of the development of a prototype, which had scalability as THE primary requirement. In this talk I will present the current status of the DALiuGE system, including some really exciting computer science research.
Speaker: Prof. Andreas Wicenec (International Centre of Radio Astronomy Research)
100 Million CHEP.pdf 100 Million CHEP.pptx 100 Million Tasks
Managing the ATLAS Grid through Harvester 15m
ATLAS Computing Management has identified the migration of all resources to Harvester, PanDA's new workload submission engine, as a critical milestone for Run 3 and 4. This contribution will focus on the Grid migration to Harvester.
We have built a redundant architecture based on CERN IT's common offerings (e.g. Openstack Virtual Machines and Database on Demand) to run the necessary Harvester and HTCondor services, capable of sustaining the load of O(1M) workers on the grid per day.
We have reviewed the ATLAS Grid region by region and moved as much as possible away from blind worker submission, where multiple queues (e.g. single core, multi core, high memory) compete for resources on a site. Instead we have migrated towards more intelligent models that use information and priorities from the central PanDA workload management system and stream the right number of workers of each category to a unified queue, while keeping late binding to the jobs.
We will also describe our enhanced monitoring and analytics framework. Worker and job information is synchronized with minimal delays to a CERN IT provided Elastic Search repository, where we can interact with dashboards to follow submission progress, discover site issues (e.g. broken Compute Elements) or spot empty workers.
The result is a much more efficient usage of the Grid resources with smart, built-in monitoring of resources.
Speaker: Fernando Harald Barreiro Megino (University of Texas at Arlington)
Harvester CHEP 2019 (6).pdf
Nordugrid ARC cache: Efficiency gains on HPC and cloud resources 15m
The WLCG today comprises a range of different types of resources such as cloud centers, large and small HPC centers, volunteer computing as well as the traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the Nordugrid ARC CE, which is non-intrusive and lightweight, originally developed to cater for HPC centers where no middleware could be installed on the compute nodes. The NT1 runs ARC in the Nordugrid mode which, contrary to the pilot mode, leaves job data transfers up to ARC. ARC's data transfer capabilities together with the ARC cache are its most important features.
HPCs are receiving increased interest within the WLCG, and so are cloud resources. With the ARC CE as an edge service to the cloud or HPC resource, all data transfers required by a job are downloaded by data transfer nodes on the edge of the cluster before the job starts running on the compute node. This ensures a highly efficient use of the compute nodes' CPUs, as the job starts immediately after reaching the compute node, compared to the traditional pilot model where the pilot job on the compute node is responsible for fetching the data. In addition, the ARC cache gives a possible several-fold gain if more jobs need the same data. ARC's data handling capabilities ensure very efficient data access for the jobs, even more so at HPC centers with their fast interconnects.
In this presentation we will describe the Nordugrid model with the ARC-CE as an edge service to an HPC or cloud resource and show the gain in efficiency this model provides compared to the pilot model.
Speaker: Maiken Pedersen (University of Oslo (NO))
Maiken_Pedersen_CHEP-2019.pdf Maiken_Pedersen_CHEP-2019.pptx
Reusing distributed computing software and patterns for midscale collaborative science 15m
Many of the challenges faced by the LHC experiments (aggregation of distributed computing resources, management of data across multiple storage facilities, integration of experiment-specific workflow management tools across multiple grid services) are similarly experienced by "midscale" high energy physics and astrophysics experiments, particularly as their data set volumes are increasing at comparable rates. Often these (international, multi-institution) collaborations have outgrown the computing resources offered by their home laboratories, or the capacities of any single member institution. Unlike the LHC experiments, however, these collaborations often lack the manpower required to build, integrate and operate the systems required to meet their scale. In the Open Science Grid, we have organized a team designed to support collaborative science organizations re-use proven software and patterns in distributed processing and data management, often but not restricted to software developed for the LHC. Examples are re-use of the Rucio and FTS3 software for reliable data transfer and management, XRootD for data access and caching, Ceph for large scale pre-processing storage, and Pegasus for workflow management across heterogeneous resources. We summarize experience with the VERITAS gamma ray observatory, the South Pole Telescope (CMB detector), and the XENON dark matter search experiment.
Speaker: Benedikt Riedel (University of Wisconsin-Madison)
Reusing distributed computing software and patterns for midscale collaborative science.pdf Reusing distributed computing software and patterns for midscale collaborative science.pptx
Track 4 – Data Organisation, Management and Access: Data transfer, data access and storage QoS Riverbank R8
Convener: Tigran Mkrtchyan (DESY)
Modernizing Third-Party-Copy Transfers in WLCG 15m
The "Third Party Copy" (TPC) Working Group in the WLCG's "Data Organization, Management, and Access" (DOMA) activity was proposed during a CHEP 2018 Birds of a Feather session in order to help organize the work toward developing alternatives to the GridFTP protocol. Alternate protocols enable the community to diversify; explore new approaches such as alternate authorization mechanisms; and reduce the risk due to the retirement of the Globus Toolkit, which provides a commonly used GridFTP protocol implementation.
Two alternatives were proposed to the TPC group for investigation: WebDAV and XRootD. Each approach has multiple implementations, allowing us to demonstrate interoperability between distinct storage systems. As the working group took as a mandate the development of alternatives - and not to select a single protocol - we have put together a program of work allowing both to flourish. This includes community infrastructure such as documentation pointers, email lists, or biweekly meetings, as well as continuous interoperability testing involving production & test endpoints, deployment recipes, scale testing, and debugging assistance.
Each major storage system utilized by WLCG sites now has at least one functional non-GridFTP protocol for performing third-party-copy. The working group is focusing on including a wider set of sites and helping sites deploy more production endpoints. We are interacting with WLCG VOs to perform production data transfers using WebDAV or XRootD at the participating sites with the objective that all sites deploy at least one of these alternative protocols.
Speaker: Alessandra Forti (University of Manchester (GB))
20191105_chep19_tpc-v1.pdf
Third-party transfers in WLCG using HTTP 15m
Since its earliest days, the Worldwide LHC Computing Grid (WLCG) has relied on GridFTP to transfer data between sites. The announcement that Globus is dropping support of its open source Globus Toolkit (GT), which forms the basis for several FTP clients and servers, has created an opportunity to reevaluate the use of FTP. HTTP-TPC, an extension to HTTP compatible with WebDAV, has arisen as a strong contender for an alternative approach.
In this paper, we describe the HTTP-TPC protocol itself, along with the current status of its support in different implementations, and the interoperability testing done within the WLCG DOMA working group's TPC activity. This protocol also provides the first real use-case for token-based authorisation. We will demonstrate the benefits of such authorisation by showing how it allows HTTP-TPC to support new technologies (such as OAuth, OpenID Connect, Macaroons and SciTokens) without changing the protocol. We will also discuss the next steps for HTTP-TPC, improving documentation and plans to use the protocol for WLCG transfers.
Speaker: Brian Paul Bockelman (University of Nebraska Lincoln (US))
CHEP19-HTTP-TPC.pdf
Xrootd Third Party Copy for the WLCG and HL-LHC 15m
A Third Party Copy (TPC) has existed in the pure XRootD storage environment for many years. However using XRootD TPC in the WLCG environment presents additional challenges due to the diversity of the storage systems involved such as EOS, dCache, DPM and ECHO, requiring that we carefully navigate the unique constraints imposed by these storage systems and their site-specific environments through customized configuration and software development. To support multi-tenant setups seen at many WLCG sites, X509 based authentication and authorization in XRootD was significantly improved to meet both security and functionality requirements. This paper presents architecture of the pull based TPC with optional X509 credential delegation and how it is implemented in native XRootD and dCache. The paper discusses technical requirements, challenges, design choices and implementation details in the WLCG storage systems, as well as in FTS/gfal2. It also outlines XRootD's plan to support newer TPC and security models such as token based authorization.
Speaker: Wei Yang (SLAC National Accelerator Laboratory (US))
Xrootd Thirty Party Copy for WLCG and HL-LHC.pdf
Quality of Service (QoS) for cost-effective storage and improved performance 15m
The anticipated increase in storage requirements for the forthcoming HL-LHC data rates is not matched by a corresponding increase in budget. This results in a shortfall in available resources if the computing models remain unchanged. Therefore, effort is being invested in looking for new and innovative ways to optimise the current infrastructure, so minimising the impact of this shortfall.
In this paper, we describe an R&D effort targeting "Quality of Service" (QoS), as a working group within the WLCG DOMA activity. The QoS approach aims to reduce the impact of the shortfalls, and involves developing a mechanism that both allows sites to reduce the cost of their storage hardware, with a corresponding increase in storage capacity, while also supporting innovative deployments with radically reduced cost or improved performance.
We describe the strategy this group is developing to support these innovations, along with the current status and plans for the future.
WLCG QoS CHEP19.pdf
A distributed R&D storage platform implementing quality of service 15m
Optimization of computing resources, in particular storage, the costliest one, is a tremendous challenge for the High Luminosity LHC (HL-LHC) program. Several avenues are being investigated to address the storage issues foreseen for HL-LHC. Our expectation is that savings can be achieved in two primary areas: optimization of the use of various storage types and reduction of the manpower required to operate the storage.
We will describe our work, done in the context of the WLCG DOMA project, to prototype, deploy and operate an at-scale research storage platform to better understand the opportunities and challenges for the HL-LHC era. Our multi-VO platform includes several storage technologies, from highly performant SSDs to low-end disk storage and tape archives, all coordinated by the use of dCache. It is distributed over several major sites in the US (AGLT2, BNL, FNAL & MWT2), which are several tens of msec RTT apart, with one extreme leg over the Atlantic at DESY to test extreme latencies. As a common definition of QoS attributes characterizing storage systems in HEP has not yet been agreed, we are using this research platform to experiment with several of them, e.g. number of copies, availability, reliability, throughput, iops and latency.
The platform provides a unique tool to explore the technical boundaries of the 'data-lake' concept and its potential savings in storage and operations costs.
We will conclude with a summary of our lessons learned and where we intend to go with our next steps.
Speaker: Patrick Fuhrmann
CHEP-RandD-DataLake-v2.pdf CHEP-RandD-DataLake-v2.pptx
The Quest to solve the HL-LHC data access puzzle. The first year of the DOMA ACCESS Working Group. 15m
HL-LHC will confront the WLCG community with enormous data storage, management and access challenges. These are as much technical as economic. In the WLCG-DOMA Access working group, members of the experiments and site managers have explored different models for data access and storage strategies to reduce cost and complexity, taking into account the boundary conditions given by our community.
Several of these scenarios have been studied quantitatively, such as the datalake model and incremental improvements of the current computing model with respect to resource needs, costs and operational complexity.
To better understand these models in depth, analysis of traces of current data accesses and simulations of the impact of new concepts have been carried out. In parallel, evaluations of the required technologies took place. These were done in testbed and production environments at small and large scale.
We will give an overview of the activities and results of the working group, describe the models and summarise the results of the technology evaluation focusing on the impact of storage consolidation in the form of datalakes, where the use of read-ahead caches (XCache) has emerged as a successful approach to reduce the impact of latency and bandwidth limitation.
We will describe the experience and evaluation of these approaches in different environments and usage scenarios. In addition we will present the results of the analysis and modelling efforts based on data access traces of experiments.
Speaker: Xavier Espinal (CERN)
The Quest to solve the HL-LHC data access puzzle CHEP 2019.pdf
Track 5 – Software Development: Heterogenous computing and workflow managers Riverbank R2
GNA — high performance fitting for neutrino experiments 15m
GNA is a high performance fitter, designed to handle large-scale models with a large number of parameters. Following the data-flow paradigm, the model in GNA is built as a directed acyclic graph. Each node (transformation) of the graph represents a function that operates on vectorized data. A library of transformations, implementing various functions, is precompiled. The graph itself is assembled at runtime in Python and may be modified without recompilation.
High performance is achieved in several ways. The computational graph is lazily evaluated. The output data of each node is cached and recalculated only when required, i.e. when one of its parameters or inputs has changed. Transformations, subgraphs or the complete graph may be executed on a GPU, with data transferred lazily between CPU and GPU.
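To make the lazy-caching idea concrete, the following is a minimal Python sketch of a data-flow graph whose nodes recompute only when an upstream parameter or input has changed. It is a generic illustration, not the GNA API; all class, method and variable names are invented.

```python
import numpy as np

class Source:
    """Leaf node holding data or a parameter; bumping the version marks it changed."""
    def __init__(self, data):
        self._data, self.version = np.asarray(data, dtype=float), 0

    def set(self, data):
        self._data, self.version = np.asarray(data, dtype=float), self.version + 1

    def value(self):
        return self._data

class Transformation:
    """Node applying a function to its inputs, with a cached, lazily updated output."""
    def __init__(self, func, *inputs):
        self.func, self.inputs = func, inputs
        self.version, self._cache, self._deps = 0, None, None

    def value(self):
        vals = [n.value() for n in self.inputs]      # brings upstream nodes up to date
        deps = tuple(n.version for n in self.inputs)
        if deps != self._deps:                       # recompute only if something changed
            self._cache = self.func(*vals)
            self._deps, self.version = deps, self.version + 1
        return self._cache

rate = Source([1.0, 2.0, 3.0])
scale = Source(2.0)
spectrum = Transformation(lambda r, s: r * s, rate, scale)

print(spectrum.value())   # computed
print(spectrum.value())   # served from cache
scale.set(3.0)            # parameter change invalidates the downstream cache
print(spectrum.value())   # recomputed
```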
The description of the framework as well as practical examples from Daya Bay and JUNO experiments will be presented.
Speaker: Dr Maxim Gonchar (Joint Institute for Nuclear Research)
gonchar_chep2019_gna.pdf
NUMA-aware workflow management system 15m
Modern hardware is trending towards increasingly parallel and heterogeneous architectures. Contemporary machine processors are spread across multiple sockets, where each socket can access some system memory faster than the rest, creating non-uniform memory access (NUMA). Efficiently utilizing these NUMA machines is becoming increasingly important. This paper examines the latest Intel Skylake and Xeon Phi NUMA node architectures, indicating possible performance problems for multi-threaded data-processing applications due to the kernel thread migration (TM) mechanism, which is designed to optimize power consumption. We discuss the NUMA-aware CLARA workflow management system, which defines a proper level of vertical scaling and process affinity, associating CLARA worker threads with particular processor cores. By minimizing thread migration and context-switching cost among cores, we were able to improve data locality and reduce the cache-coherency traffic among the cores, resulting in sizable performance improvements.
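As an illustration of the kind of core pinning discussed here, the snippet below restricts a worker process to a fixed set of cores using the Linux scheduler affinity API. The core list for "NUMA node 0" is an assumption about the host and would normally be derived from the hardware topology; this is a sketch, not the CLARA implementation.

```python
import os

# Assumption: cores 0-3 belong to NUMA node 0 on this particular host.
NODE0_CPUS = {0, 1, 2, 3}

def pin_to_node0():
    # Linux-only: bind the calling process (pid 0 = self) to the chosen cores,
    # preventing the kernel from migrating its threads to other sockets.
    os.sched_setaffinity(0, NODE0_CPUS)

if __name__ == "__main__":
    pin_to_node0()
    print("worker restricted to CPUs:", sorted(os.sched_getaffinity(0)))
```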
Speaker: Vardan Gyurjyan (Jefferson Lab)
gyurjyan_chep19.pdf gyurjyan_chep19.pptx
Heterogeneous data-processing optimization with CLARA's adaptive workload orchestration 15m
The hardware landscape used in HEP and NP is changing from homogeneous multi-core systems towards heterogeneous systems with many different computing units, each with their own characteristics. To achieve maximum data-processing performance, the main challenge is to place the right computation on the right hardware.
In this paper we discuss CLAS12 charged particle tracking workload partitioning that allowed us to utilize both CPU and GPU to improve the performance. The tracking algorithm was decomposed into micro-services that are deployed on CPU and GPU processing units, where the best features of both are intelligently combined to achieve maximum performance. In this heterogeneous environment CLARA aims to match the requirements of each micro-service to the strengths of the CPU or GPU architecture. In addition, CLARA performs load balancing to minimize idle time for both processing units. However, a predefined execution of a micro-service on a CPU or a GPU may not be the most optimal solution, due to the streaming data-quantum size and the data-quantum transfer latency between CPU and GPU. We therefore trained the CLARA workflow orchestrator to dynamically assign micro-service execution to a CPU or a GPU, based on benchmark results analyzed over a period of real-time data processing.
Design Pattern for Analysis Automation on Interchangeable, Distributed Resources using Luigi Analysis Workflows 15m
In particle physics, workflow management systems are primarily used as tailored solutions in dedicated areas such as Monte Carlo production. However, physicists performing data analyses are usually required to steer their individual workflows manually which is time-consuming and often leads to undocumented relations between particular workloads.
We present the luigi analysis workflow (law) Python package which is based on the open-source pipelining tool luigi, originally developed by Spotify. It establishes a generic design pattern for analyses of arbitrary scale and complexity, and shifts the focus from executing to defining the analysis logic. Law provides the building blocks to seamlessly integrate with interchangeable remote resources without, however, limiting itself to a specific choice of infrastructure. In particular, it introduces the paradigm of complete separation between analysis algorithms on the one hand, and run locations, storage locations, and software environments on the other hand.
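The sketch below shows the underlying luigi pattern that law builds on: tasks declare their dependencies and outputs, and only the missing pieces of the dependency graph are executed. Task and file names are invented for illustration; law's remote targets, sandboxes and batch-submission layers are not shown.

```python
import luigi

class Skim(luigi.Task):
    dataset = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(f"skim_{self.dataset}.txt")

    def run(self):
        with self.output().open("w") as f:
            f.write(f"skimmed events of {self.dataset}\n")

class Histogram(luigi.Task):
    dataset = luigi.Parameter()

    def requires(self):
        # The dependency is declared, not executed: luigi runs Skim only if
        # its output is missing.
        return Skim(dataset=self.dataset)

    def output(self):
        return luigi.LocalTarget(f"hist_{self.dataset}.txt")

    def run(self):
        with self.input().open() as fin, self.output().open("w") as fout:
            fout.write("histogram built from: " + fin.read())

if __name__ == "__main__":
    luigi.build([Histogram(dataset="ttH")], local_scheduler=True)
```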
To cope with the sophisticated demands of end-to-end HEP analyses, law supports job execution on WLCG infrastructure (ARC, gLite) as well as on local computing clusters (HTCondor, LSF), remote file access via most common protocols through the Grid File Access Library (GFAL2), and an environment sandboxing mechanism with support for Docker and Singularity containers. Moreover, the novel approach ultimately aims for analysis preservation out-of-the-box.
Law is developed open-source and entirely experiment independent. It is successfully employed in ttH cross section measurements and searches for di-Higgs boson production with the CMS experiment.
Speaker: Marcel Rieger (CERN)
2019-11-05_Rieger_law.pdf
Raythena: a vertically integrated scheduler for ATLAS applications on heterogeneous distributed resources 15m
The ATLAS experiment has successfully integrated High-Performance Computing (HPC) resources in its production system. Unlike the current generation of HPC systems, and the LHC computing grid, the next generation of supercomputers is expected to be extremely heterogeneous in nature: different systems will have radically different architectures, and most of them will provide partitions optimized for different kinds of workloads. In this work we explore the applicability of concepts and tools realized in Ray (the high-performance distributed execution framework targeting large-scale machine learning applications) to ATLAS event throughput optimization on heterogeneous distributed resources, ranging from traditional grid clusters to Exascale computers.
We present a prototype of Raythena, a Ray-based implementation of the ATLAS Event Service (AES), a fine-grained event processing workflow aimed at improving the efficiency of ATLAS workflows on opportunistic resources, specifically HPCs. The AES is implemented as an event processing task farm that distributes packets of events to several worker processes running on multiple nodes. Each worker in the task farm runs an event-processing application (Athena) as a daemon. In Raythena we replaced the event task farm workers with stateful components of Ray called Actors, which process packets of events and return data processing results. In addition to stateful Actors, Raythena also utilizes stateless Tasks for merging intermediate outputs produced by the Actors. The whole system is orchestrated by Ray, which assigns work to Actors and Tasks in a distributed, possibly heterogeneous, environment.
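The following is a schematic sketch of the Actor/Task split described above, using the public Ray API: stateful actors process packets of events and a stateless task merges their partial outputs. The event payloads and the processing and merging logic are invented placeholders, not the actual Raythena or AES code.

```python
import ray

ray.init()

@ray.remote
class EventWorker:
    """Stateful worker: keeps running state, like a daemonized payload process."""
    def __init__(self):
        self.processed = 0

    def process(self, event_range):
        self.processed += len(event_range)
        return [e * 2 for e in event_range]     # stand-in for real event processing

@ray.remote
def merge(*partial_outputs):
    """Stateless task merging intermediate outputs produced by several actors."""
    return [x for part in partial_outputs for x in part]

workers = [EventWorker.remote() for _ in range(4)]
futures = [w.process.remote(list(range(i * 10, i * 10 + 10)))
           for i, w in enumerate(workers)]
merged = ray.get(merge.remote(*futures))        # Ray resolves the futures for us
print(len(merged), "events merged")
```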
The second thrust of this study is to use Raythena to schedule Gaudi Algorithms (the primary unit of work of ATLAS' Athena framework) across a set of heterogeneous nodes. For ease of testing, we have used the Gaudi execution flow simulator to run a production ATLAS reconstruction scenario consisting of 309 Algorithms, modeled by synthetic CPU burners constrained by the data dependencies, and run for the time duration of the original Algorithms. The Algorithms are wrapped in Ray Actors or Tasks, and communicate via the Ray Global Control Store. This approach allows the processing of a single event to be distributed across more than one node, a functionality currently not supported by the Athena framework. We will discuss Raythena features and performance as a scheduler for ATLAS workflows, comparing them to those offered by Athena.
For all its flexibility, the AES implementation currently consists of multiple separate layers that communicate through ad-hoc command-line and file-based interfaces. The goal of Raythena is to integrate these layers through a feature-rich, efficient application framework. Besides increasing usability and robustness, a vertically integrated scheduler will enable us to explore advanced concepts such as dynamic shaping of workflows to exploit currently available resources, particularly on heterogeneous systems.
Speaker: Miha Muskinja (Lawrence Berkeley National Lab. (US))
miham_2019_11_05_CHEP_Raythena.pdf
Computational workflow of the LZ dark matter detection experiment at NERSC 15m
High Energy Physics experiments face unique challenges when running their computation on High Performance Computing (HPC) resources. The LZ dark matter detection experiment has two data centers, one each in the US and UK, to perform computations. Its US data center uses the HPC resources at NERSC.
In this talk, I will describe the current computational workflow of the LZ experiment, detailing some of the challenges faced while making the transition from network distributed computing environments like PDSF to newer HPC resources like Cori, at NERSC.
Speaker: Dr Venkitesh Ayyar (Lawrence Berkeley National Lab)
LZ-chep_2019.pdf
Track 6 – Physics Analysis: Framework Hall G
Using Analysis Declarative Languages for the HL-LHC 15m
The increase in luminosity by a factor of 100 for the HL-LHC with respect to Run 1 poses a big challenge from the data analysis point of view. It demands a comparable improvement in software and processing infrastructure. The use of GPU enhanced supercomputers will increase the amount of computer power and analysis languages will have to be adapted to integrate them. The particle physics community has traditionally developed their own tools to analyze the data, usually creating dedicated ROOT-based data formats. However, there have been several attempts to explore the inclusion of new tools into the experiments data analysis workflow considering data formats not necessarily based on ROOT. Concepts and techniques include declarative languages to specify hierarchical data selection and transformation, cluster systems to manage processing the data, Machine Learning integration at the most basic levels, statistical analysis techniques, etc. This talk will provide an overview of the current efforts in the field, including efforts in traditional programming languages like C++, Python, and Go, and efforts that have invented their own languages like Root Data Frame, CutLang, ADL, coffea, and functional declarative languages. There is a tremendous amount of activity in this field right now, and this talk will attempt to summarize the current state of the field.
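As a flavour of the declarative style surveyed in this talk, the snippet below builds an RDataFrame analysis in which cuts, derived columns and histograms are declared first and the event loop runs lazily at the end. The input file name and branch names ("events.root", "Events", "Muon_pt") are hypothetical placeholders.

```python
import ROOT

# Declare the computation graph; nothing runs yet.
df = ROOT.RDataFrame("Events", "events.root")
h = (df.Filter("Muon_pt.size() >= 2", "at least two muons")
       .Define("leading_pt", "Muon_pt[0]")
       .Histo1D(("leading_pt", "Leading muon p_{T}", 50, 0.0, 200.0), "leading_pt"))

# Accessing the result triggers a single optimized event loop.
print("selected entries:", h.GetEntries())
```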
Speaker: Gordon Watts (University of Washington (US))
Declarative.pdf Declarative.pptx
Evolution of the ATLAS analysis model for Run-3 and prospects for HL-LHC 15m
With an increased dataset obtained during CERN LHC Run-2, the even larger forthcoming Run-3 data and more than an order of magnitude expected increase for HL-LHC, the ATLAS experiment is reaching the limits of the current data production model in terms of disk storage resources. The anticipated availability of an improved fast simulation will enable ATLAS to produce significantly larger Monte Carlo samples with the available CPU, which will then be limited by insufficient disk resources.
The ATLAS Analysis Model Study Group for Run-3 was setup at the end of Run-2. Its tasks have been to analyse the efficiency and suitability of the current analysis model and to propose significant improvements to it. The group has considered options allowing ATLAS to save, for the same data/MC sample, at least 30% disk space overall, and has given directions how significantly larger savings could be realised for the HL-LHC. Furthermore, recommendations have been suggested to harmonise the current stage of analysis across the collaboration. The group has now completed its work: key recommendations will be the new small sized analysis formats DAOD_PHYS and DAOD_PHYSLITE and the increased usage of a tape carousel mode in the centralized production of these formats. This talk will review the recommended ATLAS analysis model for Run-3 and its status of the implementation. It will also provide an outlook to the HL-LHC analysis.
Speaker: Johannes Elmsheuser (Brookhaven National Laboratory (US))
a_051119.pdf
Extreme compression for Large Scale Data store 15m
For the last 5 years Accelogic pioneered and perfected a radically new theory of numerical computing codenamed "Compressive Computing", which has an extremely profound impact on real-world computer science. At the core of this new theory is the discovery of one of its fundamental theorems which states that, under very general conditions, the vast majority (typically between 70% and 80%) of the bits used in modern large-scale numerical computations are absolutely irrelevant for the accuracy of the end result. This theory of Compressive Computing provides mechanisms able to identify (with high intelligence and surgical accuracy) the number of bits (i.e., the precision) that can be used to represent numbers without affecting the substance of the end results, as they are computed and vary in real time. The bottom line outcome would be to provide a state-of-the-art compression algorithm that surpasses those currently available in the ROOT framework, with the purpose of enabling substantial economic and operational gains (including speedup) for High Energy and Nuclear Physics data storage/analysis. In our initial studies, a factor of nearly x4 (3.9) compression was achieved with RHIC/STAR data where ROOT compression managed only x1.4.
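As a toy illustration of the claim that many bits are irrelevant, the snippet below zeroes the low mantissa bits of float32 values, bounding the relative error while making the buffer far more compressible with an ordinary codec. This only demonstrates the principle; it is not Accelogic's algorithm nor the proposed ROOT integration.

```python
import numpy as np
import zlib

rng = np.random.default_rng(42)
data = rng.normal(size=100_000).astype(np.float32)

def truncate_mantissa(a, keep_bits):
    # float32 has 23 mantissa bits; keep only the most significant `keep_bits`.
    mask = np.uint32((0xFFFFFFFF << (23 - keep_bits)) & 0xFFFFFFFF)
    return (a.view(np.uint32) & mask).view(np.float32)

truncated = truncate_mantissa(data, keep_bits=10)
print("max relative error:", np.max(np.abs(truncated - data) / np.abs(data)))
print("compressed bytes, full precision  :", len(zlib.compress(data.tobytes())))
print("compressed bytes, 10 mantissa bits:", len(zlib.compress(truncated.tobytes())))
```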
In this contribution, we will present our concepts of "functionally lossless compression", have a glance at examples and achievements in other communities, present the results and outcome of our current R&D as well as present a high-level view of our plan to move forward with a ROOT implementation that would deliver a basic solution readily integrated into HENP applications. As a collaboration of experimental scientists, private industry, and the ROOT Team, our aim is to capitalize on the substantial success delivered by the initial effort and produce a robust technology properly packaged as an open-source tool that could be used by virtually every experiment around the world as means for improving data management and accessibility.
Speaker: Gene Van Buren (Brookhaven National Laboratory)
CHEP2019_DataCompression.pdf
Data Analysis using ALICE Run3 Framework 15m
The ALICE Experiment is currently undergoing a major upgrade program, both in terms of hardware and software, to prepare for LHC Run 3. A new Software Framework is being developed in collaboration with the FAIR experiments at GSI to cope with the 100-fold increase in collected collisions. We present our progress in adapting this framework for end-user physics data analysis. In particular, we will highlight the design and technology choices. We will show how we adopt Apache Arrow as a platform for our in-memory analysis data layout. We will illustrate the benefits of this solution, such as efficient and parallel data processing, interoperability with a large number of analysis tools and ecosystems, and integration with the modern ROOT declarative analysis framework RDataFrame.
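As a minimal sketch of what an Arrow-backed, columnar in-memory layout allows, the snippet below stores per-track columns in a pyarrow table and applies a vectorised selection. Column names and cuts are invented, this is not the actual O2 analysis data model, and the exact availability of the compute functions depends on the pyarrow version.

```python
import pyarrow as pa
import pyarrow.compute as pc

# A toy "tracks" table with one value per track in each column.
tracks = pa.table({
    "pt":  [0.4, 1.2, 3.5, 0.9],
    "eta": [0.1, -0.7, 1.9, 0.3],
})

# Columnar selection: pt > 1 GeV and |eta| < 0.8, evaluated without a Python loop.
mask = pc.and_(pc.greater(tracks["pt"], 1.0),
               pc.less(pc.abs(tracks["eta"]), 0.8))
selected = tracks.filter(mask)
print(selected.to_pydict())
```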
Speaker: Giulio Eulisse (CERN)
2019-11-chep-alice-data-analysis.pdf
The Scikit-HEP project - overview and prospects 15m
Scikit-HEP is a community-driven and community-oriented project with the goal of providing an ecosystem for particle physics data analysis in Python. Scikit-HEP is a toolset of approximately twenty packages and a few "affiliated" packages. It expands the typical Python data analysis tools for particle physicists. Each package focuses on a particular topic, and interacts with other packages in the toolset, where appropriate. Most of the packages are easy to install in many environments; much work has been done this year to provide binary "wheels" on PyPI and conda-forge packages. The uproot family provides pure Python ROOT file access and has been a runaway success with over 15000 downloads per month. AwkwardArray provides a natural "Jagged" array structure. The iMinuit package exposes the MINUIT2 C++ package to Python. Histogramming is central in any analysis workflow and has received much attention, including new Python bindings for the performant C++14 Boost::Histogram library. The Particle and DecayLanguage packages were developed to deal with particles and decay chains. Other packages provide the ability to interface between Numpy and popular HEP tools such as Pythia and FastJet. The Scikit-HEP project has been gaining interest and momentum, by building a user and developer community engaging collaboration across experiments. Some of the packages are being used by other communities, including the astroparticle physics community. An overview of the overall project and toolset will be presented, as well as a vision for development and sustainability.
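A small, hedged example of the uproot plus awkward-array workflow mentioned above is shown below; the file name, tree name and branch names are placeholders, and the interface shown is the uproot 4 / awkward 1 one.

```python
import uproot
import awkward as ak

# Placeholder file/tree/branch names; these must match a real input file.
with uproot.open("events.root") as f:
    arrays = f["Events"].arrays(["Muon_pt", "Muon_eta"], library="ak")

# Jagged per-event muon lists: keep events with at least one muon above 20 GeV.
mask = ak.any(arrays["Muon_pt"] > 20.0, axis=1)
print("selected events:", int(ak.sum(mask)))
```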
Speaker: Eduardo Rodrigues (University of Cincinnati (US))
EduardoRodrigues_2019-11-05_CHEP2019Adelaide.pdf EduardoRodrigues_2019-11-05_CHEP2019Adelaide.pptx
COFFEA - Columnar Object Framework For Effective Analysis 15m
The COFFEA Framework provides a new approach to HEP analysis, via columnar operations, that improves time-to-insight, scalability, portability, and reproducibility of analysis. It is implemented with the Python programming language and commodity big data technologies such as Apache Spark and NoSQL databases. To achieve this suite of improvements across many use cases, COFFEA takes a factorized approach, separating the analysis implementation and data delivery scheme. All analysis operations are implemented using the NumPy or awkward-array packages which are wrapped to yield user code whose purpose is quickly intuited. Various data delivery schemes are wrapped into a common front-end which accepts user inputs and code, and returns user defined outputs. We will present published results from analysis of CMS data using the COFFEA framework along with a discussion of metrics and the user experience of arriving at those results with columnar analysis.
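To illustrate the columnar pattern that COFFEA wraps, the toy below processes chunks of events with array operations only and sums partial histograms into a final result. It is a generic sketch under invented column names and deliberately does not use the coffea processor or data-delivery interfaces.

```python
import numpy as np
import awkward as ak

def process(chunk):
    """One columnar pass over a chunk of events (no per-event Python loop)."""
    pt = chunk["jet_pt"]
    good = pt[pt > 30.0]                                   # jagged selection
    flat = ak.to_numpy(ak.flatten(good))
    counts, _ = np.histogram(flat, bins=25, range=(0, 500))
    return counts                                          # partial histogram

chunks = [
    {"jet_pt": ak.Array([[10.0, 45.0], [300.0], []])},
    {"jet_pt": ak.Array([[55.0, 75.0, 20.0]])},
]
total = sum(process(c) for c in chunks)                    # merge partial results
print("histogram contents:", total)
```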
Speaker: Nick Smith (Fermi National Accelerator Lab. (US))
ncsmith-chep2019-coffea.pdf
Track 7 – Facilities, Clouds and Containers: Trends and new Approaches Riverbank R7
Real-time HEP analysis with funcX, a high-performance platform for function as a service 15m
The traditional HEP analysis model uses successive processing steps to reduce the initial dataset to a size that permits real-time analysis. This iterative approach requires significant CPU time and storage of large intermediate datasets and may take weeks or months to complete. Low-latency, query-based analysis strategies are being developed to enable real-time analysis of primary datasets by replacing conventional nested loops over objects with native operations on hierarchically nested, columnar data. Such queries are well-suited to distributed processing using a strategy called function as a service (FaaS).
In this presentation we introduce funcX, a high-performance FaaS platform that enables intuitive, flexible, efficient, and scalable remote function execution on existing infrastructure including clouds, clusters, and supercomputers. A funcX function explicitly defines a function body and the dependencies required to execute the function. FuncX allows users, interacting via a REST API, to register and then execute such functions without regard for the physical resource location or scheduler architecture on which the function is executed, an approach we refer to as "serverless supercomputing". We show how funcX can be used to parallelize a real-world HEP analysis operating on columnar data to aggregate histograms of analysis products of interest in real time. Subtasks representing partial histograms are dispatched as funcX requests with expected runtimes of less than a second. Finally, we demonstrate efficient execution of such analyses on heterogeneous resources, including leadership-class computing facilities.
Speaker: Dr Anna Elizabeth Woodard (University of Chicago)
funcx.pdf
Trends in computing technologies and markets 15m
Driven by the need to carefully plan and optimise the resources for the next data taking periods of Big Science projects, such as CERN's Large Hadron Collider and others, sites started a common activity, the HEPiX Technology Watch Working Group, tasked with tracking the evolution of technologies and markets of concern to the data centres. The talk will give an overview of general and semiconductor markets, server markets, CPUs and accelerators, memories, storage and networks; it will highlight important areas of uncertainties and risks.
Speaker: Shigeki Misawa (BNL)
HEPiX_Techwatch-CHEP2019-v2.pdf
Integrating Interactive Jupyter Notebooks at the SDCC 15m
At the SDCC we are deploying a Jupyterhub infrastructure to enable scientists from multiple disciplines to access our diverse compute and storage resources. One major design goal was to avoid rolling out yet another compute backend and leverage our pre-existing resources via our batch systems (HTCondor and Slurm). Challenges faced include creating a frontend that allows users to choose what HPC resources they have access to as well as selecting containers or environments, delegating authentication to a MFA-enabled proxy, and automating deployment of multiple hub instances. We will show what we have done, and some examples of how we have worked with various groups to get their analysis working with Jupyter notebooks.
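A minimal sketch of this batch-backed spawning approach is a JupyterHub configuration that delegates notebook servers to the Slurm batch system via the batchspawner project. The resource-request traits and their values are site-specific assumptions for illustration, not the SDCC configuration.

```python
# jupyterhub_config.py -- illustrative sketch, not the SDCC configuration.
c = get_config()  # noqa: F821  (provided by JupyterHub when it loads this file)

# Spawn each user's notebook server as a Slurm batch job instead of running
# yet another dedicated compute backend.
c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"

# Assumed, site-specific resource requests (partition name, memory, walltime).
c.SlurmSpawner.req_partition = "jupyter"
c.SlurmSpawner.req_memory = "4gb"
c.SlurmSpawner.req_runtime = "8:00:00"
```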
Speaker: Ofer Rind
Jupyter_SDCC_CHEP2019_gif.key Jupyter_SDCC_CHEP2019.pdf
The SIMPLE Framework for deploying containerized grid services 15m
The WLCG has over 170 sites and the number is expected to grow in the coming years. In order to support WLCG workloads, each site has to deploy and maintain several middleware packages and grid services. Setting up, maintaining and supporting the grid infrastructure at a site can be a demanding activity and often requires significant assistance from WLCG experts. Modern configuration management (Puppet, Ansible, ...), container orchestration (Docker Swarm, Kubernetes, ...) and containerization technologies (Docker, ...) can effectively make such activities lightweight via packaging sensible configurations of grid services and providing simple mechanisms to distribute and deploy them across the infrastructure available at a site. This article describes the SIMPLE project: a Solution for Installation, Management and Provisioning of Lightweight Elements. The SIMPLE framework leverages modern infrastructure management tools to deploy containerized grid services, such as popular compute elements (HTCondor, ARC, ...), batch systems (HTCondor, Slurm, ...), worker nodes etc. It is built on the principles of software sustainability, modularity and scalability. The article also describes the framework's architecture, extensibility and the special features that enable lightweight deployments at WLCG sites.
Speaker: Julia Andreeva (CERN)
SIMPLE-16-9-CHEP-2019-v6.pdf SIMPLE-16-9-CHEP-2019-v6.pptx
Towards a NoOps Model for WLCG 15m
One of the most costly factors in providing a global computing infrastructure such as the WLCG is the human effort in deployment, integration, and operation of the distributed services supporting collaborative computing, data sharing and delivery, and analysis of extreme scale datasets. Furthermore, the time required to roll out global software updates, introduce new service components, or prototype novel systems requiring coordinated deployments across multiple facilities is often increased by communication latencies, staff availability, and in many cases expertise required for operations of bespoke services. While the WLCG computing grid (and distributed systems implemented throughout HEP) is a global service platform, it lacks the capability and flexibility of a modern platform-as-a-service including continuous integration/continuous delivery (CI/CD) methods, development-operations capabilities (DevOps, where developers assume a more direct role in the actual production infrastructure), and automation. Most importantly, tooling which reduces required training, bespoke service expertise, and the operational effort throughout the infrastructure, most notably at the resource endpoints ("sites"), is entirely absent in the current model. In this paper, we explore ideas and questions around potential "NoOps" models in this context: what is realistic given organizational policies and constraints? How should operational responsibility be organized across teams and facilities? What are the technical gaps? What are the social and cybersecurity challenges? Conversely what advantages does a NoOps model deliver for innovation and for accelerating the pace of delivery of new services needed for the HL-LHC era? We will describe initial work along these lines in the context of providing a data delivery network supporting IRIS-HEP DOMA R&D.
Speaker: Robert William Gardner Jr (University of Chicago (US))
Towards a NoOps Model for WLCG.pdf
The NOTED software tool-set enables improved exploitation of WAN bandwidth for Rucio data transfers via FTS 15m
We describe the software tool-set being implemented in the context of the NOTED [1] project to better exploit WAN bandwidth for Rucio and FTS data transfers, how it has been developed and the results obtained.
The first component is a generic data-transfer broker that interfaces with Rucio and FTS. It identifies data transfers for which network reconfiguration is both possible and beneficial, translates the Rucio and FTS information into parameters that can be used by network controllers and makes these available via a public interface.
The second component is a network controller that, based on the parameters provided by the transfer broker, decides which actions to apply to improve the path for a given transfer.
Unlike the transfer-broker, the network controller described here is tailored to the CERN network as it has to choose the appropriate action given the network configuration and protocols used at CERN. However, this network controller can easily be used as a model for site-specific implementations elsewhere.
The paper describes the design and the implementation of the two tools, the tests performed and the results obtained. It also analyses how the tool-set could be used for WLCG in the context of the DOMA [2] activity.
[1] Network Optimisation for Transport of Experimental Data - CERN project
[2] Data Organisation, Management and Access - WLCG activity
Speaker: Tony Cass (CERN)
noted_CHEP.pdf noted_CHEP.pptx
Track 8 – Collaboration, Education, Training and Outreach: Outreach Riverbank R1
The International Particle Physics Outreach Group - Reaching Across the Globe with Science 15m
The International Particle Physics Outreach Group (IPPOG) is a network of scientists, science educators and communication specialists working across the globe in informal science education and outreach for particle physics. The primary methodology adopted by IPPOG requires the direct involvement of scientists active in current research with education and communication specialists, in order to effectively develop and share best practices in outreach. IPPOG member activities include the International Particle Physics Masterclass programme, International Day of Women and Girls in Science, Worldwide Data Day, International Muon Week and International Cosmic Day organisation, and participation in activities ranging from public talks, festivals, exhibitions, teacher training, student competitions, and open days at local institutions. These independent activities, often carried out in a variety of languages to public with a variety of backgrounds, all serve to gain the public trust and to improve worldwide understanding and support of science. We present our vision of IPPOG as a strategic pillar of particle physics, fundamental research and evidence-based decision-making around the world.
Speaker: Steven Goldfarb (University of Melbourne (AU))
IPPOG-Overview-20191103.pdf IPPOG-Overview-20191103.pptx
Belle2VR: An Interactive Virtual Reality Visualization of GEANT4 Event Histories 15m
I describe a novel interactive virtual reality visualization of the Belle II detector at KEK and the animation therein of GEANT4-simulated event histories. Belle2VR runs on Oculus and Vive headsets (as well as in a web browser and on 2D computer screens, in the absence of a headset). A user with some particle-physics knowledge manipulates a gamepad or hand controller(s) to interact with and interrogate the detailed GEANT4 event history over time, to adjust the visibility and transparency of the detector subsystems, to translate freely in 3D, to zoom in or out, and to control the event-history timeline (scrub forward or backward, speed up or slow down). A non-expert uses the app - during public outreach events, for example - to explore the world of subatomic physics via electron-positron collision events in the Belle II experiment at the SuperKEKB colliding-beam facility at KEK in Japan. Multiple simultaneous users, wearing untethered locomotive VR backpacks and headsets, walk about a room containing the virtual model of the Belle II detector and each others' avatars as they observe and control the simulated event history. Developed at Virginia Tech by an interdisciplinary team of researchers in physics, education, and virtual environments, the simulation is intended to be integrated into the undergraduate physics curriculum. I describe the app, including visualization features and design decisions, and illustrate how a user interacts with its features to expose the underlying physics in each electron-positron collision event.
Speaker: Leo Piilonen (Virginia Tech)
BelleII-VR-CHEP2019.pdf BelleII-VR-CHEP2019.pptx
The CMS DAQ pinball machine 15m
We present an interactive game for up to seven players that demonstrates the challenges of on-line event selection at the Compact Muon Solenoid (CMS) experiment to the public. The game - in the shape of a popular classic pinball machine - was conceived and prototyped by an interdisciplinary team of graphic designers, physicists and engineers at the CMS Create hackathon in 2016. Having won the competition, the prototype was turned into a fully working machine that is now exhibited on the CMS visitor's path. Teams of 2-7 visitors can compete with one another to collect as many interesting events as possible within a simulated LHC fill. In a fun and engaging way, the game conveys concepts such as multi-level triggering, pipelined processing, event building, the importance of purity in event selection and more subtle details such as dead time. The multi-player character of the game corresponds to the distributed nature of the actual trigger and data acquisition system of the experiment. We present the concept of the game, its design and its technical implementation centred around an Arduino micro-controller controlling 700 RGB LEDs and a sound subsystem running on a Mac mini.
Speaker: Hannes Sakulin (CERN)
2019_11_04_CMSDAQPinballCHEP_final.pptx
Engaging the youth in programming and physics through an online educational activity 15m
The rapid economic growth is building new trends in careers. Almost every domain, including high-energy physics, needs people with strong capabilities in programming. In this evolving environment, it is highly desirable that young people are equipped with computational thinking (CT) skills, such as problem-solving and logical thinking, as well as the ability to develop software applications and write code. These are crucial elements of Science, Technology, Engineering, and Mathematics education (STEM).
This talk will present an outcome from a Proof of Concept study of educational online activity. The project consists of building a first step of an interactive coding tutorial that will aim to introduce young people to computer science and particle physics principles in a fun and engaging way. Successful realization of this online educational asset will equip educators with a new tool to introduce STEM education and digital literacy in the classrooms, to eventually inspire young people to acquire necessary skills to be ready for a digital economic growth and future jobs.
Speaker: Marzena Lapka (CERN)
Coding_tutorial_CHEP2019_MLapka.pdf Coding_tutorial_CHEP2019_MLapka.pptx
Fluidic Data: When Art Meets CERN, Data Flows 15m
Fluidic Data is a floor-to-ceiling installation spanning the four levels of the CERN Data Centre stairwell. It utilizes the interplay of water and light to visualize the magnitude and flow of information coming from the four major LHC experiments. The installation consists of an array of transparent hoses that house colored fluid, symbolizing the data of each experiment, surrounded by a collection of diffractive "pods" representing the particles pivotal to each experiment. The organic fusion of art and science engenders a meditative environment, allowing the visitor time for reflection and curiosity.
The Fluidic Data installation is a cross department collaboration that incorporates materials and techniques used in the construction of the LHC and its experiments. The project brings together artists, engineers, science communicators and physicists with a common goal of communicating CERN's research and resources. The success of this collaboration exemplifies the effectiveness of working in diverse teams, both intellectually and culturally, to accomplish unique projects.
Speaker: Julien Leduc (CERN)
FluidicData05112019CHEP.pptm fluidicdata.pdf
Public Engagement - More than just fun 15m
Public Engagement (PE) with science should be more than "fun" for the staff involved. PE should be a strategic aim of any publicly funded science organisation, to ensure the public develops an understanding and appreciation of its work and its benefits to everyday life, and to ensure the next generation is enthused to take up STEM careers. Most scientific organisations do have aims to do this, but very few have significant budgets to deliver it. In a landscape of ever tightening budgets, how can we develop a sustainable culture of PE within these organisations?
UKRI/STFC's Scientific Computing Department presents how we have worked to embed a culture of PE within the department by developing our early career staff members; highlighting the impact PE makes at the departmental and project level; and linking PE to our competency framework.
We will also discuss how our departmental work interacts with and complements STFC's organisational-wide PE effort, such as making use of a shared evaluation framework that allows us to evaluate our public engagement activities against their goals and make strategic decisions about the programmes future direction.
Speaker: Mr Greg Corbett (STFC)
Public Engagement - More than just fun.pptx
Track 9 – Exascale Science: Porting applications to HPCs Riverbank R4
Convener: Steven Farrell (Lawrence Berkeley National Lab (US))
MPI-based tools for large-scale training and optimization at HPC sites 15m
MPI-learn and MPI-opt are libraries to perform large-scale training and hyper-parameter optimization for deep neural networks. The two libraries, based on the Message Passing Interface, allow these tasks to be performed on GPU clusters through different kinds of parallelism. The main characteristic of these libraries is their flexibility: the user has complete freedom in building her own model, thanks to the multi-backend support. In addition, the library supports several cluster architectures, allowing deployment on multiple platforms. This generality can make this the basis for a train-and-optimise service for the HEP community. We present scalability results obtained from two typical HEP use-cases: jet identification from raw data and shower generation from a GAN model. Results on GPU clusters were obtained at the ORNL Titan supercomputer and other HPC facilities, as well as by exploiting commercial cloud resources and OpenStack. A comprehensive comparison of scalability performance across platforms will be presented, together with a detailed description of the libraries and their functionalities.
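The core of the data-parallel training mode in such MPI-based tools can be illustrated with a few lines of mpi4py: each rank computes gradients on its own data shard and an allreduce averages them before the weight update. The gradient computation here is a random stand-in, not the MPI-learn code. Run with, for example, `mpirun -n 4 python train_sketch.py` (the script name is arbitrary).

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

weights = np.zeros(10)                        # every rank starts from the same weights
rng = np.random.default_rng(seed=rank)        # each rank sees a different data shard

for step in range(5):
    local_grad = rng.normal(size=weights.shape)          # stand-in for a real gradient
    global_grad = np.empty_like(local_grad)
    comm.Allreduce(local_grad, global_grad, op=MPI.SUM)  # sum gradients across ranks
    weights -= 0.01 * (global_grad / size)               # synchronous SGD update

if rank == 0:
    print("final weights:", weights)
```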
Speaker: Vladimir Loncar (University of Belgrade (RS))
NNLO - CHEP2019.pdf
Migrating Engineering Windows HPC applications to Linux HTCondor and SLURM Clusters 15m
CERN IT department has been maintaining different HPC facilities over the past five years, one in Windows and the other one on Linux as the bulk of computing facilities at CERN are running under Linux. The Windows cluster has been dedicated to engineering simulations and analysis problems. This cluster is a High Performance Computing (HPC) cluster thanks to powerful hardware and low-latency interconnects. The Linux cluster resources are accessible through HTCondor, and are used for general purpose parallel but single-node type jobs, providing computing power to the CERN experiments and departments for tasks such as physics event reconstruction, data analysis and simulation. For HPC workloads that require multi-node parallel environments for MPI programs, there is a dedicated HPC service with MPI clusters running under the SLURM batch system and dedicated hardware with fast interconnects.
In the past year, it was decided to consolidate compute-intensive jobs on Linux to make better use of the existing resources. This was also in line with the CERN IT strategy to reduce its dependencies on Microsoft products. This paper describes the migration of Ansys, COMSOL and CST users who were running on Windows HPC to Linux clusters. Ansys, COMSOL and CST are three engineering applications used at CERN in different domains, such as multiphysics simulations or electromagnetic field problems. Users of these applications are based in different departments, with different needs and levels of expertise; in most cases the users have no prior knowledge of Linux. The paper will present the technical strategy for allowing the engineering users to submit their simulations to the appropriate Linux cluster, depending on their hardware needs. It will also describe the technical solution for integrating their Windows installations so that they can submit to Linux clusters. Finally, the challenges and lessons learnt during the migration will also be discussed.
Speaker: Maria Alandes Pradillo (CERN)
HPC_CHEP2019.pdf
Geant Exascale Pilot Project 15m
The upcoming generation of exascale HPC machines will all have most of their computing power provided by GPGPU accelerators. In order to be able to take advantage of this class of machines for HEP Monte Carlo simulations, we started to develop a Geant pilot application as a collaboration between HEP and the Exascale Computing Project. We will use this pilot to study and characterize how the machines' architecture affects performance. The pilot will encapsulate the minimum set of physics and software framework processes necessary to describe a representative HEP simulation problem. The pilot will then be used to exercise communication, computation, and data access patterns. The project's main objective is to identify re-engineering opportunities that will increase event throughput by improving single node performance and being able to make efficient use of the next generation of accelerators available in Exascale facilities.
Speaker: Elizabeth Sexton-Kennedy (Fermi National Accelerator Lab. (US))
2019_CHEP_Geant_Exascale_Pilot 2019_CHEP_Geant_Exascale_Pilot.pdf
Covariance Matrix Acceleration on a Hybrid FPGA/CPU Platform 15m
Covariance matrices are used for a wide range of applications in particle physics, including the Kalman filter for tracking purposes, as well as for Principal Component Analysis and other dimensionality reduction techniques. The covariance matrix contains covariance and variance measures between all pairs of data dimensions, leading to high computational cost.
By using a novel decomposition of the covariance matrix and exploiting parallelism on the FPGA as well as the separability of subtasks between CPU and FPGA, a linear increase of computation time up to 156 integer dimensions and a constant computation time for 16 integer dimensions are achieved for exact covariance matrix calculation on a hybrid FPGA-CPU system, the Intel HARP 2. This yields results up to 100 times faster than the FPGA baseline and computation times 10 times faster than standard CPU covariance matrix calculation.
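The attraction of decomposing the covariance computation for streaming hardware can be seen from the generic identity (not necessarily the specific decomposition used in this work)

$$ \mathrm{cov}(x_i,x_j) \;=\; \frac{1}{N}\sum_{k=1}^{N} x_i^{(k)} x_j^{(k)} \;-\; \Big(\frac{1}{N}\sum_{k=1}^{N} x_i^{(k)}\Big)\Big(\frac{1}{N}\sum_{k=1}^{N} x_j^{(k)}\Big), $$

so the FPGA only needs to accumulate the running sums $\sum_k x_i^{(k)} x_j^{(k)}$ and $\sum_k x_i^{(k)}$ in parallel for all index pairs, while the final products, subtraction and normalisation can be offloaded to the CPU.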
Speaker: Lukas On Arnold (Columbia University)
presentation_2019_chep.pdf presentation_2019_chep.pptx
3D Generative Adversarial Networks inference implementation on FPGAs 15m
Detailed simulation is one of the most expensive tasks, in terms of time and computing resources, for High Energy Physics experiments. The need for simulated events will dramatically increase for the next generation of experiments, like the ones that will run at the High-Luminosity LHC. The computing model must evolve, and in this context alternative fast simulation solutions are being studied. 3DGAN represents a successful example among the several R&D activities focusing on the use of deep generative models for particle detector simulation: physics results, in terms of agreement with standard Monte Carlo techniques, are already very promising. Optimisation of the computing resources needed to train these models, and consequently to deploy them efficiently during the inference phase, will be essential to exploit the added value of their full capabilities.
In this context, CERN openlab has a collaboration with the researchers at SHREC at the University of Florida and with Intel to accelerate the 3DGAN inferencing stage using FPGAs. This contribution will describe the efforts ongoing at the University of Florida to develop an efficient heterogeneous computing (HGC) framework, CPUs integrated with accelerators such as GPUs and FPGAs, in order to accelerate Deep Learning. The HGC framework uses Intel distribution of OpenVINO, running on an Intel Programmable Acceleration Card (PAC) equipped with an Arria 10 GX FPGA.
Integration of the 3DGAN use case in the HGC framework has required development and optimisation of new FPGA primitives using the Intel Deep Learning Acceleration (DLA) development suite.
A number of details of this work and preliminary results will be presented, specifically in terms of speedup, stimulating a discussion for future development.
3DGAN_FPGA_T9.pdf 3DGAN_FPGA_T9.pptx
Track 1 – Online and Real-time Computing: Real-time analysis Riverbank R5
40 MHz Level-1 Trigger Scouting for CMS 15m
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its optimal physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track trigger processor capable of providing tracks at Level-1, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end electronics will also provide the level-1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded Level-1 processors, based on powerful FPGAs, will be able to carry out sophisticated feature searches with resolutions often similar to the offline ones, while keeping pileup effects under control. In this paper, we discuss the feasibility of a system capturing Level-1 intermediate data at the beam-crossing rate of 40 MHz and carrying out online analyses based on these limited-resolution data. This 40 MHz scouting system would provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements and, in some cases, calibrations, and it has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the L1 accept budget, or with requirements which are orthogonal to "mainstream" physics, such as long-lived particles. We discuss the requirements and possible architecture of a Phase-2 40 MHz scouting system, as well as some of the physics potential, and results from a demonstrator operated at the end of Run-2 using the Global Muon Trigger data from CMS. Plans for further demonstrators envisaged for Run 3 are also discussed.
2019_11_04_40MHzScoutingCHEP_final.pdf 2019_11_04_40MHzScoutingCHEP_final.pptx
An express data production chain in the STAR experiment 15m
Within the FAIR Phase-0 program the fast algorithms of the FLES (First-Level Event Selection) package developed for the CBM experiment (FAIR/GSI, Germany) are adapted for online and offline processing in the STAR experiment (BNL, USA). Using the same algorithms creates a bridge between online and offline. This makes it possible to combine online and offline resources for data processing.
Thus, on the basis of the STAR HLT farm an express data production chain was created, which extends the functionality of the HLT in real time, up to the analysis of physics. The same express data production chain can be used on the RCF farm, which is used for fast offline production with similar tasks to those in the extended HLT. The chain of express analysis does not interfere with the chain of standard analysis.
An important advantage of express analysis is that it allows calibration, production and analysis of the data to start as soon as they are received. Therefore, use of the express analysis can be beneficial for BES-II data production and can help accelerate scientific discovery by helping to obtain results within a year after the end of data acquisition.
The specific features of express data production are described, as well as results of online QA plots such as the real-time reconstruction of secondary decays in a BES-II environment.
Speaker: Ivan Kisel (Johann-Wolfgang-Goethe Univ. (DE))
Kisel STAR CHEP-2019.pdf
Trigger level analysis technique in ATLAS for Run 2 and beyond 15m
With the unprecedented high luminosity delivered by the LHC, detector readout and data storage limitations severely constrain searches for processes with high-rate backgrounds. An example is the search for mediators of the interactions between the Standard Model and dark matter, decaying to hadronic jets. Traditional signatures and data-taking techniques limit these searches to masses above the TeV scale. In order to extend the search range to lower masses on the order of 100 GeV and probe weaker couplings, the ATLAS experiment employs a range of novel trigger and analysis strategies. One of these is the trigger-level analysis (TLA), which records only trigger-level jet objects instead of the full detector information. This strategy of using only partial event information permits the use of lower jet trigger thresholds and increased recording rates with minimal impact on the total output bandwidth. We discuss the implementation of this stream and its planned updates for Run 3 and outline its technical challenges. We also present the results of an analysis using this technique, highlighting its competitiveness and complementarity with traditional data streams.
Speaker: Antonio Boveia (Ohio State University)
20191105-tla.pdf
Ingest pipeline for ASKAP 15m
The Australian Square Kilometre Array Pathfinder (ASKAP) is a new-generation 36-antenna, 36-beam interferometer capable of producing about 2.5 Gb/s of raw data. The data are streamed from the observatory directly to a dedicated small cluster at the Pawsey HPC centre. The ingest pipeline is a distributed real-time software system which runs on this cluster and prepares the data for further (offline) processing by the imaging and calibration pipelines. In addition to its main functionality, it turned out to be a valuable tool for various commissioning experiments and allowed us to run an interim system and achieve the first scientific results much earlier. I will review the architecture of the ingest pipeline, its role in ASKAP's overall design, as well as the lessons learned by developing a hard real-time application in the HPC environment.
Speaker: Dr Maxim Voronkov (CSIRO)
MVoronkovASKAPIngest.pdf
Low Latency, Online Processing of the High-Bandwidth Bunch-by-Bunch Observation Data from the Transverse Damper Systems of the LHC 15m
The transverse feedback system in the LHC provides turn-by-turn, bunch-by-bunch measurements of the beam transverse position with a submicrometer resolution from 16 pickups. This results in 16 high-bandwidth data streams (1 Gbit/s each), which are sent through a digital signal processing chain to calculate the correction kicks that are then applied to the beam. These data streams contain valuable information about beam parameters and stability. A system that can extract and analyze these parameters and make them available for the users is extremely valuable for the accelerator physicists, machine operators, and engineers working with the LHC. This paper introduces the next-generation transverse observation system, which was designed specifically to allow demanding low-latency (few turns) beam parameter analysis such as passive tune extraction or transverse instability detection, while at the same time providing users around CERN with the raw data streams in the form of buffers. A new acquisition card and driver were developed that achieve a latency of less than 100$\mu$s from the position being measured by the pickup to the data being available for processing on the host. This data is then processed by a multitude of applications that are executed in a real-time environment that was fine-tuned for the driver and the applications. To handle the high throughput required by the analysis applications without saturating the computing resources, parallel programming techniques are used together with GPGPU computing.
Speaker: Martin Soderen (CERN)
ADTObsBox_CHEP_2019.pdf ADTObsBox_CHEP_2019.pptx
JANA2 Framework for event based and triggerless data processing 15m
Development of the second-generation JANA2 multi-threaded event processing framework is ongoing through an LDRD initiative grant at Jefferson Lab. The framework is designed to take full advantage of all cores on modern many-core compute nodes. JANA2 efficiently handles both traditional hardware-triggered event data and streaming data in online triggerless environments. Development is being done in conjunction with work towards the Electron-Ion Collider, anticipated to be the next large-scale Nuclear Physics facility to be constructed. The core framework is written in modern C++ but includes an integrated Python interface. The status of development and a summary of the more interesting features will be presented.
Speaker: David Lawrence (Jefferson Lab)
2019.11.5.CHEP_JANA2.pdf
Track 2 – Offline Computing: Lightweight simulation and optimisation Riverbank R6
Gaussino - a Gaudi-based core simulation framework 15m
The increase in luminosity foreseen in the future years of operation of the Large Hadron Collider (LHC) creates new challenges in computing efficiency for all participating experiments. To cope with these challenges and in preparation for the third running period of the LHC, the LHCb collaboration is currently overhauling its software framework to better utilise modern computing architectures. This effort includes the LHCb simulation framework (Gauss).
In this talk, we present Gaussino, an LHCb-independent simulation framework that incorporates the reimplemented or modernised core features of Gauss and forms the basis of LHCb's future simulation framework. It is built on Gaudi's functional framework, making use of multiple threads. Event generation is interfaced to external generators, with an example implementation of a multi-threaded Pythia8 interface included. The detector simulation is handled by the multithreaded version of Geant4, with an interface allowing for the parallel execution of multiple events at the same time as well as for parallelism within a single event. Additionally, we present the integration of the DD4hep geometry description into Gaussino to handle the detector geometry and its conversion.
Speaker: Dominik Muller (CERN)
DM_Gaussino.pdf
Geant4 performance optimization in the ATLAS experiment 15m
Software improvements in the ATLAS Geant4-based simulation are critical to keep up with the evolving hardware and increasing luminosity. Geant4 simulation currently accounts for about 50% of CPU consumption in ATLAS and it is expected to remain the leading CPU load during Run 4 (HL-LHC upgrade), with an approximately 25% share in the most optimistic computing model. The ATLAS experiment recently developed two algorithms for optimizing Geant4 performance: Neutron Russian Roulette (NRR) and range cuts for electromagnetic processes. The NRR randomly terminates a fraction of low-energy neutrons in the simulation and weights the energy deposits of the remaining neutrons to maintain physics performance. Low-energy neutrons typically undergo many interactions with the detector material and their path becomes uncorrelated with the point of origin; therefore, the response of neutrons can be efficiently estimated with only a subset of neutrons. Range cuts for electromagnetic processes exploit a built-in feature of Geant4 and terminate low-energy electrons that originate from physics processes including conversions, the photoelectric effect, and Compton scattering. Both algorithms were tuned to maintain physics performance in ATLAS and together they bring about a 20% speedup of the ATLAS Geant4 simulation. Additional ideas for improvements currently under investigation will also be discussed in the talk. Lastly, this talk presents how the ATLAS experiment utilizes software packages such as Intel's VTune to identify and resolve hot-spots in simulation.
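A minimal sketch of the Russian Roulette idea described above (illustrative Python pseudo-logic, not the ATLAS/Geant4 implementation; the energy threshold and survival probability are assumptions): a fraction of low-energy neutrons is killed, and the survivors carry a compensating weight so that the expected energy deposit is unchanged.

    import random

    E_THRESHOLD = 2.0   # MeV, assumed threshold below which roulette is played
    P_SURVIVE   = 0.1   # assumed survival probability

    def russian_roulette(neutron):
        """Return the neutron (possibly re-weighted) or None if it is killed."""
        if neutron["energy"] > E_THRESHOLD:
            return neutron                    # energetic neutrons are never touched
        if random.random() < P_SURVIVE:
            neutron["weight"] /= P_SURVIVE    # survivors compensate for killed ones
            return neutron
        return None                           # killed: no further tracking cost

    def score_deposit(neutron, edep, calorimeter):
        # Deposits are weighted, so the expectation value is preserved on average.
        calorimeter["edep"] += neutron["weight"] * edep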
miham_2019_11_05_CHEP_Sim.pdf
FullSimLight: ATLAS standalone Geant4 simulation 15m
HEP experiments simulate the detector response by accessing all needed data and services within their own software frameworks. However, decoupling the simulation process from the experiment infrastructure can be useful for a number of tasks, amongst them the debugging of new features, or the validation of multithreaded vs sequential simulation code and the optimization of algorithms for HPCs. The relevant features and data must be extracted from the framework to produce a standalone simulation application.
As an example, the simulation of the detector response of the ATLAS experiment at the LHC is based on the Geant4 toolkit and is fully integrated in the experiment's framework "Athena". Recent developments opened the possibility of accessing a full persistent copy of the ATLAS geometry outside of the Athena framework. This is a prerequisite for running ATLAS Geant4 simulation standalone. In this talk we present the status of development of FullSimLight, a full simulation prototype that is being developed with the goal of running ATLAS standalone Geant4 simulation with the actual ATLAS geometry.
The purpose of FullSimLight is to simplify studies of Geant4 tracking and physics processes, including on novel architectures. We will also address the challenges related to the complexity of ATLAS's geometry implementation, which precludes persistifying a complete detector description in a way that can be automatically read by standalone Geant4. This lightweight prototype is meant to ease debugging operations on the Geant4 side and to allow early testing of new Geant4 releases. It will also ease optimization studies and R&D activities related to HPC development, e.g. the possibility of partially or totally offloading the simulation to GPUs/accelerators without having to port the whole experiment infrastructure.
Speaker: Marilena Bandieramonte (University of Pittsburgh (US))
FullSimLight_MBandieramonte_CHEP2019.pdf
The Heavy Photon Search Experiment Software Environment 15m
The Heavy Photon Search (HPS) is an experiment at the Thomas Jefferson National Accelerator Facility designed to search for a hidden-sector photon (A') in fixed-target electro-production. It uses a silicon micro-strip tracking and vertexing detector inside a dipole magnet to measure charged-particle trajectories and a fast lead-tungstate crystal calorimeter just downstream of the magnet to provide a trigger and to identify electromagnetic showers. The HPS experiment uses both invariant mass and secondary vertex signatures to search for the A'. The overall design of the detector follows from the kinematics of A' production, which typically results in a final-state particle within a few degrees of the incoming beam. The occupancies of sensors near the beam plane are high, so high-rate detectors, a fast trigger, and excellent time tagging are required to minimize their impact, and detailed simulations of backgrounds are crucial to the success of the experiment. The detector is fully simulated using the flexible and performant Geant4-based program "slic" using the xml-based "lcdd" detector description (described in previous CHEP conferences). Simulation of the readout and the event reconstruction itself are performed with the Java-based software package "hps-java." The simulation of the detector readout includes full charge deposition, drift and diffusion in the silicon wafers, followed by a detailed simulation of the readout chip and associated electronics. Full accounting of the occupancies and trigger was performed by overlaying simulated beam backgrounds. HPS has successfully completed two engineering runs and will complete its first physics run in the summer of 2019. Event reconstruction involving track, cluster and vertex finding and fitting for both simulated and real data will be described. We will begin with an overview of the physics goals of the experiment, followed by a short description of the detector design. We will then describe the software tools used to design the detector layout and simulate the expected detector performance. Finally, the event reconstruction chain will be described and preliminary comparisons of the expected and measured detector performance will be presented.
Speaker: Norman Anthony Graf (SLAC National Accelerator Laboratory (US))
HPS_Software_CHEP2019_Graf.pdf
Selective background Monte Carlo simulation at Belle II 15m
The large volume of data expected to be produced by the Belle II experiment presents the opportunity for studies of rare, previously inaccessible processes. Investigating such rare processes in a high-data-volume environment requires a correspondingly high volume of Monte Carlo simulations to prepare analyses and gain a deep understanding of the physics processes contributing to each individual study. The resulting challenge, in terms of computing resource requirements, calls for more intelligent methods of simulation, in particular for background processes with very high rejection rates. This work presents a method of predicting, in the early stages of the simulation process, the likelihood that an individual event is relevant to the target study, using convolutional neural networks. The results show a robust training that is integrated natively into the existing Belle II analysis software framework, with steps taken to mitigate systematic biases induced by the early selection procedure.
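A minimal sketch of the early-selection idea (illustrative Keras code, not the Belle II basf2 integration; the input shape, architecture and threshold are assumptions): a small CNN scores the early-stage event information, and only events above a threshold are passed on to the expensive full detector simulation.

    # Illustrative early event selection before full simulation (not the Belle II code).
    import numpy as np
    import tensorflow as tf

    def build_selector(input_shape=(32, 32, 1)):      # assumed event "image"
        return tf.keras.Sequential([
            tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=input_shape),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu"),
            tf.keras.layers.GlobalAveragePooling2D(),
            tf.keras.layers.Dense(1, activation="sigmoid"),   # P(event is relevant)
        ])

    model = build_selector()
    model.compile(optimizer="adam", loss="binary_crossentropy")

    def simulate_if_relevant(event_image, full_simulation, threshold=0.05):
        """Skip expensive detector simulation for events deemed irrelevant."""
        p = float(model(event_image[np.newaxis, ...], training=False)[0, 0])
        return full_simulation(event_image) if p > threshold else None

Any residual bias from the early rejection has to be corrected for or shown to be negligible, which is what the systematic studies mentioned above address.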
Speaker: James Kahn (Karlsruhe Institute of Technology (KIT))
kahn_CHEP19_KIT.pdf
Multithreaded simulation for ATLAS: challenges and validation strategy 15m
Estimations of the CPU resources that will be needed to produce simulated data for the future runs of the ATLAS experiment at the LHC indicate a compelling need to speed up the process to reduce the computational time required. While different fast simulation projects are ongoing (FastCaloSim, FastChain, etc.), full Geant4-based simulation will still be heavily used and is expected to consume the biggest portion of the total estimated processing time. In order to run effectively on modern architectures and profit from multi-core designs, a migration of the Athena framework to a multi-threaded processing model has been performed in recent years. A multi-threaded simulation based on AthenaMT and Geant4MT enables substantial decreases in the memory footprint of jobs, largely from shared geometry and cross-section tables. This approach scales better than the multi-processing approach (AthenaMP), especially on the architectures that are foreseen to be used in the next LHC runs. In this paper we report on the status of the multithreaded simulation in ATLAS, focusing on the different challenges of its validation process. We demonstrate the different tools and strategies that have been used for debugging multi-threaded runs versus the corresponding sequential ones, in order to have a fully reproducible and consistent simulation result.
AthenaMT_MBandieramonte_CHEP2019.pdf
Track 3 – Middleware and Distributed Computing: Operations & Monitoring Riverbank R3
Operational Intelligence 15m
In the near future, large scientific collaborations will face unprecedented computing challenges. Processing and storing exabyte datasets requires a federated infrastructure of distributed computing resources. The current systems have proven to be mature and capable of meeting the experiment goals, by allowing timely delivery of scientific results. However, a substantial amount of intervention from software developers, shifters and operational teams is needed to efficiently manage such heterogeneous infrastructures. For instance, every year thousands of tickets are submitted to the ATLAS and CMS issue tracking systems and are then processed by the experiment operators. On the other hand, logging information from computing services and systems is being archived in ElasticSearch, Hadoop, and NoSQL data stores. Such a wealth of information can be exploited to increase the level of automation in computing operations by using adequate techniques, such as machine learning (ML), tailored to solve specific problems. ML models applied to the prediction of intelligent data placements and access patterns can help to increase the efficiency of resource exploitation and the overall throughput of the experiments' distributed computing infrastructures. Time-series analyses may allow for the estimation of the time needed to complete certain tasks, such as processing a certain number of events or transferring a certain amount of data. Anomaly detection techniques can be employed to predict system failures, such as those leading to network congestion. Recording and analyzing shifter actions can be used to automate tasks such as submitting tickets to support centers, or to suggest possible solutions to recurring issues. The Operational Intelligence project is a joint effort of various WLCG communities aimed at increasing the level of automation in computing operations. We discuss how state-of-the-art technologies can be used to build general solutions to common problems and to reduce the operational cost of the experiment computing infrastructure.
Speaker: Alessandro Di Girolamo (CERN)
CHEP 2019 Operational Intelligence - rev05.pdf
Big data solutions for CMS computing monitoring and analytics 15m
The CMS computing infrastructure is composed of several subsystems that accomplish complex tasks such as workload and data management, transfers, and the submission of user and centrally managed production requests. Until recently, most subsystems were monitored through custom tools and web applications, and logging information was scattered over several sources and typically accessible only by experts. In the last year CMS computing fostered the adoption of common big data solutions based on open-source, scalable, and NoSQL tools, such as Hadoop, InfluxDB, and ElasticSearch, available through the CERN IT infrastructure. Such a system allows for the easy deployment of monitoring and accounting applications using visualisation tools such as Kibana and Grafana. Alarms can be raised when anomalous conditions in the monitoring data are met, and the relevant teams are automatically notified. Data sources from different subsystems are used to build complex workflows and predictive analytics (data popularity, smart caching, transfer latency, …), and for performance studies. We describe the full software architecture and data flow, the CMS computing data sources and monitoring applications, and show how the stored data can be used to gain insights into the various subsystems by exploiting scalable solutions based on Spark.
CHEP19 - CMS monitoring(1).pdf
Implementation of ATLAS Distributed Computing monitoring dashboards using InfluxDB and Grafana 15m
For the last 10 years, the ATLAS Distributed Computing project has based its monitoring infrastructure on a set of custom designed dashboards provided by CERN-IT. This system functioned very well for LHC Runs 1 and 2, but its maintenance has progressively become more difficult and the conditions for Run 3, starting in 2021, will be even more demanding; hence a more standard code base and more automatic operations are needed. A new infrastructure has been provided by the CERN-IT Monit group, based on InfluxDB as the data store and Grafana as the display environment. ATLAS has adapted and further developed its monitoring tools to use this infrastructure for data and workflow management monitoring and accounting dashboards, expanding the range of previous possibilities with the aim of achieving a single, simpler, environment for all monitoring applications. This presentation will describe the tools used, the data flows for monitoring and accounting, the problems encountered and the solutions found.
Speaker: Thomas Beermann (University of Innsbruck (AT))
ATL-SOFT-SLIDE-2019-772.pdf
Automatic log analysis with NLP for the CMS workflow handling 15m
The central Monte Carlo production of the CMS experiment utilizes the WLCG infrastructure and manages thousands of tasks daily, each comprising up to thousands of jobs. The distributed computing system is bound to sustain a certain rate of failures of various types, which are currently handled by computing operators a posteriori. Within the context of computing operations and operational intelligence, we propose a machine learning technique to learn from the operators with a view to reducing the operational workload and delays. This work continues the CMS effort on operational intelligence to reach accurate predictions with machine learning. We present an approach that treats the log files of the workflows as regular text in order to leverage modern techniques from natural language processing (NLP). In general, log files contain a substantial amount of text that is not human language. Therefore, different log parsing approaches are studied in order to map the words of the log files to high-dimensional vectors. These vectors are then exploited as a feature space to train a model that predicts the action that the operator has to take. This approach has the advantage that the information in the log files is extracted automatically and the format of the logs can be arbitrary. In this work the performance of the log file analysis with NLP is presented and compared to previous approaches.
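A minimal sketch of the log-to-action classification pipeline described above (illustrative scikit-learn code, not the CMS tooling; the tokeniser, example logs, labels and model choice are assumptions):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Assumed toy training data: raw log excerpts and the action the operator took.
    logs = [
        "Fatal Exception ... stageout failure at site T2_XX",
        "job exceeded maximum wallclock time, killed by batch system",
    ]
    actions = ["resubmit-elsewhere", "extend-runtime-and-resubmit"]

    clf = make_pipeline(
        TfidfVectorizer(token_pattern=r"[A-Za-z_]{3,}",   # keep word-like tokens only
                        max_features=50000),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(logs, actions)

    print(clf.predict(["stageout failure at site T2_YY"]))  # -> suggested action

More elaborate embeddings (word2vec-style vectors or neural classifiers) slot into the same train/predict structure.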
Speaker: Lukas Layer (Universita e sezione INFN di Napoli (IT))
NLP_LLayer_CHEP_FinalDraft_v10.pdf
Easy-to-use data schema management scheme for RDBMS that includes the utilization of the column-store features 15m
Relational databases (RDB) and their management systems (RDBMS) offer many advantages, such as a rich query language, maintainability gained from a concrete schema, and robust and reasonable backup solutions such as differential backup. Recently, some RDBMSs have added column-store features that offer data compression with good performance in terms of both data size and query speed. These features are useful for data collection and management. However, it is not easy to leverage such features: an RDBMS achieves reasonable performance only with a proper description of the data schema, which requires expertise.
In this talk, we propose an easy-to-use data schema management scheme for RDBMSs that includes the utilization of column-store features. Our approach mainly focuses on time-series data. It supports appropriate schema generation for an RDBMS, including the automatic creation of sub-tables and indexes, which is a good preparation for leveraging the column-store features.
Along with the proposal, we implemented a prototype system on a PostgreSQL-based RDBMS. Our preliminary experiments show good performance compared with other ordinary approaches.
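A minimal sketch of the kind of automatic sub-table and index creation described above (illustrative PostgreSQL declarative range partitioning driven from Python; the table and column names are assumptions, and the column-store extension itself is not shown):

    # Illustrative monthly-partitioned time-series table (not the proposed system).
    import psycopg2

    DDL = """
    CREATE TABLE IF NOT EXISTS sensor_data (
        ts      timestamptz NOT NULL,
        channel integer     NOT NULL,
        value   double precision
    ) PARTITION BY RANGE (ts);
    """

    def ensure_month_partition(cur, year, month):
        """Create the sub-table and index for one month if they do not exist yet."""
        name = f"sensor_data_{year}_{month:02d}"
        nxt_y, nxt_m = (year + 1, 1) if month == 12 else (year, month + 1)
        cur.execute(f"""
            CREATE TABLE IF NOT EXISTS {name}
            PARTITION OF sensor_data
            FOR VALUES FROM ('{year}-{month:02d}-01') TO ('{nxt_y}-{nxt_m:02d}-01');
        """)
        cur.execute(f"CREATE INDEX IF NOT EXISTS {name}_ts_idx ON {name} (ts);")

    with psycopg2.connect("dbname=monitoring") as conn, conn.cursor() as cur:
        cur.execute(DDL)
        ensure_month_partition(cur, 2019, 11)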
Speaker: Tadashi Murakami (KEK)
CHEP2019_column-store_murakami_v1.0.pdf
Monitoring distributed computing beyond the traditional time-series histogram 15m
In this work we review existing monitoring outputs and recommend some novel alternative approaches to improve the comprehension of the large volumes of operations data that are produced in distributed computing. Current monitoring output is dominated by the pervasive use of time-series histograms showing the evolution of various metrics. These can quickly overwhelm or confuse the viewer due to the large number of similar-looking plots. We propose a supplementary approach through the sonification of real-time data streamed directly from a variety of distributed computing services. The real-time nature of this method allows operations staff to quickly detect problems and identify that a problem is still ongoing, avoiding the situation where an issue is investigated after the fact, when it may already have been resolved. In this paper we present details of the system architecture and provide a recipe for deployment suitable for both site and experiment teams.
Speaker: Peter Love (Lancaster University (GB))
subtlenoise-chep2019-v2.pptx
Track 4 – Data Organisation, Management and Access: Caching Riverbank R8
Convener: Alessandra Forti (University of Manchester (GB))
Smart caching at CMS: applying AI to XCache edge services 15m
The envisaged storage and compute needs for the HL-LHC will be a factor of up to 10 above what can be achieved by the evolution of current technology within a flat budget. The WLCG community is studying possible technical solutions to evolve the current computing model in order to cope with the requirements; one of the main focuses is resource optimization, with the ultimate objective of improving performance and efficiency as well as simplifying and reducing operation costs. As of today, storage consolidation based on a Data Lake model is considered a good candidate for addressing the HL-LHC data access challenges, allowing global redundancy instead of local redundancy, dynamic adaptation of QoS, and intelligent data deployment based on cost-driven metrics. A Data Lake model under evaluation can be seen as a logical entity which hosts a distributed working set of analysis data. Compute power can be close to the lake, but also remote and thus completely external. In this context we expect data caching to play a central role as a technical solution to reduce the impact of latency and reduce network load. A geographically distributed caching layer will serve the many satellite computing centres that might appear and disappear dynamically. In this context we propose to develop a flexible and automated AI environment for smart management of the content of clustered cache systems, to optimize the hardware for the service and the operations for maintenance. In this talk we demonstrate an AI-based smart caching system, discuss the implementation of training and inference facilities along with the XCache integration with the smart decision service, and finally evaluate the effect on smart caches and data placement, comparing the data placement algorithm with and without the ML model.
CHEP2019-cache-spiga.pdf
CMS data access and usage studies at PIC Tier-1 and CIEMAT Tier-2 15m
Computing needs projections for the HL-LHC era (2026+), following the current computing models, indicate that much larger resource increases would be required than those that technology evolution at a constant budget could bring. Since worldwide budget for computing is not expected to increase, many research activities have emerged to improve the performance of the LHC processing software applications, as well as to propose more efficient deployment scenarios and techniques which might alleviate the increase of expected resources for the HL-LHC. The massively increasing amounts of data to be processed leads to enormous challenges for HEP storage systems, networks and the data distribution to end-users. This is particularly important in scenarios in which the LHC data would be distributed from sufficiently small numbers of centers holding the experiment's data. Enabling data locality via local caches on sites seems a very promising approach to hide transfer latencies while reducing the deployed storage space and number of replicas elsewhere. However, this highly depends on the workflow I/O characteristics and available network across sites. A crucial assessment is to study how the experiments are accessing and using the storage services deployed in sites in WLCG, to properly evaluate and simulate the benefits for several of the new emerging proposals within WLCG/HSF. In order to evaluate access and usage of storage, this contribution shows data access and popularity studies for the CMS Workflows executed in the Spanish Tier-1 (PIC) and Tier-2 (CIEMAT) sites supporting CMS activities, based on local and experiment monitoring data spanning more than one year. Simulations of data caches for end-user analysis data, as well as potential areas for storage savings will be reviewed.
Speaker: Jose Flix Molina (Centro de Investigaciones Energéti cas Medioambientales y Tecno)
20191105_DataAccess_CMS_PIC_CIEMAT_CHEP2019.pdf
Moving the California distributed CMS xcache from bare metal into containers using Kubernetes 15m
The University of California system has excellent networking between all of its campuses as well as a number of other Universities in CA, including Caltech, most of them being connected at 100 Gbps. UCSD and Caltech have thus joined their disk systems into a single logical xcache system, with worker nodes from both sites accessing data from disks at either site. This setup has been in place for a couple years now and has shown to work very well. Coherently managing nodes at multiple physical locations has however not been trivial, and we have been looking for ways to improve operations. With the Pacific Research Platform (PRP) now providing a Kubernetes resource pool spanning resources in the science DMZs of all the UC campuses, we have recently migrated the xcache services from being hosted bare-metal into containers. This talk presents our experience in both migrating to and operating in the new environment.
Speaker: Matevz Tadel (Univ. of California San Diego (US))
CMS Kubernetes (CHEP 2019).pdf
Creating a content delivery network for general science on the backbone of the Internet using xcaches. 15m
A general problem faced by opportunistic users computing on the grid is that delivering opportunistic cycles is simpler than delivering opportunistic storage. In this project we show how we integrated Xrootd caches placed on the internet backbone to simulate a content delivery network for general science workflows. We will show that for some workflows from LIGO, DUNE, and gravitational-wave science in general, data reuse increases CPU efficiency while decreasing the network bandwidth used.
Stashcache CHEP 2019.pdf
Implementation and performances of a DPM federated storage and integration within the ATLAS environment 15m
With the increase of storage needs at the HL-LHC horizon, data management and access will be very challenging for this critical service. The evaluation of possible solutions within the DOMA, DOMA-FR (the IN2P3 project contribution to DOMA) and ESCAPE initiatives is a major activity in selecting the most optimal ones from the experiment and site points of view. The LAPP and LPSC teams have pooled their expertise and computing infrastructures to build the FR-ALPES federation and set up a DPM federated storage. Based on their experience in managing their WLCG Tier-2 sites, their involvement in the ATLAS Grid infrastructure and the flexibility of the ATLAS and Rucio tools, the integration of this federation into the ATLAS grid infrastructure has been straightforward. In addition, the integrated DPM caching mechanism, including volatile pools, is also implemented. This infrastructure is foreseen to be a test bed for a DPM component within a Data Lake. This presentation will describe the test bed (infrastructures separated by a few ms in round-trip time) and its integration into the ATLAS framework. The impact on the sites and ATLAS operations of both the test bed implementation and its use will also be shown, as well as the measured performance in data access speed and reliability.
Speaker: Stephane Jezequel (LAPP-Annecy CNRS/USMB (FR))
CHEP19_ALPES.pdf
Analysis and modeling of data access patterns in ATLAS and CMS 15m
Data movement between sites, replication and storage are very expensive operations, in terms of time and resources, for the LHC collaborations, and are expected to be even more so in the future. In this work we derived usage patterns based on traces and logs from the data and workflow management systems of CMS and ATLAS, and simulated the impact of different caching and data lifecycle management approaches. Data corresponding to one year of operation and covering all Grid sites have been the basis for the analysis. For selected sites, this data has been augmented by access logs from the local storage system to also include data accesses not managed via the standard experiments workflow management systems. We present the results of the studies, the tools developed and the experiences with the data analysis frameworks used, and assess the validity of both current and alternative approaches to data management from a cost perspective.
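A minimal sketch of the kind of cache simulation used in such studies (an illustrative LRU model replayed over an access trace; the trace format and cache size are assumptions, not the actual tooling used in this work):

    from collections import OrderedDict

    def simulate_lru(trace, cache_size_bytes=1e15):
        """Replay (filename, size) accesses; return the byte hit rate of an LRU cache."""
        cache, used = OrderedDict(), 0.0
        hit_bytes = total_bytes = 0.0
        for filename, size in trace:
            total_bytes += size
            if filename in cache:
                hit_bytes += size
                cache.move_to_end(filename)                 # refresh recency
                continue
            while used + size > cache_size_bytes and cache:
                _, evicted_size = cache.popitem(last=False)  # evict least recently used
                used -= evicted_size
            cache[filename] = size
            used += size
        return hit_bytes / total_bytes if total_bytes else 0.0

    # Tiny synthetic example; real traces would come from the experiments' access logs.
    print(simulate_lru([("a.root", 2e9), ("b.root", 3e9), ("a.root", 2e9)]))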
Speaker: Markus Schulz (CERN)
DataAccessPatternsV2.pdf DataAccessPatternsV2.pptx
Track 5 – Software Development: Software management and packaging Riverbank R2
Convener: Mihaela Gheata (Institute of Space Science (RO))
Modern Software Stack Building for HEP 15m
High-Energy Physics has evolved a rich set of software packages that need to work harmoniously to carry out the key software tasks needed by experiments. The problem of consistently building and deploying these software packages as a coherent software stack is one that is shared across the HEP community. To that end the HEP Software Foundation Packaging Working Group has worked to identify common solutions that can be used across experiments, with an emphasis on consistent, reproducible builds and easy deployment into CVMFS or containers via CI systems. We based our approach on well identified use cases and requirements from many experiments. In this paper we summarise the work of the group in the last year and how we have explored various approaches based on package managers from industry and the scientific computing community.
We give details about a solution based on the Spack package manager which has been used to build the software required by the SuperNEMO and FCC experiments. We shall discuss changes that needed to be made to Spack to satisfy all our requirements. A layered approach to packaging with Spack, that allows build artefacts to be shared between different experiments, is described. We show how support for a build environment for software developers is provided.
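For illustration, a Spack package recipe of the kind such a stack is built from might look as follows (a hypothetical package: the name, URL, checksum, variant and dependencies are placeholders, and the exact import path depends on the Spack version):

    # Hypothetical package.py for a fictional "hep-toolkit" package.
    from spack.package import *


    class HepToolkit(CMakePackage):
        """Fictional HEP utility library, shown only to illustrate the recipe format."""

        homepage = "https://example.org/hep-toolkit"
        url = "https://example.org/hep-toolkit/hep-toolkit-1.2.0.tar.gz"

        version("1.2.0", sha256="0" * 64)   # placeholder checksum

        variant("python", default=True, description="Build the Python bindings")

        depends_on("root")                          # reuses the existing ROOT recipe
        depends_on("python@3.6:", when="+python")

        def cmake_args(self):
            args = []
            if "+python" in self.spec:
                args.append("-DBUILD_PYTHON=ON")
            return args

Once such recipes exist, a command like "spack install hep-toolkit +python" (a hypothetical spec) resolves and builds the full dependency tree, and the resulting build artefacts can be deployed to CVMFS or containers.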
Speaker: Graeme A Stewart (CERN)
Modern Software Stack Building for HEP - CHEP2019.pdf
Gentoo Prefix as a physics software manager 15m
In big physics experiments, as simulation, reconstruction and analysis become more sophisticated, scientific reproducibility is not a trivial task, and software is one of the biggest challenges. Modularity is common sense in software engineering to facilitate quality and reusability of code. However, it often introduces nested dependencies that are not obvious for physicists to work with. A package manager is the widely practised solution to organize dependencies systematically.
Portage from Gentoo Linux is both robust and flexible, and is highly regarded by the free operating system community. In the form of Gentoo Prefix, Portage can be deployed by a normal user into a directory prefix on a workstation, cloud or supercomputing node. Software is described by its build recipes along with dependency relations. Real-world use cases of Gentoo Prefix in neutrino and dark matter experiments will be demonstrated, to show how physicists can benefit from existing, proven tools to guarantee reproducibility in the simulation, reconstruction and analysis of big physics experiments.
Speaker: Prof. Benda Xu (Tsinghua University)
Gentoo-BendaXu-CHEP-r2.pdf
SpackDev: Parallel Package Development with Spack 15m
Development of scientific software has always presented challenges to its practitioners, among other things due to its inherently collaborative nature. Software systems often consist of up to several dozen closely related packages developed within a particular experiment or related ecosystem, with up to a couple of hundred externally sourced dependencies. Making improvements to one such package can require related changes to multiple other packages, and some systemic improvements can require major structural changes across the ecosystem.
There have been several attempts to produce a multi-package development system within HEP in the past, such systems usually being limited to one or a few experiments and requiring a common build system (e.g. Make, CMake). Common features include a central installation of each "release" of the software system to avoid multiple builds of the same package on a system, and integration with version control systems.
SpackDev is based on the powerful Spack build and packaging system in wide use in HPC, utilizing its package recipes and build management system to extract build instructions and manage the parallel development, build and test process for multiple packages at a time. Intended to handle packages without restriction to one internal build system, SpackDev is integrated with Spack as a command extension and is generally applicable outside HEP. We describe SpackDev's features and development over the last two years, the medium-term future, and initial experience using SpackDev in the context of the LArSoft liquid argon detector toolkit.
SpackDev-CHEP-2019.pdf
Sustainable software packaging for end users with conda 15m
The conda package manager is widely used in both commercial and academic high-performance computing across a wide range of fields. In 2016 conda-forge was founded as a community-driven package repository which allows packaging efforts to be shared across communities. This is especially important with the challenges faced when packaging modern software with complex dependency chains or specialised hardware such as GPUs. Conda-forge receives support from Anaconda Inc. and became an officially supported PyData project in 2018. Conda is a language independent package manager which can be used for providing native binaries for Linux, macOS and Windows with x86, arm64 and POWER architectures.
The ROOT framework is a fundamental component of many HEP experiments. However, quickly installing ROOT on a new laptop or deploying it in continuous integration systems typically requires a non-negligible amount of domain-specific skill. The ability to install ROOT within conda has been requested for many years and its appeal was proven by over 18,000 downloads within the first 5 months of it being made available. In addition, it has subsequently been used as a base for distributing other packages such as CMS's event display package (Fireworks) and the alphatwirl analysis framework.
In this contribution we will discuss the process of adding ROOT releases to conda-forge and how nightly builds of ROOT are being provided to allow end users to provide feedback on new and experimental features such as RDataFrame. We also discuss our experience distributing conda environments using CVMFS for physics analysts to use both interactively and with distributed computing resources.
Speaker: Chris Burr (CERN)
2019-11-05_CHEP2019-conda.pdf
CERN AppStore: Development of a multi-platform application management system for BYOD devices at CERN 15m
The number of BYOD devices at CERN is continuously growing. Additionally, it is desirable to move from a centrally managed model to a distributed model where users are responsible for their own devices. Following this strategy, new tools have to be provided to distribute and - in the case of licensed software - also track applications used by CERN users. The available open-source and commercial solutions were analyzed and none of them proved to be a good fit for CERN use cases. Therefore, it was decided to develop a system that could integrate various open-source solutions and provide the desired functionality for multiple platforms, both mobile and desktop. This paper presents the architecture and design decisions made to achieve a platform-independent, modern, maintainable and extensible system for software distribution at CERN.
Speaker: Tamas Bato (CERN)
appstoreChep2019.pdf
Chopin Management System: improving Windows infrastructure monitoring and management 15m
CERN Windows server infrastructure consists of about 900 servers. The management and maintenance is often a challenging task as the data to be monitored is disparate and has to be collected from various sources. Currently, alarms are collected from the Microsoft System Center Operation Manager (SCOM) and many administrative actions are triggered through e-mails sent by various systems or scripts.
The objective of the Chopin Management System project is to maximize automation and facilitate the management of the infrastructure. The current status of the infrastructure, including essential health checks, is centralized and presented through a dashboard. The system collects information necessary for managing the infrastructure in real time, such as hardware configuration or Windows updates, and reacts to any change or failure instantly. As part of the system design, big data streaming technologies are employed in order to assure the scalability and fault-tolerance of the service, should the number of servers drastically grow. Server events are aggregated and processed in real time through the use of these technologies, ensuring a quick response to possible failures. This paper presents details of the architecture and design decisions taken in order to achieve a modern, maintainable and extensible system for Windows Server Infrastructure management at CERN.
Speaker: Sebastian Bukowiec (CERN)
chep2019-chopin.pdf
Track 6 – Physics Analysis: Lattice QCD Hall G
Convener: Phiala Shanahan (Massachusetts Institute of Technology)
GUM: GAMBIT Universal Models 15m
GUM is a new feature of the GAMBIT global fitting software framework, which provides a direct interface between Lagrangian level tools and GAMBIT. GUM automatically writes GAMBIT routines to compute observables and likelihoods for physics beyond the Standard Model. I will describe the structure of GUM, the tools (within GAMBIT) it is able to create interfaces to, and the observables it is able to compute.
Speaker: Sanjay Bloor (Imperial College London)
GUM_SanjayBloor.pdf
Computing the properties of nuclei from QCD 15m
I will discuss recent advances in lattice QCD, from the physics and computational points of view, that have enabled a number of basic properties and interactions of light nuclei to be determined directly from QCD. These calculations offer the prospect of providing nuclear matrix element inputs necessary for a range of intensity-frontier experiments (DUNE, mu2e) and dark matter direct-detection experiments, along with well-quantified uncertainties.
Speaker: Dr William Detmold (MIT)
2019_CHEP_Adelaide_DETMOLD.pdf
Computing the magnetic field response of the proton 15m
Background field methods offer an approach through which fundamental non-perturbative hadronic properties can be studied. Lattice QCD is the only ab initio method with which Quantum Chromodynamics can be studied at low energies; it involves numerically calculating expectation values in the path integral formalism, which requires substantial investment in high-performance supercomputing resources. Here the background field method is used with lattice QCD to induce a uniform background magnetic field.
A particular challenge of lattice QCD is isolating the desired state, rather than a superposition of excited states. While extensive work has been performed which allows the ground state to be identified in lattice QCD calculations, this remains a challenging proposition for the ground state in the presence of a background field. Quark level operators are introduced to resolve this challenge and thus allow for extraction of the magnetic polarisability that characterises the response of the nucleon to a magnetic field.
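For orientation, background-field determinations of the magnetic polarisability conventionally fit the field dependence of the energy; schematically, for a neutral spin-1/2 state (the charged proton acquires additional Landau-level terms),

$$ E(B) \;=\; m \;-\; \vec{\mu}\cdot\vec{B} \;-\; \frac{1}{2}\,4\pi\beta\,B^{2} \;+\; \mathcal{O}(B^{3}), $$

so that, once the ground state is cleanly isolated, the quadratic response in $B$ determines the magnetic polarisability $\beta$.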
Speaker: Ryan Bignell (University of Adelaide)
presentation.pdf
Investigating the anomalous magnetic moment on the lattice 15m
There exists a long-standing discrepancy of around 3.5 sigma between experimental measurements and Standard Model calculations of the magnetic moment of the muon. Current experiments aim to reduce the experimental uncertainty by a factor of 4, and Standard Model calculations must be improved by a similar amount. The largest uncertainty in the Standard Model calculation comes from the QCD contribution, in particular the leading-order hadronic vacuum polarisation (HVP). To calculate the HVP contribution, we use lattice gauge theory, which allows us to study QCD at low energies. In order to better understand this quantity, we investigate the effect of QED corrections to the leading-order HVP term by including QED in our lattice calculations, and we also investigate flavour-breaking effects. This is done using fully dynamical QCD+QED gauge configurations generated by the QCDSF collaboration and a novel method of quark tuning.
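For reference, in one standard Euclidean-momentum convention the leading-order HVP contribution is written as an integral over the subtracted vacuum polarisation computed on the lattice (normalisation conventions differ between groups),

$$ a_{\mu}^{\mathrm{HVP,\,LO}} \;=\; \Big(\frac{\alpha}{\pi}\Big)^{2} \int_{0}^{\infty} \! dQ^{2}\, f(Q^{2})\,\hat{\Pi}(Q^{2}), \qquad \hat{\Pi}(Q^{2}) \equiv \Pi(Q^{2}) - \Pi(0), $$

with a known QED kernel $f(Q^{2})$ that strongly weights the low-$Q^2$ region; QED and flavour-breaking corrections then enter as small shifts of $\hat{\Pi}$.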
Speaker: Alex Westin (The University of Adelaide)
chep_AlexWestin.pdf
Directly calculating the glue component of the nucleon in lattice QCD 15m
Computing the gluon component of momentum in the nucleon is a difficult and computationally expensive problem, as the matrix element involves a quark-line-disconnected gluon operator which suffers from ultra-violet fluctuations. Also necessary for a successful determination is the non-perturbative renormalisation of this operator. We investigate this renormalisation here by direct computation in the RI-MOM scheme. A clear statistical signal is obtained in the direct calculation by an adaptation of the Feynman-Hellmann technique. A comparison is conducted in order to verify the energy-momentum sum rule of the nucleon.
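The Feynman-Hellmann approach referred to above relates the desired matrix element to the response of the nucleon energy to a weak coupling $\lambda$ of the operator added to the action; schematically (normalisation conventions vary between implementations),

$$ \left.\frac{\partial E_{N}(\lambda)}{\partial \lambda}\right|_{\lambda=0} \;=\; \frac{\langle N|\hat{O}|N\rangle}{2E_{N}}, $$

so a clear signal in the $\lambda$-dependence of the energy translates directly into the gluonic matrix element, which must then be combined with the non-perturbative renormalisation discussed above.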
Speaker: Tomas Howson (University of Adelaide)
The computational challenge of lattice chiral symmetry - Is it worth the expense? 15m
The origin of the low-lying nature of the Roper resonance has been the subject of significant interest for many years, including several investigations using lattice QCD. It has been claimed that chiral symmetry plays an important role in our understanding of this resonance. We present results from our systematic examination of the potential role of chiral symmetry in the low-lying nucleon spectrum through the direct comparison of the clover and overlap fermion actions. After a brief summary of the background motivation, we specify the computational details of the study and outline our comparison methodologies. We do not find any strong evidence supporting the claim that chiral symmetry plays a significant role in understanding the Roper resonance on the lattice.
Speaker: Adam Virgili
Track 7 – Facilities, Clouds and Containers: Infrastructure Riverbank R7
Convener: Sang Un Ahn (Korea Institute of Science & Technology Information (KR))
WLCG Web Proxy Auto Discovery for Dynamically Created Web Proxies 15m
The WLCG Web Proxy Auto Discovery (WPAD) service provides a convenient mechanism for jobs running anywhere on the WLCG to dynamically discover web proxy cache servers that are nearby. The web proxy caches are general-purpose for a number of different HTTP applications, but different applications have different usage characteristics and not all proxy caches are engineered to work with the heaviest loads. For this reason, the initial sources of information for WLCG WPAD were the static configurations that ATLAS and CMS maintain for the Conditions data that they read through the Frontier Distributed Database system, which is the most demanding popular WLCG application for web proxy caches. That works well at traditional statically defined WLCG sites, but now that the usage of commercial clouds is increasing, there is also a need for web proxy caches to dynamically register themselves as they are created. A package called Shoal had already been created to manage dynamically created web proxy caches. This paper describes the integration of the Shoal package into the WLCG WPAD system, such that both statically and dynamically created web proxy caches can be located from a single source. It also describes other improvements to the WLCG WPAD system since the last CHEP publication.
CHEP19_Talk_WPAD.pdf
Designing a new infrastructure for ATLAS Online Web Services 15m
Within the ATLAS detector, the Trigger and Data Acquisition system is responsible for the online processing of data streamed from the detector during collisions at the Large Hadron Collider (LHC) at CERN. The online farm is composed of ~4000 servers processing the data read out from ~100 million detector channels through multiple trigger levels. The capability to monitor the ongoing data taking and all the involved applications is essential for debugging and intervening promptly to ensure efficient data taking. The base of the current web service architecture was designed a few years ago, at the beginning of ATLAS operation (Run 1). It was intended primarily to serve static content from a Network-Attached Storage, privileging strict security by using separate web servers for internal (ATLAS Technical and Control Network - ATCN) and external (CERN General Purpose Network and public internet) access. Over the years, it has become necessary to add to the static content an increasing number of dynamic web-based User Interfaces, as they provide new functionalities and replace legacy desktop UIs. These are typically served by applications on VMs inside ATCN and made accessible externally via chained reverse HTTP proxies. As the trend towards Web UIs continues, the current design has shown its limits, and its increasing complexity has become an issue for maintenance and growth. It is, therefore, necessary to review the overall web services architecture for ATLAS, taking into account the current and future needs of the upcoming LHC Run 3.
In this paper, we present our investigation and roadmap to re-design the web services system to better operate and monitor the ATLAS detector, while maintaining the security of critical services, such as Detector Control System, and maintaining the separation of remote monitoring and on-site control according to ATLAS policies.
Speaker: Diana Scannicchio (University of California Irvine (US))
DianaScannicchio.pdf DianaScannicchio.pptx
Construction of a New Data Center at BNL 15m
Computational science, data management and analysis have been key factors in the success of Brookhaven Lab's scientific programs at the Relativistic Heavy Ion Collider (RHIC), the National Synchrotron Light Source (NSLS-II), the Center for Functional Nanomaterials (CFN), and in biological, atmospheric, and energy systems science, Lattice Quantum Chromodynamics (LQCD) and Materials Science as well as our participation in international research collaborations, such as the ATLAS Experiment at Europe's Large Hadron Collider (LHC) and Belle II Experiment at KEK (Japan). The construction of a new data center is an acknowledgement of the increasing demand for computing and storage services at BNL.
The Computing Facility Revitalization (CFR) project is aimed at repurposing the former National Synchrotron Light Source (NSLS-I) building as the new datacenter for BNL. The new data center is to become available in early 2021 for ATLAS compute, disk storage and tape storage equipment, and later that year - for all other collaborations supported by the RACF/SDCC Facility, including: STAR, PHENIX and sPHENIX experiments at RHIC collider at BNL, Belle II Experiment at KEK (Japan), and BNL CSI HPC clusters. Migration of the majority of IT payload from the existing datacenter to the new datacenter is expected to begin with the central networking systems and first BNL ATLAS Tier-1 Site tape robot in early FY21, and it is expected to continue throughout FY21-23. This presentation will highlight the key MEP facility infrastructure components of the new data center. Also, we will describe our plans to migrate IT equipment between datacenters, the inter-operational period in FY21, gradual IT equipment replacement in FY21-24, and show the expected state of occupancy and infrastructure utilization for both datacenters in FY25.
CFR-CHEP-2019.pdf
Preparing CERN Tier-0 data centres for LHC Run3 15m
Since 2013 CERN's local data centre combined with a colocation infrastructure at the Wigner data centre in Budapest have been hosting the compute and storage capacity for WLCG Tier-0. In this paper we will describe how we try to optimize and improve the operation of our local data centre to meet the anticipated increment of the physics compute and storage requirements for Run3, taking into account two important changes on the way: the end of the colocation contract with Wigner in 2019 and the loan of 2 out of 6 prefabricated compute containers being commissioned by the LHCb experiment for their online computing farm.
Speaker: Olof Barring (CERN)
CHEP19-CERNTier0DCforRun3.pdf CHEP19-CERNTier0DCforRun3.pptx
Computing Activities at the Spanish Tier-1 and Tier-2s for the ATLAS experiment towards the LHC Run3 and High Luminosity (HL-LHC) periods 15m
The ATLAS Spanish Tier-1 and Tier-2s have more than 15 years of experience in the deployment and development of LHC computing components and their successful operations. The sites are already actively participating in, and even coordinating, emerging R&D computing activities developing the new computing models needed in the LHC Run3 and HL-LHC periods.
In this contribution, we present details on the integration of new components, such as HPC computing resources, to execute ATLAS simulation workflows; the development of new techniques to improve efficiency in a cost-effective way, such as storage and CPU federations; and improvements in Data Organization, Management and Access through storage consolidations ("data-lakes"), the use of data Caches, and improving experiment data catalogues, like Event Index. The design and deployment of novel analysis facilities using GPUs together with CPUs and techniques like Machine Learning will also be presented.
ATLAS Tier-1 and Tier-2 sites in Spain are, and will be, contributing to significant R&D in computing, evaluating different models for improving performance of computing and data storage capacity in the LHC High Luminosity era.
Speaker: Santiago Gonzalez De La Hoz (Univ. of Valencia and CSIC (ES))
Spanish-T1-T2-chep19-v2.pdf
Beyond HEP: Photon and accelerator science computing infrastructure at DESY 15m
DESY is one of the largest accelerator laboratories in Europe, developing and operating state-of-the-art accelerators used to perform fundamental science in the areas of high-energy physics, photon science and accelerator development.
While for decades high-energy physics has been the most prominent user of the DESY compute, storage and network infrastructure, other scientific areas such as photon science and accelerator development have caught up and are now dominating the demands on the DESY infrastructure resources, with significant consequences for IT resource provisioning. In this contribution, we present an overview of the computational, storage and network resources covering the various physics communities on site.
These range from HTC batch-like offline processing in the Grid and interactive user analysis resources in the National Analysis Factory for the HEP community, to the computing needs of accelerator development or of photon sciences such as PETRA III or the European XFEL. Since DESY co-hosts these experiments and their data taking, their requirements include fast, low-latency online processing for data taking and calibration as well as offline processing, thus HPC workloads, that are run on the dedicated Maxwell HPC cluster.
As all communities face significant challenges in the coming years due to changing environments and increasing data rates, we will discuss how this will be reflected in the necessary changes to the computing and storage infrastructures.
We will present DESY compute cloud and container orchestration plans as a possible basis for infrastructure and platform services. We will show examples of Jupyter for small-scale interactive analysis, as well as its integration into large-scale resources such as batch systems or Spark clusters.
To overcome the fragmentation of the various resources for all scientific communities at DESY, we explore how to integrate them into a seamless user experience in an Interdisciplinary Data and Analysis Facility.
Speaker: Thomas Hartmann (Deutsches Elektronen-Synchrotron (DE))
CHEP-2019-BeyondHEP.pdf CHEP-2019-BeyondHEP.pptx
Track 8 – Collaboration, Education, Training and Outreach: Open data Riverbank R1
Open data provenance and reproducibility: a case study from publishing CMS open data 15m
In this paper we present the latest CMS open data release published on the CERN Open Data portal. Samples of raw, collision and simulated datasets were released together with detailed information about the data provenance. The data production chain covers the necessary compute environments, the configuration files and the computational procedures used in each data production step. We describe the data curation techniques used to obtain and publish the data provenance information, and we study the possibility of reproducing parts of the released data using the publicly available information. The present work demonstrates the usefulness of releasing selected samples of raw and primary data in order to ensure the completeness of the information about the data production chain for general data scientists and other non-specialists interested in using particle physics data for education or research purposes.
Speaker: Tibor Simko (CERN)
chep2019-opendata-cms-slides.pdf
Using CMS Open Data for education, outreach and software benchmarking 15m
The CMS collaboration at the CERN LHC has made more than one petabyte of open data available to the public, including large parts of the data which formed the basis for the discovery of the Higgs boson in 2012. Apart from their scientific value, these data can be used not only for education and outreach, but also for open benchmarks of analysis software. However, in their original format, the data cannot be accessed easily without experiment-specific knowledge and skills. Work is presented that allows open analyses to be set up which come close to the published ones, while requiring only minimal experiment-specific knowledge and software. The suitability of this approach for education and outreach is demonstrated with analyses that have been made fully accessible to the public via the CERN open data portal. In the second part of the talk, the value of these data as a basis for benchmarks of analysis software under realistic conditions of a high-energy physics experiment is discussed.
CHEP 2019_ Using CMS Open Data for education, outreach and software development.pdf
Open Data Science Mesh: friction-free collaboration for researchers bridging High-Energy Physics and European Open Science Cloud 15m
Open Data Science Mesh (CS3MESH4EOSC) is a newly funded project to create a new generation, interoperable federation of data and higher-level services to enable friction-free collaboration between European researchers.
This new EU-funded project brings together 12 partners from the CS3 community (Cloud Synchronization and Sharing Services). The consortium partners include CERN, Danish Technical University (DK), SURFSARA (NL), Poznan Supercomputing Centre (PL), CESNET (CZ), AARNET (AUS), SWITCH (CH), University of Munster (DE), Ailleron SA (PL), Cubbit (IT), Joint Research Centre (BE) and Fundacion ESADE (ES). CERN acts as project coordinator.
The consortium already operates services and storage-centric infrastructure for around 300 thousand scientists and researchers across the globe. The project will integrate these local existing sites and services into a seamless mesh infrastructure which is fully interconnected with the EOSC-Hub, as proposed in the European Commission's Implementation Roadmap for EOSC.
The project will provide a framework for applications in several major areas: Data Science Environments, Open Data Systems, Collaborative Documents, On-demand Large Dataset Transfers and Cross-domain Data Sharing.
The collaboration between the users will be enabled by a simple sharing mechanism: a user will select a file or folder to share with other users at other sites. Such shared links will be established and removed dynamically by the users from a streamlined web interface of their local storage systems. The mesh will automatically and contextually enable different research workflow actions based on the type of content shared in the folder. One of the excellence areas of CS3 services is access to content from all types of devices: web, desktop applications and mobile devices. The project augments this capability with access to content stored on remote sites and will in practice introduce FAIR principles in European science.
The project will leverage technologies developed and integrated in the research community, such as ScienceBox (CERNBox, SWAN, EOS), EGI-CheckIn, File Transfer Service (FTS), ARGO, EduGAIN and others. The project will also involve commercial cloud providers, integrating their software and services.
CHEP-CS3Mesh-EduOutreach-2019.pdf
ATLAS Open Data software: the development of simple-but-real HEP data analysis examples 15m
The ATLAS Collaboration is releasing a new set of recorded and simulated data samples at a centre-of-mass energy of 13 TeV. This new dataset was designed after an in-depth review of the usage of the previous release of samples at 8 TeV. That review showed that capacity-building is one of the most important and abundant uses of public ATLAS samples. To fulfil the requirements of the community and at the same time attract new users and use cases, we developed real analysis software based on ROOT in two of the most popular programming languages: C++ and Python. These so-called analysis frameworks are complex enough to reproduce with reasonable accuracy the results (figures and final yields) of published ATLAS Collaboration physics papers, but still light enough to be run on the commodity computers that university students and regular classrooms have, allowing students to explore LHC data with techniques similar to those used by current ATLAS analysers. We present the development path and the final result of these analysis frameworks, their products, and how they are distributed to final users inside and outside the ATLAS community.
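To illustrate the kind of lightweight analysis these frameworks enable (this is not the ATLAS Open Data code itself), the sketch below reads one hypothetical branch from a ROOT file with uproot and histograms it; the file, tree and branch names are placeholders, and the branch is assumed to be a flat per-event quantity.

```python
# Illustrative sketch of a commodity-computer analysis step: histogram one
# branch of an open-data-style ROOT file. File/tree/branch names are placeholders.
import uproot
import matplotlib.pyplot as plt

def plot_branch(path, tree_name="mini", branch="met_et"):
    """Histogram one (assumed flat, per-event) branch of a ROOT file."""
    with uproot.open(path) as f:
        values = f[tree_name].arrays([branch], library="np")[branch]
    plt.hist(values / 1000.0, bins=50, range=(0.0, 200.0))  # MeV -> GeV
    plt.xlabel(branch + " [GeV]")
    plt.ylabel("events")
    plt.savefig(branch + ".png")

if __name__ == "__main__":
    plot_branch("opendata_sample.root")  # placeholder file name
```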
ATLAS Open Data - CHEP2019.pdf
ATLAS Open Data: Using open education resources effectively 15m
Perform data analysis and visualisation on your own computer? Yes, you can! Commodity computers are now very powerful in comparison to only a few years ago. On top of that, the performance of today's software and data development techniques facilitates complex computation with fewer resources. Cloud computing is not always the solution, and reliability or even privacy is regularly a concern. While the Infrastructure as a Service (IaaS) and Software as a Service (SaaS) philosophies are a key part of current scientific endeavours, there is a misleading feeling that we need to have remote computers to do any kind of data analysis. One of the aims of the ATLAS Open Data project is to provide resources (data, software and documents) that can be stored and executed on computers with minimal or no internet access, and on as many different operating systems as possible. This approach is viewed as complementary to the IaaS/SaaS approach, where local university, student and trainer resources can be used in an effective and reproducible way, making the HEP and Computer Science fields accessible to more people. We present the latest developments in the production and use of local Virtual Machines and Docker Containers for the development of physics data analysis. We also discuss example software and Jupyter notebooks, which are in constant development for use in classrooms, and on students' and teachers' computers around the world.
Speaker: Leonid Serkin (INFN Gruppo Collegato di Udine and ICTP Trieste (IT))
ATLAS_Open_Data_LSerkin_CHEP19.pdf
Dataset of tau neutrino interactions recorded by OPERA experiment 15m
We describe the dataset of very rare events recorded by the OPERA experiment. Those events represent tracks of particles associated with tau neutrinos emerged from a pure muon neutrino beam, due to neutrino oscillations. The OPERA detector, located in the underground Gran Sasso Laboratory, consisted of an emulsion/lead target with an average mass of about 1.2 kt, complemented by the electronic detectors. It was exposed, from 2008 to 2012, to the CNGS (CERN Neutrinos to Gran Sasso) beam, an almost pure muon neutrino beam with a baseline of 730 km, collecting a total of $17.97 \times 10^{19}$ protons on target. The OPERA Collaboration eventually assessed the discovery of $\nu_{\mu} \rightarrow \nu_{\tau}$ oscillations with a significance of 6.1 $\sigma$ by observing ten $\nu_{\tau}$ candidates. These events have been published at CERN Open Data Portal.
chep2019-opendata-opera-slides.pdf
Track X – Crossover sessions: Optimisation and acceleration Riverbank R4
Conveners: Teng Jian Khoo (Universite de Geneve (CH)), Yu Nakahama Higuchi (Nagoya University (JP))
FPGA-accelerated machine learning inference as a service for particle physics computing 15m
Large-scale particle physics experiments face challenging demands for high-throughput computing resources both now and in the future. New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle physics for simulation, reconstruction, and analysis are naturally deployed on such platforms. We demonstrate that the acceleration of machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that requires minimal modification to the current computing model. As examples, we retrain the ResNet50 convolutional neural network to demonstrate state-of-the-art performance for top quark jet tagging at the LHC and apply a ResNet50 model with transfer learning for neutrino event classification. Using Microsoft Azure Machine Learning deploying Intel FPGAs to accelerate the ResNet50 image classification model, we achieve average inference times of 60 (10) milliseconds with our experimental physics software framework deployed as a cloud (edge or on-premises) service, representing an improvement by a factor of approximately 30 (175) in model inference latency over traditional CPU inference in current experimental hardware. A single FPGA service accessed by many CPUs achieves a throughput of 600-700 inferences per second using an image batch of one, comparable to large batch-size GPU throughput and significantly better than small batch-size GPU throughput. Deployed as an edge or cloud service for the particle physics computing model, coprocessor accelerators can have a higher duty cycle and are potentially much more cost-effective.
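A minimal sketch of the client side of such an inference service is shown below; the endpoint URL and JSON schema are assumptions for illustration only and do not describe the actual Azure/FPGA deployment used in the study.

```python
# Sketch of "inference as a service" from the experiment side: send one image
# to a hypothetical REST endpoint fronting an accelerated ResNet50 and time the
# round trip. URL and payload schema are invented for this example.
import json
import time
import urllib.request
import numpy as np

def classify(image, url="http://inference.example.org/v1/resnet50:predict"):
    """Send one image to a (hypothetical) REST inference endpoint and time it."""
    payload = json.dumps({"instances": [image.tolist()]}).encode()
    request = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    start = time.perf_counter()
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())
    latency_ms = 1000.0 * (time.perf_counter() - start)
    return result, latency_ms

if __name__ == "__main__":
    fake_image = np.random.rand(224, 224, 3).astype(np.float32)  # stand-in input
    predictions, latency = classify(fake_image)
    print("round-trip latency: %.1f ms" % latency)
```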
sonic CHEP2019.pdf
GPU-based reconstruction and data compression at ALICE during LHC Run 3 15m
In LHC Run 3, ALICE will increase the data taking rate significantly, to 50 kHz continuous readout of minimum bias Pb-Pb collisions. The reconstruction strategy of the online-offline computing upgrade foresees a first synchronous online reconstruction stage during data taking, enabling detector calibration, and a subsequent calibrated asynchronous reconstruction stage. The significant increase in the data rate poses challenges for online and offline reconstruction as well as for data compression. Compared to Run 2, the online farm must process 50 times more events per second and achieve a higher data compression factor. ALICE will rely on GPUs to perform real-time processing and data compression of the Time Projection Chamber (TPC) detector, the biggest contributor to the data rate. With GPUs available in the online farm, we are evaluating their usage also for the full tracking chain during the asynchronous reconstruction for the silicon Inner Tracking System (ITS) and Transition Radiation Detector (TRD). The software is written in a generic way, such that it can also run on processors on the WLCG with the same reconstruction output. We give an overview of the status and the current performance of the reconstruction and the data compression implementations on the GPU for the TPC and for the global reconstruction.
Speaker: David Rohr (CERN)
2019-11-05 CHEP 2019.pdf
Reconstruction of track candidates at the LHC crossing rate using FPGAs 15m
In 2021 the LHCb experiment will be upgraded, and the DAQ system will be based on full reconstruction of events, at the full LHC crossing rate. This requires an entirely new system, capable of reading out, building and reconstructing events at an average rate of 30 MHz. In facing this challenge, the system could take advantage of a fast pre-processing of data on dedicated FPGAs. We present the results of an R&D on these technologies developed in the context of the LHCb Upgrade I. In particular, we discuss the details and potential benefits of an approach based on producing in real-time sorted collections of hits in the VELO detector (pre-tracks). These pre-processed data can then be used as seeds by the High Level Trigger (HLT) farm to find tracks for the Level 1 trigger with much lower computational effort than possible by starting from the raw detector data, thus freeing an important fraction of the power of the CPU farm for higher level processing tasks.
Speaker: Giulia Tuci (Universita & INFN Pisa (IT))
CHEP_2019_Tuci.pdf
Quantum annealing algorithms for track pattern recognition 15m
The pattern recognition of the trajectories of charged particles is at the core of the computing challenge for the HL-LHC, which is currently the center of a very active area of research. There has also been rapid progress in the development of quantum computers, including the D-Wave quantum annealer. In this talk we will discuss results from our project investigating the use of annealing algorithms for pattern recognition. We will present results we achieved expressing pattern recognition as a Quadratic Unconstrained Binary Optimization (QUBO) that can be solved using a D-Wave Quantum Annealer. We generated QUBOs that encode the pattern recognition problem at the LHC on the TrackML dataset, and we solved them using D-Wave qbsolv hybrid optimizer. These achieved a performance exceeding 99% for purity, efficiency, and for the TrackML score at low track multiplicities. We will discuss how the algorithm performs at track multiplicities expected at the HL-LHC. We will also report on early results comparing digital annealers to quantum annealers. We will also discuss results from the application of annealing algorithms to resolve between tracks in the dense cores of jets, and possible improvement of the annealing algorithm in a new workflow with a quantum/classical hybrid optimizer. We will conclude with future perspectives on using annealing-based algorithms for pattern recognition in high-energy physics experiments.
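The toy example below illustrates the QUBO formulation on a deliberately tiny instance, solved by brute force; a real workflow would hand the matrix to D-Wave's qbsolv hybrid optimizer or a quantum annealer, and the coefficients here are invented.

```python
# Toy QUBO in the spirit of annealing-based track pattern recognition: each
# binary variable flags whether a candidate belongs to a track; diagonal terms
# are biases, off-diagonal terms reward compatibility or penalise conflicts.
import itertools

Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0, (3, 3): -1.0,
    (0, 1): 2.0,   # penalty: candidates 0 and 1 share a hit
    (2, 3): -0.5,  # reward: candidates 2 and 3 are mutually consistent
}

def qubo_energy(x, Q):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over the 2^4 assignments (a real instance goes to an annealer).
best = min(itertools.product([0, 1], repeat=4), key=lambda x: qubo_energy(x, Q))
print("selected candidates:", [i for i, xi in enumerate(best) if xi],
      "energy:", qubo_energy(best, Q))
```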
Speaker: Masahiko Saito (University of Tokyo (JP))
QA_Tracking_CHEP2019_4.pdf
The Tracking Machine Learning Challenge 15m
The HL-LHC will see ATLAS and CMS recording proton bunch collisions with track multiplicities of up to 10,000 charged tracks per event. Algorithms need to be developed to harness the increased combinatorial complexity. To engage the Computer Science community to contribute new ideas, we have organized a Tracking Machine Learning challenge (TrackML). Participants are provided events with 100k 3D points, and are asked to group the points into tracks; they are also given a 100GB training dataset including the ground truth. The challenge was run in two phases. The first "Accuracy" phase ran on the Kaggle platform from May to August 2018; algorithms were judged only on a score related to the fraction of correctly assigned hits. The second "Throughput" phase ran from September 2018 to March 2019 on Codalab and required code submission; algorithms were then ranked by combining accuracy and speed. The first phase saw 653 participants, with top performers contributing innovative approaches (see arXiv:1904.06778). The second phase has recently finished and featured some astonishingly fast solutions. A "grand finale" workshop took place at CERN in early July 2019. The talk will report on the lessons from the TrackML challenge and perspectives.
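The sketch below shows a much simplified, majority-vote style hit-assignment score in the spirit of the accuracy phase; it is not the official TrackML scoring implementation.

```python
# Simplified scoring sketch: match each reconstructed track to the particle
# contributing the majority of its hits and count those hits as "good".
from collections import Counter

def simple_score(truth, reco):
    """truth/reco: dicts mapping hit_id -> particle_id / track_id."""
    tracks = {}
    for hit, track in reco.items():
        tracks.setdefault(track, []).append(hit)
    good_hits = 0
    for hits in tracks.values():
        particle, count = Counter(truth[h] for h in hits).most_common(1)[0]
        if count > 0.5 * len(hits):      # majority of hits from one particle
            good_hits += count
    return good_hits / len(truth)

truth = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
reco = {1: 10, 2: 10, 3: 10, 4: 11, 5: 11, 6: 12}
print("score:", round(simple_score(truth, reco), 3))
```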
vlimant_CHEP19_TrackML_Nov19.pdf
Evolving Geant4 to cope with the new HEP computing challenges 15m
The future High Energy Physics experiments, based on upgraded or next-generation particle accelerators with higher luminosity and energy, will put more stringent demands on simulation in terms of precision and speed. In particular, matching the statistical uncertainties of the collected experimental data will require the simulation toolkits to be more CPU-efficient, while keeping the same, if not higher, physics precision. On the other hand, computing architectures have evolved considerably, opening new opportunities for code improvements based on parallelism and the use of compute accelerators.
In this talk we present the R&D activities to cope with the new HEP computing challenges, taking place in the context of the Geant4 simulation toolkit. We first discuss the general scope and plan of this initiative and we introduce the different directions that are being explored with the potential benefits they can bring. The second part is focused on a few concrete examples of the R&D projects, like the use of tasking-based parallelism with possible off-load to GPUs, introduction of vectorization at different stages of the simulation or implementation of 'per volume'-specialized geometry navigators. We discuss the technical details of the different prototype implementations. In conclusion, our first results in those different areas are reported and the plans for the near future are presented.
Speaker: Andrei Gheata (CERN)
CHEP2019_Evolving_Geant4_v4.pdf
Posters: A Hall F
Hall F
Evaluation of Linux distributions for SoC devices on custom electronics in the CMS Network 15m
System on Chip (SoC) devices have become popular for custom electronics HEP boards. Advantages include the tight integration of FPGA logic with CPU, and the option for having relatively powerful CPUs, with the potential of running a fully fledged operating system.
In the CMS trigger and data acquisition system, a small number of back-end electronics boards with Xilinx Zynq SoCs have already been in use since 2015 (LHC Run 2). These are stand-alone installations. For the High Luminosity phase of the LHC starting around 2026, entirely new CMS back-end electronics is being developed. It is expected that SoC devices will be used at large scale (order of 1000), comparable to the number of High Level Trigger (HLT) nodes today, but with diverse use cases, hardware types, and capabilities (memory, CPU power).
This large scale will pose challenges for their integration in the experiment network, system administration services and overall configuration management. Issues include the time distribution, IP/name distribution (DHCP or other), remote system logs, read-only or read-write root file systems, NFS mounted root or application file systems, local or network system boot, and configuration management of devices on various linux distributions. Furthermore, with the emergence of more powerful CPUs it will be interesting to see how much of the data acquisition control and monitoring software could or should be deployed on those devices compared to server PCs.
We have evaluated a number of Linux distributions (Yocto, PetaLinux, ArchLinux, CentOS), addressing the complexity of building a distribution, the requirements on hardware resources, and the characteristics for network and sysadmin integration.
Speaker: Marc Dobson (CERN)
CHEP 2019 Poster_v2.1.pdf
SDN for End-to-End Networking at Exascale 15m
The Caltech team, in collaboration with network, computer science, and HEP partners at DOE laboratories and universities, is building smart network services ("The Software-defined network for End-to-end Networked Science at Exascale (SENSE) research project") to accelerate scientific discovery.
The overarching goal of SENSE is to enable National Labs and universities to request and provision end-to-end intelligent network services for their application workflows leveraging SDN capabilities. The project's architecture, models, and demonstrated prototype define the mechanisms needed to dynamically build end-to-end virtual guaranteed networks across administrative domains, with no manual intervention. In addition, a highly intuitive 'intent' based interface, as defined by the project, allows applications to express their high-level service requirements, and an intelligent, scalable model-based software orchestrator converts that intent into appropriate network services, configured across multiple types of devices.
In this paper, we will present the system's architecture and its components, and first results of dynamic network resource provisioning and Quality of Service for data transfers using FTS3 and other transfer protocols, such as GridFTP, XRootD and FDT.
Partial wave analysis with OpenAcc 15m
Partial wave analysis is an important tool in hadron physics. The large data sets from experiments at the high-precision frontier require high computational power. To utilize GPU clusters and the resources of supercomputers with various types of accelerator, we implemented a software framework for partial wave analysis using OpenACC, OpenAccPWA. OpenAccPWA provides convenient approaches for exposing parallelism in the code and excellent support for the large amount of existing CPU-based code for partial wave amplitudes. It can avoid a heavy workload of code migration from CPU to GPU.
This poster will briefly introduce the software framework and performance of OpenAccPWA.
Speaker: Yanjia Xiao
OpenAccPWA_poster.pdf
A culture shift: transforming learning at CERN 20m
To accomplish its mission, the European Organization for Nuclear Research (CERN, Switzerland) is committed to the continuous development of its personnel through a systematic and sustained learning culture that aims at keeping the knowledge and competences of the personnel in line with the evolving needs of the Organisation.
With this goal in mind, CERN supports learning in its broadest sense and promotes a variety of learning methods. Over the last few years, CERN has focused its efforts on expanding the learning opportunities of its personnel via newly available software and e-learning tools and methodologies, thereby bringing a shift in the learning culture of the organisation. In September 2018, CERN launched a new Learning Management System centralizing all learning opportunities in a single platform, the 'CERN Learning Hub'. In addition, new e-learning resources are now widely available to the personnel, including customized internally created e-learning courses, an e-library, a commercial e-learning platform for self-paced learning and online surveys (180/360 feedback tools for CERN managers and leaders).
This paper presents the experience gained by CERN in testing and adopting these new e-learning technologies and discusses the future vision for CERN.
[email protected]
A Faster, More Accessible RooFit 15m
RooFit and RooStats, the toolkits for statistical modelling in ROOT, are used in most searches and measurements at the Large Hadron Collider, as well as B factories. The large datasets to be collected in Run 3 will enable measurements with higher precision, but will require faster data processing to keep fitting times stable.
In this talk, a redesign of RooFit's internal dataflow will be presented. Cache locality and data loading are improved, and batches of data are processed with vectorised SIMD computations. This improves RooFit's single-thread performance significantly. In conjunction with multiple workers, this will allow the larger datasets of Run 3 to be fitted in the same time or faster than today's fits.
RooFit's interfaces will further be extended to be more accessible both from C++ and Python, to improve interoperability and ease of use.
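For orientation, a minimal PyROOT/RooFit fit of the kind targeted by these improvements might look like the sketch below; the BatchMode option is assumed to be available in a sufficiently recent ROOT release.

```python
# Minimal RooFit sketch: generate a toy Gaussian dataset and fit it back,
# requesting the vectorised batch evaluation mode (recent ROOT releases).
import ROOT

x = ROOT.RooRealVar("x", "observable", -10, 10)
mean = ROOT.RooRealVar("mean", "mean", 0, -5, 5)
sigma = ROOT.RooRealVar("sigma", "width", 2, 0.1, 5)
gauss = ROOT.RooGaussian("gauss", "gaussian pdf", x, mean, sigma)

data = gauss.generate(ROOT.RooArgSet(x), 100000)   # toy dataset
gauss.fitTo(data, ROOT.RooFit.BatchMode(True))     # vectorised evaluation
mean.Print()
sigma.Print()
```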
Speaker: Stephan Hageboeck (CERN)
RooFit CHEP19.pdf
A Functional Declarative Analysis Language in Python 15m
Based on work in the ROOTLINQ project, we've re-written a functional declarative analysis language in Python. With a declarative language, the physicist specifies what they want to do with the data, rather than how they want to do it. Then the system translates the intent into actions. Using declarative languages would have numerous benefits for the LHC community, ranging from analysis preservation that goes beyond the lifetimes of experiments or analysis software, to facilitating the abstraction, design, validation, combination, interpretation and overall communication of the contents of LHC analyses. This talk focuses on an ongoing effort to define an analysis language based on queries, designed to loop over structured data and including a complete set of unambiguous operations. This project has several implementation goals: 1) design a syntax that matches how physicists think about event data, 2) run on different back-end formats, including binary data (xAODs from ATLAS, for example), flat TTrees using RDataFrame, and columnar data in Python. This work will further help to understand the differences between analysis languages and data query languages in HEP, how hard it is to translate data manipulation from a row-wise-centric layout to a column-wise-centric layout, and, finally, how to scale from a small laptop-like environment to a larger cluster. The system currently has all three backends implemented to varying degrees and is being used in a full Run 2 analysis in ATLAS. The plans, goals, design, progress, and pitfalls will be described in this presentation.
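The self-contained toy below conveys the declarative query style on plain Python dictionaries; it is not the actual ROOTLINQ/func_adl API, and all names are invented.

```python
# Toy declarative query: the user states *what* to select, and a tiny query
# object defers and composes the operations before executing them.
class Query:
    def __init__(self, source, ops=None):
        self.source, self.ops = source, ops or []

    def where(self, pred):
        return Query(self.source, self.ops + [("where", pred)])

    def select(self, func):
        return Query(self.source, self.ops + [("select", func)])

    def execute(self):
        rows = self.source
        for kind, f in self.ops:            # translate intent into actions
            rows = filter(f, rows) if kind == "where" else map(f, rows)
        return list(rows)

events = [{"jet_pt": [70, 30]}, {"jet_pt": [120, 45, 20]}, {"jet_pt": [15]}]
leading_pt = (Query(events)
              .where(lambda e: len(e["jet_pt"]) >= 2)   # at least two jets
              .select(lambda e: max(e["jet_pt"]))       # leading jet pT
              .execute())
print(leading_pt)   # [70, 120]
```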
Speaker: Emma Torro Pastor (University of Washington (US))
CHEP_2019_funcADL_poster.pdf
A gateway between Gitlab CI and DIRAC 15m
The Gitlab continuous integration system (http://gitlab.com) is an invaluable tool for software developers to test and validate their software. LHCb analysts have also been using it to validate physics software tools and data analysis scripts, but this usage faced issues differing from standard software testing, as it requires a significant amount of CPU resources and credentials to access physics data. This paper presents the Gitlab CI to DIRAC gateway, a tool that runs Gitlab CI jobs within the LHCb grid system (LHCbDirac), thereby bridging the gap between the Gitlab jobs and the CPU and disk resources provided to the experiment.
Speaker: Ben Couturier (CERN)
CHEP2019_Gateway_Gitlab_DIRAC.pdf
A global track finding algorithm for CGEM +DC with Hough Transform 15m
A present-day detection system for charged tracks in particle physics experiments is typically composed of two or more types of detectors, which makes global track finding across these sub-detectors an important topic. This contribution describes a global track finding algorithm based on the Hough Transform for a detection system consisting of a Cylindrical Gas Electron Multiplier (CGEM) and a Drift Chamber (DC). The detailed Hough Transform of the hits detected by the CGEM and DC, the optimization of the binning of the Hough maps, the global track fitting, the iterative way of determining tracks, and some results with simulated samples will be presented.
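A toy version of the accumulate-and-find-peaks idea behind the Hough Transform is sketched below for straight tracks from a common origin; the real CGEM+DC algorithm works in different track parameters, and the hit coordinates here are invented.

```python
# Toy Hough transform: each hit votes for the track angles consistent with it,
# and peaks in the accumulator indicate track candidates.
import numpy as np

def hough_angles(hits, n_bins=36):            # 5-degree bins
    acc = np.zeros(n_bins)
    for x, y in hits:
        phi = np.arctan2(y, x) % np.pi        # line through the origin
        acc[int(phi / np.pi * n_bins) % n_bins] += 1
    return acc

# Two rough tracks: three hits near 45 degrees, two hits near 117 degrees.
hits = [(1.0, 1.0), (2.0, 2.05), (3.0, 3.05), (1.0, -2.0), (2.0, -4.1)]
acc = hough_angles(hits)
print("best angle bin:", int(np.argmax(acc)), "votes:", int(acc.max()))
```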
Speaker: Dr Linghui Wu (Institute of High Energy Physics)
Global Track Finding with a Cylindrical-GEM and a Drift Chamber(poster).pdf
A Lightweight Door into Non-grid Sites 15m
The Open Science Grid (OSG) provides a common service for resource providers and scientific institutions, and supports sciences such as High Energy Physics, Structural Biology, and other community sciences. As scientific frontiers expand, so does the need for resources to analyze new data. For example, high energy physics (LHC) sciences foresee an exponential growth in the amount of data collected, which comes with corresponding growth in the need for computing resources. Allowing resource providers an easy way to share their resources is paramount to ensure the growth of resources available to scientists.
In this context, the OSG Hosted CE initiative provides site administrators with a way to reduce the effort needed to install and maintain a Compute Element (CE), and represents a solution for sites who do not have the effort and expertise to run their own Grid middleware. An HTCondor Compute Element is installed on a remote VM at UChicago for each site that joins the Hosted CE initiative. The hardware/software stack is maintained by OSG Operations staff in a homogeneous and automated way, providing a reduction in the overall operational effort needed to maintain the CEs: one single organization does it in a uniform way, instead of each resource provider doing it in their own way. Currently, more than 20 institutions have joined the Hosted CE initiative. This contribution discusses the technical details behind a Hosted CE installation, highlighting key strengths and common pitfalls, and outlining future plans to further reduce the operational effort.
chep2019-hosted-ce.pdf
An ARM cluster for running CMSSW jobs 15m
The ARM platform extends from the mobile phone area to development board computers and servers. The importance of the ARM platform may increase in the future if new, more powerful (server) boards are released. For this reason, CMSSW has already been ported to ARM in earlier work.
The CMS software is deployed using CVMFS and the jobs are run inside Singularity containers. Some ARM aarch64 CMSSW releases are available in CVMFS for testing and development. In this work CVMFS and Singularity have been compiled and installed on an ARM cluster and the aarch64 CMSSW releases in CVMFS have been used. We report on our experiences with this ARM cluster for CMSSW jobs.
Speaker: Tomas Lindén (Helsinki Institute of Physics (FI))
20191104_CHEP_ARM.pdf
Applying OSiRIS NMAL to Network Slices on SLATE 15m
We will present techniques developed in collaboration with the OSiRIS project (NSF Award #1541335, UM, IU, MSU and WSU) and SLATE (NSF Award #1724821) for orchestrating software defined network slices with a goal of building reproducible and reliable computer networks for large data collaborations. With this project we have explored methods of utilizing passive and active measurements to build a carefully curated model of the network. We will show that by then using such a model, we can dynamically and programmatically alter network and host configuration to effectively respond to changing network conditions.
As part of our presentation, we will show how SLATE, operating over a slice of the Internet2 network, provides a container focused platform for running a Network Management Abstraction Layer (NMAL), allowing us to control applications in a reliable and reproducible way. This presentation will demonstrate how NMAL tracks live network topological and performance statistics on an Internet2 slice with SLATE-enabled hosts to enact traffic engineering and container placement decisions in order to optimize network behavior based on user defined profiles. We will conclude by discussing the future of this work and our plans for using it to support science activities in production.
Applying OSiRIS NMAL to Network Slices on SLATE-test.pdf
BAT.jl – Upgrading the Bayesian Analysis Toolkit 15m
BAT.jl, the Julia version of the Bayesian Analysis Toolkit, is a software package which is designed to help solve statistical problems encountered in Bayesian inference. Typical examples are the extraction of the values of the free parameters of a model, the comparison of different models in the light of a given data set, and the test of the validity of a model to represent the data set at hand. BAT.jl is based on Bayes' Theorem and it is realized with the use of different algorithms. These give access to the full posterior probability distribution, and they enable parameter estimation, limit setting and uncertainty propagation.
BAT.jl is implemented in Julia and allows for a flexible definition of mathematical models and applications while keeping in mind the reliability and speed requirements of the numerical operations. It provides implementations (or links to implementations) of algorithms for sampling, optimization and integration. While predefined models exist for standard cases, such as simple counting experiments, binomial problems or Gaussian models, its full strength lies in the analysis of complex and high-dimensional models often encountered in high energy and nuclear physics.
BAT.jl is a completely re-written code based on the original BAT code written in C++. There is no backward compatibility whatsoever, but the spirit is the same: providing a tool for Bayesian computations of complex models.
The poster will summarize the current status of the BAT.jl project and highlight the challenges faced in the fields of high energy and nuclear physics.
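BAT.jl itself is written in Julia; purely to illustrate the kind of posterior sampling such toolkits build on (and to keep the examples in this document in one language), the sketch below implements a basic Metropolis-Hastings step in Python on a toy two-dimensional posterior.

```python
# Basic Metropolis-Hastings sampler on a toy log-posterior; real toolkits add
# adaptive proposals, multiple chains, convergence diagnostics, etc.
import numpy as np

def metropolis(log_post, x0, n_steps=5000, step=0.5, seed=1):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        proposal = x + step * rng.normal(size=x.shape)
        lp_prop = log_post(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = proposal, lp_prop
        samples.append(x.copy())
    return np.array(samples)

# Toy posterior: standard normal in 2D (stand-in for a physics likelihood).
samples = metropolis(lambda x: -0.5 * np.sum(x**2), x0=[3.0, -3.0])
print("posterior mean estimate:", samples[1000:].mean(axis=0))
```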
Speakers: Cornelius Grunwald (Technische Universitaet Dortmund (DE)), Cornelius Grunwald (TU Dortmund)
BAT.jl_poster.pdf
Calibration and Performance of the CMS Electromagnetic Calorimeter in LHC Run2 15m
Many physics analyses using the Compact Muon Solenoid (CMS) detector at the LHC require accurate, high resolution electron and photon energy measurements. Excellent energy resolution is crucial for studies of Higgs boson decays with electromagnetic particles in the final state, as well as searches for very high mass resonances decaying to energetic photons or electrons. The CMS electromagnetic calorimeter (ECAL) is a fundamental instrument for these analyses and its energy resolution is crucial for the Higgs boson mass measurement. Recently the energy response of the calorimeter has been precisely calibrated exploiting the full Run2 data, aiming at a legacy reprocessing of the data. A dedicated calibration of each detector channel has been performed with physics events exploiting electrons from W and Z boson decays, photons from pi0/eta decays, and from the azimuthally symmetric energy distribution of minimum bias events. This talk presents the calibration strategies that have been implemented and the excellent performance achieved by the CMS ECAL with the ultimate calibration of Run II data, in terms of energy scale stability and energy resolution.
Speaker: Chiara Ilaria Rovelli (Sapienza Universita e INFN, Roma I (IT))
Poster_Cavallari_Rovelli_v5.pdf
Code health in EOS: Improving test infrastructure and overall service quality 15m
During the last few years, the EOS distributed storage system at CERN has seen a steady increase in use, both in terms of traffic volume as well as sheer amount of stored data.
This has brought the unwelcome side effect of stretching the EOS software stack to its design constraints, resulting in frequent user-facing issues and occasional downtime of critical services.
In this paper, we discuss the challenges of adapting the software to meet the increasing demands, while at the same time preserving functionality without breaking existing features or introducing new bugs. We document our efforts in modernizing and stabilizing the codebase, through the refactoring of legacy code, introduction of widespread unit testing, as well as leveraging kubernetes to build a comprehensive test orchestration framework capable of stressing every aspect of an EOS installation, with the goal of discovering bottlenecks and instabilities before they reach production.
Speaker: Andreas Joachim Peters (CERN)
code-health-final.pdf
Compass SPMD, a SPMD vectorized tracking algorithm 15m
The LHCb detector will be upgraded in 2021, where the hardware-level trigger will be replaced by a High Level Trigger 1 software trigger that needs to process the full 30 MHz data-collision rate. As part of the efforts to create a GPU High Level Trigger 1, tracking algorithms need to be optimized for SIMD architectures in order to achieve high throughput. We present an SPMD (Single Program, Multiple Data) version of Compass, a tracking algorithm optimized for SIMD architectures, vectorized using the Intel SPMD Program Compiler. This compiler and programming model allow program instances to be executed in parallel and make it possible to exploit the SIMD lanes of CPUs using GPU-like source code, without requiring knowledge of low-level details. It can target different vector widths and vector instruction sets and combine different levels of parallelism. We designed the algorithm with highly parallel architectures in mind, minimizing divergence and memory footprint while creating a data-oriented algorithm that is efficient for SIMD architectures. We vectorize the algorithm using the SPMD programming model, preserving the algorithm design and delivering the same physics efficiency as its GPU counterpart. We study the physics performance and throughput of the algorithm, discuss the impact of different vector widths and instruction sets, and compare it with the GPU implementation.
Speaker: Placido Fernandez Declara (University Carlos III (ES))
chep_poster_V5.pdf
Computing for the general public at CERN's Science Gateway 15m
CERN is launching the Science Gateway, a new scientific education and outreach centre targeting the general public of all ages. Construction is planned to start in 2020 and to be completed in 2022. In addition to Physics exhibits, the Science Gateway will include immersive, hands-on activities that explore Computer Science and Technology. This poster will present the methodology used to generate the content for the computing installations and showcase the ongoing progress towards bringing the concepts to life.
CHEP2019_ScienceGateway_Poster.pdf
Conditions Databases at FNAL 15m
Conditions databases are an important class of database applications where the database is used to record the state of a set of quantities as a function of observation time. Conditions databases are used in High Energy Physics to record the state of the detector apparatus during data taking, and then to use the data during the event reconstruction and analysis phases. At FNAL, we have a set of 3 different conditions database products, Minerva Conditions DB, ConDB and UConDB, which cover the whole range of the use cases presented by the FNAL experiments. These products have common features such as the conditions data representation model, data version control, the ability to restore the database to a previous state, and a scalable web service data access interface. In the paper, we will present the common features of the products, the common solutions used to build them, and also the differences between the products and their target use cases.
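The in-memory toy below illustrates the interval-of-validity lookup at the heart of any conditions database; it does not reproduce the actual Minerva Conditions DB, ConDB or UConDB interfaces.

```python
# Toy interval-of-validity store: each payload is valid from its start time
# until the next payload's start, and queries ask for the state at time t.
import bisect

class ConditionsFolder:
    def __init__(self):
        self.starts, self.payloads = [], []

    def add(self, valid_from, payload):
        i = bisect.bisect(self.starts, valid_from)
        self.starts.insert(i, valid_from)
        self.payloads.insert(i, payload)

    def get(self, t):
        i = bisect.bisect_right(self.starts, t) - 1
        if i < 0:
            raise LookupError("no conditions valid at this time")
        return self.payloads[i]

hv = ConditionsFolder()
hv.add(0,    {"channel_42_hv": 1500.0})
hv.add(1000, {"channel_42_hv": 1480.0})   # new calibration epoch
print(hv.get(999), hv.get(1500))
```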
Speaker: Eric Vaandering (Fermi National Accelerator Lab. (US))
ConditionsDatabases.pdf
Data Management and User Data Portal at CSNS 15m
China Spallation Neutron Source (CSNS) is a large science facility, and it is publicly available to researchers from all over the world. The data platform of CSNS aims to provide diverse data and computing support; the design philosophy behind it is data safety, big-data sharing, and user convenience.
In order to manage scientific data, a metadata catalogue based on ICAT was built to manage full-lifetime experiment metadata, from idea to publication. It is used as the data middleware and forms the basis of various data services. A multi-layered, distributed storage layout based on iRODS is adopted to store the data files, which enables data virtualization and workflow automation. A digital object identifier (DOI) is applied to the data itself to uniquely identify the data and to enable tracing, interoperation and discovery of data.
The web-based user data portal provides users with one-stop services for experiment information viewing, data search, retrieval, analysis and sharing, anytime and anywhere. Furthermore, a cloud analysis portal was developed based on OpenStack, which enables users to utilize CSNS computing resources to handle data on demand.
Speaker: Mr Ming Tang (Institute of High Energy Physics, Chinese Academy of Sciences)
Poster_tangm.pdf
Design principles of the Metadata Querying Language (MQL) implemented in the ATLAS Metadata Interface (AMI) ecosystem 15m
ATLAS Metadata Interface (AMI) is a generic ecosystem for metadata aggregation, transformation and cataloging, benefiting from about 20 years of feedback in the LHC context. This poster describes the design principles of the Metadata Querying Language (MQL) implemented in AMI, a metadata-oriented domain-specific language that allows querying databases without knowing the relations between tables. With this simplified yet generic grammar, MQL permits writing complex queries much more simply than Structured Query Language (SQL). The poster describes how AMI compiles MQL into SQL queries using the underlying table-relation graph, automatically extracted through a reflection mechanism.
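Purely as an illustration of the idea (the grammar, schema and generated SQL are invented and do not reproduce AMI's), the sketch below compiles a tiny MQL-like query into SQL by looking up the join condition in a table-relation graph.

```python
# Toy "MQL to SQL" compiler: the user names only fields, and the join path is
# resolved automatically from a (here, single-edge) table-relation graph.
relations = {("dataset", "event"): "dataset.id = event.dataset_id"}

def compile_mql(select_fields, where_clause):
    tables = sorted({f.split(".")[0] for f in select_fields + [where_clause[0]]})
    joins = [cond for (a, b), cond in relations.items() if a in tables and b in tables]
    sql = "SELECT {} FROM {}".format(", ".join(select_fields), ", ".join(tables))
    conditions = joins + ["{} = '{}'".format(*where_clause)]
    return sql + " WHERE " + " AND ".join(conditions)

print(compile_mql(["dataset.name", "event.count"], ("dataset.project", "mc16")))
```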
Speaker: Jerome Odier (LPSC/CNRS (Grenoble, FR))
poster-MQL.pdf
Detector Construction Application for CMS Ph2 Detectors 15m
During the third long shutdown of the CERN Large Hadron Collider, the CMS Detector will undergo a major upgrade to prepare for Phase-2 of the CMS physics program, starting around 2026. Upgrade projects will replace or improve detector systems to provide the necessary physics performance under the challenging conditions of high luminosity at the HL-LHC. Among other upgrades, the new CMS Silicon-Tracker will substantially increase in the number of channels and will feature an improved spatial resolution. The new Endcap Calorimeter will allow measurement of the 3D topology of energy deposits in particle showers induced by incident electrons, photons and hadrons, as well as precise time-stamping of neutral particles down to low transverse momentum.
Ph2 upgrade project collaborations consist of dozens of institutions, many participating in actual detector design, development, assembly and quality control (QC) testing. In terms of participating institutions and worldwide responsibilities, the Ph2 HGCAL and Outer Tracker projects are unprecedented. This raises a huge challenge for detector parts tracking, assembly and QC information bookkeeping.
The Detector Construction Application (DCA) is based on a universal database model which is capable of hosting information about the assembly (construction) of different detectors, parts tracking between institutions and QC information. DCA consists of and maintains a number of tools for data upload (DB Loader), retrieval (RESTful API), editing and analysis (GUI). In this report we present the design and architecture of the DCA, which helps physicists and institutions collaborate worldwide while building the next CMS detector.
Speaker: Valdas Rapsevicius (Vilnius University (LT))
Distributed Caching in the WLCG 15m
With the evolution of the WLCG towards opportunistic resource usage and cross-site data access, new challenges for data analysis have emerged in recent years. To enable performant data access without relying on static data locality, distributed caching aims at providing data locality dynamically. Recent work successfully employs various approaches for effective and coherent caching, from centrally managed approaches employing volatile storage to hybrid approaches orchestrating autonomous caches. However, due to the scale and use-case of the WLCG there is little prior work to assess the general applicability and scalability of these approaches.
Building on our previous developments of coordinated, distributed caches at KIT, we have identified the primary challenge not in the technical implementation but in the underlying logic and architecture. We have studied several key issues solved by all approaches in various ways: aggregation of meta-data, identification of viable data, coherence of cache contents, and integration of temporary caches. Monitoring data from XRootD storage and HTCondor batch systems from both our Tier 1 and Tier 3 infrastructure provides realistic usage characteristics. This allows us to assess both the use cases of well-defined user jobs and late-bound anonymous pilot jobs.
In this contribution, we present our findings on the implications of different architectures for distributed caching depending on the targeted use-case.
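Offline studies of this kind often replay access logs through a simple cache model; the toy below estimates the byte hit rate of an LRU cache on an invented log, purely to illustrate the approach.

```python
# Replay an access log (name, size) through a fixed-size LRU cache and report
# the byte hit rate. Log entries and sizes are invented for illustration.
from collections import OrderedDict

def lru_hit_rate(accesses, cache_size):
    cache, used, hit_bytes, total_bytes = OrderedDict(), 0, 0, 0
    for name, size in accesses:
        total_bytes += size
        if name in cache:
            cache.move_to_end(name)
            hit_bytes += size
            continue
        cache[name] = size
        used += size
        while used > cache_size:            # evict least recently used files
            _, evicted = cache.popitem(last=False)
            used -= evicted
    return hit_bytes / total_bytes

log = [("fileA", 4), ("fileB", 2), ("fileA", 4), ("fileC", 6), ("fileA", 4)]
print("byte hit rate:", lru_hit_rate(log, cache_size=8))
```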
Speaker: Max Fischer (Karlsruhe Institute of Technology)
Distributed_Caching_in_the_WLCG.pdf
End-to-end Deep Learning Fast Simulation Framework 15m
To address the increase in computational costs and the speed requirements for simulation related to the higher luminosity and energy of future accelerators, a number of fast simulation tools based on Deep Learning (DL) procedures have been developed. We discuss the features and implementation of an end-to-end framework which integrates DL simulation methods with an existing full simulation toolkit (Geant4). We give a description of the key concepts and challenges in developing a production-level simulation framework based on Deep Neural Network (DNN) models designed for the High Energy Physics (HEP) problem domain and trained on HEP data. We discuss data generation (simplified calorimeter simulations obtained with the Geant4 toolkit) and processing, DNN architecture evaluation procedures and API integration. We address the challenge of distributional shifts in the input data (dependent on calorimeter type), evaluate the response of trained networks, and propose a general framework for physics validation of DL models in HEP.
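The sketch below shows the fast-simulation idea at toy scale: a small network is trained to map an incident energy to a one-dimensional shower profile. The architecture, the synthetic training data and all sizes are invented for illustration; the real framework trains on Geant4 output and validates the generated showers physically.

```python
# Toy DL fast-simulation surrogate: learn energy -> longitudinal shower profile
# from synthetic "Geant4-like" data generated by a simple analytic model.
import torch
import torch.nn as nn

def toy_shower(energy, depth=16):
    """Synthetic Gaussian-shaped shower profile scaled by incident energy."""
    t = torch.arange(depth, dtype=torch.float32)
    return energy.unsqueeze(1) * torch.exp(-0.5 * (t - 5.0) ** 2 / 4.0)

model = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(500):                     # train the surrogate
    energies = torch.rand(256, 1) * 100.0   # incident energies in GeV
    target = toy_shower(energies.squeeze(1))
    loss = nn.functional.mse_loss(model(energies / 100.0), target / 100.0)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final training loss:", float(loss))
```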
End_to_End_Deep_Learning_FastSim_Framework_Poster_Ioana_Ifrim.pdf
Enhancements in Functionality of the Interactive Visual Explorer for ATLAS Computing Metadata 15m
The development of the Interactive Visual Explorer (InVEx), a visual analytics tool for ATLAS computing metadata, includes research of various approaches for data handling both on server and on client sides. InVEx is implemented as a web-based application which aims at the enhancing of analytical and visualization capabilities of the existing monitoring tools, and facilitate the process of data analysis with the interactivity and human supervision. The development of InVEx started with the implementation of a 3-dimensional interactive tool for cluster analysis (for the k-means and DBSCAN algorithms), and its further evolvement is closely linked to the needs of ATLAS computing experts, providing metadata analysis to ensure the stability and efficiency of the distributed computing environment functionality. In the process of the integration of InVEx with ATLAS computing metadata sources we faced two main challenges: 1) big data volumes needed to be analyzed in the real time mode (as an example, one ATLAS computing task may contai | CommonCrawl |
What Is The Age Limit For Pilot?
Is 40 too old to become a pilot?
Is it hard to be a pilot?
Do pilots wives fly free?
Is it good to marry a pilot?
Can I become a pilot at age 35?
Is 55 too old to become a commercial pilot?
What is the cheapest way to become a pilot?
Do pilots die early?
Is 50 too old to become a helicopter pilot?
Is 50 too old to become an airline pilot?
Are Pilots rich?
Do you have to be rich to become a pilot?
Do pilots have to retire at a certain age?
Is there an age limit for airline pilots?
Who is the richest pilot?
There is no age limit for pilots to work commercially.
At 40 years of age, you can start a career in the airlines as a pilot.
The answer is no. While some airlines have an age requirement before you can fly a commercial flight, there's no age limit in wanting to become a pilot. Many think that at age 40, they have missed their opportunity to become a pilot. Their time has simply gone by.
It is not hard to fly an airplane. To qualify and become an airline pilot is hard. Operating a commercial jet requires years of flying experience and extensive aeronautical knowledge to become a competent pilot. Pilots need skill and confidence to take on the responsibility of conducting safe flights.
When it comes up that I'm married to an airline pilot, I usually get told how lucky I am and how they'd love to be in my shoes. I usually just smile and nod, but I know all they see is that you can fly standby for free. Truth is, just like everyone else's life, the life of a pilot's wife really can suck.
Pilots have a unique profession and being married to them is very different from being married to someone with a predictable standard 8-5 desk job. … If you do end up marrying a pilot, consider adding some airline decor to your home. Check out our fine selection of High Flying Models, perfect for any aviation family.
35 years of age is definitely not too late to become a pilot. It is possible, but be prepared to see people in their early 20's earning much more and at higher positions than you. Pilot training is roughly 3-ish years, give or take a year. So you should be done before turning 40.
The FAA imposes a mandatory retirement age for pilots of 65, there are no exceptions. This means that if you started right now and were airline eligible at the age of 50, you would have 15 years left of flying, which sounds pretty decent to me.
There is no faster or cheaper way to become a commercial pilot. The other option is to pursue a Bachelor of Science degree online through the Liberty University Flight Training Program while completing your FAA flight ratings through J.A. Flight Training, located at Aurora Airport.
One from 1992, for Flight Safety Digest – a former publication of the US-based Flight Safety Foundation – concluded that pilots do die at a younger age than the general population, based on two main data sources. … The study lists "physical and emotional" stressors that are thought to affect airline pilots' health.
I don't think you're ever too old to learn the aircraft. You can learn to fly the helicopter. … You can learn what you have to know to pass the test to become a helicopter pilot. The biggest thing if you're thinking about changing career later in life, some people are going to tell you you're too old.
26 is not very late to be a pilot as long as you complete your CPL by the age of 29/30. Airlines upper age for FO are usually 33–35. … Easier said than done, becoming a pilot is one thing and working as one for an airline is another. To become a pilot (cpl holder) you just need to spend around 25 lakhs.
If you've read the forum and seen similar threads, then you should know the answer is no, you're not too old, BUT you do need to be realistic about your expectations and goals in the industry. If you start now, you'll be in a position to be hired by a Regional in about 2 years, making you 54.
(Most) pilots aren't rich. But if they're lucky, they'll have enough, which should be good enough for any pilot wife, too. If you're going through it right now, if you're feeling the pinch and feel like there's no way out, take heart, because you are not alone. The aviation industry is looking up, and there's hope.
"You certainly don't have to be rich, but you have to have a plan," he says. "Like with most larger expenses, you need to be smart and know what your goals and objectives are."
No, you can become a pilot at any age as long as you have fairly good health. As a matter of fact, this is about the best time in history to become a Commercial Airline Pilot. … The most difficult part for most people is the 1500 hours required to fly for the Airlines.
If you want to become an airline pilot, I would make sure that you are no older than 55–58, maybe even 60, so as to have the 1500 hours to start with a regional. This will, in the current state of the aviation world, give you a solid 5 years of flying for a regional since the retirement age is 65.
They may not like it, but airline pilots are required to retire at age 65. … (On domestic flights, both pilots can be 60 or older.) But there's no mandatory retirement age for general aviation pilots.
But the pilot's age may sound surprising in light of the fact that airline pilots are required to retire at 65. Federal Aviation Administration regulations place no upper age limit on private pilots. Beginning at age 40, they must pass medical exams every two years instead of every five years like younger pilots.
10 Countries With The Highest Pilot Salaries In The World: China – average salary $300,000; Netherlands – $245,000; France – $235,000; Ireland – $225,000; Canada – $225,000; Qatar – $220,000; Germany – $203,000; Switzerland – $211,000.
July 2011, 16(1): 31-55. doi: 10.3934/dcdsb.2011.16.31
Existence of solution for a generalized stochastic Cahn-Hilliard equation on convex domains
Dimitra Antonopoulou 1, and Georgia Karali 2,
Department of Applied Mathematics, University of Crete, Heraklion, Greece
Department of Applied Mathematics, University of Crete, P.O. Box 2208, 71409, Heraklion, Crete, Greece
Received April 2010 Revised February 2011 Published April 2011
We consider a generalized Stochastic Cahn-Hilliard equation with multiplicative white noise posed on bounded convex domains in $R^d$, $d=1,2,3$, with piece-wise smooth boundary, and introduce an additive time dependent white noise term in the chemical potential. Since the Green's function of the problem is induced by a convolution semigroup, we present the equation in a weak stochastic integral formulation and prove existence of solution when $d\leq 2$ for general domains, and for $d=3$ for domains with minimum eigenfunction growth, without making use of any explicit expression of the spectrum and the eigenfunctions. The analysis is based on stochastic integral calculus, Galerkin approximations and the asymptotic spectral properties of the Neumann Laplacian operator. Existence is also derived for some non-convex cases when the boundary is smooth.
Keywords: convex domains, space-time white noise, convolution semigroup, Galerkin approximations, Stochastic Cahn-Hilliard, Neumann Laplacian eigenfunction basis, Green's function.
Mathematics Subject Classification: 35K55, 35K40, 60H30, 60H1.
Citation: Dimitra Antonopoulou, Georgia Karali. Existence of solution for a generalized stochastic Cahn-Hilliard equation on convex domains. Discrete & Continuous Dynamical Systems - B, 2011, 16 (1) : 31-55. doi: 10.3934/dcdsb.2011.16.31
Hypergeometric function with a matrix argument
I am looking for the evaluation of a Hypergeometric function with a matrix argument as for example in Koev and Edelman or as showcased in this Wikipedia article.
From what I understand from Mathematica's documentation, it only accepts a scalar as the last argument.
matrix special-functions vector-calculus stochastic-calculus
HirekHirek
$\begingroup$ I didn't look at the links, but if your matrix is diagonalizable you can apply the scalar function to the eigenvalues and transform the result back afterwards. $\endgroup$ – Jens Aug 30 '14 at 19:17
$\begingroup$ Is there a reason why a diagonalization approach doesn't work? Please provide some code with a minimal example. $\endgroup$ – Jens Aug 31 '14 at 5:33
$\begingroup$ In the paper, the authors note that this function depends only on the eigenvalues of the matrix, so their algorithm is formulated only for diagonal matrices (and indeed, you may diagonalize your matrix and then assume that it started out this way -- but still, as others have said, Mathematica doesn't have this built-in). $\endgroup$ – Kellen Myers Aug 31 '14 at 16:41
$\begingroup$ @Jens et al: please reopen the question. There's suprisingly much to be told. Thank you in advance. $\endgroup$ – Dr. Wolfgang Hintze Sep 4 '14 at 15:48
$\begingroup$ @Dr.WolfgangHintze, it is no longer on hold. You can transfer your answer now. $\endgroup$ – RunnyKine Sep 4 '14 at 18:11
Matrix functions in MMA
First of all, MMA does not, in general, support matrix arguments in its standard functions. Therefore special functions such as MatrixExp and MatrixPower are available. But, as will be shown in this answer, it is possible to create user-defined functions via infinite series, and it turns out that MMA is surprisingly capable of dealing with these matrix functions. The idea is simple: a function with a known series expansion can be turned into a matrix function by letting z^k -> MatrixPower[z, k].
Hypergeometric matrix function in one variable
Let us start with the well known function $2F1(a,b,c;z)$ which in MMA is Hypergeometric2F1[a,b,c,z], and define
matrix2F1[a_, b_, c_, z_] :=
Sum[Pochhammer[a, k] Pochhammer[b, k]/
Pochhammer[c, k] MatrixPower[z, k]/k!, {k, 0, \[Infinity]}]
Here z is a matrix.
Example 1: Pauli matrix
z = PauliMatrix[1]
$\left( \begin{array}{cc} 0 & 1 \\ 1 & 0 \\ \end{array} \right)$
matrix2F1[a, b, c, t z]
$\left( \begin{array}{cc} \frac{1}{2} (\text{Hypergeometric2F1}[a,b,c,-t]+\text{Hypergeometric2F1}[a,b,c,t]) & \frac{1}{2} (-\text{Hypergeometric2F1}[a,b,c,-t]+\text{Hypergeometric2F1}[a,b,c,t]) \\ \frac{1}{2} (-\text{Hypergeometric2F1}[a,b,c,-t]+\text{Hypergeometric2F1}[a,b,c,t]) & \frac{1}{2} (\text{Hypergeometric2F1}[a,b,c,-t]+\text{Hypergeometric2F1}[a,b,c,t]) \\ \end{array} \right)$
I found it very surprising that MMA computes the series and recognizes closed expressions without problems.
Example 2: general 2x2 matrix
z = {{p, q}, {r, s}}
$\left( \begin{array}{cc} p & q \\ r & s \\ \end{array} \right)$
matrix2F1[1, 2, 2, z t] // FullSimplify;
% // MatrixForm
$\left( \begin{array}{cc} \frac{1-s t}{1-t (s+q r t)+p t (-1+s t)} & -\frac{q t}{-1+t (p+s+q r t-p s t)} \\ -\frac{r t}{-1+t (p+s+q r t-p s t)} & \frac{1-p t}{1-t (s+q r t)+p t (-1+s t)} \\ \end{array} \right)$
Hypergeometric matrix function in two variables
For two variables there is a problem to be clarified first: two matrices in general do not commute, i.e. the result of a multiplication depends on the order. Hence $f(x,y)\neq f(y,x)$, in general. A relation between $f(x,y)$ and $f(y,x)$ becomes a little simpler if x.y = A y.x + B with some scalars A and B. But let us abandon this question for a moment and go to the hypergeometrics.
There are 4 different generalizations of the hypergeometric function, called Appell functions (http://mathworld.wolfram.com/AppellHypergeometricFunction.html).
Let's take the first one and define
matrixAppellF1[a_, b_, b1_, c_, x_, y_] :=
Sum[(Pochhammer[a, m + n] Pochhammer[b, m] Pochhammer[b1, n])/(
m! n! Pochhammer[c, m + n])
MatrixPower[x, m].MatrixPower[y, n], {m, 0, \[Infinity]}, {n,
0, \[Infinity]}]
Here we have collected the powers of x to the left of the powers of y. This seems to be kind of "natural", but there are other possibilities.
Example 3: AppelF1[a,b,b1,c,x,y]
First check result with scalars
AppellF1[1, 1, 1, 1, t , u ]
$\frac{1}{(1-t) (1-u)}$
x = PauliMatrix[1];
y = PauliMatrix[3];
matrixAppellF1[1, 1, 1, 1, t x, u y]
$\left( \begin{array}{cc} -\frac{1}{(-1+t) (1+t) (1-u)} & -\frac{t}{(-1+t) (1+t) (1+u)} \\ -\frac{t}{(-1+t) (1+t) (1-u)} & -\frac{1}{(-1+t) (1+t) (1+u)} \\ \end{array} \right)$
We have shown that MMA is very well suited to handle complicated functions with matrix arguments. We simply have to replace, in the power series expansion of the usual function, the power of the variable z^k with MatrixPower[z, k].
Surprisingly enough, MMA can do the infinite sums and provide closed expressions.
By the way: there is no need for the user to think about eigenvalues or diagonalization.
Best regards, Wolfgang
Dr. Wolfgang HintzeDr. Wolfgang Hintze
$\begingroup$ After correcting a missing ending in the first definition, I get stuck in line 3: matrix2F1[a,b,c,z] yields an error with 0^0 (using version 10). $\endgroup$ – Jens Sep 4 '14 at 19:06
$\begingroup$ @Jens: sorry, I made a simple typing mistake in the basic formula for matrix2F1: I wrote z^k instead of MatrixPower[z,k]. I have corrected it. The results are ok now. Might have been a Freudian error to make exactly the essential part wrong ;-). $\endgroup$ – Dr. Wolfgang Hintze Sep 5 '14 at 7:18
$\begingroup$ Still waiting for your comments ... $\endgroup$ – Dr. Wolfgang Hintze Sep 5 '14 at 12:28
$\begingroup$ Yes, now it all works as advertised (+1)! $\endgroup$ – Jens Sep 5 '14 at 16:57
$\begingroup$ Wow, this is wonderful! Thanks so much guys! $\endgroup$ – Hirek Sep 8 '14 at 9:12
You can use the new in M9 function MatrixFunction to do this. For instance:
MatrixFunction[Hypergeometric2F1[a, b, c, #]&, {{0, 1}, {1, 0}}] //TeXForm
$\left( \begin{array}{cc} \frac{\, _2F_1(a,b;c;-1)}{2}+\frac{\, _2F_1(a,b;c;1)}{2} & \frac{\, _2F_1(a,b;c;1)}{2}-\frac{\, _2F_1(a,b;c;-1)}{2} \\ \frac{\, _2F_1(a,b;c;1)}{2}-\frac{\, _2F_1(a,b;c;-1)}{2} & \frac{\, _2F_1(a,b;c;-1)}{2}+\frac{\, _2F_1(a,b;c;1)}{2} \\ \end{array} \right)$
Carl WollCarl Woll
To expand on my comment to the question: there is in fact an "official" recommendation for how to approach this in the documentation of DiagonalizableMatrixQ, under "Applications". It uses the idea that for a function that can be expanded in a power series, we can always convert a matrix argument into a set of scalar arguments if the matrix is diagonalizable. The similarity transformation of the diagonalization can be pulled out of the power series, and the function only needs to be evaluated for the eigenvalues.
Since most built-in functions (including the hypergeometric ones) are listable, it will be efficient if we can use this listability to our advantage. With a matrix as an argument, listability is initially undesirable, but when we have the eigenvalues as a list it's a great feature.
So here is a more streamlined implementation of the suggestion in the documentation:
matrixEval[func_, arg_?DiagonalizableMatrixQ] := Module[{u, d, e},
  {e, u} = Eigensystem[arg];
  u = ConjugateTranspose[u];
  d = func[e];
  If[! ListQ[d], d = func /@ e];
  u.DiagonalMatrix[d].Inverse[u]
  ]
Using the Pauli x matrix as an example, I'll first show how the diagonalizing similarity transform wraps a general function:
Clear[f];
z = PauliMatrix[1];
matrixEval[f[#] &, z] // TraditionalForm
$$\left( \begin{array}{cc} \frac{f(-1)}{2}+\frac{f(1)}{2} & \frac{f(1)}{2}-\frac{f(-1)}{2} \\ \frac{f(1)}{2}-\frac{f(-1)}{2} & \frac{f(-1)}{2}+\frac{f(1)}{2} \\ \end{array} \right)$$
Now replace f by the desired function:
matrixEval[Hypergeometric2F1[a, b, c, #] &, z] // TraditionalForm
$$\left( \begin{array}{cc} \frac{\, _2F_1(a,b;c;-1)}{2}+\frac{\, _2F_1(a,b;c;1)}{2} & \frac{\, _2F_1(a,b;c;1)}{2}-\frac{\, _2F_1(a,b;c;-1)}{2} \\ \frac{\, _2F_1(a,b;c;1)}{2}-\frac{\, _2F_1(a,b;c;-1)}{2} & \frac{\, _2F_1(a,b;c;-1)}{2}+\frac{\, _2F_1(a,b;c;1)}{2} \\ \end{array} \right)$$
This shows the same structure as above because the diagonalization is the same.
Here is a test with a built-in function that actually has a matrix equivalent:
m = {{3, 2}, {1, 4}};
matrixEval[Exp, m] == MatrixExp[m]
(* ==> True *)
Now it remains to address the question of non-diagonalizable matrix arguments. This can be done in principle as follows:
Find the minimal polynomial of the matrix M. If it is of the form x^k then k is the degree of the matrix, meaning the power for which M^k == 0. This degree is then used in a series expansion of the function, and the matrix can be inserted into that by replacing Power with MatrixPower. Such a matrix is called nilpotent, and the good thing is that power series of such matrices always have at most k terms, so there is no need for infinite sums. I'll make this more concrete if the OP wants to work with such arguments. Probably the best way to actually do this in Mathematica is to use the JordanDecomposition.
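For concreteness, here is a minimal sketch of this nilpotent route; the 3×3 matrix m, its degree k and the helper names coeffs and matF are purely illustrative choices, not taken from the question:
m = {{0, 1, 0}, {0, 0, 1}, {0, 0, 0}}; (* hypothetical nilpotent matrix: MatrixPower[m, 3] == 0 *)
k = 3; (* degree read off from the minimal polynomial x^k *)
coeffs = Table[SeriesCoefficient[Hypergeometric2F1[a, b, c, x], {x, 0, j}], {j, 0, k - 1}];
matF = Sum[coeffs[[j + 1]] MatrixPower[m, j], {j, 0, k - 1}];
matF // MatrixForm
Since MatrixPower[m, j] vanishes for j >= k, the truncated sum is exact here, not an approximation.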
So in summary, I would always choose the diagonalization approach rather than trying to rewrite the built in function using its infinite series definition because the latter requires you to first find the appropriate definition and then will also be slow. A series approach is, however, no problem when you have a nilpotent matrix argument, because you can simply apply Series to find the expression you need.
And of course, a power series approach is fine for any finite-dimensional matrix argument if the function in question has a finite number of terms in its series. Another case is where the series terminates but the matrix is infinite-dimensional, as in this answer. Then one can still get pretty good approximate results with a MatrixPower approach using truncated matrices.
$\begingroup$ (at)Jens: please notice that I had already shown that the original question has a solution in MMA without caring about diagionalization. $\endgroup$ – Dr. Wolfgang Hintze Sep 5 '14 at 7:22
$\begingroup$ @Dr.WolfgangHintze Sure, but my solution works without having to look up the series definition of the function. Going through the eigenvalues as I do here is also safer in cases where you have to worry about the radius of convergence of the function. For things like MatrixExp this isn't an issue because the function is absolutely convergent. But that's not the case generically. $\endgroup$ – Jens Sep 5 '14 at 17:02
$\begingroup$ (at) Jens: part 1 of 2: (1) please remember that the question was how to treat hypergeometric functions of one an two variables with matrix arguments. I have made a natural proposal and solved exactly that question completely and surprisingly easy. (2) I point out again that I don't need to care about the complicated task of diagonalization, eigenvalues and so on. This is done "behind curtain" by MMA, as can be seen in my example of a general 2x2-matrix. Comment continued below. $\endgroup$ – Dr. Wolfgang Hintze Sep 6 '14 at 20:20
$\begingroup$ (at) Jens: part 2 of 2: (3) The convergence question is also taken care of in my approach by allowing for a scalar parameter t. MMA would tell us during summing up the infinite series which range in t would lead to convergence or divergence. Furthermore, in the requested hypergeometric case this is not an issue. (4) the interesting questions of non-commutability I put up for the two variable case still is uncommented; also it seems no to be treated in the references quoted. $\endgroup$ – Dr. Wolfgang Hintze Sep 6 '14 at 20:21
$\begingroup$ @Jens Thank you so much for your thoughtful response! Upon revisiting it, I believe there may be some miscommunication?! The coded up examples here build matrices of hypergeometric functions of each element in a matrix. What I am wondering about is that the hypergeometric function itself takes the matrix as argument and gives a scalar output. $\endgroup$ – Hirek Jan 30 '15 at 0:38
Mathematica does not support Hypergeometric functions of matrix arguments. Currently only the cited paper by Koev and Edelman at http://math.mit.edu/~plamen/files/hyper.pdf contains a link to code that implements it in MATLAB.
$\begingroup$ The paper has pseudocode that is fairly easy to read and implement, so perhaps you can translate it to Mathematica? $\endgroup$ – rm -rf♦ Aug 30 '14 at 15:58
$\begingroup$ I might do that when I have enough time but I need to use my analytical results first for a paper I am writing. Where would I publish that though? $\endgroup$ – Hirek Aug 30 '14 at 18:50
Prove: $n\log(n^2) + (\log\ n)^2 = O(n\log(n))$
Prove: $n\log(n^2) + (\log n)^2 = O(n\log(n))$
I'm trying to use the Big Oh definition, what I reached so far is:
$f(n)$ is in $O(g(n))$ if there are $M > 0$ and $x\in\mathbb{R}$ such that whenever $m > x$ we have $|f(m)|<M|g(m)|$
How do I, however, continue from here?
algorithms asymptotics computational-complexity
Lowly0palaceLowly0palace
Hint: Use that $\log n < n$ and $\log (n^2)=2\log n$.
$\begingroup$ I think one should use $\log n < n^{0.5}$ or something like that instead of $\log n < n$, because $\log n < n$ yields $(\log n)^2 < n^2$. $\endgroup$ – Sanya Nov 9 '19 at 22:26
$\begingroup$ @Sanya $(\log n)^2<n \log n$ $\endgroup$ – user658409 Nov 10 '19 at 4:22
You could also apply the limit definition
\begin{align}\limsup_{n\to\infty}\frac{n\log(n^2) + (\log n)^2}{n\log n}&=\limsup_{n\to\infty}\frac{2n\log n}{n\log n}+\limsup_{n\to\infty}\frac{(\log n)^2}{n\log n}\\&= \limsup_{n\to\infty} 2+\limsup_{n\to\infty}\frac{\log n}{n}\\&<\infty \end{align}
therefore $n\log(n^2) + (\log n)^2=O(n\log n)$.
Axion004Axion004
We will take $M = 4$ and $x = e$. Then, for $n > x$, $$ \begin{align*} n \log (n^2) + (\log n)^2 &= 2n \log n + (\log n)^2 & (\because \text{Property of log})\\ &\le 2n \log n + (\sqrt{n})^2 = 2n \log n + n & (\because \log n \le \sqrt n \ \ \text{for all} \ \ n \ge 0) \\ &\le 3n \log n & (\because \log n > 1 \ \ \text{for all} \ \ n > e) \\ & < 4n \log n = M (n \log n) & \end{align*} $$ So, by definition of big-Oh notation, we are done.
SanyaSanya
We have that
$$n\ \log(n^2) + (\log\ n)^2 =2n\log n+\log n \cdot \log n$$
$$\frac{n\ \log(n^2) + (\log\ n)^2 }{n\log n}=\frac{2n\log n+\log n \cdot \log n}{n\log n}=2+\frac{\log n}n \to 2$$
$$n\ \log(n^2) + (\log\ n)^2 =O(n\log n)$$
$n\log(n^2)=2n\log n$ is $O(n\log n)$ by definition and $\log^2n=o(n\log n)$ since $$\lim_{n\to\infty}\dfrac{\log^2n}{n\log n}=\lim_{n\to\infty}\dfrac{\log n}{n}=0$$ and a fortiori, $\log^2n=O(n\log n)$, so that $$n\log(n^2)+\log^2n=O(n\log n)+O(n\log n)=O(n\log n).$$
BernardBernard
Positive role of the long luminescence lifetime of upconversion nanophosphors on resonant surfaces for ultra-compact filter-free bio-assays
Duc Tu Vu, Thanh-Thu Vu Le, Chia-Chen Hsu, Ngoc Diep Lai, Christophe Hecquet, and Henri Benisty
Duc Tu Vu,1,2,3 Thanh-Thu Vu Le,4 Chia-Chen Hsu,4 Ngoc Diep Lai,3 Christophe Hecquet,1 and Henri Benisty1,*
1Laboratoire Charles Fabry, CNRS, Institut d'Optique Graduate School, Université Paris-Saclay, Palaiseau, 91127, France
2Faculty of Electrical and Electronics Engineering, Phenikaa University, Yen Nghia, Ha-Dong District, Hanoi, 10000, Vietnam
3Laboratoire Lumière, Matière et Interfaces (LuMIn), FRE 2036, École Normale Supérieure Paris-Saclay, 4 Avenue des Sciences, Gif-sur-Yvette, 91190, France
4Department of Physics and Center for Nano Bio-Detection, National Chung Cheng University, Ming Hsiung, Chia Yi, 621, Taiwan
*Corresponding author: [email protected]
Duc Tu Vu https://orcid.org/0000-0002-5651-3337
Henri Benisty https://orcid.org/0000-0001-7428-7196
https://doi.org/10.1364/BOE.405759
Duc Tu Vu, Thanh-Thu Vu Le, Chia-Chen Hsu, Ngoc Diep Lai, Christophe Hecquet, and Henri Benisty, "Positive role of the long luminescence lifetime of upconversion nanophosphors on resonant surfaces for ultra-compact filter-free bio-assays," Biomed. Opt. Express 12, 1-19 (2021)
Manuscript Accepted: November 17, 2020
We introduce a compact array fluorescence sensor principle that takes advantage of the long luminescence lifetimes of upconversion nanoparticles (UCNPs) to deploy a filter-free, optics-less contact geometry, advantageous for modern biochemical assays of biomolecules, pollutants or cells. Based on technologically mature CMOS chips for ∼10 kHz technical/scientific imaging, we propose a contact geometry between assayed molecules or cells and a CMOS chip that makes use of only a faceplate or direct contact, employing time-window management to reject the 975 nm excitation light of highly efficient UCNPs. The chip surface is intended to implement, in future devices, a resonant waveguide grating (RWG) to enhance excitation efficiency, aiming at the improvement of upconversion luminescence emission intensity of UCNP deposited atop of such an RWG structure. Based on mock-up experiments that assess the actual chip rejection performance, we bracket the photometric figures of merit of such a promising chip principle and predict a limit of detection around 10-100 nanoparticles.
Fluorescence-based analysis currently evolves from a critical research tool to an enabling technology for the emerging applications of biomedicine ranging from biomedical diagnostics [1,2] and cellular imaging [3] to molecular imaging [4]. In this landscape, compact fluorescence assays are desirable in several contexts, to assist healthcare or research work without impacting the lab's limited real estate [5–8]. Handheld devices enable point-of-care testing, capable of delivering diagnosis results rapidly, easily and accurately near patient's bedside, doctor's surgery, emergency room or intensive care unit [9,10]. The huge achievements of fluorescence microscopies have most often required bulky high performance microscopes, or some still bulky optics to visualize useful features on chips such as "fluorescence spots" or individual cells. The direct contact geometry [11] using detection on a CCD or CMOS sensor chip (whose performances are nowadays very good) is much less bulky and close to the lower limit that can be devised: only illumination must be brought to the chip either in some innovative integrated fashion or with any ordinary small-size optics: small LED illuminators and light pipes, small laser heads, fiber head for remote lasers. A very thin glass bottom (say 100 µm) can limit the blurring on the short path to the chip and grant reasonable images for spots down to 200 µm diameter. Furthermore, a glass-based faceplate (polished fiber bunch) can serve as an angular filter (typically selecting NA≲1) that limits blurring and relaxes various mechanical constraints [7,12].
A daunting challenge when targeting ordinary down-conversion luminescence signals in an optical assay/sensor contact geometry is to implement the mandatory excitation rejection [5,11–13]. A huge rejection factor is needed to limit residual excitation detection below the noise floor. To do this, a filter inserted between the chip and the sensor blurs direct imaging for extra thicknesses as small as 70 µm. It is known that good filters work on absorption rather than on multilayer, but then suffer from a residual autofluorescence and also feature a substantial thickness [12,13]. These factors make the insertion of a rejection filter a very delicate task. A mitigation is to use fluorescent species with giant Stokes shift such as red-emitting quantum dots excited around λ = 400 nm, as the large shift λfluo - λexc = ΔλStokes $\gtrsim$ 200 nm makes high rejection obtainment a lot easier than the classical 50–80 nm Stokes shift of most common fluorescent dyes for biomolecules. Still, undoubtedly, the rejection issue adversely impacts the broader use of the contact geometry. Besides, the major remaining bottlenecks of both quantum dots and fluorescent dyes are their commonly met photo-instability, photobleaching and biological incompatibility, which have restricted their commercial biomedical applications.
Because they are free from such issues, non-bleaching and non-blinking lanthanide (Ln3+)-based upconversion nanoparticles (UCNPs), capable of converting near-infrared (NIR) photons into higher energy visible emission, are emerging as a new class for photoluminescence bioimaging probes [14–16]. Two key advantages for the synergy between UCNP and CMOS technology in fluorescence imaging are [17,18] (i) Silicon-based area CMOS detectors have lower quantum efficiency in the NIR range (less than 10%), compared with that in the visible range of upconversion luminescence (UCL) emission (above 50%), thus a positive factor for filter-free fluorescence imaging system; (ii) the long UCL lifetimes of UCNPs, from microseconds to even milliseconds, enable inexpensive time-gated detection schemes, capable of completely suppressing the excitation light or residual autofluorescence arising from the optical filters.
It is thus expected that combining superior UCNPs and CMOS properties may render biological analysis instruments low-cost, robust, compact and portable. In this paper, we present along this line a chip principle based on UCNPs, whose UCL lifetime can reach the millisecond range, but for which the excitation is at 975 nm, still well in the sensitivity window of silicon. Thanks to the long luminescence lifetimes, it is possible to expect a large part of the rejection ($>10^4$ to $10^6$) by electronic means: Electronic shutters inside current fast-imaging CMOS sensor arrays can be active during the laser excitation and play the role of a very efficient rejection means. Such sensors typically target crash-tests for cars, with 50-100 µs exposure time as a lower limit and down to 200-250 µs frame time for 1000 pixel-area sub-images. We are not aware that such chips have yet been used for contact imaging, but since CMOS chips are commonly used in that contact format in niche technical applications (scintillators for X-ray imaging, notably dental imaging), we may reasonably assume that players in the micro-optoelectronics silicon industry shall as well produce fast chips compatible with contact imaging, the main issue being to contact the side pads more "horizontally", and to adapt for instance a faceplate with the proper glue/resist combination.
So, as shown in Fig. 1(a), we can first hope that the electronic shutter rejection makes it possible to capture most of the upconverted photons emitted after a short excitation pulse, due to the long lifetime, without a bona fide spectral filter. Next, an optics-less contact version is straightforwardly deduced from this layout, as shown in Fig. 1(b) (compare to image 1(a) of Ref. [11] where a high-rejection filter is needed for standard fluorescence in the same compact lens-free biochip layout). It is well-known, however, that the usual multiphoton UCL efficiencies of Ln3+-doped UCNPs are very low (typically < 1%) and that the multicolor UCL emission, characterized by the electronic transitions of each Ln3+ ion (Er3+, Tm3+ and Ho3+) has a nonlinear response to excitation intensity (often power-law type). These characteristics normally result in a demand for high intensity. Fortunately, the high-intensity domain is not too acute an issue because the excitation wavelength of 975 nm is among the best tolerated (minimal absorption) ones in biological media (cells and tissues). Nevertheless, it is always better to enhance the excitation intensity directly at or close to the useful surface where the useful signal is generated and to minimize said excitation intensity elsewhere on the path. To do this, a local-field enhancement provided by a resonant waveguide grating (RWG) structure was recently shown to achieve over 104 fold enhanced UCL emission from UCNPs [19,20]. A high degree of resonance on a waveguide at the chip surface can be obtained by a quite modest corrugation depth of the waveguide, a couple of 10's of nm, which makes the topology issue very secondary for most popular assays ("spots" of DNA biochips, or the various flavors of cell assays).
Fig. 1. Schematic of (a) the filter-free CMOS-based fluorescence imaging, (b) Lens-less compact scheme with contact geometry, compare with the filter-version of Ref. [11] for classical fluorescence; (c) the low-n RWG sample for enhancing UCL emission of UCNPs, reprinted with permission from Ref. [20].
Let us describe the main features of the setup of Fig. 1(a) that we shall use as a proof of principle of a contact-type, filter-free, lens-less chip geometry, as shown in Fig. 1(b). The CMOS chip-based fluorescence imaging was constructed based on a commercial high-speed camera (Promon U750, used with AOS imaging studio v4 software). The excitation source for UCL measurement was a NIR fibered laser diode at 975 nm (common InGaAs-based model used to pump Erbium-doped fibers), which was synchronized with the detection window of the camera through a standard function generator (10 ns timing accuracy). The laser excitation time was set to ${\tau _{\textrm{pulse}}}$ = 50 µs. The exposure (detection) time for a single detection window was set to ${\tau _{\textrm{det}}}$ = 100 µs for all measurement. Other details of the sequence will be given later in Section 5.
In another variant, in order to assess as much as possible the targeted lens-less chip geometry and the associated larger collection efficiency, we inserted directly a fiber optics plate (J3182-72, Hamamatsu) behind the sample. In the targeted lens-less geometry of Fig. 1(b), there would be, of course, a much larger collection $N{A_{\textrm{coll}}} \simeq 1.0$, and a signal about 40 times larger than for $N{A_{\textrm{coll}}} \simeq 0.15$, depending on the exact source emission diagram (e.g. Lambertian in a glass-type medium above the plate). However, with our setup and ordinary lenses, we lost a significant amount of upconverted photons. Only immersion microscope objectives would collect the full 1.0 aperture, to the expense of a reduced field. This alternative would have some heuristic interest, but we believe that the simpler setup is a sufficient proof-of-principle, given the well-known properties of UCL emission in the targeted application (typically involving emission at transparent solid/aqueous medium interfaces).
To further enhance the UCL emission of UCNPs, as shown in Fig. 1(c), the best configuration is to use such a RWG for the 975 nm excitation wavelength around some prescribed angle of incidence, which matches the resonant angle of a guided-mode of the resonant grating. If the chip is excited in properly tuned optical conditions, the amount of transmitted (non-rejected) NIR light can become very small, assisting the aforementioned shutter rejection by the same token (and in addition to the differential sensor spectral sensitivity mentioned above). The main beam would be a reflected beam, while residual absorption in the waveguide would help "dumping" NIR photons and avert their undesirable scattering + detection on the image sensor.
In the following Section 2, we present the preparation process of UCNPs, including core, core-shell and core-shell-shell, which can be excited with a 975 nm laser source. In Section 3, we describe the principle in more detail and the extra uses that can be made of the various degrees of freedom. In Section 4, we discuss the photometric aspects and the kind of sensing performances that can be expected. In Section 5, we use the experiments in the far-field of Fig. 1(a) to assess the good rejection of the electronic shutter and other factors that are essential ingredients of the principle proposed in Fig. 1(b). We thus assess as much as possible the low limit of detection per pixel suggested by the system model, which turns out to be competitive with usual fluorophore down-conversion efficiencies. We conclude in the last section.
2.1 Materials
Yttrium(III) chloride (YCl3, anhydrous powder, 99.99%), ytterbium(III) chloride (YbCl3, anhydrous powder, 99.9%), erbium(III) chloride (ErCl3, anhydrous powder, 99.9%), and ammonium fluoride (NH4F, anhydrous 99.99%) were purchased from Sigma-Aldrich and stored in a dry box. Sodium hydroxide (pellets, 98%) was brought from Macron Fine Chemicals. Oleic acid (OA, technical grade, 90%) and 1-octadecene (ODE, technical grade, 90%) were purchased from Sigma-Aldrich.
2.2 Synthesis of outer shell (NaYF4) solution
The outer shell precursor was initially prepared by mixing 1 mmol of YCl3, 6 mL of OA and 15 mL of ODE in a 100 mL round bottom flask. Under a nitrogen flow, the resulting mixture was then heated to 200°C to obtain a clear solution with a yellowish color. Next, the solution was placed under vacuum and heated to 110°C for 1 h to extract the unwanted impurities, then cooled down to room temperature. In the meantime, the fresh methanol solution was prepared by dissolving 0.1 g of NaOH and 0.148 g of NH4F, followed by adding slowly into the reaction flask. Finally, the solution was heated at 110°C to extract methanol and then kept for the next stage.
2.3 Synthesis of middle shell (NaYF4:Yb) solution
For the synthesis of the middle shell precursor solution, the procedure was the same as that for the outer shell except that YbCl3 was used. Briefly, 6 mL of OA and 15 mL of ODE was mixed with 0.8 mmol of YCl3 and 0.2 mmol of YbCl3 in a 100 mL flask. Then, the mixture was heated to 200°C under a nitrogen flow with vigorous magnetic stirring. The clear solution was subsequently vacuum filtered and quickly heated to 110°C for 1 h. When the reaction solution is cooled down to room temperature, the methanol solution with 0.1 g of NaOH and 0.148 g NH4F was added dropwise into the reaction flask. After removal of methanol by evaporation, the obtained solution was kept for further use.
2.4 Synthesis of core-shell-shell (NaYF4:Yb,Er@NaYF4:Yb@NaYF4) UCNPs
The NaYF4:Yb, Er core UCNPs were initially synthesized as the protocol in our previous study [20]. Typically, 0.78 mmol of YCl3, 0.2 mmol of YbCl3, 0.02 mmol of ErCl3, 6 mL of oleic acid and 15 mL of ODE were mixed into a 100 mL flask, and heated to 200°C under a nitrogen atmosphere to obtain a transparent solution. Afterwards, the obtained solution was placed in a vacuum chamber and subsequently naturally cooled down to room temperature. Next, the methanol solution containing 0.1 g of NaOH and 0.148 g of NH4F was added into the reaction. After removal of residual methanol, the reaction solution was heated to 300°C for 1 h and then cooled down to 280°C under a gentle flow of nitrogen gas. Subsequently, the solution of the middle shell precursor was added slowly at a speed of 0.05 mL/min, followed by adding the outer shell precursor in the same conditions. The mixture was allowed to react for another 15 min, and then rapidly cooled down to room temperature by blowing a stream of nitrogen at the outside of the reaction flask. Finally, the as-synthesized UCNPs were collected by centrifugation at 6000 r.p.m for 10 min and washed with a mixture of cyclohexane/ethanol (1:1 v/v) at least two more times. After the last cycle, the as-obtained UCNPs were redispersed in toluene for further experiments. For the synthesis of core-shell NaYF4:Yb,Er@NaYF4:Yb UCNPs, the process was similar except the outer shell precursor (NaYF4) was not used.
2.5 Characterization
The crystal structure of UCNPs was determined with a Bruker APEX diffractometer (λ = 1.5406 Å). Morphology of nanoparticles was characterized with TEM (JEOL-JEM 2010). The optical measurements (UCL and lifetime spectra) were carried out using a home-built system, as described in Ref. [20].
2.6 Preparation of UCNP samples for fluorescence imaging
Typically, all as-synthesized UCNP samples (100 µL, 2 mmol/15 mL) were dropped onto glass substrates (2 cm × 2 cm). Samples were then dried naturally at the room temperature.
3. Principle of excitation light rejection in the filter-free fluorescence imaging
3.1 Fluorescence from a square pulse
We first consider the setup of Fig. 1(a), without surface enhancement. The fluorescent species are UCNPs (core, core-shell and core-shell-shell), with typical lifetimes ${\tau _\textrm{U}}$ in the range $100 - 2000\; \mathrm{\mu} \textrm{s}.$ The time-rejection issue is managed as follows. Consider an on/off square-shaped 975 nm NIR light pulse $I(t )= {I_m}\textrm{Rect}\left( {\frac{{t - {t_0}}}{{{\tau_{\textrm{pulse}}}}}} \right)$ centered at ${t_0}$ with a given peak intensity ${I_m}$ and typically ${\tau _{\textrm{pulse}}} = 10 - 100\; \mathrm{\mu} \textrm{s}$. We assume that we have the response of UCNPs as a collection of identical mono-exponential emitters with characteristic decay time ${\tau _U}$. The physical UCL signal ${I_U}(t )$ at $t \ge {t_0} + \frac{1}{2}{\tau _{pulse}}$ (after the pulse) can then be calculated according to a convolution:
(1)$$\begin{aligned} I_U(t) &= \eta(I_m)\int_{-\infty}^{t} I(t')\,\exp\left(-\frac{t-t'}{\tau_U}\right)dt'\\ &= 2\,\eta(I_m)\,I_m\,\tau_U\,\exp\left(-\frac{t-t_0}{\tau_U}\right)\sinh\left(\frac{\tau_{\textrm{pulse}}}{2\tau_U}\right) \end{aligned}$$
with $\eta ({{I_m}} )$ being the adapted nonlinear upconversion efficiency (well defined for a constant-intensity pulse).
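For the record, the closed form in Eq. (1) follows from the elementary integral of the exponential over the pulse duration, valid for $t \ge t_0 + \frac{1}{2}\tau_{\textrm{pulse}}$:
$$\int_{t_0 - \tau_{\textrm{pulse}}/2}^{\,t_0 + \tau_{\textrm{pulse}}/2} \exp\left(-\frac{t-t'}{\tau_U}\right)dt' = \tau_U\left[e^{-(t-t_0-\tau_{\textrm{pulse}}/2)/\tau_U} - e^{-(t-t_0+\tau_{\textrm{pulse}}/2)/\tau_U}\right] = 2\,\tau_U\, e^{-(t-t_0)/\tau_U}\sinh\left(\frac{\tau_{\textrm{pulse}}}{2\tau_U}\right),$$
which, multiplied by $\eta(I_m)\,I_m$, gives the second line of Eq. (1).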
3.2 Window-detection of fluorescence imaging for a filter-free CMOS chip
The UCL signal is integrated between two time bounds ${t_1}$ and ${t_2}$. We write ${t_1} = {t_0} + \frac{1}{2}{\tau _{\textrm{pulse}}} + {\tau _{\textrm{lag}}}$, where ${\tau _{\textrm{lag}}}$ represents a lag-time after the end of the pulse at ${t_0} + \frac{1}{2}{\tau _{\textrm{pulse}}}$, and we write ${t_2} = {t_1} + {\tau _{\textrm{exp}}}$, where ${\tau _{\textrm{exp}}}$ is the exposure time, ideally defined by the electronic shutter. Both times ${t_1}$ and ${t_2}$ are after the end of the excitation pulse so that the integrated signal is given by:
(2)$$F_U = \tau_U\,\tau_{\textrm{pulse}}\,\eta(I_m)\,I_m\,\textrm{sinhc}\left(\frac{\tau_{\textrm{pulse}}}{2\tau_U}\right)\left[\exp\left(-\frac{t_1-t_0}{\tau_U}\right) - \exp\left(-\frac{t_2-t_0}{\tau_U}\right)\right]$$
with $\textrm{sinhc}(x )= \textrm{sinh}(x )/x$, the "sinc" function [$\textrm{sinc}(x )= \textrm{sin}(x )/x$] but for the hyperbolic sine instead of normal sine. Compared to the maximum obtainable signal (all emitted photons) ${F_{Umax}}$= ${\tau _U}{\tau _{\textrm{pulse}}}\eta ({{I_m}} ){I_m}$ (which is the convenient quantity to access to $\eta ({{I_m}} )$ in practice), we see that we obtain the fraction ${F_U}/{F_{Umax}}$ defined by the last two factors. Defining $\alpha = \frac{{{\tau _{\textrm{pulse}}}}}{{{\tau _U}}}$, we note that $\textrm{sinhc}\left( {\frac{\mathrm{\alpha }}{2}} \right) \to 1$ when $\alpha $ vanishes. For instance, for the limit case ${t_1} = {t_0} + \frac{{{\tau _{\textrm{pulse}}}}}{2}$ (${\tau _{\textrm{lag}}} = 0$) and ${t_2} \to + \infty $, we obtain the fraction:
(3)$$\frac{F_U}{F_{Umax}} = \textrm{sinhc}\left(\frac{\tau_{\textrm{pulse}}}{2\tau_U}\right)\exp\left(-\frac{\tau_{\textrm{pulse}}}{2\tau_U}\right) = \left(\frac{\tau_U}{\tau_{\textrm{pulse}}}\right)\left[1 - \exp\left(-\frac{\tau_{\textrm{pulse}}}{\tau_U}\right)\right]$$
whose second-order expansion in $\alpha $ is $1 - \frac{\alpha }{2} = 1 - \frac{{{\tau _{\textrm{pulse}}}}}{{2{\tau _U}}}$ (first-order is 1, the left-hand side expression with "sinhc" is indeed easier to grasp as none of its factors diverges) : As can be intuited if the pulse is much shorter than the decay, about one half of the pulse width, $\frac{\tau _{\textrm{pulse}}}{2}$, becomes lost information if integration starts after the pulse, as if all the excitation energy had been deposited exactly at the pulse middle at ${t_0}$ and not distributed.
Thus, with a scheme featuring a 100 µs pulse width and a 1000 µs decay time, we shall still get 95% of available photons (and similarly for our 50 µs laser pulses and, say, a ${\tau _U} =$ 500 µs decay time). Another case worth examining is $\alpha = 1$, thus ${\tau _U} = {\tau _{\textrm{pulse}}}$. Then we still get a fraction $1 - \frac{1}{e} = 0.63$ of the signal. To get only 25% of the signal (i.e. lose 75% due to the pulse duration), we have to assume ${\tau _U} \simeq 0.26\; {\tau _{\textrm{pulse}}}$, which is below 30 µs.
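As a quick numerical check of Eq. (3), writing $\alpha = \tau_{\textrm{pulse}}/\tau_U$ for brevity:
$$\frac{F_U}{F_{Umax}} = \frac{1 - e^{-\alpha}}{\alpha} \approx 0.95\;\;(\alpha = 0.1), \qquad \approx 0.63\;\;(\alpha = 1), \qquad \approx 0.25\;\;(\alpha \approx 3.9),$$
the last value corresponding to $\tau_U \approx 0.26\,\tau_{\textrm{pulse}}$.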
3.3 Lifetime retrieval feasibility
From the above analysis, we see that if we choose a shorter integration time, but perform several distinct integrations at well-chosen intervals $[{t_{1}},\; {t_2}]$, we have several measurements for each pulse and we can assess the lifetime of the measured species, in a large window if signal-to-noise (SNR) is appropriately enhanced (on the one hand, measurements of weak signals are needed to assess the shorter decays, cf. Equation (3), on the other hand, longer decay times for a given excited population mean a lower photon flux).
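Concretely, Eq. (2) shows that two detection windows of equal width whose starting times differ by a delay $\Delta$ give integrated signals in the ratio $F_1/F_2 = \exp(\Delta/\tau_U)$, since both exponential terms acquire the same factor. A two-window estimate of the lifetime is therefore simply $\tau_U = \Delta/\ln(F_1/F_2)$ (the symbols $F_1$, $F_2$ and $\Delta$ are introduced here only for this estimate).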
3.4 Enhanced rejection with lagged detection
We also see from the general formula Eq. (2) for detection after the pulse that we have some margin to delay the shutter exposure by the lag time ${\tau _{\textrm{lag}}}$, say by a few tens of µs. This delay may be favorable to rejection, as trapped photocharges generated by the intense excitation pulse may survive in the sensor and induce a spurious signal if the shutter opens too soon after the pulse. Some extra delay helps these charges recombine or de-trap. Such an advantage can be favorably traded off against a 2-10% loss of the useful signal. We will give experimental data on the impact of this lagged detection.
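For a purely exponential decay, the extra loss caused by the lag follows from Eq. (2): with ${t_2} \to + \infty $ the captured signal is simply scaled by $\textrm{exp}(-\tau_{\textrm{lag}}/\tau_U)$. The hypothetical decay times in the Python sketch below illustrate how a 10 µs lag maps onto the 2-10% range mentioned above.

```python
import numpy as np

# Relative loss caused by delaying the shutter by tau_lag (Eq. (2), t2 -> infinity):
# the captured signal scales as exp(-tau_lag / tau_U).
for tau_U_us in (1000.0, 500.0, 100.0):     # example decay times in microseconds
    loss = 1.0 - np.exp(-10.0 / tau_U_us)   # 10 us lag
    print(f"tau_U = {tau_U_us:5.0f} us -> loss = {100 * loss:.1f} %")
# ~1%, ~2% and ~10% respectively
```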
4. Photometric aspects
4.1 Photometric aspects of the setup
Let us give more photometry-related features for the setup, Fig. 1(a): A convex 35 mm doublet lens was used to focus the laser output onto a ∼20 µm diameter spot in order to increase the local excitation intensity and thus generate stronger UCL emission from the UCNPs. The emitted UCL signal was then collected by another lens (50 mm focal length, 25 mm diameter, thus a maximum numerical collection aperture $N{A_{\textrm{max}}} \simeq 0.25$) and delivered to the image sensor of the camera through its standard objective set at infinity, whose 15.5 mm outer pupil (f-number 1.4) restricts the beam and defines the actual collection efficiency at the sample as $N{A_{\textrm{coll}}} \simeq 0.15$. The camera was connected to a computer through its USB 3.0 port for image acquisition and hardware setup. The camera and its software enable frame rates up to 6000 frames per second (fps) for a 16×16 pixel frame and still 4000 fps for 32×32 pixel frames. All data were analyzed by an ad hoc program written in Matlab, which readily allows the signal analysis of the captured images. We use the software camera counts, associated with an 8-bit digitization scheme.
4.2 Photometric aspects of the lens-free + RWG detector
Very high rejection and high sensitivity would be needed if operation at moderate power is desired, say less than 20 mW incident power in the near-infrared. To alleviate this, we can resort to the guided-mode resonance (GMR) enhancement of an RWG structure. It was demonstrated that the UCL intensity of Tm3+-doped UCNPs is strongly enhanced by utilizing RWG structures, more than $10^4$-fold compared to the same emitters in a PMMA matrix [19] or in aqueous solution [20]. This demonstration involved Q factors that are only in the range 400-1000, thanks to the UCL response being highly nonlinear (Q = $10^4$ would have been needed for a linear response). Also, the coupling was not yet perfect (the optimal coupling depends notably on the losses), as the residual transmission at the peak was generally above 60%.
We further deal here with two points that are of importance for the actual signal: the photon path from the assay to the sensor, and the range of signal that has to be reached to operate on ordinary room-temperature CMOS sensors, assuming state-of-the-art noise figures for the photodiodes.
We have mathematically treated the assay above as a "black box" with excitation as input and fluorescence as output. One should keep in mind that extraction efficiencies are a very important factor in the actual overall efficiency. Here, the main point is to channel the ${F_U}$ photons of Eq. (2) to the sensor.
We write the number N of UCL photo-electrons per pulse and per pixel as the following product: [Excitation power density] ${\times}$ [Absorption cross section per UCNP] ${\times}$ [UCNP concentration] ${\times}$ [Pixel Area] ${\times}$ [Excitation efficiency in RWG] ${\times}$ [Extraction efficiency on the bottom side] ${\times}$ [Collection efficiency in NA=1] ${\times}$ [Detector sensitivity].
The last three factors of this expression are assumed to reach 0.2 in total. We consider a pixel area, say 10${\times} $10 µm, and assume a coupled power of 10 mW along an elongated broad line of 10 µm width in the RWG (assuming for instance a residual transmission of 40% and 10% reflection, this means a 50% excitation efficiency in the RWG, the fifth factor of the formula, and fits the 20 mW excitation power suggested above; the important figure is the coupled power). For optimal use, the absorption should be 100% over the typical light path of $\ell = $ Q $\times$ wavelength (Q ∼ 1000) in the RWG's guide, so say $\ell = $ 400 µm and thus 2.5% absorption of the guided beam within 10 µm. This deposits 250 µW per pixel, which becomes, with a 0.4% UCL efficiency, 1 µW of blue-green light. This gives $2.5 \times 10^{12}$ photons/s, hence about $1.25 \times 10^{9}$ photons in 500 µs (typical detection time and fluorescence time as well). There are thus, with the last three factors taken as 0.2, N = $2.5 \times 10^{8}$ photo-electrons per pulse, which is still $5 \times 10^{3}$ times a reasonable typical room-temperature noise floor of $5 \times 10^{4}$ electrons. So the detection limit would be as low as $10^{-3}$ of the concentration C$_{400\mathrm{\mu m}}$ that causes an absorption length of NPs of 400 µm for a guided mode.
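The order-of-magnitude chain above can be condensed into a few lines of code. The Python sketch below is only an illustrative re-computation with the stated assumptions hard-coded (coupled power, absorbed fraction, UCL efficiency, chain efficiency); it reproduces the $2.5 \times 10^{8}$ photo-electrons per pulse and pixel and the margin over the assumed noise floor.

```python
import numpy as np

h, c = 6.626e-34, 3.0e8          # Planck constant, speed of light (SI)

coupled_power    = 10e-3         # W, power coupled into the RWG line (assumption)
absorbed_frac    = 0.025         # 2.5% of the guided beam absorbed over one 10 um pixel
ucl_efficiency   = 4e-3          # 0.4% upconversion efficiency
lam_emission     = 500e-9        # m, blue-green emission (approximate)
detection_time   = 500e-6        # s, detection time ~ fluorescence time
chain_efficiency = 0.2           # extraction x collection x detector sensitivity

absorbed_power = coupled_power * absorbed_frac          # 250 uW per pixel
ucl_power      = absorbed_power * ucl_efficiency        # ~1 uW
photon_rate    = ucl_power / (h * c / lam_emission)     # ~2.5e12 photons/s
photons        = photon_rate * detection_time           # ~1.25e9 photons per pulse
N_pe           = photons * chain_efficiency             # ~2.5e8 photo-electrons
print(f"{N_pe:.2e} photo-electrons per pulse and pixel")
print(f"margin over a 5e4 e- noise floor: {N_pe / 5e4:.0f}x")
```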
Theoretically, the absorption cross-section of the Yb3+ ion is about $1.15 \times 10^{-20}$ cm2 at 980 nm [21]. A unit cell of the β-phase NaYF4-based UCNPs has lattice parameters $a = 0.591$ nm and $c = 0.353$ nm, as reported in Ref. [22]. Therefore, if we model a CSS UCNP as a regular hexagonal-prism-shaped single crystal having the above average lattice parameters and the following size: base edge length ${a_{UCNP}} = 50$ nm and height ${h_{UCNP}} = 25$ nm (see Section 5 below), it contains $1.317 \times 10^{6}$ unit cells. It is also well known that a unit cell of NaLnF4 contains two Ln3+ ions (here 78% Y, 20% Yb, 2% Er), which dictates as many as $5.27 \times 10^{5}$ Yb ions per UCNP particle, and a naive 980 nm cross section of ${\sigma _{UCNP}} \simeq 6.06 \times 10^{-15}$ cm2. To go further, we need guided-mode data, say a typical vertical squared-field profile of effective height ${h_{wg}} \simeq$ 300 nm in typical RWG waveguides. A layer of 25 nm thickness (=${h_{UCNP}})$ with 1 particle every ${a_{pix}} =$ 10 µm (pixel size), overlapping a guided mode with a vertical confinement factor $\mathrm{\Gamma } \sim {h_{UCNP}}$ / ${h_{wg}}$ of about 0.08, is a heuristic limit case. The guided-beam effective cross section above the pixel being ${a_{pix}}{h_{wg}} \simeq 3 \times {10^{ - 8}}$ cm2, a single particle attenuates the beam by a factor ${\sigma _{UCNP}}/({a_{pix}}{h_{wg}}) \simeq 2 \times 10^{-7}$. To reach the above-mentioned criterion of $10^{-3} \times 2.5\% = 2.5 \times 10^{-5}$ attenuation, it is thus sufficient to have 125 UCNPs per pixel. We thus claim, given the margin that can still be improved, that the detection limit of this room-temperature scheme is rather in the 10-100 UCNPs per pixel range, which compares well with standard fluorescence labels. Coarsely speaking, the RWG geometry and the high Q can compensate for the lower efficiency of UCNPs.
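The per-particle bookkeeping can be cross-checked in the same spirit; the sketch below starts from the quoted Yb-ion count and cross-section (rather than re-deriving the unit-cell count) and recovers the ∼125 UCNPs per pixel needed to reach the attenuation criterion.

```python
sigma_Yb    = 1.15e-20            # cm^2, Yb3+ absorption cross-section at 980 nm [21]
N_Yb        = 5.27e5              # Yb ions per CSS UCNP (from the unit-cell count above)
sigma_UCNP  = N_Yb * sigma_Yb     # ~6.1e-15 cm^2 per particle

a_pix, h_wg = 10e-4, 300e-7       # cm: pixel width (10 um) and mode effective height (300 nm)
beam_area   = a_pix * h_wg        # ~3e-8 cm^2 guided-beam cross-section above one pixel

atten_per_particle = sigma_UCNP / beam_area        # ~2e-7
target_atten       = 1e-3 * 0.025                  # 10^-3 of the 2.5% per-pixel absorption
print(f"attenuation per particle: {atten_per_particle:.1e}")
print(f"UCNPs needed per pixel:   {target_atten / atten_per_particle:.0f}")
# ~124, i.e. of the order of the 125 quoted in the text
```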
For the CSS UCNP experiments below, we drop-casted 100 µL of CSS solution dispersed in toluene (77 mg/mL) onto the glass substrate (2 cm × 2 cm). As reported in Ref. [22], UCNPs with a size of 100 nm have a molecular weight of $10^{3}$ MDa, corresponding to $1.66 \times 10^{-12}$ mg/particle. This resulted in an average concentration of $1.16 \times 10^{10}$ particles/mm2. The other kinds of UCNPs (core and core-shell) were prepared at the same molar concentration as the CSS UCNPs, and thus have the same particle density. The spot size of the focused 975 nm laser excitation beam is 20 µm, so that around $4.6 \times 10^{6}$ UCNPs are probed in one spot, equivalent to $1.4 \times 10^{6}$ per pixel. However, the last factors are lower than the "ideal" value of 0.2 by 1-2 orders of magnitude at least (among other things there is no RWG), so a limit of detection is rather expected around $10^{4}$ UCNPs per pixel in our case. We defer the detailed optical figure of merit and precise noise-floor estimates to further work and will only discuss below the broad agreement with the above estimates.
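The deposited particle density and per-spot number follow from simple bookkeeping; the sketch below (an illustrative cross-check using the theoretical particle mass of Ref. [22] and a square-spot approximation) reproduces the quoted orders of magnitude.

```python
volume_uL       = 100.0                 # uL drop-cast
conc_mg_per_mL  = 77.0                  # mg/mL in toluene
area_mm2        = 20.0 * 20.0           # 2 cm x 2 cm substrate
mass_per_particle_mg = 1.66e-12         # mg per 100 nm UCNP (10^3 MDa, Ref. [22])

total_mass_mg = volume_uL * 1e-3 * conc_mg_per_mL          # 7.7 mg deposited
particles     = total_mass_mg / mass_per_particle_mg       # ~4.6e12 particles
density_mm2   = particles / area_mm2                       # ~1.2e10 particles/mm^2

spot_area_mm2 = (20e-3) ** 2            # 20 um x 20 um excitation spot (square approximation)
print(f"density: {density_mm2:.2e} /mm^2, per spot: {density_mm2 * spot_area_mm2:.1e}")
# ~1.16e10 /mm^2 and ~4.6e6 UCNPs per spot
```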
5.1 Characterization of UCNPs
The basic structural (TEM, XRD) characterization results of the three kinds of UCNPs are shown in Figs. 2(a)–2(b) and their upconversion fluorescence spectra in Fig. 2(c). We also investigated the decays of the UCL emission at the dominant green emission ($^4$S$_{3/2}$→$^4$I$_{15/2}$ transition), see Fig. 2(d). The measured UCL lifetime, ${\tau _m}$, is determined by the rate at which the ions leave their excited state, which can be written as a combination of the radiative transition rate ($1/{\tau _R}$) and the non-radiative relaxation rate (${W_{NR}}$), according to the following equation:
(4)$$\frac{1}{{{\tau _m}}} = \frac{1}{{{\tau _R}}} + {W_{NR}}$$
Fig. 2. Characterization of UCNPs. (A) TEM image of NaYF4:Yb3+,Er3+@NaYF4:Yb3+@NaYF4 CSS UCNPs. (B) XRD pattern of C, CS, CSS samples and standard beta-phase NaYF4 data (JCPDS file no. 16-0334). (C) UCL emission spectra in the wavelength range of 500-700 nm of the as-synthesized samples (C, CS, CSS) under 976 nm excitation. (D) Lifetime measurement of the UCL emission at 545 nm obtained from C, CS, CSS samples.
Figure 2(d) clearly displays that the UCL lifetime was effectively extended from 258 µs (core) to 686 µs (core-shell), resulting from the prolongation of the diffusive energy migration of the Yb3+ ions [23]. With the further addition of the inactive shell, an even longer green-emission lifetime, 979 µs for the CSS UCNPs, was observed under 976 nm excitation. In fact, the growth of an inert shell of a material similar to the interior core does not result in a significant change of the local crystal field surrounding the dopants, so the radiative rate ($1/{\tau _R}$) is the same for both the CS and CSS UCNP samples. As a result, the longer lifetime at each UCL emission peak observed in the CSS-structured UCNPs indicates a lower non-radiative relaxation rate [23–26], which agrees with the result shown in Fig. 2(d). These long-lifetime, highly efficient UCNPs are ideal photoluminescence probes for ultra-compact filter-free bio-assays.
5.2 Detector rejection vs. lag time
As a proof of concept, the proposed home-built imaging system based on a CMOS-chip high-speed camera can be used to assess both the UCL emission and lifetime of UCNPs, as introduced in Fig. 1(a). Figure 3(a) illustrates the timing diagram for the operation of filter-free CMOS chip-based fluorescence imaging. A NIR fibered laser diode with a wavelength of 975 nm was used as the excitation source, which can be modulated at the desired frequency with negligible transients. Here we typically operate with a frame time of ${\tau _{frame}} = $ 200 µs, thus at 5000 fps: the high-speed camera allows this rate for a small but arbitrarily located 16×16 pixel frame with ${\tau _{\textrm{exp}}} = $ 100 µs exposure. The control signal for the excitation light is synchronized with the detection window gate in the image sensor. We essentially want to adjust the lag time ${\tau _{\textrm{lag}}}$ and the detection (electronic shutter open) duration ${\tau _{\textrm{exp}}}$ as indicated in Fig. 3(a). A negative ${\tau _{\textrm{lag}}}$ corresponds to the laser being still on during the exposure interval. We take several images after each pulse to evaluate both the intensity and the decay of the UCL emission. Technically, we use the sync signal of the camera to trigger, through the function generator, the next pulse with a long delay that is a large multiple of the frame time plus a shift of a fraction of the period. For simplicity, we focus on this sole fractional shift that we call ${\tau _{lag}}$. All in all, in this measurement, the 975 nm laser source eventually delivers an average power of 0.12 mW with a 50 µs pulse width at a repetition rate of only 50 Hz, providing a time interval of 20 ms between subsequent pulses, with ${N_{post}} = $ 100 post-pulse frames after each pulse.
Fig. 3. Impact of excitation light at short lags for a glass substrate (no UCNP) due to the non-ideal shuttering: (a) Timing diagram for the operation of filter-free CMOS chip-based fluorescence imaging. Average pixel intensity of the detected signal plotted over the recorded time, obtained from (b) dark noise, and laser excitation "noise" at different ${\tau _{lag}}$: (c) 10 µs, (d) 5 µs, (e) 0 µs.
On the physics side, the background signals on a glass substrate were first recorded under the same camera conditions later used for the measurement of the UCL emission, with the laser being either off (dark-noise signal) or on (excitation background signal, without UCNPs), to find the optimal conditions for image acquisition and assess the degree of non-ideal camera shuttering. Note that the signal was collected and directed to the image sensor in the camera by a lens at an off-axis angle relative to the laser direction to reduce the excitation light delivered to the camera (on-axis measurements are addressed below in Section 5.5). An average pixel intensity for each of the frames, hereafter referred to as ${I_k}$, was determined by averaging over all x-y pixel addresses in the image sequence, according to the equation:
(5)$${I_k} = \frac{1}{{{N_{pixel}}}}\mathop \sum \limits_{x = 1}^{16} \mathop \sum \limits_{y = 1}^{16} S({x,y,k} )$$
where $x,y$ are pixel coordinates, k is the frame index, ${N_{pixel}} = 256 = {16^2}$ and $S({x,y,k} )$ represents the pixel signal. Figures 3(b)–3(e) show the average pixel intensities for all captured frames, plotted over time, allowing the evaluation of the dark and excitation noises. They clearly evidence that the excitation light signal captured at ${\tau _{lag}} = 10\; \mathrm{\mu} s$ (Fig. 3(c)) remains on the order of the dark noise of the image sensor (see Fig. 3(b)). As ${\tau _{lag}}$ was decreased, the scattered light from the excitation pulse is seen in all the corresponding frames, $k = {k_0} + m{N_{post}}$ ($m = $ 0 to 10 here, ${k_0} < {N_{post}}$ is an experimental offset), giving rise to an increase of the light captured in a single exposure window. This signal is about 10 times the dark noise at ${\tau _{lag}} = 5\; \mathrm{\mu} s$ (with large fluctuations) and becomes ∼140 times the dark noise at ${\tau _{lag}} = 0\; \mathrm{\mu} s$, now in a very reproducible fashion. It is likely that trapped photoelectrons generated in the camera chip during the laser pulse are de-trapped within a time scale of a few µs at most, so that after 10 µs no signal can be detected (a multi-exponential behavior is likely for such trapping, but its analysis goes beyond the scope of this paper and is probably very much camera-specific, in spite of the numerous common features of CMOS technologies across foundries). Therefore, a choice such as ${\tau _{lag}} = 10\; \mathrm{\mu} s$ ensures that the excitation pulse can be totally rejected in this off-axis configuration (we will provide a corresponding statement for the on-axis case below), hence a promising first step for filter-free CMOS chip-based imaging. This value was thus chosen for the next step of recording the UCL signal of UCNPs.
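For reference, the per-frame averaging of Eq. (5) amounts to a single spatial mean over the recorded stack. A minimal Python sketch is given below; the array name and the placeholder data are ours (the actual analysis was done with an ad hoc Matlab program).

```python
import numpy as np

# stack: recorded image sequence of shape (n_frames, 16, 16), e.g. loaded from file;
# here replaced by placeholder Poisson-noise data
stack = np.random.poisson(2.0, size=(1000, 16, 16)).astype(float)

I_k = stack.mean(axis=(1, 2))    # Eq. (5): average pixel intensity of each frame k
print(I_k.shape, I_k[:5])
```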
5.3 Upconversion filter-free measurements
When a short-pulsed NIR excitation source irradiates a specimen carrying upconversion nanoprobes, the subsequent emission of higher-energy photons in the visible region is generated, with a long exponential decay on the order of milliseconds. For fluorescence imaging acquisition using traditional photoluminescence probes, i.e., organic dyes or quantum dots, with rapid exponential decays of typically nanoseconds to several tens of nanoseconds, the excitation light must be filtered out by a high-rejection optical filter to ensure that only the desired fluorescence light is collected on the image sensor or detector. In this study, the background signal associated with the NIR excitation beam was effectively eliminated by the proper selection of the detection time window of the high-speed camera. Taking advantage of the long luminescence rise and decay times associated with UCNPs, the capture of the UCL signal was performed after the excitation pulse. This electronic shuttering allows a high rejection of the excitation light and negligibly penalizes, time-wise, the high collection efficiency of the UCL emission light, as was discussed in the mathematical model (Sections 3.2–3.3).
The timing principle of the UCL measurements based on filter-free fluorescence imaging is outlined in Fig. 4(a). According to the aforementioned calibration of the background noise, the capture conditions, namely a laser pulse width of 50 µs, an exposure time of 100 µs and a delay (${\tau _{lag}}$) of 10 µs, were chosen as the guideline for the measurement of the UCL emission signal. By counting all pixel intensity values within every captured window of a 10-pulse sequence of 200 ms (1000 frames), a luminescence decay curve can be deduced. Here we average over the ${N_{frame}} = $ 10 frames that have the same $n$-th post-pulse position in the sequence, i.e. $k = {k_0} + m{N_{post}} + n$, which is associated with the decay "instant" $\tau = n{\tau _{frame}} = n \times 200\; \mathrm{\mu} s$:
(6)$${I_{\tau}} = \frac{1}{{{N_{frame}}}}\frac{1}{{{N_{pixel}}}}\mathop \sum \limits_{m = 1}^{{N_{frame}}} \mathop \sum \limits_{x = 1}^{16} \mathop \sum \limits_{y = 1}^{16} S(x,y,{k_0} + m{N_{post}} + n)$$
Fig. 4. (A) Timing diagram for the operation of filter-free CMOS chip-based fluorescence imaging. Average pixel intensity of the detected signal plotted over the recorded time, obtained from (B) core, (C) core-shell, (D) core-shell-shell UCNPs at ${\tau _{lag}}$ = 10 µs.
Figures 4(b)–4(d) reveal the intensity and decay of the UCL emission obtained from the upconversion samples (C, CS and CSS). Comparing the peak values for each sample confirms that the UCL intensity of the CSS UCNPs is the strongest among these samples (17 vs. 12 for CS and 5 for C), the same trend as in the UCL spectra displayed in Fig. 2(c). The time-integrated values $\smallint {I_\tau }\; d\tau $, akin to the number of emitted photons, display an even stronger trend (85 for CSS vs. 36 for CS vs. 12 for C). Furthermore, given the high SNR, the UCL lifetime can be calculated here from the ratio of the luminescence intensities within two adjacent count windows N1, N2 as shown in the inset of Fig. 4(d), as expressed in the following equation:
(7)$$\tau = \frac{{\mathrm{\Delta }T}}{{ln({{I_1}/{I_2}} )}}$$
where $\mathrm{\Delta }T$ represents the time interval between two time gates of equal width, and ${I_1}$ and ${I_2}$ are the integrated luminescence intensities within the corresponding time gates, respectively. Analysis of these decay curves yielded lifetime estimates of 875, 534 and 350 µs for the core-shell-shell, core-shell and core structured UCNPs, respectively, which are close to the measured lifetimes displayed in Fig. 2(d). These results demonstrate the feasibility of obtaining direct UCL imaging without the need for an optical filter, providing a powerful tool for high-throughput imaging with a high rejection ratio of the background. A contact imaging device would have exactly the same timing behavior, only the photometric issues would be different (and more favorable to emission collection). We note that the large margin over noise observed here is compatible with the various estimates made in Section 4, and thus makes it more plausible that an attainable detection limit of the lens-free contact scheme is indeed in the 10-100 UCNPs per pixel range.
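The decay-curve binning of Eq. (6) and the two-gate ratio of Eq. (7) are equally compact. The Python sketch below, with synthetic data standing in for the recorded stack, illustrates the procedure and recovers the input lifetime.

```python
import numpy as np

N_post, N_frame, tau_frame = 100, 10, 200e-6   # frames per pulse, pulses averaged, frame time
tau_true = 900e-6                              # synthetic lifetime for the placeholder data

# Placeholder stack: N_frame pulses x N_post post-pulse frames x 16 x 16 pixels
t = np.arange(N_post) * tau_frame
stack = np.exp(-t / tau_true)[None, :, None, None] * np.ones((N_frame, N_post, 16, 16))

# Eq. (6): average over pulses and pixels for each post-pulse position n
I_tau = stack.mean(axis=(0, 2, 3))

# Eq. (7): lifetime from two gates of equal width separated by Delta T
# (here two successive frames, so Delta T = tau_frame)
I1, I2 = I_tau[1], I_tau[2]
tau_est = tau_frame / np.log(I1 / I2)
print(f"estimated lifetime: {tau_est * 1e6:.0f} us")   # recovers ~900 us here
```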
5.4 Cross-check of infrared rejection
To confirm the high amount of UCL signal captured in our design and technique (excitation attenuation, negligible noise and low cost), and in particular the very limited impact of the lag time on this amount, we devised a way to collect the highest possible UCL emission intensity from the UCNPs in this generic scheme, which is expected for ${\tau _{lag}} = 0\; \mathrm{\mu} s$. To this end, we recorded the signal in two photometric configurations, with and without a standard excitation filter (KG5, cutting the NIR), since the situation with the filter is insensitive to excitation at any time in the sequence. Figure 5(a) shows the luminescence decay curve obtained from CSS UCNPs in the filter-free configuration at ${\tau _{lag}} = 0\; \mathrm{\mu} s$, where the falling edge of the excitation pulse coincides with the start of the exposure window. Compared with the result obtained at ${\tau _{lag}} = 10\; \mathrm{\mu} s$ (Fig. 4(d)), the signal intensity of the former is much higher than that of the latter, indicating that a large amount of pumping light had illuminated the image sensor and created photo-carriers that were able to show up as a signal in the detection window.
Fig. 5. Average intensity of the detected signal plotted over the recorded time, obtained from CSS UCNPs at (a) ${\tau _{lag}}$ = 0 µs w/o KG5, (b) ${\tau _{lag}}$ = 0 µs with KG5, (c) ${\tau _{lag}}$ = 10 µs with KG5.
To determine how the UCL signal is affected by the addition of the 10 µs lag time, we compare the UCL imaging for both lag times with the KG5 filter inserted, getting rid of any NIR light. As shown in Figs. 5(b) and 5(c), the light intensity captured in both cases is entirely similar, both being much weaker than that obtained without the KG5 filter at ${\tau _{lag}} = 0\; \mathrm{\mu} s$. The filter also somewhat attenuates the spectrum in the region of the UCL emission. The similar amount of signal at ${\tau _{lag}} = 10\; \mathrm{\mu} s$ implies that the UCL emission light is captured with negligible loss. Besides confirming the theory for a simple exponential decay, which predicts a < 2% impact, an extra favourable factor for the unchanged signal could be the long rise time of the UCNP luminescence [27], which facilitates its centering in the exposure window. Modelling this rise is a refinement that would, however, introduce complexities that we do not need for our central claims.
5.5 On-axis geometry
Next, we study the fluorescence imaging of UCNPs with the camera at an on-axis angle of incidence. We do this study in the absence of an RWG or of any "light" rejection filter such as a simple Bragg mirror. A good RWG or a few-layer Bragg mirror could reduce the NIR transmission by a factor of 10. So our study is clearly a worst case: almost all NIR photons of the laser reach the sensor given the various apertures involved. The 975 nm diode laser is pulsed at a repetition rate of 83.3 Hz with an average power of 110 µW and a pulse duration of 50 µs (the peak power is thus 26.4 mW). The high-speed camera is now set to acquire images at a frame rate of 250 fps (thus ${N_{post}} = $ 3) with a detection time window of 100 µs. This choice allowed us to treat more signal-containing frames than the previous ${N_{post}} = $ 100 choice. The delay time is set to ensure the same ${\tau _{lag}} = 10\; \mathrm{\mu} s$ as in the previous experiments under the off-axis geometry. Figures 6(a) and 6(b) display the intensity traces from the glass substrate and from CSS UCNPs, respectively. The signal is about 10 times larger in the latter case. So, there are now photocarriers generated by the laser and seen later in spite of the 10 µs lag (the laser is completely off after at most a fraction of a µs, being regulated by a current source with sufficient bandwidth and undergoing a large transient). To evaluate a precise SNR in this situation, we must subtract the laser signal from the total signal. The important quantities are thus the standard deviations of each signal. As can be seen in the histograms of 200 peaks of Figs. 6(c) and 6(d), which relate to the glass substrate and CSS case respectively, both signals have a similar standard deviation: it is characterized by a FWHM width $\mathrm{\Delta }{S_{FWHM}} = $ 0.20 in both cases. This is likely to be readout noise, as it is only 2–4 times larger than the dark noise seen in the above section. In our conditions, the inhomogeneity of the image may play a role in the exact value of such quantities and may result in such modest discrepancies. The important point is the equal value of both widths and the sufficient plausibility of a Gaussian distribution. It means that the standard deviation and FWHM width of the difference signal that represents the upconversion luminescence (${S_{UCL}} = {S_{CSS}} - {S_{glass}}$) are only multiplied by $\sqrt 2 $. The FWHM width thus reaches 0.28 and the standard deviation 0.12 $\left( { \simeq 0.28/2\sqrt {2\ln 2}} \right)$. Hence the ratio of the UCL signal to the standard deviation ("noise") is ∼15/0.12 and exceeds 100 in this experiment, where a large part of the unwanted excitation signal turns out to be deterministic.
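The noise bookkeeping used here is standard Gaussian error propagation; the short Python sketch below makes it explicit with the values quoted above.

```python
import numpy as np

fwhm_single = 0.20                             # FWHM of the glass-only and CSS histograms (counts)
fwhm_diff   = np.sqrt(2) * fwhm_single         # subtracting two similar signals -> width x sqrt(2)
sigma_diff  = fwhm_diff / (2 * np.sqrt(2 * np.log(2)))   # Gaussian FWHM -> standard deviation

S_UCL = 15.0                                   # mean UCL signal after subtraction (counts)
print(f"FWHM = {fwhm_diff:.2f}, sigma = {sigma_diff:.2f}, SNR = {S_UCL / sigma_diff:.0f}")
# FWHM ~ 0.28, sigma ~ 0.12, SNR ~ 125
```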
Fig. 6. On-axis geometry: The average image intensity records (250 fps, 83.3 Hz laser), obtained from (a) glass sample (only excitation light), (b) CSS UCNPs. Intensity histograms for the signals from (c) excitation light and (d) CSS UCNPs, showing an identical fitted Gaussian width of 0.20. (e) Log-log plot of the signal dependence on incident power (we use the average power as measured; the peak power is stronger by a factor of 240 for these 50 µs pulses every 12 ms). The nonlinearity of the UCNP signal (obtained by subtraction), with fitted exponent n = 1.53, is clear, while the exponent is near unity for the excitation light.
To cross-check the origin of the UCL emission, it is appropriate to verify the nonlinear character of the UCL emission. We thus studied the excitation-intensity dependence of the UCL emission intensity of the UCNP sample by repeating the experiment for five different excitation powers $({{P_{exc}}} )$ in the range of 60−130 µW. The UCL emission intensity emitted from the UCNPs was again inferred by subtracting the laser background (as in Fig. 6(a)) from the detected signal (as in Fig. 6(b)), ${S_{UCL}}({{P_{exc}}} )= {S_{CSS}}({{P_{exc}}} )- {S_{glass}}({{P_{exc}}} )$. Figure 6(e) displays the log-log plot of the inferred signal intensities ${S_{UCL}}({{P_{exc}}} )$ and ${S_{glass}}({{P_{exc}}} )$ versus the excitation power ${P_{exc}}$. It is observed that ${S_{UCL}}({{P_{exc}}} )$ is proportional to the nth power of the excitation power, where the fitted exponent n is 1.5 for the UCL emission from CSS UCNPs, consistent with a two-photon process for the transitions of the erbium ions. For comparison, ${S_{glass}}({{P_{exc}}} )$ shows a fitted exponent n of 0.99, with the various noise sources plausibly accounting for the minute difference from unity.
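The exponent n is obtained from a straight-line fit in log-log coordinates; a minimal Python sketch, with made-up power and signal values standing in for the measured ones, is:

```python
import numpy as np

P_exc = np.array([60., 80., 100., 115., 130.])      # uW, placeholder excitation powers
S_ucl = 0.01 * P_exc ** 1.5                         # placeholder UCL signal ~ P^1.5

slope, intercept = np.polyfit(np.log(P_exc), np.log(S_ucl), 1)
print(f"fitted exponent n = {slope:.2f}")           # ~1.5 for a two-photon-like process
```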
We thus observe that the technique is robust to the on-chip geometry situation. Even a head-on laser beam of typically 40 mW can be accommodated with a background that does not appreciably harm the dynamical range, and with a still very modest and entirely tractable contribution to the standard deviation. Let us again insist that this is a worst case: in the targeted geometry of a chip, the increased collection efficiency of UCL with an aperture of 1.0 (faceplate aperture) compared to 0.15 here should bring a 30- to 40-fold improvement, while an optimized GMR mode of the RWG would reduce the NIR excitation beam by a factor of 10. With this factor-of-300 margin, and with the excitation signal enhancement of $10^{4}$ due to the RWG grating field concentration, we ideally have a factor of over $2 \times 10^{6}$ at hand. We can thus hope that a single pixel (or pixel group) can detect 10 NPs instead of an estimated ∼$3 \times 10^{4}$ in our test experiments (inferred from the fact that we currently have $5 \times 10^{6}$ UCNPs in the imaged spot and a UCL-to-noise ratio over 100; however, we did not undertake concentration-dependent experiments, which are more delicate to perform). This broadly agrees with the lines of the theoretical calculation presented in Section 4.2. We believe that on this basis, our technique will find applications in small, robust and low-cost fluorescence-based devices.
5.6 Prospective imaging performance with the fiber-optic faceplate
Our technique has shown high excitation-light rejection in filter-free, lens-less CMOS-based fluorescence imaging; however, there is still room for further improving the performance of this microscopy platform. To evaluate the feasibility of using this fluorescence imaging technique for contact imaging and an RWG detector, we performed the measurement with slight modifications. One emerging possibility toward lens-free contact imaging is the use of an additional planar optical component, i.e. a fiber optic faceplate (FOP), inserted directly after the sample substrate as illustrated in Fig. 7(a). In this modified configuration, the rapidly diverging UCL emission from the specimen (free-space propagation modes) is converted into the guided modes of a 2D array of fiber-optic bundles without spatial spreading, thanks to the high numerical aperture (up to 1.0) [6,7,28,29]. Because such chips have not yet been prepared for contact imaging, we performed a simple test to estimate the spot width captured by the camera. The imaging system is kept the same as in Fig. 1(a), except for the insertion of the FOP between the sample and the sensor. In addition, to excite the UCL emission generated from the UCNPs, the 975 nm laser beam coupled to this setup has an off-angle (θ) with respect to the surface normal direction. This configuration simulates the resonant excitation condition of an RWG structure, i.e. the incident angle and wavelength of the excitation light simultaneously matching the resonant angle and wavelength of a guided mode of the RWG. Note that the image size was kept at a 32×32 pixel frame, whereas the other experimental conditions for the camera and laser pulse were similar to those of the study in Section 5.5.
Fig. 7. Experiments with a fiber-optic faceplate added. (a) Schematic of the filter-less CMOS-based fluorescence imaging with the FOP. θ is the excitation beam incident angle relative to the FOP + sample surface normal direction. (b) Images of the signal obtained from the CSS samples, averaged over 168 frames, at the indicated lag times. (c, d) Laser-only and CSS UCNP data vs. lag time ${\tau _{lag}}$. The lag time ${\tau _{lag}}$ is purposely indicated on a very nonlinear horizontal scale [$X \propto |{\tau _{lag}}|^{1/4} \times \textrm{sign}({{\tau_{lag}}} )$]: (c) The average intensities for the signals from the CSS UCNP sample (blue curve) and the excitation light (black curve), and the difference (red curve). (d) The typical peak width for the excitation light (black curve) and for the CSS UCNP sample (red curve).
In order to evaluate the imaging performance of our lens-free on-chip technique, the signals from the excitation light and from the CSS UCNPs were captured and analyzed at different lag times. Figure 7(b) displays images of the measured spots obtained from the CSS UCNP sample at the different indicated lag times. Each image, averaged over 168 frames, was analyzed by an ad hoc program written in Matlab. It clearly shows that the detected spot becomes larger and brighter as ${\tau _{lag}}$ decreases to negative values, as a consequence of the laser excitation signal being captured in the exposure window. At the positive value ${\tau _{lag}} = 5.87\; \mathrm{\mu} s$, it exhibits a bright spot corresponding to the UCL signal and a relatively weak laser background. Conversely, the UCL signal is indistinguishable from the laser background at negative ${\tau _{lag}}$. This implies that, by selecting a proper ${\tau _{lag}}$, we can filter out the excitation light efficiently.
Hereafter, we use an adapted abscissa $X = |{\tau _{lag}}|^{1/4} \times \textrm{sign}({{\tau_{lag}}} )$, with the exponent ¼ arbitrarily chosen to give a gently stretched rendering of the transition between the exposure window starting either before (${\tau _{lag}} < 0,\; \; X < 0$) or after (${\tau _{lag}} > 0,\; \; X > 0$) the end of the laser pulse. Figure 7(c) makes use of this abscissa to depict a quantitative comparison of the average pixel intensities of the excitation-only signal (black curve) and of the raw UCL emission signal (blue curve). The red difference curve shows the signal that comes from the UCL alone and is constant to better than 5% over the two rightmost data points here. It suggests that beyond a positive ${\tau _{lag}}$ as small as 1 µs, there is an insignificant change in the captured intensity of the raw UCL emission signal.
The widths of the signal spots of the aforementioned samples were also analyzed numerically using Matlab. The intensity distribution in each imaging spot displayed in Fig. 7(b) was fitted to a Gaussian lineshape model to extract a proper measure of the spot width. Figure 7(d) shows the width of the signal spot for the excitation light (black curve) and for the CSS UCNP sample (red curve) as a function of the same stretched abscissa X as above. We analyzed the measured signals before they saturated, which occurs at ${\tau _{lag}} ={-} 4.13\; \mathrm{\mu} s$. According to the numerical analysis of the laser and UCL emission signals at negative ${\tau _{lag}}$, both yield similar widths, as the laser signal dominates. Width variations are observed for the CSS UCNP sample between different ${\tau _{lag}}$, which are attributed to variations in the number of UCL photons captured in the exposure window. When the UCL signal dominates, the spot width is smaller than that of the laser spot and closer to the system resolution (about 100 µm). The results suggest that the optimal region of the lag time for capturing the UCL emission signal starts after about 4 µs, to ensure both highly efficient UCL detection and high excitation light rejection. The evolution of the recorded signal clearly displays the advantages of our technique, especially in terms of excitation light rejection.
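Spot widths were extracted by fitting a Gaussian profile; a generic sketch of such a fit (using scipy rather than our original Matlab routine, with placeholder data) is shown below.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, A, x0, w, offset):
    return A * np.exp(-((x - x0) ** 2) / (2 * w ** 2)) + offset

# 1D cross-section through the spot (placeholder data, ~100 um wide spot)
x = np.linspace(-300, 300, 61)                       # um
y = gauss(x, 10.0, 0.0, 45.0, 0.5) + 0.2 * np.random.randn(x.size)

popt, _ = curve_fit(gauss, x, y, p0=[y.max(), 0.0, 50.0, 0.0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])
print(f"fitted spot FWHM ~ {fwhm:.0f} um")
```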
Besides, the imaging spot of the UCL signal remains relatively small, without a noticeable degradation of the UCL intensity when the FOP is used, which is a good starting point for our further study of the filter-free, optics-less system. Therefore, contact imaging using a dense FOP to collect and deliver the UCL emission from the sample to the image sensor without the use of any lenses is the next approach to achieve high throughput and fine spatial resolution. Another advantage of tuning the incident angle is to eliminate the shadow of the microbiological specimen on the image sensor, thus improving the image sensitivity in the bright field (non-fluorescent imaging) [28].
Such novel imaging architecture combining the long UCL lifetime of UCNPs and the high-speed imaging capabilities of the camera could be especially valuable for high-throughput imaging, compactness, and low-cost systems expanding fluorescence-based analysis.
Herein, we present a new concept of synergy between UCNPs and fast (1−100 µs) CMOS imaging technology for achieving filter-free, lens-less fluorescence imaging at a modest cost. By taking advantage of the key characteristics of long UCL emission and high-speed imaging capabilities, we demonstrate the feasibility of fluorescence detection with both off- and on-axis excitation geometries. In our fluorescence imaging technique, the samples of interest are pumped by the 975 nm excitation source, whose excitation pulse is synchronized with, and prior to, the detection window of the camera. By choosing a proper lag time between the laser pulse and the detection window, a high rejection of the excitation light can be achieved while the UCL collection remains excellent. Another benefit of using UCNPs as fluorescent nanoprobes is their long decay time (possibly assisted by a non-negligible rise time), which results in an efficient collection of the UCL signal within the exposure window of the camera. Furthermore, multiple images in the sequence are useful: by analyzing the luminescence intensities of the Ln3+-doped UCNPs at our available rates, the UCL lifetime can be assessed, showing very good agreement with the otherwise measured UCL lifetime, a neat way to control the signals. A further investigation coupling a fiber-optic faceplate (FOP) into the fluorescence system was also performed, revealing the high potential for the next step of contact imaging and combination with an RWG detector. Such a compact fluorescence imaging platform is expected to enable ubiquitous, inexpensive and high-performance imaging systems, which can open a new avenue for fluorescence imaging assays.
Agence Nationale de la Recherche (ANR-17-CE09-0047); Ministry of Science and Technology, Taiwan (107-2923-M-194-001-MY3, MOST-107-2923-M-194-001-MY3).
All other authors declare that they have no competing interests.
1. X. Zhang, A. Fales, and T. V. Dinh, "Time-resolved synchronous fluorescence for biomedical diagnosis," Sensors 15(9), 21746–21759 (2015). [CrossRef]
2. Y. Fan, S. Wang, and F. Zhang, "Optical multiplexed bioassays improve biomedical diagnostics," Angew. Chem. Int. Ed. 58(38), 13208–13219 (2019). [CrossRef]
3. F. Wang, S. Wen, H. He, B. Wang, Z. Zhou, O. Shimoni, and D. Jin, "Microscopic inspection and tracking of single upconversion nanoparticles in living cells," Light: Sci. Appl. 7(4), 18007 (2018). [CrossRef]
4. T. Lagache, A. Grassart, S. Dallongeville, O. Faklairs, N. Sauvonnet, A. Dufour, L. Danglot, and J. C. O. Marin, "Mapping molecular assemblies with fluorescence microscopy and object-based spatial statistics," Nat. Commun. 9(1), 698 (2018). [CrossRef]
5. L. Wei, W. Yan, and D. Ho, "Recent Advances in fluorescence lifetime analytical microsystems: contact optics and CMOS time-resolved electronics," Sensors 17(12), 2800 (2017). [CrossRef]
6. A. F. Coskun, I. Sencan, T. W. Su, and A. Ozcan, "Lensless wide-field fluorescent imaging on a chip using compressive decoding of sparse objects," Opt. Express 18(10), 10510 (2010). [CrossRef]
7. K. Sasagawa, A. Kimura, M. Haruta, T. Noda, T. Tokuda, and J. Ohta, "Highly sensitive lens-free fluorescence imaging device enabled by a complementary combination of interference and absorption filters," Biomed. Opt. Express 9(9), 4329 (2018). [CrossRef]
8. B. R. Rae, K. R. Muir, Z. Gong, J. Mckendry, J. M. Girkin, E. Gu, D. Renshaw, M. D. Dawson, and R. K. Henderson, "A CMOS time-resolved fluorescence lifetime analysis micro-system," Sensors 9(11), 9255–9274 (2009). [CrossRef]
9. E. Petryayeva and W. R. Algar, "Toward point-of-care diagnostics with consumer electronic devices: the expanding role of nanoparticles," RSC Adv. 5(28), 22256–22282 (2015). [CrossRef]
10. S. Nayak, N. R. Blumenfeld, T. Laksanasopin, and S. K. Sia, "Point-of-care diagnostics: recent developments in a connected age," Anal. Chem. 89(1), 102–123 (2017). [CrossRef]
11. L. Martinelli, H. Choumane, K. N. Ha, G. Sagarzazu, T. Gacoin, C. Goutel, C. Weisbuch, and H. Benisty, "Sensor-integrated fluorescent microarray for ultrahigh sensitivity direct-imaging bioassays: Role of a high rejection of excitation light," Appl. Phys. Lett. 91(8), 083901 (2007). [CrossRef]
12. A. Pandya, I. Schelkanova, and A. Douplik, "Spatio-angular filter (SAF) imaging device for deep interrogation of scattering media," Biomed. Opt. Express 10(9), 4656 (2019). [CrossRef]
13. C. Richard, A. Renaudin, V. Aimez, and P. G. Charette, "An integrated hybrid interference and absorption filter for fluorescence detection in lab-on-a-chip devices," Lab Chip 9(10), 1371 (2009). [CrossRef]
14. C. Chen, C. Li, and Z. Shi, "Current advances in lanthanide-doped upconversion nanostructures for detection and bioapplication," Adv. Sci. 3(10), 1600029 (2016). [CrossRef]
15. M. Lin, Y. Zhao, S. Q. Wang, M. Liu, Z. F. Duan, Y. M. Chen, F. Li, F. Xu, and T. Lu, "Recent advances in synthesis and surface modification of lanthanide-doped upconversion nanoparticles for biomedical applications," Biotechnol. Adv. 30(6), 1551–1561 (2012). [CrossRef]
16. W. Zheng, P. Huang, D. Tu, E. Ma, H. Zhu, and X. Chen, "Lanthanide-doped upconversion nano-bioprobes: electronic structures, optical properties, and biodetection," Chem. Soc. Rev. 44(6), 1379–1415 (2015). [CrossRef]
17. L. Wei, S. Doughan, Y. Han, M. V. DaCosta, U. J. Krull, and D. Ho, "The intersection of CMOS microsystems and upconversion nanoparticles for luminescence bioimaging and bioassays," Sensors 14(9), 16829–16855 (2014). [CrossRef]
18. A. Nadort, V. K. A. Sreenivasan, Z. Song, E. A. Grebenik, A. V. Nechaev, V. A. Semchishen, V. Y. Panchenko, and A. V. Zvyagin, "Quantitative imaging of single upconversion nanoparticles in biological tissue," PLoS One 8(5), e63292 (2013). [CrossRef]
19. J. H. Lin, H. Y. Liou, C. D. Wang, C. Y. Tseng, C. T. Lee, C. C. Ting, H. C. Kan, and C. C. Hsu, "Giant enhancement of upconversion fluorescence of NaYF4:Yb3+,Tm3+ nanocrystals with resonant waveguide grating substrate," ACS Photonics 2(4), 530–536 (2015). [CrossRef]
20. D. T. Vu, H. W. Chiu, R. Nababan, Q. M. Le, S. W. Kuo, L. K. Chau, C. C. Ting, H. C. Kan, and C. C. Hsu, "Enhancing upconversion luminescence emission of rare earth nanophosphors in aqueous solution with thousands fold enhancement factor by low refractive index resonant waveguide grating," ACS Photonics 5(8), 3263–3271 (2018). [CrossRef]
21. Y. F. Wang, G. Y. Liu, L. D. Sun, J. W. Xiao, J. C. Zhou, and C. H. Yan, "Nd3+-sensitized upconversion nanophosphors: Efficient in vivo bioimaging probes with minimized heating effect," ACS Nano 7(8), 7200–7206 (2013). [CrossRef]
22. L. E. Mackenzie, J. A. Goode, A. Vakurov, P. P. Nampi, S. Saha, G. Jose, and P. A. Milner, "The theoretical molecular weight of NaYF4:RE upconversion nanoparticles," Sci. Rep. 8(1), 1106 (2018). [CrossRef]
23. J. Zuo, D. Sun, L. Tu, Y. Wu, Y. Cao, B. Xue, Y. Zhang, Y. Chang, X. Liu, X. Kong, W. J. Buma, E. J. Meijer, and H. Zhang, "Precisely tailoring upconversion dynamics via energy migration in core-shell nanostructures," Angew. Chem. Int. Ed. 57(12), 3054–3058 (2018). [CrossRef]
24. F. Zhang, R. Che, X. Li, C. Yao, J. Yang, D. Shen, P. Hu, W. Li, and D. Zhao, "Direct imaging the upconversion nanocrystal core/shell structure at the subnanometer level: Shell thickness dependence in upconverting optical properties," Nano Lett. 12(6), 2852–2858 (2012). [CrossRef]
25. Y. Wang, L. Tu, J. Zhao, Y. Sun, X. Kong, and H. Zhang, "Upconversion luminescence of β-NaYF4:Yb3+,Er3+@β-NaYF4 core/shell nanoparticles: Excitation power density and surface dependence," J. Phys. Chem. C 113(17), 7164–7169 (2009). [CrossRef]
26. S. Fischer, N. D. Bronstein, J. K. Swabeck, E. M. Chan, and A. P. Alivisatos, "Precise tuning of surface quenching for luminescence enhancement in core-shell lanthanide-doped nanocrystals," Nano Lett. 16(11), 7241–7247 (2016). [CrossRef]
27. K. Green, K. Huang, H. Pan, G. Han, and S. F. Lim, "Optical temperature sensing with infrared excited upconversion nanoparticles," Front. Chem. 6, 416 (2018). [CrossRef]
28. K. Sasagawa, S. H. Kim, K. Miyazawa, H. Takehara, T. Noda, T. Tokuda, R. Iino, H. Noji, and J. Ohta, "Dual-mode lenless imaging device for digital enzyme linked immunosorbent assay," Proc. SPIE 8933, 89330N (2014). [CrossRef]
29. A. F. Coskun, I. Sencan, T. W. Su, and A. Ozcan, "Wide-field lensless fluorescent microscopy using a tapered fiber-optic faceplate on a chip," Analyst 136(17), 3512 (2011). [CrossRef]
A DAG-based comparison of interventional effect underestimation between composite endpoint and multi-state analysis in cardiovascular trials
Antje Jahn-Eimermacher (ORCID: orcid.org/0000-0002-2397-5340)1,
Katharina Ingel1,
Stella Preussler1,
Antoni Bayes-Genis2 &
Harald Binder1,3
BMC Medical Research Methodology volume 17, Article number: 92 (2017)
Composite endpoints comprising hospital admissions and death are the primary outcome in many cardiovascular clinical trials. For statistical analysis, a Cox proportional hazards model for the time to first event is commonly applied. There is an ongoing debate on whether multiple episodes per individual should be incorporated into the primary analysis. While the advantages in terms of power are readily apparent, potential biases have been mostly overlooked so far.
Motivated by a randomized controlled clinical trial in heart failure patients, we use directed acyclic graphs (DAG) to investigate potential sources of bias in treatment effect estimates, depending on whether only the first or multiple episodes are considered. The biases first are explained in simplified examples and then more thoroughly investigated in simulation studies that mimic realistic patterns.
Particularly the Cox model is prone to potentially severe selection bias and direct effect bias, resulting in underestimation when restricting the analysis to first events. We find that both kinds of bias can simultaneously be reduced by adequately incorporating recurrent events into the analysis model. Correspondingly, we point out appropriate proportional hazards-based multi-state models for decreasing bias and increasing power when analyzing multiple-episode composite endpoints in randomized clinical trials.
Incorporating multiple episodes per individual into the primary analysis can reduce the bias of a treatment's total effect estimate. Our findings will help to move beyond the paradigm of considering first events only for approaches that use more information from the trial and augment interpretability, as has been called for in cardiovascular research.
When analyzing composite endpoints that incorporate an endpoint with multiple episodes, such as hospital admission, a time to first event approach is frequently adopted for randomized clinical trials. Researchers from different disciplines have called for more appropriate methods of statistical analysis to more closely reflect the patients' disease burden. This involves a discussion on whether multiple episodes per patient are to be analyzed. So far, this discussion mostly has considered power issues, while overlooking potential bias. In this work, we investigate sources of bias and show that there can be a potentially severe underestimation of treatment effect estimates, when derived only based on first events, that can be substantially reduced by adequately modeling multiple episodes per patient.
Composite endpoints combine several events of interest into a single variable, usually defined as a time to event outcome. They are frequently used as primary or secondary endpoints in cardiovascular clinical trials [1, 2]. Composite outcomes facilitate the evaluation of treatment effects when unrealistically large sample sizes would be required to detect differences in the incidence of single outcomes among treatment groups, for example mortality. While using a composite outcome may help in terms of power, at the same time it introduces its own difficulties concerning interpretation of trial results and methodological challenges [2–6]. One major concern is that endpoints occurring in individual patients usually are clinically related (such as nonfatal and fatal myocardial infarctions). Multi-state modeling of these relations by allowing for separate transition hazards between the different subsequent events has recently been proposed for large cardiovascular observational studies [7, 8]. However, for randomized clinical trials this is suspected to attenuate the power and confirmatory character of the trial [9]. In the majority of clinical trials, the concern for potential relations between clinical episodes is therefore addressed by counting only one event per patient and analyzing the time to the first of all components. By following this approach, only data on the first episode per individual are used for the primary statistical analysis, even when subsequent episodes (including deaths) have been recorded. There is an ongoing debate, in particular in cardiovascular research, on the efficiency and validity of this practice because it ignores a great deal of clinically relevant information [3, 10–12]. The impact of multiple episodes per patient on the power of a clinical trial is apparently promising [3, 13], and selected statistical methods have been exemplarily applied to single trial data [14–16]. However, less attention is paid to the estimation and interpretability of treatment effects that can be substantially attenuated depending on whether multiple episodes are analyzed or not. We consider this critical since the choice of a statistical method for analyzing trial data should not be mainly driven by power considerations but by the objective to obtain an unbiased and meaningful treatment effect estimate, i.e. to make causal inferences about the treatment and its (added) benefit and to understand how a treatment influences a patient's disease burden.
Although randomized clinical trials are often suspected to produce unbiased results as the randomized treatment allocation prevents confounding, hazard-based survival analysis can introduce its own bias [17–20]. In particular, the Andersen-Gill approach [21] has been suspected to introduce bias by erroneously modeling that a clinical episode will leave a patient's risk profile unchanged and will not affect the incidence rate for future episodes [22–25]. This finding has been controversially discussed as it implicitly assumes that direct effects are to be estimated [26]. The causal directed acyclic graphs approach (DAG) [27, 28] has been proposed for defining adequate statistical models that prevent or minimize bias in the presence of confounding. It is a powerful tool for identifying and addressing bias and is increasingly popular, but it is primarily applied in epidemiological research. In this work, we will make use of this approach for randomized clinical trials to provide an accessible explanation of potential bias in proportional hazards-based survival analysis of first and multiple episodes of a composite endpoint and to define adequate statistical models for reducing or preventing bias. While the use of DAGs may be problematic in a continuous time setting [29], we are avoiding such issues by first considering actual discrete states in DAG analysis, and making the transition to continuous time settings with evidence from simulations.
The article is organized as follows: We motivate this research with a clinical example in "Cardiovascular clinical trial example" section. Then, in "Methods" section, we first formalize potential bias via directed acyclic graphs and illustrate the findings on simplified examples. Thereafter we identify statistical models that have the potential to reduce that bias. We support our findings by simulation studies that mimic the motivating clinical trial situation and present the results in "Results" section. Finally, we finish the article with a discussion in "Discussion" section.
Cardiovascular clinical trial example
This work has been motivated by the ST2 guided tReatment upON discharGe in Heart Failure (STRONG-HF) trial, a randomized controlled clinical trial that has been planned to investigate whether heart failure patients will benefit from a biomarker-based treatment scheme compared to standard care. It is planned as a multicenter prospective, randomized, open-label for patients, blinded-endpoint and event-driven study. The primary endpoint was defined as a composite of cardiovascular mortality and recurrent worsening heart failure. Worsening heart failure includes hospitalization due to heart failure or urgent visit to the emergency department or heart failure clinic due to decompensation needing unplanned intravenous diuretic treatment. Patients are to be uniformly recruited over a period of one year and are to be followed for one year after the end of the recruitment phase. The two regimens are to be allocated randomly and in a balanced fashion among the recruited patients. In addition to the treatment's effect on the combined endpoint, its effects on the single components, cardiovascular death and disease-associated admissions, are also of major interest. From previous data, an annual death rate of 0.14 and an annual admission rate of 1.17 are expected for the patients under standard care (control group), defining a hazard rate for the composite endpoint of λ=1.31. Treatment is expected to decrease that rate by 25%, corresponding to a hazard ratio of HR=0.75. When the time to first composite endpoint is analyzed, a total number of N=465 patients is required to attain a power of 80% for rejecting the null hypothesis of no treatment effect on the incidence of the composite endpoint, \(H_0=\{HR=1\}\) [30]. Incorporating recurrent events into the statistical analysis has the potential to decrease the sample size to as few as N=223 [13], and thus is apparently promising for improving the feasibility and efficiency of the trial. However, disease-associated complications that require a hospital admission will obviously affect the risk for further non-fatal and fatal outcomes. For example, patients who acquire a non-fatal myocardial infarction have an increased risk for fatal and non-fatal outcomes thereafter. Concern arises as to whether this might call the study results into question, and, more generally, how incorporating recurrent events into the primary statistical analysis will affect the treatment effect estimates and thus the interpretation of trial results.
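As a side note, the reported numbers can be roughly reproduced from Schoenfeld's formula [30]. The following R sketch is only an illustrative reconstruction: it assumes a 1:1 allocation, a two-sided significance level of 5% and an average follow-up of 1.5 years (uniform one-year accrual, two-year study duration); the actual trial planning may have used different assumptions.

```r
## Illustrative reconstruction of the event-driven sample size reasoning
## (Schoenfeld's formula; assumptions as stated above, not taken from the trial protocol)
hr     <- 0.75         # planned treatment effect on the composite endpoint
lambda <- 0.14 + 1.17  # annual composite event rate under standard care

d <- 4 * (qnorm(0.975) + qnorm(0.80))^2 / log(hr)^2  # required number of first events
d                                                    # approx. 379 events

## translate events into patients via approximate event probabilities
p_ctrl  <- 1 - exp(-lambda * 1.5)
p_treat <- 1 - exp(-lambda * hr * 1.5)
round(d / mean(c(p_ctrl, p_treat)))                  # approx. 465 patients
```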
Formalizing potential bias via directed acyclic graphs
The graphical representation of causal effects between variables [27, 28] helps to understand the sources of potential bias when estimating some causal effect of an exposure to an outcome and how different statistical models differently address that bias. In the causal directed acyclic graph (DAG) approach, an arrow connecting two variables indicates causation; variables with no direct causal association are left unconnected. We will use this approach for illustrating the causal system in randomized clinical trials when a composite endpoint is investigated that comprises fatal and non-fatal events. An example is the composite of cardiovascular death and hospital admission for heart failure disease as defined in the motivating clinical trial example ("Cardiovascular clinical trial example" section). Effect estimation is assumed to be hazard-based with a proportional hazards assumption.
Figure 1 illustrates the causal system in a time to first composite endpoint approach. The randomized treatment (X) is the exposure variable, that is assumed to affect the fatal and non-fatal outcomes and thus the composite endpoint. In addition to treatment, further disease or patient characteristics will affect the risk for adverse outcomes. Some are known, others are unknown or unmeasurable (summarized as a single unobserved variable Z). Obviously, being free of any event at time t (S t ) is a collider on the path between the exposure treatment and the unobserved variable Z. Conditioning on a collider will open the path between the variables that are connected by the collider and thus artificially introduce spurious associations [31]. Each contribution to the partial likelihood in the Cox proportional hazards model is a conditional contribution, conditional on being free of any event up to that time. Therefore, an association is induced between the actually unrelated randomized treatment and the unobserved variable Z. As Z affects the outcomes, this association will bias the treatment's effect estimate for the fatal, non-fatal and composite outcome. This bias is called selection bias and has been investigated for incidence rate ratios [28] and hazard ratios [17, 18, 20, 32] before. We will illustrate selection bias by a simple example at the end of this subsection. Whereas conditioning on being alive is an unavoidable step in the hazard-based analysis, we can prevent conditioning on being free of any event by including the recurrent non-fatal events into the statistical model: the at-risk set in the partial likelihood estimator then comprises all subjects that are still alive in contrast to a set of those subjects only that are free of any event at the particular time point. This way, incorporating recurrent events will reduce selection bias when estimating the treatment effect on the fatal and non-fatal outcomes and will thus also reduce the bias when estimating the treatment effect on the composite endpoint. In summary, the first insight gained from a formalization via DAGs is that analyzing all non-fatal events, also the recurrent ones, in the statistical model for the composite endpoint will reduce selection bias.
Directed acyclic graph for the causal system between treatment (X), being free of any event at time t (\(S_t\)) and t+Δ (\(S_{t+\Delta}\)), and unobserved variables (Z) that are unrelated to treatment (for example by randomization) and affect the event rate. Figure according to Aalen et al. [20]
Consider a balanced randomized trial comparing the time to first event under a particular treatment, as compared to some control intervention. Further assume that the study population consists of two equally sized subgroups, a low-risk group and a high-risk group, specified by an unobserved variable Z (Fig. 1). For illustrating selection bias we consider a setting with discrete times (which can be readily transferred to the continuous time Cox proportional hazards model [33]) with failures occurring only at times \(t_1\) and \(t_2\). In the control group the risk for experiencing an event at time \(t_1\) is assumed to be 1/3 in the low-risk group and 2/3 in the high-risk group, respectively. The same risk probabilities are assumed for experiencing an event at time \(t_2\) in the subset of subjects that are still at risk before \(t_2\), i.e. having not experienced an event at \(t_1\). The odds ratio for treatment compared to control, which is the discrete-time equivalent to the continuous time hazard ratio [33], is assumed to be 1/2 within each subgroup and for each time \(t_1\) and \(t_2\) (constant hazard ratio assumption). From the odds ratio and the expected failure rates per subgroup in the control group at \(t_1\), we can derive the expected failure rates at \(t_1\) for the treatment group, which are 1/5 and 1/2, respectively. For the example of a sample size of N=1800 per treatment group, Table 1 shows the number of event-free subjects just before \(t_2\) per treatment group and subgroup and the expected number of failures at \(t_2\) as derived from the expected failure rates of 1/3 and 2/3 in the control group and 1/5 and 1/2 in the treatment group. Whereas the odds ratio is unbiased when estimated within each subgroup (0.5), the crude odds ratio estimated from the marginal table is 0.57, indicating a smaller treatment effect (selection bias) that is obtained when conditioning on being event-free but not taking Z into account. As indicated, Z might be unobserved, making conditioning on Z problematic. The selection bias does not depend on sample size, which was chosen to be large in this data example to obtain integer patient numbers. The difference between conditional and unconditional modeling remains when moving from discrete time to continuous time, i.e. when the interval between two potential failures becomes infinitesimally small and the hazard ratio is defined on a continuous time scale. Simulation results ("Simulation studies" section) will further support this finding.
Table 1 Expected patient numbers in the discrete failure time example for time to first event stratified by subgroup ("Selection bias" section)
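The expected counts and odds ratios described above can be reproduced with a few lines of R. This is only an illustrative sketch of the arithmetic behind Table 1, not code from the original analysis.

```r
## Expected counts behind Table 1: N = 1800 per arm, two equally sized risk strata,
## discrete failure times t1 and t2, within-stratum odds ratio of 1/2
n <- 1800 / 2                        # subjects per arm and stratum
p_ctrl  <- c(low = 1/3, high = 2/3)  # failure risks in the control arm
p_treat <- c(low = 1/5, high = 1/2)  # failure risks implied by the odds ratio of 1/2

at_risk_ctrl  <- n * (1 - p_ctrl)    # event-free subjects just before t2
at_risk_treat <- n * (1 - p_treat)
fail_ctrl  <- at_risk_ctrl  * p_ctrl  # expected failures at t2
fail_treat <- at_risk_treat * p_treat

odds <- function(fail, at_risk) sum(fail) / (sum(at_risk) - sum(fail))
odds(fail_treat, at_risk_treat) / odds(fail_ctrl, at_risk_ctrl)
## crude (marginal) odds ratio of roughly 0.57-0.58: the selection bias described
## in the text, although the within-stratum odds ratios are exactly 0.5
```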
Direct effect bias
When following the recommendation to include recurrent events in the statistical model as derived from the previous section, concern might arise as to how to model the transitions from one non-fatal event to a succeeding fatal or non-fatal event. From cardiovascular research it is well known that the different components in a composite endpoint are related. For example, a non-fatal myocardial infarction will apparently affect the risk for further fatal or non-fatal cardiovascular outcomes. To address this concern, we will again apply the approach of directed acyclic graphs. Figure 2 illustrates the causal system when more than only the first event is considered and the risk for further events potentially changes with each non-fatal event. The number of events experienced until time t, N(t), is a mediator lying along the causal pathway between treatment X and the number of events at t+Δ, N(t+Δ). Conditioning on or stratifying by the number of previously experienced events will close this path, and the treatment effect estimate is reduced to the treatment's direct effect on the outcome, whereas its indirect effect is not considered. While direct effects are interesting from a biological viewpoint, estimation of total effects is important from the clinical, health care, and patients' perspective. For example, the mortality rate increases after a non-fatal myocardial infarction, and therefore a treatment that effectively prevents myocardial infarctions also reduces mortality (indirect effect), in addition to its direct effect on mortality. Direct and indirect effects together define a treatment's total effect. We will illustrate the difference between direct and total effect estimation in a simple example at the end of this section. In summary, the second insight gained from a formalization via DAGs is not to condition on the individual's event history, i.e. not to stratify or adjust for previous non-fatal events, when estimating a treatment's total effect. In contrast, in a time to first event analysis, the effect estimate is naturally restricted to the direct effect as it is derived only from those pathways that start from the exposure variable treatment. We use the term direct effect bias when effect estimates are reduced to direct effects only.
Directed acyclic graph for the causal system between treatment (X) and the number of events up to time t (\(N_t\)) and t+Δ (\(N_{t+\Delta}\))
Again, consider a balanced randomized trial comparing the time to event under a particular treatment as compared to some control intervention. As before, we consider a setting with discrete times that can be transferred to the continuous time Cox proportional hazards model. We assume that non-fatal events are experienced at time \(t_1\) and can be followed by death at time \(t_2\). The risk for experiencing a non-fatal event at \(t_1\) is assumed to be 2/3 in the control group and 1/3 in the treatment group, respectively. The mortality rate in patients who have acquired the non-fatal event at \(t_1\) increases to 40% as compared to a 20% risk in those subjects who are free of an event at \(t_1\). Mortality rates are assumed to be unaffected by treatment conditional on the number of prior events, i.e. neither before nor after having experienced a non-fatal event. For the example of a total sample size of N=1800, Table 2 shows the expected number of death cases stratified by having experienced a preceding non-fatal event at \(t_1\) and marginally over all subjects. Whereas no treatment effect on mortality is observed within either stratum, the odds ratio estimated from the marginal table is 0.73, indicating a positive treatment effect on mortality. This result indicates that treatment effectively reduces mortality by preventing subjects from entering that stratum, which is characterized by a higher mortality rate (total effect), although it has no direct effect on the mortality rates at all. Effect estimates differ depending on whether or not one conditions on prior events, irrespective of sample size, and this difference persists when moving from discrete time to continuous time, i.e. when the hazard ratio is defined on a continuous time scale. Simulation results ("Simulation studies" section) will further support this finding.
Table 2 Expected patient numbers in the discrete failure time example for time to death stratified by previously experienced non-fatal event ("Direct effect bias"section)
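Analogously, the marginal mortality odds ratio described above can be reproduced directly; again this R sketch only illustrates the arithmetic behind Table 2.

```r
## Expected counts behind Table 2: N = 1800 in total (900 per arm),
## non-fatal events at t1, deaths at t2
n <- 900
nonfatal   <- c(ctrl = n * 2/3, treat = n * 1/3)  # subjects with a non-fatal event at t1
event_free <- n - nonfatal

## conditional mortality at t2: 40% after a non-fatal event, 20% otherwise,
## identical in both arms (no direct treatment effect on mortality)
deaths <- 0.4 * nonfatal + 0.2 * event_free

odds <- deaths / (n - deaths)
unname(odds["treat"] / odds["ctrl"])
## marginal odds ratio of about 0.73 (total effect on mortality),
## although the within-stratum (direct) effect is exactly 1
```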
Reducing bias by statistical modeling
We will now transfer the insights on biased effect estimation as derived from the DAGs to identify statistical analysis models that have the potential to reduce that bias. Consider a randomized clinical trial with n subjects followed for a composite endpoint. Subjects will be indexed by i, events by j. Let \(T_{CE,ij}^{*}\) be a series of random variables that describe the time from starting point 0 to the j-th occurrence of the composite endpoint in subject i. Let further \(C_{i}\) be independent identically distributed random variables that describe the time to censoring. We observe \(T_{CE,ij}=\min(T_{CE,ij}^{*},C_{i})\), the time to composite endpoint or censoring, whichever comes first, and the indicator variables \(\delta _{ij}=\textbf {I}\left \{T_{CE,ij}^{*}\leq C_{i}\right \}\).
It has been proposed to describe the distribution of \(T_{CE,ij}\) by a multiplicative intensity process [34], \(Y_{i}(t)\cdot \lambda_{CE,i}(t)\), of the underlying counting process
$$N_{i}(t):=\#\left\{j; \;T_{CE,ij} \leq t \;\wedge\; T^{*}_{CE,ij} \leq C_{i}\right\}, $$
with deterministic hazard function \(\lambda_{CE,i}(t)\) (Fig. 3) and \(Y_{i}(t)=\textbf{I}\{t\leq C_{i}\}\). Figure 3 sketches a model that comprises all events, also the recurrent ones (\(CE_{1}, CE_{2},\ldots\)), without conditioning on or stratifying by the event history (transition hazards between the succeeding events do not change). If, conditional on covariates, \(\lambda_{CE,i}\) has a Cox proportional hazards shape, this model is known as the Andersen-Gill [21] model.
$$ \lambda_{CE}(t|X_{i}) = \lambda_{CE,0}(t) \cdot \exp(\beta X_{i}') $$
Unstratified transition hazard model for the transitions between study start (S) and the recurrent composite endpoints (\(CE_{1}, CE_{2},\ldots\))
with \(X_{i}\) being the p-dimensional vector of covariates for subject i and β being the vector of regression coefficients. The Andersen-Gill model was recently applied to re-analyze clinical trials in patients suffering from heart failure to evaluate the effect of new therapies on the patients' risk of the composite of hospitalizations due to heart failure and cardiovascular death [14–16]. The treatment effect β is then estimated by maximizing the partial likelihood
$$\begin{array}{@{}rcl@{}} PL^{AG}(\beta) &=& \prod_{\mathrm{i}} \prod_{\mathrm{j}} \left(\frac{\exp(\beta X_{i}')}{{\sum\nolimits}_{k \in R_{(ij)}^{AG}}\exp(\beta X_{k}')} \right)^{\delta_{ij}} \end{array} $$
The at-risk set \(R_{(ij)}^{AG}\) includes all subjects who have not been censored and have not died before time \(t_{ij}\), the time when individual i experiences its j-th event. In contrast, in a stratified model as proposed by Prentice et al. [35], the at-risk set \(R_{(ij)}^{PWP}\) is restricted to only those subjects who are at risk for experiencing the j-th event at time \(t_{ij}\), thus having experienced j−1 events before. However, following the arguments of "Formalizing potential bias via directed acyclic graphs" section, the Andersen-Gill model allows the estimation of total effects by not stratifying on the event history, in contrast to the stratified model that is estimating direct effects only [26]. Both models are still susceptible to selection bias as they naturally restrict the risk sets to subjects being alive. However, they reduce the selection bias as compared to results derived from a Cox proportional hazards model with partial likelihood
$$\begin{array}{@{}rcl@{}} PL^{C}(\beta) &=& \prod_{\mathrm{i}} \left(\frac{\exp(\beta X_{i}')}{{\sum\nolimits}_{k \in R_{(i)}^{C}}\exp(\beta X_{k}')}\right)^{\delta_{i1}} \end{array} $$
as in this model the risk sets \(R_{(i)}^{C}\) are restricted to subjects that are not only still alive but also free of any previous non-fatal event at time \(t_{i1}\), the time of the first event or censoring of individual i.
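In practice, both partial likelihoods are typically maximised with standard software. The following R sketch, using the survival package, is only meant to illustrate the two analysis strategies; the counting-process data layout (one row per at-risk interval with columns id, tstart, tstop, status and treat) and all variable names are hypothetical.

```r
library(survival)

## Andersen-Gill model (1)-(2): all composite events per subject,
## robust (sandwich) standard errors to account for within-subject correlation
fit_ag <- coxph(Surv(tstart, tstop, status) ~ treat + cluster(id), data = events)

## Cox model (3): analysis restricted to the first composite event per subject
## (assuming rows are ordered by time within subject and tstart = 0 on the first row)
first   <- events[!duplicated(events$id), ]
fit_cox <- coxph(Surv(tstop, status) ~ treat, data = first)

exp(coef(fit_ag)); exp(coef(fit_cox))   # hazard ratio estimates
```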
The partial likelihood (2) of the unstratified maximally unrestricted Andersen-Gill model (1) can be re-written as
$$\begin{array}{@{}rcl@{}} PL^{AG}(\beta) &=& \prod_{l}\prod_{i} \prod_{j_{l}} \left(\frac{\exp(\beta X_{i}')}{{\sum\nolimits}_{k \in R_{(ij_{l})}^{AG}}\exp(\beta X_{k}')}\right)^{\delta_{ij_{l}}} \end{array} $$
with l=1,…,L indexing the L components of the composite and \(j_{l}\) indexing the events of type l and \(\delta _{ij_{l}}\) again being the corresponding event indicator. Therefore, model (2) can also be described as a multi-state model that allows for different baseline transition hazards for the different components. Figure 4 sketches this model for the motivating example of two components: death (D) and hospital admission (\(H_{1}, H_{2},\ldots\)) with
$$\begin{array}{@{}rcl@{}} \lambda_{l}(t|X_{i})&=& \lambda_{l,0}(t)\exp(\beta X_{i}'), l=1,2. \end{array} $$
Multi-state model for the transitions between study start (S), recurrent hospital admissions (\(H_{i}\)) and death (D), stratified by the event type but unstratified by the number of preceding hospital admissions
By defining a single vector β for both the transition hazards λ 1 and λ 2, a constraint is induced, namely that the covariates equally affect fatal and non-fatal events. In particular, for our motivating example this means that treatment has the same effect on the fatal and non-fatal outcomes. This constraint has in fact been described as a requirement for the proper use of composite endpoints, for example by regulatory agencies [36]. However, at the same time it has been observed that in practice this assumption is frequently violated. Ferreira-Gonzalez et al. [37] conclude from a systematic literature review that effects of treatments in cardiovascular clinical trials differ strongly between the components, with larger effects in less relevant components and the smallest effects in mortality. The same has been observed in several clinical trials on heart failure disease [14, 16]. To relax the constraint of a common treatment effect on all components, the more general multi-state model (MS) can be defined by transition hazards
$$\begin{array}{@{}rcl@{}} \lambda_{l}(t|X_{i})&=& \lambda_{l,0}(t)\exp(\beta_{l} X_{i}'), l=1\ldots{L} \end{array} $$
and partial likelihood
$$\begin{array}{@{}rcl@{}} {}PL^{MS}(\beta_{1}, \ldots,\beta_{L}) &\,=\,& \!\prod_{l}\prod_{i} \prod_{j_{l}}\! \left(\!\frac{\exp(\beta_{l} X_{i}')}{{\sum\nolimits}_{k \in R_{(ij_{l})}^{MS}}\exp(\beta_{l} X_{k}')}\!\right)^{\delta_{ij_{l}}}. \end{array} $$
and risk sets \(R_{(ij_{l})}^{MS}\) that include all subjects who have not been censored and have not died before the particular event time, respectively. This generalization of the Andersen-Gill model allowing for separate treatment effects for each component, \(\beta_{l}\), can be proposed whenever sample size and event frequency allow for such an approach. It still does not stratify on the event history and does not restrict the at-risk set only to those subjects that are free of any event, but allows for a higher flexibility with respect to differential treatment effects.
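A hedged sketch of how model (6) can be fitted in R with the survival package is given below: each component defines its own stratum with its own baseline hazard, and transition-specific treatment covariates yield the component-specific coefficients. The long-format data layout (one row per subject, at-risk interval and event type, with a column type coding 1 = admission, 2 = death) and all variable names are hypothetical.

```r
library(survival)

## transition-specific treatment covariates for the two components
events$treat_hosp  <- events$treat * (events$type == 1)
events$treat_death <- events$treat * (events$type == 2)

## stratified Cox model: separate baseline hazards per component; all subjects
## who are alive and uncensored remain in every risk set; robust standard errors
fit_ms <- coxph(Surv(tstart, tstop, status) ~ treat_hosp + treat_death +
                  strata(type) + cluster(id), data = events)
exp(coef(fit_ms))   # component-specific hazard ratio estimates
```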
Note that we focus on marginal models within this manuscript. By introducing a (joint) frailty term into model (5) or (6) and applying penalized likelihoods [38], a conditional joint frailty model could also be fitted. By conditioning on the frailty term the selection bias as illustrated in Fig. 1 is minimized, albeit at the price of increasing the model complexity by introducing further model assumptions (joint frailty distribution) and parameters (frailty variance). We will show in the next section that in many applications one can safely stay with the marginal model, thereby following Occam's razor.
Simulation studies
We investigate the bias in treatment effect estimation as identified in "Formalizing potential bias via directed acyclic graphs" section (selection bias, direct effect bias) in simulation studies. The simulation study mimics the clinical trial situation that has motivated this research. For this purpose, we consider a balanced randomized clinical trial with a follow-up of two years and uniformly distributed recruitment of N=380 individuals over the first year. The transition hazards \(\lambda_1\) and \(\lambda_2\) (Fig. 4 and Eq. (6)) for the transitions to fatal and non-fatal events, respectively, are defined by \(\lambda_l(t|X_i)=\lambda_l \cdot \exp(\beta_l X_i)\).
Baseline annual death and admission rates are defined as \(\lambda_1=0.14\) and \(\lambda_2=1.17\), respectively. Further simulations are performed where fatal and non-fatal events contribute equally to the same overall annual event rate, thus \(\lambda_1=\lambda_2=0.655\). Treatment is assumed to equally affect both components of the composite, and we define \(\beta=\beta_1=\beta_2=\log(0.75)\) as was expected in the planning phase of the STRONG-HF trial. We additionally consider situations where treatment has a minor effect on mortality (\(\beta_1=\log(0.92)\)), following the findings of a systematic literature review on cardiovascular clinical trials [37]. To account for unobserved or unmeasurable variables affecting the outcomes, we define an unobserved variable \(Z_i\) per individual i. The \(Z_i\) are generated as independent and gamma-distributed random variables with mean 1 and variance θ. Following a frailty approach, \(Z_i\) is assumed to act multiplicatively on the hazard by
$$\begin{array}{@{}rcl@{}} \lambda_{l}(t|X_{i},Z_{i})&=& \lambda_{l,0}(t)\cdot Z_{i} \cdot \exp(\beta_{l} X_{i}), l=1,2 \end{array} $$
Note that the unobserved variable acts on both transition hazards, inducing a correlation between both processes. Such a joint model [38] is considered to more closely mimic real clinical trial data as compared to simulation models assuming independence between the event processes, as in most situations it can be expected that patient and disease characteristics will affect adverse disease outcomes in the same direction. Different θ∈{0,0.2,…,1} reflect different strengths of association between the unobserved variable and the fatal and non-fatal outcomes and will therefore cause different degrees of selection bias. In a second simulation study we add an indirect effect of treatment on the composite outcome by defining the transition hazards to increase by a factor of ρ with each non-fatal event. By applying a range of values between ρ=1 (no increase of hazards) and ρ=1.3 (increase of hazards by 30% with each non-fatal event), different degrees of indirect effect are evaluated.
In a third simulation study we investigate treatment effect estimation when both effects are present, that is the transition hazards increase with each non-fatal events by a factor of ρ (ρ∈ [ 1,1.3]) while in addition a gamma-distributed frailty term with mean 1 and a moderate variance of θ=0.6 acts on all transition hazards. For each simulation model 5000 datasets are simulated, respectively.
All simulated data are analyzed by the Andersen-Gill model for the composite endpoint (1) and its multi-state extension (6) to estimate separate treatment effects on fatal and non-fatal outcomes. Both models are applied to the full simulated datasets and to datasets that are restricted to the first composite endpoint per individual. For the restricted data, the Andersen-Gill model then reduces to a Cox proportional hazards model and its multi-state extension to a competing risk model.
All data are simulated and analyzed in the open-source statistical environment R, version 3.1.0 (2014-04-10) [39] and by extending the published simulation algorithm for recurrent event data [40]. Mean regression coefficient estimates are derived together with standard errors as estimated from their variability among the simulations.
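To make the simulation design more concrete, the following R function sketches how a single subject's event history can be generated under model (8) with event-dependent hazards. It follows the description above (gamma frailty with mean 1 and variance θ, hazards multiplied by ρ after each non-fatal event, administrative censoring drawn uniformly between one and two years to mirror the accrual scheme, which is an assumption), but it is a simplified illustration rather than the published simulation algorithm [40].

```r
simulate_subject <- function(x, lambda1 = 0.14, lambda2 = 1.17,
                             beta1 = log(0.75), beta2 = log(0.75),
                             theta = 0.6, rho = 1) {
  ## gamma frailty with mean 1 and variance theta (Z = 1 if theta = 0)
  Z    <- if (theta > 0) rgamma(1, shape = 1 / theta, rate = 1 / theta) else 1
  cens <- runif(1, min = 1, max = 2)   # follow-up between 1 and 2 years
  t <- 0; k <- 0; rows <- NULL
  repeat {
    h_death <- lambda1 * exp(beta1 * x) * Z * rho^k  # current hazard of death
    h_hosp  <- lambda2 * exp(beta2 * x) * Z * rho^k  # current hazard of admission
    t_next  <- t + rexp(1, rate = h_death + h_hosp)  # next event on the total time scale
    if (t_next > cens) {                             # administratively censored
      rows <- rbind(rows, c(tstart = t, tstop = cens, status = 0, type = 0))
      break
    }
    fatal <- runif(1) < h_death / (h_death + h_hosp) # event type given an event
    rows  <- rbind(rows, c(tstart = t, tstop = t_next, status = 1,
                           type = if (fatal) 2 else 1))
    if (fatal) break
    t <- t_next; k <- k + 1                          # hazards increase by factor rho
  }
  data.frame(rows, x = x)
}
```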
Simulation results are presented in Tables 3 and 4 for λ 1∈{0.14,0.655}, β 1∈{log(0.92), log(0.75)}, ρ∈{1,1.05,1.1,1.15,1.2,1.25,1.3} and θ∈{0,0.6}. In addition, Fig. 5 summarizes the simulation results for data following model (8), where transition hazards are equally affected by treatment and unaffected by non-fatal events (ρ=1), but a common unobserved variable Z acts multiplicatively on each transition hazard. Mean treatment regression coefficient estimates are given dependent on the variance of Z (θ) when applying the Cox proportional hazards analysis for the time to first event to each particular outcome (1st events), when applying the Andersen-Gill modeling approach (1) for the time to recurrent composite endpoints (all events, composite outcome) and when applying the multi-state modeling approach (6) to the recurrent events (all events, fatal and non-fatal outcome). The extent of bias that is introduced by conditioning on being event-free (1st event analyses) is increasing with the strength of association between the unobserved variable and the fatal and non-fatal outcomes, supporting the findings of "Formalizing potential bias via directed acyclic graphs" section. The statistical analysis models incorporating recurrent events do not condition on being event-free and thus substantially decrease the selection bias. The bias that is still remaining is only small, because it is caused by conditioning on survival status and the mortality rate was assumed to be low as observed in most cardiovascular trials [37]. When the mortality rate has a larger contribution to the overall event rate of λ 1+λ 2=1.31 (λ 1=0.655), selection bias in the analysis of recurrent events slightly increases as compared to the situation with a mortality rate of only λ 1=0.14. The higher the mortality rate, the more conditioning on being alive is affecting the partial likelihood estimates, which explains this result. However, bias remains small (\(\exp (\hat {\beta _{1}})=\exp (\hat {\beta _{2}})=0.78\) when exp(β 1)= exp(β 2)=0.75 and \(\exp (\hat {\beta _{1}})=0.93\), \(\exp (\hat {\beta _{2}})=0.76\) when exp(β 1)=0.92 and exp(β 2)=0.75) for θ=0.6 and ρ=1 (Table 4). When treatment differentially affects the risk for fatal and non-fatal outcomes (Fig. 6), the treatment regression coefficient
Mean hazard ratio estimates in the simulation model under λ 1=0.14,λ 2=1.17, ρ=1, a common treatment effect on non-fatal and fatal outcomes (β 1=β 2= log(0.75)) and varying influence of an unobserved variable Z (having variance θ). Cox proportional hazards analysis for the composite and the single components, respectively (1st events), Andersen-Gill analysis (all events, composite outcome) and multi-state analysis (all events, fatal and non-fatal outcomes)
Mean hazard ratio estimates in the simulation model under λ 1=0.14,λ 2=1.17, ρ=1, a lower treatment effect on fatal than on non-fatal outcomes (log(0.92)=β 1>β 2= log(0.75)) and varying influence of an unobserved variable Z (having variance θ). Cox proportional hazards analysis for the composite and the single components, respectively (1st events), Andersen-Gill analysis (all events, composite outcome) and multi-state analysis (all events, fatal and non-fatal outcomes)
Table 3 Simulation results for λ 1=0.14 and λ 2=1.17
Table 4 Simulation results for λ 1=0.655 and λ 2=0.655
estimates differ by outcome. However, compared to the setting with a common treatment effect (Fig. 5), all effect estimates are similarly affected by selection bias with respect to the direction and magnitude of that bias.
Figure 7 shows the simulation results for data randomly generated under transition hazards for fatal and non-fatal events that are equally affected by treatment and increase by a factor of ρ with each non-fatal event. No unobserved variable is introduced in this simulation model to clearly differentiate between the different sources of bias. Whereas direct and total effects coincide when transition hazards remain unaffected by previous events (ρ=1), Fig. 7 clearly shows that direct and total effects substantially differ when transition hazards increase with non-fatal events (ρ>1). The analysis of 1st events provides direct effect estimates whereas the analysis of all events provides total effect estimates, according to the findings of "Formalizing potential bias via directed acyclic graphs" section. By preventing a first non-fatal event, the treatment keeps patients from moving to a state of increased risk for further events. This contributes to the indirect effect, and thus to a larger total treatment effect as compared to its direct effect. Under an increased mortality rate (\(\lambda_1=0.655\)), the process for recurrent events stops earlier on average due to the higher frequency of competing terminal events. Thus, the indirect effect of the treatment (preventing later events that would occur at an increased rate) contributes less to the total effect estimates. Therefore, differences between total and direct effect estimates become smaller: whereas under \(\lambda_1=0.14\), θ=0 and ρ=1.3 the total effect in terms of the hazard ratio is estimated as 0.65 as compared to the direct effect of 0.75 (Table 3), under \(\lambda_1=0.655\) the total effect estimate of 0.72 approaches the direct effect more closely (Table 4).
Mean hazard ratio estimates in the simulation model under λ 1=0.14,λ 2=1.17,θ=0, a common direct treatment effect on non-fatal and fatal outcomes (β 1=β 2= log(0.75)) and transition hazards that increase by a factor of ρ after each non-fatal event. Cox proportional hazards analysis for the composite and the single components, respectively (1st events), Andersen-Gill analysis (all events, composite outcome) and multi-state analysis (all events, fatal and non-fatal outcomes)
Again, when treatment differentially affects the risk for fatal and non-fatal outcomes (Fig. 8), direct and total effect estimates also differ for each single outcome. The direction and magnitude of these differences are comparable to the results observed for common treatment effects (Fig. 7).
Mean hazard ratio estimates in the simulation model under λ 1=0.14,λ 2=1.17,θ=0, a lower direct treatment effect on fatal than on non-fatal outcomes (log(0.92)=β 1>β 2= log(0.75)) and transition hazards that increase by a factor of ρ after each non-fatal event. Cox proportional hazards analysis for the composite and the single components, respectively (1st events), Andersen-Gill analysis (all events, composite outcome) and multi-state analysis (all events, fatal and non-fatal outcomes)
As the hazard for the composite endpoint is the sum of the hazards over the two components [41], the hazard ratio can be derived as \(1/(\lambda _{1}+\lambda _{2}) {\sum \nolimits }_{i=1}^{2} \lambda _{i} \exp (\beta _{i})\) in the situation of constant hazards. This weighted sum is estimated when analysing the composite outcome using first events only or all events as long as no selection bias and no indirect effects are present, that is θ=0 for the analysis of 1st events and ρ=1 for the analysis of all events (Figs. 6 and 8). θ>0 and/or ρ>1 then affect the estimates for the composite endpoint in the same direction as the estimates for the single components.
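For the constant-hazard setting this relation is easy to evaluate numerically; the short R sketch below reproduces the composite hazard ratios implied by the simulation parameters (illustration only).

```r
## composite hazard ratio as the event-rate-weighted sum of component hazard ratios
hr_composite <- function(lambda, beta) sum(lambda * exp(beta)) / sum(lambda)

hr_composite(c(0.14, 1.17), log(c(0.75, 0.75)))  # common effect:        0.75
hr_composite(c(0.14, 1.17), log(c(0.92, 0.75)))  # differential effects: about 0.77
```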
Whereas selection bias attenuates the treatment effect estimates, hazards that increase with each non-fatal event cause the total effect estimates to become larger than the direct effect alone. As a consequence, the differences between direct and total treatment effect estimates decrease with an increasing degree of selection bias. Whereas \(\exp (\hat {\beta _{2}})\) decreased from 0.75 to 0.65 when hazards increase by 0 to 30% with each non-fatal event, under θ=0.6 only a decrease to 0.72 is still observed (Table 3). Under a higher mortality rate of \(\lambda_1=0.655\), no decrease in the total effect estimate is observed at all (\(\exp (\hat {\beta _{2}})=0.78\)), as here the selection bias starts to prevail (Table 4).
Potential biases in the analysis of composite endpoints that comprise endpoints with multiple episodes, such as hospital admission, have been mostly overlooked so far. To advance the state of the art, we provided an accessible explanation of biases in this setting, supported by simulation results. Our results show that the initial step in modeling must be defining the treatment effect that is of interest: a total treatment effect estimate can only be derived by analysing all events, whereas only the direct treatment effect can be estimated from analyses of 1st events or from analyses that are stratified by event history. When interpreting trial results, eventually derived from different statistical models, one must be aware that direct effect estimates can be substantially more prone to selection bias. Our findings will help to move beyond the paradigm of considering first events only, towards approaches that use more information from the trial and improve interpretability, as has been called for in cardiovascular research [11, 12].
The association of some variable with the outcome is not a reasonable criterion for covariate selection in multiple regression, as has been described in epidemiology, for example to explain the birth-weight paradox [42]. We use similar arguments in randomized clinical trials to justify that adjusting or stratifying for the patients' disease history within trial time is inadequate for estimating a treatment's total effect.
Selection bias in the Cox proportional hazards model as arising from the non-collapsibility of the hazard ratio estimate [18, 28] has recently been described by Aalen et al. [20]. They use a hypothetical example where each individual who dies is replaced by an identical individual having the same covariate structure, which would prevent selection bias. In a way, the Andersen-Gill model implements this idea for non-fatal recurrent events by leaving individuals in the risk set after having experienced an event. A terminal component of the composite will still cause selection bias under the Andersen-Gill and multi-state approach. Its magnitude depends on the terminal event rate. Whereas in our simulations the terminal event rate was small, as observed for most cardiovascular studies [37], and the multi-state models provided nearly unbiased results, Rogers et al. [43] advocate the need for joint frailty models [38, 44] to prevent bias. However, their findings are based on simulation studies with high mortality rates (up to 31%), which explains the conflicting conclusions. Balan et al. [45] recently proposed a score test for deciding between multi-state and joint frailty modeling. All these findings confirm that using composite endpoints in randomized clinical trials cannot eliminate the bias arising from the association between the risk processes of the single components as long as only the first event is analyzed [46].
We have focused on the estimation of a treatment effect based on proportional hazards. Additive hazard models have been recommended instead as they are unaffected by non-collapsibility [20, 47].
Hazard ratios are used to assess the early benefit of new drugs compared to some control [48]. Our results indicate the need to further specify the estimand that the assessment refers to: a treatment's direct effect or its total effect, as the two can differ substantially.
In recent years alternatives to hazard-based analyses of composite endpoints have been proposed based on weighted outcomes [49–51] to consider that not all components are of the same clinical relevance and importance for the patients. The multi-state approach proposed in this paper allows a separate investigation of treatment effects on the different components, and it seems to be important to compare both approaches with respect to interpretability of treatment effect estimation and power. Concerning power, the multi-state approach requires some kind of multiplicity adjustment as different treatment effects are estimated for the different components. Sequentially rejective test procedures provide a powerful and flexible tool to control type I error. As with other multivariate time to event outcomes, closed form solutions for sample size planning will be difficult to obtain [52], but simulation algorithms allow for an extensive investigation of sample size requirements, including for complex models [40, 52].
This manuscript provides an accessible explanation of potential biases in treatment effect estimation when analysing composite endpoints. It illustrates that the risk for bias and its degree depend on whether first or multiple episodes per patient are analysed. Integrating multiple episodes into the statistical analysis model has the potential to reduce selection bias and to additionally capture indirect treatment effects. In particular for cardiovascular research, these findings may help to move beyond the paradigm of considering first events only.
Lim E, Brown A, Helmy A, Mussa S, Altman DG. Composite outcomes in cardiovascular research: a survey of randomized trials. Ann Intern Med. 2008; 149(9):612–17.
Freemantle N, Calvert M, Wood J, Eastaugh J, Griffin C. Composite outcomes in randomized trials: greater precision but with greater uncertainty? JAMA. 2003; 289(19):2554–9.
Ferreira-González I, Permanyer-Miralda G, Busse JW, Bryant DM, Montori VM, Alonso-Coello P, Walter SD, Guyatt GH. Methodologic discussions for using and interpreting composite endpoints are limited, but still identify major concerns. J Clin Epidemiol. 2007; 60(7):651–7.
Freemantle N, Calvert M. Weighing the pros and cons for composite outcomes in clinical trials. J Clin Epidemiol. 2007; 60(7):658–9.
Montori VM, Permanyer-Miralda G, Ferreira-González I, Busse JW, Pacheco-Huergo V, Bryant D, Alonso J, Akl EA, Domingo-Salvany A, Mills E, Wu P, Schünemann HJ, Jaeschke R, Guyatt GH. Validity of composite end points in clinical trials. BMJ. 2005; 330(7491):594–6.
Chi GYH. Some issues with composite endpoints in clinical trials. Fundam Clin Pharmacol. 2005; 19(6):609–19.
Ieva F, Jackson CH, Sharples LD. Multi-state modelling of repeated hospitalisation and death in patients with heart failure: The use of large administrative databases in clinical epidemiology. Stat Methods Med Res. 2017; 26(3):1350–72.
Ip EH, Efendi A, Molenberghs G, Bertoni AG. Comparison of risks of cardiovascular events in the elderly using standard survival analysis and multiple-events and recurrent-events methods. BMC Med Res Methodol. 2015; 15(1):15.
Rauch G, Rauch B, Schüler S, Kieser M. Opportunities and challenges of clinical trials in cardiology using composite primary endpoints. World J Cardiol. 2015; 7(1):1–5.
Anker SD, Schroeder S, Atar D, Bax JJ, Ceconi C, Cowie MR, Crisp A, Dominjon F, Ford I, Ghofrani HA, Gropper S, Hindricks G, Hlatky MA, Holcomb R, Honarpour N, Jukema JW, Kim AM, Kunz M, Lefkowitz M, Le Floch C, Landmesser U, McDonagh TA, McMurray JJ, Merkely B, Packer M, Prasad K, Revkin J, Rosano GMC, Somaratne R, Stough WG, Voors AA, Ruschitzka F. Traditional and new composite endpoints in heart failure clinical trials: facilitating comprehensive efficacy assessments and improving trial efficiency. Eur J Heart Fail. 2016; 18(5):482–89.
Anker SD, McMurray JJV. Time to move on from 'time-to-first': should all events be included in the analysis of clinical trials? Eur Heart J. 2012; 33(22):2764–5.
Claggett B, Wei LJ, Pfeffer MA. Moving beyond our comfort zone. Eur Heart J. 2013; 34(12):869–71.
Ingel K, Jahn-Eimermacher A. Sample-size calculation and reestimation for a semiparametric analysis of recurrent event data taking robust standard errors into account. Biometrical J. 2014; 56(4):631–48.
Rogers JK, McMurray JJV, Pocock SJ, Zannad F, Krum H, van Veldhuisen DJ, Swedberg K, Shi H, Vincent J, Pitt B. Eplerenone in patients with systolic heart failure and mild symptoms: analysis of repeat hospitalizations. Circulation. 2012; 126(19):2317–23.
Rogers JK, Pocock SJ, McMurray JJV, Granger CB, Michelson EL, Östergren J, Pfeffer Ma, Solomon SD, Swedberg K, Yusuf S. Analysing recurrent hospitalizations in heart failure: a review of statistical methodology, with application to CHARM-Preserved. Eur J Heart Fail. 2014; 16(1):33–40.
Rogers JK, Jhund PS, Perez AC, Böhm M, Cleland JG, Gullestad L, Kjekshus J, van Veldhuisen DJ, Wikstrand J, Wedel H, McMurray JJV, Pocock SJ. Effect of rosuvastatin on repeat heart failure hospitalizations: the CORONA Trial (Controlled Rosuvastatin Multinational Trial in Heart Failure). JACC Heart Fail. 2014; 2(3):289–97.
Schmoor C, Schumacher M. Effects of covariate omission and categorization when analysing randomized trials with the Cox model. Stat Med. 1997; 16(1-3):225–37.
Hernan MA. The Hazards of Hazard Ratios. Epidemiology. 2010; 21(1):13–5.
Cécilia-Joseph E, Auvert B, Broët P, Moreau T. Influence of trial duration on the bias of the estimated treatment effect in clinical trials when individual heterogeneity is ignored. Biom J. 2015; 57(3):371–83.
Aalen OO, Cook RJ, Rysland K. Does Cox analysis of a randomized survival study yield a causal treatment effect? Lifetime Data Anal. 2015; 21(4):579–93.
Andersen PK, Gill RD. Cox's regression model for counting processes: a large sample study. Ann Stat. 1982; 10(4):1100–20.
Jahn-Eimermacher A. Comparison of the Andersen-Gill model with poisson and negative binomial regression on recurrent event data. Comput Stat Data Anal. 2008; 52(11):4989–97.
Metcalfe C, Thompson SG. The importance of varying the event generation process in simulation studies of statistical methods for recurrent events. Stat Med. 2006; 25:165–79.
Kelly PJ, Lim LL. Survival analysis for recurrent event data: an application to childhood infectious diseases. Stat Med. 2000; 19(1):13–33.
Therneau TM, Grambsch PM. Modeling Survival Data: Extending the Cox Model. New York: Springer; 2000.
Cheung YB, Xu Y, Tan SH, Cutts F, Milligan P. Estimation of intervention effects using first or multiple episodes in clinical trials: The Andersen-Gill model re-examined. Stat Med. 2010; 29(3):328–6.
Pearl J. Causal diagrams for empirical research. Biometrika. 1995; 82(4):669–88.
Greenland S, Pearl J, Robins JM. Causal Diagrams for Epidemiological Research. Epidemiology. 1999; 10(1):37–48.
Aalen OO, Roysland K, Gran JM, Kouyos R, Lange T. Can we believe the DAGs? A comment on the relationship between causal DAGs and mechanisms. Stat Methods Med Res. 2016; 25(5):2294–314.
Schoenfeld DA. Sample-size formula for the proportional-hazards regression model. Biometrics. 1983; 39(2):499–503.
Cole SR, Platt RW, Schisterman EF, Chu H, Westreich D, Richardson D, Poole C. Illustrating bias due to conditioning on a collider. Int J Epidemiol. 2010; 39(2):417–20.
Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004; 15(5):615–25.
Cox DR. Regression models and life-tables (with discussion). J R Stat Soc Ser B. 1972; 34(2):187–220.
Aalen O. Nonparametric inference for a family of counting processes. Ann Stat. 1978; 6(4):701–26.
Prentice RL, Williams BJ, Peterson AV. On the regression analysis of multivariate failure time data. Biometrika. 1981; 68:373–79.
European Medicines Agency: EMEA/CHMP/EWP/311890/2007 - Guideline on the evaluation of medicinal products for cardiovascular disease prevention. 2008. http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500003290.pdf. Accessed June 2017.
Ferreira-González I, Busse JW, Heels-Ansdell D, Montori VM, Akl Ea, Bryant DM, Alonso-Coello P, Alonso J, Worster A, Upadhye S, Jaeschke R, Schünemann HJ, Permanyer-Miralda G, Pacheco-Huergo V, Domingo-Salvany A, Wu P, Mills EJ, Guyatt GH. Problems with use of composite end points in cardiovascular trials: systematic review of randomised controlled trials. BMJ. 2007; 334(7597):786.
Mazroui Y, Mathoulin-Pelissier S, Soubeyran P, Rondeau V. General joint frailty model for recurrent event data with a dependent terminal event: Application to follicular lymphoma data. Stat Med. 2012; 31(11-12):1162–76.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2014. http://www.R-project.org/.
Jahn-Eimermacher A, Ingel K, Ozga AK, Preussler S, Binder H. Simulating recurrent event data with hazard functions defined on a total time scale. BMC Med Res Methodol. 2015; 15:16.
Beyersmann J, Latouche A, Buchholz A, Schumacher M. Simulating competing risks data in survival analysis. Stat Med. 2009; 28(6):956–71.
Hernández-Díaz S, Schisterman EF, Hernán MA. The birth weight paradox uncovered? Am J Epidemiol. 2006; 164(11):1115–20.
Rogers JK, Yaroshinsky A, Pocock SJ, Stokar D, Pogoda J. Analysis of recurrent events with an associated informative dropout time: Application of the joint frailty model. Stat Med. 2016; 35(13):2195–205.
Liu L, Wolfe RA, Huang X. Shared frailty models for recurrent events and a terminal event. Biometrics. 2004; 60(3):747–56.
Balan TA, Boonk SE, Vermeer MH, Putter H. Score test for association between recurrent events and a terminal event. Stat Med. 2016; 35(18):3037–48.
Wu L, Cook RJ. Misspecification of Cox regression models with composite endpoints. Stat Med. 2012; 31(28):3545–62.
Martinussen T, Vansteelandt S. On collapsibility and confounding bias in Cox and Aalen regression models. Lifetime Data Anal. 2013; 19(3):279–96.
Skipka G, Wieseler B, Kaiser T, Thomas S, Bender R, Windeler J, Lange S. Methodological approach to determine minor, considerable, and major treatment effects in the early benefit assessment of new drugs. Biom J. 2016; 58(1):43–58.
Pocock SJ, Ariti CA, Collier TJ, Wang D. The win ratio: a new approach to the analysis of composite endpoints in clinical trials based on clinical priorities. Eur Heart J. 2012; 33(2):176–82.
Bebu I, Lachin JM. Large sample inference for a win ratio analysis of a composite outcome based on prioritized components. Biostatistics. 2016; 17(1):178–87.
Rauch G, Jahn-Eimermacher A, Brannath W, Kieser M. Opportunities and challenges of combined effect measures based on prioritized outcomes. Stat Med. 2014; 33(7):1104–20.
Rauch G, Beyersmann J. Planning and evaluating clinical trials with composite time-to-first-event endpoints in a competing risk framework. Stat Med. 2013; 32(21):3595–608.
We thank Daniela Zoeller and two referees for their constructive comments improving the manuscript and Kathy Taylor for proof-reading.
This research was supported by a grant of the Deutsche Forschungsgemeinschaft (DFG) grant number JA 1821/4.
AJ developed the method, produced the results and wrote the first draft of the manuscript. KI derived the sample size requirements for the STRONG-HF trial and contributed to the methods. SP implemented the simulations. AB designed the STRONG-HF trial and contributed to the introduction, results and discussion sections. HB contributed to all parts of the manuscript. All authors read and approved the final manuscript.
Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center Johannes Gutenberg-University Mainz, Obere Zahlbacher Str. 69, Mainz, 55131, Germany
Antje Jahn-Eimermacher, Katharina Ingel, Stella Preussler & Harald Binder
Heart Failure Clinic, Cardiology Service, CIBERCV, Department of Medicine, UAB, Hospital Universitari Germans Trias i Pujol, Carretera del Canyet, Badalona, Barcelona, 08916, Spain
Antoni Bayes-Genis
Institute for Medical Biometry and Statistics, Faculty of Medicine and Medical Center - University of Freiburg, Stefan-Meier-Str. 26, Freiburg, 79104, Germany
Harald Binder
Antje Jahn-Eimermacher
Katharina Ingel
Stella Preussler
Correspondence to Antje Jahn-Eimermacher.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Jahn-Eimermacher, A., Ingel, K., Preussler, S. et al. A DAG-based comparison of interventional effect underestimation between composite endpoint and multi-state analysis in cardiovascular trials. BMC Med Res Methodol 17, 92 (2017). https://doi.org/10.1186/s12874-017-0366-9
Accepted: 09 June 2017
Composite endpoint
Recurrent events
Multi-state models
Hospital admissions
Data analysis, statistics and modelling | CommonCrawl |
Preliminary geochemical modeling of water–rock–gas interactions controlling CO2 storage in the Badenian Aquifer within Czech Part of Vienna Basin
K. Labus ORCID: orcid.org/0000-0001-9854-154X1,
P. Bujok2,
M. Klempa2,
M. Porzer2 &
D. Matýsek2
Environmental Earth Sciences volume 75, Article number: 1086 (2016)
A Correction to this article was published on 16 April 2019
Prediction of hydrogeochemical effects of geological CO2 sequestration is crucial for planning an industrial or even experimental scale injection of carbon dioxide gas into geological formations. This paper presents a preliminary study of the suitability of a saline aquifer associated with a depleted oil field in the Czech part of the Vienna Basin as a potential greenhouse gas repository. Two steps of modeling enabled prediction of immediate changes in the aquifer and caprocks impacted by the first stage of CO2 injection and the assessment of long-term effects of sequestration. Hydrochemical modeling and experimental tests of rock–water–gas interactions allowed for evaluation of trapping mechanisms and assessment of the CO2 storage capacity of the formations. In the analyzed aquifer, CO2 gas may be locked in mineral form in dolomite and dawsonite, and the calculated trapping capacity reaches 13.22 kgCO2/m3. For the caprock, the only mineral able to trap CO2 is dolomite, and the trapping capacity equals 5.07 kgCO2/m3.
Prediction of hydrogeochemical effects of geological CO2 sequestration is crucial for planning an industrial or even experimental scale injection of carbon dioxide gas into geological formations (e.g., Bachu et al. 1994, 2007). Experimental examinations of the CO2–brine–water system behavior provide precise results on short-term reactions and their products (e.g., Kaszuba et al. 2005; Lin et al. 2008; Rosenbauer et al. 2005). On the other hand, they give only an approximation of the long-term phenomena that occur within the geologic space. Coupled numerical models, incorporating kinetic transport through porous media and the thermodynamics of the multiphase system, are the most helpful in forecasting the injection impact on the hosting and insulating rock environment (e.g., Gunter et al. 1993; Perkins and Gunter 1995; White et al. 2005).
Batch experiments and geochemical modeling allow for the assessment of geochemical evolution without taking into account the fluid flow and chemical transport. Such approach is a simplification as the real geochemical evolution in gas–rock–brine systems occurs through a complex interplay of fluid and heat flow, and chemical transport processes. The geologic storage of CO2 is possible due to several physicochemical mechanisms, and one of them is the mineral trapping. These processes evolve over time, since CO2 injection, and at an early stage of the project, they are dominated by structural, stratigraphic or hydrodynamic trapping. They are ruled mainly by the following physical processes: fluid flow in liquid and gas phases under pressure and gravity forces, capillary pressure effects and heat flow by conduction, convection and diffusion. Transport of aqueous and gaseous species by advection and molecular diffusion is considered in both liquid and gas phases. (Xu et al. 2003). After CO2 injection is finished, numerous trapping mechanisms become increasingly important. CO2 may be partially contained via residual trapping as the plume moves away from the well. The gas also mixes with and dissolves in the formation water at the leading and trailing edges of the plume (solubility trapping). Dissociation of the CO2 dissolved in the formation water creates acidity that reacts with minerals in the formation and may dissolve fast reacting carbonate minerals (if present) in the acidified zone surrounding the injection well, leading to an increase in dissolved bicarbonate (so-called ionic trapping). In the longer term, dissolution of silicates such as plagioclase and chlorite causes pH to increase, and carbonates may precipitate in the previously acidified zone as CO2 partial pressure declines (mineral trapping) (Golding et al. 2013).
For the purpose of this work, hydrochemical modeling under no-flow conditions was carried out. It was based on information regarding the petrophysical and mineralogical characteristics of the formation, pore water composition, pressure and temperature values and kinetic reaction rate constants. The study was performed in the framework of a research pilot project for geological storage of CO2 in the Czech Republic, conducted within the area of the Vienna Basin (Fig. 1).
Location of the study area
The potential storage site is situated in the depleted oil field Brodské of Middle Badenian age, in the Moravian part of the Vienna Basin (Fig. 2). The basin is a classical thin-skinned pull-apart basin of Miocene age, whose sedimentary fill overlies the Carpathian thrust belt (Decker 1996). The petroleum systems of the Vienna Basin Miocene sedimentary carapace and the entire Carpathian region in Moravia are mostly associated with Jurassic source rocks (Picha and Peters 1998). Hydrocarbons generated within the formation supplied several oil and gas fields in the Miocene reservoirs, mostly via several major fault and fracture zones. Lower Badenian sediments, with a total thickness of about 700 m, consist of basal conglomerates covered with clays up to 350 m thick. With regard to carbon dioxide sequestration, the aquifer of Middle Badenian age, represented by 50- to 80-m-thick sands (which also formed the collector for oil and gas), was considered in our study. The overlying caprock, about 100 m thick, is built of pelitic sediments containing agglutinated foraminifera fossils. The Upper Badenian sands and pelitic sediments are about 200 m thick (Krejčí et al. 2015).
Schematic cross section of the Brodské oil field (Krejčí et al. 2015)
Modeling scheme and input data
Modeling scheme
The applied scheme was designed to represent the dual-scale phenomena typical of relatively short-term injection and of longer-term sequestration. Simulations of water–rock–gas interactions were performed with the Geochemist's Workbench (GWB) 9.0 geochemical software (Bethke 1996, 2008). The GWB package was used for equilibrium and kinetic modeling of the gas–brine–rock system in two stages. The first was aimed at simulating the immediate changes in the aquifer and caprock caused by the beginning of CO2 injection; the second enabled assessment of the long-term effects of sequestration. The nature and progress of the reactions were monitored, and their effects on formation porosity and mineral sequestration capacity (CO2 trapping in the form of carbonates) were calculated. The CO2–brine–rock reactions were simulated using two modeling procedures:
Equilibrium modeling was applied to reproduce the composition of pore water, based on the sample's chemical composition equilibrated with the formation rock mineralogy. The model required thermodynamic data for the reacting minerals, their abundance in the assemblages within the host rock and the caprock, the relative fraction of pore water, and information on its physicochemical parameters,
Kinetic modeling was carried out in order to evaluate changes in the hydrogeochemical environment of the formation due to the injection and storage of CO2. This stage used the pore water composition calculated in the previous step (equilibrium modeling). A sliding fugacity path of CO2 gas was applied to simulate the introduction of the gas into the system and the desired pressure buildup within 100 days. This simplification also assumed complete mixing between the gas and brine from the beginning of the reactions in the modeled system. This enabled the assessment of the volumes and amounts of mineral phases precipitating or being dissolved during the simulated reactions, and of their influence on porosity changes and the amounts of CO2 sequestered.
Thermodynamic database "thermo.dat" (built-in the GWB package) containing activity coefficients calculated on the basis of "B-dot" equation (Helgeson and Kirkham 1974) (an extended Debye-Hückel model) was applied.
Mineralogical characteristics of the formation
Composition of mineral assemblage of the samples considered in the model was determined by means of XRD analysis–Table 1.
Table 1 Composition of mineral assemblages considered in the model (%)
Petrophysical characteristics of the formation
Porosity values of 27.3 % for the aquifer and 8 % for the caprock were adopted. The porosimetric properties of the examined rocks were determined by means of Mercury Intrusion Porosimetry (Autopore 9220 Micromeritics Injection Porosimeter). The density of the rock samples was measured with a helium AccuPyc 1330 pycnometer. The method allowed for the determination of the pore size distribution and the "effective" porosity related to pores with radii between 0.01 and 100 μm.
The reaction model required the input of the mineral specific surface areas (SSAs). These were calculated assuming spherical grains of different diameters for sandstones and fine-grained rocks. The SSA [cm2/g] is calculated from the radius, molar volume and molecular weight of each mineral using the following formula:
$$SSA = \frac{A \cdot v}{V \cdot MW},$$
where A−sphere area [cm2], v−molar volume [cm3/mol], V−sphere volume [cm3], and MW−molecular weight [g/mol] of a given mineral phase. Values of the specific surface areas used in calculations are presented in Table 2.
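As an illustration of this formula, the short sketch below computes the SSA of an idealized spherical grain; the function name and the quartz-like input values in the example call are only placeholders, not the actual inputs behind Table 2.

```python
import math

def specific_surface_area(radius_cm, molar_volume_cm3_mol, molecular_weight_g_mol):
    """Specific surface area [cm2/g] of an idealized spherical mineral grain:
    SSA = (A * v) / (V * MW), with A and V the sphere's area and volume."""
    sphere_area = 4.0 * math.pi * radius_cm ** 2            # A [cm2]
    sphere_volume = (4.0 / 3.0) * math.pi * radius_cm ** 3  # V [cm3]
    return (sphere_area * molar_volume_cm3_mol) / (sphere_volume * molecular_weight_g_mol)

# Placeholder, quartz-like example: grain radius 0.01 cm,
# molar volume 22.69 cm3/mol, molecular weight 60.08 g/mol
print(round(specific_surface_area(0.01, 22.69, 60.08), 1), "cm2/g")
```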
Table 2 Specific surface area of mineral grains (cm2/g) applied in modeling
Pressure, temperature and CO2 fugacity
The modeling was performed assuming the formation pressure to be at the hydrostatic level. There is no information indicating underpressure or overpressure conditions within the sedimentary complex under consideration. Pressure and temperature values relevant to the depth of the modeled environments are given in Table 3. Temperature values were adopted from archival well-log data.
Table 3 Pressure, temperature and CO2 fugacity data for modeled environments
The utilized software, GWB, requires the gas pressure input in the form of fugacity–a measure of chemical potential expressed as an adjusted pressure. The appropriate values (Table 3) were calculated using the online calculator of the Duan Group (http://www.geochem-model.org/models/co2/), after Duan et al. (2006).
Pore water composition
Analyses of the formation water were carried out using standard methods, including in situ measurements, to assure the quality of interpretation. For the purpose of the simulation, the chemical compositions of the formation water in the aquifer (host environment) and in the caprock were obtained by equilibrating the formation water (Table 4) with the mineral assemblages typical of the modeled environments (Table 1).
Table 4 Initial composition of aquifer pore waters used in the simulations
Kinetic rate parameters
The following kinetic dissolution/precipitation rate equation, simplified after Lasaga (1984), was used in the calculations:
$$r_{k} \, = \,A_{S} k_{T} \,\left( {1 - \frac{Q}{K}} \right),$$
where rk−reaction rate [mol s−1] (dissolution: rk > 0, precipitation: rk < 0), AS−mineral surface area [cm2], kT−rate constant [mol cm−2 s−1] at temperature T, Q−activity product (−), and K−equilibrium constant of the dissolution reaction (−).
According to the above equation, a given mineral precipitates when it is supersaturated or dissolves when it is undersaturated at a rate proportional to its rate constant and the surface area. The Arrhenius law expresses the dependence of the rate constant−kT on the temperature−T:
$$k_{T} = k_{25}\,\exp \left[ \frac{-E_{A}}{R}\left( \frac{1}{T} - \frac{1}{298.15} \right) \right],$$
where k25−rate constant at 25 °C [mol m−2 s−1], EA−activation energy [J mol−1], R−gas constant (8.3143 J K−1 mol−1), and T−absolute temperature (K).
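A minimal sketch of the two relations above (the rate law and its Arrhenius temperature correction) is given below; the function names are ours and the parameter values in the example call are illustrative placeholders rather than the Table 5 data.

```python
import math

R = 8.3143  # gas constant [J K-1 mol-1]

def rate_constant_at_T(k25, Ea, T):
    """Arrhenius correction of the 25 degC rate constant k25 to temperature T [K]."""
    return k25 * math.exp(-Ea / R * (1.0 / T - 1.0 / 298.15))

def kinetic_rate(As, kT, Q, K):
    """Lasaga-type rate r = As * kT * (1 - Q/K): positive for dissolution (Q < K),
    negative for precipitation (Q > K)."""
    return As * kT * (1.0 - Q / K)

# Illustrative placeholder values (not taken from Table 5):
k43 = rate_constant_at_T(k25=1.0e-12, Ea=5.0e4, T=316.15)   # ~43 degC
print(kinetic_rate(As=1.0e3, kT=k43, Q=0.1, K=1.0))          # dissolution rate
```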
The kinetic rate constants for the minerals involved in the modeled reactions (Table 5) were taken from Palandri and Kharaka (2004).
Table 5 Kinetic rate parameters at 25 °C–data from Palandri and Kharaka (2004)
The reaction of carbon dioxide with water producing carbonic acid is of the greatest importance for the mineral sequestration process, because only the aqueous form of CO2 (not the molecular gas form, CO2(g)) can react with the aquifer rocks. The solubility of CO2 is a function of temperature, pressure and the ionic strength of the solution. The CO2 solubility in a 1 m NaCl solution at a temperature of 40 °C and 100 bar pressure–similar to the possible disposal conditions in the aquifer considered–equals ca. 1 mol, which is about 23 % lower than in pure water (calculated based on Duan and Sun 2003; Duan et al. 2006).
Dissociation of H2CO3 results in a pH decrease, reaching its minimum at about 50 °C (Rosenbauer et al. 2005). Therefore, the high availability of H+ ions at relatively lower temperatures enhances the hydrolysis of minerals forming the aquifer rock matrix. Carbonic acid dissociation initiates several reactions involving mineral phases and pore fluids, ultimately leading to mineral or solubility CO2 trapping.
In this work, at each modeled stage (injection and storage), brine of a given chemistry (Table 4) was considered. Its volume was set, assuming full water saturation, at the value yielding the required porosity (Table 4), with the volume of the mineral assemblage (Table 1) as the complement to 10,000 cm3. The system temperature and CO2 fugacity were set at the levels shown in Table 3. The reactions in the aquifer and caprock systems considered are described in this section.
Stage 1: 100 days of CO2 injection
At the first stage, the CO2 injection, lasting for 100 days, causes an increase in gas fugacity to the assumed value of fCO2 = 68.88 bar. In effect, a significant elevation of the CO2(aq) and HCO3− concentrations (Reaction 1) as well as a drop in pore water pH from 6.6 (the value for formation water equilibrated with the mineral assemblage) to 4.8 is observed–Fig. 3. Total porosity in the sandstone grows by a relative 3.6 %, which has virtually no influence on the penetration of the injected fluid into the aquifer.
Aquifer–changes in fCO2, concentrations of CO2(aq) and HCO3−, pH, and rock matrix porosity at the stage of CO2 injection
$${\text{CO}}_{2}({\text{g}}) + {\text{H}}_{2}{\text{O}} \leftrightarrow {\text{HCO}}_{3}^{-} + {\text{H}}^{+}$$
The increase in porosity is controlled mainly by the transformation of gypsum to anhydrite (Reaction 2), described, e.g., in Ostroff (1964), and by the dissolution of calcite. Primary gypsum becomes completely depleted in this process. The amounts (mol) of the minerals precipitated or dissolved in these processes, per 10,000 cm3 of modeled rock, are shown in Fig. 4.
$$\mathop {{\text{CaSO}}_{4} \cdot 2{\text{H}}_{2}{\text{O}}}\limits_{\text{Gypsum}} \leftrightarrow \mathop {{\text{CaSO}}_{4}}\limits_{\text{Anhydrite}} + 2{\text{H}}_{2}{\text{O}}$$
Aquifer–changes in the quantities of selected minerals at the stage of CO2 injection
Calcite dissolution also increases the bicarbonate ion concentration (Reaction 3):
$$\mathop {{\text{CaCO}}_{ 3} }\limits_{\text{Calcite}} +\,{\text{CO}}_{ 2} ({\text{aq}}) + {\text{H}}_{ 2} {\text{O}} \leftrightarrow {\text{Ca}}^{2 + } + 2 {\text{HCO}}_{3}^{ - }$$
Calcite (and chlorite) dissolution may enhance dawsonite formation together with chalcedony and ordered dolomite (although a large part of the latter may be transformed from the dolomite present in the primary mineral assemblage of the rock)–Fig. 4, Reaction 4.
$$\begin{aligned} \mathop {{\text{CaCO}}_{ 3} }\limits_{\text{Calcite}} + 1.3{\text{CO}}_{2} ({\text{g}})\,+\,0.2\mathop {{\text{Mg}}_{ 5} {\text{Al}}_{ 2} {\text{Si}}_{ 3} {\text{O}}_{ 1 0} ( {\text{OH)}}_{ 8} }\limits_{\text{Clinochlor14A}}\,+\,0.3{\text {Na}}^{ + } \leftrightarrow \hfill \\ 0.3\mathop {{\text{NaAlCO}}_{ 3} ( {\text{OH)}}_{ 2} }\limits_{\text{Dawsonite}}\,+\,0.5{\text{H}}_{ 2} {\text{O}}\,+\,\mathop {{\text{CaMg(CO}}_{ 3} )_{ 2} }\limits_{\text{Dolomite - ord}}\,+\,0.1{\text{Al}}^{3 + }\,+\,0.6\mathop {{\text{SiO}}_{ 2} }\limits_{\text{Chalcedony}} \hfill \\ \end{aligned}$$
Stage 2: 10,000 years since the termination of CO2 injection
At the beginning of the second stage, CO2 fugacity drops rapidly from 59.62 bar to approximately 30 bar; a slower decrease to 1 bar, reached after 3000 years of storage, is then noted–Fig. 5. The CO2(aq) concentration falls in the same manner, while the HCO3− concentration decreases within the 0- to 2500-year period. In the next 500 years, it rises to a concentration of 0.3 molal and stabilizes around this level. The pH shows a trend inversely related to the CO2 fugacity; after 3000 years, the reaction of the pore fluid stabilizes at approximately pH 6.4. The porosity decreases to about 26.8 %, which is 0.5 percentage points less than the primary value. This is caused mainly by the precipitation of ordered dolomite, chalcedony and dawsonite (the volume of these phases exceeds the volume of the dissolved minerals)–Fig. 6.
Changes in fCO2, concentrations of CO2(aq) and HCO3−, pH, and rock matrix porosity since the termination of CO2 injection
Aquifer–changes in the quantities of selected minerals after the injection termination–10,000 years
The mineral trapping mechanism is in general controlled by the same reactions as described for the injection stage: dissolution of calcite and dolomite, and precipitation of ordered dolomite together with dawsonite and chalcedony. The latter Reaction (5), consuming calcite and albite (constituents of the rock matrix) as well as hydrogen ions from the solution, might be responsible for the significant increase in the pH of the pore waters.
$$\mathop {{\text{CaCO}}_{3}}\limits_{\text{Calcite}}\,+\,\mathop {{\text{NaAlSi}}_{3}{\text{O}}_{8}}\limits_{\text{Albite}}\,+\,2{\text{H}}^{+} \leftrightarrow \mathop {{\text{NaAlCO}}_{3}({\text{OH)}}_{2}}\limits_{\text{Dawsonite}}\,+\,{\text{Ca}}^{2+} + \mathop {3{\text{SiO}}_{2}}\limits_{\text{Chalcedony}}$$
Transformation from dolomite (14 mol dissolved) is not the only cause for formation of ordered dolomite (17 mol precipitated)–Fig. 6. The remaining 3 mol of ordered structure CaMg(CO3)2 is produced in the Reaction (4)–dissolution of calcite and chlorite.
Caprocks
At the first stage, the CO2 injection, lasting for 100 days, causes an increase in gas fugacity to the assumed 59.62 bar. In effect, a significant elevation of the CO2(aq) concentration and a decline of pH to 4.7 are observed. In general, the reactions in the caprock system proceed in a similar manner as in the case of the aquifer–Fig. 7. This is connected with the similar mineralogical compositions (Table 1) and pore water chemistry (Table 4) of the two formations considered. The porosity increase is mainly related to the transformation of gypsum (which is exhausted in this process) into anhydrite, Reaction (2); the volume of the dissolved gypsum exceeds that of the newly formed anhydrite by over 50 %. Total porosity in the caprock increases by a relative 7 %–this phenomenon may increase the penetration of the injected fluid into the insulating layer.
Caprock–changes in the quantities of selected minerals at the stage of CO2 injection (0.8 mol anhydrite precipitation and 0.8 mol gypsum dissolution are not shown)
Calcite and chlorite dissolution triggers the precipitation of dawsonite, chalcedony and ordered dolomite–Fig. 7, Reaction (4) or (6).
$$\begin{aligned} \mathop {{\text{CaCO}}_{ 3} }\limits_{\text{Calcite}} +\, 1.4{\text{CO}}_{ 2} ( {\text{g}}) + 0.2\mathop {{\text{Mg}}_{ 5} {\text{Al}}_{ 2} {\text{Si}}_{ 3} {\text{O}}_{ 1 0} ( {\text{OH)}}_{ 8} }\limits_{\text{Clinochlor14A}} +\, 0.4{\text{Na}}^{ + } \hfill \\ \leftrightarrow 0.4\mathop {{\text{NaAlCO}}_{ 3} ( {\text{OH)}}_{ 2} }\limits_{\text{Dawsonite}} +\, 0.2{\text{H}}_{ 2} {\text{O}} + \mathop {{\text{CaMg(CO}}_{ 3} )_{ 2} }\limits_{\text{Dolomite - ord}} + 0. 6\mathop {{\text{SiO}}_{ 2} }\limits_{\text{Chalcedony}} \hfill \\ \end{aligned}$$
Significant amounts of ordered structure dolomite, however, are transformed from dolomite (as described earlier). Some part of dolomite ord. may also originate from Reaction (7), which consumes calcite, bicarbonate and magnesium ions from the pore solution and results in the decrease in pH.
$$\mathop {{\text{CaCO}}_{ 3} }\limits_{\text{Calcite}} + {\text{HCO}}_{3}^{ - } + {\text{Mg}}^{2 + } \leftrightarrow \mathop {{\text{CaMg(CO}}_{ 3} )_{ 2} }\limits_{\text{Dolomite - ord}}\,+\,{\text{H}}^{ + }$$
At the beginning of the second stage, CO2 fugacity drops rapidly from 59.62 bar to a value below 0.001 bar; an increase to 0.002 bar is then noted–Fig. 8. The CO2(aq) and HCO3− concentrations fall in the same manner; this is accompanied by a quick rise of pH to 7.5, and in the remaining period the reaction of the pore fluid stabilizes at approximately pH 7.4. The porosity reaches a value of about 9.15 %.
Caprock–changes in fCO2, concentrations of CO2(aq) and HCO3−, pH, and rock matrix porosity since the termination of CO2 injection
In the first period of storage, the increasing porosity is controlled by the substantial decay of dolomite and aluminosilicates–clinochlore 14A, albite and K-feldspar (Fig. 9)–whose volume is not replaced by the precipitating phases such as ordered dolomite, saponite or muscovite. A possible Reaction (8) is hydrogen-consuming and may be partly responsible for the growth of pH.
Caprock–changes in the quantities of selected minerals after the injection termination–10,000 years
$$\begin{aligned} 0.62\mathop {{\text{NaAlSi}}_{ 3} {\text{O}}_{ 8} }\limits_{\text{Albite}}\,+\,0.6\mathop {{\text{Mg}}_{ 5} {\text{Al}}_{ 2} {\text{Si}}_{ 3} {\text{O}}_{ 1 0} ( {\text{OH)}}_{ 8} }\limits_{\text{Clinochlor14A}} + 0.747\mathop {{\text{KAlSi}}_{ 3} {\text{O}}_{ 8} }\limits_{\text{K - feldspar}}\,+\,0.29{\text{H}}^{ + } \leftrightarrow \hfill \\ \mathop {{\text{Na}}_{ 0. 3 3} {\text{Mg}}_{ 3} {\text{Al}}_{ 0. 3 3} {\text{Si}}_{ 3. 6 7} {\text{O}}_{ 1 0} ( {\text{OH)}}_{ 2} }\limits_{\text{Saponite - Na}}\,+\,0.747\mathop {{\text{KAl}}_{ 3} {\text{Si}}_{ 3} {\text{O}}_{ 1 0} ( {\text{OH)}}_{ 2} }\limits_{\text{Muscovite}} + 0.29{\text{Na}}^{ + } + 0.8{\text{H}}_{ 2} {\text{O}} \hfill \\ \end{aligned}$$
The mineral trapping mechanism is in general ruled by the dissolution of dolomite and calcite and by the precipitation of ordered dolomite–Fig. 9. The ordered dolomite precipitation might be controlled by the sulfide-catalyzed mechanism reported in Zhang et al. (2012).
Core samples were placed in the reaction chamber of the RK1 autoclave; construction details of the RK1 experimental apparatus were described in Labus and Bujok (2011). The chamber was filled to 3/4 of its volume with brine (Table 4), flushed with CO2 gas in order to evacuate the air from the free space, and heated. Next, CO2 was injected to the desired pressure and the temperature was set at 43 °C (±0.2 °C) to achieve the reservoir conditions, under which CO2 occurs in the supercritical phase. A swinging movement of the autoclave facilitated mixing of the fluids and enhanced the contact between the liquid and solid phases. The experiment was carried out for 75 days in order to simulate the initial period of storage. During this time temperature, pressure and pH (using a high-pressure electrode) were monitored. At the end of the experiment, the autoclave was depressurized. The reacted samples were dried in a vacuum dryer; next, their outer fragments were separated, powdered and examined by means of XRD analysis. XRD analysis of the reacted sample–Br45 caprock–revealed differences in mineral composition compared to the primary assemblage (Fig. 10). The results could not be interpreted in a simple way, because the powdered fragments consisted of the very superficial parts of the reacted cores as well as their inner, less reacted or even chemically unchanged parts. Nevertheless, the observations could support the modeling results, particularly with regard to the dissolution of calcite, muscovite and feldspars and the increased abundance of dolomite.
Results of XRD analysis of caprock sample before and after autoclave experiment
The trapping capacity of the analyzed formations (Table 6) was calculated under the following assumptions. The unitary volume of modeled rock (UVR)–aquifer or caprock–is equal to 0.01 m3; with a primary porosity (prior to storage) of np, the rock matrix volume in 1 m3 of formation corresponds to 100(1 − np) UVRs. Due to the modeled reactions, certain quantities of carbonate minerals dissolve or precipitate per each UVR. On this basis, the CO2 balance and, eventually, the quantity of CO2 trapped in mineral phases are calculated. The modeled chemical composition of the pore water allows calculation of the quantity of carbon dioxide trapped in solution. After the simulated 10 ka of storage, the final porosity is nf. The pore space is assumed to be filled with pore water of known (modeled) concentrations of CO2-bearing aqueous species, e.g., HCO3−, CO2(aq), CO3 2−, NaHCO3 (expressed in terms of mg HCO3−/dm3). An explanation using the example of the aquifer rock is given below.
Table 6 Aquifer and caprock values of porosity, mineral and dissolution trapping capacity of analyzed formation
The primary porosity np is 0.273; thus, 1 m3 of formation contains 72.7 UVRs of rock matrix. For each UVR, 16.66 mol of ordered dolomite precipitates, trapping 33.329 mol of CO2 (each mole of dolomite traps two moles of CO2); additionally, 2.37 mol of dawsonite precipitates as well. Per each UVR, 1.29 mol of calcite, 14.14 mol of dolomite and 1.103 mol of ankerite are dissolved (each mole of ankerite releases two moles of CO2). The difference between the quantity of CO2 trapped in the precipitating minerals and that released from the dissolved minerals is equal to 3.917 mol per UVR; thus, 296.72 mol of CO2 is trapped in 1 m3 of the formation.
After 10 ka of storage, the final mass of pore fluid per UVR is equal to 2.7733 kg; therefore, 1 m3 of formation is assumed to contain 277.33 kg of pore water. The difference between the HCO3− concentration in the primary fluid (0.01244 molal) and in the fluid after 10,000 years of storage (0.04206 molal) equals 0.0296 molal; the corresponding differences in the CO2(aq) and NaHCO3 concentrations are 0.01531 and 0.004374 molal, respectively. Therefore, approximately 0.049 mol of CO2 is trapped in solution per kilogram of pore water, i.e., about 13.7 mol (0.6 kg) per 1 m3 of formation.
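The bookkeeping behind this worked example can be re-traced with the sketch below (CO2 taken as 44.01 g/mol); the per-UVR quantities are copied from the text, so the sketch only illustrates the balance itself, and the small differences from the figures quoted above presumably stem from rounding of the reported quantities.

```python
# Mineral and solubility trapping balance for the aquifer, per UVR (0.01 m3 of rock).
CO2_MOLAR_MASS = 44.01e-3  # kg/mol

precipitated = {"dolomite_ord": 16.66, "dawsonite": 2.37}            # mol per UVR
dissolved = {"calcite": 1.29, "dolomite": 14.14, "ankerite": 1.103}  # mol per UVR
co2_per_mol = {"dolomite_ord": 2, "dawsonite": 1,                    # mol CO2 per mol mineral
               "calcite": 1, "dolomite": 2, "ankerite": 2}

trapped = sum(n * co2_per_mol[m] for m, n in precipitated.items())
released = sum(n * co2_per_mol[m] for m, n in dissolved.items())
net_per_uvr = trapped - released                  # ~3.9 mol CO2 per UVR

np_porosity = 0.273
uvr_per_m3 = 100.0 * (1.0 - np_porosity)          # rock-matrix UVRs in 1 m3 of formation
mineral_mol_per_m3 = net_per_uvr * uvr_per_m3

# Solubility trapping: change in dissolved carbon species (molal) times pore-water mass.
delta_molal = 0.0296 + 0.01531 + 0.004374         # HCO3-, CO2(aq), NaHCO3
pore_water_kg_per_m3 = 277.33
solution_mol_per_m3 = delta_molal * pore_water_kg_per_m3

print(f"mineral trapping  ~ {mineral_mol_per_m3 * CO2_MOLAR_MASS:.1f} kg CO2/m3")
print(f"solution trapping ~ {solution_mol_per_m3 * CO2_MOLAR_MASS:.2f} kg CO2/m3")
```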
In our previous work (Labus et al. 2011), regarding CO2 storage in the Upper Silesian Coal Basin (Poland), we utilized data allowing for more complex modeling and sequestration capacity evaluation. Model calculations enabled the estimation of pore space saturation with gas, changes in the composition and pH of pore waters, and the relationships between porosity and permeability changes and the crystallization or dissolution of minerals in the rock matrix. On the basis of a two-dimensional model, the processes of gas and pore fluid migration within the analyzed aquifers were also characterized, including density-driven flow based on the time-varying density contrasts between supercritical CO2, the initial brine and the brine with dissolved CO2. These outcomes may give an approximation of the proportions between the different trapping mechanisms. Their magnitudes reached 2.5–7.0 kg/m3 for the dissolved phase (CO2(aq)), −1.2 to 9.9 kg/m3 for the mineral phase (SMCO2), but as much as 17–70 kg/m3 for the supercritical phase (SCCO2).
This work was aimed at a preliminary determination of the suitability, for the purpose of CO2 sequestration, of the aquifer associated with the depleted Brodské oil field in the Moravian part of the Vienna Basin. Identification of possible water–rock–gas interactions was performed by means of geochemical modeling in two stages, simulating the immediate changes in the rocks impacted by the injection of CO2 and the long-term effects of sequestration.
Hydrogeochemical simulation of the CO2 injection into aquifer rocks demonstrated that dehydration of gypsum (resulting in anhydrite formation) and dissolution of calcite are responsible for the increase in porosity. Dissolution of calcite and chlorite enables precipitation of dawsonite–NaAlCO3(OH)2, chalcedony and ordered dolomite. Significant amounts of the latter one, however, result from the transformation of primary dolomite, which was present in the original rock matrix, before the injection.
According to the hydrogeochemical model of the second stage (10,000 years of storage), the mineral trapping mechanism in the aquifer is in general controlled by the same reactions as described for the injection stage. Additionally, precipitation of dawsonite and chalcedony may occur as an effect of calcite and albite dissolution; this reaction contributes to a considerable increase in pH.
In general, the reactions in the caprock system proceed in a similar manner as in the case of the aquifer. Nevertheless, a considerable decay of primary dolomite together with aluminosilicates, which is not balanced by the precipitation of secondary minerals, is responsible for the increase in porosity in the first period of storage.
Previous studies proved that the caprock is also an environment for geochemical reactions that, on a geological time frame, might be of importance not only for the repository integrity but also for CO2 trapping or release. When modeling the contact zone between the aquifer and insulating layers, Labus (2012) reported a process of CO2 desequestration, associated with the dissolution of carbonate minerals, operating in the lower part of the caprock. On the other hand, Xu et al. (2005) observed the most intense geochemical evolution in the first 4 meters of the caprock, but some mineralogical changes (including siderite formation) reached the boundary of the model, i.e., 10 m from the aquifer–caprock interface. The mineral trapping capacity of the caprock leveled at approximately 10 kg/m3, while in the aquifer it was almost 80 kg/m3 (Xu et al. 2005). All this means that the caprock should be taken into account when calculating the CO2 trapping, because it may account for at least a few percent of the whole repository's capacity.
Our laboratory experiment, reproducing water–rock–gas interactions in the possible storage site during the injection stage, supports the modeling results particularly with regard to the dissolution of calcite and aluminosilicates, as well as to an increase in the relative share of dolomite and quartz in the rock matrix.
The phases capable of mineral CO2 trapping in the discussed aquifer are ordered dolomite and dawsonite, while dolomite, calcite and ankerite are susceptible to degradation. The trapping capacity calculated from the modeling results for the aquifer amounts to 13.22 kg CO2/m3; this value is comparable to the ones obtained in simulations regarding other geologic formations considered as prospective CO2 repositories (e.g., Xu et al. 2003; Labus et al. 2010; Labus 2012). In the analyzed caprock, the only mineral able to trap CO2 is ordered-structure dolomite, while dolomite and calcite tend to degrade. Dawsonite formed during the injection stage is quickly and completely dissolved during the storage stage. The trapping capacity of the caprock totals 5.07 kg CO2/m3. The amount of carbon dioxide that could be trapped in pore water reaches 0.6 kg CO2/m3 of aquifer formation.
The work carried out constitutes an initial stage in the recognition of the suitability of the analyzed aquifer for CO2 storage. Its full characterization in this respect requires detailed determination of the anisotropy of hydrogeological parameters and of the mineralogical composition of the formation. Transport and reaction models created on this basis and calibrated against the experimental results shall provide information on the spatial distribution of trapping capacity values and the variability of gas–rock–water interactions.
Bachu S, Gunter WD, Perkins EH (1994) Aquifer disposal of CO2: hydrodynamic and mineral trapping. Energy Convers Manag 35(4):269–279
Bachu S, Bonijoly D, Bradshaw J, Burruss R, Holloway S, Christensen NP, Mathiassen OM (2007) CO2 storage capacity estimation: methodology and gaps. Int J Greenhouse Gas Control 1:430–443
Bethke CM (1996) Geochemical reaction modeling. Oxford University Press, New York
Bethke CM (2008) Geochemical and biogeochemical reaction modeling. Cambridge University Press, Cambridge
CO2CRC (2008) Storage capacity estimation, site selection and characterisation for CO2 storage projects. Cooperative Research Centre for Greenhouse Gas Technologies, Canberra. CO2CRC Report No. RPT08-1001
Decker K (1996) Miocene tectonics at the Alpine-Carpathian junction and the evolution of the Vienna Basin. Mitt Ges Geol Bergbaustud Oesterreich 41:33–44
Duan ZH, Sun R (2003) An improved model calculating CO2 solubility in pure water and aqueous NaCl solutions from 273 to 533 K and from 0 to 2000 bar. Chem Geol 193:257–271
Duan ZH, Sun R, Zhu C, Chou IM (2006) An improved model for the calculation of CO2 solubility in aqueous solutions containing Na+, K+, Ca2+, Mg2+, Cl−, and SO4 2−. Mar Chem 98(2–4):131–139
Golding SD, Dawson GKW, Boreham CJ, Mernagh T (2013) ANLEC Project 7-1011-0189: Authigenic carbonates as natural analogues of mineralisation trapping in CO2 sequestration: a desktop study. Manuka, ACT, Australia: Australian National Low Emissions Coal Research and Development
Gunter WD, Perkins EH, McCann TJ (1993) Aquifer disposal of CO2-rich gases: reaction design for added capacity. Energy Convers Manag 34(9–11):941–948
Helgeson HC, Kirkham DH (1974) Theoretical prediction of the thermodynamic behavior of aqueous electrolytes at high pressures and temperatures, II. Debye-Hückel parameters for activity coefficients and relative partial molal properties. Am J Sci 274:1199–1261
Kaszuba JP, Janecky DR, Snow MG (2005) Experimental evaluation of mixed fluid reactions between supercritical carbon dioxide and NaCl brine: relevance to the integrity of a geologic carbon repository. Chem Geol 217:277–293
Krejčí O, Kociánová L, Krejčí V, Paleček M, Krejčí Z (2015) Structural maps and cross-sections (Package V1.5). REPP-CO2 Report, NF-CZ08-OV-1-006-2015
Labus K (2012) Phenomena at interface of saline aquifer and claystone caprock under conditions of CO2 storage. Ann Soc Geol Pol 82(3):255–262
Labus K, Bujok P (2011) CO2 mineral sequestration mechanisms and capacity of saline aquifers of the Upper Silesian Coal Basin (Central Europe)–Modeling and experimental verification. Energy 36:4974–4982
Labus K, Tarkowski R, Wdowin M (2010) Assessment of CO2 sequestration capacity based on hydrogeochemical model of Water-Rock-Gas interactions in the potential storage site within the Bełchatów area (Poland). Miner Resour Manag XXVI(2):69–84
Labus K, Bujok P, Leśniak G, Klempa M (2011) Study of reactions in water-rock-gas system for the purpose of CO2 aquifer sequestration (in Polish; English abstract). Wydawnictwo Politechniki Śląskiej, Gliwice
Lasaga AC (1984) Chemical kinetics of water-rock interactions. J Geophys Res 89:4009–4025
Lin H, Fujii T, Takisawa R, Takahashi T, Hashida T (2008) Experimental evaluation of interactions in supercritical CO2/water/rock minerals system under geologic CO2 sequestration conditions. J Mater Sci 43:2307–2315
Ostroff AG (1964) Conversion of gypsum to anhydrite in aqueous salt solutions. Geochim Cosmochim Acta 23:1363–1372
Palandri JL, Kharaka YK (2004) A compilation of rate parameters of water-mineral interaction kinetics for application to geochemical modeling. US Geological Survey. Open File Report 2004-1068: 1–64
Perkins EH, Gunter WD (1995) Aquifer disposal of CO2-rich greenhouse gases: modelling of water-rock reaction paths in a siliciclastic aquifer. In: Kharaka YK, Chudaev OV (eds) Proceedings of the 8th international symposium on water-rock interaction. Balkema, Rotterdam, pp 895–898
Picha FJ, Peters KE (1998) Biomarker oil-to-source rock correlation in the Western Carpathians and their foreland, Czech Republic. Petroleum Geoscience 4:289–302
Rosenbauer RJ, Koksalan T, Palandri JL (2005) Experimental investigation of CO2–brine–rock interactions at elevated temperature and pressure: implications for CO2 sequestration in deep-saline aquifers. Fuel Process Technol 86:1581–1597
White SP, Allis RG, Moore J, Chidsey T, Morgan C, Gwynn W, Adams M (2005) Simulation of reactive transport of injected CO2 on the Colorado Plateau, Utah, USA. Chem Geol 217:387–405
Xu T, Apps JA, Pruess K (2003) Reactive geochemical transport simulation to study mineral trapping for CO2 disposal in deep arenaceous formations. J Geophys Res 108(B2):2071–2084
Xu T, Apps JA, Pruess K (2005) Mineral sequestration of carbon dioxide in a sandstone–shale system. Chem Geol 217:295–318
Zhang F, Xu H, Konishi H, Kemp JM, Roden EE, Shen Z (2012) Dissolved sulfide-catalyzed precipitation of disordered dolomite: implications for the formation mechanism of sedimentary dolomite. Geochim Cosmochim Acta 97:148–165
This work was supported by Polish Ministry of Science and Higher Education (Grant N N525 363137) and the National Programme for Sustainability I (2013–2020) financed by the state budget of the Czech Republic–Institute of Clean Technologies for Mining and Utilization of Raw Materials for Energy Use, identification code: LO1406.
Silesian University of Technology, 2 Akademicka St., 44-100, Gliwice, Poland
K. Labus
VŠB-Technical University of Ostrava, 17 listopadu St., Poruba, Ostrava, Czech Republic
P. Bujok, M. Klempa, M. Porzer & D. Matýsek
P. Bujok
M. Klempa
M. Porzer
D. Matýsek
Correspondence to K. Labus.
Labus, K., Bujok, P., Klempa, M. et al. Preliminary geochemical modeling of water–rock–gas interactions controlling CO2 storage in the Badenian Aquifer within Czech Part of Vienna Basin. Environ Earth Sci 75, 1086 (2016). https://doi.org/10.1007/s12665-016-5879-8
Water–rock–gas interactions
Geochemical modeling
CO2 sequestration
CO2 trapping capacity
Vienna basin | CommonCrawl |
BioData Mining
A new pipeline for structural characterization and classification of RNA-Seq microbiome data
Sebastian Racedo ORCID: orcid.org/0000-0002-9927-41611,
Ivan Portnoy ORCID: orcid.org/0000-0002-7334-75961,2,
Jorge I. Vélez ORCID: orcid.org/0000-0002-3146-78991,
Homero San-Juan-Vergara ORCID: orcid.org/0000-0002-3808-46471,
Marco Sanjuan ORCID: orcid.org/0000-0001-7435-07811 &
Eduardo Zurek ORCID: orcid.org/0000-0002-9816-68631
BioData Mining volume 14, Article number: 31 (2021)
High-throughput sequencing enables the analysis of the composition of numerous biological systems, such as microbial communities. The identification of dependencies within these systems requires the analysis and assimilation of the underlying interaction patterns between all the variables that make up that system. However, this task poses a challenge when considering the compositional nature of the data coming from DNA-sequencing experiments because traditional interaction metrics (e.g., correlation) produce unreliable results when analyzing relative fractions instead of absolute abundances. The compositionality-associated challenges extend to the classification task, as it usually involves the characterization of the interactions between the principal descriptive variables of the datasets. The classification of new samples/patients into binary categories corresponding to dissimilar biological settings or phenotypes (e.g., control and cases) could help researchers in the development of treatments/drugs.
Here, we develop and exemplify a new approach, applicable to compositional data, for the classification of new samples into two groups with different biological settings. We propose a new metric to characterize and quantify the overall correlation structure deviation between these groups and a technique for dimensionality reduction to facilitate graphical representation. We conduct simulation experiments with synthetic data to assess the proposed method's classification accuracy. Moreover, we illustrate the performance of the proposed approach using Operational Taxonomic Unit (OTU) count tables obtained through 16S rRNA gene sequencing data from two microbiota experiments. We also compare our method's performance with that of two state-of-the-art methods.
Simulation experiments show that our method achieves a classification accuracy equal to or greater than 98% when using synthetic data. Finally, our method outperforms the other classification methods with real datasets from gene sequencing experiments.
Microorganisms living inside and on humans are known as the microbiota. When integrated with the information from their genes, they are known as the microbiome. The Human Microbiome Project (HMP) was an endeavor to characterize the human microbiota and to further the understanding of its impact on human health and disease [1].
In recent years, biological sciences have experienced substantial technological advances that have led to the rediscovery of systems biology [2,3,4]. These advances were possible thanks to the technological ability to completely sequence the genome from any organism at a low cost [5, 6]. Such advances triggered the development of various analytic approaches and technologies to simultaneously monitoring all the components within cells (e.g., genes and proteins). With the genome information and analytic technologies, the mining and exploration of the resulting data opened up the possibility to better understand biological systems, such as microbial populations, and their complexity. The network structure of such biological systems can give insight into the underlying interactions taking place within those systems [7,8,9,10]. Furthermore, the understanding of these interactions can lead to the discovery of new methods that can help physicians, biologists, scientists, and healthcare workers with disease diagnosis, gene identification, classification of new data, and many other tasks [11].
We initially conducted a literature search in different medical, biological, and engineering databases, as well as academic sites and prestigious journals such as BMC Bioinformatics, PLOS ONE, ScienceDirect, and IEEE Xplore, using the queries "correlation structure for gene expression classifications," "classifiers for compositional data," and "classifiers based on correlation structures," in order to identify papers in English describing procedures for sample classification based on correlation structures in the 2009–2019 time window. Figure 1 shows the evolution of the number of publications retrieved when the keywords "correlation structure for gene expression classifications" are used. Publications were retrieved from several academic sites, namely BMC Bioinformatics, PLOS ONE, ScienceDirect, and Scopus. Figure 2 summarizes the current principal stages of gene expression analysis for sample classification.
Evolution of the number of publications per year from 2009 to 2019
Scheme of gene analysis used for sample classification
Operational Taxonomic Unit (OTU) count tables are the usual output when processing the 16S rRNA sequences of microbiota samples [12]. These tables show the relative abundances of the bacteria that make up a microbiota population (e.g., the human gut microbiota). OTU-based data have a compositional nature, which makes them difficult to work with [13, 14]. Thus, data transformation is required prior to any further analysis.
Aitchison [15] proposed two transformations to compensate for the data's compositionality, thus allowing the use of standard metrics in further analysis. The first transformation is the additive log-ratio (alr), which is defined as:
$$ alr\left(\boldsymbol{x}\right)=\left(\mathit{\ln}\frac{x_1}{x_j},\dots, \mathit{\ln}\frac{x_{j-1}}{x_j},\mathit{\ln}\frac{x_{j+1}}{x_j},\dots, \mathit{\ln}\frac{x_n}{x_j}\ \right) $$
where xj is an element of {x1, x2, x3…, xn}. Because one value xj is selected as the denominator to build the log-ratios, the alr has been criticized as being subjective since the outcome depends mostly on the value of xj selected [15,16,17,18].
The second transformation proposed by Aitchison is the centered log-ratio (clr), which is defined as:
$$ clr\left(\boldsymbol{x}\right)=\left[\mathit{\ln}\ \frac{x_1}{g\left(\boldsymbol{x}\right)},\mathit{\ln}\ \frac{x_2}{g\left(\boldsymbol{x}\right)},\dots, \mathit{\ln}\ \frac{x_n}{g\left(\boldsymbol{x}\right)}\ \right] $$
where \( g\left(\boldsymbol{x}\right)={\left({\prod}_{i=1}^n{x}_i\right)}^{\frac{1}{n}} \) is the geometric mean. The use of g(x) avoids the subjectivity of the alr transformation since the method takes into account all the information in x [15,16,17,18,19]. The clr transformation has proven to be reliable and has been extensively used in the scientific literature over the years to analyze microbiome data.
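As a minimal illustration, the clr transformation above can be written in a few lines of NumPy; the sketch assumes the composition is strictly positive (i.e., zeros have already been replaced), and the function name is ours.

```python
import numpy as np

def clr(x):
    """Centered log-ratio transform of a strictly positive composition x."""
    log_x = np.log(np.asarray(x, dtype=float))
    return log_x - log_x.mean()      # identical to log(x / geometric_mean(x))

# Example: a 4-part composition (e.g., relative abundances)
print(clr([10.0, 20.0, 30.0, 40.0]))  # the clr components sum to ~0
```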
In [20], the authors proposed a transformation called the isometric log-ratio (ilr) transformation. This approach takes any compositional data x ∈ SN and computes ilr(x) = z = [z1, z2, …, zN − 1], where zi is calculated as:
$$ {z}_i=\sqrt{\frac{N-i}{N-i+1}}\;\mathit{\ln}\left(\frac{x_i}{\sqrt[N-i]{\prod_{j=i+1}^N{x}_j}}\right),\kern0.75em i=1,\dots,N-1. $$
However, implementing the ilr transformation poses serious practical difficulties for high-dimension data as the computational complexity increases rapidly with dimensionality [21].
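For completeness, a sketch of the pivot-coordinate ilr given above is shown next; it returns the N − 1 well-defined coordinates, and the function name is ours.

```python
import numpy as np

def ilr_pivot(x):
    """Isometric log-ratio (pivot coordinates) of a strictly positive composition:
    z_i = sqrt((N-i)/(N-i+1)) * ln(x_i / gmean(x_{i+1}, ..., x_N))."""
    x = np.asarray(x, dtype=float)
    N = x.size
    z = np.empty(N - 1)
    for i in range(N - 1):                      # 0-based i corresponds to i+1 above
        tail = x[i + 1:]
        gmean_tail = np.exp(np.log(tail).mean())
        scale = np.sqrt((N - (i + 1)) / (N - i))
        z[i] = scale * np.log(x[i] / gmean_tail)
    return z

print(ilr_pivot([10.0, 20.0, 30.0, 40.0]))
```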
After transforming the data, the next step is to separate the data into training, test, and validation sets, although in some cases only the training and test sets are considered. One of the most common problems at this step is the limited number of available samples. Indeed, for a standard classifier employing multivariate metric techniques, the sample size required for optimal training is on the order of thousands. This is known as the "curse of dimensionality" problem, and the usual way to overcome this limitation is to use a dimensionality reduction technique that collapses all the attributes (variables) into a lower-dimensional space where the most dominant information of the dataset can be retrieved [13, 22].
Feature selection methods are usually separated into three categories: filter, wrapper, and embedded. Table 1 summarizes different approaches for feature selection in gene expression data, the most relevant categories for feature selection, and the current weaknesses when analyzing gene expression data. Filter methods can work with univariate and multivariate data, where univariate methods focus on each feature separately and multivariate methods focus on finding relationships between features [23, 24]. Here we only consider multivariate methods.
Table 1 Summary of feature selection approaches in gene expression analysis
The abovementioned filter methods tend to be computationally efficient. Wrapper methods, on the other hand, tend to have a better performance in selecting features since they take a model hypothesis into account, meaning that a training and testing procedure is made in the feature space. However, this approach is computationally inefficient and is more problematic as the feature space grows [23, 26, 29, 30]. Embedded methods make the feature selection based on the classifier (i.e., selected features might not work with any other classifier) and hence tend to have a better computational performance than wrappers. This is the case because the optimal set of descriptors is built when the classifier is constructed and the feature selection is affected by the hypotheses made by the classifier [23, 26, 29,30,31].
In [14], the authors presented SParse InversE Covariance Estimation for Ecological ASsociation Inference (SPIEC-EASI), a novel strategy to infer networks from high-dimensional community compositional data. SPIEC-EASI estimates the interaction graph from the transformed data using either recursive feature selection or sparse inverse covariance selection and seeks to infer an underlying graphical model using conditional independence. In [32], the authors proposed a modification of the Support Vector Machine–Recursive Feature Elimination (SVM-RFE) algorithm for feature selection. SVM-RFE removes one irrelevant feature at each iteration, which can be troublesome when the number of features is large. Its modification, namely Correlation-based Support Vector Machine–Recursive Multiple Feature Elimination (CSVM-RMFE), finds the correlated features and removes more than one irrelevant feature per iteration. Rao and S. Lakshminarayanan [13] presented a new significant-attribute selection method based on the Partial Correlation Coefficient Matrix (PCCM).
The final step after finding the most relevant features of the transformed data is to select a classifier. In clinical and bioinformatic research, prediction models are extensively used to derive classification rules useful to accurately predict whether a patient has or would develop a disease, whether the treatment is going to work, or even whether a disease would recur [33,34,35]. Table 2 summarizes the relevant aspects of some widely used classifiers.
Table 2 Summary of classifiers used in gene expression analysis
Depending on the data, a classifier can belong to one of two groups: supervised or unsupervised [36]. In supervised classification (learning), samples are labeled according to some a priori-defined classes or categories, whereas in unsupervised learning, samples are not labeled, and the classifier clusters the data into different classes or categories after maximizing or minimizing a set of criteria.
Dembélé and Kastner [37] presented a new Fold Change method that can detect differentially expressed genes in microarray data. The traditional fold change method works by calculating the ratio between the averages from the samples (usually two different biological conditions, e.g., control and case samples). Then, cutoff values (e.g., 0.5 for down- and 2 for up-regulated) are used to select genes under/above such thresholds. This new approach is more accurate and faster than the traditional method and can assign a metric to each differentially expressed gene, which can be used as a selection criterion.
Belciug and F. Gorunescu [43] proposed a novel initialization of a single hidden layer feedforward neural network's input weights using the knowledge embedded in the connections between variables and class labels. The authors expressed this by the non-parametric Goodman-Kruskal Gamma rank correlation instead of the traditional random initialization. The use of this correlation also helped to increase computational speed by eliminating unnecessary features based on the significance of the rank correlation between variables and class labels.
In [42], authors proposed a framework to find information about genes and to classify gene combinations belonging to its relevant subtype using fuzzy logic, which adapts numerical data (input/output pairs) into human linguistic terms, offering good capabilities to deal with noisy and missing data. However, defining the rules and membership functions might require a lot of prior knowledge from a human expert [41]. Dettling and P. Bühlmann [44] proposed a boosting method combining a dimensionality reduction step with the LogitBoost algorithm [45] and compared it to AdaBoost.M1 [46], the nearest neighbor classifier [47], and classification and regression trees (CART) using gene expression data [48]. Dettling and P. Bühlmann showed that, for low dimensional data, LogitBoost can perform slightly better than AdaBoost.M1, and that for real high dimensional data, their approach can outperform the other classifiers in some cases.
In this paper, we present a new method to classify samples into two groups with different characteristics (i.e., phenotypes, health condition, among others) when data of compositional nature is available. Our method relies on a new metric to quantitatively characterize the overall correlation structure deviation when comparing the two datasets and a new dimensionality reduction approach. The proposed method is assessed and compared, based on classification accuracy, to two state-of-the-art methods using both synthetic datasets and real datasets from RNA-16s sequencing experiments.
Proposed classification method
Here, we explain in detail the proposed classification method. First, in section "Data pretreatment", we introduce the Data Pretreatment stage, and in section "Assessing correlation structure distortion", a novel metric to be used as the metric to assess correlation structure distortion is described. Finally, in section "Dimensionality reduction technique", we present the proposed classification rule, which is based on the previously defined metric and a proposed dimensionality-reduction approach to assess the disruption of a dataset's correlation structure after a new sample is included.
Data pretreatment
Let \( {X}_c^{\rho}\in {\mathbb{R}}^{n_c\times m} \) and \( {X}_v^{\rho}\in {\mathbb{R}}^{n_v\times m} \) be the OTU count tables where m features are assessed in nc and nv samples from control and case individuals, respectively. In the expressions above, the superscript ρ indicates that the datasets are 'raw', i.e., without pretreatment. From now on, \( {X}_g^{\rho } \) will represent either of the two groups (g = c for control, or g = v for case).
When analyzing OTU counts tables, a log-ratio transformation, such as the clr, is to be applied [15, 18, 19] before estimating correlations. However, in order to apply the log-ratio transformation, it is necessary to consider that compositional count datasets may contain null values resulting from insufficiently large or non-existing samples. As log-ratio transformations require data with exclusively positive values, the use of a zero-replacement method is a must. Here we use the Bayesian-multiplicative (BM) algorithm proposed by Martín-Fernández [49]. Let \( {\boldsymbol{x}}_{p_i} \) ∈ℝ1 × m be the i-th row of the matrix \( {X}_g^{\rho } \) (i = 1, 2, …, ng). The BM algorithm replaces the null counts by
$$ BM\left({x}_{p_{i,j}}\right)=\left\{\begin{array}{c}{t}_{i,j}\left(\frac{s_i}{n+{s}_i}\right),\kern1em \mathrm{if}\kern0.5em {x}_{p_{i,j}}=0\\ {}{x}_{p_{i,j}}\left(1-\sum \limits_{\forall k\mid {x}_{p_{i,j}}=0}{t}_{i,k}\left(\frac{s_i}{n+{s}_i}\right)\right),\kern0.75em \mathrm{if}\kern0.5em {x}_{p_{i,j}}\ne 0\end{array}\right. $$
When using the Bayes-Laplace prior, we set \( n=\sum \limits_{j=1}^m{x}_{p_{i,j}} \), ti, j = m−1 and si = m. Let \( {X}_g^{BM}:= BM\left({X}_g^{\rho}\right) \) be the resulting matrix after the BM algorithm is applied row-wise to \( {X}_g^{\rho } \).
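A simplified, row-wise reading of the replacement rule above with the Bayes-Laplace prior (t_{i,j} = 1/m, s_i = m) could look as follows; the function name and the toy count table are ours, and this sketch is not meant to replace a reference implementation of the Bayesian-multiplicative method.

```python
import numpy as np

def bm_zero_replacement(row):
    """Bayes-Laplace multiplicative replacement of zero counts in one sample (row)."""
    row = np.asarray(row, dtype=float)
    m = row.size
    n = row.sum()                     # total count of the sample
    repl = (1.0 / m) * m / (n + m)    # t_ij * s_i / (n + s_i) with t = 1/m, s = m
    zeros = row == 0
    return np.where(zeros, repl, row * (1.0 - repl * zeros.sum()))

counts = np.array([[0, 5, 12, 0, 3],
                   [7, 0, 1, 9, 2]])
X_bm = np.apply_along_axis(bm_zero_replacement, 1, counts)
print(X_bm)
```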
To ensure the data's compositionality on \( {X}_g^{BM} \), a closure operation [15, 18, 19] is applied to every row of \( {X}_g^{BM} \), as follows:
$$ c\left({\boldsymbol{x}}_{p_i}^{BM}\right)=\frac{k}{\sum \limits_{j=1}^m{x}_{p_{i,j}}^{BM}}{\boldsymbol{x}}_{p_i}^{BM} $$
where k is an arbitrary constant (usually k = 100). Let \( {X}_g^{BM,c}:= c\left( BM\left({X}_g^{\rho}\right)\right) \) be the resulting matrix after the BM algorithm and the closure operation have been applied. Now, the clr transformation is applied to each row vector xp ∈ ℝ1 × m of \( {X}_g^{BM,c} \), as
$$ clr\left({\boldsymbol{x}}_p\right)=\left[\ln \frac{x_1}{g\left({\boldsymbol{x}}_p\right)},\ln \frac{x_2}{g\left({\boldsymbol{x}}_p\right)},\dots, \ln \frac{x_m}{g\left({\boldsymbol{x}}_p\right)}\right] $$
where \( g\left({\boldsymbol{x}}_p\right)={\left({\prod}_{i=1}^m{x}_i\right)}^{\frac{1}{m}} \) is the geometric mean. Hence,
$$ {X}_g= clr\left(c\left( BM\left({X}_g^{\rho}\right)\right)\right) $$
Finally, a normalization is applied, resulting in:
$$ {X}_{g_{norm}}=\left({X}_g-{I}_{n_g}{b}_g^T\right){\varSigma}_g^{-1} $$
where \( {I}_{n_g}={\left[1\ 1\dots 1\right]}^T\in {\mathbb{R}}^{n_g\times 1} \) is a column vector of ones, \( {b}_g\in {\mathbb{R}}^{m\times 1} \) is a column vector that contains the means of all the variables in Xg, and Σg ∈ ℝm × m is a diagonal matrix that contains the standard deviations (\( {\sigma}_{g_i} \), for i = 1, …, m) of all the variables.
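Putting Eqs. (5)–(8) together, a sketch of the pretreatment chain (closure, clr and column-wise standardization) is shown below; it assumes the zero replacement has already been applied, so the input table is strictly positive, and all function names and the toy data are ours. Anticipating the next subsection, the Pearson correlation matrix of the normalized data is also computed.

```python
import numpy as np

def closure(X, k=100.0):
    """Closure: rescale each row (composition) so that its parts sum to k."""
    X = np.asarray(X, dtype=float)
    return k * X / X.sum(axis=1, keepdims=True)

def clr_rows(X):
    """Row-wise centered log-ratio transform."""
    L = np.log(np.asarray(X, dtype=float))
    return L - L.mean(axis=1, keepdims=True)

def pretreat(X_positive):
    """Closure -> clr -> column-wise standardization of a zero-free count table."""
    X_g = clr_rows(closure(X_positive))
    b_g = X_g.mean(axis=0)              # column means (b_g)
    sigma_g = X_g.std(axis=0, ddof=1)   # column standard deviations (Sigma_g)
    return (X_g - b_g) / sigma_g

rng = np.random.default_rng(0)
X_pos = rng.integers(1, 50, size=(20, 6)).astype(float)  # toy zero-free OTU table
X_norm = pretreat(X_pos)
S = X_norm.T @ X_norm / (X_norm.shape[0] - 1)            # Pearson correlation matrix
print(np.round(S, 2))
```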
Assessing correlation structure distortion
Here, we introduce φ, a new metric to quantitatively assess the distortion in the correlation structure of a dataset after the incorporation of a new sample. The Pearson correlation matrix for Xg is calculated as follows [50]:
$$ {S}_g=\frac{1}{n_g-1}\ {X}_{g_{norm}}^T\ {X}_{g_{norm}} $$
Now, consider a new sample, xp ∈ ℝ1 × m. The pretreatment step for this sample yields:
$$ {\boldsymbol{x}}_p= clr\left(c\left( BM\left({\boldsymbol{x}}_p\right)\right)\right) $$
Let \( {\overset{\sim }{X}}_g\in {\mathbb{R}}^{\left({n}_g+1\right)\times m} \) be the (augmented) dataset Xg after incorporating the new sample, and let Sg and \( {\overset{\sim }{S}}_g \) be the correlation matrices for Xg and \( {\overset{\sim }{X}}_g \), respectively. The spectral decomposition of these matrices is
$$ {S}_g={V}_g{\Lambda}_g{V}_g^T,\kern3.75em {\overset{\sim }{S}}_g={\overset{\sim }{V}}_g{\overset{\sim }{\Lambda}}_g{\overset{\sim }{V}}_g^T $$
$$ {\Lambda}_g=\left[\begin{array}{ccc}{\lambda}_{g_1}& & \\ {}& \ddots & \\ {}& & {\lambda}_{g_m}\end{array}\right]\in {\mathbb{R}}^{m\times m},\kern0.5em {\overset{\sim }{\Lambda}}_g=\left[\begin{array}{ccc}{\overset{\sim }{\lambda}}_{g_1}& & \\ {}& \ddots & \\ {}& & {\overset{\sim }{\lambda}}_{g_m}\end{array}\right]\in {\mathbb{R}}^{m\times m} $$
are diagonal matrices containing the eigenvalues for Sg and \( {\overset{\sim }{S}}_g \). Let \( {V}_g=\left[{\boldsymbol{v}}_{g_1}\kern0.5em {\boldsymbol{v}}_{g_2}\kern0.5em \begin{array}{cc}\cdots & {\boldsymbol{v}}_{g_m}\end{array}\right]\in {\mathbb{R}}^{m\times m} \) and \( {\overset{\sim }{V}}_g=\left[\begin{array}{cc}{\overset{\sim }{\boldsymbol{v}}}_{g_1}& {\overset{\sim }{\boldsymbol{v}}}_{g_2}\end{array}\kern0.5em \begin{array}{cc}\cdots & {\overset{\sim }{\boldsymbol{v}}}_{g_m}\end{array}\right]\in {\mathbb{R}}^{m\times m} \) be the eigenvector matrices of Sg and \( {\overset{\sim }{S}}_g \). Figure 3a illustrates, in a 2-dimensional example, the datasets Xg and \( {\overset{\sim }{X}}_g \). Figure 3b illustrates the datasets after carrying out the pre-treatment, along with their eigenvectors (which are unitary) scaled by their corresponding eigenvalues obtained from the spectral decompositions. Note that scaled eigenvectors mark out the directions of largest variability, capturing high order interactions between the OTUs ruling the overall association structure. Therefore, looking at deviations in both the magnitude and direction of those scaled eigenvectors must give insightful information on overall changes in the association structure of a microbiota population.
Bidimensional representation of datasets \( {\overset{\sim }{X}}_g \) and Xg a without pretreatment, and b after the pretreatment along with the eigenvectors scaled by the corresponding eigenvalues
Based on the abovementioned remarks, we introduce φ to characterize the distortion produced in the underlying correlation structure when two OTU counts datasets are compared. This metric first requires a dimensional reduction, which will be performed by selecting the principal components for each sample group. This procedure, integrated within the Principal Component Analysis (PCA) algorithm [25], consists of finding the minimum number of eigenvalues ag or \( \tilde{a}_{g} \) (for Xg and \( {\overset{\sim }{X}}_g \), respectively) that explain 100(1 − α)% of the total variance, i.e.:
$$ \frac{\sum \limits_{i=1}^{a_g}{\lambda}_{g_i}}{\sum \limits_{i=1}^m{\lambda}_{g_i}}\ge \left(1-\alpha \right),\kern3.75em \frac{\sum \limits_{i=1}^{\tilde{a}_{g}}{\overset{\sim }{\lambda}}_{g_i}}{\sum \limits_{i=1}^m{\overset{\sim }{\lambda}}_{g_i}}\ge \left(1-\alpha \right) $$
Thus, φ is defined as
$$ \varphi =\sum \limits_{j=1}^{\max \left({a}_g,\tilde{a}_{g}\right)}\left[\max \left\{{\lambda}_{g_j},{\overset{\sim }{\lambda}}_{g_j}\right\}\ \left({\lambda}_{g_j}-{\overset{\sim }{\lambda}}_{g_j}\right)\ {\cos}^{-1}\left({\boldsymbol{v}}_{g_j}^T{\overset{\sim }{\boldsymbol{v}}}_{g_j}\right)\right] $$
where \( \left({\lambda}_{g_j}-{\overset{\sim }{\lambda}}_{g_j}\right) \) is the algebraic difference (magnitude deviation) of the j-th eigenvalues in Λg and \( {\overset{\sim }{\Lambda}}_g \), \( {\cos}^{-1}\left({\boldsymbol{v}}_{g_j}^T{\overset{\sim }{\boldsymbol{v}}}_{g_j}\right) \) computes angular deviation between the j-th eigenvectors in Vg and \( {\overset{\sim }{V}}_g \), and \( \max \left\{{\lambda}_{g_j},{\overset{\sim }{\lambda}}_{g_j}\right\} \) provides a weighting factor so that the contribution of the j-th deviation to the index φ is proportional to the relative importance among principal components.
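A sketch of how we read Eqs. (11)–(14) is given below. Eigenpairs are obtained from a spectral decomposition of each correlation matrix and sorted by decreasing eigenvalue, and the number of retained components is the smallest one explaining 100(1 − α)% of the variance. The absolute value of the eigenvector inner product is taken to remove the arbitrary sign of numerically computed eigenvectors; this is an implementation choice of ours rather than something stated in the text, as are the function names and the toy data.

```python
import numpy as np

def n_components(eigvals, alpha=0.05):
    """Smallest number of leading eigenvalues explaining 100*(1-alpha)% of the variance."""
    frac = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(frac, 1.0 - alpha) + 1)

def phi(S, S_tilde, alpha=0.05):
    """Correlation-structure distortion index between correlation matrices S and S_tilde."""
    lam, V = np.linalg.eigh(S)
    lam_t, V_t = np.linalg.eigh(S_tilde)
    lam, V = lam[::-1], V[:, ::-1]            # sort eigenpairs by decreasing eigenvalue
    lam_t, V_t = lam_t[::-1], V_t[:, ::-1]
    a = max(n_components(lam, alpha), n_components(lam_t, alpha))
    total = 0.0
    for j in range(a):
        cos_angle = np.clip(abs(V[:, j] @ V_t[:, j]), 0.0, 1.0)  # sign-invariant angle
        total += max(lam[j], lam_t[j]) * (lam[j] - lam_t[j]) * np.arccos(cos_angle)
    return total

# Toy usage: two correlation matrices from independent random tables
rng = np.random.default_rng(1)
A = np.corrcoef(rng.normal(size=(30, 6)), rowvar=False)
B = np.corrcoef(rng.normal(size=(30, 6)), rowvar=False)
print(phi(A, B))
```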
Dimensionality reduction technique
Now that we have a metric to measure the distortion caused in the correlation structure of the g group after the incorporation of a new sample, we could then infer to which group the new sample would belong, providing a classification criterion based on how distorted the correlation structure is when incorporating xp. The intuitive way of approaching the evaluation of the distortion would be to integrate xp into Xg and (re)calculate the correlation matrix for the further evaluation of its distortion. However, considering that the g group may contain many samples, a single new sample may not be enough to generate a significant distortion in the correlation structure. Furthermore, if the number of samples in the groups is unbalanced, the distortion caused by the inclusion of a new sample may not be comparable.
An approach to overcome this dimensional problem is to randomly subsample a small number of rows in Xg, combining them with xp, and then calculating the distortion caused. This approach, however, would not include a considerable amount of information, which is contained in the rows that were left out. To address this issue, we propose a new dimensionality reduction approach that allows a weighted assessment of the distortion in Sg caused by the integration of a new sample xp. This approach will use all the information contained in the original data, with the objective of providing a classification algorithm for any upcoming sample.
The first step of the proposed approach is to find an expression for the distorted correlation matrix that reveals the natural weights of the contributions of Xg and xp to the make-up of the new correlation structure. Suppose that the data is concatenated as:
$$ {\overset{\sim }{X}}_g=\left[\begin{array}{c}{X}_g\\ {}{\boldsymbol{x}}_p\end{array}\right]\in {\mathbb{R}}^{\tilde{n}_{g}\times m} $$
where \( \tilde{n}_{g}={n}_g+1 \) is the number of rows of \( {\overset{\sim }{X}}_g \). Combining Eqs. (15) and (8) yields
$$ {\overset{\sim }{X}}_g=\left[\begin{array}{c}{X}_{g_{norm}}{\Sigma}_g+{I}_{n_g}{b}_g^T\\ {}{\boldsymbol{x}}_p\end{array}\right] $$
Normalizing \( {\overset{\sim }{X}}_g \) produces
$$ {\overset{\sim }{X}}_{g_{norm}}=\left({\overset{\sim }{X}}_g-{I}_{\tilde{n}_{g}}\tilde{b}_{g}^T\right){\overset{\sim }{\Sigma}}_g^{-1}=\left[\begin{array}{c}\left({X}_{g_{norm}}{\Sigma}_g-{I}_{n_g}{\Delta b}_g^T\right){\overset{\sim }{\Sigma}}_g^{-1}\\ {}{\boldsymbol{x}}_{p_{norm}}\end{array}\right] $$
where \( \tilde{b}_{g} \) is the vector that contains the means of \( {\overset{\sim }{X}}_g \), \( {\overset{\sim }{\Sigma}}_g \) is a diagonal matrix that contains the distorted standard deviations, \( \Delta {b}_g:= \tilde{b}_{g}-{b}_g \) is the distortion in the mean vector, and \( {\boldsymbol{x}}_{p_{norm}}=\left({\boldsymbol{x}}_p-\tilde{b}_{g}^T\right){\overset{\sim }{\Sigma}}_g^{-1} \). Both \( \tilde{b}_{g} \) and \( {\overset{\sim }{\Sigma}}_g \) are unknown. Thus, we need to derive expressions for them. The distorted means vector is calculated as \( \tilde{b}_{g}=\frac{1}{\tilde{n}_{g}}{{\overset{\sim }{X}}_g}^T{I}_{\tilde{n}_{g}} \), which can be converted into:
$$ \tilde{b}_{g}=\frac{n_g}{n_g+1}{b}_g+\frac{1}{n_g+1}{\boldsymbol{x}}_p^T $$
Equation (18) shows that the natural weights are \( {w}_1=\frac{n_g}{n_g+1} \) and \( {w}_2=\frac{1}{n_g+1} \) for bg and xp, respectively. To find an expression for the diagonal matrix of distorted standard deviations, \( {\overset{\sim }{\Sigma}}_g \), a column-wise subtraction of the mean vector for \( {\overset{\sim }{X}}_g \) is performed:
$$ {\overset{\sim }{X}}_{g_{mean- centered}}= {\overset{\sim}{X}}_g-{I}_{\tilde{n}_{g}}\tilde{b}_{g}^T=\left[\begin{array}{c}{X}_g-{I}_{n_g}\tilde{b}_{g}^T\\ {}{\boldsymbol{x}}_p-\tilde{b}_{g}^T\end{array}\right] $$
Adding and subtracting \( {I}_{n_g}{b}_g^T \) to \( {X}_g-{I}_{n_g}\tilde{b}_{g}^T \) in Eq. (19) yields:
$$ {\overset{\sim }{X}}_{g_{mean- centered}}=\left[\begin{array}{c}\left({X}_g-{I}_{n_g}{b}_g^T\right)-{I}_{n_g}{\Delta b}_g^T\\ {}{\boldsymbol{x}}_p-\tilde{b}_{g}^T\end{array}\right] $$
$$ {\overset{\sim }{X}}_{g_{mean- centered}}\left(:,i\right)=\left[\begin{array}{c}\left({X}_g\left(:,i\right)-{b}_g(i){I}_{n_g}\right)-\Delta {b}_g(i){I}_{n_g}\\ {}{\boldsymbol{x}}_p(i)-\tilde{b}_{g}(i)\end{array}\right] $$
where \( {\overset{\sim }{X}}_{g_{mean- centered}}\left(:,i\right) \) is the i-th column of \( {\overset{\sim }{X}}_{g_{mean- centered}} \), i.e., the corresponding i-th variable. Then, the variance of this i-th variable will be \( {\overset{\sim }{\sigma}}_{g_i}^2=\frac{1}{\tilde{n}_{g}-1}{\left({\overset{\sim }{X}}_{g_{mean- centered}}\left(:,i\right)\right)}^T\ {\overset{\sim }{X}}_{g_{mean- centered}}\left(:,i\right) \), which can be written as:
$$ \left(\tilde{n}_{g}-1\right){\overset{\sim }{\sigma}}_{g_i}^2=\left[\left({X}_g^T\left(:,i\right)-{b}_g(i){I}_{n_g}^T\right)-\Delta {b}_g(i){I}_{n_g}^T\kern0.5em {\boldsymbol{x}}_p(i)-\tilde{b}_{g}(i)\right]\left[\begin{array}{c}\left({X}_g\left(:,i\right)-{b}_g(i){I}_{n_g}\right)-\Delta {b}_g(i){I}_{n_g}\\ {}{\boldsymbol{x}}_p(i)-\tilde{b}_{g}(i)\end{array}\right] $$
Equation (22) can be further expanded as:
$$ \left(\tilde{n}_{g}-1\right){\overset{\sim }{\sigma}}_{g_i}^2=\left({X}_g^T\left(:,i\right)-{b}_g(i){I}_{n_g}^T\right)\left({X}_g\left(:,i\right)-{b}_g(i){I}_{n_g}\right)-\left({X}_g^T\left(:,i\right)-{b}_g(i){I}_{n_g}^T\right)\Delta {b}_g(i){I}_{n_g}-\Delta {b}_g(i){I}_{n_g}^T\left({X}_g\left(:,i\right)-{b}_g(i){I}_{n_g}\right)+{\Delta b}_g^2(i){I}_{n_g}^T{I}_{n_g}+{\left({\boldsymbol{x}}_p(i)-\tilde{b}_{g}(i)\right)}^2 $$
Notice that, in this expression, the terms \( \left({X}_g^T\left(:,i\right)-{b}_g(i){I}_{n_g}^T\right)\left({X}_g\left(:,i\right)-{b}_g(i){I}_{n_g}\right)=\left({n}_g-1\right){\sigma}_{g_i}^2 \), \( {I}_{n_g}^T{I}_{n_g}={n}_g \), and \( \left({X}_g^T\left(:,i\right)-{b}_g(i){I}_{n_g}^T\right)\Delta {b}_g(i){I}_{n_g}=\Delta {b}_g(i){I}_{n_g}^T\left({X}_g\left(:,i\right)-{b}_g(i){I}_{n_g}\right) \). Then, Eq. (23) can be reduced to:
$$ \left(\tilde{n}_{g}-1\right){\overset{\sim }{\sigma}}_{g_i}^2=\left({n}_g-1\right){\sigma}_{g_i}^2-2\Delta {b}_g(i){I}_{n_g}^T\left({X}_g\left(:,i\right)-{b}_g(i){I}_{n_g}\right)+{n}_g{\Delta b}_g^2(i)+{\left({\boldsymbol{x}}_p(i)-\tilde{b}_{g}(i)\right)}^2 $$
Considering that \( \tilde{n}_{g}={n}_g+1 \) and \( {I}_{n_g}^T{X}_g\left(:,i\right)={I}_{n_g}^T\left({b}_g(i){I}_{n_g}\right)={n}_g{b}_g(i) \), it follows that
$$ {\overset{\sim }{\sigma}}_{g_i}=\sqrt{\frac{n_g-1}{n_g}{\sigma}_{g_i}^2+{\Delta b}_g^2(i)+\frac{1}{n_g}{\left({\boldsymbol{x}}_p(i)-\tilde{b}_{g}(i)\right)}^2} $$
From Eq. (25), notice that the (distorted) variances of the variables of the group \( {\overset{\sim }{X}}_g \) depend on: (1) the original variances in Xg, with natural weight \( \frac{n_g-1}{n_g} \); (2) the quadratic (mean centered) values of the new sample, \( {\left({\boldsymbol{x}}_p(i)-\tilde{b}_{g}(i)\right)}^2 \), with natural weight \( \frac{1}{n_g} \); and (3) the quadratic values of the distortion in the mean vector, \( {\Delta b}_g^2(i) \). Based on Eq. (25), the standard deviation matrix for all m variables is
$$ {\overset{\sim }{\Sigma}}_g=\left[\begin{array}{ccc}{\overset{\sim }{\sigma}}_{g_1}& & \\ {}& \ddots & \\ {}& & {\overset{\sim }{\sigma}}_{g_m}\end{array}\right] $$
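Equations (18), (25) and (26) only involve the group's summary statistics and the new sample, so they can be evaluated without revisiting the raw data matrix. A minimal NumPy sketch (vector shapes and names are illustrative, not the authors' implementation):

```python
import numpy as np

def distorted_moments(b_g, sigma_g, x_p, n_g):
    """Distorted mean vector, mean distortion and standard deviations after adding x_p.

    b_g, sigma_g and x_p are length-m arrays; n_g is the (possibly artificial) number
    of samples used to weight the update.  Implements Eqs. (18) and (25).
    """
    b_t = (n_g / (n_g + 1.0)) * b_g + (1.0 / (n_g + 1.0)) * x_p        # Eq. (18)
    db = b_t - b_g                                                      # distortion of the mean
    var_t = ((n_g - 1.0) / n_g) * sigma_g ** 2 + db ** 2 \
            + (x_p - b_t) ** 2 / n_g                                    # Eq. (25)
    return b_t, db, np.sqrt(var_t)                                      # sqrt gives the diagonal of Eq. (26)
```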
Having expressions for \( \tilde{b}_{g} \) and \( {\overset{\sim }{\Sigma}}_g \), it follows that the distorted correlation matrix is calculated as \( {\overset{\sim }{S}}_g=\frac{1}{\tilde{n}_{g}-1}{\overset{\sim }{X}}_{g_{norm}}^T{\overset{\sim }{X}}_{g_{norm}} \) . Combining \( {\overset{\sim }{S}}_g \) with Eq. (17) yields
$$ \left(\tilde{n}_{g}-1\right){\overset{\sim }{S}}_g=\left[{\overset{\sim }{\Sigma}}_g^{-1}\left({\Sigma}_g{X}_{g_{norm}}^T-{\Delta b}_g{I}_{n_g}^T\right)\kern0.5em {\boldsymbol{x}}_{p_{norm}}^T\right]\left[\begin{array}{c}\left({X}_{g_{norm}}{\Sigma}_g-{I}_{n_g}{\Delta b}_g^T\right){\overset{\sim }{\Sigma}}_g^{-1}\\ {}{\boldsymbol{x}}_{p_{norm}}\end{array}\right] $$
It follows that,
$$ \left(\tilde{n}_{g}-1\right){\overset{\sim }{S}}_g={\overset{\sim }{\Sigma}}_g^{-1}{\Sigma}_g{X}_{g_{norm}}^T{X}_{g_{norm}}{\Sigma}_g{\overset{\sim }{\Sigma}}_g^{-1}-{\overset{\sim }{\Sigma}}_g^{-1}{\Sigma}_g{X}_{g_{norm}}^T{I}_{n_g}{\Delta b}_g^T{\overset{\sim }{\Sigma}}_g^{-1}-{\overset{\sim }{\Sigma}}_g^{-1}{\Delta b}_g{I}_{n_g}^T{X}_{g_{norm}}{\Sigma}_g{\overset{\sim }{\Sigma}}_g^{-1}+{\overset{\sim }{\Sigma}}_g^{-1}{\Delta b}_g{I}_{n_g}^T{I}_{n_g}{\Delta b}_g^T{\overset{\sim }{\Sigma}}_g^{-1}+{\boldsymbol{x}}_{p_{norm}}^T{\boldsymbol{x}}_{p_{norm}} $$
As \( {X}_{g_{norm}}^T{X}_{g_{norm}}=\left({n}_g-1\right){S}_g \), \( {\Sigma}_g{X}_{g_{norm}}^T={X}_g^T-{b}_g{I}_{n_g}^T \), and \( {X}_{g_{norm}}{\Sigma}_g={X}_g-{I}_{n_g}{b}_g^T \), this expression can be rewritten as:
$$ \left(\tilde{n}_{g}-1\right){\overset{\sim }{S}}_g=\left({n}_g-1\right){\overset{\sim }{\Sigma}}_g^{-1}{\Sigma}_g{S}_g{\Sigma}_g{\overset{\sim }{\Sigma}}_g^{-1}-{\overset{\sim }{\Sigma}}_g^{-1}\left({X}_g^T-{b}_g{I}_{n_g}^T\right){I}_{n_g}{\Delta b}_g^T{\overset{\sim }{\Sigma}}_g^{-1}-{\overset{\sim }{\Sigma}}_g^{-1}{\Delta b}_g{I}_{n_g}^T\left({X}_g-{I}_{n_g}{b}_g^T\right){\overset{\sim }{\Sigma}}_g^{-1}+{n}_g{\overset{\sim }{\Sigma}}_g^{-1}{\Delta b}_g{\Delta b}_g^T{\overset{\sim }{\Sigma}}_g^{-1}+{\boldsymbol{x}}_{p_{norm}}^T{\boldsymbol{x}}_{p_{norm}} $$
Now, as \( {X}_g^T{I}_{n_g}={b}_g{I}_{n_g}^T{I}_{n_g}={I}_{n_g}^T{X}_g={I}_{n_g}^T{I}_{n_g}{b}_g^T={n}_g{b}_g \), the second and third terms of Eq. (29) disappear. Then, the distorted correlation matrix \( {\overset{\sim }{S}}_g \) is given by
$$ {\overset{\sim }{S}}_g=\frac{n_g-1}{n_g}{\overset{\sim }{\Sigma}}_g^{-1}{\Sigma}_g{S}_g{\Sigma}_g{\overset{\sim }{\Sigma}}_g^{-1}+{\overset{\sim }{\Sigma}}_g^{-1}{\Delta b}_g{\Delta b}_g^T{\overset{\sim }{\Sigma}}_g^{-1}+\frac{1}{n_g}{\boldsymbol{x}}_{p_{norm}}^T{\boldsymbol{x}}_{p_{norm}} $$
Note that, in this expression, \( {\overset{\sim }{S}}_g \) depends on three terms:
\( {\overset{\sim }{\Sigma}}_g^{-1}{\Sigma}_g{S}_g{\Sigma}_g{\overset{\sim }{\Sigma}}_g^{-1} \), which considers the contributions made from the non-distorted correlation matrix Sg after an actualization of the standard deviation, with a natural weight of \( \frac{n_g-1}{n_g} \).
\( {\boldsymbol{x}}_{p_{norm}}^T{\boldsymbol{x}}_{p_{norm}} \), which considers the contribution of the new sample to the constitution of the distorted correlation matrix, with a natural weight of \( \frac{1}{n_g} \).
\( {\overset{\sim }{\Sigma}}_g^{-1}{\Delta b}_g{\Delta b}_g^T{\overset{\sim }{\Sigma}}_g^{-1} \), which considers the effects of the distortion of Σg and bg in \( {\overset{\sim }{S}}_g \).
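Combining the three contributions above, the distorted correlation matrix of Eq. (30) can be assembled directly from Sg, Σg, bg and xp. A hedged NumPy sketch, reusing distorted_moments from the previous snippet (names are illustrative):

```python
def distorted_correlation(S_g, sigma_g, b_g, x_p, n_g):
    """Distorted correlation matrix of group g after incorporating the sample x_p (Eq. 30)."""
    b_t, db, sigma_t = distorted_moments(b_g, sigma_g, x_p, n_g)
    D = np.diag(sigma_g)                       # Sigma_g
    D_t_inv = np.diag(1.0 / sigma_t)           # inverse of the distorted Sigma
    x_norm = (x_p - b_t) / sigma_t             # normalized new sample, Eq. (17)
    return ((n_g - 1.0) / n_g) * (D_t_inv @ D @ S_g @ D @ D_t_inv) \
        + D_t_inv @ np.outer(db, db) @ D_t_inv \
        + np.outer(x_norm, x_norm) / n_g
```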
Finally, the distortion of the correlation matrix will be measured with the estimation of the deviation between Sg and \( {\overset{\sim }{S}}_g \), using the metric \( \varphi \left({S}_g,{\overset{\sim }{S}}_g\right) \) defined in Eq. (14). As previously mentioned, if the number of samples for the group g is large, the integration of xp will barely cause a distortion in the correlation structure, even if it has different features compared to the samples in Xg. For example, if Xg were composed of 200 samples, the natural relative weight of the mean vector (bg) for the construction of the distorted mean vector would be ~ 0.995, while the natural weight of the sample would (only) be ~ 0.005.
On the other hand, if the weights were calculated assuming that Xg is composed of few samples, that is, replacing ng for \( {n}_g^{red} \) (so that \( {n}_g^{red}<{n}_g \)) in the quotients to calculate the relative weights, these weights would be more even and provide a weighting factor for the calculation of the distorted correlation matrix using all the information contained in the original samples of Xg (in bg, Σg, and Sg). This is equivalent to finding a generatrix base of a few samples/patients (\( {n}_g^{red} \)) that can represent all the characteristics of Xg, incorporate xp, and then evaluate the distortion caused to the correlation structure, providing an artificial dimensional reduction. For example, if the relative weights were calculated assuming that Xg is composed only of three samples that exhibit all the attributes of the original dataset (i.e., \( {n}_g^{red}=3 \)), these weights would have the values of 0.75 and 0.25, respectively, for the calculation of the distorted mean vector.
The lower threshold for this artificial dimensional reduction could be found by setting \( {n}_g^{red}=2 \) in the calculation of the relative weights. If \( {n}_g^{red}=1 \), this would lead to leaving out all the information contained in Sg from the estimation of \( {\overset{\sim }{S}}_g \) (see Eq. (30)). A similar result is obtained for the standard deviation (see Eq. (25)).
Proposed classification rule
Now that the artificial dimensional reduction approach has been proposed, it will be used alongside the metric φ for the creation of a tool to classify new samples/patients into either the control or case group. The classifier will work under the assumption that a sample's likelihood of belonging to either group is inversely proportional to the distortion caused by its incorporation into that group. This classification approach includes the following steps:
Store the new sample in xp.
Define the "maximum artificial dimension" to be evaluated as \( n\le \mathit{\min}\left({n}_c,{n}_v\right)\ \left(n\in {\mathbbm{z}}^{+}\right). \) Choose a dimension "step of change", \( \Delta n\in {\mathbbm{z}}^{+} \), such as n − 2 is divisible by ∆n. Thus, \( \frac{\left(n-2\right)}{\Delta n}+1 \) would define the number of artificial dimensions to be evaluated. Therefore, we set \( {n}_g^{red}=\left(2,2+\Delta n,2+2\Delta n,\dots, n\right) \) for both g = c and g = v.
Evaluate Eqs. (18), (25), (26) and (30) using \( {n}_g^{red} \) instead of ng. Perform this evaluation for both g = c and g = v, and for all values of \( {n}_g^{red} \). Store the resulting distorted correlation matrices as
$$ {\overset{\sim }{\mathcal{S}}}_c=\left\{\begin{array}{c}{\overset{\sim }{S}}_{c_{\mid {}_{n_g^{red}=2}}}\\ {}\vdots \\ {}{\overset{\sim }{S}}_{c_{\mid_{n_g^{red}=n}}}\end{array}\right\},\kern1em {\overset{\sim }{\mathcal{S}}}_v=\left\{\begin{array}{c}{\overset{\sim }{S}}_{v_{\mid_{n_g^{red}=2}}}\\ {}\vdots \\ {}{\overset{\sim }{S}}_{v_{\mid_{n_g^{red}=n}}}\end{array}\right\} $$
For each \( {n}_g^{red}=\left(2,2+\Delta n,2+2\Delta n,\dots, n\right) \), calculate
$$ {\left.\left({\psi}_g\right)\right|}_{n_g^{red}}:= \frac{1}{\left|\varphi \left({S}_g,{\overset{\sim }{S}}_{g_{n_g^{red}}}\right)\right|},\kern1.75em g=\left\{c,v\right\} $$
where |l| is the absolute value of l. In consequence, large values of ψ indicate a small distortion in the correlation structure, and therefore, a high degree of affinity between Xg and xp. On the other hand, small values of ψ indicate a big distortion and a low degree of affinity between Xg and xp.
Calculate the average value for \( {\left.\left({\psi}_g\right)\right|}_{n_g^{red}} \) as
$$ {\overline{\psi}}_g=\frac{1}{n}\sum \limits_{\forall {n}_g^{red}}\left[{\left.\left({\psi}_g\right)\right|}_{n_g^{red}}\right],\kern1.75em g=\left\{c,v\right\} $$
Finally, the outcomes of the proposed classification rule, for a single sample, are \( {\overline{\psi}}_c \) and \( {\overline{\psi}}_v \). The method will classify the sample into the group with the greater value of \( {\overline{\psi}}_g \). Figure 4 shows a graphical representation to visualize the outcome of the proposed classification method after classifying a set of new samples one-by-one.
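For illustration, the whole classification rule can be condensed into a few lines that reuse the sketches for φ and the distorted correlation matrix; each group is summarized by (Sg, Σg, bg), and the plain mean over the evaluated artificial dimensions plays the role of Eq. (33). Variable names, and representing the groups by a dictionary, are our own choices rather than the authors' implementation.

```python
def classify(x_p, groups, n_max, dn, alpha=0.05):
    """groups maps 'c' / 'v' to (S_g, sigma_g, b_g); returns the predicted label and psi-bar values."""
    psi_bar = {}
    for g, (S_g, sigma_g, b_g) in groups.items():
        psis = []
        for n_red in range(2, n_max + 1, dn):                    # artificial dimensions
            S_t = distorted_correlation(S_g, sigma_g, b_g, x_p, n_red)
            psis.append(1.0 / abs(phi(S_g, S_t, alpha)))         # Eq. (32)
        psi_bar[g] = np.mean(psis)                               # Eq. (33)
    return max(psi_bar, key=psi_bar.get), psi_bar
```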
Illustration of new samples and the line that separates both groups with the proposed method. Samples lying in the upper semi-plane will be classified in the case (v) group and in the control (c) group otherwise
Performance assessment with synthetic data
In this section, we assess the performance of the proposed method to correctly classify synthetically generated data.
Synthetic data generation
We conducted in silico experiments to assess the performance of the proposed method under different parameter settings. The following procedure was used to generate synthetic datasets:
Define the quadruplet (ni, mj, ρc, ρv). Set n = {20,40,60,80,100,120,140,160}, m = {20,40,60,80,100,120,140}, ρc = 0.1, ρv = 0.2.
For every quadruplet in step 1, construct a pair of generatrix correlation matrices, \( {\Sigma}_{c_{j,c}} \) and \( {\Sigma}_{v_{j,v}} \), as \( {\Sigma}_{c_{j,c}}=\left(1-{\rho}_c\right){I}_{m_j}+{\rho}_c{1}_{m_j}{1}_{m_j}^T \) and \( {\Sigma}_{v_{j,v}}=\left(1-{\rho}_v\right){I}_{m_j}+{\rho}_v{1}_{m_j}{1}_{m_j}^T \), where \( {I}_{m_j}\in {\mathbb{R}}^{m_j\times {m}_j} \) is the identity matrix and \( {1}_{m_j}\in {\mathbb{R}}^{m_j\times 1} \) is a column vector of ones.
For every pair \( \left({\Sigma}_{c_{j,c}},{\Sigma}_{v_{j,v}}\right) \), B pairs of normally distributed matrices \( {X}_{c_r} \) and \( {X}_{v_r} \) (with r = {1, 2, …, B}) of dimension ni × mj are generated. For this purpose, the NumPy [54] Python package was used. The number of experimental replicates was B = 100. A short sketch of this generation step is given below.
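Steps 2 and 3 of this procedure can be reproduced with a few lines of NumPy; the random seed and the example dimensions below are arbitrary.

```python
import numpy as np

def synth_group(n, m, rho, rng):
    """n samples from N(0, Sigma) with the compound-symmetry matrix Sigma = (1 - rho) I + rho 11^T."""
    Sigma = (1.0 - rho) * np.eye(m) + rho * np.ones((m, m))
    return rng.multivariate_normal(np.zeros(m), Sigma, size=n)

rng = np.random.default_rng(0)
X_c = synth_group(40, 20, 0.10, rng)   # control-like group, rho_c = 0.1
X_v = synth_group(40, 20, 0.20, rng)   # case-like group, rho_v = 0.2
```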
Performance assessment procedure
We used the correct classification rate (accuracy) as the assessment criterion to measure the performance of our method as follows:
Merge each \( \left({X}_{c_r},{X}_{v_r}\right) \) into a single matrix \( {X}_{Total}=\left[\begin{array}{c}{X}_{c_r}\\ {}{X}_{v_r}\end{array}\right]\in {\mathbb{R}}^{2n\times m} \).
For every pair \( \left({X}_{c_r},{X}_{v_r}\right) \), execute the proposed algorithm with each row sample \( {x}_{p_i}={X}_{Total}\left[i,:\right] \), i = {1, 2, …, 2n}, and classify \( {x}_{p_i} \).
Compute the average classification accuracy as:
$$ \mathrm{Accuracy}=100\times \frac{N}{2n} $$
where N is the number of correctly classified samples.
Performance assessment results with synthetic data
Table 3 summarizes the main results. Our method exhibits exceptional accuracy for all the configurations tested. Interestingly, accuracy decreases as the number of features m decreases and the sample size n increases.
Table 3 Performance of the proposed method for synthetic datasets. Configurations (n, m) not reported showed 100% Classification Accuracy
Validation with real datasets
In this section, we study the performance of the proposed method using two real-world datasets, which contain OTU count tables obtained through 16S rRNA gene sequencing data from microbiota experiments. We also compare the classification accuracy of our method with those of two state-of-the-art methods: SVM [39] and SVM-RFE [41].
The first dataset is from the American Gut Project (AGP) [51], which is one of the largest crowd-funded microbiome research projects. The second dataset is the Greengenes (GG) database [52], created with the PhyloChip 16s rRNA microarray. For the comparison experiment, only fractions of the datasets were used. In particular, a total of 578 samples and 127 features comprised the AGP data set, while 500 samples and 26 features comprised the GG data set. In both data sets, 50% of the samples correspond to cases.
Validation scenarios results
Datasets were preprocessed as described in section "Data pretreatment". Further, the proposed method, as well as the SVM and SVM-RFE methods, were applied after separating the whole data set into training, testing, and validation sets using 70, 20, and 10% of the data, respectively. For the SVM-RFE method, the number of features to select was \( {n}_{features}=\left\{5,10,15,\frac{n_{features}}{2}\right\} \) and the average of the results was calculated. The tuning parameters used for the SVM and SVM-RFE methods were C = 1 and γ = 0.05, where C trades off the correct classification of training examples against the maximization of the decision function's margin, and γ defines how far the influence of a single training example reaches.
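For reference, the two baselines can be reproduced in scikit-learn roughly as follows. Here X and y stand for the preprocessed feature matrix (a NumPy array) and the binary labels, the split fractions match the text, and a linear-kernel SVM is used inside RFE because recursive elimination requires feature weights; this is a sketch of comparable baselines, not the authors' exact pipeline.

```python
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

# 70 / 20 / 10 split into training, testing and validation sets
X_tmp, X_val, y_tmp, y_val = train_test_split(X, y, test_size=0.10, stratify=y, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X_tmp, y_tmp, test_size=2.0 / 9.0,
                                          stratify=y_tmp, random_state=0)

# Plain SVM baseline with the tuning parameters reported in the text
svm = SVC(C=1.0, gamma=0.05).fit(X_tr, y_tr)
acc_svm = svm.score(X_te, y_te)

# SVM-RFE baseline: select features with a linear SVM, then refit the classifier on them
rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=10).fit(X_tr, y_tr)
mask = rfe.support_
acc_rfe = SVC(C=1.0, gamma=0.05).fit(X_tr[:, mask], y_tr).score(X_te[:, mask], y_te)
```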
Table 4 shows the main results. For the AGP data set, SVM is the least accurate, and SVM-RFE has the highest accuracy. This latter result is mostly due to all the strong features of SVM and the ability of the SVM-RFE method to eliminate variables that are not highly relevant in the data. Interestingly, our method outperforms SVM and is a close competitor of SVM-RFE.
Table 4 Classification accuracy for each method for the AGP and GG data sets
For the GG dataset, although the number of variables is small, the SVM-RFE and our method showed accuracy values above 90%, while the accuracy for the SVM method is below this threshold. It is worth highlighting that, for this data set, our method outperforms both the SVM and SVM-RFE methods. The latter result is thanks to the artificial dimensional reduction conducted to balance the natural weights when the number of samples is greater than the number of variables. Figure 5 provides a graphical illustration of the proposed method's classification outcome for both real datasets used for validation, i.e., the AGP and the GG.
Illustration of new samples and the line that separates both groups with the proposed method for the AGP (left) and GG (right) data sets
The ability to characterize populations of patients, species, or biological features, usually described by a large number of variables, and to use the extracted characteristics to classify new samples into one of such populations' categories, is a relevant tool for biological and medical studies. When the data describing these populations are compositional, further limitations and challenges arise.
Here, we proposed a new method to classify samples into one of two previously known categories. The method uses a new metric developed to quantify the overall correlation structure deviation between two datasets, and a new dimensionality reduction technique. Although we illustrated the usefulness of our proposal with compositional data, its application is not limited, under any circumstances, to data of this nature. In fact, when data is not compositional, the centered log-ratio transformation and the zero-replacement algorithm must not be applied.
Validation with synthetic data showed that the proposed method achieves accuracy values above 98%. Moreover, comparison of the performance of our method with that of SVM and the SVM-RFE (i.e., two state-of-the-art classification techniques), using two real-world datasets from 16S rRNA sequencing experiments, showed that our method outperforms the SVM method in both data sets, outperforms the SVM-RFE method in the GG data set, and is a close competitor of the SVM-RFE method in the AGP data set.
Future studies may address the ability of our proposed method to perform accurately for a broader range of dimensions (number of variables and samples) and assess its performance for more scenarios of dissimilar correlation structures other than that for ρc = 0.1 and ρv = 0.2. Moreover, our method may be extrapolated for multi-category classification, and a performance assessment may be conducted to test its classification accuracy in non-binary scenarios.
The source code, implemented in Python 3, is readily available in the following GitHub site: https://github.com/JoaoRacedo/arn_seq_pipeline. This code generates synthetic datasets to demonstrate the use of the pipeline. The American Gut Project's datasets can be found on the following website: http://americangut.org. Finally, the Greengenes' datasets can be found on: https://greengenes.lbl.gov/Download/OTUs/.
AGP:
American gut project
alr:
Additive log-ratio
ANN:
Artificial neural network
BM:
Bayesian multiplicative (algorithm)
clr:
Centered log-ratio
CSVM-RMFE:
Correlation based support vector machine–recursive multiple feature elimination
GG:
Greengenes (database)
GrBoost:
Gradient boosting
ilr:
Isometric log-ratio
NN:
Nearest Neighbor
OTU:
Operational taxonomic unit
PCA:
Principal component analysis
rRNA:
Ribosomal ribonucleic acid
SPIEC-EASI:
Sparse inverse covariance estimation for ecological association inference
SVM:
Support vector machine
SVM-RFE:
Support vector machine recursive feature elimination
Turnbaugh PJ, Ley RE, Hamady M, Fraser-Liggett CM, Knight R, Gordon JI. The Human Microbiome Project. Nature [Internet]. 2007;449(7164):804–10. Available from: https://doi.org/10.1038/nature06244.
Kitano H. Looking beyond the details: a rise in system-oriented approaches in genetics and molecular biology. Curr Genet [Internet]. 2002 [cited 2019 Nov 13];41(1):1–10. Available from: https://doi.org/10.1007/s00294-002-0285-z.
Oltvai ZN. Life's complexity pyramid. Science. 2002;298:763–4.
Kitano H. Systems biology: a brief overview. Science. 2002;295(5560):1662–4.
Voorhies AA, Ott CM, Mehta S, Pierson DL, Crucian BE, Feiveson A, et al. Study of the impact of long-duration space missions at the International Space Station on the astronaut microbiome. Sci Rep [Internet]. 2019;1–17. Available from: https://doi.org/10.1038/s41598-019-46303-8
Somerville C, Somerville S. Plant functional genomics. Science. 1999;285(5426):380–3.
Gill R, Datta S, Datta S. A statistical framework for differential network analysis from microarray data. BMC Bioinformatics. 2010;11(1):95.
Gill R, Datta S, Datta S. dna: an R package for differential network analysis. Bioinformation. 2014;10(4):233.
Juric D, Lacayo NJ, Ramsey MC, Racevskis J, Wiernik PH, Rowe JM, et al. Differential gene expression patterns and interaction networks in BCR-ABL—positive and—negative adult acute lymphoblastic leukemias. J Clin Oncol. 2007;25(11):1341–9.
Van Treuren W, Ren B, Gevers D, Kugathasan S, Denson LA, Va Y, et al. Resource the treatment-naive microbiome in new-onset Crohn's disease. Cell Host Microbe. 2014;15:382–92.
Ruan D, Young A, Montana G. Differential analysis of biological networks. BMC Bioinformatics. 2015;16(1):327.
Schloss PD, Westcott SL, Ryabin T, Hall JR, Hartmann M, Hollister EB, et al. Introducing mothur: open-source, platform-independent, community-supported software for describing and comparing microbial communities. Appl Environ Microbiol. 2009;75(23):7537–41.
Rao KR, Lakshminarayanan S. Partial correlation based variable selection approach for multivariate data classification methods. Chemom Intell Lab Syst. 2007;86(1):68–81.
Kurtz ZD, Müller CL, Miraldi ER, Littman DR, Blaser MJ, Bonneau RA. Sparse and compositionally robust inference of microbial ecological networks. PLoS Comput Biol. 2015;11(5):e1004226.
Aitchison J. The statistical analysis of compositional data. J R Stat Soc Ser B. 1982:139–77.
Filzmoser P, Hron K, Reimann C. Science of the Total Environment Univariate statistical analysis of environmental (compositional) data: problems and possibilities. Sci Total Environ [Internet]. 2009;407(23):6100–8. Available from: https://doi.org/10.1016/j.scitotenv.2009.08.008.
Clark C, Kalita J. A comparison of algorithms for the pairwise alignment of biological networks. Bioinformatics [Internet]. 2014;30(16):2351–9. Available from: https://doi.org/10.1093/bioinformatics/btu307.
Atchison J, Shen SM. Logistic-normal distributions: some properties and uses. Biometrika. 1980;67(2):261–72.
Aitchison J. A new approach to null correlations of proportions. J Int Assoc Math Geol. 1981;13(2):175–89.
Egozcue JJ, Pawlowsky-Glahn V, Mateu-Figueras G, Barceló-Vidal C. Isometric Logratio transformations for compositional data analysis. Math Geol [Internet]. 2003;35(3):279–300. Available from: https://doi.org/10.1023/A:1023818214614.
Greenacre M, Grunsky E. The isometric logratio transformation in compositional data analysis: a practical evaluation. 2019.
Pan M, Zhang J. Correlation-based linear discriminant classification for gene expression data. Genet Mol Res. 2017;16(1).
Hira ZM, Gillies DF. A review of feature selection and feature extraction methods applied on microarray data. Adv Bioinforma 2015;2015.
Goswami S, Chakrabarti A, Chakraborty B. Analysis of correlation structure of data set for efficient pattern classification. In: 2015 IEEE 2nd International Conference on Cybernetics (CYBCONF); 2015. p. 24–9.
Russell EL, Chiang LH, Braatz RD. Data-driven methods for fault detection and diagnosis in chemical processes. New York: Springer Science & Business Media; 2012.
Saeys Y, Inza I, Larrañaga P. A review of feature selection techniques in bioinformatics. Bioinformatics. 2007;23(19):2507–17.
Serban N, Critchley-Thorne R, Lee P, Holmes S. Gene expression network analysis and applications to immunology. Bioinformatics. 2007;23(7):850–8.
Friedman J, Alm EJ. Inferring correlation networks from genomic survey data. PLoS Comput Biol. 2012;8(9):e1002687.
Radovic M, Ghalwash M, Filipovic N, Obradovic Z. Minimum redundancy maximum relevance feature selection approach for temporal gene expression data. BMC Bioinformatics. 2017;18(1):1–14.
Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010;11(10):R106.
Paulson JN, Stine OC, Bravo HC, Pop M. Differential abundance analysis for microbial marker-gene surveys. Nat Methods. 2013;10(12):1200–2.
Kavitha KR, Rajendran GS, Varsha J. A correlation based SVM-recursive multiple feature elimination classifier for breast cancer disease using microarray. In: 2016 International Conference on Advances in Computing, Communications and Informatics (ICACCI); 2016. p. 2677–83.
Collins GS, Mallett S, Omar O, Yu L-M. Developing risk prediction models for type 2 diabetes: a systematic review of methodology and reporting. BMC Med. 2011;9(1):103.
Aarøe J, Lindahl T, Dumeaux V, Sæbø S, Tobin D, Hagen N, et al. Gene expression profiling of peripheral blood cells for early detection of breast cancer. Breast Cancer Res. 2010;12(1):R7.
Datta S. Classification of breast cancer versus normal samples from mass spectrometry profiles using linear discriminant analysis of important features selected by random forest. Stat Appl Genet Mol Biol. 2008;7(2).
Šonka M, Hlaváč V, Boyle R. Image processing, analysis, and machine vision. International Student Edition; 2008.
Dembélé D, Kastner P. Fold change rank ordering statistics: a new method for detecting differentially expressed genes. BMC Bioinformatics. 2014;15(1):14.
Bevilacqua V, Mastronardi G, Menolascina F, Paradiso A, Tommasi S. Genetic algorithms and artificial neural networks in microarray data analysis: a distributed approach. Eng Lett. 2006;13(4).
Ca DAV, Mc V. Gene expression data classification using support vector machine and mutual information-based gene selection. Proc Comput Sci. 2015;47:13–21.
van Dam S, Vosa U, van der Graaf A, Franke L, de Magalhaes JP. Gene co-expression analysis for functional classification and gene--disease predictions. Brief Bioinform. 2018;19(4):575–92.
Dudoit S, Fridlyand J, Speed TP. Comparison of discrimination methods for the classification of tumors using gene expression data. J Am Stat Assoc. 2002;97(457):77–87.
Bhuvaneswari V, et al. Classification of microarray gene expression data by gene combinations using fuzzy logic (MGC-FL). Int J Comput Sci Eng Appl. 2012;2(4):79.
Belciug S, Gorunescu F. Learning a single-hidden layer feedforward neural network using a rank correlation-based strategy with application to high dimensional gene expression and proteomic spectra datasets in cancer detection. J Biomed Inform. 2018;83:159–66.
Dettling M, Bühlmann P. Boosting for tumor classification with gene expression data. Bioinformatics. 2003;19(9):1061–9.
Friedman J, Hastie T, Tibshirani R, et al. Additive logistic regression: a statistical view of boosting (with discussion and a rejoinder by the authors). Ann Stat. 2000;28(2):337–407.
Freund Y, Schapire RE. A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Syst Sci. 1997;55(1):119–39.
Fix E, Hodges Jr JL. Discriminatory analysis-nonparametric discrimination: small sample performance; 1952.
Breiman L, Friedman J, Stone CJ, Olshen RA. Classification and regression trees. Boca Raton, FL: CRC Press; 1984.
Martín-Fernández J-A, Hron K, Templ M, Filzmoser P, Palarea-Albaladejo J. Bayesian-multiplicative treatment of count zeros in compositional data sets. Stat Modelling. 2015;15(2):134–58.
Pearson K. Mathematical contributions to the theory of evolution—on a form of spurious correlation which may arise when indices are used in the measurement of organs. Proc R Soc Lond. 1897;60(359–367):489–98.
McDonald D, Hyde E, Debelius JW, Morton JT, Gonzalez A, Ackermann G, et al. American Gut: an open platform for citizen science microbiome research. Msystems. 2018;3(3):e00031–18.
DeSantis TZ, Hugenholtz P, Larsen N, Rojas M, Brodie EL, Keller K, et al. Greengenes, a chimera-checked 16S rRNA gene database and workbench compatible with ARB. Appl Environ Microbiol. 2006;72(7):5069–72.
Acknowledgements and funding
This study was financed by COLCIENCIAS grant No. 1215-5693-4635, contract 0770-2013. By the time this work was developed, IP was a doctoral student at Universidad del Norte, Colombia, whose PhD was funded by COLCIENCIAS and Gobernación del Atlántico (Colombia), grant No. 673 (2014), "Formación de Capital Humano de Alto Nivel para el Departamento del Atlántico". EZ and HSJV are supported in part by award No. R01AI110385 from the National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, MD, USA. JIV was partially supported by research grant FOFICO 32101 PE0031 from Universidad del Norte, Barranquilla, Colombia.
Universidad del Norte, Barranquilla, Colombia
Sebastian Racedo, Ivan Portnoy, Jorge I. Vélez, Homero San-Juan-Vergara, Marco Sanjuan & Eduardo Zurek
Productivity and Innovation Department, Universidad de la Costa, Calle 58 # 55-56, Barranquilla, Colombia
Ivan Portnoy
Sebastian Racedo
Jorge I. Vélez
Homero San-Juan-Vergara
Marco Sanjuan
Eduardo Zurek
Technique design: SR, IP, EZ. Algorithms implementation: SR, IP. Experimental design: JIV, EZ, HSJV, MS. Writing of the manuscript: SR, IP, EZ, JIV, MS, HSJV.
Correspondence to Ivan Portnoy.
Racedo, S., Portnoy, I., Vélez, J.I. et al. A new pipeline for structural characterization and classification of RNA-Seq microbiome data. BioData Mining 14, 31 (2021). https://doi.org/10.1186/s13040-021-00266-7
Received: 04 November 2020
Microbial communities
Compositional nature
Classification method
16S rRNA sequencing
Chen, Yi-Hsiu, Mika Goos, Salil P. Vadhan, and Jiapeng Zhang. "A tight lower bound for entropy flattening." In 33rd Computational Complexity Conference (CCC 2018), 102:23:21-23:28. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik: Leibniz International Proceedings in Informatics (LIPIcs), 2018. Publisher's Version.
CCC2018.pdf
Version History: Preliminary version posted as ECCC TR18-119.
We study entropy flattening: Given a circuit \(C_X\) implicitly describing an n-bit source \(X\) (namely, \(X\) is the output of \(C_X \) on a uniform random input), construct another circuit \(C_Y\) describing a source \(Y\) such that (1) source \(Y\) is nearly flat (uniform on its support), and (2) the Shannon entropy of \(Y\) is monotonically related to that of \(X\). The standard solution is to have \(C_Y\) evaluate \(C_X\) altogether \(\Theta(n^2)\) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: Any circuit \(C_Y\) for entropy flattening that repeatedly queries \(C_X\) as an oracle requires \(\Omega(n^2)\)queries.
Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [12, 22, 13, 6, 11, 10, 7, 24]. It is also used in reductions between problems complete for statistical zero-knowledge [19, 23, 4, 25]. The \(\Theta(n^2)\) query complexity is often the main efficiency bottleneck. Our lower bound can be viewed as a step towards proving that the current best construction of pseudorandom generator from arbitrary one-way functions by Vadhan and Zheng (STOC 2012) has optimal efficiency.
Haitner, Iftach, and Salil Vadhan. "The Many Entropies in One-way Functions." In Tutorials on the Foundations of Cryptography, 159-217. Springer, Yehuda Lindell, ed. 2017. Publisher's Version.
SPRINGER 2017.pdf
ECCC 5-2017.pdf
ECCC 12-2017.pdf
Earlier versions: May 2017: ECCC TR 17-084
Dec. 2017: ECCC TR 17-084 (revised)
Computational analogues of information-theoretic notions have given rise to some of the most interesting phenomena in the theory of computation. For example, computational indistinguishability, Goldwasser and Micali [9], which is the computational analogue of statistical distance, enabled the bypassing of Shannon's impossibility results on perfectly secure encryption, and provided the basis for the computational theory of pseudorandomness. Pseudoentropy, Håstad, Impagliazzo, Levin, and Luby [17], a computational analogue of entropy, was the key to the fundamental result establishing the equivalence of pseudorandom generators and one-way functions, and has become a basic concept in complexity theory and cryptography.
This tutorial discusses two rather recent computational notions of entropy, both of which can be easily found in any one-way function, the most basic cryptographic primitive. The first notion is next-block pseudoentropy, Haitner, Reingold, and Vadhan [14], a refinement of pseudoentropy that enables simpler and more efficient construction of pseudorandom generators. The second is inaccessible entropy, Haitner, Reingold, Vadhan, and Wee [11], which relates to unforgeability and is used to construct simpler and more efficient universal one-way hash functions and statistically hiding commitments.
Vadhan, Salil. "The Complexity of Differential Privacy." In Tutorials on the Foundations of Cryptography, 347-450. Springer, Yehuda Lindell, ed. 2017. Publisher's VersionAbstract
ERRATA 2017.pdf
MANUSCRIPT 2017.pdf
August 2016: Manuscript v1 (see files attached)
March 2017: Manuscript v2 (see files attached); Errata
April 2017: Published Version (in Tutorials on the Foundations of Cryptography; see Publisher's Version link and also SPRINGER 2017.PDF, below)
Differential privacy is a theoretical framework for ensuring the privacy of individual-level data when performing statistical analysis of privacy-sensitive datasets. This tutorial provides an introduction to and overview of differential privacy, with the goal of conveying its deep connections to a variety of other topics in computational complexity, cryptography, and theoretical computer science at large. This tutorial is written in celebration of Oded Goldreich's 60th birthday, starting from notes taken during a minicourse given by the author and Kunal Talwar at the 26th McGill Invitational Workshop on Computational Complexity [1].
Chen, Yi-Hsiu, Kai-Min Chung, Ching-Yi Lai, Salil P. Vadhan, and Xiaodi Wu. "Computational notions of quantum min-entropy." In Poster presentation at QIP 2017 and oral presentation at QCrypt 2017, 2017. Publisher's Version.
ArXiv v1, 24 April 2017 https://arxiv.org/abs/1704.07309v1
ArXiv v3, 9 September 2017 https://arxiv.org/abs/1704.07309v3
ArXiv v4, 5 October 2017 https://arxiv.org/abs/1704.07309v4
We initiate the study of computational entropy in the quantum setting. We investigate to what extent the classical notions of computational entropy generalize to the quantum setting, and whether quantum analogues of classical theorems hold. Our main results are as follows. (1) The classical Leakage Chain Rule for pseudoentropy can be extended to the case that the leakage information is quantum (while the source remains classical). Specifically, if the source has pseudoentropy at least \(k\), then it has pseudoentropy at least \(k−ℓ \) conditioned on an \(ℓ \)-qubit leakage. (2) As an application of the Leakage Chain Rule, we construct the first quantum leakage-resilient stream-cipher in the bounded-quantum-storage model, assuming the existence of a quantum-secure pseudorandom generator. (3) We show that the general form of the classical Dense Model Theorem (interpreted as the equivalence between two definitions of pseudo-relative-min-entropy) does not extend to quantum states. Along the way, we develop quantum analogues of some classical techniques (e.g. the Leakage Simulation Lemma, which is proven by a Non-uniform Min-Max Theorem or Boosting). On the other hand, we also identify some classical techniques (e.g. Gap Amplification) that do not work in the quantum setting. Moreover, we introduce a variety of notions that combine quantum information and quantum complexity, and this raises several directions for future work.
Bun, Mark, Yi-Hsiu Chen, and Salil Vadhan. "Separating computational and statistical differential privacy in the client-server model." In Martin Hirt and Adam D. Smith, editors, Proceedings of the 14th IACR Theory of Cryptography Conference (TCC `16-B). Lecture Notes in Computer Science. Springer Verlag, 31 October-3 November, 2016. Publisher's Version.
TCC 16-B.pdf
Version History: Full version posted on Cryptology ePrint Archive, Report 2016/820.
Differential privacy is a mathematical definition of privacy for statistical data analysis. It guarantees that any (possibly adversarial) data analyst is unable to learn too much information that is specific to an individual. Mironov et al. (CRYPTO 2009) proposed several computational relaxations of differential privacy (CDP), which relax this guarantee to hold only against computationally bounded adversaries. Their work and subsequent work showed that CDP can yield substantial accuracy improvements in various multiparty privacy problems. However, these works left open whether such improvements are possible in the traditional client-server model of data analysis. In fact, Groce, Katz and Yerukhimovich (TCC 2011) showed that, in this setting, it is impossible to take advantage of CDP for many natural statistical tasks.
Our main result shows that, assuming the existence of sub-exponentially secure one-way functions and 2-message witness indistinguishable proofs (zaps) for NP, that there is in fact a computational task in the client-server model that can be efficiently performed with CDP, but is infeasible to perform with information-theoretic differential privacy.
Mahmoody, Mohammad, Tal Moran, and Salil Vadhan. "Publicly verifiable proofs of sequential work." In Innovations in Theoretical Computer Science (ITCS '13), 373-388. ACM, 2013. Publisher's Version.
ITCS2013.pdf
Version History: Preliminary version posted as Cryptology ePrint Archive Report 2011/553, under title "Non-Interactive Time-Stamping and Proofs of Work in the Random Oracle Model".
We construct a publicly verifiable protocol for proving computational work based on collision- resistant hash functions and a new plausible complexity assumption regarding the existence of "inherently sequential" hash functions. Our protocol is based on a novel construction of time-lock puzzles. Given a sampled "puzzle" \(\mathcal{P} \overset{$}\gets \mathbf{D}_n\), where \(n\) is the security parameter and \(\mathbf{D}_n\) is the distribution of the puzzles, a corresponding "solution" can be generated using \(N\) evaluations of the sequential hash function, where \(N > n\) is another parameter, while any feasible adversarial strategy for generating valid solutions must take at least as much time as \(\Omega(N)\) sequential evaluations of the hash function after receiving \(\mathcal{P}\). Thus, valid solutions constitute a "proof" that \(\Omega(N)\) parallel time elapsed since \(\mathcal{P}\) was received. Solutions can be publicly and efficiently verified in time \(\mathrm{poly}(n) \cdot \mathrm{polylog}(N)\). Applications of these "time-lock puzzles" include noninteractive timestamping of documents (when the distribution over the possible documents corresponds to the puzzle distribution \(\mathbf{D}_n\)) and universally verifiable CPU benchmarks.
Our construction is secure in the standard model under complexity assumptions (collision- resistant hash functions and inherently sequential hash functions), and makes black-box use of the underlying primitives. Consequently, the corresponding construction in the random oracle model is secure unconditionally. Moreover, as it is a public-coin protocol, it can be made non- interactive in the random oracle model using the Fiat-Shamir Heuristic.
Our construction makes a novel use of "depth-robust" directed acyclic graphs—ones whose depth remains large even after removing a constant fraction of vertices—which were previously studied for the purpose of complexity lower bounds. The construction bypasses a recent negative result of Mahmoody, Moran, and Vadhan (CRYPTO '11) for time-lock puzzles in the random oracle model, which showed that it is impossible to have time-lock puzzles like ours in the random oracle model if the puzzle generator also computes a solution together with the puzzle.
Vadhan, Salil, and Colin Jia Zheng. "A uniform min-max theorem with applications in cryptography." In Ran Canetti and Juan Garay, editors, Advances in Cryptology—CRYPTO '13, Lecture Notes on Computer Science, 8042:93-110. Springer Verlag, Lecture Notes in Computer Science, 2013. Publisher's Version.
ECCC2013.pdf
Full version published on ECCC2013 and IACR ePrint 2013.
We present a new, more constructive proof of von Neumann's Min-Max Theorem for two-player zero-sum game — specifically, an algorithm that builds a near-optimal mixed strategy for the second player from several best-responses of the second player to mixed strategies of the first player. The algorithm extends previous work of Freund and Schapire (Games and Economic Behavior '99) with the advantage that the algorithm runs in poly\((n)\) time even when a pure strategy for the first player is a distribution chosen from a set of distributions over \(\{0,1\}^n\). This extension enables a number of additional applications in cryptography and complexity theory, often yielding uniform security versions of results that were previously only proved for nonuniform security (due to use of the non-constructive Min-Max Theorem).
We describe several applications, including a more modular and improved uniform version of Impagliazzo's Hardcore Theorem (FOCS '95), showing impossibility of constructing succinct non-interactive arguments (SNARGs) via black-box reductions under uniform hardness assumptions (using techniques from Gentry and Wichs (STOC '11) for the nonuniform setting), and efficiently simulating high entropy distributions within any sufficiently nice convex set (extending a result of Trevisan, Tulsiani and Vadhan (CCC '09)).
Dodis, Yevgeniy, Thomas Ristenpart, and Salil Vadhan. "Randomness condensers for efficiently samplable, seed-dependent sources." In Ronald Cramer, editor, Proceedings of the 9th IACR Theory of Cryptography Conference (TCC '12), Lecture Notes on Computer Science, 7194:618-635. Springer-Verlag, 2012. Publisher's Version.
We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions \(\mathsf{Cond} : \{0,1\}^n \times \{0,1\}^d \to \{0,1\}^m\)such that if we choose a random seed \(S \gets \{0,1\}^d\), and a source \(X = \mathcal{A}(S)\) is generated by a randomized circuit \(\mathcal{A}\) of size \(t\) such that \(X\) has min- entropy at least \(k\) given \(S\), then \(\mathsf{Cond}(X ; S)\) should have min-entropy at least some \(k'\) given \(S\). The distinction from the standard notion of randomness condensers is that the source \(X\) may be correlated with the seed \(S\) (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where \(k' = m\)) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS '00).
We show that:
Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size \(t\) of the adversarial sampling algorithm \(\mathcal{A}\). Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy \(k' = m – \mathcal{O}(\log t)\), i.e. logarithmic entropy deficiency.
Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible "squared advantage" [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency.
Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system. | CommonCrawl |
01.12.2020 | Original Article | Issue 1/2020 | Open Access
A Computational Synthesis Approach of Mechanical Conceptual Design Based on Graph Theory and Polynomial Operation
Lin Han, Geng Liu, Xiaohui Yang, Bing Han
Conceptual design is an early and highly creative phase of engineering design whose basic task is to develop and synthesize building blocks into meaningful concept solutions that meet the design requirements. Among all feasible design candidates, the optimal and novel one is pursued in this phase. However, because of designers' biases and limited knowledge or experience, it is often difficult to identify all the feasible design candidates, let alone to find the optimal one among them. Therefore, numerous researchers have focused on computer-aided design synthesis, and many models and approaches have been developed.
Based on physical working principles, Kota proposed a Function-Structure model and implemented it in the design of hydraulic systems [ 1 ]. Besides, a matrix methodology to the design synthesis was developed by him and Chiou on the basis of that model [ 2 , 3 ]. Another two famous models are Function-Behavior-Structure and Function-Behavior-State [ 4 – 6 ]. The common character of these functional models is that they describe objects or design problems and solutions in terms of their known functions, and regard the design process as a function decomposition process. Therefore, the associated synthesis approaches are so-called function-based synthesis [ 7 – 11 ]. There exist other kinds of synthesis approaches, e.g., the grammar-based synthesis [ 12 – 16 ] and the graph-based synthesis [ 17 – 21 ]. In grammar-based synthesis, the generative grammars, which are a class of production systems that capture design knowledge by defining a vocabulary and rule-set, are constructed and used to generate design alternatives [ 22 ]. In graph-based synthesis, the graph theory is used to represent a product and define the relationships between its components, and the graph concepts and theorems are employed to generate design candidates [ 23 ]. For now, graph-based synthesis approaches have been widely used in the synthesis, analysis and optimization process of linkage systems [ 24 – 26 ] and epicyclic gear trains [ 27 , 28 ]. Moreover, Bin proposed a computational conceptual design synthesis model based on space matrix [ 29 ], and he also integrated the functional synthesis of mechanisms with mechanical efficiency and cost to find out the optimal solution [ 30 , 31 ]. Similarly, Masakazu proposed an integrated optimization for supporting functional and layout designs during conceptual design phase [ 32 ].
Building on this considerable body of research, this paper proposes a novel and more computable synthesis approach for mechanisms. The advantage of this approach is that it builds the mathematical structure of mechanism design synthesis on graph theory, so that the design candidates can be calculated by the induced polynomial operation formulas. Besides, each design candidate is finally represented in matrix form, which will be conducive to the analysis and optimization of the design candidates in future research. The rest of the paper is outlined as follows. Section 2 describes some basic concepts of graph theory that will be used in the proposed synthesis approach. Section 3 builds up the graph framework of the synthesis approach. The polynomial operations, including the polynomial-walk operation, edge sequence operation and vertex sequence operation, are presented in Section 4. The computational flowchart of the synthesis approach is summarized in Section 5. In Section 6, the proposed synthesis approach is applied to figure out all the feasible design candidates of a mechanical system. Some concluding remarks are made in Section 7.
2 Basic Concepts on the Used Graph Theory
As a branch of mathematics, graph theory has found numerous applications in many fields of engineering for its ability to concisely represent and handle the relationships between different objects. In this section, some fundamental concepts of graph theory that are essential for the proposed synthesis approach are introduced briefly. The detailed descriptions of these concepts can be found in Ref. [ 33 ].
The mathematical structure of a directed graph or digraph D, as illustrated in Figure 1(a), is an ordered triple \(\left( {V(D),E(D),\varPsi_{D} } \right)\). Where V( D) is a nonempty set of vertices, e.g., { V 1, V 2, V 3, V 4}, E( D) is a set of edges, e.g., { e 1, e 2, e 3, e 4, e 5, e 6}, that disjoints from V( D), and \(\psi_{D}\) is an incidence function that assigns every edge e an initial vertex init( e) and a terminal vertex ter( e). The edge e is said to be directed from init( e) to ter( e), e.g., the direction of e 1 is from init( e 1) = V 1 to ter( e 1) = V 2. Usually, an edge is always written as a two-tuple style, e.g., the two-tuple of e 3 is e 3( V 1, V 4).
A digraph and its adjacency matrix
In a digraph, if several edges share the same pair of vertices, such edges are called multiple edges, e.g., e5 and e6; if an edge's initial vertex init(e) equals its terminal vertex ter(e), such an edge is called a loop, e.g., e2. A directed walk W in D is a finite non-null sequence, e.g., W = (V1, e3, V4, e6, V3, e5, V4, e6, V3, e4, V2), whose terms are alternately vertices and edges. A walk is called a path if all of its vertices and edges are distinct, e.g., P = (V1, e3, V4, e6, V3), and its length is the number of edges between the starting and ending vertices. A path of D is also a subgraph of D, since all of its vertices and edges are contained in D.
The topological structure, or in other words the connection relationships between vertices, can be conveniently represented in matrix form. A vertex-to-vertex adjacency matrix to a digraph is defined as \(A(D) = [a_{ij} ]\), where \(a_{ij} = \mu (V_{i} ,V_{j} )\) is the number of directed-edges whose init( e) = V i and ter( e) = V j, e.g., Figure 1(b). Let A k( D) be the kth power of A( D), where k is a positive integer number. Then an adjacency matrix theorem can be described as follows.
Adjacency Matrix Theorem: The number of walks of length n from vertex V i to vertex V j in a digraph D is given by the ( i, j) element of A n ( D).
For example, the third power of A( D) in Figure 1(b) is \(A^{3} (D) = \left[ {\begin{array}{*{20}c} 0 & 2 & 0 & 1 \\ 0 & 1 & 0 & 0 \\ 0 & 2 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ \end{array} } \right]\).
Then, the number of walks of length 3 from V 1 to V 2 is A 3( D)(1, 2) = 2, and they are W 1 = ( V 1, e 1, V 2, e 2, V 2, e 2, V 2) and W 2 = ( V 1, e 3, V 4, e 6, V 3, e 4, V 2).
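As a quick check of the adjacency matrix theorem, the following NumPy sketch rebuilds A(D) for the digraph of Figure 1 from its edge list and raises it to the third power; the 0-indexed edge list is transcribed from the edges discussed above.

```python
import numpy as np

# Edges of the digraph D in Figure 1 as (init(e), ter(e)) pairs, 0-indexed:
# e1(V1,V2), e2(V2,V2) loop, e3(V1,V4), e4(V3,V2), e5(V3,V4), e6(V4,V3)
edges = [(0, 1), (1, 1), (0, 3), (2, 1), (2, 3), (3, 2)]

A = np.zeros((4, 4), dtype=int)
for i, j in edges:
    A[i, j] += 1                      # multiple edges would simply accumulate here

A3 = np.linalg.matrix_power(A, 3)
print(A3)                             # A3[0, 1] == 2: two walks of length 3 from V1 to V2
```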
3 Graph Framework
3.1 Kinematic Function Unit
A mechanical system, which is made up by variety mechanisms, is designed to transform the power or motion from the system input to the system output. Therefore, each mechanism has the function of transferring and transforming the power or motion that pass through it. In this paper, the function representation of a mechanism is called a kinematic function unit (KFU). It has four fundamental features: (1) structure type (crank-rocker, spur gear, etc.), (2) motional type (rotation, translation or swing), (3) continuous or intermittent motion, (4) reciprocating motion or not. The relationship between a specific mechanism and the KFU belonging to it is one-to-one or one-to-many, for the kinematic transformation between the input and output members of a specific mechanism may be interchangeable.
For example, if the crank is the driving member in the slider-crank mechanism displayed in Figure 2(a), the slider-crank is able to transform the crank's continuous rotational motion into the slider's reciprocating translational motion. If, instead, the slider is the driving member, the slider-crank is able to transform the slider's reciprocating translational motion into the crank's continuous rotational motion. Therefore, the slider-crank mechanism has two different KFUs. To describe these KFUs in a general way and make them easy to apply and recognize in a computer, a general mathematical structure of a KFU is given by
$$KFU = \{ C_{i}^{F} ,MI,MO\} ,$$
where \(C_{i}^{F}\) is the identification of a KFU, and it includes three different symbols, i.e., C, F, i, that have diverse values and meanings. The value of C is the abbreviation for the name of a mechanism to which KFU is subordinate, e.g., SC in Figure 2(b) is the abbreviation of slider-crank. The value of F is the motional transformation type of a KFU, e.g., F = RT in Figure 2(b) represents a rotational motion is transformed into a translational motion, while F = TR represents a translational motion is transformed into a rotational motion. The value of i is a positive number and represents the serial number of a KFU in a design candidate generated through the synthesis process, in case a KFU is used more than once in that solution. The initial value of it to all KFUs are set to 1. MI and MO are two 1 × 3 row vectors and represent the input and output motions of the KFU respectively. The elements x1, x2 and x3 of these vectors are encoded symbols or numerals, and the encoded values and meanings of them are illustrated in Table 1.
Representation of a slider-crank and its KFUs
Meanings and coded values of the elements x1, x2 and x3 of vectors MI and MO
x1 (motional type): R = rotational motion, T = translational motion, S = swinging motion
x2 (continuity): 1 = continuous motion, -1 = intermittent motion
x3 (reciprocation): 1 = reciprocating motion, -1 = non-reciprocating motion
(A value of 0 is used when the corresponding element does not apply, as in the virtual "System Input" and "System Output" units.)
3.2 Kinematic Link Graph
The kinematic link graph (KLG) is a directed graph and constructed by looking each KFU as a vertex and the kinematic relationship between two KFUs as an edge. Through KLG, the synthesis problem of mechanisms are transformed from mechanical domain into graph domain, so that the feasible design candidates can be represented and calculated by the methods and theorems of graph theory.
When building a KLG, the first task is to extract KFUs from mechanisms. Table 2 illustrates eight KFUs from seven mechanisms. Here, the input and output of a mechanism system are abstracted as two virtual mechanisms named "System Input" and "System Output", respectively. It should be noticed that "System Input" only has the function of sending out power or motion, while "System Output" only has the function of receiving power or motion. Thus, the MI of "System Input" and the MO of "System Output" are both set to (0, 0, 0). Since the input and output motions of a mechanism system vary with the design requirements, the KFUs of them illustrated in Table 2 are only a simple instance.
Eight KFUs from seven mechanisms
Crank-rocker: \(\{ CR_{1}^{RS} ,(R,1, - 1),(S,1,1)\}\), \(\{ CR_{1}^{SR} ,(S,1,1),(R,1, - 1)\}\)
Spur gear: \(\{ SG_{1}^{RR} ,(R,1, - 1),(R,1, - 1)\}\)
Geneva wheel: \(\{ GW_{1}^{RR} ,(R,1, - 1),(R, - 1, - 1)\}\)
Cam-follower: \(\{ CF_{1}^{RT} ,(R,1, - 1),(T,1,1)\}\)
Pawl-ratchet wheel: \(\{ PRW_{1}^{SR} ,(S,1,1),(R, - 1, - 1)\}\)
System input: \(\{ SI_{1}^{R} ,(0,0,0),(R,1, - 1)\}\)
System output: \(\{ SO_{1}^{R} ,(R, - 1, - 1),(0,0,0)\}\)
The second task when building a KLG is to establish the rules for how the KFUs connect with each other to form the directed edges. Two connection rules are defined and presented below.
Connection Rule 1: If one KFU's MO is equal to another KFU's MI, a directed edge is formed between them, directed from the former to the latter.
Connection Rule 2: If a KFU's MI is equal to its MO, a directed loop is formed around it.
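The two rules reduce to a pairwise comparison of MO and MI vectors. The sketch below applies them to the KFUs of Table 2; skipping the (0, 0, 0) placeholder is an extra assumption made here so that "System Output" is never matched back to "System Input".

```python
NULL = ('0', 0, 0)   # placeholder motion of the virtual mechanisms

def build_klg_edges(kfus):
    """Apply Connection Rules 1 and 2 to a list of (label, MI, MO) tuples.
    Directed edges are returned as (label_u, label_v); label_u == label_v
    encodes the directed loop of Rule 2."""
    edges = []
    for (lu, mi_u, mo_u) in kfus:
        for (lv, mi_v, mo_v) in kfus:
            if lu == lv:
                if mi_u == mo_u and mi_u != NULL:   # Rule 2: directed loop
                    edges.append((lu, lu))
            elif mo_u == mi_v and mo_u != NULL:     # Rule 1: directed edge
                edges.append((lu, lv))
    return edges

# The eight KFUs of Table 2:
kfus_t2 = [
    ('SI_1^R',   ('0', 0, 0),   ('R', 1, -1)),
    ('CR_1^RS',  ('R', 1, -1),  ('S', 1, 1)),
    ('CR_1^SR',  ('S', 1, 1),   ('R', 1, -1)),
    ('SG_1^RR',  ('R', 1, -1),  ('R', 1, -1)),
    ('GW_1^RR',  ('R', 1, -1),  ('R', -1, -1)),
    ('CF_1^RT',  ('R', 1, -1),  ('T', 1, 1)),
    ('PRW_1^SR', ('S', 1, 1),   ('R', -1, -1)),
    ('SO_1^R',   ('R', -1, -1), ('0', 0, 0)),
]
print(len(build_klg_edges(kfus_t2)))   # 16, the number of edges of KLG(T2)
```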
Based on the KFUs illustrated in Table 2 and the two connection rules, a kinematic link graph named KLG(T2) is built as Figure 3 shows. The labels of the vertices in KLG(T2) are the identifications of the corresponding KFUs. In particular, the vertices that represent the KFUs of "System Input" and "System Output" in a KLG are called the input-vertex (e.g., \(SI_{1}^{R}\)) and the output-vertex (e.g., \(SO_{1}^{R}\)), respectively; they are specially marked in KLG(T2) to distinguish them from the other vertices.
Graph representation of KLG(T2)
3.3 Graph Representation of Design Candidate
There are two different graph representations of a design candidate, i.e., the walk representation and the path representation. They have different applications and meanings in the synthesis process.
3.3.1 Walk Representation
The walk representation adopts the vertex sequence and edge sequence of a walk to represent the mechanisms and their kinematic relationships in a design candidate. This kind of representation is mainly used in the calculation part of the synthesis process.
The walk representation derives directly from the KLG, for each walk whose head is the input-vertex and whose tail is the output-vertex of a KLG can be regarded as a design candidate. For instance, Figure 4(a) displays a walk in KLG(T2), and the design candidate it represents is shown in Figure 4(b). Basically, a walk consists of four parts: (I) vertex terms (VT), (II) edge terms (ET), (III) vertex sequence (VS), and (IV) edge sequence (ES), where VT and ET are the unordered sets of the vertices and edges of the walk, respectively, while VS and ES are the corresponding ordered sets of VT and ET. Thus, the mathematical model of the walk representation is defined as
$$W = \{ VS(W),ES(W)\} .$$
Relationship between the walk representation and path representation of a mechanical system
Figure 4(c) shows the VS(W) and ES(W) of the walk W in Figure 4(a). In a walk, the terms adjacent to an edge are two vertices, and these two vertices are exactly the initial vertex and terminal vertex of that edge. Thus, once the ET and ES of a walk are specified, the VT and VS of that walk are determined. Based on this, the key issue of the proposed computational synthesis approach is to compute the ET and ES of the walks whose head is the input-vertex and whose tail is the output-vertex of a KLG first, and then to figure out their VT and VS.
3.3.2 Path Representation
The path representation is used as the final storage representation of a design candidate, and it is transformed from the walk representation; the transformation between them is one-to-one. Since a path is a subgraph of the graph it belongs to, it can be described by an adjacency matrix. Therefore, the general mathematical model of the path representation of a design candidate is defined as
$$P(W) = \{ VS^{P} (W),A^{P} (W)\},$$
where VS P( W) is the vertex sequence of the path representation; it is generated by recoding the subscript index of any vertex that is used more than once in the VS of the corresponding walk representation. Figure 4(d) shows the path representation of Figure 4(c). Here, the vertex \(SG_{1}^{RR}\) is used twice in VS( W), so the second \(SG_{1}^{RR}\) is recoded into \(SG_{2}^{RR}\) in VS P( W). A P( W) is the adjacency matrix of the path representation. This matrix representation is very helpful for analyzing and optimizing the generated design candidates, and for finding the optimal solution among them in future research. That is why the path representation is more suitable than the walk representation as the final storage representation.
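A sketch of the walk-to-path conversion is given below: any repeated vertex label gets its serial number re-coded, and consecutive vertices of the (chain-shaped) path are linked in the adjacency matrix. The label format 'NAME_i^F' and the example walk are illustrative assumptions; the exact walk of Figure 4 is not reproduced in this excerpt.

```python
import numpy as np
from collections import defaultdict

def walk_to_path(vs_walk):
    """Turn the vertex sequence VS(W) of a walk representation into the path
    representation {VS^P(W), A^P(W)}: repeated labels are re-coded (the second
    'SG_1^RR' becomes 'SG_2^RR') and consecutive vertices are linked."""
    seen = defaultdict(int)
    vs_path = []
    for label in vs_walk:
        seen[label] += 1
        if seen[label] == 1:
            vs_path.append(label)
        else:
            name, rest = label.split('_', 1)    # 'SG', '1^RR'
            _, transform = rest.split('^', 1)   # '1', 'RR'
            vs_path.append(f"{name}_{seen[label]}^{transform}")
    n = len(vs_path)
    A_p = np.zeros((n, n), dtype=int)
    for k in range(n - 1):
        A_p[k, k + 1] = 1                       # superdiagonal chain
    return vs_path, A_p

# A walk of KLG(T2) in which SG_1^RR appears twice (via its directed loop):
vs_p, A_p = walk_to_path(['SI_1^R', 'SG_1^RR', 'SG_1^RR', 'GW_1^RR', 'SO_1^R'])
print(vs_p)   # ['SI_1^R', 'SG_1^RR', 'SG_2^RR', 'GW_1^RR', 'SO_1^R']
```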
3.4 Weighted Matrix Theorem
The weighted matrix theorem determines the ET of a walk with the aid of a weighted graph and its weighted matrix. In graph theory, a weighted graph, which assigns a weight to each of its edges, is widely used in practical problems. The form of the weights is arbitrary and depends on the problem at hand: numerals, functions, symbols, etc. The weighted matrix \(A_{\omega } = [a_{ij} ]\) is an extension of the adjacency matrix that describes a weighted graph in matrix form; its element a ij is the weight of the corresponding edge.
Let edge labels \(e_{i} (i = 1,2, \ldots ,16)\) be the weighted values of KLG( T2) in Figure 3. Then, KLG( T2) becomes a weighted graph and its weighted matrix is
$$A_{\omega } (KLG(T2)) = \left[ {\begin{array}{*{20}c} 0 & {e_{1} } & {e_{2} } & 0 & {e_{3} } & 0 & {e_{4} } & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & {e_{5} } \\ 0 & 0 & 0 & {e_{6} } & 0 & {e_{7} } & 0 & 0 \\ 0 & {e_{8} } & {e_{9} } & 0 & {e_{10} } & 0 & {e_{11} } & 0 \\ 0 & {e_{12} } & {e_{13} } & 0 & {e_{14} } & 0 & {e_{15} } & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & {e_{16} } \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array} } \right].$$
It is an 8 × 8 symbolic matrix whose rows and columns are associated with the vertices of KLG( T2); e.g., the first row and first column are associated with the vertex \(SI_{1}^{R}\), while the last (8th) row and column are associated with the vertex \(SO_{1}^{R}\). As mentioned above, the adjacency matrix theorem gives a way to determine the number of walks of length k from one vertex to another in a directed graph. Based on it, a weighted matrix theorem is defined as follows.
Weighted Matrix Theorem: The edge terms (ET) of the walks of length k from vertex V i to vertex V j in a weighted graph G w, which takes the edge labels as the weights of its edges, are given by the ( i, j) element of \(A_{\omega }^{k} (G_{w} )\).
Let W( k, V i, V j) denote the walks of length k from V i to V j in G w. The theorem can be proved by mathematical induction as follows:
When k = 1, the edge term of W(1, V i, V j) is the edge e( V i, V j), so the theorem holds directly by the definition of the weighted matrix. Let \(a_{i,j}^{k - 1}\) and \(a_{i,j}^{k}\) be the ( i, j) elements of \(A_{\omega }^{k - 1} (G_{w} )\) and \(A_{\omega }^{k} (G_{w} )\), respectively, and assume that the edge terms (ET) of W( k − 1, V i, V j) are given by \(a_{i,j}^{k - 1}\). Since \(A_{\omega }^{k} (G_{w} ) = A_{\omega }^{k - 1} (G_{w} )A_{\omega }\),
$$a_{i,j}^{k} = \sum\limits_{m = 1}^{N_{v}} a_{i,m}^{k - 1} \, a_{m,j} ,$$
where N v is the dimension of the weighted matrix and equals the number of vertices of G w. Each walk of W( k, V i, V j) is constructed by appending an edge e( V m, V j) to a walk of W( k − 1, V i, V m). In Eq. ( 4), the value of a m,j is either 0 or the edge label of e( V m, V j), while the value of \(a_{i,m}^{k - 1}\) is either 0 or a polynomial made up of one or several monomials; by the induction assumption, \(a_{i,m}^{k - 1}\) is the ET to all the walks of W( k − 1, V i, V m). Thus, Eq. ( 4) can be viewed as the polynomial operation that constructs W( k, V i, V j), and \(a_{i,j}^{k}\) is the ET to all the walks of W( k, V i, V j).
In Eq. ( 4), the value of \(a_{i,j}^{k}\) is likewise either 0 or a polynomial. When \(a_{i,j}^{k}\) is 0, there is no walk of length k from V i to V j in the weighted graph. When \(a_{i,j}^{k}\) is a polynomial, the alphabetic items of each of its monomials are the ET of a walk of W( k, V i, V j) in the weighted graph. For example, the (1,8) element of \(A_{\omega }^{4} (KLG(T2))\) is
$$a_{1,8}^{4} = e_{2} e_{5} e_{6} e_{8} + e_{3} e_{5} e_{12} e_{14} + e_{3} e_{7} e_{13} e_{16}.$$
It can be seen that \(a_{1,8}^{4}\) is made up of three different monomials, and the alphabetic items of each monomial are the ET of a walk of W(4, V 1, V 8) in KLG( T2), as Figure 5 shows. Here, V 1 = \(SI_{1}^{R}\) and V 8 = \(SO_{1}^{R}\), as \(A_{\omega } (KLG(T2))\) shows.
Three walks in KLG( T2) corresponding to the three monomials in \(a_{1,8}^{4}\)
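Because \(A_{\omega }(KLG(T2))\) is written out explicitly above, this element can be checked symbolically. The sketch below transcribes the matrix into SymPy and expands the (1, 8) element of its fourth power.

```python
import sympy as sp

e = sp.symbols('e1:17')          # e[0] .. e[15] stand for e_1 .. e_16
E = lambda k: e[k - 1]

# Non-zero entries (row, column) -> edge index, transcribed from A_w(KLG(T2)):
entries = {(1, 2): 1, (1, 3): 2, (1, 5): 3, (1, 7): 4,
           (2, 8): 5,
           (3, 4): 6, (3, 6): 7,
           (4, 2): 8, (4, 3): 9, (4, 5): 10, (4, 7): 11,
           (5, 2): 12, (5, 3): 13, (5, 5): 14, (5, 7): 15,
           (6, 8): 16}

A = sp.zeros(8, 8)
for (i, j), k in entries.items():
    A[i - 1, j - 1] = E(k)

print(sp.expand((A**4)[0, 7]))
# e2*e5*e6*e8 + e3*e5*e12*e14 + e3*e7*e13*e16, the three monomial-walks above
```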
As mentioned above, each walk in KLG( T2) whose head is the input-vertex (i.e., \(SI_{1}^{R}\)) and whose tail is the output-vertex (i.e., \(SO_{1}^{R}\)) can be regarded as a design candidate. Therefore, the three monomials in the polynomial \(a_{1,8}^{4}\) can be regarded as three different design candidates. To highlight this kind of monomial and polynomial, they are named monomial-walk and polynomial-walk, respectively, in this paper. Thus, it can be said that the weighted matrix theorem transforms the synthesis process of mechanical conceptual design into polynomial operations.
4 Polynomial Operations
4.1 Polynomial-Walk Operation
The polynomial-walk operation is used to calculate the ET of all the walk representations of design candidates in a KLG. Based on the weighted matrix theorem, a formula is given as
$$\left\{ {\begin{array}{*{20}l} {P_{W}^{{N_{\text{max} } }} (KLG) = \sum\limits_{k = 2}^{{N_{\text{max} } + 1}} {aA_{\omega }^{k - 2} (KLG)b} }, \hfill \\ {a = A_{\omega } (KLG)(m,:)}, \hfill \\ {b = A_{\omega } (KLG)(:,n)}, \hfill \\ \end{array} } \right.$$
where \(P_{W}^{{N_{\text{max} } }} (KLG)\) is the computed polynomial-walk and \(A_{\omega } (KLG)\) is the weighted matrix of the KLG. N max is a positive integer denoting the maximum number of mechanisms a mechanical system may contain, not counting the virtual mechanisms "System Input" and "System Output"; its value is given by the design specifications. Moreover, m and n are the row index of the input-vertex and the column index of the output-vertex of \(A_{\omega } (KLG)\), respectively. Thus, a and b are the row vector of the input-vertex and the column vector of the output-vertex in \(A_{\omega } (KLG)\), respectively.
For example, in the weighted matrix \(A_{\omega } (KLG(T2))\) of KLG( T2), m = 1 and n = 8. Then a = (0, e 1, e 2, 0, e 3, 0, e 4, 0) and b = (0, e 5, 0, 0, 0, e 16, 0, 0) T. Setting N max = 3, the polynomial-walk \(P_{W}^{3} [KLG(T2)]\) calculated by Eq. ( 6) is
$$\begin{aligned} P_{W}^{3} [KLG(T_{2} )] & = e_{1} e_{5} + e_{12} e_{3} e_{5} + e_{16} e_{2} e_{7} + e_{12} e_{14} e_{3} e_{5} \hfill \\ & \quad + e_{13} e_{16} e_{3} e_{7} + e_{2} e_{5} e_{6} e_{8}. \hfill \\ \end{aligned}$$
Let N m be the number of mechanisms in a design candidate and N e be the degree of the corresponding monomial-walk; then N m = N e − 1. Therefore, Eq. ( 7) gives the ET of the walk representations of six feasible design candidates, with the number of mechanisms ranging from N m = 1 to N m = 3.
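The polynomial-walk formula can be coded almost verbatim. The sketch below defines it as a function and, re-building the transcription of \(A_{\omega }(KLG(T2))\) used in the previous sketch, reproduces the six monomial-walks listed above.

```python
import sympy as sp

def polynomial_walk(A_w, m, n, n_max):
    """P_W^{Nmax} = sum_{k=2}^{Nmax+1} a * A_w^(k-2) * b, where a is the
    input-vertex row and b the output-vertex column of A_w (m, n are 1-based)."""
    a = A_w[m - 1, :]
    b = A_w[:, n - 1]
    total = sp.S.Zero
    for k in range(2, n_max + 2):
        total += (a * A_w**(k - 2) * b)[0, 0]
    return sp.expand(total)

e = sp.symbols('e1:17')
E = lambda k: e[k - 1]
entries = {(1, 2): 1, (1, 3): 2, (1, 5): 3, (1, 7): 4, (2, 8): 5,
           (3, 4): 6, (3, 6): 7, (4, 2): 8, (4, 3): 9, (4, 5): 10,
           (4, 7): 11, (5, 2): 12, (5, 3): 13, (5, 5): 14, (5, 7): 15,
           (6, 8): 16}
A = sp.zeros(8, 8)
for (i, j), k in entries.items():
    A[i - 1, j - 1] = E(k)

print(polynomial_walk(A, m=1, n=8, n_max=3))
# the six monomial-walks of P_W^3[KLG(T2)] above (term order may differ)
```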
4.2 Edge Sequence Operation
Generally, every monomial-walk is made up of two parts, i.e., the numeral item and the alphabetic items. In the monomial-walks of the polynomial-walks \(a_{1,8}^{4}\) and \(P_{W}^{3} [KLG(T2)]\), the numeral items are all equal to 1. However, the numeral item of a monomial-walk may sometimes be larger than 1. For example, Figure 6(a) shows the composition of \(M_{1}^{Kw}\), which is a monomial-walk of the (1,8) element of \(A_{\omega }^{8} (KLG(T2))\); here the numeral item of \(M_{1}^{Kw}\) is 2. As mentioned above, the alphabetic items of a monomial-walk are the ET (edge terms) of a walk of the KLG. However, it should be noticed that several different walks of a KLG may have the same ET, whereas a walk is uniquely determined by its ES (edge sequence). This means that the ET of a monomial-walk may be sorted into several different ES belonging to several different walks. Since the numeral item of a monomial-walk is generated by merging like monomials in the polynomial operations of Eq. ( 5) and Eq. ( 6), its value reveals how many ES the ET of the monomial-walk can be sorted into.
Composition of monomial-walk \(M_{1}^{Kw}\)and its two edge sequences
In Figure 6(a), the ET of \(M_{1}^{Kw}\) is ET( \(M_{1}^{Kw}\)) = { e 10, e 13, e 16, e 2, e 6, e 6, e 7, e 9}. It can be sorted into two different edge sequences, \(ES_{1} (M_{1}^{Kw} )\) and \(ES_{2} (M_{1}^{Kw} )\) (see Figure 6(b)), according to the connection relationships between the edges shown in Figure 6(c). In fact, Figure 6(c) is also the graph representation of two different walks in KLG( T2) from \({\text{SI}}_{ 1}^{\text{R}}\) to \({\text{SO}}_{ 1}^{\text{R}}\). Both walks have the same edge terms ET( \(M_{1}^{Kw}\)), and \(ES_{1} (M_{1}^{Kw} )\) and \(ES_{2} (M_{1}^{Kw} )\) are exactly their respective ES.
Based on the above analysis, the key issue of the edge sequence operation is how to figure out all the edge sequences of a monomial-walk \(M^{Kw}\), i.e., \(ES_{\ell } (M^{Kw} )(\ell = 1,2, \ldots ,N_{n} )\), from its edge terms ET( \(M^{Kw}\)) and the connection relationships between the edges. Here, N n is the value of the numeral item of \(M^{Kw}\).
Let e i and e j be any two edges in ET( \(M^{Kw}\)); if ter( e i) = init( e j), then e i and e j may be adjacent in \(ES_{\ell } (M^{Kw} )\). Besides, let e in and e out be the first and last edge of \(ES_{\ell } (M^{Kw} )\), respectively; then init( e in) equals the input-vertex of the KLG and ter( e out) equals the output-vertex of the KLG. Based on these statements, an edge sequence sorting algorithm is presented below.
Edge Sequence Sorting Algorithm:
Here, \(S_{k - set}^{in}\) is a set in which each item \(s_{i,k - set}^{in}\) is an ordered set of k edges starting with e in, while \(S_{k - set}^{out}\) is a set in which each item \(s_{m,k - set}^{out}\) is an ordered set of k edges ending with e out. \(S_{(k + 1) - set}^{in}\) and \(S_{(k + 1) - set}^{out}\) have similar meanings to \(S_{k - set}^{in}\) and \(S_{k - set}^{out}\), except that their items contain k + 1 edges. The value of the function Ter() is the terminal vertex of the last edge in the corresponding ordered set, while the value of the function Init() is the initial vertex of the first edge in the corresponding ordered set. For example, for the ordered set of edges { e 2, e 6, e 9}, Ter({ e 2, e 6, e 9}) = ter( e 9) = \(CR_{1}^{RS}\) and Init({ e 2, e 6, e 9}) = init( e 2) = \(SI_{1}^{R}\). The values of the functions First() and Last() are the first edge and the last edge in the corresponding ordered set, respectively, e.g., First({ e 2, e 6, e 9}) = e 2 and Last({ e 2, e 6, e 9}) = e 9. To explain the steps of the edge sequence sorting algorithm more clearly, \(M_{1}^{Kw}\) is taken as an example, and the execution process of the algorithm is illustrated in Table 3.
Execution steps and results for each step of the edge sequence sorting algorithm, with \(M_{1}^{Kw}\) as an example
Execution steps
Results for each step
k = 1, k + 1 = 2 < (8/2 = 4)
\(e_{in} = e_{2}\), \(e_{out} = e_{16}\), thus \(S_{1 - set}^{in} = \{ \{ e_{2} \} \}\), \(S_{1 - set}^{out} = \{ \{ e_{16} \} \}\), then \(S_{2 - set}^{in} = \{ \{ e_{2} ,e_{6} \} ,\{ e_{2} ,e_{7} \} \}\), \(S_{2 - set}^{out} = \{ \{ e_{7} ,e_{16} \} \}\)
k = 2, ( k + 1 = 3) < (8/2 = 4)
\(S_{3 - set}^{in} = \{ \{ e_{2} ,e_{6} ,e_{9} \} ,\{ e_{2} ,e_{6} ,e_{10} \} \}\), \(S_{3 - set}^{out} = \{ \{ e_{9} ,e_{7} ,e_{16} \} ,\{ e_{13} ,e_{7} ,e_{16} \} \}\)
k = 3, ( k + 1 = 4) = (8/2 = 4)
\(S_{4 - set}^{in} = \{ \{ e_{2} ,e_{6} ,e_{9} ,e_{6} \} ,\{ e_{2} ,e_{6} ,e_{10} ,e_{13} \} ,\) \(\{ e_{2} ,e_{6} ,e_{9} ,e_{7} \} \}\),
\(S_{4 - set}^{out} = \{ \{ e_{6} ,e_{9} ,e_{7} ,e_{16} \} ,\{ e_{10} ,e_{13} ,e_{7} ,e_{16} \} \} .\)
\(Ter(\{ e_{2} ,e_{6} ,e_{9} ,e_{6} \} ) = Init(\{ e_{10} ,e_{13} ,e_{7} ,e_{16} \} )\)
= \(CR_{1}^{SR}\), and \(Ter(\{ e_{2} ,e_{6} ,e_{10} ,e_{13} \} ) = Init(\{ e_{6} ,e_{9} ,e_{7} ,e_{16} \} )\) = \(CR_{1}^{RS}\),
\(ES_{1} (M_{1}^{Kw} )\)
= \(\{ e_{2} ,e_{6} ,e_{9} ,e_{6} \} \cup \{ e_{10} ,e_{13} ,e_{7} ,e_{16} \}\)
= \(\{ e_{2} ,e_{6} ,e_{9} ,e_{6} ,e_{10} ,e_{13} ,e_{7} ,e_{16} \},\)
\(ES_{2} (M_{1}^{Kw} )\)
= \(\{ e_{2} ,e_{6} ,e_{10} ,e_{13} \} \cup \{ e_{6} ,e_{9} ,e_{7} ,e_{16} \}\)
= \(\{ e_{2} ,e_{6} ,e_{10} ,e_{13} ,e_{6} ,e_{9} ,e_{7} ,e_{16} \}\)
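The algorithm listing itself is not reproduced in this excerpt, so the sketch below uses a plain backtracking routine that yields the same edge sequences as the bidirectional procedure explained above; it is an equivalent substitute rather than a line-by-line implementation. The edge endpoints are read off the walk and vertex sequences of \(M_{1}^{Kw}\) quoted in this section, with abbreviated vertex names.

```python
from collections import Counter

def sort_edge_sequences(edge_terms, init, ter, v_in, v_out):
    """Enumerate every edge sequence (ES) into which a monomial-walk's edge
    terms (ET) can be sorted: grow a walk edge by edge from the input-vertex,
    using each edge label only as often as it occurs in the ET, and keep the
    orderings that end at the output-vertex with every edge consumed."""
    remaining = Counter(edge_terms)
    results = []

    def grow(vertex, seq):
        if sum(remaining.values()) == 0:
            if vertex == v_out:
                results.append(list(seq))
            return
        for edge in list(remaining):
            if remaining[edge] and init[edge] == vertex:
                remaining[edge] -= 1
                seq.append(edge)
                grow(ter[edge], seq)
                seq.pop()
                remaining[edge] += 1

    grow(v_in, [])
    return results

# Endpoints of the edges appearing in ET(M_1^Kw), abbreviated vertex names:
init = {'e2': 'SI', 'e6': 'CR_RS', 'e7': 'CR_RS', 'e9': 'CR_SR',
        'e10': 'CR_SR', 'e13': 'SG', 'e16': 'PRW'}
ter = {'e2': 'CR_RS', 'e6': 'CR_SR', 'e7': 'PRW', 'e9': 'CR_RS',
       'e10': 'SG', 'e13': 'CR_RS', 'e16': 'SO'}

et_m1 = ['e10', 'e13', 'e16', 'e2', 'e6', 'e6', 'e7', 'e9']   # ET(M_1^Kw)
for es in sort_edge_sequences(et_m1, init, ter, 'SI', 'SO'):
    print(es)   # the two edge sequences ES_1 and ES_2 of Figure 6(b)
```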
4.3 Vertex Sequence Operation
The relationship between the vertex sequence \(VS_{\ell } (M^{Kw} )\) and the edge sequence \(ES_{\ell } (M^{Kw} )\) of a monomial-walk M Kw is one-to-one. The task of the vertex sequence operation is to figure out all the \(VS_{\ell } (M^{Kw} )\) of M Kw from its \(ES_{\ell } (M^{Kw} )\). To complete this task, a formula is given as
$$VS_{\ell } (M^{Kw} ) = \left[ \bigcup\limits_{i = 1}^{N_{E}} init(e_{i}) \right] \cup ter(e_{f}), \quad e_{i} ,e_{f} \in ES_{\ell } (M^{Kw} ),$$
where init( e i) is the initial vertex of the ith-edge e i in \(ES_{\ell } (M^{Kw} )\), while ter( e f) is the terminal vertex of the final edge e f in \(ES_{\ell } (M^{Kw} )\). N E is the number of edges in \(ES_{\ell } (M^{Kw} )\). For example, \(ES_{1} (M_{1}^{Kw} )\) and \(ES_{2} (M_{1}^{Kw} )\) are two edge sequences of \(M_{1}^{Kw}\) (see Figure 6(b)), and the final edge of both of them is e 16. Based on Eq. ( 8), the two vertex sequences \(VS_{1} (M_{1}^{Kw} )\) and \(VS_{2} (M_{1}^{Kw} )\) of \(M_{1}^{Kw}\) are
$$\left\{ \begin{aligned} VS_{1} (M_{1}^{Kw} ) &= \{ SI_{1}^{R} ,CR_{1}^{RS} ,CR_{1}^{SR} ,CR_{1}^{RS} ,CR_{1}^{SR} ,SG_{1}^{RR} ,CR_{1}^{RS} ,PRW_{1}^{SR} ,SO_{1}^{R} \}, \\ VS_{2} (M_{1}^{Kw} ) &= \{ SI_{1}^{R} ,CR_{1}^{RS} ,CR_{1}^{SR} ,SG_{1}^{RR} ,CR_{1}^{RS} ,CR_{1}^{SR} ,CR_{1}^{RS} ,PRW_{1}^{SR} ,SO_{1}^{R} \}. \end{aligned} \right.$$
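Eq. (8) is essentially one line of code. The helper below takes the same init/ter mappings as the previous sketch; the usage shown in the comment reproduces \(VS_{1}(M_{1}^{Kw})\) above.

```python
def vertex_sequence(edge_sequence, init, ter):
    """Eq. (8): the vertex sequence is the initial vertex of every edge of the
    edge sequence, followed by the terminal vertex of the final edge."""
    return [init[e] for e in edge_sequence] + [ter[edge_sequence[-1]]]

# With the mappings of the previous sketch:
# vertex_sequence(['e2', 'e6', 'e9', 'e6', 'e10', 'e13', 'e7', 'e16'], init, ter)
# -> ['SI', 'CR_RS', 'CR_SR', 'CR_RS', 'CR_SR', 'SG', 'CR_RS', 'PRW', 'SO']
```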
5 Computational Flowchart of Synthesis Approach
The proposed computational synthesis approach begins with the user-specified design requirements, which include the input motion, the output motion and the maximum number of mechanisms N max in the design candidates to be generated. Then the graph framework is built, and the different polynomial operations are applied to figure out the walk representations and path representations of all the feasible design candidates. The computational flowchart of the synthesis approach is illustrated in Figure 7. In summary, the steps of the computational process are:
Computational flowchart of the synthesis approach
Step 1: Set design requirements and extract KFUs from the chosen mechanisms.
Step 2: Construct the KLG based on the two connection rules, and generate its vertices set V( KLG) and weighted matrix \(A_{\omega } (KLG)\) so that the KLG can be recognized and used in a computer.
Step 3: Run the polynomial-walk operation to compute the polynomial-walk \(P_{W}^{{N_{\text{max} } }} (KLG)\), and then separate and save all its monomial-walks \(M_{i}^{Kw}\) in the set \(S_{{M^{Kw} }}\).
Step 4: Run the edge sequence operation for each \(M_{i}^{Kw}\) to figure out all its edge sequences \(ES_{\ell } (M_{i}^{Kw} )\), and then save them in the set S ES.
Step 5: Run the vertex sequence operation for each \(ES_{\ell } (M_{i}^{Kw} )\) to figure out its vertex sequence \(VS_{\ell } (M_{i}^{Kw} )\). Then a walk representation \(W_{\ell }\) of \(M_{i}^{Kw}\) is formulated.
Step 6: Generate and save the vertex sequence \(VS^{P} (W_{\ell } )\) and the adjacency matrix A P( \(W_{\ell }\)) of the path representation of \(W_{\ell }\).
6 Design Example
The task here is to figure out all the feasible design candidates of a mechanical system that transforms an input continuous rotational motion into an output reciprocating translational motion, using the chosen mechanisms. The maximum number of mechanisms in the mechanical system, N max, is set to 3.
6.1 Extract and Formulate the KFUs
Due to space considerations, only four kinds of mechanisms, i.e., slider-crank, worm gear, cam-follower and spur gear, out of the 43 mechanisms identified in Ref. [ 3 ], are chosen as the building blocks to construct the mechanical system. The KFUs extracted from these four mechanisms and from the two virtual mechanisms, i.e., "System Input" and "System Output", are illustrated in Table 4.
Seven KFUs of the design example
Slider-crank: \(\{ SC_{1}^{RT} ,(R,1, - 1),(T,1,1)\}\), \(\{ SC_{1}^{TR} ,(T,1,1),(R,1, - 1)\}\)
Worm-gear: \(\{ WG_{1}^{RR} ,(R,1, - 1),(R,1, - 1)\}\)
Spur gear: \(\{ SG_{1}^{RR} ,(R,1, - 1),(R,1, - 1)\}\)
Cam-follower: \(\{ CF_{1}^{RT} ,(R,1, - 1),(T,1,1)\}\)
System input: \(\{ SI_{1}^{R} ,(0,0,0),(R,1, - 1)\}\)
System output: \(\{ SO_{1}^{T} ,(T,1,1),(0,0,0)\}\)
6.2 Construct Kinematic Link Graph KLG
The structure of a KLG is determined by the types of the KFUs and their kinematic relationships. Based on Table 4, the graph representation of the KLG of the design example, i.e., KLG( T4), is shown in Figure 8(a). It is a directed graph with seven vertices and twenty edges; the input-vertex is \(SI_{1}^{R}\) and the output-vertex is \(SO_{1}^{T}\). The vertices set V( KLG( T4)) and the weighted matrix \(A_{\omega } (KLG(T4))\) of KLG( T4) are shown in Figure 8(b) for the subsequent computations.
Graph representation, vertices set and weighted matrix of KLG( T4)
6.3 Compute Polynomial-Walk
In the weighted matrix \(A_{\omega } (KLG(T4))\), the row index of \(SI_{1}^{R}\) is m = 1, while the column index of \(SO_{1}^{T}\) is n = 7. Then the row vector of \(SI_{1}^{R}\) is a = (0, e 1, e 2, 0, e 3, e 4, 0) and the column vector of \(SO_{1}^{T}\) is b = (0, 0, e 10, 0, 0, e 20, 0) T. Running the polynomial-walk operation, i.e., substituting the values of m, n, a, b, N max and \(A_{\omega } (KLG(T4))\) into Eq. ( 6), the computed polynomial-walk \(P_{W}^{3} [KLG(T4)]\) is
$$\begin{aligned} P_{W}^{3} [KLG(T4)] = e_{2} e_{10} + e_{4} e_{20} + e_{1} e_{6} e_{10} + e_{1} e_{8} e_{20} + e_{3} e_{10} e_{16} + \hfill \\ e_{3} e_{18} e_{20} + e_{1} e_{5} e_{6} e_{10} {\kern 1pt} + e_{2} e_{9} e_{10} e_{12} + e_{1} e_{5} e_{8} e_{20} + \hfill \\ e_{1} e_{7} e_{10} e_{16} + e_{3} e_{6} e_{10} e_{15} + e_{2} e_{9} e_{14} e_{20} + e_{4} e_{10} e_{12} e_{19} + \hfill \\ e_{1} e_{7} e_{18} e_{20} + e_{3} e_{8} e_{15} e_{20} + e_{3} e_{10} e_{16} e_{17} + \hfill \\ e_{4} e_{14} e_{19} e_{20} + e_{3} e_{17} e_{18} e_{20}. \hfill \\ \end{aligned}$$
It can be found that there are 18 different monomial-walks \(M_{i}^{Kw} (i = 1,2, \ldots ,18)\) in \(P_{W}^{3} [KLG(T4)]\), whose numeral items are all equal to 1. Thus, there are a total of 18 different design candidates assembled from the four chosen mechanisms in accordance with the design requirements.
6.4 Compute the Walk Representations and Path Representations to the Design Candidates
Running the edge sequence operation and the vertex sequence operation for each \(M_{i}^{Kw}\) in turn, the obtained edge sequences and vertex sequences of the walk representations of all design candidates are illustrated in Table 5, Column 3. For each walk representation, the corresponding path representation is given in Table 5, Column 4. Furthermore, the design candidate corresponding to each walk representation and path representation is displayed as a two-dimensional motion sketch in Table 5, Column 5.
The monomial-walks, walk representations and path representations to the 18 design candidates
Monomial-walks
Walk representations
Path representations
Design candidates
\(M_{1}^{Kw}\)
\(e_{2} e_{10}\)
\(W_{1} (M_{1}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{1}^{Kw} ) = \{ SI_{1}^{R} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{1}^{Kw} ) = \{ e_{2} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{1}^{Kw} )) = \{ SI_{1}^{R} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(A^{P} (W_{1} (M_{1}^{Kw} )) = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \end{array} } \right]\)
\(W_{1} (M_{2}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{2}^{Kw} ) = \{ SI_{1}^{R} ,CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{2}^{Kw} ) = \{ e_{4} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{2}^{Kw} )) = \{ SI_{1}^{R} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(e_{1} e_{6} e_{10}\)
\(W_{1} (M_{3}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{3}^{Kw} ) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{3}^{Kw} ) = \{ e_{1} ,e_{6} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{3}^{Kw} )) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(A^{P} (W_{1} (M_{3}^{Kw} )) = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ \end{array} } \right]\)
\(W_{1} (M_{4}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{4}^{Kw} ) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{4}^{Kw} ) = \{ e_{1} ,e_{8} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{4}^{Kw} )) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(e_{3} e_{10} e_{16}\)
\(W_{1} (M_{5}^{Kw} ) =\left\{ {\begin{array}{*{20}l} {VS_{1} (M_{5}^{Kw} ) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{5}^{Kw} ) = \{ e_{3} ,e_{16} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{5}^{Kw} )) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{6}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{6}^{Kw} ) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{6}^{Kw} ) = \{ e_{3} ,e_{18} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{6}^{Kw} )) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(e_{1} e_{5} e_{6} e_{10}\)
\(W_{1} (M_{7}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{7}^{Kw} ) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,WG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{7}^{Kw} ) = \{ e_{1} ,e_{5} ,e_{6} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{7}^{Kw} )) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,WG_{2}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(A^{P} (W_{1} (M_{7}^{Kw} )) = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} } \right]\)
\(e_{2} e_{9} e_{10} e_{12}\)
\(W_{1} (M_{8}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{8}^{Kw} ) = \{ SI_{1}^{R} ,SC_{1}^{RT} ,SC_{1}^{TR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{8}^{Kw} ) = \{ e_{2} ,e_{9} ,e_{12} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{8}^{Kw} )) = \{ SI_{1}^{R} ,SC_{1}^{RT} ,SC_{1}^{TR} ,SC_{2}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{9}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{9}^{Kw} ) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,WG_{1}^{RR} ,{\kern 1pt} {\kern 1pt} CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{9}^{Kw} ) = \{ e_{1} ,e_{5} ,e_{8} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{9}^{Kw} )) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,WG_{2}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(M_{10}^{Kw}\)
\(W_{1} (M_{10}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{10}^{Kw} ) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,SG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{10}^{Kw} ) = \{ e_{1} ,e_{7} ,e_{16} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{10}^{Kw} )) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,SG_{2}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(A^{P} (W_{1} (M_{10}^{Kw} )) = \left[ {\begin{array}{*{20}c} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \\ \end{array} } \right]\)
\(W_{1} (M_{11}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{11}^{Kw} ) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,WG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{11}^{Kw} ) = \{ e_{3} ,e_{15} ,e_{6} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{11}^{Kw} )) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,WG_{2}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{12}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{12}^{Kw} ) = \{ SI_{1}^{R} ,SC_{1}^{RT} ,SC_{1}^{TR} ,CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{12}^{Kw} ) = \{ e_{2} ,e_{9} ,e_{14} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{12}^{Kw} )) = \{ SI_{1}^{R} ,SC_{1}^{RT} ,SC_{1}^{TR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(e_{4} e_{10} e_{12} e_{19}\)
\(W_{1} (M_{13}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{13}^{Kw} ) = \{ SI_{1}^{R} ,CF_{1}^{RT} ,SC_{1}^{TR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{13}^{Kw} ) = \{ e_{4} ,e_{19} ,e_{12} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{13}^{Kw} )) = \{ SI_{1}^{R} ,CF_{1}^{RT} ,SC_{1}^{TR} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{14}^{Kw} ) =\left\{ {\begin{array}{*{20}l} {VS_{1} (M_{14}^{Kw} ) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,SG_{1}^{RR} ,{\kern 1pt} CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{14}^{Kw} ) = \{ e_{1} ,e_{7} ,e_{18} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{14}^{Kw} )) = \{ SI_{1}^{R} ,WG_{1}^{RR} ,SG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{15}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{15}^{Kw} ) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,WG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{15}^{Kw} ) = \{ e_{3} ,e_{15} ,e_{8} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{15}^{Kw} )) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,WG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{16}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{16}^{Kw} ) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,SG_{1}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{16}^{Kw} ) = \{ e_{3} ,e_{17} ,e_{16} ,e_{10} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{16}^{Kw} )) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,SG_{2}^{RR} ,SC_{1}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{17}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{17}^{Kw} ) = \{ SI_{1}^{R} ,CF_{1}^{RT} ,SC_{1}^{TR} ,{\kern 1pt} CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{17}^{Kw} ) = \{ e_{4} ,e_{19} ,e_{14} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{17}^{Kw} )) = \{ SI_{1}^{R} ,CF_{1}^{RT} ,SC_{1}^{TR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
\(W_{1} (M_{18}^{Kw} ) = \left\{ {\begin{array}{*{20}l} {VS_{1} (M_{18}^{Kw} ) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,SG_{1}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \} } \hfill \\ {ES_{1} (M_{18}^{Kw} ) = \{ e_{3} ,e_{17} ,e_{18} ,e_{20} \} } \hfill \\ \end{array} } \right.\)
\(VS^{P} (W_{1} (M_{18}^{Kw} )) = \{ SI_{1}^{R} ,SG_{1}^{RR} ,SG_{2}^{RR} ,CF_{1}^{RT} ,SO_{1}^{T} \}\)
This paper presents a novel computational synthesis approach for mechanical conceptual design. It employs the walk and path structures of graph theory to represent design candidates. A kinematic link graph (KLG) is constructed by regarding every kinematic function unit (KFU) extracted from the mechanisms as a vertex, and two connection rules are defined to build the edges between those vertices. In the KLG, each walk starting from the input-vertex and ending at the output-vertex is regarded as a design candidate. Since a walk is represented by its edge sequence (ES) and vertex sequence (VS), the synthesis problem of mechanisms is turned into figuring out the ES and VS of all such walks in the KLG. To this end, a weighted matrix theorem is defined and proved. Based on this theorem, a formula is derived to compute a polynomial in which the alphabetic items of every monomial are the edge terms (ET) of one or several different walks. An edge sequence sorting algorithm is then given to sort and split the alphabetic items into one or several different ES, and a further formula computes the VS of a walk from its ES. The proposed synthesis approach was successfully applied to figure out, from the four chosen mechanisms, all the feasible design candidates of a mechanical system that transforms an input continuous rotational motion into an output reciprocating translational motion. All the walk representations, path representations and two-dimensional motion sketches of the 18 design candidates of the design example are presented. With the proposed approach, there is no need to search and match the mechanisms in a mechanism library exhaustively to find all the feasible solutions; they can all be obtained through one polynomial operation. Moreover, although the proposed approach is graph-based, it does not suffer from the pseudo-isomorphic graph problem.
GL was in charge of the whole trial; LH wrote the manuscript; XY and BH assisted with building up the framework of the research. All authors read and approved the final manuscript.
Lin Han, born in 1988, is currently a PhD candidate at Shaanxi Engineering Laboratory for Transmissions and Controls, Northwestern Polytechnical University (NWPU), China. He received his bachelor degree from Northwestern Polytechnical University, China, in 2011. His research interests include computer-aided design and optimization design of mechanical systems.
Geng Liu, born in 1961, is currently a professor and a supervisor of PhD candidates and director of Shaanxi Engineering Laboratory for Transmissions and Controls, Northwestern Polytechnical University ( NWPU), China. His research interests include mechanical transmissions; dynamics of mechanical systems; virtual and physical prototyping simulation and design technology of mechanical systems; finite element methods; contact mechanics.
Xiaohui Yang, born in 1970, is currently a professor at Shaanxi Engineering Laboratory for Transmissions and Controls, Northwestern Polytechnical University (NWPU), China. His research interests include mechanical design, measurement and control technology and visualization in scientific computing.
Bing Han, born in 1981, is currently an engineer at Shaanxi Engineering Laboratory for Transmissions and Controls, Northwestern Polytechnical University (NWPU), China. His research interests include collaborative design and simulation integration technology of mechanical system, data and process management of design process.
Supported by State Key Program of National Natural Science Foundation of China (Grant No. 51535009), and 111 Project of China (Grant No. B13044).
[1] S Kota, C Lee. A functional framework for hydraulic systems design using abstraction/decomposition hierarchies. ASME International Computers in Engineering Conference, American Society of Mechanical Engineers, Boston, August, 1990: 327–340.
[2] S Kota, S J Chiou. Conceptual design of mechanisms based on computational synthesis and simulation of kinematic building blocks. Research in Engineering Design, 1992, 4: 75–87.
[3] S J Chiou, S Kota. Automated conceptual design of mechanisms. Mechanism and Machine Theory, 1999, 34: 467–495.
[4] U Yasushi, I Masaki, Y Masaharu, et al. Supporting conceptual design based on the function-behavior-state modeler. AI EDAM: Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 1996, 10(4): 275–288.
[5] D Sanderson, J C Chaplin, S Ratchev. A function-behavior-structure design methodology for adaptive production systems. International Journal of Advanced Manufacturing Technology, 2019, 19: 1–12.
[6] H Z Zhang, X Han, R Li, et al. A new conceptual design method to support rapid and effective mapping from product design specification to concept design. International Journal of Advanced Manufacturing Technology, 2016, 87: 2375–2389.
[7] Y Zu, R B Xiao, X H Zhang. Automated conceptual design of mechanisms using enumeration and functional reasoning. International Journal of Materials and Product Technology, 2009, 34(3): 273–294.
[8] B Chen. Conceptual design synthesis based on series-parallel functional unit structure. Journal of Engineering Design, 2018, 29(3): 87–130.
[9] B Chen, Y B Xie. A computer-assisted automatic conceptual design system for the distributed multi-disciplinary resource environment. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2016, 231(6): 1094–1112.
[10] B Chen, Y B Xie. A function unit integrating approach for the conceptual design synthesis in the distributed resource environment. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2017, 232: 759–774.
[11] B Chen, Y B Xie. Functional knowledge integration of the design process. Science China Technological Sciences, 2016, 60(2): 209–218.
[12] J H Lee, M J Ostwald, N Gu. A syntactical and grammatical approach to architectural configuration, analysis and generation. Architectural Science Review, 2015, 58(3): 189–204.
[13] M I Campbell, S Kristina. Systematic rule analysis of generative design grammars. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 2014, 28(3): 227–238.
[14] Y Zou, J Lü, X P Tao. Research on context of implicit context-sensitive graph grammars. Journal of Computer Languages, 2019, 51: 241–260.
[15] I Jowers, C Earl, G Stiny. Shapes, structures and shape grammar implementation. Computer-Aided Design, 2019, 111: 80–92.
[16] S Maneth, F Peternek. Grammar-based graph compression. Information Systems, 2018, 76: 19–45.
[17] H L Oh, T Lee, R Lipowski. A graph theory based method for functional decoupling of a design with complex interaction structure. Proceedings of the ASME 2010 International Design Engineering Technical Conference & Computers and Information in Engineering Conference IDETC/CIE, Montreal, Quebec, Canada, 2010: 123–132.
[18] V R Shanmukhasundaram, Y V D Rao, S P Regalla. Enumeration of displacement graphs of epicyclic gear train from a given rotation graph using concept of building of kinematic units. Mechanism and Machine Theory, 2019, 134: 393–424.
[19] C Shi, H W Guo, M Li, et al. Conceptual configuration synthesis of line-foldable type quadrangular prismatic deployable unit based on graph theory. Mechanism and Machine Theory, 2018, 121: 563–582.
[20] L Sun, X Chen, C Y Wu, et al. Synthesis and design of rice pot seedling transplanting mechanism based on labeled graph theory. Computers and Electronics in Agriculture, 2017, 143: 249–261.
[21] V V Kamesh, K M Rao, A B S Rao. An innovative approach to detect isomorphism in planar and geared kinematic chains using graph theory. Journal of Mechanical Design, 2017, 139(12): 122301.
[22] A Chakrabarti, K Shea, R Stone, et al. Computer-based design synthesis research: an overview. Journal of Computing & Information Science in Engineering, 2011, 11(2): 519–523.
[23] L Al-Hakim, A Kusiak, J Mathew. A graph-theoretic approach to conceptual design with functional perspectives. Computer-Aided Design, 2000, 32(14): 867–875.
[24] G Li, Z H Miao, B Li, et al. Type synthesis to design variable camber mechanisms. Advances in Mechanical Engineering, 2016, 8(8): 1–16.
[25] Y H Zou, P He, Y L Pei. Automatic topological structural synthesis algorithm of planar simple joint kinematic chains. Advances in Mechanical Engineering, 2016, 8(3): 1–12.
[26] Z F Shen, G Allison, L Cui. An integrated type and dimensional synthesis method to design one degree-of-freedom planar linkages with only revolute joints for exoskeletons. Journal of Mechanical Design, 2018, 140: 092302.
[27] W J Yang, H F Ding, B Zi, et al. New graph representation for planetary gear trains. Journal of Mechanical Design, 2018, 140: 012303.
[28] V V Kamesh, K M Rao, A B S Rao. Topological synthesis of epicyclic gear trains using vertex incidence polynomial. Journal of Mechanical Design, 2017, 139: 062304.
[29] B He, S Wei, Y G Wang. Computational conceptual design using space matrix. Journal of Computing & Information Science in Engineering, 2015, 15(1): 011004.
[30] B He, P C Zhang, L L Liu. Simultaneous functional synthesis of mechanisms with mechanical efficiency and cost. International Journal of Advanced Manufacturing Technology, 2014, 75: 659–665.
[31] B He, P C Zhang, J Wang. Automated synthesis of mechanisms with consideration of mechanical efficiency. Journal of Engineering Design, 2014, 25: 213–237.
[32] M Kobayashi, Y Suzuki, M Higashi. Integrated optimization for supporting functional and layout designs during conceptual design phase. Proceedings of the ASME 2009 International Design Engineering Technical Conference & Computers and Information in Engineering Conference IDETC/CIE, San Diego, California, USA, August 30–September 2, 2009: 881–889.
[33] J A Bondy, U S R Murty. Graph theory. Springer Berlin, 2008.
How did Maxwell's theory of electrodynamics contradict the Galilean principle of relativity? (Pre-special relativity)
The Galilean principle of relativity:
The laws of classical mechanics apply in all inertial reference systems
No experiment carried out in an inertial frame of reference can determine the absolute velocity of the frame of reference
These two statements written above are equivalent.
Maxwell's equations were discovered later. My question is (1) how did Maxwell's equations contradict the Galilean principle of relativity?
Furthermore if one studies the two postulates of Einstein's special theory of relativity, they can be simply translated as follows:
Postulate 1: Galileo was right.
Postulate 2: Maxwell was right.
(2) How did the Maxwell equations retain the same form in all inertial frames by obeying the Lorentz transformation?
special-relativity classical-electrodynamics inertial-frames lorentz-symmetry galilean-relativity
psmears
$\begingroup$ (1) Maxwell equations predict that electromagnetic waves (light) travel with constant speed $c$, which is independent of the reference frame. This contradicts Galileo's transformations (not the principle of relativity though), according to which the velocity of light has to achieve additional contributions when passing to other frames of reference. $\endgroup$ – Prof. Legolasov Sep 21 '16 at 10:45
$\begingroup$ I don't think this is a very clear question. To 1, the answer is simply that they do. If you perform a galilean transformation ($t'=t$, $x'=x-vt$), Maxwell's equations don't retain the same form. To 2, the answer is simply that if you perform a lorentz transformation on Maxwell's equations (detailed on wikipedia here: en.wikipedia.org/wiki/… ), Maxwell's equations do retain the same form. I don't know if anyone here wants to write out all four equations on $\vec{E}$ and $\vec{B}$ and go through the motions, step-by-step, explicitly. $\endgroup$ – user12029 Sep 21 '16 at 10:47
$\begingroup$ @NeuroFuzzy: could you please elaborate the algebra (If you perform a galilean transformation (t'=t, x′=x−vt), Maxwell's equations don't retain the same form) or could you please specify a link where it is elaborate? $\endgroup$ – user103515 Sep 21 '16 at 10:53
$\begingroup$ @NeuroFuzzy no........to your last point, they can easily be found on wikipedia, and a thousand other places. $\endgroup$ – user108787 Sep 21 '16 at 11:10
$\begingroup$ You should check out this video, The speed of light is not about light. It discusses the invariance issues with the different transformations, and how it was resolved, all without getting too deep into the math and keeping it on a conceptual level. $\endgroup$ – Cody Sep 21 '16 at 16:23
A Galilean set of frames is an obvious, common-sense way of viewing motion if we assume the validity of three further apparently obvious postulates.
All clocks measure time at the same rate, independent of their velocity.
Objects have no limit on their potential velocity.
Rulers have the same length (difference in position between the lengths at a common time), independent of their velocity.
When Maxwell formulated/compiled his equations, implying that light speed was invariant in every frame, Einstein was forced to consider the implications of this for Galilean transformations and their "obvious" underlying assumptions.
If light speed is invariant in all frames, then something has to give to preserve that invariance, and the 3 assumptions above needed to be abandoned to preserve Maxwell's laws.
How the Maxwell equations retained the same form in all inertial frames by obeying Lorentz transformation?
By the development of the Faraday tensor $F_{\mu \nu}$ based on a vector potential $\vec A$ and a scalar potential $\Phi$.
$\begingroup$ "When Maxwell formulated/compiled his equatons, implying that light speed was invariant in every frame..." No. Actually Maxwell's theory implied the opposite (the speed of light relative to the observer varies with the speed of the observer): pitt.edu/~jdnorton/papers/Chasing.pdf "That [Maxwell's] theory allows light to slow and be frozen in the frame of reference of a sufficiently rapidly moving observer." $\endgroup$ – Pentcho Valev Sep 22 '16 at 5:19
My question is (1) how Maxwell's equations contradicted Galilean principle of relativity.
Maxwell's equations have wave solutions that propagate with speed $c = \frac{1}{\sqrt{\mu_0\epsilon_0}}$.
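(Numerically, that wave speed can be checked in one line from the vacuum constants; this aside is an addition, not part of the original answer.)

```python
from scipy.constants import mu_0, epsilon_0

print((mu_0 * epsilon_0) ** -0.5)   # about 2.998e8 m/s, i.e. the speed of light
```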
Since velocity is relative (speed c with respect to what?), it was initially thought that the "what" is a luminiferous aether in which electromagnetic waves propagate and which singles out a family of coordinate systems at rest with respect to the aether.
If so, then light should obey the Galilean velocity addition law. That is, a lab with a non-zero speed relative to the luminiferous aether should find a directionally dependent speed of light.
However, the Michelson–Morley experiment (original and follow-ups) failed to detect such a directional dependence. Some implications are
(1) there is no aether and electromagnetic waves propagate at an invariant speed. This conflicts with Galilean relativity for which two observers in relative uniform motion will measure different speeds for the same electromagnetic wave. This path leads to special relativity theory.
(2) there is an aether but it is undetectable. This path leads to Lorentz aether theory.
Alfred Centauri
$\begingroup$ In 1887 (prior to FitzGerald and Lorentz advancing the ad hoc length contraction hypothesis) the Michelson-Morley experiment UNEQUIVOCALLY confirmed the variable speed of light predicted by Newton's emission theory of light and refuted the constant (independent of the speed of the light source) speed of light predicted by the ether theory and later adopted by Einstein as his special relativity's second postulate. $\endgroup$ – Pentcho Valev Sep 22 '16 at 5:35
$\begingroup$ If the Galilean velocity addition law is broken, why does it mean that the laws of Newtonian Mechanics need to be adjusted? Maybe this question is trivial but I don't see the direct implication. $\endgroup$ – philmcole Feb 6 '18 at 19:32
The difference between Galilean and special relativity is the details of how spacetime coordinates change between reference frames. The Galilean transformation $t'=t,\,\mathbf{x}' =\mathbf{x}-\mathbf{v}t$ relates reference frames of relative velocity $\mathbf{v}$. This implies that, if $A$ has velocity $\mathbf{u}$ relative to $B$ and $B$ has velocity $\mathbf{v}$ relative to $C$, $A$ has velocity $\mathbf{u}+\mathbf{v}$ relative to $C$. This implies no speed can be invariant across reference systems. For example, if I shine a torch while on a train that's going past you, you and I should disagree on the speed of the torch's light.
However, Maxwell's theory contains waves of speed $c:=1/\sqrt{\mu_0\varepsilon_0}$, so cannot apply in all reference frames if they are related as per Galileo's formulae. In a region with no electric charges or currents, Maxwell's equations imply the wave equations $$\nabla^2\mathbf{B}=c^{-2}\partial_t^2\mathbf{B},\,\nabla^2\mathbf{E}=c^{-2}\partial_t^2\mathbf{E}.$$
Special relativity still claims physical laws are the same in all reference frames, but they relate their coordinates differently, viz. $$t'=\frac{t-\mathbf{v}\cdot\mathbf{x}/c^2}{\sqrt{1-v^2/c^2}},\,\mathbf{x}' =\frac{\mathbf{x}-\mathbf{v}t}{\sqrt{1-v^2/c^2}}.$$One can show that a change in reference frames preserves the above speed-$c$ wave equations.
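(As an added illustration, not part of the original answer: the invariance claimed in the last sentence can be checked symbolically. The sketch below, using sympy, verifies that a wave profile moving at speed $c$ in the primed frame still satisfies the speed-$c$ wave equation in the unprimed frame when the frames are related by a Lorentz boost, but not when they are related by a Galilean transformation.)

```python
import sympy as sp

t, x, c, v = sp.symbols('t x c v', positive=True)
g = sp.Function('g')                 # arbitrary wave profile

gamma = 1 / sp.sqrt(1 - v**2 / c**2)
tp = gamma * (t - v * x / c**2)      # Lorentz-boosted time
xp = gamma * (x - v * t)             # Lorentz-boosted position

# A right-moving wave in the primed frame, g(x' - c t')
E = g(xp - c * tp)
print(sp.simplify(sp.diff(E, x, 2) - sp.diff(E, t, 2) / c**2))       # -> 0

# Same profile under a Galilean transformation t' = t, x' = x - v t
E_gal = g((x - v * t) - c * t)
print(sp.simplify(sp.diff(E_gal, x, 2) - sp.diff(E_gal, t, 2) / c**2))  # nonzero unless v = 0
```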
J.G.
From a mathematical point of view it is rather simple: denoting by $A^\mu$ the usual 4-vector potential and assuming the Lorenz gauge $ \partial_{\mu}A^\mu = 0$, the vacuum Maxwell equations read $\square A^{\mu} =\partial_{\nu}\partial^{\nu}A^{\mu} = 0$. The d'Alembert operator $\partial_{\nu}\partial^{\nu} = \frac{1}{c^2}\partial_t^2 - \partial_x^2 -\partial_y^2 - \partial_z^2$ is invariant under a linear transformation given by a matrix ${\Lambda^{\sigma}}_{\tau}$ if and only if ${\Lambda^{\sigma}}_{\mu}{\Lambda^{\tau}}_{\nu}g^{\mu\nu} = g^{\sigma\tau} = \mathrm{diag}(1,-1,-1,-1)$. These are precisely the Lorentz transformations. The Galilei transformations, however, do not satisfy this condition.
The basic idea of this approach is the idea that the physical laws (and therefore the corresponding differential operators) have to keep their form under valid frame transformations. But it is then postulated that (somehow vice versa) all transformations keeping the form (Lorentz-transformations, that is) are actually valid changes of reference frames.
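(As a quick numerical illustration, not part of the original answer: the defining condition $\Lambda g \Lambda^T = g$ can be checked directly for a Lorentz boost, and seen to fail for a Galilean boost written as a $4\times4$ matrix on $(t,x,y,z)$; the boost velocity $v=0.6$, in units with $c=1$, is just an example value.)

```python
import numpy as np

v = 0.6                                  # example boost velocity, units with c = 1
gamma = 1.0 / np.sqrt(1.0 - v**2)

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric diag(1,-1,-1,-1)

# Lorentz boost along x, acting on (t, x, y, z)
L = np.array([[ gamma,   -gamma*v, 0.0, 0.0],
              [-gamma*v,  gamma,   0.0, 0.0],
              [ 0.0,      0.0,     1.0, 0.0],
              [ 0.0,      0.0,     0.0, 1.0]])

# Galilean boost as a 4x4 matrix: t' = t, x' = x - v t
G = np.array([[ 1.0, 0.0, 0.0, 0.0],
              [-v,   1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0, 1.0]])

print(np.allclose(L @ eta @ L.T, eta))   # True: the metric is preserved
print(np.allclose(G @ eta @ G.T, eta))   # False: Galilei boosts do not preserve it
```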
Peter Wildemann
Imagine a stationary electron sitting next to a long length of wire with current flowing through it. Since the wire is neutrally charged, there is no electric force on the electron, and since the electron is stationary, there is no magnetic force.
Now imagine the whole system is moving lengthwise at a constant velocity. All of a sudden the electron is moving through a magnetic field and experiences a force. This seems to be a contradiction.
In relativity, this will be answered by the differing length contractions of the positive (protons)/negative (electrons) parts of the wire, creating an electric force on the electron that balances the magnetic force. This also serves to show the difficulty of distinguishing electric from magnetic forces, as one may become the other in a different reference frame.
Owen
Effects of adiabatic compression on thermal convection in super-Earths of various sizes
Takehiro Miyagoshi ORCID: orcid.org/0000-0002-8908-99901,
Masanori Kameyama2 &
Masaki Ogawa3
We present two-dimensional numerical models of thermal convection of a compressible fluid in the mantles of super-Earths calculated under the truncated anelastic liquid approximation to discuss how adiabatic compression affects the thermal convection, depending on planetary mass. The convection is driven by basal heating, the viscosity depends on temperature, and the thermal expansivity and the reference density depend on the depth. We varied all of the magnitude of adiabatic heating, the Rayleigh number, the depth profile of the thermal expansivity, and that of the reference density in accordance with the planetary mass. The effects on thermal convection become substantial, when the planetary mass normalized by the Earth's mass Mp exceeds a threshold Mc, about 4. Hot plumes ascending from the core–mantle boundary become thinner with increasing Mp; they become almost invisible except around the core–mantle boundary, when Mp > Mc. The lithosphere that develops along the surface boundary due to the temperature dependence of viscosity becomes thicker with increasing Mp and is about twice as thick as that at Mp = 1 when Mp = 9.4. The convective velocity is almost independent of Mp. These results are in a striking contrast with earlier predictions that are made based on the models where the effects of adiabatic compression are neglected; it is important to take account of the effects of adiabatic compression properly in the exploration of mantle dynamics such as plate tectonics and hot spot volcanisms in massive super-Earths. Further researches are necessary to clarify the dependence of Mc on the surface temperature and the material properties of the convecting mantle.
Motivated by recent detection of a large number of super-Earths, i.e., extrasolar planets with a mass of up to ten times the Earth's (e.g., Borucki et al. 2011), many researchers have explored dynamics of the mantle of super-Earths, because it is a key to understanding their tectonics, thermal history, and surface environments (e.g., Valencia et al. 2007; Valencia and O'Connell 2009; van Heck and Tackley 2011; Foley et al. 2012; Lenardic and Crowley 2012; Stein et al. 2013; Tackley et al. 2013; Stamenković and Breuer 2014). In the terrestrial planets of our solar system, two of the most important factors that dominate mantle dynamics are the lithosphere and hot ascending plumes (e.g., Schubert et al. 2001; Davies 2011). Whether the lithosphere on super-Earths is rifted into mobile plates as observed on the Earth or remains stagnant as observed on Venus and Mars has been a central issue, and the predictions made in the literature range from the stagnant lithosphere (e.g., O'Neill and Lenardic 2007; Stein et al. 2013) to active plate tectonics (e.g., Valencia et al. 2007; Valencia and O'Connell 2009; van Heck and Tackley 2011; Foley et al. 2012; Tackley et al. 2013). In their review of parameterized convection models for super-Earths, Stamenković and Breuer (2014) conclude that whether plate tectonics can operate or not sensitively depends on the detail of the parameterization. The vigor of hot ascending plumes, although it has not drawn so much attention, is also an important issue, since hot plumes affect the activity of hot spot volcanism and large igneous provinces that have been suggested to rift the lithosphere on the Earth (e.g., Richards et al. 1989). Before challenging these issues on the real super-Earths, however, it is important to explore thermal convection in the mantle of super-Earths as a problem of fluid dynamics. Recently, Miyagoshi et al. (2014, 2015, 2017) have focused on the effects of adiabatic compression in super-Earths of ten times the Earth's mass, and found that the effects are substantial, as summarized below. Here, we apply our earlier numerical models to super-Earths of various mass to clarify the threshold in planetary mass above which the effects of adiabatic heating we found become important.
An important finding of Miyagoshi et al. (2014, 2015) is that adiabatic compression substantially reduces the vigor of thermal convection in a massive super-Earth in contrast to the predictions of many of earlier studies that are based on the Boussinesq approximation where adiabatic compression is neglected (e.g., Valencia et al. 2007; Valencia and O'Connell, 2009; van Heck and Tackley 2011; Foley et al. 2012). In their models, Miyagoshi et al. (2014, 2015) found that the lithosphere becomes much thicker and hot ascending plumes become thinner than expected from Boussinesq models in a planet of ten times the Earth's mass. These results suggest that it is necessary to revisit the issue of plate tectonics and hot spot volcanism in super-Earths, taking account of the effects of adiabatic heating (e.g., Valencia et al. 2007; Valencia and O'Connell 2009; van Heck and Tackley 2011; Foley et al. 2012). In this paper, we apply our earlier models to super-Earths of various mass Mp (the planetary mass divided by the Earth's mass) 1, 2, 4, 6, and 9.4 to clarify the minimum planetary mass above which the effects of adiabatic compression on mantle convection become prominent. To highlight the effects of adiabatic compression, we keep our models as simple as possible and neglect other factors like the possible effects of complicated rheology of mantle materials, the process of planetary formation, the boundary condition on the core–mantle boundary, and internal heating.
We only briefly describe the model and basic equations here; the readers are referred to our previous studies for more detail (Miyagoshi et al. 2014, 2015, 2017).
We calculated thermal convection of an infinite-Prandtl-number fluid with a temperature-dependent viscosity in a two-dimensional rectangular box of aspect ratio four under the truncated anelastic liquid approximation (TALA). All of the boundaries are shear-stress free and impermeable. The surface temperature \( T_{\text{s}}^{*} \) and the bottom temperature \( T_{\text{b}}^{*} \) are both fixed, and the normalized surface temperature \( T_{\text{s}}^{*}/(T_{\text{b}}^{*} - T_{\text{s}}^{*}) \) is fixed at 0.1 (the asterisks stand for dimensional quantities). The sidewalls are insulating.
The employed non-dimensional momentum equation is
$$ - \nabla p + \nabla \cdot \tau + Ra \cdot \bar{\rho }\left( z \right) \cdot \alpha \left( z \right) \cdot T \cdot {\mathbf{e}}_{{\mathbf{z}}} = 0, $$
$$ \tau_{ij} = \eta \left( T \right) \, \left[ {\left( {\nabla {\mathbf{u}} + {}^{{\mathbf{t}}}\nabla {\mathbf{u}}} \right)_{ij} - \frac{2}{3}\nabla \cdot {\mathbf{u}}\delta_{ij} } \right], $$
the mass conservation equation,
$$ \nabla \cdot [\overline{\rho } \left( z \right){\mathbf{u}}] = 0, $$
and the energy equation
$$ \overline{\rho } \left( z \right) \, \left[ {\frac{{{\text{d}}T}}{{{\text{d}}t}} + Di \cdot \alpha \left( z \right)wT} \right] = \nabla^{2} T + \frac{Di}{Ra}\tau_{ij} \frac{{\partial u_{i} }}{{\partial x_{j} }}. $$
The length scale for the normalization is the mantle depth \( d^{*} \); the temperature scale is the temperature difference \( \Delta T^{*} \equiv T_{\text{b}}^{*} - T_{\text{s}}^{*} \); the time scale is \( d^{*2}/\kappa^{*} \), where \( \kappa^{*} \) is the thermal diffusivity. \( \overline{\rho }(z) \) and \( \alpha(z) \) are the depth-dependent reference density and thermal expansivity, respectively; \( \eta(T) \) is the viscosity; \( \delta_{ij} \) is the Kronecker delta; z is the height measured from the bottom boundary; p is the non-hydrostatic pressure; u and \( {\mathbf{e}}_{{\mathbf{z}}} \) are the fluid velocity vector and the vertical unit vector; w is the vertical component of the fluid velocity; t is time; Ra is the Rayleigh number defined by \( Ra = \rho_{0}^{*}\alpha_{0}^{*}g^{*}\Delta T^{*}d^{*3}/\eta_{0}^{*}\kappa^{*} \); Di is the dissipation number defined by \( Di = \alpha_{0}^{*}g^{*}d^{*}/C_{p}^{*} \). Here, \( \rho_{0}^{*} \) and \( \alpha_{0}^{*} \) are the density and thermal expansivity at the surface, respectively, \( g^{*} \) is the gravity, \( C_{p}^{*} \) is the specific heat, and \( \eta_{0}^{*} \) is the viscosity at the bottom boundary, which is chosen as the reference viscosity.
The viscosity η depends on the temperature as \( \eta = \eta^{*}/\eta_{0}^{*} = \exp [\ln \left( r \right)\left( {T_{\text{b}} - T} \right)] \), where r is the viscosity contrast between the top and bottom boundaries; we fixed r at \( 10^{5} \) regardless of Mp in order to focus on the effects of adiabatic compression. The value of r is chosen so that a lithosphere develops along the cold surface of the planet. We do not take account of a pressure dependence of the viscosity, because the physical properties under the very high pressures expected in massive super-Earths are still not well known; for example, Karato (2011) suggests that such high pressure may reduce rather than increase the viscosity. As shown in Table 1, the thermal expansivity decreases with the hydrostatic pressure, which is calculated from the depth as
Table 1 The pressure dependence of the thermal expansivity assumed in the model
$$ \overline{p}^{*} (z) = (127.4M_{\text{p}} + 7.241)(1 - z)\quad \left[ {\text{GPa}} \right] $$
(Tachinami et al. 2014; Miyagoshi et al. 2014, 2015, 2017). The reference density depends on z as
$$ \overline{\rho } (z) = 1 + [1.75(M_{\text{p}} )^{0.258} - 1](1 - z) . $$
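As an illustrative sketch (not part of the ACuTEMAN simulation code), the three model ingredients written out above can be evaluated as follows; the non-dimensional bottom temperature Tb = Ts + 1 = 1.1 follows from the normalization described earlier.

```python
import numpy as np

def viscosity(T, r=1.0e5, T_b=1.1):
    """Non-dimensional viscosity eta = exp[ln(r)(T_b - T)]:
    eta = 1 at the bottom temperature and eta = r at the surface."""
    return np.exp(np.log(r) * (T_b - T))

def hydrostatic_pressure(z, Mp):
    """Reference hydrostatic pressure in GPa at non-dimensional height z
    (z = 0 at the core-mantle boundary, z = 1 at the surface)."""
    return (127.4 * Mp + 7.241) * (1.0 - z)

def reference_density(z, Mp):
    """Non-dimensional reference density (equal to 1 at the surface)."""
    return 1.0 + (1.75 * Mp**0.258 - 1.0) * (1.0 - z)

for Mp in (1.0, 4.0, 9.4):
    print(f"Mp = {Mp}: p(0) = {hydrostatic_pressure(0.0, Mp):6.1f} GPa, "
          f"rho(0) = {reference_density(0.0, Mp):.2f}")
print(viscosity(np.array([1.1, 0.1])))   # [1.0, 1.0e5]
```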
The basic equations are discretized by a finite difference method. We employed the ACuTEMAN numerical code (Kameyama 2005; Kameyama et al. 2005) for the calculation. The employed mesh is uniform and contains 1024 × 256 grid points.
We carried out numerical calculations for planets with Mp= 1, 2, 4, 6, and 9.4. The corresponding values of (Ra, Di) are (\( 5 \times 10^{7} \), 0.87), (\( 1.3 \times 10^{8} \), 1.49), (\( 3.2 \times 10^{8} \), 2.57), (\( 5.5 \times 10^{8} \), 3.52), and (\( 1.0 \times 10^{9} \), 5.00), respectively. In these estimates, we assume that the modeled planet has a similar chemical composition with the Earth's and that g* and d* depend on the planetary mass normalized by the Earth's mass Mp as \( g^{*} = g_{ \oplus } \left( {M_{\text{p}} } \right)^{0.5} \) and \( d^{*} = d_{ \oplus } \left( {M_{\text{p}} } \right)^{0.28} \), respectively (Valencia et al. 2006, 2007), where the subscript \( \oplus \) stands for the Earth's value; Ra thus calculated depends on Mp as \( Ra = Ra_{ \oplus } \left( {M_{\text{p}} } \right)^{1.34} \), while Di depends on Mp as \( Di = Di_{ \oplus } \left( {M_{\text{p}} } \right)^{0.78} \). Here, \( Di_{ \oplus } = 0.87 \) and \( Ra_{ \oplus } = 5 \times 10^{7} \), as estimated with \( C_{\text{p}} = 1.3 \times 10^{3} \) J/kg/K, \( \alpha_{0} = 4 \times 10^{ - 5} \) K−1, \( g_{ \oplus } = 9.8 \) m/s2, and \( d_{ \oplus } \) = 2900 km. We fixed the temperature contrast across the mantle ΔT* at 3000 K in the estimates of Ra; the effects of variation in Ra due to a variation in ΔT* on numerical results can be readily estimated from the parametrized relationships presented in Tachinami et al. (2014) and Miyagoshi et al. (2015). We also fixed the non-dimensional surface temperature Ts at 0.1, since Tachinami et al. (2014) has already investigated its effects on numerical results for iso-viscous models.
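The (Ra, Di) pairs quoted above follow directly from these scalings; a short illustrative script (not part of the simulation code) reproduces them:

```python
Ra_earth, Di_earth = 5.0e7, 0.87   # values at Mp = 1 quoted in the text

for Mp in (1, 2, 4, 6, 9.4):
    Ra = Ra_earth * Mp**1.34       # Ra scales with planetary mass as Mp^1.34
    Di = Di_earth * Mp**0.78       # Di scales as Mp^0.78
    print(f"Mp = {Mp:>4}: Ra = {Ra:.1e}, Di = {Di:.2f}")
```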
We briefly note the validity of TALA in numerical models of mantle convection in the presence of strong compression. Compared with its untruncated counterpart (ALA), TALA neglects the effect of the dynamic pressure on buoyancy. Such a truncation can safely be made in the mantles of terrestrial planets, particularly massive ones, because the dynamic pressure is much smaller than the static pressure in planetary interiors. This has already been confirmed by numerical results with Mp = 10 in Kameyama and Yamamoto (2018).
The left column of Fig. 1 shows snapshots of the distribution of potential temperature Tp(x,z) (color) and fluid velocity (arrows) obtained at various Mp in the statistically steady state, while the right column shows the horizontal average \( \overline{{T_{\text{p}} }} \) of the potential temperature Tp(x,z). The potential temperature is calculated from the temperature T(x,z), shown in Fig. 2, as
The left column shows snapshots of the distribution of the non-dimensional potential temperature Tp(x,z) (color) and fluid velocity (arrows) at various Mp. Tp is normalized with the temperature unit of ΔT* = 3000 K. The color scale is changed from panel to panel so that Tp of the isothermal core of the convection is represented by green color. See the sample arrow beneath the panel (i) for the non-dimensional magnitude of the velocity. The right column shows the horizontally averaged Tp(x,z), \( \overline{{T_{\text{p}} }} \) (the red lines). The dashed black lines show the location of the base of the lithosphere (for details, see the text)
Snapshots of the temperature distribution (color) from which the potential temperature presented in Fig. 1 is calculated
$$ T_{\text{p}} (x,z) = T(x,z)\exp \left( { - Di\int_{z}^{1} {\alpha \left( \xi \right)d\xi } } \right) , $$
(Miyagoshi et al. 2014, 2015, 2017). We present Tp rather than T in Fig. 1, because Tp is more directly related to the buoyancy force that drives thermal convection.
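As a sketch of how this conversion can be evaluated numerically (for illustration only; the α(z) below is a hypothetical stand-in, since the tabulated Table 1 values are not reproduced here):

```python
import numpy as np
from scipy.integrate import quad

def potential_temperature(T, z, alpha, Di):
    """Tp = T * exp(-Di * integral of alpha(xi) d(xi) from z to 1)."""
    integral, _ = quad(alpha, z, 1.0)
    return T * np.exp(-Di * integral)

# Hypothetical expansivity profile, normalised to 1 at the surface (z = 1)
alpha = lambda z: 1.0 / (1.0 + 3.0 * (1.0 - z))

print(potential_temperature(T=1.0, z=0.0, alpha=alpha, Di=5.0))
```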
At Mp= 1, hot plumes ascending from the core–mantle boundary are as prominent as the cold plumes that descend from the top cold thermal boundary layer (TBL), and the convective velocity around the cold plumes is comparable to that around the hot plumes. The heads of the hot plumes reach the base of the top TBL.
Hot ascending plumes become, however, less prominent as Mp increases: At Mp= 2, the convective flow induced by hot plumes is still an important element of the convection in the mantle. The excess temperature of the heads of hot plumes with respect to the surrounding mantle is, however, smaller than that calculated at Mp= 1 shown in Fig. 1a. At Mp= 4 and 6, the excess temperature of hot plume heads becomes even smaller. Hot plumes become faint except around the core–mantle boundary, and the heads are often detached from the stems of hot plumes before they ascend to the base of the top TBL. The excess temperature of hot plumes rapidly decreases as they ascend, while cold descending plumes are prominent at all depths. At Mp= 9.4, hot plumes become almost invisible, while cold descending plumes are still conspicuous. The effects of adiabatic compression on temperature contrast between plumes and the surrounding mantle are more conspicuous for hot plumes than for cold plumes, because the effect is proportional to the temperature (Eq. 4).
Figure 2 shows snapshots of the temperature distribution from which the potential temperature shown in Fig. 1 is calculated. Cold plumes are conspicuous at all Mp. Hot plumes, in contrast, become fainter as Mp increases and are almost invisible at Mp= 4 or larger, although the structures can be slightly observed in the potential temperature distribution.
Figure 3a shows the normalized thickness of the top TBL, or the lithosphere, plotted against Mp. The base of the lithosphere is located at the depth level where \( \overline{{T_{\text{p}} }} - T_{\text{s}} \) becomes 98% of \( \overline{{T_{\text{pm}} }} - T_{\text{s}} \); \( \overline{{T_{\text{pm}} }} \) is the local maximum of \( \overline{{T_{\text{p}} }} \) in the shallow mantle (Miyagoshi et al. 2015). The height of the base of the lithosphere thus defined is shown in the right column of Fig. 1 by the black dashed lines. The figure shows that the normalized thickness of the lithosphere is almost independent of Mp. This behavior is different from that found in earlier models where the Boussinesq approximation is employed: under this approximation, the flux of convective heat transport q, which is approximately proportional to the inverse of the normalized thickness of the lithosphere, is known to depend on the Rayleigh number as \( q \propto Ra^{0.258} \) (e.g., Deschamps and Sotin 2000), where Ra depends on Mp as \( M_{\text{p}}^{1.34} \), as derived in the "Model" section. Thus, the normalized thickness depends on Mp as \( L (M_{\text{p}} = 1 )\cdot M_{\text{p}}^{ - 0.346} \) under the Boussinesq approximation, where \( L (M_{\text{p}} = 1 ) \) is the normalized thickness at Mp = 1. This dependence is shown by the dotted line in Fig. 3a. The plot shows that the lithosphere becomes substantially thicker when the effect of adiabatic compression is taken into account. (In the figure, the lithosphere at Mp = 1 is much thicker than what we usually observe for the Earth, because we neglected internal heating and plate motion in our models.)
a The plot of L (the thickness of the lithosphere normalized by the mantle depth) against Mp. The dotted line shows the plot that is expected from the Boussinesq approximation \( L (M_{\text{p}} = 1 )\cdot M_{\text{p}}^{ - 0.346} \) where \( L (M_{\text{p}} = 1 ) \) is the normalized thickness at Mp= 1 (for details, see the main text). b The dependence of the thickness of the lithosphere h = Ld* on Mp, where d* is the mantle depth that depends on Mp as \( d^{*} = d_{ \oplus } \left( {M_{\text{p}} } \right)^{0.28} \); the thickness is normalized by its value at Mp= 1. The dashed line shows the dependence suggested earlier from the Boussinesq approximation (Valencia et al., 2007); the thickness depends on Mp as \( M_{\text{p}}^{ - 0.45} \) in their models. c The dimensional root-mean-square average velocity V *rms of the convection plotted against Mp. The dashed line shows the relationship of \( V_{\text{rms}}^{ *} (M_{\text{p}} = 1 )\cdot M_{\text{p}}^{1.19} \) suggested in Valencia et al. (2007). (Here, V *rms (Mp= 1) is the value of V *rms at Mp= 1.)
When converted to a dimensional quantity by using \( d^{*} = d_{ \oplus } \left( {M_{\text{p}} } \right)^{0.28} \), described in the "Model" section, the thickness of the lithosphere (= d* times the normalized thickness shown in Fig. 3a) increases with increasing Mp, as shown in Fig. 3b (again, the thickness is normalized by the value at Mp = 1), even though a larger Mp implies a thicker mantle and hence a higher Rayleigh number. The lithosphere becomes about twice as thick when the planetary mass becomes 9.4 times larger. For comparison, we show the dependence of the dimensional thickness on Mp expected in an earlier parameterized convection model where the Boussinesq approximation is employed (Valencia et al. 2007); there the thickness scales as \( M_{\text{p}}^{-0.45} \) (the dashed line). When the effects of adiabatic compression are considered, the lithosphere becomes thicker rather than thinner with increasing Mp.
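The contrast between the two scalings of the dimensional thickness, relative to its value at Mp = 1, can be made explicit with a short illustrative script (roughly constant normalized thickness times \( d^{*} \propto M_{\text{p}}^{0.28} \) in the compressible models, versus \( M_{\text{p}}^{-0.45} \) in the parameterized Boussinesq estimate):

```python
for Mp in (1, 2, 4, 6, 9.4):
    h_compressible = Mp**0.28      # L roughly constant (Fig. 3a), h ~ L * d*
    h_boussinesq = Mp**(-0.45)     # parameterized Boussinesq scaling
    print(f"Mp = {Mp:>4}: h/h(1) = {h_compressible:.2f} (compressible) "
          f"vs {h_boussinesq:.2f} (Boussinesq)")
```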
We also present a plot of the dimensional root-mean-square velocity of the convection, \( V_{\text{rms}}^{*} = V_{\text{rms}}\kappa^{*}/d^{*} \) (here \( \kappa^{*} = 10^{-6}\ \mathrm{m^{2}/s} \)), against Mp. The non-dimensional average velocity Vrms is given as \( V_{\text{rms}} = \left\langle {\sqrt {\left\langle {V_{x}^{2} + V_{z}^{2} } \right\rangle_{\text{ave}} } } \right\rangle_{{{\text{time\_ave}}}} \), where Vx and Vz are the horizontal and vertical components of the velocity, \( \left\langle {\,} \right\rangle_{\text{ave}} \) denotes a spatial average, and \( \left\langle {\,} \right\rangle_{{{\text{time\_ave}}}} \) denotes a time average. As shown in the figure, \( V_{\text{rms}}^{*} \) is almost independent of Mp when the effect of adiabatic compression is taken into account. Although the non-dimensional Vrms increases by a factor of about 2 as Mp increases from 1 to 9.4 (see the arrows in Fig. 1), this increase is canceled out in \( V_{\text{rms}}^{*} \) by the increase in d* due to increasing Mp. In contrast, in the Boussinesq approximation model (Valencia et al. 2007), \( V_{\text{rms}}^{*} \) rapidly increases with increasing Mp as \( V_{\text{rms}}^{ *} (M_{\text{p}} = 1 )\cdot M_{\text{p}}^{1.19} \), where \( V_{\text{rms}}^{*} (M_{\text{p}} = 1) \) is the value of \( V_{\text{rms}}^{*} \) at Mp = 1 (see the dashed line in Fig. 3c).
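The near-cancellation between the growth of the non-dimensional velocity and that of the mantle depth can be illustrated as follows (an illustrative sketch; the factor-of-two increase in Vrms is read off qualitatively from Fig. 1):

```python
kappa = 1.0e-6            # m^2/s, thermal diffusivity
d_earth = 2.9e6           # m, Earth's mantle depth

def v_rms_dimensional(v_rms_nondim, Mp):
    """V*_rms = V_rms * kappa / d*, with d* = d_earth * Mp**0.28."""
    return v_rms_nondim * kappa / (d_earth * Mp**0.28)

# A doubling of V_rms between Mp = 1 and Mp = 9.4 is nearly offset by the
# ~1.9-fold growth of d*, leaving V*_rms almost unchanged.
print(v_rms_dimensional(2.0, 9.4) / v_rms_dimensional(1.0, 1.0))   # ~1.07
```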
In our numerical models, the effects of adiabatic compression on thermal convection become substantial in the mantle of super-Earths, when the mass Mp exceeds a threshold value Mc about 4. Hot ascending plumes become less prominent as Mp increases and cannot directly ascend to the base of the lithosphere in a planet with Mp larger than Mc ~ 4. As a consequence, the flows induced by cold descending plumes become the major component of the convective flow at large Mp. The thickness of the lithosphere normalized by the mantle depth is also affected by adiabatic compression. The normalized thickness becomes almost independent of Mp, and hence the dimensional thickness of the lithosphere increases with increasing Mp, as shown in Fig. 3b.
The hot plumes lose their buoyancy as they ascend in the mantle of planets with a large Mp, because the temperature contrast between a hot plume and the surrounding mantle, \( \delta T \), decreases according to
$$ \frac{d\delta T}{dz} = - \alpha (z)Di \cdot \delta T , $$
owing to adiabatic decompression, as the plume ascends. (See Eq. (7).) At Mp= 2 (Di = 1.49), hot plumes ascend to the base of the lithosphere that is at the height z of \( \approx \) 0.68; the coefficient \( \alpha \left( z \right)Di \) is about 0.59 there, according to Table 1 and Eq. (5). At Mp= 4 (Di = 2.57), however, the same value of the coefficient is reached at z \( \approx \) 0.58, which is well below the base of the lithosphere. At larger Mp, the same value of the coefficient \( \alpha \left( z \right)Di \) is reached even at lower z. The lower level of z is the reason why the hot plumes stop ascending before they reach the base of the lithosphere at Mp equal to 4 or greater.
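Integrating this decay upward from the core–mantle boundary makes the argument quantitative; the sketch below is illustrative only, and its α(z) is again a hypothetical stand-in for the Table 1 profile:

```python
from scipy.integrate import solve_ivp

Di = 2.57                                         # Mp = 4
alpha = lambda z: 1.0 / (1.0 + 3.0 * (1.0 - z))   # hypothetical profile

# d(deltaT)/dz = -alpha(z) * Di * deltaT, with deltaT = 1 at z = 0
sol = solve_ivp(lambda z, dT: -alpha(z) * Di * dT,
                t_span=(0.0, 1.0), y0=[1.0], dense_output=True)
print(sol.sol(0.58)[0])   # excess temperature remaining at z ~ 0.58
```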
An estimate of the average dissipation number \( \overline{Di} = Di\int_{0}^{1} {{\alpha \mathord{\left/ {\vphantom {\alpha {\alpha_{0} }}} \right. \kern-0pt} {\alpha_{0} }}} {\text{d}}z \) also leads to the same conclusion. \( \overline{Di} \) is 0.44, 0.57, 0.69, 0.74, 0.79 at Mp= 1, 2, 4, 6, and 9.4, respectively; \( \overline{Di} \) at Mp= 4 is already about 90% of that at Mp= 9.4. The rather large value of \( \overline{Di} \) at Mp > 4 is also the reason why the lithosphere becomes thicker (Fig. 3b) and the root-mean-square average velocity does not increase (Fig. 3c) with increasing Mp at Mp > 4 in spite of the larger Ra at larger Mp. The stronger effects of adiabatic compression counteract the effects of higher Ra at large Mp.
The role of adiabatic compression in controlling the thickness of lithosphere is also readily concluded from the comparison with results in our earlier work (Miyagoshi et al. 2015). When both r and Di are fixed, the Nusselt number increases as Ra increases (see Figure 8 of Miyagoshi et al. (2015)), indicating the decrease in non-dimensional thickness of the lithosphere. (It also shows that its non-dimensional thickness is not determined by only r). In our present model where both Ra and Di increase with Mp, in contrast, its non-dimensional thickness does not decrease for given r. The difference from the results of Miyagoshi et al. (2015) clearly shows that the almost constant non-dimensional thickness of the lithosphere for the values of Mp employed here (Fig. 3a) is caused by the counteraction between the increase in Ra and that in Di with Mp.
The thick lithosphere (Fig. 3b) and the inactive convection (Fig. 3c) observed at large Mp imply that it is necessary to revisit the issue of plate tectonics on super-Earths. In the literature, the stress induced in the lithosphere by mantle convection has been estimated and compared with the rupture strength of the lithosphere to see if plate tectonics operates in super-Earths (e.g., Valencia et al. 2007; Valencia and O'Connell 2009). In these estimates, the stress is proportional to \( V_{\text{rms}}^{*}/h^{*} \), where \( h^{*} \) is the thickness of the lithosphere. Since \( V_{\text{rms}}^{*} \) increases and \( h^{*} \) decreases with increasing Mp under the Boussinesq approximation, it has been concluded that the stress in the lithosphere exceeds its rupture strength, and that plate tectonics takes place more readily in super-Earths. When adiabatic compression is taken into account, however, \( h^{*} \) increases with increasing Mp, and \( V_{\text{rms}}^{*} \) is almost independent of Mp, as shown in Fig. 3; the figure suggests that the stress decreases as Mp increases, and that plate tectonics may be more difficult to operate in massive super-Earths.
The low activity of hot ascending plumes at large Mp shown in Fig. 1 contains an implication for the volcanism on super-Earths, too. On the Earth, hot plumes are known to cause hot spot volcanism like the one observed at Hawaii and large igneous provinces (LIP) that causes rifting of plates (e.g., Coffin and Eldholm 1994). However, the lowered activity of hot plumes due to the effect of adiabatic compression in our results suggests that such volcanism as the one observed on the Earth is not important in super-Earths, when Mp exceeds around 4.
Of course, we neglected many factors that may affect thermal convection in setting up the model presented here. As already discussed, the vigor of mantle convection measured by the Nusselt number depends significantly on the surface temperature in massive super-Earths (Tachinami et al. 2014). The dependence of viscosity on pressure is also important (e.g., Tackley et al. 2013). In addition, we neglected internal heating and high-pressure-induced phase changes such as the post-perovskite transition. Further numerical studies with these effects are necessary to predict with more confidence the threshold in Mp above which the effects of adiabatic compression become important.
In summary, our numerical models suggest that the strong effects of adiabatic compression make both plate tectonics and volcanism caused by hot ascending plumes unlikely to occur in super-Earths with Mp larger than about 4 under our model settings. To confirm this conclusion, further numerical experiments are necessary, taking account of the complicated rheology of mantle materials (e.g., pressure- or stress-dependent viscosity), internal heating, and the long-lasting effects of the initial condition on the thermal state of the mantle (Miyagoshi et al. 2017).
Borucki WJ, Koch DG, Basri G et al (2011) Characteristics of planetary candidates observed by Kepler. II. analysis of the first four months of data. Astrophys J 736:19. https://doi.org/10.1088/0004-637x/736/1/19
Coffin MF, Eldholm O (1994) Large igneous provinces: crustal structure, dimensions, and external consequences. Rev Geophys 32:1–36
Davies GF (2011) Mantle convection for Geologists. Cambridge Univ. Press, U. K.
Deschamps F, Sotin C (2000) Inversion of two-dimensional numerical convection experiments for a fluid with a strongly temperature-dependent viscosity. Geophys J Int 143:204–218
Foley BJ, Bercovici D, Landuyt W (2012) The conditions for plate tectonics on super-Earths: inferences from convection models with damage. Earth Planet Sci Lett 331–332:281–290. https://doi.org/10.1016/j.epsl.2012.03.028
Kameyama M (2005) ACuTEMan: a multigrid-based mantle convection simulation code and its optimization to the Earth Simulator. Journal of the Earth Simulator 4:2–10
Kameyama M, Yamamoto M (2018) Numerical experiments on thermal convection of highly compressible fluids with variable viscosity and thermal conductivity: implications for mantle convection of super-Earths. Phys Earth Planet Inter 274:23–36. https://doi.org/10.1016/j.pepi.2017.11.001
Kameyama M, Kageyama A, Sato T (2005) Multigrid iterative algorithm using pseudo-compressibility for three-dimensional mantle convection with strongly variable viscosity. J Comput Phys 206:162–181. https://doi.org/10.1016/j.jcp.2004.11.030
Karato S (2011) Rheological structure of the mantle of a super-Earth: some insights from mineral physics. Icarus 212:14–23. https://doi.org/10.1016/j.icarus.2010.12.005
Lenardic A, Crowley JW (2012) On the notion of well-defined tectonic regimes for terrestrial planets in this solar system and others. Astrophys J 755:132. https://doi.org/10.1088/0004-637X/755/2/132
Miyagoshi T, Tachinami C, Kameyama M, Ogawa M (2014) On the vigor of mantle convection in super-Earths. Astrophys J Lett 780:L8. https://doi.org/10.1088/2041-8205/780/1/L8
Miyagoshi T, Kameyama M, Ogawa M (2015) Thermal convection and the convective regime diagram in super-Earths. J Geophys Res Planets 120:1267–1278. https://doi.org/10.1002/2015JE004793
Miyagoshi T, Kameyama M, Ogawa M (2017) Extremely long transition phase of thermal convection in the mantle of massive super-Earths. Earth Planets Space 69:46. https://doi.org/10.1186/s40623-017-0630-6
O'Neill C, Lenardic A (2007) Geological consequences of super-sized Earths. Geophys Res Lett 34:L19204. https://doi.org/10.1029/2007GL030598
Richards MA, Duncan RA, Courtillot VE (1989) Flood basalts and hot-spot tracks: plume heads and tails. Science 246:103–107. https://doi.org/10.1126/science.246.4926.103
Schubert G, Turcotte DL, Olson P (2001) Mantle convection in the earth and planets. Cambridge Univ. Press, Cambridge
Stamenković V, Breuer D (2014) The tectonic mode of rocky planets: part 1—driving factors, models & parameters. Icarus 234:174–193. https://doi.org/10.1016/j.icarus.2014.01.042
Stein C, Lowman JP, Hansen U (2013) The influence of mantle internal heating on lithospheric mobility: implications for super-Earths. Earth Planet Sci Lett 361:448–459. https://doi.org/10.1016/j.epsl.2012.11.011
Tachinami C, Senshu H, Ida S (2011) Thermal evolution and lifetime of intrinsic magnetic fields of super-earths in habitable zones. Astrophys J 726:70. https://doi.org/10.1088/0004-637X/726/2/70
Tachinami C, Ogawa M, Kameyama M (2014) Thermal convection of compressible fluid in the mantle of super-Earths. Icarus 231:377–384. https://doi.org/10.1016/j.icarus.2013.12.022
Tackley PJ, Ammann M, Brodholt JP, Dobson DP, Valencia D (2013) Mantle dynamics in super-Earths: post-perovskite rheology and self-regulation of viscosity. Icarus 225:50–61. https://doi.org/10.1016/j.icarus.2013.03.013
Valencia D, O'Connell RJ (2009) Convection scaling and subduction on Earth and super-Earths. Earth Planet Sci Lett 286:492–502. https://doi.org/10.1016/j.epsl.2009.07.015
Valencia D, O'Connell RJ, Sasselov DD (2006) Internal structure of massive terrestrial planets. Icarus 181:545–554. https://doi.org/10.1016/j.icarus.2005.11.021
Valencia D, O'Connell RJ, Sasselov DD (2007) Inevitability of plate tectonics on super-earths. Astrophys J Lett 670:L45–L48. https://doi.org/10.1086/524012
van Heck HJ, Tackley PJ (2011) Plate tectonics on super-Earths: equally or more likely than on Earth. Earth Planet Sci Lett 310:252–261. https://doi.org/10.1016/j.epsl.2011.07.029
TM performed numerical simulations, analysis of simulation data, and prepared the manuscript. MK developed the numerical simulation code ACuTEMan and prepared the manuscript. MO prepared the manuscript. All authors discussed numerical simulation results. All authors read and approved the final manuscript.
The authors thank two anonymous reviewers for their helpful comments. Numerical simulations were performed on the Earth Simulator and the supercomputer system at Japan Agency for Marine-Earth Science and Technology.
Data supporting the figures in this article are available upon request to the authors.
This work was supported by JSPS KAKENHI Grant Numbers JP25287110, JP15H05834, a joint research program at Geodynamics Research Center, Ehime University, and "Exploratory Challenge on Post-K computer" (Elucidation of the Birth of Exoplanets [Second Earth] and the Environmental Variations of Planets in the Solar System).
Department of Deep Earth Structure and Dynamics Research, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama, Kanagawa, 236-0001, Japan
Takehiro Miyagoshi
Geodynamics Research Center, Ehime University, 2-5 Bunkyo-cho, Matsuyama, Ehime, 790-8577, Japan
Masanori Kameyama
Department of Earth Sciences and Astronomy, University of Tokyo at Komaba, 3-8-1 Komaba, Meguro, Tokyo, 153-8902, Japan
Masaki Ogawa
Correspondence to Takehiro Miyagoshi.
Miyagoshi, T., Kameyama, M. & Ogawa, M. Effects of adiabatic compression on thermal convection in super-Earths of various sizes. Earth Planets Space 70, 200 (2018). https://doi.org/10.1186/s40623-018-0975-5
Mantle convection
Super-Earths of various sizes
Adiabatic compression
Numerical simulations
7. Planetary science | CommonCrawl |
Arrival and Registration
13:15 - 13:30 Roderich Moessner (Director MPIPKS)
& Scientific Organizers
Opening of the Workshop
13:30 - 14:00 Sergei Turitsyn (Aston University)
Nonlinear communications technologies
14:00 - 14:15 Maria Chiara Braidotti (University of L'Aquila)
Squeezing and shock waves
Squeezed states are quantum states of light, which recently proved to play an important role in modern physics due to their connection with quantum information and their employment in gravitational wave detection. The spectral properties of the squeezing operator were discussed in the late 80s. The squeeze operator has a purely continuous spectrum that covers the unit circle in the complex plane. However, this statement has recently been generalized to include a family of discrete eigenvalues and eigenvectors (Gamow vectors) in a rigged Hilbert space. [1] In this talk, we want to show that squeezed states appear in the development of a classical optical shock phenomenon. A shock wave is a singular solution of a hyperbolic partial differential equation, a class of equations that describes a wide variety of wave-like phenomena in physics, ranging from fluid dynamics to plasmas and Bose-Einstein condensation (see e.g. [2,3]). Recently, it has been proved that shock-wave evolution in a nonlinear and highly nonlocal medium can be described through the generalized eigenstates of a reversed harmonic oscillator (RHO), called Gamow states. [4] Here we show that the Gamow vector formalism allows one to interpret the squeezing operator as the wave-function propagator of a shock wave. Indeed, we can express the evolution of a Gaussian wave-function in a nonlinear nonlocal medium as a squeezing operator acting on the eigenfunctions of a RHO. Theoretical analysis and numerical simulations reveal that the quadratures (x-p) and (x+p), where x is the position operator and p is the momentum operator, are squeezed during the excitation of a shock wave of a Gaussian light beam in both the temporal and spatial cases. This gives evidence of the presence of squeezed light during shock dynamics. [5] These results open new perspectives on shock phenomena and link quantum optics and its applications to nonlinear waves in extreme regimes. 1. D. Chruscinski, Phys. Lett. A, 327, 290-295 (2004) 2. W. Wan, Shu Jia and J. W. Fleischer, Nat. Phys. 3, 46 - 51 (2007) 3. M. Conforti, F. Baronio and S. Trillo, Phys. Rev. A 89, 013807 (2014) 4. S. Gentilini, M. C. Braidotti, G. Marcucci, E. DelRe and C. Conti, Phys. Rev. A, 92, 023801 (2015) 5. M.C. Braidotti, A. Mecozzi, C. Conti, in preparation.
14:15 - 14:30 Lukas Maczewsky (Universität Rostock)
Optical simulations of unphysical Majorana evolutions
The novel material class of topological insulators exhibits topologically protected, scatter-free and unidirectional transport along their edges, independent of the exact details of the edges. A common way to characterize a two-dimensional topological system is the Chern number of the bands. This number is equal to the difference between the numbers of edge modes entering each band from below and exiting above. However, in periodically driven systems with a time-dependent Hamiltonian the Chern number does not give a full characterization of the topological properties. We implement a waveguide system that exhibits chiral edge modes despite the fact that the Chern numbers of all bands are zero. We show topologically protected edge transport in this fs-laser-written lattice. We thereby demonstrate the implementation of such anomalous Floquet topological insulators in the photonic regime. Authors: Lukas J. Maczewsky, Julia M. Zeuner, Stefan Nolte and Alexander Szameit
14:45 - 15:15 Konstantinos Makris (University of Crete)
Constant-Intensity waves in discrete, nonlinear and disordered media
Following the recent advances in non-Hermitian photonics that were initiated by Parity-Time (PT)-symmetric optics, we will present a new type of waves of constant intensity that exist only in complex media with suitably engineered gain-loss distributions. Several aspects and properties of such waves will be examined in discrete coupled waveguides-cavities (unidirectional invisibility), nonlinear systems (modulation instabilities) and disordered scattering media (perfect transmission).
15:15 - 15:45 Dmitry Skryabin (University of Bath)
Theory of frequency combs in microrings
15:45 - 16:00 Fabrizio Sgrignuoli (LENS and Dipartimento di Fisica e Astronomia)
Light propagation in random media: interplay between order, disorder and correlation
Nowadays, the integration of a single quantum emitter with different photonic architectures poses a great research challenge. Indeed, this interconnection can be considered the heart of various cutting-edge research topics including cryptography, cavity quantum electrodynamics, and quantum information networks. A common requirement for all these implementations is highly engineered optical photonic platforms, which are inherently sensitive to fabrication imperfections. In other words, the term disorder in photonics tends to carry a negative stigma. Nevertheless, it is also the common denominator of a plethora of phenomena characterized by a multiple-scattering description: multiple scattering of photons in disordered dielectric structures offers an alternative route to light confinement. Recent advances in the engineering of light confinement in these systems have demonstrated the ability to fully control the spectral properties of an individual photonic mode, paving the way for the creation of open transmission channels in strongly scattering media. The optical confinement and the coupling between modes open new perspectives on the control of the light flow in random media, as well as the possibility to build architectures tuned for efficient light-matter interaction. We have manipulated the interplay between order, disorder and correlation to tune and control the flow of light in complex systems. In detail, we have investigated the appearance of quasi-modes, called necklace states, to create a chain of hybridized localized states extending from one end of disordered dielectric membranes to the other. Preliminary results show a profound relation between the mode spatial extent and the reminiscence of the photonic band gap of different random membranes. In other words, the occurrence of pseudogaps in the photon density of states seems to be the fundamental ingredient for designing optically disordered membranes with a high degree of interconnection.
16:30 - 17:30 dindos17 Colloquium
Chair: André Eckardt (MPIPKS)
Mordechai Segev (Israel Institute of Technology)
Topological photonics and topological insulator lasers
Welcome banquet
09:00 - 09:45 Immanuel Bloch (Max-Planck-Institut für Quantenoptik)
Probing topological and localization phenomena using ultracold atoms
09:45 - 10:15 Matthias Heinrich (Friedrich-Schiller-Universität Jena)
Supersymmetric photonics
In recent years, the ever-increasing demand for high-capacity transmission systems has driven remarkable advances in technologies that encode information on an optical signal. Mode-division multiplexing makes use of individual modes supported by an optical waveguide as mutually orthogonal channels. The key requirement in this approach is the capability to selectively populate and extract specific modes. Optical supersymmetry (SUSY) has recently been proposed as a particularly elegant way to resolve this design challenge in a manner that is inherently scalable, and at the same time maintains compatibility with existing multiplexing strategies. Supersymmetric partners of multimode waveguides are characterized by the fact that they share all of their effective indices with the original waveguide. The crucial exception is the fundamental mode, which is absent from the spectrum of the partner waveguide. Here, we demonstrate experimentally how this global phase-matching property can be exploited for efficient mode conversion. Multimode structures and their superpartners are experimentally realized in coupled networks of femtosecond laser-written waveguides, and the corresponding light dynamics are directly observed by means of fluorescence microscopy. We show that SUSY transformations can readily facilitate the removal of the fundamental mode from multimode optical structures. In turn, hierarchical sequences of such SUSY partners naturally implement the conversion between modes of adjacent order. Our experiments illustrate just one of the many possibilities of how SUSY may serve as a building block for integrated mode-division multiplexing arrangements. Supersymmetric notions may enrich and expand integrated photonics by versatile optical components and desirable, yet previously unattainable, functionalities.
10:15 - 10:45 Hui Cao (Yale University)
Spatial coherence engineering of lasers
11:15 - 11:45 Mario Molina (Universidad de Chile)
A simplified model for magnetic metamaterials: case studies
Magnetic metamaterials consist of artificial structures whose magnetic response can be tailored to a certain extent. A common realization of such a system consists of an array of metallic split-ring resonators (SRRs) coupled inductively. This type of system can feature a negative magnetic response in some frequency window, making such systems attractive for use as a constituent in negative-refraction-index materials. In the absence of dissipation and nonlinearity, periodic arrays of SRRs support the propagation of magnetoinductive waves, and the eigenfrequencies of the system form bands. In this talk, we will show several examples of localized modes, Fano resonances, PT-symmetry and flat bands in the context of magnetic metamaterials. In particular we will focus on: (a) Bulk and surface magnetoinductive breathers in binary metamaterials (b) Defect modes, Fano resonances and Embedded states in Magnetic Metamaterials. (c) Bounded dynamics of finite PT-symmetric magnetoinductive arrays (d) Flat band modes in quasi-one-dimensional magnetic metamaterials
11:45 - 12:00 Markus Gräfe (Friedrich-Schiller-Universität Jena)
Bloch Oscillations of non-local N00N states
Wave functions traversing a lattice potential subjected to an external gradient force exhibit a periodic spreading and re-localization known as Bloch oscillations (BOs). Although BOs are deeply rooted in the very foundations of quantum mechanics, all experimental observations of this phenomenon so far have only considered the dynamics of one or two particles initially prepared in separable local states. A more general description of the phenomenon should evidently contemplate the evolution of nonlocal states. In our work, we experimentally investigate BOs of two-photon nonlocal N00N states, defined as $|\psi^{(\theta)}\rangle = \frac{1}{2}[(\hat{a}^{\dagger}_m)^2 + e^{i\theta}(\hat{a}^{\dagger}_n)^2]|0\rangle$, where the indices $m$ and $n$ refer to the lattice sites and $\theta$ is a relative phase. Using the direct laser writing technique, we have fabricated BO lattices with 16 waveguides following a curved trajectory, designed to investigate the time-evolution of the quantum correlations of N00N states undergoing BOs. Three input states are considered (symmetric: $\theta = 0$, antisymmetric: $\theta=\pi$, and partially symmetric: $\theta\in(0,\pi)$). Importantly, all three states are prepared on-chip by introducing detuned directional couplers, which guarantee ultra-high experimental stability. Quantum features of light are probed by two-photon correlations. The spatial correlation function $\Gamma_{k,l}(z)$ is obtained through measurements of the probability of simultaneously detecting one photon exiting from waveguide $k$ and its twin at site $l$ after propagation over $z$. Our results clearly indicate a correlation turning point and verify the theoretically predicted cyclic behavior of the correlation patterns. This may render BO lattices a quantum simulator capable of tailoring particle statistics.
12:00 - 12:30 Klaus Ziegler (Universität Augsburg)
Edge modes in photonic crystals
In the presence of strong random scattering the behavior of photons in photonic crystals with degenerate spectra is quite different from Anderson localization of photons in a single band: it creates geometric states rather than confining the photons to an area of the size of the localization length. This type of confinement can be understood as angular localization, where the photons of a local light source can propagate only in certain directions. The directions are determined by the boundary of the spectrum. Thus, the system's properties on the shortest scales determine the behavior of the photon propagation on the largest scales.
Lunch and discussions
14:15 - 14:45 Netanel Lindner (Technion - Israel Institute of Technology)
Probing anomalous topological phenomena
Periodically driven quantum systems provide a novel and versatile platform for realizing topological phenomena. Among these are analogs of topological insulators and superconductors, attainable in static systems. However, some of these phenomena are unique to the periodically driven case. I will describe how the interplay between periodic driving, disorder, and interactions gives rise to new steady states exhibiting robust topological phenomena, with no analogues in static systems. Specifically, I will show that disordered two dimensional driven systems admit an "anomalous" phase with chiral edge states that coexist with a fully localized bulk. I will show that the micromotion of particles in this phase gives rise to a quantized time-averaged magnetization density when the system is filled with fermions. Furthermore we find that a quantized current flows around the boundary of any filled region of finite extent. The quantization has a topological origin: we relate the time-averaged magnetization density to a topological winding number characterizing the new phase. Thus, we establish the winding number invariant can be accessed directly in bulk measurements, and propose an experimental protocol to do so using interferometry in cold-atom based realizations. In addition, I will show that the bulk topological invariant can be accessed by measuring the current flowing between two terminals when a large bias is applied.
14:45 - 15:15 Goëry Gentry (Tampere University of Technology)
Real-time measurement of chaotic nonlinear fiber systems
15:45 - 16:15 Sergey Skipetrov (Centre National de la Recherche Scientifique (CNRS))
Anderson transitions for light and sound in 3D
16:15 - 16:30 Alexander Iomin (Technion - Israel Institute of Technology)
Hyper-diffusion of quantum waves in random photonic lattices
16:30 - 16:45 Carlo Danieli (Institute for Basic Science)
Quasiperiodic driving of Anderson localized waves in one dimension
We consider a quantum particle in a one-dimensional disordered lattice with Anderson localization, in the presence of multi-frequency perturbations of the onsite energies. Using the Floquet representation, we transform the eigenvalue problem into a Wannier-Stark basis. Each frequency component contributes either to a single channel or a multi-channel connectivity along the lattice, depending on the control parameters. The single channel regime is essentially equivalent to the undriven case. The multi-channel driving substantially increases the localization length for slow driving, showing two different scaling regimes of weak and strong driving, yet the localization length stays finite for a finite number of frequency components.
16:45 - 17:15 Neil Broderick (University of Auckland)
Noise-induced death of temporal external cavity solitons
We present experimental and theoretical investigations into noise induced switching between bistable states of an optically pumped semiconductor laser with feedback. In particular we show that even small amounts of noise can destroy a regular pulse train while a much greater amount of noise is required to create it. The statistics of the experimental system are compared to realistic simulations and a good agreement is found. Future applications for creating more robust systems will be discussed.
17:15 - 17:30 Martin Wimmer (Friedrich Schiller Universität Jena)
Experimental demonstration of anomalous transport in a photonic mesh lattice
We demonstrate an experimental measurement of the Berry curvature in an optical network. By coupling two fiber loops of different length, a mesh lattice is generated, which is spanned by a discrete time and position. The dimension of the system is further increased by implementing a temporal driving of the lattice similar to a Thouless pump. However, as our system is topological trivial, a geometrical charge pump is instead used, which relies on the excitation of a spectrally narrow wave packet, which probes the underlying Berry curvature. By tracing the wave packet during its propagation through the lattice with a high accuracy, a lateral displacement is measured, which originates from the anomalous velocity due to the Berry curvature of the driven lattice. Furthermore, the time dependent measurement of the displacement allows for a full reconstruction of the Berry curvature, which we demonstrated for two different temporal drivings. M. Wimmer et al. accepted for publication in Nature Physics.
09:00 - 09:45 Nader Engheta (University of Pennsylvania)
Photonic doping
09:45 - 10:15 Diederik Wiersma (LENS , University of Florence)
Can heterogeneity enhance light trapping?
10:15 - 10:45 Shanhui Fan (E.L. Ginzton Laboratory)
Parity-time symmetry and robust wireless power transfer
Group picture (to be published on event's webpage) & Coffee break
11:15 - 11:45 Cornelia Denz (Universität Münster)
Catastrophes in light: Spatial caustics in nonlinear photonics
Many intriguing phenomena in nature are connected to singularities, ranging from black holes to phase transitions. In optics, the most prominent examples of singularities are those appearing in the phase of light as optical vortices or in polarization as points and lines, ranging from sunlight to the light transmitted by birefringent materials. A unique type of singularity common in wave phenomena is the caustic, representing a singularity in the intensity of a wave field. Caustics appear when multiple rays of light coalesce and form bright focusing features, representing the singular point. They were first observed in the 1970s and 1980s in natural optics, at rainbows, or as networks of high intensity at the bottom of shallow waters. It is only recently, with the maturity of tailoring and sculpting light, that the artificial and controlled embedding of caustics in light fields has led to accelerating Airy and Pearcey beams with unique propagation properties. Like many other types of singularities, caustics have universal properties and are not restricted to optics. Their structure and diffraction patterns can be described in terms of universal classes, according to the dimensionality of the system. Mathematically, caustics are described in terms of catastrophe theory, a branch of bifurcation theory in nonlinear physics, which studies how the qualitative behavior of a dynamical system changes upon small shifts in external control parameters known as bifurcations. Each caustic can be associated with a catastrophe class that dictates its properties. In our contribution, we will exploit the basic class of the fold catastrophe, leading to Airy beams, as well as higher-order caustics to create nonlinear refractive-index lattices. Due to the striking features of these caustic light structures, including acceleration in the transverse direction, auto-focusing, and form-invariant propagation, we demonstrate photonic lattices with extraordinary wave-guiding and splitting behavior. Nonlinear light-matter interaction of complex light with caustic lattices also leads to novel solitary light phenomena.
11:45 - 12:00 Diego Julio Cirilo-Lombardo (BLTP-JINR and INFIP-CONICET)
Coherent states, dark qubits and thermalization
In previous works [1,2] a new bound coherent state construction, based on a Keldysh conjecture, was introduced. In this contribution we will show that the coherent states of [1,2] are formed by bilinear combinations of paraboson states. As was shown in [1], the particular group structure arising from the model leads to new symmetry transformations for the coherent state system. The emergent new symmetry transformation is reminiscent of the Bogoliubov ones and was successfully applied to describe an excitonic system, showing that it is intrinsically related to the stability and the general physical behavior of the system. The group theoretical structure of the model makes it possible to analyze its thermal properties in theoretical frameworks that arise as a consequence of the definition of the squeezed-coherent states as transformed vacua under the automorphism group of the commutation relations, such as the thermofield dynamics of Umezawa and other similar developments. On the other hand, the possible application of such a coherent state construction to the quantum-dot-confined dark exciton in semiconductors has recently attracted the attention of both experimentalists and theoreticians, one of the main questions being what the influence of nonzero temperature is on qubit stability [3]. In this paper we consider the theoretical treatment of the thermal behaviour of the dark exciton by means of a new coherent state construction in a quantum field theoretical context. 1) Cirilo-Lombardo, Phys.Part.Nucl.Lett. 11 (2014) 502-505 2) Cirilo-Lombardo, D.J. J Low Temp Phys (2015) 179: 55. doi:10.1007/s10909-014-1236-z 3) I. Schwartz, E. R. Schmidgall, L. Gantz, D. Cogan, E. Bordo, Y. Don, M. Zielinski, and D. Gershoni Phys. Rev. X 5, 011009
12:00 - 12:15 Alain Bertrand Togueu Motcheyo (University of Yaounde I)
An alternative way of generating supratransmission phenomenon in nonlinear discrete systems
For the natural supratransmission phenomenon, there exists an amplitude threshold above which energy can flow into the lattice. We show numerically that nonlinear band-gap transmission is possible with a driving amplitude below this threshold, using the method of Togueu Motcheyo et al. [Phys. Rev. E 88, 040901(R) (2013)].
Excursion and Conference Dinner
09:00 - 09:30 Marco Ornigotti (Universität Rostock)
Quantum X Waves with Orbital Angular Momentum in Nonlinear Dispersive media
X waves are solutions of Maxwell's equations that exhibit neither diffraction nor dispersion during propagation. Traditionally, X waves are understood as superpositions of zeroth-order Bessel beams, therefore carrying no orbital angular momentum (OAM). A generalisation to the case of OAM-carrying X waves, however, has been proposed recently, highlighting new features and possibilities, especially in free-space classical and quantum communications [1]. For the latter case, in particular, a careful analysis of the quantum properties of X waves with OAM is needed. In this work, we present a complete and consistent quantum theory of generalised X waves carrying orbital angular momentum, propagating in nonlinear dispersive media. Our quantisation scheme is based on canonical quantisation in terms of collections of harmonic oscillators co-moving with the X wave itself. Within this framework, we show that the resulting quantised pulses are resilient against external perturbation, due to their intrinsic non-diffracting and non-dispersive nature. Moreover, as an example application of our formalism, we consider explicitly the case of squeezing in $\chi^2$ media. Our findings reveal that the OAM carried by the X wave determines which quadrature of the field is squeezed, while the X-wave velocity (namely, its Bessel cone angle) regulates the amount of squeezing. In particular, we show that there exists an optimal Bessel cone angle for which the squeezing is maximal. References: [1] M. Ornigotti, C. Conti and A. Szameit, Phys. Rev. Lett. 115, 100401 (2015).
09:30 - 09:45 Fabio Revuelta (Universidad Politecnica de Madrid and ICMAT)
Dynamical localization in non-ideal kicked rotators
A new theoretical framework is developed to study dynamical localization (the quantum suppression of classical diffusion) in the context of ultracold atoms in periodically shaken optical lattices. Our method is capable of accounting for finite-time modulations with different shapes, thus going beyond the paradigmatic kicked rotator.
09:45 - 10:15 Sergej Flach (Institute for Basic Science, Daejeon)
Designing and perturbing flatband networks
10:15 - 10:45 Gael Favraud (KAUST University)
Complex epsilon-near-zero biomimetic metasurfaces for structural colors, photovoltaics and photocatalysis applications
11:15 - 11:45 Yaroslav Kartashov (ICFO-Institut de Ciencies Fotoniques)
Edge states and quasi-solitons in polariton topological insulators
11:45 - 12:15 Robert Thomson (Heriot Watt University)
Photonic lanterns and their applications
During this talk, I will discuss the "photonic-lantern" - a remarkable photonic technology that facilitates the efficient coupling of multimode light to single-mode photonic devices. I will also discuss how these devices are being used for a variety of applications, including astronomy and telecommunications.
12:15 - 12:30 Ragnar Fleischmann (Max Planck Institute for Dynamics and Self-Organization)
Channeling of branched flow in weakly scattering anisotropic media
When waves propagate through weakly scattering but correlated, disordered environments, they are randomly focused into pronounced branch-like structures, a phenomenon referred to as branched flow, which has been studied in a wide range of isotropic random media. In many natural environments, however, the fluctuations of the random medium typically show pronounced anisotropies. We study the influence of anisotropy on such natural focusing events and find a strong and non-intuitive dependence on the propagation angle, which we explain by stochastic ray theory.
14:15 - 14:45 Kestutis Staliunas (Universitat Politecnica de Catalunya)
Photonic crystal microchip laser
14:45 - 15:15 Andrey Sukhorukov (Australian National University)
Simulation of multi-dimensional and topological effects in planar waveguide arrays
Optical photonic lattices attract strong interest as a flexible experimental platform with applications ranging from beam shaping and switching to quantum photonics. Importantly, various array designs were suggested by mapping quantum Hamiltonian dynamics to the optical beam evolution. We suggest a novel concept and formulate a general mathematical procedure to exactly map the system dynamics from arbitrary multi-dimensional lattices to light propagation in a one-dimensional optical waveguide array. We realize this approach experimentally in a fs laser written waveguide lattice, and demonstrate the equivalent mapping of a 2D square lattice to a planar array. We anticipate that our results can open new possibilities for implementation of multi-dimensional effects, including topological phenomena, in various planar integrated platforms including Si photonic chips.
15:45 - 16:15 Peter Schmelcher (Universität Hamburg)
Local symmetries as a systematic pathway to the breaking of discrete symmetries in wave propagation
16:15 - 16:30 Morteza Kamalian Kopae (Aston University)
Periodic nonlinear Fourier transform in fibre-optic communication
Recent demonstrations of nonlinear Fourier transform (NFT)-based fibre-optic communication systems have shown the potential of the NFT to overcome some fibre impairments. These systems usually consider signals that vanish as time tends to infinity. Working with periodic signals, however, brings benefits that are particularly relevant to the requirements of communication applications. To name a few: the periodic NFT (PNFT) makes it possible to control the time duration of the signal, which makes the encoding procedure at the transmitter more straightforward than for its vanishing-signal counterpart. Because a cyclic extension is inserted to avoid inter-symbol interference, the processing window of the PNFT system can be considerably smaller, especially in high-rate transmissions. On top of that, one can avoid the sudden drop in signal power that results from the conventional NFT's requirement of vanishing boundaries. In this work we simulate and evaluate the performance of a communication system based on the PNFT.
16:30 - 16:45 Sebabrata Mukherjee (Heriot-Watt University)
Experimental observation of anomalous topological edge modes in a slowly-driven photonic lattice
Topologically protected quantum effects can be observed in periodically driven systems, i.e., systems whose Hamiltonian is time-periodic. In the limit of low-frequency driving (driving frequency ~ hopping amplitude), the standard topological invariants, such as Chern numbers, may not be sufficient to describe the topology of the system. In this situation, chiral edge modes can exist even if the Chern numbers of all the bulk bands are zero. We demonstrate the experimental observation of such "anomalous" topological edge modes in a two-dimensional photonic lattice, where these propagating edge states are shown to coexist with a quasi-localized bulk. Ref.: Nat. Commun. 8, 13918 (2017).
16:45 - 17:15 Frank Wise (Cornell University)
Self-organized instability in multimode optical fiber with disorder
17:15 - 17:30 Sebastian Schönhuber (Technische Universität Wien)
Random lasers for broadband directional emission
Broadband coherent light sources are becoming increasingly important for sensing and spectroscopic applications, especially in the mid-infrared and terahertz (THz) spectral regions, where the unique absorption characteristics of a whole host of molecules are located. The desire to miniaturize such light emitters has recently led to spectacular advances, with compact on-chip lasers that cover both of these spectral regions. The long wavelength and small size of the sources result, however, in a strongly diverging laser beam that is difficult to focus on the target on which one aims to perform spectroscopy. In my presentation I will introduce an unconventional solution to this vexing problem, relying on a random laser to produce coherent broadband THz radiation as well as an almost diffraction-limited far-field emission profile [1]. Our quantum cascade random lasers do not require any fine-tuning and thus constitute a promising example of practical device applications for random lasing (see a recent discussion of our work in [2]). References [1] S. Schönhuber, M. Brandstetter, T. Hisch, C. Deutsch, M. Krall, H. Detz, G. Strasser, S. Rotter, and K. Unterrainer "Random lasers for broadband directional emission," Optica 3, 1035 (2016). [2] D. S. Wiersma "Optical physics: Clear directions for random lasers." Nature 539, 360 (2016).
09:00 - 09:45 Federico Capasso (Harvard University)
New paths to structured light and structured color with metasurfaces
Patterning surfaces with subwavelength-spaced metallo-dielectric features (metasurfaces) allows one to generate complex wavefronts by locally controlling the amplitude, phase and polarization of the scattered light [1,2]. Recent developments in high-performance metalenses and applications to chiral imaging [3] and high-resolution ultracompact spectrometers [4] will be discussed, along with spin-to-orbital angular momentum conversion [5] and the creation of arbitrary Bessel beams using a single phase plate [6]. We will present a method which significantly expands the scope of metasurface polarization optics by enabling the imposition of two independent and arbitrary phase profiles on any pair of orthogonal states of polarization (linear, circular, or elliptical), relying only on simple, linearly birefringent wave-plate elements arranged into metasurfaces [7]. Using this approach, we demonstrated chiral holograms characterized by fully independent far fields for each circular polarization, and elliptical-polarization beam splitters, both in the visible [7]. Finally, we developed a new approach based on large-scale disordered metasurfaces, which combines de-alloyed subwavelength structures at the nanoscale with loss-less, ultra-thin dielectric coatings [8]. By using theory and experiments, we show how sub-wavelength dielectric coatings control a mechanism of resonant light coupling with epsilon-near-zero (ENZ) regions generated in the metallic network, manifesting in the formation of saturated structural colors that cover a wide portion of the spectrum. [1] P. Genevet, et al., Optica 4, 139 (2017). [2] N. Yu and F. Capasso, Nature Materials 13, 139 (2014). [3] M. Khorasaninejad et al., Nano Letters 16, 4595 (2016). [4] Alexander Y. Zhu et al., APL Photonics 2, 036103 (2017). [5] Robert C. Devlin et al., Optics Express 25, 377 (2017). [6] Wei Ting Chen et al., Light: Science & Applications 6, e16259 (2017); doi:10.1038/lsa.2016.259. [7] J. P. Balthasar Mueller et al., Phys. Rev. Lett. 118, 113901 (2017). [8] H. Galinski, et al., Light: Science & Applications 6, e16233 (2017); doi:10.1038/lsa.2016.233.
09:45 - 10:15 Claudio Conti (National Research Council)
Quantum soliton evaporation and related phenomena
We review our recent work on quantum solitons and quantum-inspired phenomena in nonlinear wave propagation.
10:30 - 11:00 Alexander Szameit (Universität Rostock)
Non-Hermitian topological photonics
The recent development of topological insulator states for photons [1–3] has led to an exciting new field of "topological photonics," which has as a central goal the development of extremely robust photonic devices [4]. Topological insulator physics has been studied extensively in condensed matter [5] and cold atoms [6]. However, photonic systems offer the possibility of non-Hermitian effects through the engineering of optical gain and loss, which opens an entirely new field of physics. In this talk, we will discuss our recent experimental work on non-Hermitian topological phenomena in coupled lattice systems, in particular the first demonstration of a non-Hermitian PT-symmetric topological interface state. References [1] Haldane, F. D. M. & Raghu, S., Phys. Rev. Lett. 100, 013904 (2008). [2] Wang, Z. et al., Nature 461, 772–775 (2009). [3] Rechtsman, M. C. et al., Nature 496, 196–200 (2013). [4] Hafezi et al., Nature Physics 7, 907-912 (2011). [5] Hasan, M. Z. & Kane, C. L., Rev. Mod. Phys. 82, 3045–3067 (2010). [6] Jotzu, G. et al., Nature 515, 237–240 (2014).
12:00 - 12:15 discussions
Cognitive Research: Principles and Implications
Lineup fairness: propitious heterogeneity and the diagnostic feature-detection hypothesis
Curt A. Carlson (ORCID: orcid.org/0000-0002-3909-7773)1,
Alyssa R. Jones1,
Jane E. Whittington1,
Robert F. Lockamyeir1,
Maria A. Carlson1 &
Alex R. Wooten1
Cognitive Research: Principles and Implications, volume 4, Article number: 20 (2019)
The Correction to this article has been published in Cognitive Research: Principles and Implications 2019 4:30
Researchers have argued that simultaneous lineups should follow the principle of propitious heterogeneity, based on the idea that if the fillers are too similar to the perpetrator even an eyewitness with a good memory could fail to correctly identify him. A similar prediction can be derived from the diagnostic feature-detection (DFD) hypothesis, such that discriminability will decrease if too few features are present that can distinguish between innocent and guilty suspects. Our first experiment tested these predictions by controlling similarity with artificial faces, and our second experiment utilized a more ecologically valid eyewitness identification paradigm. Our results support propitious heterogeneity and the DFD hypothesis by showing that: 1) as the facial features in lineups become increasingly homogenous, empirical discriminability decreases; and 2) lineups with description-matched fillers generally yield higher empirical discriminability than those with suspect-matched fillers.
Mistaken eyewitness identification is one of the primary factors involved in wrongful convictions, and the simultaneous lineup is a common procedure for testing eyewitness memory. It is critical to present a fair lineup to an eyewitness, such that the suspect does not stand out from the fillers (known-innocent individuals in the lineup). However, it is also theoretically possible to have a lineup with fillers that are too similar to the suspect, such that even an eyewitness with a good memory for the perpetrator may struggle to identify him. Our first experiment tested undergraduate participants with a series of lineups containing computer-generated faces so that we could control for very high levels of similarity by manipulating the homogeneity of facial features. In support of two theories of eyewitness identification (propitious heterogeneity and diagnostic feature-detection), the overall accuracy of identifications was worst at the highest level of similarity. Our second and final experiment investigated two common methods of creating fair lineups: selecting fillers based on matching the description of the perpetrator provided by eyewitnesses, or matching a suspect who has already been apprehended. A nationwide sample of participants from a wide variety of backgrounds watched a mock crime video and later made a decision for a simultaneous lineup. We found that description-matched lineups produced higher eyewitness identification accuracy than suspect-matched lineups, which could be due in part to the higher similarity between fillers and suspect for suspect-matched lineups. These results have theoretical importance for researchers and also practical importance for the police when constructing lineups.
Mistaken eyewitness identification (ID) remains the primary contributing factor to the over 350 false convictions revealed by DNA exonerations (Innocence Project, 2019), and is a factor in 29% of the over 2200 exonerations nationally (National Registry of Exonerations, 2018). As a result, psychological scientists continue to study the problem, researching aspects of the crime as well as the ID procedure and other issues. Here, we investigate how police should select fillers for lineups in order to maximize eyewitness accuracy.
A lineup should be constructed so that the suspect does not stand out, with reasonably similar fillers (e.g., Doob & Kirshenbaum, 1973; Lindsay & Wells, 1980; Malpass, 1981; National Institute of Justice, 1999). Often the goal is to reduce bias toward the suspect in a lineup (Lindsay, 1994), but sometimes the issue of too much filler similarity is addressed. For example, Lindsay and Wells (1980) found that using fillers that matched the perpetrator's description, as opposed to matching the suspect, reduced false IDs more than correct IDs (see also Luus & Wells, 1991). They concluded that eyewitness ID accuracy is best if the fillers do not match the suspect too poorly (see also Lindsay & Pozzulo, 1999) and do not match the suspect too well, as they can when matched to the suspect rather than description of the perpetrator.
This recommendation to avoid a kind of upper limit of filler similarity is based largely on investigating the impact of different filler selection methods (e.g., match to description versus match to suspect) on correct ID rates separately from false ID rates. Usually the recommended procedure is the one that reduces the false ID rate without significantly reducing the correct ID rate (e.g., Lindsay & Pozzulo, 1999). However, Clark (2012) showed that these kinds of "no cost" arguments do not hold under scrutiny. The true pattern of results that arises when manipulating variables to enhance the performance of eyewitnesses is a tradeoff, such that a manipulation (e.g., unbiased lineup instructions, more similar fillers, sequential presentation of lineup members) tends to lower both false and correct IDs.
The best method for determining whether system variable manipulations are producing a tradeoff or actually affecting eyewitness accuracy is receiver operating characteristic (ROC) analysis (e.g., Gronlund, Wixted, & Mickes, 2014; Mickes, Flowe, & Wixted, 2012; Wixted & Mickes, 2012). This approach is based on signal detection theory (SDT; see Macmillan & Creelman, 2005), which separates performance into two parameters: response bias versus discriminability. The tradeoff explained by Clark (2012) is best described by SDT as a shift in response bias, whereas the true goal of system variable manipulations is to increase discriminability. Whenever correct and false ID rates are moving in the same direction, even if one is changing to a greater extent, this pattern could be driven by changes in response bias, discriminability, or both. ROC analysis is needed to make this determination, and we will apply this technique to manipulations of lineup composition in order to shed light on the issue of fillers matching the suspect too well.
Four recent studies also applied ROC analysis to manipulations of lineup fairness. Wetmore et al. (2015, 2016) were primarily concerned with comparing showups (presenting a suspect alone rather than with fillers) with simultaneous lineups, but tangentially compared biased with fair simultaneous lineups. A lineup is typically considered biased if the suspect stands out in some way from the fillers. They found that fair lineups yielded higher empirical discriminability compared with biased lineups. Colloff, Wade, and Strange (2016) and Colloff, Wade, Wixted, and Maylor (2017) also found a significant advantage for fair over biased lineups, but defined bias as the presence of a distinctive feature on only one lineup member, and fair as either the presence of the feature on all lineup members or concealed for all members. It is unclear how these distinctive lineups would generalize to more common lineups containing no such obvious distinctive feature. Lastly, Key et al. (2017) found that fair lineups yielded higher empirical discriminability than biased lineups with more realistic stimuli (no distinctive features). However, their target-present and target-absent lineups were extremely biased, containing fillers that matched only one broad characteristic with the suspect (e.g., weight). The official level of fairness was around 1.0 for these biased lineups based on Tredoux's E' (Tredoux, 1998), which ranges from 1 to 6, with 1 representing extreme bias, and 6 representing a very fair lineup. They compared these biased lineups with a target-present and target-absent lineup of intermediate fairness (Tredoux's E' of 3.77 and 3.15, respectively). Our first experiment will add to this literature by evaluating high levels of similarity between fillers and target faces as a test of propitious heterogeneity and the diagnostic feature detection hypothesis (described below). Our second experiment will contribute at a more practical level as the first comparison of suspect-matched and description-matched lineups with ROC analysis.
Theoretical motivations: propitious heterogeneity and diagnostic feature-detection
Wells, Rydell, and Seelau (1993) argued that lineups should follow the rule of propitious heterogeneity, such that fillers should not be too similar to each other or the suspect (Luus & Wells, 1991; Wells, 1993). At the extreme would be a lineup of identical siblings, such that even a perfect memory of the perpetrator would not help to make a correct ID. Fitzgerald, Oriet, and Price (2015) utilized face morphing software to create lineups with very similar-looking faces. They found that lineups containing highly homogenous faces reduced correct as well as false IDs, thereby creating a tradeoff. More recently, Bergold and Heaton (2018) also found that highly similar lineup members could be problematic, reducing correct IDs and increasing filler IDs. However, neither of these studies applied ROC analysis to address the impact of high similarity among lineup members on empirical discriminability. We will address this issue in the present experiments.
Propitious heterogeneity is a concept with testable predictions (e.g., discriminability will decline at very high levels of filler similarity), but it is not a quantitatively specified theory. In contrast, the diagnostic feature-detection (DFD) hypothesis (Wixted & Mickes, 2014) is a well-specified model that can help explain why it is preferable to have some heterogeneity among lineup members. DFD was initially proffered to explain how certain procedures (e.g., simultaneous lineup versus showup) could increase discriminability. According to this theory, presenting all lineup members simultaneously allows an eyewitness to assess facial features they all share, helping them to determine the more diagnostic features on which to focus when comparing the lineup members to their memory of the perpetrator. However, this should only be useful when viewing a fair lineup in which all members share the general characteristics included in an eyewitness's description of a perpetrator (e.g., Caucasian man in his 20s with dark hair and a beard). Presenting all members simultaneously (as opposed to sequentially or a showup) allows the eyewitness to quickly disregard these shared features in order to focus on features distinctive to their memory for the perpetrator (see also Gibson, 1969).
DFD theory also predicts that discriminability will be higher for fair over biased simultaneous lineups (Colloff et al., 2016; Wixted & Mickes, 2014). All members of a fair lineup should equivalently match the description of the perpetrator, which should allow the eyewitness to disregard these aspects and focus instead on features that could distinguish between the innocent and the guilty. For example, imagine a perpetrator described as a tall heavy-set Caucasian man with dark hair, a beard, and large piercing eyes. Police would likely ensure that all fillers in the lineup match the general characteristics such as height, weight, race, hair color, and that all have a beard. However, the distinctive eyes would be more difficult to replicate. Therefore, when an eyewitness views a simultaneous lineup, he or she should discount the diagnosticity of these broad characteristics, thereby focusing on internal facial features such as the eyes to make their ID. This process, according to DFD theory, should increase discriminability. In contrast, if the only lineup member with a beard is the suspect (innocent or guilty), the lineup would be biased, and an eyewitness might base their ID largely on this distinctive but nondiagnostic feature. Doing so would reduce discriminability.
It is important to note that there is an important distinction between theoretical and empirical discriminability (see Wixted & Mickes, 2018). DFD predicts changes in theoretical discriminability (i.e., underlying psychological discriminability), which involves latent memory signals affecting decision-making in the mind of an eyewitness. Empirical discriminability is the degree to which eyewitnesses can place innocent and guilty suspects into their appropriate categories. Our experiments will focus on empirical discriminability, which is more relevant for real-world policy decisions (e.g., Wixted & Mickes, 2012, 2018). Empirical discriminability can be used to test the DFD hypothesis because "theoretical and empirical measures of discriminability usually agree about which condition is diagnostically superior" (Wixted & Mickes, 2018, p. 2). In other words, the goal of our experiments is to utilize a theory of underlying psychological discriminability to make predictions about empirical discriminability. Other researchers have noted that it is critical to ground eyewitness ID research in theory (e.g., Clark, Benjamin, Wixted, Mickes, & Gronlund, 2015; Clark, Moreland, & Gronlund, 2014).
The four ROC studies mentioned above (Colloff et al., 2016, 2017; Key et al., 2017; Wetmore et al., 2015) have provided some support for DFD theory by comparing biased with fair lineups. We instead test another prediction that can be derived from the theory: lineups at the highest levels of similarity between fillers and suspect will actually reduce empirical discriminability. In other words, when fillers are too similar to the suspect, potentially diagnostic features are eliminated, which will reduce discriminability according to DFD theory. Similarly, Luus and Wells (1991) predicted that diagnosticity would decline as fillers become more and more similar to each other and the suspect, and Clark, Rush, and Moreland (2013) predicted diminishing returns as filler similarity increases, based on WITNESS model (Clark, 2003) simulations.
We addressed this issue of high filler similarity first in an experiment with computer-generated faces for experimental control. We then conducted a more ecologically valid mock-crime experiment with real faces to test the issue of high filler similarity in the context of description-matched versus suspect-matched fillers. Matching fillers to the suspect could increase the overall level of similarity among lineup members too much (Wells, 1993; Wells et al., 1993), reducing empirical discriminability. If this is the case, we would minimally expect that the similarity ratings between match-to-suspect fillers and the target should be higher than those between match-to-description fillers and the target (Tunnicliff & Clark, 2000). As described below (Experiment 2), we addressed this and also compared description-matched and suspect-matched lineups in ROC space to determine effects on empirical discriminability. There is still much debate in the literature regarding the benefits of matching fillers to description versus suspect (see, e.g., Clark et al., 2013; Fitzgerald et al., 2015). To our knowledge, we are the first to investigate which approach yields higher empirical discriminability. Moreover, despite the historical advocacy for a description-matched approach, to date there are few direct tests of description-matched versus suspect-matched fillers. Lastly, Clark et al. (2014) found that the original accuracy advantage for description-matched fillers has declined over time. One of our goals is to determine if the advantage is real.
Experiment 1
We utilized FACES 4.0 (IQ Biometrix, 2003) to tightly control all stimuli in our first experiment. This program allows for the creation of simple faces based on various combinations of internal (e.g., eyes, nose, mouth) and external (e.g., hair, head shape, chin shape) facial features. The FACES software is commonly used by police agencies (see www.iqbiometrix.com/products_faces_40.html), and has also been used successfully by eyewitness researchers (e.g., Flowe & Cottrell, 2010; Flowe & Ebbesen, 2007), yielding lineup ID results paralleling results from real faces. Moreover, there is some evidence that FACES are processed similarly to real faces, at least to a degree (Wilford & Wells, 2010; but see Carlson, Gronlund, Weatherford, & Carlson, 2012). Regardless of the artificial nature of these stimuli, we argue that the experimental control they allow in terms of both individual FACE creation as well as lineup creation provides an ideal testing ground for theory. Specifically, with FACES we can precisely control the homogeneity of facial features among lineup members, and then work backward from this extreme level to provide direct tests of propitious heterogeneity and the DFD hypothesis.
Our participants viewed three types of FACES. In one condition, all FACES in all lineups were essentially target clones, except for one feature that was allowed to vary (the eyes, nose, or mouth; see Fig. 1 for examples). Therefore, participants could base their decision on just one feature rather than the entire FACE. The other two conditions varied two versus three features, respectively. DFD theory predicts that discriminability should increase as participants can base their ID decision on more features that discriminate between guilty and innocent suspects. Therefore, we predicted that empirical discriminability would be best when three features vary, followed by two features, and worst when only one feature varies across FACES in each lineup.
Example lineups from Experiment 1 composed of facial stimuli from FACES 4.0. Only the eyes vary in the top left, the eyes and nose vary in the top right, and eyes, nose, and mouth vary in the bottom
The theoretical rationale is presented in Table 1, which is adapted from Table 1 of Wixted and Mickes (2014). Whereas they were interested in comparing showups with simultaneous lineups, here we present three levels of simultaneous lineups that differ only in the number of features that vary across all fillers. As will be described below, we did not have a designated innocent suspect, but the logic is the same, so we will continue with the "Innocent Suspect" label from Wixted and Mickes. Focus first on the Guilty Suspect rows. Following Wixted and Mickes, and based on signal detection theory, we assume that the target (guilty suspect) was encoded with memory strength values of M = 1 and SD = 1.22 (so, variance approximately = 1.5 in the table). This, of course, is the case regardless of the fillers, so this remains constant for every lineup type and feature manipulated in a lineup (f1, f2, f3). These three features (f1–3) are the only source of variance (i.e., potentially diagnostic information) in the lineup. If only one feature varies, this means that all fillers (for both target-present and target-absent lineups) are identical to the target except for one feature (eyes, nose, or mouth in our experiments). If two features vary, then all fillers are identical to the target except for two features; if three features vary, then all fillers are identical to the target except for three features.
Table 1 Memory strength values of three facial features that are summed to yield an aggregate memory strength value for a face in a simultaneous lineup (adapted from Wixted & Mickes, 2014)
Critically, the Innocent Suspect rows change across these levels of similarity, reflecting featural overlap with the guilty suspect. When only one feature varies in the lineup, only f3 differs between fillers and guilty suspect, and f1 and f2 are identical. For example, this occurs when the participant in this condition sees that the lineup is entirely composed of clones except that all lineup members have a different mouth. This is the case for target-present (TP) and target-absent (TA) lineups, making the mouth diagnostic of suspect guilt (only one lineup member serves as the target with the correct mouth). This is represented by the top rows of Table 1: One Feature Varies. For that feature (f3; e.g., mouth), the memory strength values for the innocent suspect are M = 0 and SD = 1 (see Wixted & Mickes, 2014). Moving down to the next lineup type, two features vary, so now the memory strength values for the innocent suspect are set to M = 0 and SD = 1 for f2 as well as f3. This would be the case if, for example, both the nose and the mouth differ between innocent suspect (i.e., all fillers, as in our experiments) and guilty suspect. Finally, the bottom rows represent lineups in which all three features vary (eyes, nose, and mouth), which decreases the overlap between innocent and guilty suspects even further (i.e., between fillers and the target). As can be seen in the far-right column, underlying psychological discriminability is expected to increase as more features are diagnostic of suspect guilt in the lineup, based on the unequal variance signal detection model:
$$ d_a = \frac{\mu_{\text{guilty}} - \mu_{\text{innocent}}}{\sqrt{\left(\sigma_{\text{guilty}}^2 + \sigma_{\text{innocent}}^2\right)/2}} $$
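To make the predicted ordering concrete, here is a minimal sketch (Python; not part of the original analysis) that sums the three per-feature memory strengths described for Table 1, using the assumed values quoted above (M = 1, SD = 1.22 for an encoded feature; M = 0, SD = 1 for a feature that differs between innocent and guilty suspects), and evaluates d_a for each lineup type.

```python
import math

# Assumed per-feature memory-strength parameters (see text):
# a feature encoded from the target has mean 1 and variance ~1.5 (SD = 1.22);
# a feature that differs between innocent and guilty suspect has mean 0, variance 1.
ENCODED = (1.0, 1.22 ** 2)   # (mean, variance) of a feature shared with the guilty suspect
DIAGNOSTIC = (0.0, 1.0)      # (mean, variance) of a feature that varies in the lineup

def d_a(n_varying, n_features=3):
    """Predicted discriminability when n_varying of n_features differ across lineup members."""
    # Guilty suspect: every feature was encoded from the target.
    mu_g = n_features * ENCODED[0]
    var_g = n_features * ENCODED[1]
    # Innocent suspect: shared features match the target, varying features do not.
    shared = n_features - n_varying
    mu_i = shared * ENCODED[0] + n_varying * DIAGNOSTIC[0]
    var_i = shared * ENCODED[1] + n_varying * DIAGNOSTIC[1]
    return (mu_g - mu_i) / math.sqrt((var_g + var_i) / 2)

for k in (1, 2, 3):
    print(f"{k} feature(s) vary: d_a = {d_a(k):.2f}")
# With these assumptions d_a rises from roughly 0.5 (one feature) to about 1.0
# (two features) and 1.5 (three features): discriminability grows as more
# diagnostic features are available.
```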
We assessed whether empirical discriminability would increase as more facial features in each of the fillers differ from the target (i.e., as more features are present that are diagnostic of suspect guilt). In other words, as the fillers look less and less like the target (with more features allowed to vary), participants should be better able to identify the target and reject fillers.
Students from the Texas A&M University – Commerce psychology department subject pool served as participants (N = 100). Based on the within-subjects design described below, this sample size allowed us to obtain 300 data points per cell. Although some more recent eyewitness studies applying ROC analysis to lineup data have included around 500 or more participants or data points per cell (e.g., Seale-Carlisle, Wetmore, Flowe, & Mickes, 2019) other studies have shown that 100–200 is sufficient (e.g., 100–130/cell in Carlson & Carlson, 2014; around 150/cell in Mickes et al., 2012), and so both experiments in this paper included at least 200 data points per experimental cell. We obtained approval from the university's institutional review board for both experiments in this paper, and informed consent was provided by each participant at the beginning of the experiment.
We utilized the FACES 4.0 software (IQ Biometrix, 2003) to create our stimuli (see Fig. 1 for examples). No face had any hair or other distinguishing external characteristics; all shared the same external features as seen in Fig. 1. The only features that varied were the eyes, nose, and/or mouth. The critical independent variable, manipulated within subjects, was how many of these features varied in a given lineup. Under one condition, only one of these features varied in a given lineup. For example, all members of a given lineup were clones except that each would have different eyes. Therefore, participants could base their lineup decision (for both TP and TA lineups) on the eyes alone. The same logic applied to lineups with only the mouth being different among the lineup members, as well as those in which only the nose varied. However, when encoding each face prior to the lineup, participants did not know which of the three features (or how many features, as this was manipulated within subjects) would vary in the upcoming lineup. Under another condition, two of these three features varied in a given lineup, thereby providing participants with more featural information on which to base their ID decision (again, for both TP and TA lineups). Lastly, all three features varied under the third condition of this independent variable. Each target was randomly assigned to a position during creation of the TP lineups (see Carlson et al., 2019, for the importance of randomizing or counter-balancing suspect position), and there was no designated innocent suspect in TA lineups.
Procedure and design
Participants took part in a face recognition paradigm with 18 blocks, and research has shown that lineup responses across multiple trials are similar to single-trial eyewitness ID paradigms (Mansour, Beaudry, & Lindsay, 2017). Both target presence (TP vs. TA lineup) and the number of diagnostic features in each lineup (1–3) were manipulated within subjects. Each of the 18 blocks contained the same general procedure: encoding of a single FACE, distractor task, then lineup. For each encoding phase, we simply presented the target FACE for 1 s in the middle of the screen. The distractor task in each block was a word search puzzle on which participants worked for 1 min between the encoding and lineup phase of each block. The final part of each block was the critical element: a simultaneous lineup of six FACES presented in a 2 × 3 array, and participants were instructed to identify the target presented earlier in that block, which may or may not be present. They could choose one of the six lineup members or reject the lineup. After their decision, they entered their confidence on an 11-point scale (0–100% in 10% increments), and then the next block automatically began. There were three blocks dedicated to each of the six experimental cells: 1) TP vs TA lineup with one feature varying; 2) TP vs TA lineup with two features varying; and 3) TP vs TA lineup with three features varying. Each participant viewed a randomized order of these blocks.
See Table 2 for all correct, false, and filler IDs, along with lineup rejections. We will first describe the results of ROC analysis, followed by TP versus TA lineup data separately (Gronlund & Neuschatz, 2014). We applied Bonferroni correction (α = .05/3 = .017) to control Type I error rate due to multiple comparisons.
Table 2 Number of identifications and rejections from Experiment 1
ROC analysis
It is important to determine how our manipulations affected empirical discriminability independently of a bias toward selecting any suspect (whether guilty or innocent), which is what ROC analysis is designed to accomplish (e.g., Gronlund et al., 2014; Rotello & Chen, 2016; Wixted & Mickes, 2012). As shown in Fig. 2, each condition results in a curve in ROC space constructed from correct and false ID rates across levels of confidence. In order to be comparable to the correct ID rates of targets from TP lineups, the total number of false IDs from TA lineups were divided by the number of lineup members (6) to calculate false ID rates, which is a common approach in the literature when there is no designated innocent suspect (e.g., Mickes, 2015). All data from a given condition reside at the far-right end of its curve, and then the curve extends to the left first by dropping participants with low levels of confidence. Thus, the second point from the far right of each curve excludes IDs that were supported by confidence of 0–20%, then the third point excludes these IDs as well as those supported by 30–40% confidence. This process continues for each curve until the far-left point represents only those IDs supported by the highest levels of confidence (here 90–100%). Confidence thereby serves as a proxy for the bias for choosing any suspect (regardless of guilt), with the most conservative suspect IDs residing on the far left, and the most liberal on the far right.
ROC data from Experiment 1. The curves drawn through the empirical data points are not based on model fits, but rather are simple trendlines drawn in Excel. The correct ID rate on the y axis is the proportion of targets chosen from the total number of target-present lineups in a given condition. The false ID rate on the x axis is the proportion of all filler identifications from the total number of target-absent lineups in a given condition (as we had no designated innocent suspects), divided by the nominal lineup size (six) to provide an estimated innocent suspect ID rate
The level of empirical discriminability for each curve is determined with the partial area under the curve (pAUC; Robin et al., 2011). The farther a curve resides in the upper-left quadrant of the space, the greater the empirical discriminability. The pAUC rather than full AUC is calculated because TA filler IDs are divided by six, thereby preventing false ID rate on the x axis from reaching 1.0. Finally, each pair of curves can be compared with D = (pAUC1 – pAUC2)/s, where s is the standard error of the difference between the two pAUCs after bootstrapping 10,000 times (see Gronlund et al., 2014, for a tutorial).
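As a rough illustration of how such a curve and its pAUC are obtained (a sketch only; the confidence-binned counts and helper function below are hypothetical and stand in for the bootstrap-based package cited above, Robin et al., 2011), each ROC point cumulates suspect IDs made at or above a confidence criterion, and the pAUC is the trapezoidal area under those points:

```python
import numpy as np

# Hypothetical suspect-ID counts per confidence level, ordered from the most
# confident (100%) down to the least confident (0%) responses.
correct_ids   = np.array([30, 25, 20, 15, 10, 8, 6, 5, 4, 3, 2])     # target IDs, TP lineups
ta_filler_ids = np.array([6, 6, 9, 12, 12, 15, 15, 18, 18, 21, 24])  # any ID, TA lineups

n_tp, n_ta, lineup_size = 300, 300, 6

# Each ROC point keeps only IDs made with confidence at or above a criterion,
# so cumulating from the most confident responses sweeps out the curve.
hit_rate = np.cumsum(correct_ids) / n_tp
# With no designated innocent suspect, the innocent-suspect ID rate is
# estimated by dividing TA choices by the nominal lineup size.
false_id_rate = np.cumsum(ta_filler_ids) / n_ta / lineup_size

def pauc(fpr, tpr, cutoff):
    """Trapezoidal partial area under the ROC curve for false ID rates <= cutoff."""
    fpr = np.concatenate(([0.0], fpr))
    tpr = np.concatenate(([0.0], tpr))
    keep = fpr <= cutoff
    return np.trapz(tpr[keep], fpr[keep])

print(f"pAUC = {pauc(false_id_rate, hit_rate, false_id_rate.max()):.4f}")
# Two conditions are then compared with D = (pAUC1 - pAUC2) / s, where s is
# the standard error of the difference obtained by bootstrapping the data.
```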
As seen in Fig. 2, there was no significant difference between three features (pAUC = .088 [.079–.097]) and two features (pAUC = .086 [.075–.096]), D = 0.46, not significant (ns). However, having multiple diagnostic features boosted empirical discriminability beyond just one feature (pAUC = .061 [.050–.072]): (a) two features were better than one, D = 3.98, p < .001, and (b) three features were better than one, D = 4.58, p < .001. This pattern largely supports both the concept of propitious heterogeneity and the DFD hypothesis.
Separate analyses of TP and TA lineups
The number of diagnostic features in each lineup significantly impacted correct IDs, Wald (2) = 9.48, p = .009. Chi-square analyses revealed that, though there was no difference between two and three diagnostic features (χ2 (1, N = 600) = 0.72, ns), we did confirm that having just one diagnostic feature yielded fewer correct IDs compared with both two features (χ2 (1, N = 600) = 8.79, p = .002, ϕ = .12) and marginally fewer compared with three features (χ2 (1, N = 600) = 4.52, p = .02, ϕ = .09).
False IDs (of any lineup member from TA lineups) were affected even more so by the number of diagnostic features, Wald (2) = 159.59, p < .001. Participants were much more likely to choose lineup members from TA lineups when only one feature varied compared with two features (χ2 (1, N = 600) = 81.10, p < .001, ϕ = .37) or three features (χ2 (1, N = 600) = 167.69, p < .001, ϕ = .53). There were also more false alarms when two features varied compared with three, χ2 (1, N = 600) = 19.33, p < .001, ϕ = .18. In summary, unsurprisingly, the more the lineup members matched the target (i.e., with fewer features varying across members), the more participants chose these faces.
In support of other research investigating lineups of high filler similarity (e.g., Fitzgerald et al., 2015), these results indicate that lineups containing very similar fillers could be problematic, as they tended to lower ID accuracy (see also simulations by Clark et al., 2013). We went a step beyond the literature to show with ROC analysis that empirical discriminability declines at the upper levels of filler similarity. Allowing more features to vary among lineup members generally increased accuracy. These preliminary findings support the principle of propitious heterogeneity (e.g., Wells et al., 1993) and the DFD hypothesis (Wixted & Mickes, 2014).
Experiment 2
Here, our goal was to extend the logic of the first experiment to an issue of more ecological importance than lineups of extremely high levels of featural homogeneity, which would not occur in the real world. Instead, we focused on whether police should select fillers based on matching a suspect's description or the suspect himself. Both should lead to fair lineups that yield higher empirical discriminability compared with showups (Wetmore et al., 2015; Wixted & Mickes, 2014) or compared with biased lineups (e.g., Key et al., 2017). However, suspect-matched lineups could have fillers that are more similar to the suspect than description-matched lineups because each filler is selected based directly on the suspect's face. Features that otherwise would be diagnostic of guilt could thereby be replicated in TP lineups, which could reduce the correct ID rate. A greater overlap of diagnostic features would also reduce discriminability according to the DFD hypothesis. In this experiment, we compared suspect-matched with description-matched lineups to determine which should be recommended to police. Others have compared these filler selection methods (e.g., Lindsay, Martin, & Webber, 1994; Luus & Wells, 1991; Tunnicliff & Clark, 2000), but we make two contributions beyond this prior research: 1) we will assess which method yields higher empirical discriminability; and 2) we will test a theoretical prediction based on propitious heterogeneity and the DFD hypothesis that higher similarity between fillers and suspect in suspect-matched lineups will contribute to lower empirical discriminability compared with description-matched lineups.
As mentioned above, based on eyewitness ID studies utilizing ROC analysis (e.g., Carlson & Carlson, 2014; Mickes et al., 2012), we sought a minimum of 200 participants for each lineup that we created. As described below, we created nine lineups, requiring a minimum of 1800 participants. We utilized SurveyMonkey to offer this experiment to a nationwide sample of participants (N = 2159) in the United States. We dropped 194 participants for providing incomplete data or failing to answer our attention check question correctly, leaving 1965 for analysis (see Table 3 for demographics).
Table 3 Demographics for Experiment 2
Mock crime video
We used a mock crime video from Carlson et al. (2016), which presents a woman sitting on a bench surrounded by trees in a public park. A male perpetrator emerges from behind a large tree in the right of the frame, approaches the woman slowly, and grabs her purse before running away. He is visible for 10 s, and is approximately 3 m from the camera when he emerges from behind the tree, and about 1.5 m away when he reaches the victim. A photo of the perpetrator taken a few days later was used as his lineup mugshot.
Description-matched lineups
In order to create description-matched lineups, we first needed a modal description for the perpetrator. A group of undergraduates (N = 54) viewed the mock crime video and then answered six questions regarding the perpetrator's physical characteristics. We used the most frequently reported descriptors to create the modal description (white male, 20–30 years old, tall, short hair, stubble-like facial hair). We gave this description to four research assistants (none of whom ever saw the mock crime video or perpetrator mugshot) and asked each of them to pick 20 matches from various public offender mugshot databases (e.g., State of Kentucky Department of Corrections) to create a pool of 80 description-matched fillers.
We randomly selected 10 mugshots from the description-matched pool to serve as fillers in the two description-matched TP lineups. In order to avoid stimulus-specific effects lacking generalizability (Wells & Windschitl, 1999), we used two designated innocent suspects who were randomly selected from the description-matched pool. To further increase generalizability, we then created two TA lineups for each of these two innocent suspects, for a total of four description-matched TA lineups. Twenty additional mugshots were randomly selected from the pool to serve as fillers in these lineups.
To assess lineup fairness, we presented an independent group of undergraduates (N = 28) with each lineup and they chose the member that best matched the perpetrator's modal description. We used these data to calculate Tredoux's E' (Tredoux, 1998), which is a statistic ranging from 1 (very biased) to 6 (very fair): TP Lineup 1 (3.09), TP Lineup 2 (4.17), Lineup 1 for Innocent Suspect 1 (4.08), Lineup 2 for Innocent Suspect 1 (5.09), Lineup 1 for Innocent Suspect 2 (4.04), and Lineup 2 for Innocent Suspect 2 (4.36).
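For readers unfamiliar with this measure, the following is a minimal sketch of an effective-size calculation of the kind reported here. The formulation E' = 1/Σp_i² (where p_i is the proportion of mock witnesses selecting lineup member i) and the mock-witness counts are assumptions for illustration, not values taken from this study.

```python
# A minimal effective-size sketch, assuming E' = 1 / sum(p_i^2) over the
# proportions of mock witnesses choosing each lineup member.
def tredoux_e(choice_counts):
    total = sum(choice_counts)
    proportions = [c / total for c in choice_counts]
    return 1.0 / sum(p * p for p in proportions)

# Hypothetical mock-witness choices for a six-person lineup (member 1 is the suspect).
fair_lineup = [6, 5, 4, 5, 4, 4]      # choices spread evenly -> E' near 6
biased_lineup = [20, 2, 2, 1, 2, 1]   # suspect stands out    -> E' near 2

print(f"fair:   E' = {tredoux_e(fair_lineup):.2f}")
print(f"biased: E' = {tredoux_e(biased_lineup):.2f}")
```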
Suspect-matched lineups
We started by providing the perpetrator's mugshot to a new group of four research assistants, asking each of them to pick 20 matches from the mugshot databases (e.g., State of Kentucky Department of Corrections) to create a pool of 80 suspect-matched fillers. We randomly selected five mugshots from this pool to serve as fillers in the suspect-matched TP lineup. We then randomly selected 49 mugshots from the description-matched pool, which an independent group of undergraduates (N = 30) rated for similarity to each of the innocent suspects using a 1 (least similar) to 7 (most similar) Likert scale. The five most similar faces to each innocent suspect served as fillers in their respective suspect-matched TA lineup. We therefore had a total of three suspect-matched lineups: one for the perpetrator and one for each innocent suspect (these are the same two innocent suspects as in the description-matched lineups, as police would never apprehend a suspect because he matches a perpetrator). The same group of 28 participants who reviewed the description-matched lineups also evaluated these lineups for fairness, resulting in Tredoux's E' (Tredoux, 1998) of 3.27 for the TP lineup, 4.45 for TA Lineup 1, and 5.16 for TA Lineup 2. These results are comparable to the description-matched lineups.
According to the prediction of Luus and Wells (1991) that a suspect-matched procedure could produce fillers that are too similar to the suspect, similarity ratings should be higher for suspect-matched lineups than for description-matched lineups (see also Tunnicliff & Clark, 2000). This is also necessary according to the DFD hypothesis to create a situation that would lower empirical discriminability. To establish the level of similarity, an independent group of participants (N = 50) rated the similarity of the suspect to each of the five fillers in their respective lineups on a 1 (least similar) to 7 (most similar) Likert scale. Indeed, overall mean similarity between each filler and the suspect was higher for suspect-matched lineups (M = 2.84, SD = 1.26) compared with description-matched lineups (M = 2.11, SD = 1.20), t(49) = 9.05, p < .001. This pattern is consistent across both TP (suspect-matched M = 3.56, SD = 1.39; description-matched M = 2.20, SD = 1.18; t(49) = 9.31, p < .001) and TA lineups (suspect-matched M = 2.48, SD = 1.32; description-matched M = 2.07, SD = 1.22; t(49) = 5.91, p < .001). These patterns, as well as the overall low similarity ratings (all less than the mid-point of the 7-point Likert scale), are consistent with results from earlier studies (e.g., Tunnicliff & Clark, 2000; Wells et al., 1993).
Design and procedure
This experiment conformed to a 2 (filler selection method: suspect-matched vs. description-matched lineup) × 2 (TP or TA lineup) between-subjects factorial design. After informed consent, participants watched the mock crime video followed by another video (about protecting the environment) serving as a distractor for 3 min. After answering a question about the distractor video to confirm that they watched it, each participant was randomly assigned to view a six-person TP or TA simultaneous lineup, containing either suspect-matched or description-matched fillers. All lineups were formatted in a 2 × 3 array, and the position of the suspect was randomized. Each lineup was accompanied with instructions that stated that the perpetrator may or may not be present. Immediately following their lineup decision, participants rated their confidence on a 0%–100% scale (in 10% increments). Finally, they answered an attention check question ("What crime did the man in the video commit?") as well as demographic questions pertaining to age, sex, and race.
As with our earlier experiment, we will first present the results of ROC analysis to determine differences in empirical discriminability, followed by logistic regression and chi-square analyses applied to the TP data separately from the TA data. All reported p values are two-tailed. See Table 4 for all ID decisions across all lineups.
Our primary goal was to determine whether description-matched lineups would increase empirical discriminability compared with suspect-matched lineups. To address this, we compared the description-matched ROC curve with the suspect-matched curve, collapsing over individual lineups (specificity = .84; see Fig. 3). As predicted, matching fillers to description (pAUC = .052 [.045–.059]) increased empirical discriminability compared with matching fillers to suspect (pAUC = .037, [.029–.045]), D = 2.61, p = .009. As for the bias toward choosing any suspect, description-matched lineups overall induced more liberal suspect choosing (as shown by the longer ROC curve in Fig. 3) compared with the suspect-matched lineups. This effect on response bias replicates other research comparing these two methods of filler selection without ROC analysis (Lindsay et al., 1994; Tunnicliff & Clark, 2000; Wells et al., 1993).
ROC data (with trendlines) from Experiment 2 collapsed over the different description-matched and suspect-matched lineups. The false ID rate on the x axis is the proportion of innocent suspect identifications from the total number of target-absent lineups in a given condition
In order to address the robustness of the overall effect on empirical discriminability, we then broke down the curves into four description-matched curves and two suspect-matched curves (Fig. 4; specificity = .66). The description-matched curves were based on correct ID rates from the two TP lineups (each with the same target but different description-matched fillers) combined with false alarm rates from four TA lineups (two with fillers matching the description of innocent suspect 1, and two with fillers matching the description of innocent suspect 2). The two suspect-matched curves are based on the correct ID rate from the one suspect-matched TP lineup and the false alarm rates from the two suspect-matched TA lineups (one for innocent suspect 1 and one for innocent suspect 2). See Table 5 for the pAUC of each curve and Table 6 for the comparison between each description-matched and suspect-matched curve (Bonferroni-corrected α = .05/8 = .006). No suspect-matched curve ever increased discriminability compared with a description-matched curve. Rather, two description-matched curves yielded greater discriminability than both suspect-matched curves.
ROC data (with trendlines) for all description-matched and suspect-matched lineups from Experiment 2
Table 5 Results of receiver operating characteristic analysis for Experiment 2
Table 6 Comparison of each suspect-matched lineup with each description-matched lineup from Experiment 2
We begin with the correct IDs. As a reminder, there was one suspect-matched TP lineup and two description-matched TP lineups, so Bonferroni-corrected α = .05/2 = .025. The full logistic regression model was significant, showing that there were more correct IDs for the description-matched lineups compared with the suspect-matched lineup, Wald (2) = 21.57, p < .001. This pattern was supported by follow-up chi-square tests comparing the suspect-matched lineup with: (a) Description-Matched TP1, χ2 (1, N = 426) = 15.03, p < .001, ϕ = .19; and (b) Description-Matched TP2, χ2 (1, N = 425) = 17.79, p < .001, ϕ = .21. As for filler IDs from TP lineups, the full logistic regression model was again significant, Wald (2) = 46.82, p < .001. The filler ID rate was higher for the suspect-matched lineup compared with both Description-Matched TP1, χ2 (1, N = 426) = 24.41, p < .001, ϕ = .24, and Description-Matched TP2, χ2 (1, N = 425) = 42.52, p < .001, ϕ = .32. Lastly, the model for TP lineup rejections was not significant, Wald (2) = 4.89, p = .087.
Turning to the TA lineups, there were two suspect-matched (each based on its own innocent suspect) and four description-matched (the same two innocent suspects × 2 sets of fillers each). The full model comparing false IDs across all six lineups was significant, Wald (5) = 36.47, p < .001. A follow-up chi-square found that the false ID rate was lower for the suspect-matched lineups compared with the description-matched lineups overall, χ2 (1, N = 1328) = 4.12, p = .042, ϕ = .06. There was no difference in filler IDs or correct rejections. The next step was to compare each suspect-matched lineup with each description-matched lineup to determine the consistency of the pattern of false IDs (Bonferroni-corrected α = .05/8 = .006). Of the eight comparisons, only two were significant: (a) Suspect-Matched TA1 yielded fewer false IDs than Description-Matched TA1.2, χ2 (1, N = 412) = 15.74, p < .001, ϕ = .20; and (b) Suspect-Matched TA2 yielded fewer false IDs than Description-Matched TA1.2, χ2 (1, N = 422) = 16.89, p < .001, ϕ = .20. As can be seen in Table 4, Description-Matched TA1.2 had a higher false ID rate than any other TA lineup, which drove the overall effect of more false IDs for description-matched over suspect-matched lineups. The more consistent finding was no difference in false IDs between the two filler selection methods. We reviewed these lineups in light of these results, and could not determine why the false ID rate was higher for TA1.2, as the innocent suspect does not appear to stand out from the fillers. In fact, this lineup had the highest level of fairness (E' = 5.09) compared with the other description-matched TA lineups (4.08, 4.04, and 4.36). This indicates that Tredoux's E', and likely other lineup fairness measures that are based on a perpetrator's description, could inaccurately diagnose a lineup's level of fairness. This point has recently been supported by a large study comparing several methods of evaluating lineup fairness (Mansour, Beaudry, Kalmet, Bertrand, & Lindsay, 2017).
Confidence-accuracy characteristic analysis
Discriminability is an important consideration when it comes to system variables, such as filler selection method, but the reliability of an eyewitness's suspect identification, given their confidence, is also critical. Whereas ROC analysis is ideal for revealing differences in discriminability, some kind of confidence-accuracy characteristic (CAC) analysis is needed to investigate reliability (Mickes, 2015). In other words, to a judge and jury evaluating an eyewitness ID from a given case, one piece of information will be the filler selection method used by police when constructing the lineup. Another piece of information will be the eyewitness's confidence in their lineup decision, which studies have shown bears a strong relationship to the accuracy of the suspect ID, provided that confidence is recorded immediately after the ID and the lineup is conducted under good conditions (e.g., a double-blind administrator and a fair lineup; see Wixted & Wells, 2017). Recent studies have supported a strong CA relationship across various manipulations, such as weapon presence during the crime (Carlson et al., 2017), amount of time to view the perpetrator during the crime (Palmer, Brewer, Weber, & Nagesh, 2013), and lineup type (simultaneous versus sequential; Mickes, 2015). The present experiment allowed us to test suspect- versus description-matched filler selection methods in terms of the CA relationship. We had no explicit predictions regarding this comparison, but provide the CAC analysis due to its applied importance.
As can be seen in Fig. 5, there is a strong CA relationship across both filler selection methods. The x axis represents three levels of confidence (0–60% for low, 70–80% for medium, and 90–100% for high), which is typically broken down in this way for CAC analysis (see Mickes, 2015). The y axis represents the conditional probability (i.e., positive predictive value): given a suspect ID, what is the likelihood that the suspect was guilty, represented as guilty suspect IDs/(guilty suspect IDs + innocent suspect IDs). Two results are of note from Fig. 5: 1) confidence is indicative of accuracy, such that both curves have positive slopes; and 2) suspect IDs supported by high confidence are generally accurate (85% or higher).
Fig. 5 CAC data from Experiment 2. The bars represent standard errors. Proportion correct on the y axis is #correct IDs/(#correct IDs + #false IDs)
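A minimal sketch of how CAC points like those in Fig. 5 are computed from suspect-ID counts binned by confidence is given below; the counts are hypothetical placeholders rather than the experiment's data.

```python
import numpy as np

# Hypothetical counts of suspect IDs by confidence bin (not the study's data)
bins = ["0-60%", "70-80%", "90-100%"]
guilty_ids   = np.array([40, 70, 120])   # IDs of the guilty suspect (TP lineups)
innocent_ids = np.array([20, 15, 12])    # IDs of a designated innocent suspect (TA lineups)

# Each CAC point is the positive predictive value within a confidence bin
cac = guilty_ids / (guilty_ids + innocent_ids)
for b, v in zip(bins, cac):
    print(f"{b}: proportion correct = {v:.2f}")
```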
This is the first experiment (to our knowledge) to address which method of filler selection, description- versus suspect-match, yields the highest empirical discriminability. We found that matching fillers to description appears to be the preferred approach, as it increased the ability of our participant eyewitnesses to sort innocent and guilty suspects into their proper categories. This was the case when collapsing over all individual lineups and, when making all pairwise comparisons between description- and suspect-matched lineups, we found that no suspect-matched lineup ever increased discriminability beyond a description-matched lineup. Rather, description-matched lineups were either better than, or equivalent to, suspect-matched lineups. We discuss the potential reasons for the overall advantage for description-matched lineups below.
We supported two theories from the eyewitness identification literature: propitious heterogeneity (e.g., Wells et al., 1993) and diagnostic feature-detection (DFD; Wixted & Mickes, 2014) by showing that empirical discriminability decreases as fillers become too similar to each other and the suspect. Our first experiment demonstrated this phenomenon with computer-generated faces that we could manipulate to precisely control levels of similarity among lineup members. Experiment 2 extended this effect to the real-world issue of filler selection, showing that police should match fillers to the description of a perpetrator rather than to a suspect. However, this recommendation is not without its caveats, such as the level of detail of a particular eyewitness's description.
This issue of specificity of the description for description-matched lineups is a question ripe for empirical investigation. To our knowledge, there has been no research on the influence of description quality (i.e., number of fine-grained descriptors) on the development of lineups and resulting empirical discriminability. Based on our findings, we would predict an inverted U-shaped function on empirical discriminability, such that eyewitnesses would perform best on description-matched lineups with fillers matched to a description that is not too vague (see Lindsay et al., 1994) and also not too specific. The former could yield biased lineups, whereas the latter could yield lineups with fillers that are too similar to the perpetrator, akin to the suspect-matched lineups that we tested. We encourage researchers to investigate this important issue of descriptor quality and eyewitness ID. Minimally, this research would address the issue of boundary conditions for description- versus suspect-matched lineups. At what point are suspect-matched lineups superior? Surely, if the description of the perpetrator is sufficiently vague, discriminability would be higher for suspect-matched lineups, but this is an empirical question.
Other than filler similarity, there is at least one more explanation for the reduction in empirical discriminability that we found for suspect-matched lineups. In the basic recognition memory literature, within-participant variance in responses has been shown to reduce discriminability (e.g., Benjamin, Diaz, & Wee, 2009). Mickes et al. (2017) found that variance among eyewitness participants can reduce empirical discriminability in a similar manner. Their variance was created by different instructions prior to the lineup (to induce conservative versus liberal choosing), which could have been interpreted or adhered to differently across participants. Similarly, suspect-matched lineups have an additional source of variance compared with description-matched lineups, which could have contributed to the lowering of empirical discriminability for suspect-matched lineups. For description-matched lineups, all fillers are selected based on matching a single description. Assuming the description is not too vague, this should limit the overall variance across fillers. In contrast, suspect-matched fillers are matched to the target for TP lineups and to a completely different individual (the innocent suspect) for TA lineups. This would likely add variance to the similarity of fillers across these two conditions, thereby lowering empirical discriminability. However, although alternative explanations such as criterial variability are always possible, it is important to note that the DFD theory predicted our results in advance, making it a particularly strong competitor with other potential explanations of the effect of lineup fairness and filler similarity on empirical discriminability. This also illustrates the importance of theory-driven research for the field of eyewitness identification (e.g., Clark et al., 2015).
Conclusion and implications
It is unlikely that a large number of police departments construct highly biased lineups, as most report that they select fillers by matching to the suspect (Police Executive Research Forum, 2013). Therefore, we argue that eyewitness researchers, rather than comparing very biased with fair lineups, should focus on varying levels of reasonably fair lineups that are more like those used by police. Moreover, we acknowledge that it is not always possible to follow a strict match to description procedure. When the description of a perpetrator is very vague, or when there is a significant mismatch between the description and suspect's appearance, matching to the suspect can be acceptable, or some combination of the two procedures (see Wells et al., 1998). However, only about 10% of police in the United States select fillers according to the match to description method recommended by the NIJ (Police Executive Research Forum, 2013; Technical Working Group for Eyewitness Evidence, 1999). This is problematic if additional research supports our finding that suspect-matched lineups reduce empirical discriminability.
However, CAC analysis revealed a strong confidence-accuracy relationship regardless of filler selection method, in agreement with recent research on other variables relevant to eyewitness ID (e.g., Semmler, Dunn, Mickes, & Wixted, 2018; Wixted & Wells, 2017). Therefore, although the ROC results indicate that policy makers should recommend that fillers be selected based on match to (a sufficiently detailed) description, the CAC results indicate that judges and juries should not be concerned with which method was utilized in a given case. If an eyewitness provides immediate high confidence in a suspect ID, this carries weight in gauging the likely guilt of the suspect.
The datasets from these experiments are available from the first author on reasonable request.
We note that there is still some debate in the literature regarding the applicability of ROC analysis to lineup data, with some opposed (e.g., Lampinen, 2016; Smith, Wells, Smalarz, & Lampinen, 2018; Wells, Smalarz, & Smith, 2015), but many in favor (e.g., Gronlund et al., 2012; Gronlund, Wixted & Mickes, 2014; National Research Council, 2014; Rotello & Chen, 2016; Wixted & Mickes, 2012, 2018)
We initially conducted three pilot experiments to test our FACES stimuli. See Additional file 1 for information on these experiments.
We will refer to the perpetrator as the target in the results, in order to be consistent with terminology (e.g., target-present and target-absent lineups) from our initial experiments.
Most eyewitness researchers do not go to these lengths when creating lineups, but we needed to follow these steps to carefully establish well-operationalized suspect-matched versus description-matched lineups. Prior research following similar steps to create fair lineups has also started with a modal description of the perpetrator, but based on a much smaller group of participants (e.g., N = 5; e.g., Carlson, Dias, Weatherford, & Carlson, 2017). We had 10 times as many participants (54) provide descriptions because the resulting modal description was so critical to the purpose of our final experiment, and we therefore wanted it to have a stronger foundation empirically. Later we had only 28 participants choose from each of our lineups the person who best matched the modal description, but this has been shown to be a roughly sufficient number in the literature (e.g., Carlson et al., 2017, based on their Tredoux's E' calculations on 30 participants).
In order to ensure that we had a sufficient number of participants for similarity ratings, we had a sample size somewhat larger than another eyewitness ID study featuring pairwise similarity ratings (N = 34; Charman, Wells, & Joy, 2011).
This specificity is based on the maximum false alarm rate for the most conservative curve (i.e., the shortest curve) so that no extrapolation is required. We repeated all analyses with specificities based on the most liberal curves so that all data from all conditions could be included. The pattern of results in Table 6 remained the same, and overall, suspect-matched lineups (pAUC = .061 [.050–.072]) still had lower discriminability than description-matched lineups (pAUC = .085 [.076–.094]), D = 3.30, p < .001.
With Bonferroni-corrected alpha of .006, one of these four comparisons (Description Match 4 vs. Suspect Match 2; see Table 6) is marginally significant, with p = .01. When setting specificity based on the most liberal rather than most conservative condition's maximum false alarm rate, this difference is significant at p = .001.
Benjamin, A. S., Diaz, M., & Wee, S. (2009). Signal detection with criterion noise: application to recognition memory. Psychological Review, 116, 84–115. https://doi.org/10.1037/a0014351.
Bergold, A. N., & Heaton, P. (2018). Does filler database size influence identification accuracy? Law and Human Behavior, 42, 227. https://doi.org/10.1037/lhb0000289.
Carlson, C. A., & Carlson, M. A. (2014). An evaluation of lineup presentation, weapon presence, and a distinctive feature using ROC analysis. Journal of Applied Research in Memory and Cognition, 3, 45–53. https://doi.org/10.1016/j.paid.2013.12.011.
Carlson, C. A., Dias, J. L., Weatherford, D. R., & Carlson, M. A. (2017). An investigation of the weapon focus effect and the confidence–accuracy relationship for eyewitness identification. Journal of Applied Research in Memory and Cognition, 6(1), 82–92.
Carlson, C. A., Gronlund, S. D., Weatherford, D. R., & Carlson, M. A. (2012). Processing differences between feature-based facial composites and photos of real faces. Applied Cognitive Psychology, 26, 525–540. https://doi.org/10.1002/acp.2824.
Carlson, C. A., Jones, A. R., Goodsell, C. A., Carlson, M. A., Weatherford, D. R., Whittington, J. E., Lockamyeir, R. L. (2019). A method for increasing empirical discriminability and eliminating top-row preference in photo arrays. Applied Cognitive Psychology. in press. doi: https://doi.org/10.1002/acp.3551
Carlson, C. A., Young, D. F., Weatherford, D. R., Carlson, M. A., Bednarz, J. E., & Jones, A. R. (2016). The influence of perpetrator exposure time and weapon presence/timing on eyewitness confidence and accuracy. Applied Cognitive Psychology, 30, 898–910. https://doi.org/10.1002/acp.3275.
Charman, S. D., Wells, G. L., & Joy, S. W. (2011). The dud effect: adding highly dissimilar fillers increases confidence in lineup identifications. Law and Human Behavior, 35(6), 479–500.
Clark, S. E. (2003). A memory and decision model for eyewitness identification. Applied Cognitive Psychology, 17(6), 629–654.
Clark, S. E. (2012). Costs and benefits of eyewitness identification reform: psychological science and public policy. Perspectives on Psychological Science, 7, 238–259. https://doi.org/10.1177/1745691612439584.
Clark, S. E., Benjamin, A. S., Wixted, J. T., Mickes, L., & Gronlund, S. D. (2015). Eyewitness identification and the accuracy of the criminal justice system. Policy Insights from the Behavioral and Brain Sciences, 2, 175–186. https://doi.org/10.1177/2372732215602267.
Clark, S. E., Moreland, M. B., & Gronlund, S. D. (2014). Evolution of the empirical and theoretical foundations of eyewitness identification reform. Psychonomic Bulletin & Review, 21(2), 251–267.
Clark, S. E., Rush, R. A., & Moreland, M. B. (2013). Constructing the lineup: law, reform, theory, and data. In B. L. Cutler (Ed.), Reform of eyewitness identification procedures. Washington, DC: American Psychological Association
Colloff, M. F., Wade, K. A., & Strange, D. (2016). Unfair lineups don't just make witnesses more willing to choose the suspect, they also make them more likely to confuse innocent and guilty suspects. Psychological Science, 27, 1227–1239. https://doi.org/10.1177/0956797616655789.
Colloff, M. F., Wade, K. A., Wixted, J. T., & Maylor, E. A. (2017). A signal-detection analysis of eyewitness identification across the adult lifespan. Psychology and Aging, 32, 243–258. https://doi.org/10.1037/pag0000168.
Doob, A. N., & Kirshenbaum, H. M. (1973). Bias in police lineups-partial remembering. Journal of Police Science and Administration, 1(3), 287–293.
Fitzgerald, R. J., Oriet, C., & Price, H. L. (2015). Suspect filler similarity in eyewitness lineups: a literature review and a novel methodology. Law and Human Behavior, 39, 62–74. https://doi.org/10.1037/lhb00000095.
Flowe, H., & Cottrell, G. W. (2010). An examination of simultaneous lineup identification decision processes using eye tracking. Applied Cognitive Psychology, 25, 443–451. https://doi.org/10.1002/acp.1711.
Flowe, H. D., & Ebbesen, E. B. (2007). The effect of lineup member similarity on recognition accuracy in simultaneous and sequential lineups. Law and Human Behavior, 31, 33–52. https://doi.org/10.1007/s10979-006-9045-9.
Gibson, E. J. (1969). Principles of perceptual learning and development. New York: Appleton-Century-Crofts.
Gronlund, S. D., Carlson, C. A., Neuschatz, J. S., Goodsell, C. A., Wetmore, S. A., Wooten, A., & Graham, M. (2012). Showups versus lineups: an evaluation using ROC analysis. Journal of Applied Research in Memory and Cognition, 1(4), 221–228.
Gronlund, S. D., & Neuschatz, J. S. (2014). Eyewitness identification discriminability: ROC analysis versus logistic regression. Journal of Applied Research in Memory and Cognition, 3, 54–57. https://doi.org/10.1016/j.jarmac.2014.04.008.
Gronlund, S. D., Wixted, J. T., & Mickes, L. (2014). Evaluating eyewitness identification procedures using receiver operating characteristic analysis. Current Directions in Psychological Science, 23, 3–10. https://doi.org/10.1177/0963721413498891.
Innocence Project. (2019). DNA exonerations worldwide. Retrieved from http://www.innocenceproject.org
IQ Biometrix. (2003). FACES, the Ultimate Composite Picture (Version 4.0) [Computer software]. Fremont, CA: IQ Biometrix, Inc.
Key, K. N., Wetmore, S. A., Neuschatz, J. S., Gronlund, S. D., Cash, D. K., & Lane, S. (2017). Line-up fairness affects postdictor validity and 'don't know' responses. Applied Cognitive Psychology, 31, 59–68. https://doi.org/10.1002/acp.3302.
Lampinen, J. M. (2016). ROC analyses in eyewitness identification research. Journal of Applied Research in Memory and Cognition, 5, 21–33. https://doi.org/10.1016/j.jarmac.2015.08.006.
Lindsay, R. C. L. (1994). Biased lineups: where do they come from? In D. Ross, J. D. Read, & M. Toglia (Eds.), Adult eyewitness testimony: current trends and developments. New York: Cambridge University Press.
Lindsay, R. C. L., Martin, R., & Webber, L. (1994). Default values in eyewitness descriptions: a problem for the match-to-description lineup filler selection strategy. Law and Human Behavior, 18, 527–541. https://doi.org/10.1007/BF01499172.
Lindsay, R. C. L., & Pozzulo, J. D. (1999). Sources of eyewitness identification error. International Journal of Law and Psychiatry, 22, 347–360. https://doi.org/10.1016/S0160-2527(99)00014-X.
Lindsay, R. C. L., & Wells, G. L. (1980). What price justice? Exploring the relationship of lineup fairness to identification accuracy. Law and Human Behavior, 4(4), 303–313.
Luus, C. E., & Wells, G. L. (1991). Eyewitness identification and the selection of distracters for lineups. Law and Human Behavior, 15(1), 43–57.
Macmillan, N. A., & Creelman, C. D. (2005). Detection theory: a user's guide. Mawhaw: Lawrence Erlbaum Associates Publishers.
Malpass, R. S. (1981). Effective size and defendant bias in eyewitness identification lineups. Law and Human Behavior, 5(4), 299–309.
Mansour, J. K., Beaudry, J. L., Kalmet, N., Bertrand, M. I., & Lindsay, R. C. L. (2017). Evaluating lineup fairness: variations across methods and measures. Law and Human Behavior, 41, 103–115. https://doi.org/10.1037/lhb0000203.
Mansour, J. K., Beaudry, J. L., & Lindsay, R. C. L. (2017). Are multiple-trial experiments appropriate for eyewitness identification studies? Accuracy, choosing, and confidence across trials. Behavior Research Methods, 49, 2235–2254. https://doi.org/10.3758/s13428-017-0855-0.
Mickes, L. (2015). Receiver operating characteristic analysis and confidence–accuracy characteristic analysis in investigations of system variables and estimator variables that affect eyewitness memory. Journal of Applied Research in Memory and Cognition, 4, 93–102. https://doi.org/10.1016/j.jarmac.2015.01.003.
Mickes, L., Flowe, H. D., & Wixted, J. T. (2012). Receiver operating characteristic analysis of eyewitness memory: comparing the diagnostic accuracy of simultaneous versus sequential lineups. Journal of Experimental Psychology: Applied, 18, 361–376. https://doi.org/10.1037/a0030609.
Mickes, L., Seale-Carlisle, T. M., Wetmore, S. A., Gronlund, S. D., Clark, S. E., Carlson, C. A., … Wixted, J. T. (2017). ROCs in eyewitness identification: instructions versus confidence ratings. Applied Cognitive Psychology, 31, 467–477. https://doi.org/10.1002/acp.3344.
National Institute of Justice (1999). Eyewitness evidence: a guide for law enforcement. Washington: DIANE Publishing.
National Registry of Exonerations. (2018). The National Registry of Exonerations. Retrieved from http://www.law.umich.edu/special/exoneration/Pages/about.aspx
National Research Council (2014). Identifying the culprit: assessing eyewitness identification. Washington, DC: The National Academic Press.
Palmer, M. A., Brewer, N., Weber, N., & Nagesh, A. (2013). The confidence-accuracy relationship for eyewitness identification decisions: effects of exposure duration, retention interval, and divided attention. Journal of Experimental Psychology: Applied, 19(1), 55–71.
Police Executive Research Forum (2013). A national survey of eyewitness identification processes in law enforcement agencies. Washington, DC: U.S. Department of Justice Retrieved from https://www.policeforum.org/assets/docs/Free_Online_Documents/Eyewitness_Identification/a%20national%20survey%20of%20eyewitness%20identification%20procedures%20in%20law%20enforcement%20agencies%202013.pdf.
Robin, X., Turck, N., Hainard, A., Tiberti, N., Lisacek, F., Sanchez, J. C., & Müller, M. (2011). pROC: an open-source package for R and S+ to analyze and compare ROC curves. BMC Bioinformatics, 12, 77.
Rotello, C. M., & Chen, T. (2016). ROC Curve analyses of eyewitness identification decisions: an analysis of the recent debate. Cognitive Research: Principles and Implications, 1, 10. https://doi.org/10.1186/s41235-016-0006-7.
Seale-Carlisle, T. M., Wetmore, S. A., Flowe, H. D., & Mickes, L. (2019). Designing police lineups to maximize memory performance. Journal of Experimental Psychology: Applied (in press).
Semmler, C., Dunn, J., Mickes, L., & Wixted, J. T. (2018). The role of estimator variables in eyewitness identification. Journal of Experimental Psychology: Applied, 24(3), 400–415.
Smith, A. M., Wells, G. L., Smalarz, L., & Lampinen, J. M. (2018). Increasing the similarity of lineup fillers to the suspect improves the applied value of lineups without improving memory performance: commentary on Colloff, Wade, and Strange (2016). Psychological Science, 29, 1548–1551. https://doi.org/10.1177/0956797617698528.
Technical Working Group for Eyewitness Evidence (1999). Eyewitness evidence: a guide for law enforcement. Washington, D.C.: U.S. Department of Justice, Office of Justice Programs.
Tredoux, C. G. (1998). Statistical inference on measures of lineup fairness. Law and Human Behavior, 22(2), 217–237.
Tunnicliff, J. L., & Clark, S. E. (2000). Selecting fillers for identification lineups: matching suspects or descriptions? Law and Human Behavior, 24(2), 231–258.
Wells, G. L. (1993). What do we know about eyewitness identification? American Psychologist, 48(5), 553–571.
Wells, G. L., Rydell, S. M., & Seelau, E. P. (1993). The selection of distractors for eyewitness lineups. Journal of Applied Psychology, 78, 835–844. https://doi.org/10.1037/0021-9010.78.5.835.
Wells, G. L., Smalarz, L., & Smith, A. M. (2015). ROC analysis of lineups does not measure underlying discriminability and has limited value. Journal of Applied Research in Memory, 4, 313–317. https://doi.org/10.1016/j.jarmac.2015.08.008.
Wells, G. L., Small, M., Penrod, S., Malpass, R. S., Fulero, S. M., & Brimacombe, C. A. E. (1998). Eyewitness identification procedures: Recommendations for lineups and photospreads. Law and Human Behavior, 22, 603–647.
Wells, G. L., & Windschitl, P. D. (1999). Stimulus sampling and social psychological experimentation. Personality and Social Psychology Bulletin, 25(9), 1115–1125.
Wetmore, S. A., Neuschatz, J. S., Gronlund, S. D., Wooten, A., Goodsell, C. A., & Carlson, C. A. (2015). Effect of retention interval on showup and lineup performance. Journal of Applied Research in Memory and Cognition, 4, 8–14. https://doi.org/10.1016/j.jarmac.2014.07.003.
Wetmore, S. A., Neuschatz, J. S., Gronlund, S. D., Wooten, A., Goodsell, C. A., & Carlson, C. A. (2016). Corrigendum to 'Effect of retention interval on showup and lineup performance. Journal of Applied Research in Memory and Cognition, 5(1), 94.
Wilford, M. M., & Wells, G. L. (2010). Does facial processing prioritize change detection? Change blindness illustrates costs and benefits of holistic processing. Psychological Science, 21, 1611–1615. https://doi.org/10.1177/0956797610385952.
Wixted, J. T., & Mickes, L. (2012). The field of eyewitness memory should abandon probative value and embrace receiver operating characteristic analysis. Perspectives on Psychological Science, 7, 275–278. https://doi.org/10.1177/1745691612442906.
Wixted, J. T., & Mickes, L. (2014). A signal-detection-based diagnostic-feature-detection model of eyewitness identification. Psychological Review, 121, 262–276. https://doi.org/10.1037/a0035940.
Wixted, J. T., & Mickes, L. (2018). Theoretical vs. empirical discriminability: the application of ROC methods to eyewitness identification. Cognitive Research: Principles and Implications, 3. https://doi.org/10.1186/s41235-018-0093-8.
Wixted, J. T., & Wells, G. L. (2017). The relationship between eyewitness confidence and identification accuracy: a new synthesis. Psychological Science in the Public Interest, 18(1), 10–65.
The authors thank all research assistants of the Applied Cognition Laboratory for help with stimuli preparation and data collection.
Data collection for Experiment 2 via SurveyMonkey was supported by a Criminal Justice and Policing Reform Grant from the Charles Koch Foundation to JEW and CAC.
Texas A&M University – Commerce, PO Box 3011, Commerce, TX, 75429, USA
Curt A. Carlson, Alyssa R. Jones, Jane E. Whittington, Robert F. Lockamyeir, Maria A. Carlson & Alex R. Wooten
CAC, ARJ, JEW, and RFL designed and conducted the experiments. CAC wrote the first draft of the manuscript. MAC assisted with data analysis and some writing. ARW provided valuable feedback on drafts of the manuscript. All authors read and approved the final manuscript.
Correspondence to Curt A. Carlson.
All experiments reported in this paper were approved by the Institutional Review Board of Texas A&M University – Commerce, and all participants provided informed consent prior to participation.
Additional file
Supplemental material: pilot experiments. (DOCX 36 kb)
Carlson, C.A., Jones, A.R., Whittington, J.E. et al. Lineup fairness: propitious heterogeneity and the diagnostic feature-detection hypothesis. Cogn. Research 4, 20 (2019). https://doi.org/10.1186/s41235-019-0172-5
Keywords: Simultaneous lineup, Lineup fairness, Diagnostic feature-detection hypothesis, Propitious heterogeneity
Climatic benefits of black carbon emission reduction when India adopts the US onroad emission level
Ashish Sharma, Department of Civil Engineering, University of Toledo, Toledo, OH 43606, USA
Chul E. Chung, Desert Research Institute, Reno, NV 89512, USA
India is known to emit large amounts of black carbon (BC) particles, and the existing estimates of the BC emission from the transport sector in the country range widely, from 72 ~ 456 Gg/year (for the 2000's). First, we reduce the uncertainty range by constraining the existing estimates with credible isotope analysis results. The revised estimate is 74 ~ 254 Gg/year. Second, we derive our own BC estimate for the transport sector in order to gain a new insight into the mitigation strategy and value. Our estimate shows that the transport-sector BC emission would be reduced by about 69 % by adopting the US standards. The highest emission reduction comes from the vehicles in the 5–10 year old age group. The smallest emission reduction would come from the vehicles in the 15–20 year old age category, since their population is small in comparison to the other age categories. Applying the 69 % reduction to 74 ~ 254 Gg/year gives 51 ~ 175 Gg/year, which is the estimated BC emission reduction from switching to the US on-road emission standard. Assuming that global BC radiative forcing is 0.88 W m−2 for 17.8 Tg/year of BC emission, we find that the reduced BC emission translates into −0.0025 ~ −0.0087 W m−2 in global forcing. In addition, we find that 51 ~ 175 Gg of BC emission reduction amounts to 0.046 – 0.159 B carbon credits, which are valued at 0.56 – 1.92 B US dollars (using today's carbon credit price). In a nutshell, India could potentially earn billions of dollars per year by switching from the current on-road emission levels to the US levels.
Keywords: Indian road transport, Diesel emissions, Gasoline emissions, Black carbon, BC forcing, Climatic benefits
How to Cite: Sharma, A. and Chung, C.E., 2015. Climatic benefits of black carbon emission reduction when India adopts the US onroad emission level. Future Cities and Environment, 1, p.13. DOI: http://doi.org/10.1186/s40984-015-0013-8
Submitted on 13 Jul 2015; Accepted on 20 Dec 2015
Black carbon (BC), also known as "soot" or "soot carbon" [1], is a product of incomplete combustion. BC aerosols are emitted as primary aerosols from fossil fuel combustion, biomass burning and biofuel burning, and are thus largely anthropogenic. Specifically, the combustion of diesel and coal, the burning of wood and cow dung, savanna burning, forest fire and crop residue burning are the common sources of BC. In order to improve air quality, developed countries have reduced ambient aerosol concentrations by a variety of measures over the last few decades. For instance, wood as the fuel for cooking was replaced by natural gas or electricity. These clean-air measures not only reduced the overall aerosol concentration (including BC concentration) but also reduced the relative amount of BC to other anthropogenic aerosols such as sulfate, as evident from the state-of-the-art emission estimate dataset by [2]. Developing countries, conversely, have high levels of aerosol concentration and also a relatively large amount of BC [2]. India too, as a developing nation, exhibits these characteristics. The BC emission in India has steadily increased [3].
BC has many unique aspects. First, while most aerosols scatter solar radiation and thus act to cool the earth, BC strongly absorbs sunlight and contributes to the global warming [4]. Second, while CO2 itself is not an air pollutant, BC is both an air pollutant and climate warmer. Thus, reducing BC concentration is more easily justified than reducing CO2 concentration. Third, BC emission is generally much easier to mitigate than CO2 emission, since the former originates largely from poor life styles in developing countries. For example, it is much easier and cheaper to replace a cow-dung burning facility by a modern natural-gas stove in a kitchen than installing a solar panel. Fourth, since aerosols stay in the atmosphere for less than a few weeks, reducing BC emission results in an immediate reduction in BC concentration, whereas reducing CO2 emission leads to a reduction in CO2 concentration many decades later. In view of this fourth aspect, Ramanathan and Xu [5] and Shindell et al. [6] demonstrated using climate models that reducing BC emission is among the most effective tools to slow down the warming immediately.
In the current study, we aim to quantify the BC emission from the transport sector in India and how much this BC emission can be reduced by adopting the US on-road emission rates immediately. We do this because vehicles in India emit far more particulate matter (i.e., far more aerosols) per vehicle than in the West. The aerosols emitted from vehicles consist largely of black carbon [7]. In comparison, biofuel combustion emits relatively more organic carbon and less black carbon [7]. While BC is definitely a climate warmer, organic carbon may be a cooler [8, 9]. Thus, reducing BC emission in the transport sector seems more appealing for combating global warming than reducing it in biofuel or biomass burning. Accordingly, [10], for instance, suggested that the diesel engine is one of a few good examples for reducing BC emission and fighting global warming. Diesel engines are the main contributor to aerosol emission from the transport sector [11–13]. Furthermore, mitigating diesel engine emissions would reduce BC concentration with a relatively small reduction in sulfate (a cooling agent), whereas mitigating emissions from coal combustion in power plants would reduce both BC and sulfate substantially [14]. Thus, quantifying the BC emission in the transport sector is very valuable for global warming mitigation studies.
Studies exist that estimated the BC emission from the road sector in India [7, 15–18]. These estimations give a widely-varying range of 71.76 ~ 456 Gg in the annual emission, and also a wide range of 6.5 % ~ 34 % in the percentage of the total BC emissions by the road transport sector. This large uncertainty in estimated BC emission or its contribution to total BC emission makes it difficult for policy makers to make decisions. Thus, one of the objectives in the present study is to reduce this uncertainty.
The novelty of our study is also that it quantifies the potential climatic benefits of mitigating the road transport sector BC emissions in India via implementation of the US on-road emission levels, which are more stringent than the Indian levels. Previous studies in this regard [19–21] quantified the percentage reduction from applying EU standards; ours is the first study to quantify the percentage reduction from applying the US standard. Applying the US standard has advantages because US standards have similar emission requirements for both diesel and gasoline vehicles, whereas European emission regulations, relative to the counterpart US program, tolerate higher PM emissions from diesel vehicles. Applying the US emission standards is particularly important for targeting BC emissions from the heavy duty diesel vehicles (buses and trucks) in India. Not only do we quantify the BC emission reduction in switching to the US standard, we also translate this reduction into the climatic benefit by applying observationally-constrained (thus accurate) BC climate forcing estimates and today's carbon emission price.
Lastly, in the present study, we also attempt to quantify the contribution of different categories of vehicles towards the transport-sector BC emissions in India for the year 2010 according to vehicle age. We do this because such information would be very valuable to environmental policy makers. We organize the paper in four sections. In Section 1, we provide a general overview of how this study was conducted and highlight the present state of transport-sector emissions in India; we also clarify some of the key findings from similar studies conducted in the past and the shortcomings of the existing studies. In Section 2, we discuss the methods we adopted for revising the existing estimates of the transport-sector BC emission in India and the approach we adopted for providing our own estimate of the transport-sector BC emission in India. In Section 3, we discuss the results: our own estimate of the transport-sector BC emission and the BC emission reduction from implementing higher emission standards, as well as the BC forcing reduction and its monetary value. Section 4 is dedicated to discussion and conclusions.
Revising the existing estimates of the transport-sector BC emission in India
As stated earlier, the previous estimates of the BC emission from the road sector in India give a widely-varying range of 71.76 ~ 456 Gg in the annual emission, and also a wide range of 6.5 % ~ 34 % in the percentage of the total BC emissions by the road transport sector [7, 15–18]. The aforementioned estimates are based on a bottom-up approach, and there is a wide range in the estimates due to uncertainty in (a) fleet-average emission factors and (b) modelling of the on-road vehicle stock. Additionally, emission inventories that are not calibrated against the national fuel balance have much higher uncertainties [22].
The aforementioned previous BC emission estimates did not utilize the isotope analysis results by Gustafsson et al. [23]. Most of the carbon on Earth is carbon-12 (12C). 14C, also referred to as radiocarbon, is a radioactive isotope of carbon that decays into nitrogen-14 over thousands of years. Living plants and animals maintain a high ratio of 14C to 12C through photosynthesis and the food chain (herbivores eating plants and carnivores eating herbivores), as the source of 14C is cosmic rays in the atmosphere. Thus, biomass contains a high ratio of 14C to 12C. Fossil fuel, on the other hand, arose from vegetation and animals that died a long time ago, and therefore contains no 14C. The ratio of 14C to 12C in sampled carbon is thus proportional to the ratio of biomass-derived to fossil-fuel-derived carbon. Gustafsson et al. [23] analyzed 14C mass and 12C mass data in collected aerosols, and apportioned the carbon between fossil fuel combustion and biomass/biofuel burning sources. Unlike the previous BC emission estimates, the apportionment based on carbon isotope data should be considered non-controversial and credible. Furthermore, the aerosols collected for the analysis were in the South Asian outflow instead of near emission sources, which means that the results by Gustafsson et al. [23] represent the overall conditions in India. In view of this, in the present study we apply the results of Gustafsson et al. [23] to the existing BC estimates.
Here is how we use Gustafsson et al.'s [23] results. According to Gustafsson et al. [23], the corresponding share of fossil fuel combustion and biomass/biofuel burning to total BC emissions is 32 ± 5 and 68 ± 6 % respectively in South Asia. Existing BC emission estimates for the transport sector in India also give the BC emission estimates for other sectors. We adjust the ratio of estimated BC emission from fossil fuel combustion (including transportation) to estimated BC emission from biomass and biofuel burning in each past estimation study so that the adjusted ratio would be 32 ± 5 : 68 ± 6 in all the BC estimates, as consistent with that from Gustafsson et al. [23]. During the adjustment, we do not adjust the magnitude of total BC emission from all the sectors. The adjusted ratio leads to adjusted BC estimates for the transport sector, and the adjusted estimates must be more accurate. The original and adjusted estimates of the percentage share of road transport BC emissions to the total BC emissions in India are shown in Table 1. We propose that the community uses the adjusted estimates shown in Table 1.
Table 1 Percentage share of road transport BC emissions to total BC emissions in India (adjusted in accordance with Gustafsson et al.'s isotope analysis)
Study | Emission year | Original estimate | Adjusted estimate
Lu et al., 2011 [15] | 2010 | 11.00 % | 4.00 – 5.00 %
Klimont et al., 2009 [16] | 2010 | 6.50 % | 17.00 – 23.00 %
Bond et al., 2004 [7] | 2009 | 30.00 % | 21.00 – 29.00 %
Sahu et al. [17] | 2001 | 34.00 % | 10.67 – 14.62 %
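To make the rescaling behind Table 1 concrete, the sketch below rescales a bottom-up inventory so that fossil-fuel BC matches the isotope-constrained 32 ± 5 % share of total BC while total BC is held fixed, and then rescales the transport estimate by the same factor (assuming transport keeps its share of the fossil-fuel total). The inventory numbers are placeholders, not any of the cited studies' sectoral totals.

```python
def adjust_transport_bc(total_bc, fossil_bc, transport_bc, fossil_share=0.32):
    """Rescale a bottom-up inventory so that fossil-fuel BC is a fixed share of
    total BC (the isotope constraint), keeping total BC unchanged.  The transport
    estimate is assumed to keep its share of the fossil-fuel total."""
    fossil_adjusted = fossil_share * total_bc
    scale = fossil_adjusted / fossil_bc
    return transport_bc * scale

# Placeholder inventory (Gg/year), not any study's actual sectoral totals
total_bc, fossil_bc, transport_bc = 1000.0, 450.0, 250.0

for share in (0.27, 0.32, 0.37):          # 32 +/- 5 % fossil-fuel share
    adj = adjust_transport_bc(total_bc, fossil_bc, transport_bc, share)
    print(f"fossil share {share:.2f}: adjusted transport BC = {adj:.0f} Gg/yr "
          f"({100 * adj / total_bc:.1f} % of total)")
```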
Figure 1 compares the two (i.e., original and adjusted) estimates in the magnitude of BC emission. In this figure, we removed the estimate for the 90's and only retained those for the 2000's. As clearly shown in Fig. 1, the original estimates varied from 72 ~ 456 Gg/year (with the arithmetic average of 264 Gg/year), while the adjusted estimates now vary from 74 ~ 254 Gg/year (with the arithmetic average of 164 Gg/year). We computed the average estimate to develop the consensus, and do not intend the average estimate to be the best estimate. The average was obtained by assigning the same weight to each estimate. To summarize the results, the mean BC estimate is reduced by 38 % after adjustments with Gustafsson et al.'s [23] results. More importantly, we have sharply reduced the uncertainty in the transport sector BC emission (from 72 ~ 456 Gg/year to 74 ~ 254 Gg/year) by employing Gustafsson et al.'s [23] results.
Fig. 1 Original vs. adjusted estimates of BC emissions in India (adjusted according to [23])
Our own estimate of the transport-sector BC emission in India
In the present study, we develop our own estimate of the BC emission from the transport sector in India because our own data facilitate the quantification of the BC emission reduction from implementing other emission standards. In addition, we provide BC emission according to the age categorization of the vehicles – a feature not represented in the previous studies and yet important for policy makers. In our estimation, we adopt an emission factor (EF) based approach with the aim of estimating the emissions for the year 2010. Emission factors (EFs) relate a specific emission to the activity leading to that emission, and are normally determined empirically. Road vehicle EFs represent the quantity of pollutants emitted per unit distance driven, amount of fuel used or energy consumed [24]. In addition to an EF-based technique, many other techniques are used in the community for quantifying emissions from a large number of real-world vehicles. These techniques include remote sensing of tailpipe exhaust, chassis dynamometer tests, random roadside pullover tampering studies, tunnel studies, and ambient speciated hydrocarbon measurements [25]. Employing some of these techniques for determining actual vehicle emissions in our study would be very costly, as it requires dedicated human resources. Thus, we use an emission factor based approach here.
As for the emission factors for Indian vehicles, we use the data from Baidya et al. [22], for the following main reasons. (a) Most importantly, they utilized data from South Asia and South East Asia. (b) They constrained the categorization of vehicles by data availability and data authenticity, thus accounting for the characteristics of data in South Asia. (c) In particular, the following key factors were considered within each vehicle category: fuel type and kind of engine (e.g., two or four strokes). Thus their EFs are well representative of local conditions in India and useful for the present study. Please note that Baidya et al. [22] provided the emission factors for particulate matter (PM) instead of BC, and they also gave estimates of PM emission instead of BC emission. Here, we derive BC emission estimates from the PM estimates using known BC/PM ratios for the various categories.
We differentiated vehicles at several levels and obtained the emission factor for each category. First, on the basis of vehicle type, such as heavy duty trucks, buses, cars and motorbikes; motorbikes are further disaggregated into 2-stroke and 4-stroke motorbikes. Second, the vehicles are differentiated on the basis of the fuel type used – diesel or gasoline. Third, vehicles are further differentiated according to four age groups: 0–5 years old, 5–10 years old, 10–15 years old and 15–20 years old. These age groups correspond to the 5-year brackets of the Indian exhaust emission regulations: until 1990; 1991–1995; 1996–2000 and 2001–2005. For further classifying the vehicles by age, we assumed the percentage of vehicles belonging to each age group (0–5 years old, 5–10 years old, 10–15 years old and 15–20 years old). We then further assumed the percentage distribution of trucks and buses across specific fuel types (diesel and petrol). This disaggregation of vehicles into specific age groups is important for the purpose of this study because India has a significant percentage of old vehicle fleets which have not yet retired, and such old vehicles have considerably higher emission rates as the emission-relevant parts deteriorate [25]. The calculation of PM emissions using age-specific emission factors is crucial for identifying the vehicles responsible for higher PM emissions and could thus be used to design vehicular emission mitigation strategies.
The scientific justification for the chosen EFs is not only explained in Baidya et al. [22] but is also supported in independent reports [19, 26]. All the EFs are defined in g/km. The EFs used in the present work are tabulated in Table 2. We are aware that EFs vary from region to region, and the EFs given in Table 2 are meant to be country-average values for the various vehicle types.
Table 2 PM emission factors (gram/km) by vehicle category and age group in India (from Baidya et al. 2009)
Vehicle category | Manufactured 2001–2005 (0–5 yr old) | 1996–2000 (5–10 yr old) | 1991–1995 (10–15 yr old) | Until 1990 (15–20 yr old)
Heavy duty truck (diesel) | 0.49 | 1.22 | 2.03 | 2.7
Bus (diesel) | 0.59 | 1.49 | 2.48 | 3.3
Passenger car (diesel) | 0.19 | 0.46 | 0.77 | 1.03
Passenger car (gasoline) | 0.06 | 0.07 | 0.09 | 0.1
Motorbike (2 stroke, gasoline) | 0.18 | 0.26 | 0.32 | 0.46
Motorbike (4 stroke, gasoline) | 0.06 | 0.08 | 0.1 | 0.14
In the next step, we multiply the total vehicle activity (vehicle kilometers traveled) by the fuel-specific emission factors (Eq. 1) to estimate PM emission in Gigagram. This multiplication method is a common approach to emission calculation and has been widely used in similar studies conducted in the past [17, 22, 27–29]. Please note that the unit of the emission factors used in this equation is g/km. The annual emissions of pollutants are estimated for each individual vehicle type a, fuel type b, and emission standard c according to the following standard equation:
$$ E_{T(a,b,c)} = \sum \left( \mathrm{Pop}_{a,b,c} \times \mathrm{EF}_{a,b,c} \times \mathrm{VKT}_{a} \right) \qquad (1) $$
where
E_T(a,b,c) = total emissions,
Pop_a,b,c = vehicle population for vehicle type a, fuel type b and emission standard c,
VKT_a = annual vehicle kilometers traveled by vehicles of type a,
EF_a,b,c = emission factor (g/km) for vehicle type a, fuel type b, and emission standard c.
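A minimal implementation of Eq. 1 for one vehicle category is sketched below. The emission factors are taken from Table 2, while the vehicle populations and annual VKT values are illustrative placeholders, not the registration statistics and Road Transport Yearbook figures actually used in this study.

```python
# Eq. 1: total emission summed over vehicle type, fuel type and emission-standard
# (age) class, illustrated for diesel heavy duty trucks only.
EF = {  # g/km, (vehicle category, manufacture bracket) -> emission factor (Table 2)
    ("heavy_duty_truck_diesel", "2001-2005"): 0.49,
    ("heavy_duty_truck_diesel", "1996-2000"): 1.22,
    ("heavy_duty_truck_diesel", "1991-1995"): 2.03,
    ("heavy_duty_truck_diesel", "until_1990"): 2.70,
}
population = {  # number of registered vehicles in each cell (hypothetical)
    ("heavy_duty_truck_diesel", "2001-2005"): 1_200_000,
    ("heavy_duty_truck_diesel", "1996-2000"): 900_000,
    ("heavy_duty_truck_diesel", "1991-1995"): 500_000,
    ("heavy_duty_truck_diesel", "until_1990"): 200_000,
}
vkt = {"heavy_duty_truck_diesel": 60_000}  # annual km per vehicle (hypothetical)

total_g = sum(population[key] * EF[key] * vkt[key[0]] for key in EF)
print(f"PM emission: {total_g / 1e9:.1f} Gg/year")  # 1 Gg = 1e9 g
```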
We obtained the statistics of registered motor vehicles in India from various agencies, including the Ministry of Shipping, Road Transport & Highways. The data collection was extremely tedious due to an inferior information storage system in India. Data were obtained by combining internet searches, peer-reviewed literature and reports, and personal communication with multiple research groups and agencies (both private and government) in India and abroad via e-mails and phone calls. The annual average distance in kilometers travelled by various Indian vehicles was obtained from the Road Transport Yearbook [30] published by the Government of India. Upon analyzing the data, we found that older vehicles travelled less distance than newer vehicles. The data are sorted by vehicle category, fuel type (diesel or gasoline), type of engine (two or four stroke), operation (e.g., taxi versus private use for passenger cars) and emission control standard compliance. Driving conditions are defined as either urban or rural. We categorized vehicles broadly into the following four categories: trucks (diesel), buses (diesel), passenger cars (including taxis and private cars powered by diesel or gasoline) and motorbikes (2-stroke and 4-stroke motorbikes powered by gasoline). The percentage share of each category of vehicles by fuel type was obtained from The Automotive Research Association of India (ARAI). Motorized two-wheelers are differentiated into two-stroke and four-stroke engines.
For compliance with the latest emission regulation (i.e., the emission regulation for 2010), we assumed that vehicles manufactured in a specific model year comply with the emission legislation enforced by the Indian government for that model year, and that the vehicles were not improved (in terms of emission factor) afterwards. Compliance with the emission legislation is factored in in this fashion. For example, suppose a vehicle was manufactured in the model year 1998. From the year 2010 standpoint, this vehicle falls in the 10–15 year old age group, which corresponds to the Indian exhaust emission regulation bracket of 1996–2000. The emission factor for this specific vehicle is therefore calculated from the emission standards of the 1996–2000 bracket. Please note that there is a lack of Inspection and Maintenance (I & M) data for Indian vehicles. If such data were available, we would not need to make the aforementioned assumptions.
In our own estimate, the total PM emission from road transportation in India is 507 Gigagram for the year 2010 at the present Indian on-road emission levels (shown in Fig. 2). The total PM emission for each vehicle category is shown in Table 3. In the last step, we convert this PM emission into BC emission by applying the BC/PM2.5 ratio. We obtain this ratio for on-road mobile sources from the EPA's report on black carbon [31], assuming that the ratio is primarily controlled by whether the fuel and engine are gasoline or diesel based. The BC/PM2.5 ratio is 0.74 and 0.19 for diesel and gasoline mobile sources respectively. We obtain the total BC emission separately for diesel and gasoline vehicles. The details of the conversion calculation are illustrated in Table 4. The total BC emission at the Indian emission levels is estimated to be 344.5 Gg/year and 7.8 Gg/year for on-road diesel and gasoline vehicles respectively. From Fig. 2, it is quite evident that heavy duty diesel trucks are the main culprits, with the largest contribution (64 %) to total PM emissions in India at the present Indian on-road emission levels. The 2nd largest source of on-road PM emissions is diesel buses. For the heavy commercial vehicles, including buses and trucks, we assumed 100 % diesel penetration. The 3rd largest contribution to total PM emission comes from diesel passenger cars, which are used as personal vehicles as well as multi-utility passenger vehicles (e.g., taxis). The 4th largest emission source is 4-stroke gasoline motorbikes, followed by gasoline passenger cars and 2-stroke gasoline motorbikes.
Fig. 2 Estimated PM emission of Indian vehicles with Indian emission levels
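The PM-to-BC conversion described above reduces to a one-line multiplication; the sketch below reproduces the Table 4 values from the reported PM totals and the EPA BC/PM2.5 ratios.

```python
bc_pm_ratio = {"diesel": 0.74, "gasoline": 0.19}   # EPA BC/PM2.5 ratios [31]
pm_india = {"diesel": 465.5, "gasoline": 41.3}     # Gg/year, Indian on-road levels

bc_india = {fuel: round(bc_pm_ratio[fuel] * pm, 1) for fuel, pm in pm_india.items()}
print(bc_india)   # {'diesel': 344.5, 'gasoline': 7.8}, matching Table 4
```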
Table 3 PM emission reduction if India adopts the US on-road emission levels
Vehicle category | PM emission at present Indian on-road emission levels (kilotons/year, total over all age categories) | PM emission at present US on-road emission levels (kilotons/year, total over all age categories)
Diesel truck | 323.9 | 128.9
Diesel bus | 93.6 | 12.8
Diesel passenger car | 48.1 | 3.5
Petrol passenger car | 13.9 | 1.0
Motorbike (2-stroke, petrol) | 8.6 | 6.5
Motorbike (4-stroke, petrol) | 18.8 | 1.7
Reduction in PM emission in switching from Indian emission levels to the US on-road emission levels: 352.3 Gigagram
Table 4 Estimated BC emission from on-road mobile sources in India
Source | BC/PM2.5 ratio | PM emission (Gigagram/year) | BC emission = (BC/PM2.5 ratio) × PM emission (Gigagram/year)
Indian on-road emission levels:
On-road diesel vehicles | 0.74 | 465.5 | 344.5
On-road gasoline vehicles | 0.19 | 41.3 | 7.8
US on-road emission levels:
On-road gasoline vehicles | 0.19 | 9.3 | 1.8
BC emission reduction in switching from Indian emission levels to US emission levels = 243 Gigagram
BC emission reduction when India adopts the US on-road emission levels
In section 2.1, we revised the estimates for on-road BC emission in India from existing studies to be 74 ~ 254 Gg/year using adjustment with isotope analysis results. This section pertains to calculating the reduction in BC emission. This is accomplished in the following steps:
First, we obtain the emission factors for US vehicles using US EPA MOVES.
We compare the emission factors for Indian vehicles, as discussed in section 2.2, with those for US vehicles obtained in this section.
Then, we calculate the reduction in BC emission in switching from Indian emission levels to the US emission levels.
Emission factors for US vehicles
The emission factors for US vehicles were derived using the EPA's MOVES model for the year 2010. MOVES is currently the US state-of-the-art model for estimating on-road emissions. MOVES2010b is the latest version of MOVES. MOVES gives an emission for each driving mode and in this regard is considered a modal emission model. In the following, we summarize the main features of the model based on the work by Bai et al. [32] and the model documentation. Owing to the modal nature of the MOVES emission rates, MOVES is capable of quantifying emissions accurately on various scales (e.g., individual transportation projects as well as regional emission inventories). The current improved design of the model has the following advantages – a) the databases can be easily updated as per the availability of the new data; and b) the model permits and simplifies the import of data relevant to the user's own needs. The MOVES model applies various corrections for temperature, humidity, fuel characteristics, etc., before it comes up with emission estimates. MOVES also bases emission estimates on representative cycles, not on single emission rates. Furthermore, MOVES is different from traditional models such as MOBILE and EMFAC, in that a) instead of using speed correction factors, MOVES uses vehicle specific power and speed in combination; and b) it factors in vehicle operating time instead of mileage for determining emission rates. In view of this, we believe MOVES is a superior analysis tool.
In this study, we specified the following parameters as the input parameters while running the MOVES model: a) geographic bound, we chose the national level; b) time span, the year 2010; c) road type, we specified urban road with unrestricted access; and d) in the emission source, we selected all the exhaust processes (consisting of running exhaust; start exhaust; crankcase running exhaust; crankcase start exhaust; crankcase extended idle exhaust; extended idle exhaust) but did not include the emissions from fueling or evaporation since our BC emission estimate in Section 3 did not include the latter source either.
The model output we used is the total travelled distance and the annual PM2.5 emission. These output data were selected for specific vehicle types from the MySQL output database of MOVES. We then computed emission factors by dividing total emission by total distance traveled, which gave us emission factors in gram/km for the corresponding vehicle types. The emission factors were further sorted by vehicle age. The vehicle age is calculated as the difference between the reference year (2010) and the manufacturing year. Finally, we inserted these emission factors into Eq. 1 above to obtain the total PM emission at the US emission levels.
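The EF derivation from MOVES output described above amounts to grouping the output by vehicle type and age and dividing the summed PM2.5 mass by the summed distance. The sketch below illustrates this with pandas; the column names and numbers are illustrative only, since the real MOVES MySQL output tables have their own schema.

```python
import pandas as pd

# Illustrative MOVES-style output rows (not real MOVES data)
out = pd.DataFrame({
    "vehicle_type": ["combination_truck", "combination_truck",
                     "passenger_car", "passenger_car"],
    "model_year":   [2008, 1998, 2008, 1998],
    "pm25_g":       [2.0e9, 1.5e9, 0.8e9, 0.9e9],   # annual PM2.5 exhaust mass (g)
    "km":           [5.0e9, 1.2e9, 9.0e9, 2.5e9],   # annual distance traveled (km)
})
out["age"] = 2010 - out["model_year"]                # reference year 2010
age_bins = pd.cut(out["age"], bins=[0, 5, 10, 15, 20])

sums = out.groupby(["vehicle_type", age_bins], observed=True)[["pm25_g", "km"]].sum()
sums["ef_g_per_km"] = sums["pm25_g"] / sums["km"]    # fleet emission factor (g/km)
print(sums["ef_g_per_km"])
```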
Comparison of emission factors: Indian vehicles vs. US vehicles
As we compare the PM emission factors of Indian motor vehicles with those of US vehicles, we clearly see that the Indian vehicles have significantly higher emission factors than those in the US (see Fig. 3). Moreover, this difference becomes even larger for older vehicles. Higher emission factors associated with older vehicles can be attributed to the deterioration of the vehicle engines upon aging and the accumulation of mileage. Vehicle engines seemingly deteriorate with age due to poor maintenance: vehicles in India are often poorly maintained and have a higher average age relative to those in the US. We believe that this faster deterioration also stems from the lack of effective inspection and maintenance systems enforced by government policies. Figure 3 also shows that the difference in the emission factors is highest for the heavy commercial vehicles (diesel buses and diesel trucks). In addition, considering the case of motorbikes, the emission factors for motorbikes in the US appear constant with aging because the MOVES model does not incorporate an age deterioration factor for motorbikes, whereas the emission factors for motorbikes in India show an increasing trend with age. In India, there is a significant share of on-road 2-stroke motorbikes, as they are an attractive option for the middle and lower middle classes [33]. This is contrary to the motorbike ownership scenario in the US, where 2-stroke motorbikes are completely out of use and have been superseded by 4-stroke motorbikes. Thus, almost all of the on-road motorcycles in MOVES at this point are 4-stroke.
Comparison of PM emission factors of Indian vehicles and US vehicles
Here, we summarize the merits of four-stroke engines over two-stroke engines and vice versa from the study by Kojima et al. [34] published with the World Bank, which focused primarily on reducing emissions from two-stroke engines in South Asia. The key advantages of 4-stroke engines over 2-stroke gasoline engine vehicles are lower particulate and hydrocarbon emissions, better fuel economy, and moderate noise levels during operation. The only relative advantages of 2-stroke engines are lower purchase prices, mechanical simplicity leading to low maintenance costs, and lower NO2 emissions. Our comparative analysis of the emission factors of the two engine technologies clearly points to the need for encouraging 4-stroke over 2-stroke two-wheelers in India. This implies that pollution levels can be brought down to safer levels in spite of the rising two-wheeler population if 4-stroke technology for the two-wheeler segment is promoted in India.
Reduction in BC emission in switching to the US emission levels
We combine the US emission factors with the driving activities in India to estimate the total on-road PM emission in India if India hypothetically adopted the US emission levels (using Eq. 1). The result is 155 Gg of PM for the year 2010, as shown in Fig. 4. This PM emission is further converted to the equivalent BC emission by applying the BC/PM2.5 ratio (discussed in section 2.2); details of the calculation are presented in Table 4.
Estimated PM emission of Indian vehicles with US on-road emission levels
The reduction in BC emission from switching to the US emission levels is expressed in terms of the reduction in PM emission (Table 3 and Fig. 5) and the reduction in BC emission (Table 4). The total BC emission reduction when following the US emission levels is estimated to be 243 Gg (equaling a 69 % reduction), split into 236.7 Gg for on-road diesel vehicles and 6 Gg for gasoline vehicles. Please note that this reduction is in BC emission. In Fig. 5, we analyze the emission reduction by vehicle age category. We find that the largest emission reduction from switching to US on-road emission levels would come from vehicles 5–10 years old, followed by vehicles 0–5 years old and vehicles 10–15 years old; the smallest reduction would come from vehicles 15–20 years old. This estimate incorporating vehicle age categorization is one of the novelties of the present study.
PM emission reduction from Indian vehicles according to their age (US on-road emission levels)
Our estimate of BC emission in section 3 is not necessarily better than previous ones. We therefore use it only to derive the ratio of the emission factors in India to those in the US. This ratio is combined with the adjusted previous estimates (according to Gustafsson et al. [23], as discussed in section 2) to yield the reduction in BC emission. For the total BC emission reduction from all the vehicles in India, we apply the 69 % BC emission reduction to the adjusted previous estimates: 69 % of 74 ~ 254 Gg/year is 51 ~ 175 Gg/year. This is the estimated BC emission reduction from switching to the US on-road emission standard.
BC radiative forcing reduction and its value
We calculated above that a BC emission reduction of 51 ~ 175 Gg/year is possible in India (2000s levels) from the road transport sector if India adopts the US on-road emission levels. Based on the studies by Cohen and Wang [35] and Bond et al. [10], 17.8 Tg/year (i.e., 17,800 Gg/year) of global BC emission produces 0.88 W/m2 of global BC forcing. We therefore estimate that a BC emission reduction of 51 ~ 175 Gg/year corresponds to a change of −0.0025 ~ −0.0087 W m−2 in global BC forcing. Thus, we conclude that a reduction in BC forcing of 0.0025 ~ 0.0087 W m−2 is possible if India adopts the US on-road emission levels.
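The scaling behind these numbers can be reproduced in a few lines; the 69 % reduction, the 74–254 Gg/year prior range, and the 0.88 W m−2 per 17,800 Gg/year forcing efficiency are the values quoted above, and the sketch below simply chains them together.

```python
# Reproducing the scaling quoted in the text (all values from the paper).
prior_low, prior_high = 74.0, 254.0      # adjusted previous BC estimates, Gg/yr
reduction_fraction = 0.69                # reduction when switching to US levels

red_low = reduction_fraction * prior_low     # ~51 Gg/yr
red_high = reduction_fraction * prior_high   # ~175 Gg/yr

# Global BC forcing efficiency: 0.88 W/m2 attributed to 17,800 Gg/yr of emission
global_forcing = 0.88        # W m-2
global_emission = 17800.0    # Gg/yr

forcing_low = global_forcing * red_low / global_emission    # ~0.0025 W m-2
forcing_high = global_forcing * red_high / global_emission  # ~0.0087 W m-2
print(red_low, red_high, forcing_low, forcing_high)
```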
Quantifying climatic benefits of a reduction in BC forcing (in USD)
Both CO2 emission and BC emission contribute to global warming. The Kyoto Protocol introduced the concept of the "carbon credit": 1 carbon credit is a permit to emit 1 tonne of CO2. Such permits can be sold and bought in a market, and Fig. 6 shows the market price of 1 carbon credit over the last 12 months. The Kyoto Protocol also allows warming agents other than CO2 (such as methane) to be traded in carbon credit markets; for these, 1 carbon credit is a permit to emit an amount of the substance whose warming effect is equivalent to that of 1 tonne of CO2. Each warming agent has its own atmospheric residence time, spatial distribution, etc., so comparing a particular warming agent to CO2 is not always straightforward. For BC, the 100-year (or 20-year) global warming potential (GWP) is commonly used for this purpose. According to Bond et al. [10], the 100-year GWP value for BC is 910, meaning that 1 tonne of BC emission adds as much energy to the earth over the next 100 years as 910 tonnes of CO2. Please note that the mass of BC refers only to the carbon component, while the CO2 mass refers to the combined mass of carbon and oxygen.
Average Carbon Price in US Dollar per tonne of CO2 emission or its equivalent (July 2013 – July 2014)
Applying Bond et al.'s [10] estimate of the BC GWP, we find that a BC emission reduction of 51 ~ 175 Gg/year amounts to 0.046 billion – 0.159 billion carbon credits. Using $12.104 (US dollars) as the average price of 1 carbon credit, as shown in Fig. 6, 0.046 billion – 0.159 billion credits are valued at $0.56 B – $1.92 B (US dollars). In short, India could earn $0.56 B – $1.92 B (US dollars) every year by switching from the current on-road emission levels to the US levels. Please note that the Kyoto Protocol did not address BC, but the next climate treaty will likely include it.
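The conversion from the BC reduction to carbon credits and their monetary value is a straightforward chain of multiplications, sketched below with the figures quoted in the text (a GWP of 910 and an average credit price of $12.104).

```python
# Converting the BC emission reduction into CO2-equivalent carbon credits
# and a monetary value, using the figures quoted in the text.
bc_reduction_gg = (51.0, 175.0)   # Gg/yr
gwp_100yr_bc = 910                # 100-year GWP of BC, Bond et al. [10]
price_per_credit_usd = 12.104     # average price of 1 carbon credit (Fig. 6)

for bc in bc_reduction_gg:
    tonnes_bc = bc * 1000.0                 # 1 Gg = 1000 tonnes
    credits = tonnes_bc * gwp_100yr_bc      # tonnes of CO2-equivalent (= credits)
    value_usd = credits * price_per_credit_usd
    print(f"{bc} Gg/yr -> {credits / 1e9:.3f} billion credits "
          f"-> ${value_usd / 1e9:.2f} B per year")
```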
We furthermore note that the 5th IPCC report [36] endorsed Bond et al.'s [10] study as a credible estimate of the BC GWP. Bond et al.'s [10] estimate can be considered credible for several reasons. First, observationally constrained estimates of BC forcing were used, which are similar to that of Cohen and Wang [35]. Second, the rapid adjustment due to the atmospheric heating by BC (i.e., semi-direct forcing) was included as well, using the best available estimate of the semi-direct forcing.
Conclusion and discussions
The existing estimates of BC emission from the transport sector in India cover a wide range of values, implying huge uncertainties. In the present study, we have substantially reduced this uncertainty by constraining the existing estimates with credible isotope analysis results. Next, we derived our own BC estimate for the transport sector, and then provided an estimate of the BC emission reduction that would be possible in India as a result of switching to the more stringent US on-road emission levels. This emission reduction is found to be about 69 %, and coupled with the adjusted previous BC emission estimates it is expected to be in the range of 51 ~ 175 Gg of BC per annum. What is more, we have expressed the proposed BC emission reduction in terms of global BC radiative forcing, which is estimated to range from −0.0025 to −0.0087 W m−2, i.e., a reduction of global BC forcing by 0.0025 ~ 0.0087 W m−2 due to the reduction of BC emission in India. We have also quantified the climate benefits of the BC emission reduction in USD: the BC reduction of 51 ~ 175 Gg is equivalent to 0.046 billion – 0.159 billion carbon credits, valued at $0.56 B – $1.92 B (US dollars) at today's carbon credit price.
Although we fully accounted for uncertainties in estimating the BC emission from the transport sector in India, we did not address the uncertainties in the 69 % reduction estimate, nor did we assess the uncertainties in local BC forcing or BC GWP. In our view, addressing these uncertainties is beyond the scope of the current paper and deserves a separate study. To elaborate, BC GWP is not globally uniform: BC emitted in some areas can contribute more to global warming than the same amount emitted elsewhere, since BC forcing depends on sunlight, low cloud fraction, etc. Beyond the uncertainty in global BC GWP, we would also need to evaluate local BC GWP and address its uncertainty. To simplify the computation, we used the best estimates of global BC forcing and BC GWP and scaled these numbers to estimate the climatic benefit of reducing the BC emission in India. Nevertheless, we believe the ranges of all the estimates in our study are large enough to cover most uncertainties, because we maximized the uncertainties in estimating the BC emission from the transport sector (by picking possibly the most extreme values to represent the range), and the other estimates are based on these BC emission estimates.
Overall, our study provides another reason that vehicles should be cleaner in India. Why vehicles in India emit more aerosols than those in the US requires additional discussion, as there are many contributing factors. Several studies [21, 22, 29] and government reports [19, 37] highlight the key reasons for excessive particulate emissions from Indian vehicles. One key reason relates to Indian emission controls, which have traditionally been based on Euro-style emission standards; such standards allow higher particulate emissions from diesel vehicles than from gasoline vehicles. In comparison, the US has set and enforced the same standards regardless of fuel type. Another reason is that emission standards in India are lax compared to international best practices, reflecting the fact that Indian schedules for adopting emission and fuel quality standards lag those in the West. Other reasons include weak enforcement of emission standards and a significant share of older vehicles in India that are poorly maintained and have poor fuel economy.
An additional and important cause of excessive emissions from Indian vehicles might be the high sulfur content of transport fuels in India, which results in higher sulfate emissions. During combustion, sulfur in diesel fuel is emitted as sulfur dioxide (SO2) gas, which later condenses into sulfate aerosols in the atmosphere. SO2 emission is not part of PM emission in typical PM estimation studies (since a gas is not an aerosol), but at least some of the SO2 ultimately becomes aerosol. Since the present study is about BC aerosols, we refrain from discussing the high sulfur content extensively here.
In the end, vehicles in India emit excessive aerosols because such dirty vehicles are cheap to buy and operate. Dirty vehicles are common in poorer countries, so this issue is not limited to India. Clean technological solutions are available, but unfortunately at additional cost; we discuss some examples in the following. Compared to gasoline engines, diesel engines have lower CO and HC gas emissions but higher NOx and PM emissions [38]. In gasoline engines, tailpipe emissions can be significantly reduced by the efficient use of three-way catalytic converters, but at the expense of fuel economy [39]. In general, emission control technologies for diesel and gasoline engines can be broadly divided into two groups: in-cylinder control and after-treatment control [40]. Posada et al. [41] give a good review of these technologies. For diesel emission control, for instance, PM filters are an example of an after-treatment tool: Minjares et al. [13] and EPA [31] report that these particle filter devices reduce diesel PM emissions by as much as 85 to 90 % and BC emissions by up to 99 %.
Despite all these costs, the benefits could be substantial. Here, we have discussed the benefits of adopting the US on-road aerosol emission levels immediately. The idea of switching to the US levels immediately is unrealistic, so our results should be taken as the upper limit of the benefit; such results are still useful to policymakers. Furthermore, while we have only discussed the climate benefits, the benefits are not limited to the climate and, more importantly, include health benefits. A number of studies [31, 39, 42–45] have substantiated the health benefits of BC emission reduction. It is well established that fine particulates emitted from diesel motor vehicles contain toxic substances, and that exposure to these fine particles can lead to lung cancer and serious respiratory morbidity and mortality, including health outcomes such as exacerbation of asthma, chronic bronchitis, respiratory tract infections, heart disease, and stroke.
Besides outdated vehicle technology, there are behavioral and psychological issues in India, such as a lack of environmental conviction among car consumers, that lead to higher traffic emissions [46–48]. Consumers often dump old tires, batteries, or even scrap cars. Although a large number of consumers are conscious of the environment, very few are actually willing to adapt their lifestyles to address issues such as deteriorating air quality. Only a negligible percentage of people push themselves out of their comfort zone and act at personal expense, for example by paying premiums for environmentally friendly products or making sacrifices in their present lifestyles. Therefore, behavioral changes at the personal level are needed, including (i) raising public awareness to prefer public transportation over personal vehicles; (ii) living near the workplace rather than commuting a long distance every day; (iii) carpooling; and (iv) commuting to the workplace by bicycle. Increasing use of public transportation would mean fewer vehicles on the road, and hence less emission and fewer negative effects on climate and health [49]. Hence, there is no single effective tool to mitigate transport sector emissions. Taken together, we propose that, to assure effective environmental protection, psychological as well as technological measures need to be in place.
The authors are thankful to Dr. George Scora of the University of California, Riverside, Dr. Sarath Guttikunda of Urban Emissions, India, Mr. Michael P. Walsh, Mr. Gaurav Bansal, Mr. John German of ICCT (Washington, DC), Mr. Stevens Plotkins of Argonne National Laboratory, and Mr. Narayan Iyer of Bajaj Auto-India for their expertise. We are also thankful to government and semi-government agencies such as US EPA, CARB, CSE (Delhi), SIAM (India) and ARAI (India) for their prompt responses to our data requests and questions on using emission models. This research was funded by the Korea Meteorological Administration Research and Development Program under Grant Weather Information Service Engine (WISE) project (#: KMA-2012-0001).
Andreae, M and Gelencsér, A: Black carbon or brown carbon? The nature of light-absorbing carbonaceous aerosols. Atmos Chem Phys 2006, 6(10):3131–3148. https://doi.org/10.5194/acp-6-3131-2006
Lamarque, J-F, Bond, TC, Eyring, V, Granier, C, Heil, A, Klimont, Z, Lee, D, Liousse, C, Mieville, A and Owen, B: Historical (1850–2000) gridded anthropogenic and biomass burning emissions of reactive gases and aerosols: methodology and application. Atmos Chem Phys 2010, 10(15):7017–7039. https://doi.org/10.5194/acp-10-7017-2010
Novakov T, Ramanathan V, Hansen J, Kirchstetter T, Sato M, Sinton J, Sathaye J: Large historical changes of fossil‐fuel black carbon aerosols. Geophysical Res Letters 2003, 30(6). doi:10.1029/2002GL016345.
Chung, CE, Ramanathan, V and Decremer, D: Observationally constrained estimates of carbonaceous aerosol radiative forcing. Proc Natl Acad Sci 2012, 109(29):11624–11629. https://doi.org/10.1073/pnas.1203707109
Ramanathan, V and Xu, Y: The Copenhagen accord for limiting global warming: criteria, constraints, and available avenues. Proc Natl Acad Sci 2010, 107(18):8055–8062. https://doi.org/10.1073/pnas.1002293107
Shindell, D, Kuylenstierna, JC, Vignati, E, Dingenen, R, Amann, M, Klimont, Z, Anenberg, SC, Muller, N, Janssens-Maenhout, G and Raes, F: Simultaneously mitigating near-term climate change and improving human health and food security. Science 2012, 335(6065):183–189. https://doi.org/10.1126/science.1210026
Bond TC, Streets DG, Yarber KF, Nelson SM, Woo JH, Klimont Z: A technology‐based global inventory of black and organic carbon emissions from combustion. J Geophysical Res Atmospheres (1984–2012) 2004, 109(D14). doi:10.1029/2003JD003697.
Myhre, G, Bellouin, N, Berglen, TF, Berntsen, TK, Boucher, O, Grini, A, Isaksen, IS, Johnsrud, M, Mishchenko, MI and Stordal, F: Comparison of the radiative properties and direct radiative effect of aerosols from a global aerosol model and remote sensing data over ocean. Tellus B 2007, 59(1):115–129. https://doi.org/10.1111/j.1600-0889.2006.00226.x
Stier, P, Seinfeld, JH, Kinne, S and Boucher, O: Aerosol absorption and radiative forcing. Atmos Chem Phys 2007, 7(19):5237–5261. https://doi.org/10.5194/acp-7-5237-2007
Bond, TC, Doherty, SJ, Fahey, D, Forster, P, Berntsen, T, DeAngelo, B, Flanner, M, Ghan, S, Kärcher, B and Koch, D: Bounding the role of black carbon in the climate system: A scientific assessment. J Geophys Res Atmos 2013, 118(11):5380–5552. https://doi.org/10.1002/jgrd.50171
Kroeger T: Black Carbon Emissions in Asia: Sources, Impacts and Abatement Opportunities. Contractor Report Prepared by International Resources Group for USAID, ECO-Asia Clean Development and Climate Program USAID Regional Development Mission for Asia. USAID: Bangkok, Thailand 2010.
Liggio, J, Gordon, M, Smallwood, G, Li, S-M, Stroud, C, Staebler, R, Lu, G, Lee, P, Taylor, B and Brook, JR: Are emissions of black carbon from gasoline vehicles underestimated? Insights from near and on-road measurements. Environ Sci Technol 2012, 46(9):4819–4828. https://doi.org/10.1021/es2033845
Minjares R, Wagner D, Baral A, Chambliss S, Galarza S, Posada F, SHARPE B, Wu G, Blumberg K, Kamakate F (2014) Reducing black carbon emissions from diesel vehicles: impacts, control strategies, and cost-benefit analysis. The World Bank
Garg, A, Shukla, P, Bhattacharya, S and Dadhwal, V: Sub-region (district) and sector level SO2 and NOx emissions for India: assessment of inventories and mitigation flexibility. Atmos Environ 2001, 35(4):703–713. https://doi.org/10.1016/S1352-2310(00)00316-2
LU J (2011) Environmental Effects of Vehicle Exhausts, Global and Local Effects: A Comparison between Gasoline and Diesel. Thesis, Halmstad University
Klimont, Z, Cofala, J, Xing, J, Wei, W, Zhang, C, Wang, S, Kejun, J, Bhandari, P, Mathur, R and Purohit, P: Projections of SO2, NOx and carbonaceous aerosols emissions in Asia. Tellus B 2009, 61(4):602–617. https://doi.org/10.1111/j.1600-0889.2009.00428.x
Sahu S, Beig G, Sharma C: Decadal growth of black carbon emissions in India. Geophysical Research Letters 2008, 35(2). DOI:10.1029/2007GL032333.
Reddy, MS and Venkataraman, C: Inventory of aerosol and sulphur dioxide emissions from India. Part II—biomass combustion. Atmos Environ 2002, 36(4):699–712. https://doi.org/10.1016/S1352-2310(01)00464-2
ARAI.: The Automotive Research Association of India, 2007. Emission Factor Development for Indian Vehicles, As a Part of Ambient Air Quality Monitoring and Emission Source Apportionment Studies. AFL/2006‐07/IOCL/Emission Factor Project/2007.
Baidya, S: Trace gas and particulate matter emissions from road transportation in India: quantification of current and future levels. Reports, University of Stuttgart, Stuttgart, 2008
Bansal G, Bandivadekar A (2013) OVERVIEW OF INDIA'S VEHICLE EMISSIONS CONTROL PROGRAM. ICCT, Beijing, Berlin, Brussels, San Francisco, Washington
Baidya, S and Borken-Kleefeld, J: Atmospheric emissions from road transportation in India. Energy Policy 2009, 37(10):3812–3822. https://doi.org/10.1016/j.enpol.2009.07.010
Gustafsson, Ö, Kruså, M, Zencak, Z, Sheesley, RJ, Granat, L, Engström, E, Praveen, P, Rao, P, Leck, C and Rodhe, H: Brown clouds over South Asia: biomass or fossil fuel combustion? Science 2009, 323(5913):495–498. https://doi.org/10.1126/science.1164857
Franco, V, Kousoulidou, M, Muntean, M, Ntziachristos, L, Hausberger, S and Dilara, P: Road vehicle emission factors development: A review. Atmos Environ 2013, 70:84–97. https://doi.org/10.1016/j.atmosenv.2013.01.006
Sawyer, RF, Harley, RA, Cadle, S, Norbeck, J, Slott, R and Bravo, H: Mobile sources critical review: 1998 NARSTO assessment. Atmos Environ 2000, 34(12):2161–2181. https://doi.org/10.1016/S1352-2310(99)00463-X
Central Pollution Control Board MoEF (2011) Air Quality Monitoring, Emission Inventory and Source Apportionment Study for Indian Cities. Ministry of Environment & Forests: India
Ramachandra, T: Emissions from India's transport sector: Statewise synthesis. Atmos Environ 2009, 43(34):5510–5517. https://doi.org/10.1016/j.atmosenv.2009.07.015
Nagpure, AS, Sharma, K and Gurjar, BR: Traffic induced emission estimates and trends (2000–2005) in megacity Delhi. Urban Climate 2013, 4:61–73. https://doi.org/10.1016/j.uclim.2013.04.005
Sahu, SK, Beig, G and Parkhi, N: Critical emissions from the largest on-road transport network in south Asia. Aerosol Air Qual Res 2014, 14(1):135–144
India road transport year book 2009–2011. Ministry of Road Transport and Highways (India), New Delhi, India, 2012
EPA U: Report to Congress on black carbon. US Environmental Protection Agency, Washington, DC. 2012, EPA-450/R-12-001 388pp.
Bai S, Eisinger D, Niemeier D (2009) MOVES vs. EMFAC: A comparison of greenhouse gas emissions using Los Angeles County. Transportation Research Board 88th Annual Meeting, Washington DC, Paper 09-0692
Das, S, Schmoyer, R, Harrison, G and Hausker, K: Prospects of inspection and maintenance of two-wheelers in India. J Air Waste Manage Assoc 2001, 51(10):1391–1400. https://doi.org/10.1080/10473289.2001.10464369
Kojima M, Brandon C, Shah JJ: Improving urban air quality in South Asia by reducing emissions from two-stroke engine vehicles. In.: World Bank; 2000
Cohen JB, Wang C: Estimating global black carbon emissions using a top-down Kalman Filter approach. J Geophys Res Atmos 2014, 119:307–323
Randall, D.A., R.A. Wood, S. Bony, R. Colman, T. Fichefet, J. Fyfe, V. Kattsov, A. Pitman, J. Shukla, J. Srinivasan, R.J. Stouffer, A. Sumi, K.E. Taylor, 2007: Climate Models and Their Evaluation. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor, H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
Ministry of Shipping RTH, Government of India; ARAI.: 2007.
Mondt JR (2000) Cleaner Cars: The history and technology of emission control since the 1960s; Society of Automotive Engineers, Inc.
Zelenka, P, Cartellieri, W and Herzog, P: Worldwide diesel emission standards, current experiences and future needs. Appl Catal B Environ 1996, 10:3–28. https://doi.org/10.1016/0926-3373(96)00021-5
Faiz, A, Weaver, CS and Walsh, MP: Air pollution from motor vehicles: standards and technologies for controlling emissions. World Bank, Washington, D.C., 1996
Minjares, R, Blumberg, K and Posada Sanchez, F: Alignment of policies to maximize the climate benefits of diesel vehicles through control of particulate matter and black carbon emissions. Energy Policy 2013, 54:54–61. https://doi.org/10.1016/j.enpol.2012.09.053
Lim, SS, Vos, T, Flaxman, AD, Danaei, G, Shibuya, K, Adair-Rohani, H, AlMazroa, MA, Amann, M, Anderson, HR and Andrews, KG: A comparative risk assessment of burden of disease and injury attributable to 67 risk factors and risk factor clusters in 21 regions, 1990–2010: a systematic analysis for the Global Burden of Disease Study 2010. Lancet 2013, 380(9859):2224–2260. https://doi.org/10.1016/S0140-6736(12)61766-8
Ghose, MK, Paul, R and Banerjee, S: Assessment of the impacts of vehicular emissions on urban air quality and its management in Indian context: the case of Kolkata (Calcutta). Environ Sci Pol 2004, 7(4):345–351. https://doi.org/10.1016/j.envsci.2004.05.004
Sydbom, A, Blomberg, A, Parnia, S, Stenfors, N, Sandström, T and Dahlen, S: Health effects of diesel exhaust emissions. Eur Respir J 2001, 17(4):733–746. https://doi.org/10.1183/09031936.01.17407330
Morgan, W, Reger, R and Tucker, D: Health effects of diesel emissions. Ann Occup Hyg 1997, 41(6):643–658. https://doi.org/10.1093/annhyg/41.6.643
Klocke, U: Conditions of environmental mobility decisions: environmental protection through government action, at the individual modal choice and when buying a car. In: Scholl, W and Sydow, H (eds.) Mobility in adolescence and adulthood. Waxmann, Münster, 2000
Kruger N, Pareigis J: Influencing Car Buying Decisions from an Environmental Perspective-A Conceptual Framework Based on Real Option Analysis. In: European Transport Conference, 2009: Netherlands 2009
Joshi N, Rao P: Environment Friendly Car: Challenges ahead in India. Global J Manage Business Res 2013, 13(4):11-19.
Laffel N (2006) Promoting Public Transportation for Sustainable Development. Thesis, Princeton University
Sharma, A. and Chung, C.E., 2015. Climatic benefits of black carbon emission reduction when India adopts the US onroad emission level. Future Cities and Environment, 1, p.13. DOI: http://doi.org/10.1186/s40984-015-0013-8
Study on deep-learning-based identification of hydrometeors observed by dual polarization Doppler weather radars
Haijiang Wang ORCID: orcid.org/0000-0002-3648-42251,2,
Yuanbo Ran1,
Yangyang Deng1 &
Xu Wang1,2
EURASIP Journal on Wireless Communications and Networking, volume 2017, Article number: 173 (2017)
Hydrometeor classification for dual polarization Doppler weather radar echoes is a procedure that identifies hydrometeor types based on the scattering responses of precipitation particles to polarized electromagnetic waves. Differences in shape, size, or spatial orientation among hydrometeor types produce different scattering characteristics for electromagnetic waves in a given polarization state, and the polarimetric measurements calculated from the radar data, which are closely associated with these characteristics, also differ. The comprehensive use of these polarimetric measurements can effectively improve the identification accuracy of the phase of various hydrometeors. In this paper, a new identification method for hydrometeor types based on deep learning (DL) and a fuzzy logic algorithm is proposed. First, a deep-learning-based feature extraction method is used to learn the correlations among multiple parameters and extract relatively independent features. Second, a Softmax classifier is applied to classify the precipitation pattern (rain, snow, or hail) based on the features extracted by the deep learning algorithm. Finally, a fuzzy logic algorithm is adopted to identify the hydrometeor types within each precipitation pattern. To test the accuracy of the classification results, the hydrometeor classifier was applied to a stratiform cloud precipitation event, and the classification results were found to agree well with the other polarimetric products.
A dual linear polarization radar can transmit horizontal and vertical polarization waves alternately or simultaneously and can apply different signal processing methods to the echo signals from the two polarization directions. From these it readily obtains the horizontal reflectivity (Z H ), differential reflectivity (Z DR), co-polar correlation coefficient (ρ HV), differential propagation phase constant (K DP), and other polarization parameters. Because differences in shape, size, or spatial orientation between hydrometeor types produce different polarization parameters, these parameters support hydrometeor classification. Compared with a conventional Doppler weather radar system, the ability to estimate precipitation and recognize the hydrometeor phase is improved significantly, making such radars important tools in weather modification, aviation warning, and disaster monitoring [1,2,3,4,5].
Liu et al. [6] established a hydrometeor classification system based on fuzzy logic and a neural network. In that system, the horizontal reflectivity, differential reflectivity, differential propagation phase shift, correlation coefficient, linear depolarization ratio, and the corresponding height are used as the inputs, and a neural network learning algorithm is applied to adjust the parameters. Finally, the system combines the inputs and the adjusted parameters to determine the type of hydrometeors [6,7,8].
Chandrasekar et al. [9] summarized recent research on echo classification and hydrometeor identification based on dual polarization radar. The classification principles of the various types were described and the characteristics of hydrometeor classification were analyzed, which greatly advanced the study of hydrometeor classification with dual polarization radar [9, 10].
Besic et al. [11] used a semi-supervised approach to classify hydrometeors. In their study, the K-medoids (KM) approach is used to cluster the sample data and the clustering results are evaluated by the Kolmogorov-Smirnov (KS) test. Finally, the fuzzy logic algorithm is applied to achieve a highly precise classification of hydrometeors based on the clustering results [11].
Hinton et al. [12] published an article in Science that opened the door for deep learning in the field of machine learning. Deep learning, an emerging learning algorithm for multi-layer neural networks, alleviates the local-minimum problem of traditional training algorithms. It has been widely used in machine learning and computer vision and has attracted widespread attention in various fields [12,13,14,15]. Tao et al. [16] used a deep learning approach to identify precipitation from bispectral satellite information; their study explored the effectiveness of deep learning (DL) approaches for extracting useful features from the infrared (IR) and water vapor (WV) channels and producing rain/no-rain (R/NR) detection [16].
Based on data from the upgraded WSR-98D/XD dual polarization Doppler weather radar, we design a hydrometeor identification system based on deep learning and fuzzy logic methods. In this paper, each range bin is represented by a matrix of polarimetric measurements composed of the data of that range bin and its adjacent range bins, covering approximately 1 km2; this 21 × 21 × 4 matrix reflects the precipitation information of the current range bin. The system uses the deep learning algorithm to extract features from the matrix of polarimetric measurements, and the results are fed into a Softmax classifier to identify the precipitation pattern. The fuzzy logic method is then applied to judge the type of precipitation particles given the identified precipitation pattern, yielding the final identification of the hydrometeor type. The deep learning method is used for initial clustering and the fuzzy logic method for accurate clustering; the results show that this combination improves the hydrometeor classification accuracy significantly.
The structure of this article is as follows. In Section 2, the dual polarization radar measurements used for various classifications, the deep learning method, and the fuzzy logic algorithm are briefly described. The process of experimental design is described in Section 3. In Section 4, the performance of the identification system of hydrometeor types is evaluated by the assessment data collected by the experiment center of atmospheric exploration in China.
Dual linear polarization Doppler weather radar
Polarization refers to the direction of vibration of the electric field as an electromagnetic wave propagates: when the electric field vibrates in the horizontal direction the wave is called horizontally polarized, and when it vibrates in the vertical direction it is called vertically polarized. A dual linear polarization Doppler weather radar transmits horizontal and vertical polarization waves alternately or simultaneously. Because the propagation medium is not uniformly distributed in space, the attenuation and phase shift differ between the two polarization signals, so the attenuation difference and the phase shift between the two polarized waves can be obtained; the higher the radar frequency, the more pronounced these propagation effects. Through the corresponding processing and calculation, a series of polarimetric measurements can be obtained, such as the horizontal reflectivity, vertical reflectivity, differential reflectivity factor, differential propagation phase change, differential propagation phase constant, and co-polar correlation coefficient. These parameters are introduced in detail below.
Horizontal reflectivity factor Z H and vertical reflectivity factor Z V
When transmitting horizontal polarization, the expression of the horizontal reflectivity factor is
$$ Z_H=\int_0^{D_{\mathrm{max}}} N(D)\,{D_H}^{6}\,\mathrm{d}D $$
where D H is the size of the particle detected by the radar in the horizontal direction and N(D) is the drop size distribution of the precipitation particles. The expression of the vertical reflectivity factor is
$$ Z_V=\int_0^{D_{\mathrm{max}}} N(D)\,{D_V}^{6}\,\mathrm{d}D $$
In the expression, D V is the size of the particle detected by the radar in the vertical direction.
The differential reflectivity factor Z DR
The differential reflectivity factor Z DR is calculated from the horizontal reflectivity factor Z H and the vertical reflectivity factor Z V :
$$ Z_{\mathrm{DR}}=10\,\log_{10}\left(\frac{Z_H}{Z_V}\right) $$
In the expression, Z DR corresponds to the size and axis ratio of the precipitation particles, where the axis ratio is defined as a/b, with a and b the horizontal and vertical axis radii of the particle, respectively.
The differential propagation phase change Φ DP and differential propagation phase constant K DP
The dual linear differential propagation phase change is defined as
$$ {\varPhi}_{\mathrm{DP}}={\varPhi}_{\mathrm{HH}}-{\varPhi}_{\mathrm{VV}} $$
where Φ HH and Φ VV are, respectively, the two-way propagation phase angles at a given range for the horizontally and vertically polarized radar signals. The differential propagation phase constant K DP is then defined as
$$ {K}_{\mathrm{DP}}=\frac{\varPhi_{\mathrm{DP}}\left({r}_2\right)-{\varPhi}_{\mathrm{DP}}\left({r}_1\right)}{2\left({r}_2-{r}_1\right)} $$
In the expression, K DP is the difference between the propagation constants of the horizontal and vertical polarization waves, and it reflects the contributions of both isotropic and anisotropic particles. Since isotropic particles produce similar phase shifts for the horizontal and vertical polarized waves, K DP is mainly determined by the anisotropic particles. In general, K DP increases with increasing dielectric constant and ellipticity. Compared with the reflectivity factor, K DP is not sensitive to changes in the particle size distribution, and its measurement is not affected by partial beam blocking or isotropic particles; it does, however, depend on the particle concentration.
The co-polar correlation coefficient ρ HV
The co-polar correlation coefficient is defined as the magnitude of the zero-lag correlation coefficient between the horizontal and the vertical polarization echo signal. What is more, it reflects the correlation of the backscatter characteristics between the horizontal and vertical polarized waves. As can be seen from the characteristics of ρ HV, the particle shape, the spatial orientation, and the number of precipitation particles are the main factors that affect its value.
Besides these, the dual polarization Doppler radar also provides other measurements such as the linear depolarization ratio (LDR), covariance coefficient (CC), velocity (V), and spectral width (W).
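As a simple illustration of how the variables defined above are obtained from basic radar moments, the sketch below computes Z_DR and K_DP along one radial; the sample values and the assumption that Φ_DP is available per 50 m gate are for illustration only.

```python
import numpy as np

# Illustrative per-gate radar moments along one radial (values are made up).
z_h = np.array([220.0, 260.0, 310.0, 280.0])      # linear units, horizontal
z_v = np.array([200.0, 230.0, 250.0, 240.0])      # linear units, vertical
phi_dp = np.array([10.00, 10.05, 10.12, 10.20])   # differential phase, degrees
gate_length_km = 0.05                             # 50 m range bins

# Differential reflectivity: Z_DR = 10 log10(Z_H / Z_V), in dB
z_dr = 10.0 * np.log10(z_h / z_v)

# Differential propagation phase constant over consecutive gates:
# K_DP = (Phi_DP(r2) - Phi_DP(r1)) / (2 (r2 - r1)), in deg/km
k_dp = np.diff(phi_dp) / (2.0 * gate_length_km)

print(z_dr)
print(k_dp)
```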
The deep learning method for fuzzy logic classification
Deep learning aims at building deep neural network models by simulating the learning process of the brain. Given large amounts of training data, the relationships among variables can be learned. Common network models include autoencoders, deep belief networks, and convolutional networks. The model used in this paper is a convolutional neural network [17], a supervised multi-layer neural network composed mainly of convolution and subsampling layers.
The selection of sample data
The data used in this study are Z H , Z DR, K DP, and ρ HV from the WSR-98D/XD dual polarization weather radar, which is located in Chengdu, Sichuan, China (30° 34′ N, 103° 55′ E), and its performance indicators are presented in Table 1.
Table 1 The main technical indicators of the WSR-98D/XD dual polarized weather radar
In most products of the dual polarization radar, the length of each range bin is 50 m and the beam width is 1°. In PPI scan mode, a volume scan consists of 14 elevation angles, and each elevation contains 350 to 370 radial directions; each radial direction includes 6000 range bins, of which the valid data are located in the first 2000 range bins.
In this study, the data of each range bin is treated as a basic unit, so each basic unit contains four polarization parameters: Z H , Z DR, K DP, and ρ HV. Considering the high correlation between a range bin and its adjacent range bins, we use a matrix of polarimetric measurements to reflect the type of precipitation particles of each range bin; the matrix is composed of the data of the range bin and of the adjacent range bins within 0.5 km. The implementation steps for obtaining the matrix are as follows.
Firstly, a sample data extraction window is established, and it is composed of 21 × 21 cells. Moreover, the length of each cell is 50 m, and it is equal to the length of a range bin.
Secondly, the data extraction window is used to obtain the matrix of polarimetric measurements for a given range bin, with the center of the window coinciding with that range bin. For each cell of the data extraction window, when a range bin falls into the cell, the data from that range bin are used as the data of the cell.
Finally, through the above steps, we can obtain a sample of 21 × 21 × 4. In order to obtain the sample label, we adopt the voting principle, where the precipitation particle type with the most votes in the sample data is used as the label of the range bin.
In summary, for each range bin, a sample data matrix of 21 × 21 × 4 and a sample label can be obtained.
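A sketch of this sampling procedure is given below. It assumes the polarimetric measurements of one sweep have already been arranged into regular 2-D arrays of 50 m cells (one array per variable, plus a per-bin label array for training), which is a simplification of the real polar geometry.

```python
import numpy as np
from collections import Counter

HALF = 10  # 21 x 21 window -> 10 cells on each side of the center bin

def extract_sample(fields, labels, row, col):
    """Build the 21 x 21 x 4 sample matrix and its majority-vote label
    for the range bin at (row, col).

    fields: array of shape (H, W, 4) holding Z_H, Z_DR, K_DP, rho_HV,
            gridded onto 50 m cells (a simplifying assumption).
    labels: array of shape (H, W) with the per-bin particle type used
            for training.
    """
    window = fields[row - HALF:row + HALF + 1,
                    col - HALF:col + HALF + 1, :]          # 21 x 21 x 4
    votes = labels[row - HALF:row + HALF + 1,
                   col - HALF:col + HALF + 1].ravel()
    # Voting principle: the most frequent particle type becomes the label
    sample_label = Counter(votes.tolist()).most_common(1)[0][0]
    return window, sample_label

# Example with random data standing in for a gridded sweep
rng = np.random.default_rng(0)
fields = rng.normal(size=(200, 200, 4))
labels = rng.integers(1, 4, size=(200, 200))
x, y = extract_sample(fields, labels, row=100, col=100)
print(x.shape, y)
```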
The convolutional neural network
The convolutional neural network (CNN) is a multi-layer perceptron inspired by the biological visual neural mechanism. It consists of multiple convolution and subsampling layers and can automatically extract sample features. In the convolution layer, the neurons of each layer are connected only to neurons of the previous layer within a small neighborhood, which allows features to be extracted from the input while preserving the spatial structure of the original signal. The image can therefore be used directly as the input of the neural network, avoiding the complex data reconstruction and feature extraction of traditional recognition algorithms. In the subsampling layer, the data is compressed by sampling, which reduces the computational complexity and provides a degree of invariance to the spatial structure. In addition, the weight-sharing network structure makes the CNN more similar to a biological neural network, and CNNs have achieved excellent performance in pattern recognition. Nowadays, the CNN has become an important research tool in many fields such as image recognition and automatic speech recognition.
The CNN, a typical supervised learning algorithm, relies on a large number of labeled samples for training, and the back propagation (BP) algorithm is adopted in the training process. Once the original data are passed through the network, the corresponding output is obtained. If the actual output does not match the label data, the error is propagated backwards through the hidden layers to the input layer and assigned to all the units of each layer, yielding an error signal for each unit that is used to correct that unit's weights. The learning process can thus be summarized as adjusting the weights by error back propagation, and it continues to reduce the error until the error is below an acceptable level or the preset number of iterations is exceeded.
The fuzzy classifier
The fuzzy logic method was first proposed by Zadeh in 1965. The traditional fuzzy logic method includes four steps: fuzzification, inference, aggregation, and defuzzification. Considering that precipitation particles behave differently in different precipitation patterns, separate fuzzy logic classifiers are designed for the three precipitation patterns of rain, snow, and hail. The fuzzy logic classifiers in this paper use Z H , Z DR, K DP, and ρ HV as input, which is then processed by fuzzification, inference, aggregation, and defuzzification; in the end, the results are transformed into a single precipitation particle type. The design of the fuzzy logic method for the rainfall pattern is described in detail as follows; its final output is 1 (drizzle), 2 (rain), or 3 (heavy rain). Its structure is presented in Fig. 1.
The structure of the fuzzy logic classifier corresponding to the rainfall pattern
Fuzzification and selection of membership function
The fuzzy logic algorithm for hydrometeor-type identification requires four input variables, which are first fuzzified and transformed into fuzzy sets using membership functions. A given input value can belong to several fuzzy sets, with a different degree of membership in each [9]. The selection of the membership function is clearly the most important part. A large number of comparative experiments showed that the beta membership function works best for the identification of hydrometeor type; its expression is
$$ \beta \left(x,a,b,m\right)=\frac{1}{1+{\left|\frac{x-m}{a}\right|}^{2b}} $$
In the expression, x is the input variable, a is the width of the function, b is its slope, and m is its center; the output value ranges between 0 and 1.
After the previous analysis, the core of the classifier based on fuzzy logic mainly lies in the construction of membership function and rules. The IF-THEN rules for this hydrometeor-type classification can be described as follows:
IF (X 1 is MBF1j and X 2 is MBF2j and X 3 is MBF3j and X 4 is MBF4j ),
THEN hydrometeor is j.
where j = 1, 2, 3 corresponds to the three precipitation particle types of drizzle, rain, and heavy rain, and MBF ij represents the membership function of the ith input parameter for the jth type. The rule strength R j of the jth precipitation particle type can be expressed as:
$$ {R}_j=\sum \limits_{i=1}^4\left[{W}_{ij}\times {\mathrm{MBF}}_{ij}\left({X}_i\right)\right] $$
In the expression, W ij is the contribution of the ith parameter to the j-type precipitation particles and MBF ij (X i ) indicates the membership value of the characteristic parameter X i that corresponds to the jth-type particle.
The results obtained from the inference of the individual rules are aggregated with the maximum aggregation method, which takes the result with the maximum truth value as the final result. The formula can be expressed as \( C=\underset{j}{\max}\left[{R}_j\right] \).
Defuzzification
The purpose of defuzzification is to find the index value corresponding to the maximum rule strength, which is then output as the result.
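The rainfall-pattern classifier described above can be sketched as follows. The membership-function parameters (a, b, m) and the weights W_ij below are placeholders chosen only to make the example run; the tuned values used in the paper are not reproduced here.

```python
import numpy as np

def beta_mf(x, a, b, m):
    """Beta membership function used for fuzzification."""
    return 1.0 / (1.0 + np.abs((x - m) / a) ** (2.0 * b))

# Placeholder parameters: one (a, b, m) triple per input variable (rows:
# Z_H, Z_DR, K_DP, rho_HV) and per output class (drizzle, rain, heavy rain).
MF_PARAMS = np.array([
    # drizzle          rain             heavy rain
    [[10, 2, 15],     [10, 2, 30],     [10, 2, 45]],      # Z_H (dBZ)
    [[0.5, 2, 0.2],   [0.5, 2, 0.8],   [0.5, 2, 1.5]],    # Z_DR (dB)
    [[0.3, 2, 0.0],   [0.3, 2, 0.3],   [0.3, 2, 1.0]],    # K_DP (deg/km)
    [[0.03, 2, 0.99], [0.03, 2, 0.98], [0.03, 2, 0.97]],  # rho_HV
])
WEIGHTS = np.ones((4, 3))  # W_ij, equal contributions as a placeholder

def classify_rain(zh, zdr, kdp, rhohv):
    """Return 1 (drizzle), 2 (rain), or 3 (heavy rain)."""
    x = np.array([zh, zdr, kdp, rhohv])
    strengths = np.zeros(3)
    for j in range(3):            # inference: R_j = sum_i W_ij * MBF_ij(x_i)
        for i in range(4):
            a, b, m = MF_PARAMS[i, j]
            strengths[j] += WEIGHTS[i, j] * beta_mf(x[i], a, b, m)
    # aggregation (maximum rule strength) + defuzzification (index as output)
    return int(np.argmax(strengths)) + 1

print(classify_rain(zh=32.0, zdr=0.9, kdp=0.35, rhohv=0.985))
```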
The overview of the experimental design is presented in Fig. 2. In this study, two modules are built to realize the hydrometeor-type identification system: the precipitation pattern classification module based on deep learning, which identifies the precipitation pattern of a range bin, and the precipitation particle identification module based on fuzzy logic, which judges the hydrometeor type of that range bin. The two modules are described in detail below.
Overview of the hydrometeor-type identification system
The precipitation pattern classification system based on deep learning
In this system, the input is a 21 × 21 × 4 matrix that reflects the precipitation pattern of a range bin, and the convolutional neural network is applied to extract features from the input. The Softmax classifier is then applied to classify the precipitation pattern (rain, snow, or hail) based on the features extracted by the convolutional neural network. The structure diagram of this system is presented in Fig. 3.
The structure of the precipitation pattern identification system
As shown in Fig. 3, the system is composed of two convolution-subsampling layers and a Softmax classifier, and its workflow is mainly divided into the training process and the testing process. The detailed descriptions are as follows.
The training process
In the training process, the BP algorithm is used to adjust the parameters of the system based on a large number of labeled sample data, so that these parameters can meet the requirements of accurate classification. Its implementation steps are described as follows.
Firstly, the input data are convolved in the convolution layer and the results are downsampled in the subsampling layer. Repeating these two steps once more, 40 features are extracted from the input.
Secondly, the Softmax classifier is applied to identify the precipitation pattern based on the 40 features from step 1, giving an output of 1 (rain), 2 (snow), or 3 (hail).
Finally, if the actual output does not match the label data, the BP algorithm is applied to correct the weights of each neuron in the system.
The training process can be summarized as adjusting the weights by the BP algorithm, and it continues to reduce the error until the error reaches an acceptable level or the preset number of iterations is exceeded.
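A minimal Keras-style sketch of such a network is shown below. Only the 21 × 21 × 4 input, the 40-dimensional feature layer, and the three-class Softmax output are taken from the description above; the filter counts, kernel sizes, optimizer, and loss are assumptions for illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Two convolution-subsampling stages followed by a 40-feature layer and a
# three-class Softmax output (rain, snow, hail). Filter counts and kernel
# sizes are illustrative assumptions, not values from the paper.
model = models.Sequential([
    layers.Input(shape=(21, 21, 4)),
    layers.Conv2D(16, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(32, (3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),
    layers.Dense(40, activation="relu"),     # the 40 extracted features
    layers.Dense(3, activation="softmax"),   # precipitation pattern classes
])

# Backpropagation training against the labeled samples; labels are assumed
# to be encoded as 0, 1, 2 for rain, snow, hail.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_samples, train_labels, epochs=20, validation_split=0.1)
```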
The testing process uses labeled samples to evaluate the accuracy of the system by comparing the output of the system with the labels. In this study, we used sample data collected in rain, snow, and hail events to test the system and found that it achieves a high accuracy of 81.17%.
The precipitation particle identification system based on fuzzy logic
The precipitation pattern of each range bin has been obtained in Section 3.1. Because different precipitation patterns include different precipitation particles, we build three different fuzzy logic classifiers corresponding to the three precipitation patterns of rain, snow, and hail. The relationship between the precipitation pattern and the hydrometeor type is presented in Table 2.
Table 2 The relationship between the precipitation pattern and the hydrometeor type
As shown in Fig. 2, the system uses the Z H , Z DR, K DP, and ρ HV of the current range bin as input, and then, the input is processed by fuzzification, inference, aggregation, and defuzzification. In the end, the results are transformed into a single precipitation particle type.
Performance evaluation of the system
The assessment data are from a stratiform precipitation event detected at 1508 UTC 27 March 2017; the main PPI products, including the horizontal reflectivity (Z H ), differential reflectivity (Z DR), co-polar correlation coefficient (ρ HV), and differential propagation phase constant (K DP), are displayed in Fig. 4, for which the detection elevation is 1.4°.
The radar measurements corresponding to the case of 27 March 2017. a Z H . b Z DR. c K DP. d ρ HV
The classification result of precipitation pattern corresponding to the assessment data is presented in Fig. 5.
The classification result of precipitation pattern corresponding to the case of 27 March 2017. a PPI of the classification result of precipitation pattern from the product of dual polarization weather radar. b PPI of the classification results of precipitation pattern from the classification system
As represented in Fig. 5, Fig. 5a shows the classification results from the radar product, while Fig. 5b shows the classification results from the precipitation pattern classification system. Comparing the two images, it can be seen that their distributions are similar, the difference being that Fig. 5b is smoother than Fig. 5a: some clutter interference has been removed, making the classification result better. This is mainly because the system uses the feature matrix as input, which consists of all the data near the range bin within roughly 1 km2, rather than a feature vector from the range bin alone. Although this method increases the complexity of the data processing, it removes the interference of clutter and noise and effectively reduces the influence of radar measurement errors on the classification accuracy, so that the identification accuracy of the system improves significantly.
As described in Section 3, the hydrometeor-type identification system identifies precipitation particles including crystals (CR), drizzle (LR), rain (RN), heavy rain (HR), wet snow (WS), dry snow (DS), ice crystals (IC), graupel (GR), ice hail (IH), and rain hail (RH); the identification results for the assessment data are presented in Fig. 6.
The identification result of hydrometeor type corresponding to the case of 27 March 2017
The identification system in this paper mainly identifies the hydrometeor type within 100 km, and the distance between the range rings is 30 km. As presented in Fig. 6, rain is the main hydrometeor type in the region within 70 km of the radar station. Comparing with Fig. 4, the echo intensity ranges from 10 to 25 dBZ, the value of Z DR is mostly from − 1.5 to 1.5 dB, the value of K DP ranges from − 1 to 1°/km, and the value of ρ HV ranges from 0.95 to 1. From the distribution of these polarization parameters, it can be inferred that the precipitation particles in this region should be raindrops, which is consistent with the classification results of the system.
In the purple regions of Fig. 6, the hydrometeor types mainly consist of ice crystals, graupel, and ice hail. These regions lie about 70 km from the radar station at a height of about 1.7 km. Above these regions the hydrometeor type is hail, and below them rain is the main hydrometeor type, so it can be inferred that the freezing level is located in these regions. As presented in Fig. 4, the value of Z H ranges from 25 to 40 dBZ, the value of Z DR is mostly from 1 to 2.5 dB, and the value of ρ HV ranges from 0.92 to 0.95. From the distribution of these polarization parameters, it can be inferred that these regions belong to the freezing level, which is consistent with the identification results of the system.
An intelligent hydrometeor-type identification system for dual polarization Doppler weather radar data has been developed. It combines the main advantages of both deep learning and fuzzy logic algorithms, and it keeps the potential operational implementation reasonably simple. The hydrometeor-type identification system mainly consists of two modules. The first module, referred to as the precipitation pattern classification system based on deep learning in this paper, applies deep learning techniques to automatically extract useful features from the matrix of polarimetric measurements. Moreover, the Softmax classifier is used to classify the precipitation type based on the features extracted by the deep learning algorithm. The second module, referred to as the particle identification system based on fuzzy logic in this paper, applies the fuzzy logic methods to judge the hydrometeor type based on the Z H , Z DR, K DP, ρ HV, and the precipitation pattern which was classified by module 1.
In the system, the use of the convolutional neural network algorithm for initial clustering removes the interference of clutter and noise and effectively reduces the influence of measurement errors in the polarization parameters. In addition, the fuzzy logic method is used for accurate clustering; the final classification result depends only on the memberships rather than on the specific values, so it is not affected by inaccurate values of some parameters. From the 27 March 2017 case, we can see that the identification result agrees well with the other polarimetric products.
The authors are grateful for the helpful insights provided by all their colleagues at the CMA Key Laboratory of Atmospheric Sounding.
This research is funded by the Department of Science and Technology of Sichuan Province (award number 2016JY0106) and Education Department of Sichuan Province (award number 16ZA0209).
The datasets supporting the conclusions of this article are private; they were provided by the CMA Key Laboratory of Atmospheric Sounding, Chengdu, Sichuan, China.
College of Electronic Engineering, Chengdu University of Information Technology, Chengdu, Sichuan, 610225, China
Haijiang Wang, Yuanbo Ran, Yangyang Deng & Xu Wang
CMA Key Laboratory of Atmospheric Sounding, Chengdu, Sichuan, 610225, China
HJW carried out the feature extraction on the matrix of polarimetric measurements with the CNN algorithm and performed the precipitation-pattern classification with the Softmax classifier based on the extracted features. YBR carried out the hydrometeor-type identification with the fuzzy logic method based on the precipitation patterns classified by HJW. YYD and XW analyzed the identification performance. All authors read and approved the final manuscript.
Correspondence to Haijiang Wang.
Wang, H., Ran, Y., Deng, Y. et al. Study on deep-learning-based identification of hydrometeors observed by dual polarization Doppler weather radars. J Wireless Com Network 2017, 173 (2017) doi:10.1186/s13638-017-0965-5
Received: 21 August 2017
Hydrometeor
Polarimetric measurements
Fuzzy logic algorithm
Radar and Sonar Networks | CommonCrawl |
Focus on »Compliance« in the Digital Library | springerprofessional.de
Further book chapters
Chapter 5. Intelligent Interaction in Accessible Applications
Advances in artificial intelligence over the past decade, combined with increasingly affordable computing power, have made new approaches to accessibility possible. In this chapter we describe three ongoing projects in the Department of Computer Science at North Carolina State University. CAVIAR, a Computer-vision Assisted Vibrotactile Interface for Accessible Reaching, is a wearable system that aids people with vision impairment (PWVI) in locating, identifying, and acquiring objects within reach; a mobile phone worn on the chest processes video input and guides the user's hand to objects via a wristband with vibrating actuators. TIKISI (Touch It, Key It, Speak It), running on a tablet, gives PWVI the ability to explore maps and other forms of graphical information. AccessGrade combines crowd-sourcing with machine learning techniques to predict the accessibility of Web pages.
Sina Bahram, Arpan Chakraborty, Srinath Ravindran, Robert St. Amant
4. Considerations on the Design of Personnel Development Programmes
A look at everyday administrative practice shows that the "bosses in the office" receive further qualification beyond their initial training. Leadership development takes place regularly and across the board. On the one hand, this practice is the result of voluntary efforts by administrations, undertaken with the aim of simply making in-house managers "better". At the same time, it is a direct consequence of the applicable collective bargaining and civil service law (§ 5 TVöD for the administrative sector and the state civil service acts and career regulations). Almost "classic" examples include seminars on communication, conflict resolution, or stress management. Training departments and study institutes offer entire leadership development series (among other things within the framework of Modular Qualification) or, in some cases, coaching. Added to this are staff appraisal interviews, employee surveys on leadership behaviour, and potential analyses. These, too, are used to give managers feedback and subsequently to develop them.
Christina Winners
Leisure Industry and Hotels: The Importance of Wellness Services for Guests' Well-Being
In the last decades, the wellness industry has experienced a real economic boom worldwide. Wellness centres within hotel complexes have become one of the most important components in the development of the leisure industry. Aside from the standard offering that wellness centres include (hydrotherapy, cosmetic treatments, massage, fitness, meditation, nutritive balanced menus, etc.), one of the frequently neglected functions of wellness centres is certainly their educational nature—the spreading of the wellness philosophy with an accent on a healthy lifestyle. Such a holistic approach of applying the wellness concept can have a positive impact on the subjective feeling of well-being among the visitors of wellness centres. Wellness tourism, and wellness as a lifestyle, are certainly a part of modern trends that hotels have to adjust to for the sake of financial success and the achievement of competitive advantage in the market. Thus, a critical approach to the literature in this field would represent a useful theoretical base, both for the managers of hotels and for their customers. The results could be of interest to all stakeholders in the hotel and leisure businesses, since they could be applied when setting future standards in the fields of the leisure industry and wellness tourism.
Milica Rančić Demir, Marko D. Petrović, Ivana Blešić
Cross-level Co-simulation and Verification of an Automatic Transmission Control on Embedded Processor
This work proposes a method for the development of cyber-physical systems starting from a high-level representation of the control algorithm, performing a formal analysis of the algorithm, and co-simulating the algorithm with the controlled system both at high level, abstracting from the target processor, and at low level, i.e., including the emulation of the target processor. The expected advantages are a smoother and more controllable development process and greater design dependability and accuracy with respect to basic model-driven development. As a case study, an automatic transmission control has been used to show the applicability of the proposed approach.
Cinzia Bernardeschi, Andrea Domenici, Maurizio Palmieri, Sergio Saponara, Tanguy Sassolas, Arief Wicaksana, Lilia Zaourar
Impact of Demonetization on Textile and Apparel Industry
The demonetization drive has directly or indirectly affected the entire Indian textile and apparel industry, but its major impact on Micro, Small and Medium Enterprises (MSME) and the unorganized sector is evident. The MSME sector of the textile industry is largely driven by contractual labor as well as daily wagers. Due to the cash crunch, the textile and apparel industry was unable to pay wages and meet the daily expenses of running its mills. The negative effects of demonetization are workforce layoffs, closure of small units, reduction in manufacturing activities, increase in production cost, and a rise in raw cotton prices in some areas. The positive outcomes are that cashless transactions have increased significantly and around 5 lakh bank accounts have been opened in the major textile clusters such as Tirupur, Surat, Ludhiana and Bhiwandi. The major objective of this review is to study the impact of the demonetization drive on the textile and apparel supply chain.
Anju Choudhery, Kiran Choudhery, Varinder Kaur, Parambir Singh Malhi, Sachin Kumar Godara
5. When and Where Does Genocide Occur?
This chapter offers a brief history of the concept of genocide and is designed to introduce students to critical thinking about cases of genocide that might have occurred before the word/crime existed. Case studies include the Herero genocide, the Armenian genocide, the Bangladesh genocide, Darfur, and the Rohingya genocide. The case studies are designed to explore the history that led to each genocide, so that similarities and differences can be analyzed in order to identify potential warning signs of impending genocide.
William R. Pruitt
4. Who Commits Genocide?
This chapter explores the question of who commits genocide. It may not seem obvious that in many cases the "state" is committing the act. To understand this concept, the chapter shows that the "state" does not exist by itself: it is a collection of individuals, and when their work turns to crime or genocide, the state is responsible for the action. Essentially, the chapter looks at how perpetrators can exist at the macro level (the state) and the micro level (individuals). At the micro level, there are many reasons people commit genocide, from ideology to fear to duress, and these motivations also affect their personal liability for the crime. Those who commit genocide are more than just "crazy" murderers; they include the state itself and individuals.
2. How Can We Understand Genocide?
Trying to understand why genocide occurs is crucial to a full understanding of the crime and, hopefully, its prevention. There is no single reason why genocide might occur; there are many reasons and many possible explanations for it. This chapter looks at the different academic explanations for genocide from a variety of disciplines. Ideally, after exploring the myriad theories, one can look for common factors among the disciplines that might help explain how we understand genocide. These understandings come from different disciplines but feed into the recent criminological theories of genocide. Since criminology is an interdisciplinary field, it is easy to connect these diverse fields to criminology and explain what aspects of them could be incorporated into a general criminological theory of genocide.
6. How Do We Respond to Genocide?
This chapter is designed to explain the methods available to respond to genocide, which can include military or political intervention during an ongoing genocide or a legal response after a genocide has occurred. By exploring the Rwandan and Kosovo situations, we understand that the UN response to genocide is not enough to prevent its occurrence. Following the disastrous response to the Rwandan genocide, much work was put into what became known as R2P, the responsibility to protect. A discussion of how individuals can respond to genocide includes simple actions one can take when confronted with genocide. The goal of this chapter is to explore how to respond to genocide, not whether we should respond.
Chapter 14. One River and 40+ Dams: The China Factor in the Amazonian Tapajós Waterway
The recent surge in exports of Brazilian soybeans to China and Beijing's strong interest in securing control of international energy sources have emboldened the Brazilian agribusiness and energy sectors to pressure the government to open the Tapajós River for navigation, with the goal of decreasing logistics costs in the export of commodities. In order to make the Tapajós River navigable by large barge convoys, the Tapajós Waterway requires the construction of over 40 hydroelectric dams and the completion of the Tapajós Hydroelectric Complex in a river basin the size of France. This will result in the flooding of 18,700 hectares of land currently inhabited by Munduruku Indigenous communities and the clearing of up to 950,000 hectares of forest. Chinese companies have recently acquired many hydropower plants in Brazil with financing from Chinese banks, which have also offered oil-backed loans for infrastructure development in the country. This chapter discusses how the growth in Brazilian soybean exports to China, the construction of private ports in the Amazon by logistics groups, and recent Chinese direct investment in Brazilian hydropower assets have intertwined to make the Tapajós Hydroelectric Complex—and the social and environmental damages involved—increasingly inevitable.
Ricardo Andrade
Chapter 11. China's Hydro-Hegemony in the Mekong Region: Room for Improvement
Freshwater resources do not respect political boundaries and are frequently shared among countries, making water management an issue of international concern in many places. Worryingly, the number of conflicts between riparian neighbours in international river basins has, overall, been on the rise. In the light of this situation, the role of the most powerful riparian countries, the so-called hydro-hegemons, takes centre stage, as their large power capabilities are arguably accompanied by the responsibility to provide good water governance. In the Asian context, China readily comes to mind as a hydro-hegemon due to its favourable geographical upstream position and superior material power. But how has China made use of these and related assets? Simply put, has China been a good or bad hydro-hegemon? To answer this question, this chapter will apply the framework of hydro-hegemony as introduced by (Zeitoun and Warner in Water Policy 8:435–460, 2006) to China's performance in the Mekong region. The chapter's main argument is that China could still do much more to be a better hydro-hegemon in this particular river basin, providing more positive leadership and sharing the Mekong's resources in a more cooperative way.
Sebastian Biba
Chapter 6. Social Stability, Migrant Subjectivities, and Citizenship in China's Resettlement Policies
This chapter rethinks citizenship and migrant subjectivities in the context of dam development in China. In detail we answer the following questions: How have the rationalities put forth by Chinese dam resettlement policies changed since the 1980s? Which new practices have evolved that are used to form dam migrant subjects? And which types of citizenship are produced as a consequence? Building upon previous studies that have shown how China has applied a graduated citizenship approach towards internal migrants and national minorities, we argue that post-resettlement support schemes such as 'Constructing a Beautiful Home' (meili jiayuan jianshe) introduce new forms of social citizenship that further differentiate society. This scheme builds on pastoral and benevolent technologies of government aimed at reducing the perceived risk of social instability by reintegrating affected households into the Chinese national development narrative. In doing so, the scheme establishes a neo-socialist governmentality that further marginalises dam migrants. We show that new programmes implemented in dam resettlement villages are designed to create self-responsible and docile migrant subjects that are proud of their new identity rather than contesting it.
Sabrina Habich-Sobiegalla, Franziska Plümmer
The Future of Think Tanks: A Brazilian Perspective
Carlos Ivan Simonsen Leal, President, and Marlos Correia de Lima, Executive Director of Fundação Getúlio Vargas in Rio de Janeiro, Brazil, explore the Future of Think Tanks and Policy Advice around the World.
Carlos Ivan Simonsen Leal, Marlos Correia de Lima
Data Is a Powerful Assistant but the Think Tanker Is Still in Charge
Pascal Boniface, Director of the French Institute for International and Strategic Affairs (IRIS) in Paris, France, explores the Future of Think Tanks and Policy Advice around the World.
Chapter 8. Conclusion: Contributions, Impacts and Recommendations for Future
This chapter summarises the study findings for each main task stated in the Introduction chapter. It comprises four main sections: a justification of how the main tasks are delivered, contributions to knowledge, implications for practice, and recommendations for future work.
Saleh Seyedzadeh, Farzad Pour Rahimian
Meta-modeling of Space Probe On-Board Computing Complexes
The study of small bodies of the Solar System (SBSS) is a real and important scientific problem, and it demands landing a space probe on an SBSS surface. At the same time, modern space-probe landing procedures place ever-increasing requirements on on-board computing complexes (OBCC). One of the main requirements is adaptability to the specific mission tasks. It is necessary to reduce the cost of developing on-board computing complexes in view of the growing number of missions to small bodies in the Solar System. We assume that meta-modeling is the key to designing and modeling on-board computational complexes. The paper describes the meta-modeling and interpretation of spacecraft on-board hardware and software complexes intended for researching and landing onto small bodies of the Solar System. An original approach that could be implemented as a CASE technology for the full life cycle of an OBCC is proposed. The approach is based on a combination of visual algorithmic modeling, a programming language, and the SADT methodology, and it is designed to meet the requirements of OBCC ergonomics. The developed meta-model makes it possible to model both the hardware and the software subsystems. It is also possible to translate the models automatically into source code according to the meta-data and metalanguage rules; the interpretation is shown in the C# programming language. The approach proposed in the paper could significantly optimize the process of spacecraft OBCC design and creation. A hierarchical decomposition of the functional schemes is intended to describe in more detail both the interaction between individual elements and specific submodules of the space probe computational complexes.
Alexander Lobanov, Natalia Strogankova, Roman Bolbakov
Chapter 2. Building Energy Performance Assessment Methods
Buildings are responsible for a vast amount of GHG emissions. Therefore, most countries have set regulations to decrease the gas emissions and energy consumption of buildings. These regulations are diverse, targeting different areas, new and existing buildings, and usage types. This paper reviews the methods employed for building energy performance assessment and summarises the schemes introduced by governments. The challenges with current practices are discussed and solutions are recommended.
Chapter 1. Understanding Micro and Small Enterprises
This chapter discusses significant factors that shape the entrepreneurial environment in the context of micro, small and medium enterprises (MSMEs). The business environment and entrepreneurial ecosystem are central to the discussion in this chapter, which emphasizes the micro- and macro-economic factors affecting the entrepreneurial process and performance. In addition, innovation and technology, and marketing strategies, are discussed in this chapter. The chapter also focuses on the shifts in organizational design and governance with reference to the different types of organizational structures, work culture, social effects on entrepreneurship, and organizational behavior and decision-making.
Ananya Rajagopal
Toward a Low-Carbon Economy: The Clim'Foot Project Approach for the Organization's Carbon Footprint
The EU Emission Trading System (ETS) represents an essential part of the European policies on Climate Change, targeting the most polluting organizations, which cover 45% of the GHG emissions. However, no common framework has been proposed yet for "non-ETS organizations." The reduction of direct emissions in most of the cases is not enough for significantly tackling climate change, but an approach that encompasses also indirect emissions should be adopted, as promoted in the Carbon Footprint of Organisations (CFO), for achieving the ambitious targets set in the European Green Deal. The application of the CFO supports organizations in defining and monitoring the effects of mitigation actions: thanks to CFO, organizations are encouraged to innovate their management system, improve the use of resources, strengthen relationships in the supply chain, beside obtaining a reduction of their costs. In this context, the LIFE Clim'Foot project has given a contribution to foster public policies for calculation and reduction of the CFO. The project has dealt with two key aspects: (i) the need for national policies addressing GHG emissions of non-ETS organizations and the strategic role of structured and robust tools, such as national databases of Emission Factors; (ii) the relevance of organizations' training in fostering their commitment to account for and mitigate GHG emissions. This chapter illustrates the development and application of Clim'Foot approach for promoting the calculation of the CFO and definition of mitigation actions and to highlight the results of the testing phase in Italy. The approach is described in terms of (i) the toolbox developed (national databases of emission factors, training materials and carbon footprint calculator), (ii) the voluntary program set up to engage public and private organizations and (iii) the role played by decision-makers. Strengths and weaknesses of the Clim'Foot approach are discussed, together with opportunities of replicability and transferability of the results to support the development of a dynamic European network for carbon accounting.
Simona Scalbi, Patrizia Buttol, Arianna Dominici Loprieno, Gioia Garavini, Erika Mancuso, Francesca Reale, Alessandra Zamagni
Carbon Footprint Assessment with LCA Methodology
Growing carbon footprints are damaging nature and natural systems, propagating a series of disastrous events. These growing footprints are due to non-sustainable industrial practices, the use of fossil fuels, improper disposal and waste management, and similar factors. An assessment is therefore a must prior to mitigation efforts. The chapter deals with the carbon footprint and its assessment using LCA. Along the way, several related topics, such as the carbon footprint itself, environmental concerns and their mitigation, the need for assessment, LCA methods, and LCA tools, are discussed in detail. The chapter outlines the efficacy of LCA for footprint assessment, as supported by studies and investigations carried out worldwide, and provides a detailed description of LCA in the assessment context.
Gaurav Jha, Shatrughan Soren, Kapil Deo Mehta
Application of Intelligent Compaction (IC) as a Quality Control Tool: An Oklahoma Experience
Performance of asphalt pavements depends on the quality of compaction achieved during construction. Asphalt cores as an indicator of construction quality are not reliable because they typically cover less than 1% of the constructed pavement. Intelligent compaction (IC) estimates the level of compaction of the entire pavement layer during construction. IC rollers are equipped with accelerometers for measuring vibration, a GPS for monitoring spatial location, a temperature sensor for measuring surface temperature and an on-board computer for real-time execution of software and data storage. Although IC shows great promise as a quality control tool, there are concerns regarding the quality and analysis of data including missing data, data accuracy, data filtering, and data interpretation. Also, verifying compliance of the IC output with Department of Transportation (DOT) requirements needs accurate project boundaries. Project boundaries obtained from the onboard GPS may not be adequate for verifying compliance. In this study, the IC data from three pilot projects in Oklahoma were analyzed using the VETA (v 5.1) software, which is a map-based tool for viewing and analyzing IC data. Three different IC providers were used in collecting these IC data. A high degree of variability in collected data was observed, including inconsistent file naming, unspecified target for number of roller passes and inadequate layout of project boundaries. Despite variability, coverage, number of roller pass, compaction temperature, roller speed and roller frequency were found useful as indicators of compaction quality. Project size and operator training were also found to be important factors for successful implementation of intelligent compaction as a quality control tool.
Mohammad Ashiqur Rahman, Musharraf Zaman, Blake Gerard, Jason Shawn, Syed Ashik Ali, Kenneth R. Hobson
A Numerical-Analytical Method for Dynamic Analysis of Piles in Non-homogeneous Transversely Isotropic Media
This paper presents a novel numerical-analytical method for dynamics of piles embedded in non-homogeneous transversely isotropic soils. In the method proposed, the piles are modelled using beam-column elements, while a new type of elements called radiation discs are defined at the nodal points of the elements to simulate the wave propagation through the non-homogeneous soil medium. By using radiation discs, the discretisation is only required along the length of piles, while discretisation of surrounding medium, top free surface boundary, and cross sections of piles are avoided. Numerical results are presented and the effect of soil non-homogeneity on lateral compliances of piles and pile groups is particularly emphasised.
B. Shahbodagh, H. Moghaddasi, N. Khalili
Application of an Innovative Displacement Based Design Approach for Earth Embankments on Piled Foundations
Deep foundations are commonly employed as settlement reducers for earth embankments on soft soil layers. Owing to the presence of piles, both stresses transmitted to the foundation soil and average settlements reduce. Since the piles may be interpreted as a vertical heterogeneity for the system, during the embankment construction and the soil consolidation, differential settlements accumulate at both the base and the top of the embankment. This complex interaction mechanism is severely influenced by the embankment height and by the relative stiffness of the various elements constituting this peculiar geostructure. Nevertheless, the approaches commonly adopted in the current engineering practice and the ones suggested by design codes do not explicitly consider the material deformability and do not allow the embankment settlement estimation.In this paper, the practical application of a new displacement based design approach for earth embankment on piled foundations under drained condition is presented. This model, based on the concept of "plane of equal settlement", explicitly puts in relationship the embankment height, interpreted as a generalized loading variable, and the settlements at the top of the embankment.
Luca Flessati
Neither in the Programs Nor in the Data: Mining the Hidden Financial Knowledge with Knowledge Graphs and Reasoning
Vadalog is a logic-based reasoning language for modern AI solutions, in particular for Knowledge Graph (KG) systems. It is showing very effective applicability in the financial realm, with success stories in a vast range of scenarios, including: creditworthiness evaluation, analysis of company ownership and control, prevention of potential takeovers of strategic companies, prediction of hidden links between economic entities, detection of family businesses, smart anonymization of financial data, fraud detection and anti-money laundering. In this work, we first focus on the language itself, giving a self-contained and accessible introduction to Warded Datalog+/-, the formalism at the core of Vadalog, as well as to the Vadalog system, a state-of-the-art KG system. We show the essentials of logic-based reasoning in KGs and touch on recent advances where logical inference works in conjunction with the inductive methods of machine learning and data mining. Leveraging our experience with KGs in Banca d'Italia, we then focus on some relevant financial applications and explain how KGs enable the development of novel solutions, able to combine the knowledge mined from the data with the domain awareness of the business experts.
Luigi Bellomarini, Davide Magnanimi, Markus Nissl, Emanuel Sallinger
Development and Usability Assessment of a Semantically Validated Guideline-Based Patient-Oriented Gestational Diabetes Mobile App
Studies have shown the benefits of following Clinical Practice Guidelines (CPGs) in the daily practice of medicine. Nevertheless, the lack of digitalization of these guidelines makes their update and reliability to be a challenge. With the aim of overcoming these issues, Computer Interpretable Guidelines (CIGs) have been promoted to use in Clinical Decision Support Systems (CDSS). Moreover, the implementation of Semantic Web Technologies (SWTs) to formalize the guideline concepts is a powerful method to promote the standardization and interoperability of these systems. In this paper, the architecture of a CIG-based and semantically validated mobile CDSS is introduced. For that, the development of a patient-oriented mobile application for the management of gestational diabetes is described, and the design and results of its usability assessment are presented. This validation was carried out following the System Usability Scale (SUS) with some additional measurements, and results showed excellent usability scores.
Garazi Artola, Jordi Torres, Nekane Larburu, Roberto Álvarez, Naiara Muro
Chapter 2. The Research Concept
The proposed research has a combined descriptive and exploratory purpose. The descriptive research covers the review of the renewable energy policy instruments supporting the development of offshore wind, large onshore wind, and solar PV projects. The feed-in tariff policy instrument and its derivatives have been a very efficient way to develop renewable energy. However, role-model countries in the energy transition, such as Germany, are now reducing the extent of their support. The research work describes this phenomenon and its consequences for renewable energy project developers.
Matthieu Jaunatre
Chapter 1. Issues of Historical and Managerial Research
This chapter introduces the main methodological issues in the formation and development of the History of Management Thought (HMT). The main questions that the history of management thought should answer are: "Why and for what purpose has one or another management idea been proposed?", "Why was it proposed at specifically this time and place?", and "Which conditions and circumstances have affected the emergence of a new management idea?" Emphasis is placed on the relevance of increasing the scientific validity of management decisions; the general and specific characteristics of HMT as a scientific, applied, and educational discipline, the role and place of HMT in the history of science, the organization of research, methods of HMT development, source studies, and other HMT issues are also discussed. Here the reader will get acquainted with the three main concepts of management thought: the models of the police, legal, and cultural states.
Vadim I. Marshev
Chapter 2. The Origins of Management Thought: From Fifth Millennium B.C. to the Fifth Century
This chapter describes the main sources and origins of global management thinking over the centuries, from the emergence of the first human civilizations to the early feudalism era. Managerial aspects of the ancient world's management monuments—treatises of thinkers, statesmen, leaders of economies, and social, religious, and military leaders—are disclosed. The objects of comparative analysis of views on economic management were representatives of ancient states—Egypt, Front Asia, China, India, Greece, and Rome. This chapter also briefly describes management ideas in the Old and New Testaments.
Chapter 4. The Emergence and Formation of Management Thought in Russia (Ninth to Eighteenth Centuries)
This chapter examines the genesis, formation, and development of management thinking in Russia from the ninth to the eighteenth centuries. The authors of the ideas here are state and religious figures, academics, and representatives of various Russian categories and classes, including representatives of the nascent third estate. The sources were ancient records and tales, legislation, monographs of scientists and thinkers, archival documents, and memoirs. Of particular interest is Sylvester's treatise "The Domostroy," containing many original ideas of household management. Among the heroes and creators of Russia's management thought of this period are princes, emperors and representatives of imperial families, statesmen, advisers to emperors, and scientists such as Yurij Krizhanich, Ivan Pososhkov, and Michail Lomonosov.
Chapter 6. Western Schools of Management of the Twentieth Century
This chapter introduces the main western schools of management of the twentieth century. In all known works in the history of public opinion, this era is referred to as the era of scientific management. The characteristics of management schools show both their continuity with the management ideas of the past and their fundamentality and paradigmality in terms of future theories and concepts of management. The objects of comparative analysis are both the creators of new concepts, theories, and even schools of management (F. Taylor, A. Fayol, E. Mayo, G. Minzberg, etc.), and the axiomatics and results of the schools of management themselves (the school of "scientific management," administrative school, school of human relations, the roles of managers, etc.).
Chapter 7. Development of the Scientific Basis of Management in the USSR and in Russia in Twentieth Century
This chapter is devoted to the history of Soviet management thought, from the work of the promoters of Taylor's "Scientific management" and H. Fayol's Administrative School of Management to the original work of Soviet academics and practitioners of management on issues of effective management of the planned socialist economy before 1990. The sources were the works of researchers of this period—Russian and foreign scientists, the works of statesmen, as well as the materials of the HMT&B and AOM conferences, the work of the Leningrad School of Management, headed by professors Yuri Lavrikov and Eduard Koritsky. Particular attention in this chapter is paid to the formation and development of the Soviet scientific school of management, headed since the 1960s by Moscow State University professor Gavriil Popov, to a comparative analysis of the works of representatives of this school, as creators of the original concept of "Management as a system" and the original substantiation of "Management as a science."
Chapter 5. Management Thought in Russia in 1800–1917
This chapter of the textbook reflects the development of management thought in Russia in 1800–1917. At this time, the works of M. Speransky appeared; for the first time, cameralist branches were opened at Russian universities; treatises on the management of higher schools and the materials of the trade and industrial congresses (which focused on relevant issues of management) were published; and management reforms led by Russian government officials were implemented. The objects of comparative analysis of views on the management of the Russian state and private economy were representatives of four major socio-political movements that existed in Russia during the period under review: the revolutionary democrats, the populists, the bourgeoisie, and the proletariat. In addition, this chapter discusses various forms of creating and developing management ideas: the opening of commercial and legal schools, special training courses in management, the holding of major all-Russian trade and industrial congresses, etc.
6. Transformational Leadership in New Work Organizations
Leadership is defined as the way of motivating and directing a group of people to jointly work towards achieving common goals and objectives (Helmold & Samara, 2019; Fatma, 2015). The leader is the person in the group who possesses the combination of personality and leadership skills that makes others want to follow his or her direction. Leadership implies formal and informal power distribution. The Tannenbaum–Schmidt Leadership Continuum is a model showing the relationship between the level of authority you use as a leader and the freedom this allows your team (Tannenbaum & Schmidt, 2009). At one end of the continuum are managers who simply tell their employees what to do. At the other end of the continuum are managers who are completely hands off. As you move from one end of the continuum to the other, the level of freedom you give your team will increase and your use of authority will decrease. Most managers and leaders will lie somewhere in the middle between these two extremes. The Leadership Continuum was developed by Robert Tannenbaum and Warren Schmidt in their 1958 Harvard Business Review (HBR) article "How to Choose a Leadership Pattern". Tannenbaum was an organizational psychologist and Professor at the UCLA Anderson School of Management. Schmidt was also a psychologist who taught at the UCLA Anderson School of Management. Most leadership models ringfence a leadership style and analyse it in isolation from other leadership styles. However, in practice, a single leadership style is not appropriate for all situations. Sometimes you might want to borrow elements of another leadership style to use with an individual within your team. Other times you might completely change your style if the situation requires it. Tannenbaum and Schmidt argued that there are certain questions to be considered when selecting a leadership style (Figs. 6.1 and 6.2).
Marc Helmold
13. New Work and Corporate Social Responsibility (CSR)
The term Corporate Social Responsibility (CSR) was used in 1953 by Howard R. Bowen in his book Social Responsibilities of the Businessman and stands for the social responsibility of companies. In his book, Bowen calls for greater consideration of society by the large corporations in the USA (Corporate America), since these corporations have considerable power and, with their economic endeavors, have a major impact on the lives of ordinary citizens (Bowen, 1953). In the decades that followed, the concept of Corporate Social Responsibility (CSR) evolved continuously, initially through the zeitgeist of the social movements in the 1960s, for example through the civil rights movement, the consumer movement, the environmental movement and the women's movements.
1. Introduction to the New Work Concept
Working concepts, styles and behaviour have been undergoing fundamental and structural changes for several years. New Work is the outcome of this transformation and cultural change (Bergmann, 2019). The triggers for this development of New Work are many. Digitization, connectivity and globalization as well as demographic change are among the factors that contribute to the change in the world of work. The question of how companies and societies deal with the megatrend New Work is becoming increasingly important (Bergmann, 2019). The core values of the New Work concept are independence, freedom and participation in the community as outlined by the scientist Bergmann back in the 1980s. In addition to freedom and participation, New Work also integrates elements like liberty or self-esteem, a purposeful profession, development and social responsibility as shown in Fig. 1.1.
Chapter 4. Ecological and Energy Analysis of the Green Areas and the Surface Layer of Atmospheric Air in the Districts of the Kyiv City
In this chapter, the calculation of the variant of the emergetic balance between green areas (natural component) and technogenic components (cost component) of the Kyiv city's districts is carried out. The urboecosystem and its ordering can be considered as a result of plant photosynthesis and the life-sustaining ability of environmental factors together with the consumption of non-renewable resources and the further production of goods and services. Green area energy indicators were calculated for city districts in order to assess the balance of energy consumption and production within a year. Emergy is one kind of available energy that has been previously used directly or indirectly to produce a product or service. The following were used as indicators for the emergetic balance of the expenditure component: consumed energy, calculated in terms of the equivalent fuel, and energy of noise and electromagnetic radiation.
Dmytro Gulevets, Artur Zaporozhets, Volodymyr Isaienko, Kateryna Babikova
Chapter 3. Research of Chemical and Physical Pollution in Kyiv City
In this chapter, a study of the complex influence of physical and chemical factors on the state of the environment is carried out using the example of the Kyiv city. To carry out research on the chemical pollution of the surface layer of the atmospheric air of the Kyiv city, a database was formed according to the data of the Central Geophysical Observatory. Statistical samples of pollutant concentrations are characterized by large volumes and include a complete list of pollutants that were monitored. Full-scale acoustic measurements were carried out in the vicinity of the international airport "Kyiv". For the current traffic intensity, contour areas of equivalent sound levels were calculated for day and night.
Chapter 1. Research of Scientific Bases and Methodologies for Evaluating the State of Ecological Safety in Urban Areas
Ensuring the ecological safety of an urbanized area, namely the development of special measures to maintain the quality of the urban environment at the required level, is a relatively new area of research. The substantive content of the scientific and conceptual basis for the ecological safety of an urbanized territory is considered on the example of the state of quality of the surface layer of atmospheric air, taking into account individual factors of chemical and physical pollution. Determination of the quality of the surface layer of atmospheric air includes the following stages: an inventory of factors and sources of impact; collection and generalization of information on the levels of exposure to adverse factors on the population and the environment; determination of points or zones within the residential area, for which it is necessary to reduce man-made impact.
Chapter 11. Conclusions
Francisco Eduardo Beneke Avila
Chapter 5. Determinants of the Investment Rate in the Economy and Their Relevance in Entry Analysis
The issue of economic growth has been the focus of great attention in the economics literature for a long period of time. Based on the model developed by Solow, neoclassical growth theory predicts differences in growth rates based on, among other factors, a country's propensity to invest. Levine and Renelt find empirical support for this prediction. As a result, there have been numerous works that seek to identify the determinants of investment rates. From the myriad of variables that have been studied, there have been efforts to identify the ones that are more robustly associated with economic performance. This literature has a special value for antitrust law and economics because the factors that influence a firm's decision to invest are directly related to the decision of entering a market. Not taking them into account can lead to error costs in enforcement interventions.
Chapter 9. Comparative Analysis: Do Latin American Countries Follow the EU or US Standards for Entry Analysis?
In this chapter, the standards of entry analysis between the sample of Latin American countries, on one hand, and the EU and the US, on the other, will be compared. The analysis will be both positive and normative. First, the similarities and differences between the jurisdictions have to be established before one can make a conclusion of whether the relative points of view suit the different social and economic circumstances of the countries. Specifically, as will be seen, US courts seem to be more prone to dismiss cases on the basis of low entry barriers than the European Commission and the EU judiciary. Therefore, the comparative analysis will establish which rules of entry analysis have influenced the selected group of Latin American authorities and to which degree this influence is justified.
Chapter 10. How to Incorporate the Country Characteristics Studied in the Present Research Into Entry Analysis
Throughout the present work, some consideration has already been made regarding the relevant factors that antitrust authorities can take into account when incorporating into entry analysis the country characteristics under study. In this chapter, these considerations will be presented in a more in-depth and systematic way in order to provide a guideline on how the assessment can be performed.
Chapter 8. Entry Analysis in Abuse of Dominance Cases
This chapter presents the analysis of recent representative abuse of dominance cases in the selected Latin American countries. The present work will focus on the Latin American administrative authorities' decisions rather than on their judicial review. Since the present work is concerned with highly specific factors regarding entry analysis, the antitrust agencies' criteria provide a richer source of reasoning on specific barriers to entry. Higher courts usually rule on more general matters of law and, therefore, it is less likely to come across passages with reasoning that is of interest to the present research.
Chapter 4. Dynamic Aspects of Entry
This chapter deals with the theoretical foundations of dynamic aspects of market entry—adjustment costs and real options—in order to provide a structured framework for the analysis of country characteristics covered in the next chapter.
Study and Design Conceptualization of Compliant Mechanisms and Designing a Compliant Accelerator Pedal
Compliant mechanisms rely on the elastic deformation of a material to transfer and/or amplify an input displacement into a desired output displacement. They are highly preferred in applications demanding friction-less and backlash-free motion with high precision. In this paper, the accelerator lever arm and the torsional spring are replaced by an equivalent distributed compliant mechanism. In this mechanism, the flexibility of the lever arm eliminates the need for a torsional spring and additional bolted parts in the assembly. The traditional accelerator pedal of a passenger car consists of a stationary mount and a lever arm, loaded by a torsional spring, which is directly linked to the accelerator cable operated by the driver. The compliant accelerator pedal can be fabricated as a single monolithic piece of polypropylene using a 3D printing technique. The CAD model of the assembly is modelled in SolidWorks and simulated in ANSYS. The mathematical model of the accelerator pedal is developed and simulated using MATLAB. The nonlinear model shows that the necessary displacement can be achieved with precise control over the pull of the accelerator cable without compromising the automobile ergonomics. This mechanism is validated and used as an accelerator pedal in an all-terrain Baja vehicle designed and fabricated at the university.
Harshit Tanwar, Talvinder Singh, Balkesh Khichi, R. C. Singh, Ranganath M. Singari
Cloud of Things Assimilation with Cyber Physical System: A Review
Cloud of Things (CoT) provides a smart, intelligent platform capable of performing virtually limitless computations by using resources in an optimized fashion. With the advent of social networks, modern-day society relies on advanced computing technologies and communication infrastructures to share real-world events. Here, typical cyber-physical systems (CPSs) come into play: they tightly couple computation with the physical world by integrating computations with physical processes. The linkage between the computational resources and the physical systems is achieved through sensors and actuators. CoT can play a crucial role in extending the capabilities of CPSs. While this integration is still taking shape, future CPSs can be adopted dynamically in different domains such as healthcare, manufacturing, disaster management, agriculture, and transportation. The lack of overall architectural awareness provides ample space and motivation for academia and industry to become involved in further studies. In this chapter, a comprehensive literature study of CoT- and CPS-oriented architectures is presented, which will act as a catalyst to improve research efforts and the understanding of tools and techniques. It also presents the current research openings in this area. The study is intended to look beyond the current trends in architectures. The major contribution of this study is to summarize CoT- and CPS-oriented critical infrastructures with respect to cutting-edge technologies and design considerations, and their overall impact on the real world.
Yashwant Singh Patel, Manoj Kumar Mishra, Bhabani Shankar Prasad Mishra, Rajiv Misra
Chapter 18. Attachment 1: BHP DLC
On June 29, 2001, BHP Limited and Billiton Plc completed the formation of a Dual Listed Companies structure, or DLC. To effect the DLC, BHP Limited and Billiton Plc entered into certain contractual arrangements which were designed to place the shareholders of both companies in a position where they effectively had an interest in a single group that combined the assets, and was subject to all the liabilities, of both companies. BHP Billiton Limited and BHP Billiton Plc each retained their separate corporate identities and maintained their separate stock exchange listings. BHP Billiton Limited had a primary listing on the ASX and secondary listings in London, Frankfurt, Wellington, Zurich and, in the form of ADSs, on the New York Stock Exchange. BHP Billiton Plc had a primary listing in London and secondary listings in Johannesburg and Paris.
Don Argus, Danny Samson
Chapter 3. Organisational Governance
In this chapter we point out that effectiveness in corporate governance can contribute to an organisation and its outcomes from top to bottom and end to end, being a widely applicable set of sound business practices that go well beyond a compliance approach. Like many other aspects of what makes for a great organisation, effective corporate governance starts at the top, meaning the board and its directors. We outline the capabilities and characteristics of an effective board and director, and some general principles for boards and directors to consider and use as guidance, in their actions and contributions in activities ranging from CEO selection to board and organisational performance management.
Chapter 8. NAB (C): Banking in Australia, NAB's Track Record and Trajectory
In Australia, NAB's retailing activities were principally conducted through its Australian Financial Services business unit, which provided a full range of financial services to over three million customers across all segments. It was in its Australian business that NAB established a strong credit risk management discipline that distinguished it in the industry. During the period up to 2000, its performance reflected that differentiation.
Chapter 14. BHP(F): Mergers and Acquisitions
One of the biggest days in the 130-year history of BHP was 29 June 2001, when the merger was formalised with Billiton Plc. The DLC structure meant that the combined entity was to be operated, even though it involved distinct organisations, to create the best overall outcomes for the aggregate of those entities, which would share the returns, through equal dividends. Although not formally a single entity, the business of the two entities were to be operated as if they were one. The group restructured assets from both previously separate businesses into seven business units called Customer Sector Groups, each with clear financial and operating goals and responsibilities. The group's global footprint had expanded and diversified, including to Pakistan, Gulf of Mexico, South Africa, Chile and many other regions.
Chapter 6. NAB (A): Banking and Financial Services, 1960–2020
The changes in strategy, leadership approach, governance, services, technologies and almost every other aspect of business life make the banking/financial services industry a most interesting one to examine with the wisdom of hindsight. Lessons can be effectively learned in all these realms from past successes and mistakes in this sector. The recent Royal Commission and scandals such as the alleged 23 million breaches at Westpac reported in late 2019 make leadership, governance and strategy in the financial sector a very much live issue, with very many challenges to overcome.
Chapter 17. BHP(I): Environment
The challenge for mining companies is to find, extract and process mineral resources with the least possible disruption to the environment. Meeting this challenge requires the adoption of a broad range of protective measures, including: sensitive treatment of land during exploration; environmental and aesthetic management of land under development; environmentally sustainable production procedures during the mining and metallurgical processes and of course decommissioning and reclamation practices aimed at restoring the land.
Chapter 4. Corporate Social Responsibility
In this chapter we outline and review the modern approach to corporate social responsibility and compare it to the classical approach, using many examples from around the globe to acknowledge the progress being made by businesses and governments, while still acknowledging that this element of strategic leadership is not fully mature in most organisations. Stakeholders are increasingly turning their attention to non-financial organisational outcomes, which for leaders mean that they must face into the challenges of tradeoffs in satisfying those stakeholders, while seeking to formulate strategies that are win-win, across dimensions of 'People-Planet-Profit', that will see their organisations well positioned in the medium and longer terms. We cite many corporate examples of how CSR work and activities are rapidly becoming mainstream and indeed core to organisations' work, including resource allocation and other strategic decisions. Noting the prevalence of published corporate CSR and sustainability reports, it is noted that a broader stakeholder approach and longer-term sustainable development initiatives are providing advantage to many organisations in attracting talented employees, customers and investors, more than ever before.
Chapter 9. BHP (A): 'The Big Australian' Overview and Strategic Roots
The Broken Hill Proprietary Company Limited (BHP) is one of the leading businesses in Australia. It was the fifth biggest company after World War 2 (WW2) when the Bank of New South Wales was the biggest. It grew to become the largest in later years. At the time of writing, it is still the backbone of Australian industrial development. It is a very successful and resilient company; and is now the largest mining company in the world. Insight into its values, strategy and structure, the strategic issues it has faced, and its strategic decisions are instructive and worthy of understanding, thinking about and remembering by students of business.
Chapter 11. BHP (C): Minerals
BHP commenced operations in 1885 as a miner of silver, lead and zinc, and later expanded its principal mineral interests in order to satisfy the majority of the raw material requirements of its steel operations. Those mineral interests have since been further developed, and the Company managed large mining operations, including joint ventures in a number of foreign jurisdictions. BHP Minerals produced iron ore, coking coal and manganese ore in Australia, copper concentrate and gold in Chile, Papua New Guinea and Peru, copper metals in the United States and energy coal in the United States and Australia. BHP also had a 49% interest in an iron ore project in Brazil. It became a major global player in this sector, with strategic leadership being critical to its outcomes.
Chapter 19. Brambles: Dual Listed Company Structures
The investment banking, legal and accounting fraternities spend an extraordinary amount of time and effort developing ideas to pitch to clients—ideas for mergers, corporate reconstructions and financial products: they pitch these with enough detail to make them sound interesting, but also with enough complexity to ensure the client can't work it out alone. They hope of course to earn tidy fees if the client decides to press ahead. In reality, the client often ends up doing a lot of the leg work if they do proceed—the devil is usually in the detail—but the fact remains that few companies are set up or adequately resourced to develop sophisticated structural ideas or financial solutions on their own, nor in our view should they be. This is where the banking and professional firms shine, and can add significant value.
Chapter 1. Leadership
This chapter provides an overview of the impact of leadership on organisations and their performance outcomes, illustrated starkly with data from National Australia Bank (NAB), where we worked, and other organisations. We then derive, from our experience and from more general knowledge and examples, a set of specific characteristics, often referred to as leadership traits elsewhere, that are the key components of effective leadership. We acknowledge that such traits need to be adjusted for the contingencies of different situations yet argue that sound leadership has these basic characteristics in common, albeit customised. These characteristics can be developed, and for developmental guidance we state a set of 'leadership axioms' that have sound conceptual foundation, and practical value in helping developing leaders to clearly envisage and put into practice an answer to the key question about leadership: 'What works?'
On Data-Driven Approaches for Presentation Attack Detection in Iris Recognition Systems
With the development of modern machine learning-based techniques for accurate and efficient classification, the paradigm has shifted to automatic, intelligence-based methods. Iris recognition systems constitute one of the most reliable human authentication infrastructures in contemporary computing applications. However, the vulnerability of these systems is a major challenge due to a variety of presentation attacks which degrade their reliability when adopted in real-life applications. Hence, to combat iris presentation attacks, an additional process called presentation attack detection (PAD) is integrated within iris recognition systems. In this paper, a review of modern intelligent approaches for iris PAD mechanisms is presented with a special focus on data-driven approaches. The presented study shows that machine learning-based approaches provide better classification accuracy than conventional iris PAD techniques. However, one of the open research challenges is to design robust intelligent iris PAD frameworks with cross-sensor and cross-database testing capabilities.
Deepika Sharma, Arvind Selwal
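As a concrete, hedged illustration of the data-driven PAD idea discussed above (not code from the paper), the sketch below trains a binary bona fide vs. attack classifier on synthetic stand-ins for texture features; the feature dimension, the distributions and the SVM settings are all assumptions.

```python
# Minimal sketch of a data-driven iris PAD pipeline (illustrative only).
# Feature vectors stand in for texture descriptors (e.g. LBP histograms)
# extracted from normalized iris images; labels: 0 = bona fide, 1 = attack.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n, d = 400, 59                    # sample count and feature dimension (hypothetical)
bona_fide = rng.normal(0.0, 1.0, (n // 2, d))
attacks = rng.normal(0.6, 1.2, (n // 2, d))   # printed/contact-lens artefacts shift texture statistics
X = np.vstack([bona_fide, attacks])
y = np.array([0] * (n // 2) + [1] * (n // 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)   # binary PAD classifier
print(classification_report(y_te, clf.predict(X_te), target_names=["bona fide", "attack"]))
```

A cross-sensor or cross-database evaluation, the open challenge noted above, would replace the random split with training on data from one sensor or database and testing on another.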
Data Ingestion and Analysis Framework for Geoscience Data
Big earth data analytics is an emerging field, since the environmental sciences stand to benefit from its techniques for processing the enormous volume of earth observation data acquired and produced through observation, and from the large storage and computing capacities it provides. However, big earth data analytics requires specifically designed tools to address the relevance of geospatial information, the complexity of processing, and the wide heterogeneity of data models and formats [1]. A data ingestion and analysis framework for geoscience data concerns extracting data into the system and processing it for change detection, and increasing interoperability with the help of analytical frameworks that facilitate understanding of the data in a systematic manner. In this paper, we address the challenges and opportunities in climate data through the Climate Data Toolbox for MATLAB [2] and show how it can help resolve various climate-change-related analytical difficulties.
Niti Shah, Smita Agrawal, Parita Oza
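The paper works with the Climate Data Toolbox for MATLAB; purely as an analogous sketch in Python (an assumption, not the authors' implementation), the snippet below ingests a small synthetic gridded temperature cube with xarray and derives a change-detection signal as the anomaly against the monthly climatology.

```python
# Illustrative ingestion-and-analysis sketch: build a small gridded temperature
# cube, then detect change as each month's anomaly relative to the climatology.
import numpy as np
import pandas as pd
import xarray as xr

time = pd.date_range("2000-01-01", periods=240, freq="MS")   # 20 years, monthly
lat, lon = np.arange(-60, 61, 30), np.arange(0, 360, 60)
rng = np.random.default_rng(1)
trend = np.linspace(0.0, 1.0, time.size)[:, None, None]      # synthetic warming signal
temp = 15 + trend + rng.normal(0, 0.5, (time.size, lat.size, lon.size))

ds = xr.Dataset({"t2m": (("time", "lat", "lon"), temp)},
                coords={"time": time, "lat": lat, "lon": lon})

climatology = ds["t2m"].groupby("time.month").mean("time")    # monthly climatology
anomaly = ds["t2m"].groupby("time.month") - climatology       # change-detection signal
print(anomaly.mean(("lat", "lon")).isel(time=-1).item())      # global-mean anomaly, last month
```

In a real workflow the synthetic cube would be replaced by `xr.open_dataset` on the ingested NetCDF files, with the rest of the analysis unchanged.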
A Low-Power Hara Inductor-Based Differential Ring Voltage-Controlled Oscillator
The voltage-controlled oscillator (VCO) is one of the most important components in wireless and communication systems. In this paper, a four-stage low-power differential ring voltage-controlled oscillator (DRVCO) is presented. The proposed DRVCO is designed using a new differential delay cell with a dual delay path and a Hara inductor to obtain a high-frequency VCO with low power consumption. Results have been obtained at a supply voltage of 1.8 V using a 0.18 µm TSMC complementary metal oxide semiconductor (CMOS) process. The tuning range of the proposed VCO varies from 4.6 to 5.5 GHz. The VCO consumes about 5–10 mW over a control voltage variation of 0.1–1.0 V. At an offset frequency of 1 MHz, the proposed circuit achieves a phase noise of −67.9966 dBc/Hz, and its figure of merit is −135 dBc/Hz.
Misbah Manzoor Kiloo, Vikram Singh, Mrinalini Gupta
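As a point of reference for the reported numbers, the figure of merit quoted for ring VCOs is usually the standard power- and frequency-normalized expression below; the abstract does not spell out its exact definition, so this is an assumption, but plugging in the reported phase noise, a roughly 5 GHz carrier and about 5 mW of power reproduces a value close to the stated −135 dBc/Hz:

$$\mathrm{FoM} = L(\Delta f) - 20\log_{10}\!\left(\frac{f_0}{\Delta f}\right) + 10\log_{10}\!\left(\frac{P_{\mathrm{DC}}}{1\,\mathrm{mW}}\right) \approx -68 - 20\log_{10}\!\left(\frac{5\,\mathrm{GHz}}{1\,\mathrm{MHz}}\right) + 10\log_{10}(5) \approx -135\ \mathrm{dBc/Hz}.$$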
Chapter 2. The Dialectics of European Integration
This chapter focuses on European identity politics, both in an administrative-institutional sense (EU, nation-state, regional autonomy, ethno-national minority parties) as well as in terms of a vehicle of public dissent and a tool of grassroots mobilization. It shows how a specific type of identity construction based on normative discourses of Enlightenment values, liberal democracy and human rights has turned into a legitimating narrative of European integration and enlargement. Despite the rejection of nation-centric politics and the commitment to a Europe of unity in diversity, this form of post-national universalism not only brackets and seeks to homogenize what is essentially and increasingly so a Europe of plural and contentious voices, but also construes newly objectified and essentialist forms of European societal and political cleavages.
Christoph M. Michael
Chapter 3. Citizenship in a Post-migrant Europe: Socio-Political Cohesion at Breaking Point?
This chapter suggests that political and administrative elites driving European integration may have underestimated—especially so in the post-Cold War era—the resilience and mobilizing force of the national(ist) idiom. Post-national and cosmopolitan paradigms of democratic politics—despite the de-territorialization of sovereignty—were clearly outpaced by national identity politics and popular opposition to immigration. The discussion further shows how debates on asylum and immigration impacted on debates of social cohesion and the integrative functions of citizenship. This does concern, above all, the question of whether there are any real prospects for European citizenship beyond a thin reality of legal nominalism. How this question is answered will not only have direct implications on EU political and social cohesion but on the sustainability of the European project as such.
Chapter 5. The Integration Paradox: Culturalizing Belonging at the End of the "Multiculturalist Era"
This chapter expands on previous arguments by analyzing discourses on the alleged failure of multiculturalism in Europe and the increasing culturalization of mainstream politics. It argues that this not only presents an integration paradox but, in a much more fundamental sense, also entails a redefinition of the basis of European liberal democracy. In a sustained theoretical reflection, the chapter argues against conceptions of liberalism that aim to invisibilize problems of cultural accommodation within a sanitized discourse of individual rights. Its core purpose thus concerns—on a theoretical level—a way of turning the experiences of pervasive pluralism and large-scale immigration into emancipatory sources of liberal democracy rather than into driving forces of its erosion.
Sensitivity of Damping for Diagnostics of Damage in Structure
Published research on the sensitivity of the damping capacity of specimens and structures as applied to the diagnostics of damage is quite conflicting. Some researchers found the sensitivity of damping capacity high enough for reliable diagnostics of damage, while others concluded that the change of damping due to damage is negligibly small. In the presented study this contradiction is attributed to the fact that the change in damping capacity of damaged structures is a function of a great number of factors. The influence of these factors on the change of damping of a damaged structure was investigated with the developed fracture mechanics-based procedure. In this way, the sensitivity of damping for the diagnostics of a crack in a beam-like structure was investigated for the cases of bending and axial vibrations. In particular, it was revealed that the sensitivity of damping to a crack depends on the structure's stiffness and on the damping capacity of the structure in the undamaged state. In addition, the stress intensity in the damaged area should be sufficiently high to induce dissipation of vibration energy. As a result, a simple formula was developed to estimate the sensitivity of damping capacity as applied to damage detection.
A. Bovsunovsky, E. Soroka
Crowd Management for Power Generation: A Critical Analysis on the Existing Materials and Methods. (Structural Modal Analysis)
Energy harvesting by means of different materials and mechanisms has been an important topic of interest over the past decades. Materials that are piezoelectric, electromagnetic or electrostatic in nature are generally used to harvest energy in many sensing applications. Mechanisms based on simple deformation, vibration and magnetism are used to harvest energy in power generation applications. The published research shows that the energy harvested from these materials and mechanisms is still far from practical feasibility and optimisation. However, no review article is available that provides a critical analysis of the existing materials and methods with regard to this feasibility and optimisation. In this paper, a review attempt has been made to provide such an analysis. Past research is described in two categories: energy harvesting through materials and energy harvesting through mechanisms. The materials used for energy harvesting have the characteristic of releasing electric charge under the influence of an external excitation. Several materials such as Multiwalled Carbon Nanotubes (MWCNT), Polyvinylidene Fluoride (PVDF), Polydimethylsiloxane (PDMS) and Lead Zirconate Titanate (PZT), with slight differences in their internal properties and efficiencies, are used to harvest charge. In contrast to the materials, several mechanisms are also in use to produce useful energy from available external forces. Their mechanics is principally based on phenomena such as structural vibrations, electromagnetic induction and magnetism. This review concludes that an energy harvesting methodology that can utilise any random load and convert it into maximum useful energy is still not available.
Abdulaziz O. Alnuman, Muhammad A. Khan, Andrew Starr
D-Beam Theory for Functionally Graded Double Cantilever Beam Analysis
A formulation that takes advantage of both Layer-Wise and Equivalent Single Layer approaches for modelling Functionally Graded beams is presented. This alternative formulation, referred to as D-Beam, is applied here to model the Double Cantilever Beam specimen in order to estimate the relevant fracture opening mode of metallic graded beams produced by additive manufacturing. Numerical results are presented.
Calogero Orlando
Depreciation Accounting in Longevity Evaluation of Complicated Systems
Consideration is given to systems which consist of nonrepairable expendable items and restorable items operating in a cyclic mode. The durability of the latter, along with no-failure operation, maintainability and storage qualities, is characterized by longevity, i.e. the property of an object to ensure safe operation until the onset of the ultimate limit state, provided the established maintenance and repair system is used. Service durability also directly depends on factors such as human errors leading to unintended results and dependent failures caused by latent faults in the system. The most useful technique is setting an operating life, i.e. the number of operating cycles which the object should perform over its service life; this requires a comprehensive description of the operating cycle, the environmental conditions and the qualifications of personnel. In the theoretical assessments describing the fatigue resistance and depletion of materials, the Weibull-Gnedenko distribution is used. When carrying out tests, if excessive wear of a component part is revealed (against the safety criteria), the deterioration rate of the devices and units most subject to wear is to be checked, along with the maintenance work prescribed in the article's maintenance documentation. Based on all this information and analysis, the composition of the SPTA sets is determined from the specified values of system reliability. Matters related to depreciation accounting in the longevity evaluation of complex systems have been addressed, and key approaches to solving the emerging problems have been outlined.
B. Avotyn', A. Smirnov, B. Belobragin
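For the Weibull-Gnedenko model mentioned above, a minimal sketch of how an operating life in cycles might be assigned is shown below; the shape and scale parameters are hypothetical, and the formulas are the standard two-parameter Weibull expressions rather than values from the chapter.

```python
# Hedged illustration: two-parameter Weibull (Weibull-Gnedenko) reliability model
# for setting an operating life in cycles. Shape beta and scale eta are assumed.
import numpy as np

beta, eta = 2.2, 50_000          # shape (wear-out regime) and characteristic life in cycles

def reliability(cycles):
    """Probability of surviving the given number of operating cycles."""
    return np.exp(-(cycles / eta) ** beta)

def life_for_reliability(target):
    """Operating life (cycles) at which reliability falls to the target value."""
    return eta * (-np.log(target)) ** (1.0 / beta)

print(reliability(20_000))            # survival probability at 20,000 cycles
print(life_for_reliability(0.95))     # assigned life for a 95% reliability requirement
```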
A Study On Optimal Design of Longitudinal Shape For Improving Small-Overlap Performance
This paper presents a study on the optimal design of longitudinal shape for improving small-overlap performance, based on a computer-based crash simulation model. The small-overlap frontal impact (SOFI) event was simulated using the explicit finite element method. The models were developed for the simulation according to the Insurance Institute for Highway Safety (IIHS) real test conditions, with a flat 150 mm radius rigid barrier and 25% overlap. Several different cross sections of longitudinal members were subjected to the dynamic compression load which occurs in small-overlap frontal impact. The different shapes of longitudinal structure were compared initially to obtain the cross section that fulfils the small-overlap performance criteria. The evaluated performance parameters included the absorbed crash energy, crush force efficiency, ease of manufacture and cost. Once the cross section was selected, the design was further enhanced for better crashworthiness performance by investigating the effect of material characterization, increasing the wall thickness and introducing a trigger mechanism. Real experiments were also performed for …. The results of this study showed that the multi-edge profile with 2 mm wall thickness and steel material was a good candidate for energy absorption under SOFI conditions.
Nguyen Phu Thuong Luu, Ly Hung Anh
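As a hedged illustration of two of the performance parameters named above, the snippet below post-processes a synthetic force-displacement curve: absorbed energy as the integral of force over crush displacement, and crush force efficiency as the ratio of mean to peak force. The curve itself is invented for the example, not taken from the paper's finite element results.

```python
# Illustrative post-processing of a simulated axial crush of a longitudinal member.
import numpy as np

disp = np.linspace(0.0, 0.25, 500)                      # crush displacement [m]
rng = np.random.default_rng(2)
force = 120e3 * (1 - np.exp(-disp / 0.01)) + rng.normal(0, 5e3, disp.size)  # axial force [N]

# Absorbed energy E = integral of F dx, evaluated with the trapezoidal rule.
energy_absorbed = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(disp))
mean_force = energy_absorbed / disp[-1]                 # average crush force [N]
peak_force = force.max()
crush_force_efficiency = mean_force / peak_force        # CFE = F_mean / F_peak

print(f"Energy absorbed: {energy_absorbed / 1e3:.1f} kJ, CFE: {crush_force_efficiency:.2f}")
```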
Counterbalancing Asymmetric Information: A Process Driven Systems Thinking Approach to Information Sharing of Decentralized Databases
This paper explores asymmetric information and how to counterbalance it. It utilizes the case study of a hypothetical company called "Hashable". The purpose of this case study is to exemplify a proposed solution to address the information asymmetry faced by buyers of residential real estate in New Zealand. A procedural response is provided for organizing the information needed to make an informed decision on purchasing a property. A causal loop diagram is introduced to develop an understanding of the various stakeholders involved in the proposed solution and their interaction with the information they provide. This paper highlights the core problems regarding information asymmetry within a transaction. It also provides procedural and technological solutions to counterbalance this information asymmetry while simultaneously reducing information costs and increasing reliability of the information provided.
Mark Hoksbergen, Johnny Chan, Gabrielle Peko, David Sundaram
Can Blockchain Fly the Silver Fern?
Exploring the Opportunity in New Zealand's Primary Industries
Blockchain is an emerging technology perceived as ground-breaking. Yet technology service providers are not realising the untapped market potential as quickly as was predicted. New Zealand is no different. Currently, the number of blockchain-based solutions available in the country is rather limited. A clear understanding of the market for blockchain is critical for service providers to recognise the opportunities and the challenges. It has been suggested that multiple industries could utilise blockchain technology to attain numerous benefits. The primary industries of New Zealand are among those that remain underexplored. Therefore, in this study, we use total addressable market (TAM), a technique to estimate market size, to explore the available economic opportunity for blockchain-based solutions in New Zealand's primary industries. Our estimation suggests that it may be close to NZ$1.65 billion per year including self-employed enterprises, or NZ$496 million per year excluding self-employed enterprises. In addition, our review of secondary sources indicates that blockchain technology could tackle some of the challenges the primary industries are facing, such as food fraud and foodborne illness. However, the lack of strong and practical use cases, lack of streamlined practice for data management, lack of understanding of the technology and its implications for business, and lack of regulation and legislation are the major impediments to blockchain adoption.
Mahmudul Hasan, Johnny Chan
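The TAM arithmetic behind such an estimate can be sketched as below; the segment names, enterprise counts and per-enterprise spend figures are placeholders for illustration only and do not reproduce the study's NZ$1.65 billion or NZ$496 million results.

```python
# Back-of-the-envelope TAM sketch (illustrative figures only, not the study's data):
# TAM = sum over segments of enterprise count x assumed annual spend per enterprise.
segments = {
    # segment          (enterprises, assumed NZ$ spend per enterprise per year)
    "dairy":           (11_000, 6_000),
    "horticulture":    (14_000, 4_000),
    "seafood":         (1_500, 8_000),
    "meat and wool":   (23_000, 3_000),
}

tam = sum(count * spend for count, spend in segments.values())
print(f"Estimated TAM: NZ${tam / 1e6:.0f} million per year")
```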
1. Difficult Times Call for New Approaches
This chapter conveys to the reader that economic crises as such are nothing new. The industry-independent Covid-19 pandemic, however, has a distinctive global effect, and this in the midst of a digital upheaval. Depending on the further course of the pandemic, different post-pandemic future scenarios are conceivable against this background. As if that were not enough, the legal demands on you as a service provider continue to rise, for example with the new corporate criminal law and its financial consequences. To meet these challenges, you must not only be good at what you do; you must also know the right methods with which you can keep bringing your services to market in the best possible way. To that end, you also need to understand the current information, working and consumption channels of your (potential) customers, factors we have prepared for you on the basis of exclusive statistics.
Anette Schunder-Hartung, Martin Kistermann, Dirk Rabis
3. Further Considerations
From this chapter you will learn that working with SAM, i.e. the systematic, agile and multimedia approach, is a dynamic process. Structured and flexible working do not exclude each other; on the contrary, the one forms the scaffolding for the other. In agile working in particular, the human side, including a corresponding canon of values and attitudes, is of fundamental importance, as a detailed practical example shows. Agility and digital applications are also closely interlinked. And finally: now is the time for mobile conferences, for which you will receive numerous practical tips on the dos and don'ts.
Russia's Role in the Consolidation of the Central Asian Elites
Three decades after the collapse of the Soviet Union, many questions still arise in the area. These questions affect not only the functioning and future of the newly independent republics, but even their very identity. The struggle between, on the one hand, the cohesion of national states and, on the other hand, a strong identification with Russian history and culture and their shared bonds often leads to political clashes and social and cultural confusion. In this scenario, especially visible in Central Asia, the local elites have a strong influence. And Russia, as the regional leader and the centre of that common historical space, remains the main reference point in the area.
Francesc Serra-Massansalvador
A Model for the Assessment of the Water Footprint of Gardens that Include Sustainable Urban Drainage Systems (SUDS)
The limitations presented by traditional urban water cycle systems, which are linearly designed, highlight the need to develop new technologies within a new circular strategic approach. In order to quantify the improvements, new methodologies are needed that integrate indicators assessing direct and indirect water consumption, as well as the origin of the water consumed and the incorporation of grey water and rainwater. The proposed methodology provides quantitative data, in terms of water, for calculating the payback period of the new circular systems, comparing conventional installations with new Sustainable Urban Drainage Systems (SUDS), which are proposed as alternatives to optimize urban metabolism by improving water infiltration. The water footprint (WF) indicator is adapted to the construction sector and allows direct and indirect consumption to be quantified. A first approximation is made to evaluate the impact of urban water cycle systems. To this end, three possible scenarios are modelled: a conventional system and two with SUDS but different gardens, one with autochthonous vegetation and the other with ornamental vegetation with greater water requirements. Through this quantification, the amortization period is analysed in terms of water, considering the reduction of direct water consumption achieved with the SUDS as compared to the conventional systems, and the consumption of indirect water embedded in the materials necessary for the execution of the systems. The SUDS implementation works require approximately twice as much indirect water as conventional systems, due to the improvements in the terrain necessary for the proper functioning of these eco-efficient systems. This study, together with the technical and economic evaluation, allows the viability of the SUDS to be analysed and contributes quantitative data to the decision-making phase for the future incorporation of this type of eco-efficient system into urban networks.
The results of an urban space renovation project applying water-sensitive urban design techniques are shown by evaluating the nature of the materials to be incorporated in the work, the hydrological design of the project, its suitability for the urban environment and its capacity to adapt to future scenarios, evaluating both direct and indirect water. Likewise, the WF calculation developed by Hoekstra and Chapagain, generally applied to the agricultural sector, is adapted to estimate the water balance of urban systems with green areas. The methodology incorporates local biophysical, climatic and temporal data, together with the specific data of the project, to calculate water consumption in urban areas derived from the re-naturalization of urban spaces, which has been little explored until now, and to provide a measurable indicator for quantifying economic and environmental impacts, applicable to the construction sector. In the analysis of the results, it is worth highlighting that the scenarios incorporating water-sensitive urban design technologies present higher WF values for the materials and execution of the works (increased by 1.7 times) than a project in which these design technologies are not applied. The saving of water resources during the use and maintenance phases is 82% per year. The balance means that, at the end of the life cycle, 66% less WF is accumulated and the amortization of the infrastructure, in terms of water, occurs in year 4.
Mª Desirée Alba-Rodríguez, Rocío Ruíz-Pérez, M. Dolores Gómez-López, Madelyn Marrero
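A minimal sketch of the water-payback logic described above is given below; the absolute volumes are assumptions, only the 1.7x indirect-water ratio and the 82% annual saving echo the reported figures, so the computed payback matches the study's year-4 result only if the study's actual volumes are used.

```python
# Hedged water-payback sketch: extra embodied (indirect) water of the SUDS works
# divided by the annual saving in direct water use. Volumes are placeholders.
conventional_indirect = 1_000.0                      # embodied water of conventional works, m3 (assumed)
suds_indirect = 1.7 * conventional_indirect          # SUDS works need ~1.7x the indirect water

conventional_annual_use = 500.0                      # direct water use per year, m3 (assumed)
suds_annual_use = (1 - 0.82) * conventional_annual_use   # 82% annual saving with SUDS

extra_indirect = suds_indirect - conventional_indirect
annual_saving = conventional_annual_use - suds_annual_use
payback_years = extra_indirect / annual_saving
print(f"Water payback: {payback_years:.1f} years")
```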
Chapter 18. Asset Basic Care
Aim: To describe the features and organizational structures used in caring for assets, from good housekeeping through to basic maintenance. To consider performance monitoring and recording and continuous improvement techniques. Outcomes: After reading this chapter you will understand the importance of good housekeeping and basic maintenance and how to organize the workplace so that these are well managed. You will be aware of the continuous improvement cycle and techniques associated with it. Topics: Introduction; Total productive maintenance (TPM) or asset basic care; Workplace tidiness, standards and training; Machine knowledge; Basic observation and action; Japanese guide words; Basic maintenance; Accreditation of workers; Performance recording; Test analyze and fix; Continuous conformance; Continuous improvement; Deming wheel; Fishbone diagram; Improvement coordinator; Summary and benefits.
Nicholas Anthony John Hastings
Chapter 14. Cost–Benefit Analysis
Aim: The aim of this chapter is to discuss situations where the benefits of some or all of the activities are not measurable in terms of direct financial returns. We then indicate how asset management decisions may be approached in these cases. Outcomes: After reading this chapter you will be aware of those areas where benefits are wholly or partly of a nature which is not readily quantifiable in financial terms. You will learn about how to approach problems of this type and how to carry out cost–benefit analysis using a planning balance sheet. Topics: Definition; Non-financial benefits; Cost–benefit analysis outline; Needs and wants; User pays principle; Measures of benefit; Cost–benefit analysis steps; Activity-based cost–benefit example; Planning summary sheet; Regional health clinics exercise; Cost–benefit spider diagram.
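Where benefits can be monetized, even partially, the planning balance sheet is typically accompanied by discounted measures; as a hedged illustration (a standard formulation, not necessarily the chapter's own notation), with benefits $B_t$, costs $C_t$, discount rate $r$ and horizon $T$:

$$\mathrm{NPV}=\sum_{t=0}^{T}\frac{B_t-C_t}{(1+r)^t},\qquad \mathrm{BCR}=\frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t},$$

with a project normally warranting further consideration when the BCR exceeds one; non-financial benefits then sit alongside these figures in the planning balance sheet.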
Chapter 15. Risk Analysis and Risk Management
Aim: This chapter introduces the concept of risk, gives references to standards and major documents which deal with risk and defines terms relating to risk. The procedures in the management of risk are then outlined, and the legislative approach to risk is discussed and illustrated by an example. Various types of risk are described, and hazard analysis and the assessment of consequences are discussed. Factors in mitigating risk and contingency planning are presented. The chapter continues with an example of risk analysis and management in a water supply system. Risk considerations in safety critical plant are addressed in the separate chapter under the title of "Safety." Outcomes: After reading this chapter you will know how to analyze and treat risk. This will include an awareness of the legal approach based on meeting duty of care and regulatory obligations and the analysis and assessment of risk in relation to projects and to plant and machinery. You will be aware of techniques of hazard analysis, the assessment of consequences, the use of contingency allowances and of methods of mitigating risk. You will have seen how risk analysis was used in a water supply system application. Topics: Introduction; Definitions; Introduction to risk; Legislative approach; Management of risk; Water supply system example; Risk identification; Risk register; Risk analysis and treatment; Consequences; Likelihood reduction; Risk treatment; Contingency planning; Types of risk; Quantitative risk analysis; Risk-cost; Risk rating; Risk matrix; Project risk examples.
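As a generic illustration of the risk rating and risk matrix topics listed above (the chapter's own scales and bands may differ), a rating can be formed as the product of ordinal likelihood and consequence scores and then banded:

```python
# Generic risk-rating sketch: score = likelihood x consequence, banded into a matrix.
# The five-point scales and band thresholds below are assumptions.
LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCE = {"insignificant": 1, "minor": 2, "moderate": 3, "major": 4, "catastrophic": 5}

def risk_rating(likelihood, consequence):
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    if score >= 15:
        band = "extreme"
    elif score >= 8:
        band = "high"
    elif score >= 4:
        band = "medium"
    else:
        band = "low"
    return score, band

# One line of a hypothetical risk register entry for a water supply system.
print(risk_rating("possible", "major"))   # -> (12, 'high')
```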
Chapter 1. Introduction to Asset Management
Aim: The aim of this chapter is to introduce the main concepts of asset management. Outcomes: After reading this chapter you will know about: The purpose of this book, The historical background to asset management, The ISO 55000 series of standards for asset management, Definitions of assets, liabilities and related terms with reference to ISO 55000 and to accounting applications, The broad types of assets which organizations have, The types of industry to which asset management is particularly important, The aim of asset management within an organization, An outline of the asset management life cycle, The basic questions to be addressed by asset management, The benefits of good asset management, The dangers of the asset death spiral. Topics: Purpose of this book; Evolution of asset management; ISO 55000 series Asset Management Standards; What is an Asset?; The asset management role; An accountant's view of assets; What is asset management?; Asset management system; The asset life cycle; Asset management basic questions; Dangers of poor asset management; Benefits of good asset management; The asset death spiral; Exercises.
Chapter 11. Strategic Asset Management Planning
Aim: The aim of this chapter is to introduce and discuss the principles of Strategic Asset Management Planning. This is concerned with providing the organization with asset capabilities which match the needs of the organization's business plan and are consistent with the organization's financial plan. This involves creating or updating: the "Plan for the Asset Portfolio", which shows what physical assets the organization will have over a given planning period; the "Plan for Asset Support", which is the plan for the supporting facilities such as maintenance and logistics over the same planning period; and the "Asset Information Systems and Procedures", such as the asset register and documented procedures such as change management. These plans are then incorporated into a document known as the Strategic Asset Management Plan. ISO 55001 at Clause 4.4 states that: "The organization shall develop a Strategic Asset Management Plan (SAMP)…". This is an overview document which provides information both to asset management personnel and to others who need to be aware of the role of asset management in meeting organizational objectives. This includes senior managers and a range of external and internal stakeholders. Outcomes: After reading this chapter you will be aware of how to tackle the issues of aligning the asset capabilities with the business plan. In particular you will be aware of the concepts and roles of the "Plan for the Asset Portfolio", the "Plan for Asset Support" and the "Asset Information Systems and Procedures". You will have seen how these plans are developed. You will be aware of the document called the "Strategic Asset Management Plan" and will have learnt about its purpose and content. Topics: Introduction; Strategic Asset Management Planning Elements; The Plan for the Asset Portfolio; The Plan for Asset Support; Asset Information Systems and Procedures; Planning Teams; Development Steps; Plan Summary and Follow-up Action; Planning Considerations; Top-down and Bottom-up; Support Activities; Engineering and Technical Services; Strategic Asset Management Plan.
Chapter 5. Local Autonomy in the Nordic Countries: Between a Rock and a Hard Place
The autonomy of local governments in the Nordic countries is ranked high in comparative indexes. However, the Nordic welfare state, based on unity and standardised services, seems to represent the contrary. Are Nordic local governments so autonomous after all? Although they have a constitutional guarantee of autonomy, local governments in Denmark, Finland, Norway and Sweden have to constantly redefine the limits of autonomy. The chapter examines the constraints on local autonomy, ranging from heavily regulated services to upscaling and the pressure on small municipalities to provide the services. It makes recommendations for local autonomy in the Nordic countries to define itself within the welfare state framework.
Pekka Kettunen
Chapter 17. State Supervision of Local Budgets: From Forbearance to No Concession
To guarantee fiscal sustainability of municipalities, all countries establish fiscal regulations and supervision. However, there is no guarantee that such regulation will be effective as the German state North Rhine-Westphalia shows. For decades, fiscal supervision suffered due to weak rules and inadequate implementation. In 2011, the state changed its supervisory system in favour of stricter rules. This chapter identifies causes for the persistence of an obviously failing system and reasons for its transformation. Persistence refers to political considerations and excessive demands in practice. The institutional change was caused by the global financial crisis making the old system unworkable and changing the actors' mindsets. The state reacted by strengthening rules and their implementation simultaneously. Despite positive fiscal effects, the reform had negative impacts on local autonomy.
Christian Person, René Geissler
Chapter 18. New Ways of Limiting Local Government Debt: An Empirical Assessment of the German Case
Severe fiscal pressure experienced by some German municipalities has led to a shift in the way municipalities are controlled by the responsible state governments. Instead of purely relying on a system of approving budgets and borrowing, some states have established debt relief programmes which combine grants and sanctions, or even sent austerity commissioners who take over responsibilities of councils and mayors. Whether these are deemed proportionate and legitimate interventions into the constitutionally guaranteed administrative autonomy of the local level depends heavily on their success in limiting local government debt. Based on an innovative synthetic control approach, this paper undertakes an empirical assessment of a recent debt relief programme in North Rhine-Westphalia and the deployment of an austerity commissioner, revealing that both instruments to some degree positively impacted upon local government debt, as compared to non-intervention. Nevertheless, it finds the effect is limited in substantial terms.
Steffen Zabler
Chapter 8. Life Cycle Planning and Costing
Aim: The aim of this chapter is to describe the techniques of life cycle planning and costing and to illustrate them with an example. Outcomes: After reading this chapter you will understand the reasons for life cycle planning and costing and have seen a check list of factors that go into a life cycle cost analysis. You will understand the concept of a life cycle asset management plan, and you will have seen an example of life cycle planning and cost analysis. Topics: Life cycle asset management plan; Life cycle costing; Creating life cycle plans and costings; Life cycle planning and costing elements; Life cycle planning and costing example; Input to plans and budgets; Summary of applications of life cycle plans and costs.
Dr. Nicholas Anthony John Hastings
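A minimal, hedged sketch of a life cycle cost roll-up is shown below; the cost figures, horizon and discount rate are assumptions for illustration, not values from the chapter.

```python
# Life cycle cost (present value) = acquisition
#   + discounted annual operation & maintenance + discounted disposal.
acquisition = 250_000.0        # purchase and installation cost (assumed)
annual_o_and_m = 40_000.0      # operation and maintenance per year (assumed)
disposal = 15_000.0            # end-of-life disposal cost (assumed)
life_years = 10
rate = 0.07                    # discount rate (assumed)

discounted_o_and_m = sum(annual_o_and_m / (1 + rate) ** t for t in range(1, life_years + 1))
discounted_disposal = disposal / (1 + rate) ** life_years
life_cycle_cost = acquisition + discounted_o_and_m + discounted_disposal
print(f"Life cycle cost (present value): ${life_cycle_cost:,.0f}")
```

Comparing such present-value totals across candidate assets or replacement timings is the usual way life cycle costs feed the plans and budgets mentioned above.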
Chapter 22. Safety
Aim: The aim of this chapter is to outline the main factors involved in general safety issues related to asset management. Safety critical systems are also considered, and techniques applicable to high risk plant are introduced. Outcomes: After reading this chapter you will be aware of safety issues in two main areas. Firstly, there are general safety concepts which apply to all physical assets. Secondly, there are concepts such as safety integrity levels which apply to high risk plant. You will also be aware of the need for approved engineering standards to be applied by competent personnel when developing repair specifications. Topics: Safety requirement and competence; Safety practices; Training and information; Permits; Tags; Danger indications; Safety critical equipment; Risk-based inspection; Asset integrity management; Layer of protection analysis; Safety integrity level; Facility siting and layout; Repairs requiring engineering.
Chapter 29. ISO 55000 Series Standards
Aim: The aim of this chapter is to provide a guide to the ISO 55000 series of asset management standards, published by the International Organization for Standardization (ISO). The ISO 55000 series standards describe and specify requirements for the implementation of physical asset management in asset-intensive organizations. These standards provide a general framework for the management of physical assets. Outcomes: Reading this chapter will provide information on the requirements of the ISO 55000 series of standards and give cross-references to the relevant sections in this book and in other publications. The adoption of ISO 55000 can provide: a structured view and understanding of asset management; effective relationships between top management, financial management, asset management, operations and maintenance; improvements in asset financial returns; well-informed asset management decisions; insurance, health and safety, regulatory and risk management benefits; company recognition/marketing; and improvements in training and development. Topics: ISO 55000; ISO 55001; ISO 55002; ISO/TS 55010; Overview of planning and the standards; ISO 55001 Clauses and Book Cross-reference; ISO 55001 Clauses and Figures; ISO 55002 Annexes and Book Cross-references; ISO 55010 Annexes and Book Cross-references; Strategic Asset Management Plan (SAMP); Functional gap analysis.
Chapter 28. Performance, Audit and Review
Aim: The aim of this chapter is to introduce and to give examples of indicators of asset performance and to discuss auditing and review of the asset management system itself. Outcomes: After reading this chapter you will be aware of the role of performance indicators and of some of the factors to consider in creating and applying them. You will have available some examples of performance indicators in specific applications. You will be aware of the need for audit of the asset management system and the requirement for review of the system to ensure its continuing effectiveness. Topics: Key performance indicators; Railway systems; Water supply systems; Electricity supply systems; Overall equipment effectiveness; Maintenance-related performance indicators; Audit of the asset management system; Management review.
Chapter 12. Where Is Municipal Marketisation Heading? Experiences from England and Scandinavia
Negative experiences, newer reform trends and local circumstances have challenged the reform doctrines of the New Public Management (NPM) and raised the question of whether reforms have entered a post-NPM era. We explore this question in a comparison of experiences with marketisation, a key doctrine in the NPM, within local park and road services in England, Sweden, Denmark and Norway. The comparison draws upon survey data collected from mid-level managers in 2014–16. We conclude that although marketisation is widespread it is not dominant. Also, the practices of marketisation are partly transformed by newer reform trends. In perspective, we find that marketisation is an evolving practice whose trajectories depend on local contextual circumstances and the adaptation of newer reform ideas.
Andrej Christian Lindholst, Ylva Norén Bretzer, Nicola Dempsey, Merethe Dotterud Leiren, Morten Balle Hansen
Chapter 2. Artificial Intelligence and Online Family Dispute Resolution
The integration of technology with dispute resolution practices follows the identification of a range of value-added benefits in both formal and informal legal proceedings. In recent years, there has been a movement towards investigating how artificial intelligence (AI) can enhance the functioning of online family dispute resolution (OFDR) systems after successful application to other types of disputes. This chapter provides an overview of the development of AI in disputes in order to understand current progress within family law. Several existing negotiation support systems for use in Australian family contexts are described, including Split-Up, Family_Winner, and Asset-Divider, the latter of which incorporates principles of justice in a game theory framework. Negotiation support systems facilitate informed decision-making through performance improvement via machine learning, while the integration of game theory assists in the distribution of resources to ensure the best outcome. As online dispute resolution begins to gain public and private traction, culminating in the normalisation and institutionalisation of services, it becomes important to carefully consider issues of justice, regulation, and quality assurance.
Elisabeth Wilson-Evered, John Zeleznikow
Chapter 3. Current Research and Practice in Online Family Dispute Resolution
The growth of Online Family Dispute Resolution (OFDR) means that consumers are now presented with a range of options on the market to suit their needs. Since these services are intended to optimise effectiveness and efficiency for their users, it is paramount that robust evidence of their quality be demonstrated to support their preferential use when compared to other forms of dispute resolution service delivery. The literature review presented in this chapter was conducted to scope the current research and practice evidence for online dispute resolution in family law as relating to child custody issues. The use of OFDR services in both Australian and international contexts was investigated across a range of electronic sources since 2011. Of the programs located by the review, it was evident that, while more methodologically rigorous research is required, preliminary evidence supports OFDR effectiveness in reaching desirable and fair outcomes. The considerations for selecting technologically-enhanced services are discussed, as are avenues for future research and directions to further develop OFDR as a viable option for informal conflict resolution. This chapter demonstrates how knowing the literature helps to inform future OFDR development and enhance service delivery.
Chapter 7. Sustainable Supplier Segmentation: A Practical Procedure
The sustainability of a supply chain is determined by the sustainability performance of each partner in the chain. The relationship between buyer and supplier has an important role in improving the sustainability of the supply chain. The purpose of this chapter is to explain the process of segmenting sustainable suppliers and to present strategies for collaboration with and improvement of the suppliers. In this chapter, a six-step process is suggested in which the output of one step is the input of the next. At first, the performance indicators in each dimension of sustainability are introduced; then, given the performance indicators in the segmentation model, the sustainable suppliers are assigned to seven segments (three main segments: economic, social, and environmental; three balancing segments: bearable, viable, and equitable; and a supplementary segment: sustainable). Finally, some supplier development strategies appropriate to each dimension of sustainability are suggested. The present study is beneficial for researchers and companies' executive managers. By a deeper understanding of this process, researchers can benefit from the proposed process for sustainable supplier segmentation, and corporate executives and experts can have the opportunity to use the supplier collaboration and development strategies in the supply chain.
Hamidreza Fallah Lajimi
Chapter 4. Case Study: The Development and Evaluation of Relationship Australia Queensland's Online Family Dispute Resolution System
The previous chapter emphasised the need for more methodologically rigorous evidence in online family dispute resolution (OFDR) if these technologically-enhanced services are to be useful and enduring. The contribution of Australia to furthering OFDR knowledge and practice worldwide is exemplified by an innovative pilot project conducted by Relationships Australia Queensland in 2009. Both the development of the software and its subsequent evaluation were evidence-based and intended to adhere to best practice through the adoption of an iterative design that incorporated ongoing quantitative and qualitative data from all stakeholders to optimise system functioning and utility. This chapter summarises the processes, findings, and recommendations of this pilot across the four stages of the program design: registration, intake, pre-FDR education, and OFDR. Clients and staff reported largely positive attitudes towards OFDR, with a need to appreciate the learning curve involved in navigating the system and how the technology qualitatively changes the mediation process. This pilot sets the standard for the development and evaluation of OFDR services in Australia and worldwide by subjecting the service to extensive systematic testing and evaluation to promote continuous learning and improvement.
Chapter 4. Science and Technology Commitment to the Implementation of the Sendai Framework for Disaster Risk Reduction 2015–2030
The Sendai Framework for Disaster Risk Reduction 2015–2030 was adopted by United Nations (UN) member states on 18 March 2015, at the World Conference on Disaster Risk Reduction held in Japan. The Sendai Framework went on to be endorsed by the UN General Assembly in June 2015. The Sendai Framework is wide in scope. This paper uses many resources of already published material to enable the reader to access a more complete summary of the science and technology commitment to the implementation of the Sendai Framework for Disaster Risk Reduction 2015–2030. In this paper on the role of science and technology engagement in providing evidence to inform policy and practice where possible, the author considered it important to emphasise the partnerships and learning she has been a part of, and all significant statements included in this paper are in italicized quotes. The author is grateful for the many opportunities to engage at many levels with colleagues who also contributed so much to these opportunities for joint working and shared learning.
Virginia Murray
Chapter 2. Evaluating Current Research Status and Identifying Most Important Future Research Themes
Concept Notes for the Group Discussion Sessions
This chapter focuses on group discussion sessions targeting the Priority Areas of the Sendai Framework for Disaster Risk Reduction 2015–2030. Day one group discussion session efforts were on Priority Area One—Understanding Disaster Risks; and Day two emphasis was on Priority Areas 2, 3 and 4.
Hirokazu Tatano, Andrew Collins, Wilma James, Sameh Kantoush, Wei-Sen Li, Hirohiko Ishikawa, Tetsuya Sumi, Kaoru Takara, Srikantha Herath, Khalid Mosalam, James Mori, Fumihiko Imamura, Ryokei Yoshimura, Kelvin Berryman, Masahiro Chigira, Yuki Matsushi, Lori Peek, Subhajyoti Samaddar, Masamitsu Onishi, Tom De Groeve, Yuichi Ono, Charles Scawthorn, Stefan Hochrainer-Stigler, Muneta Yokomatsu, Koji Suzuki, Irasema Alcántara Ayala, Norio Maki, Michinori Hatayama
Chapter 17. Disaster Resilient Infrastructure
Adopting sustainable infrastructure rating systems enables assessment of the degree of compliance with the Sustainable Development Goals (SDGs) (Diaz-Sarachagaa et al. 2016). Day-to-day revenue and profit pressures relegate disaster preparedness to lower priorities. Policy-makers, processors, and communities are taking serious note of disaster risk management targets and indicators (Mitchell et al. 2013). Integrated assessments bring together disaster-proofing, technology innovation, SDG compliance and, overarchingly, good governance (Lotze-Campen 2015). Infrastructure sustainability assessment commenced with societal, environmental and economic costs, and value added spanning their life cycles. This was boosted to the next level with the disaster preparedness and resilience charted by the Hyogo and Sendai frameworks, which focused on reducing global disaster damage to critical infrastructure, providing access to multi-hazard early warning, and strengthening the resilience of ageing and new critical infrastructure, spanning water, transportation, telecommunications, education, health care, and sanitation (UNISDR 2015).
Bhumika Gupta, Salil K. Sen
Influence of Selected Internal Factors on the Outputs of the Financial-Sector Companies Traded on the Warsaw Stock Exchange
Financial-sector companies differ significantly from other enterprises in terms of the business activity carried out, which mainly consists in the provision of financial services and consultancy with regard to financial products. Financial institutions, similarly to other companies, issue shares that are traded on the stock exchange in order to obtain additional capital. The purpose of the paper is to indicate which financial ratios significantly affect the rates of return of the shares issued by the financial institutions analyzed. The empirical analysis involves financial institutions operating on the Warsaw Stock Exchange in the years 2000–2018. The source data adopted for the analysis come from financial statements that are compliant with the applicable regulations regarding financial reporting, in particular with the International Financial Reporting Standards. Financial reports constitute the basis for analyses and evaluations of the activity of a given financial institution and are the source of relevant information used by depositors and investors in decision-making processes. The results of the unbalanced panel estimation, carried out with appropriate diagnostic tests, indicated that one financial ratio, return on assets, proved to be significant. Moreover, sector-specific effects were identified. The banking sector should be analyzed separately from other financial-sector companies.
Ewa Majerowska, Ewa Spigarska
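A simplified analogue of such an unbalanced panel regression is sketched below using firm fixed effects (least-squares dummy variables); the data are synthetic, and the specification and diagnostics are not the authors'.

```python
# Fixed-effects (LSDV) regression of share returns on a financial ratio over an
# unbalanced panel of firms; synthetic data, not the Warsaw Stock Exchange sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
rows = []
for firm in range(20):
    n_years = rng.integers(8, 19)                                   # unbalanced: 8-18 years per firm
    years = rng.choice(np.arange(2000, 2019), size=n_years, replace=False)
    alpha = rng.normal(0, 0.02)                                     # firm-specific effect
    for year in years:
        roa = rng.normal(0.05, 0.03)                                # return on assets
        ret = 0.8 * roa + alpha + rng.normal(0, 0.05)               # annual share return
        rows.append({"firm": firm, "year": int(year), "roa": roa, "ret": ret})
df = pd.DataFrame(rows)

fe_model = smf.ols("ret ~ roa + C(firm)", data=df).fit()            # firm fixed effects via dummies
print(fe_model.params["roa"], fe_model.pvalues["roa"])
```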
Chapter 9. Legalizing Artificial Intelligence
Current boardroom technologies concentrate on the production and distribution of the information that boards want to assist in their supervisory and strategic roles, which means that many of these technologies do not offer the advantages of AI systems themselves but instead generate the data that is the necessary lifeblood of AI. Advanced analytics based on AI algorithms identify more complex patterns than is possible through human intervention, predominantly in the context of identifying fraud and money laundering in financial services. AI can be considered as property, making clients, owners, or producers liable if damage is caused by it. The European Parliament passed a resolution proposing a form of legal personhood for artificial intelligence, even though legal personality is not lightly conferred in any jurisdiction. As AI entities function at an increasing distance from their developers and owners, they confront conventional legal frameworks for attribution and liability. This author (Georgios I Zekos) considers that there is a need to attribute legal personhood to AI entities, in their present configuration, in a way analogous to that of traditional corporations, the only difference being the virtual dimension of their functioning, because humans create AI entities and put them into operation; the case of AI entities put into operation by other AI entities is a matter for the future, where legal personhood would attribute liability to the original AI entities and be enforced in an AI way by AI entities in an AI world.
Georgios I. Zekos
Chapter 6. AI Risk Management
Artificial intelligence has become a new engine for economic growth and, as the central driving force of the new round of industrial reforms, it will further release the energy accumulated from prior technological revolutions and industrial transformations by generating new powerful engines to modernize economic activities such as production, distribution, exchange, and consumption. The decentralized nature of blockchain generates the new concept of a token economy, in which the community's revenue is allocated to the actual content producers and service users who generate value. In addition, blockchain is a key technology that enables new protocols for the establishment of a token economy in the future, leading to a new economic paradigm. Digital technologies are now turning the world upside down, and an ongoing series of technological developments has transformed economic and social life. The integration of AI agents into society has changed the manner in which people interact with each other, along with introducing a new kind of direct interaction with AI agents, which are increasingly present in society.
Chapter 5. Risk Management Developments
Risk analysis is a tool that enables the decision-maker to gain knowledge about events with destructive effects; through the analysis of risk, the decision-maker makes such events more certain and obtains control over them. Moreover, risk is the net negative impact of the exercise of a vulnerability, considering both the probability and the effect of occurrence. Risk management is the procedure of identifying risk, assessing risk, and taking steps to reduce risk to a tolerable level. Furthermore, risk sharing and risk control are central justifications for joining strategic alliances. Credit risk arises from the prospect that one party to a financial instrument will cause a financial loss for the other party by failing to discharge an obligation. Managing risk is one of the key objectives of companies operating globally, and managers normally associate risk with negative outcomes.
Chapter 4. Artificial Intelligence Governance
AI is an approximation of human intelligence, which leaves open the prospect that AI will exceed human intelligence and demonstrate a separate category of intelligence. Moreover, AI is related to using computers to understand human intelligence, but it is not necessarily confined to methods that are biologically observable, which means that AI denotes the competence of a machine to imitate intelligent human behavior. The emerging digital lifeworld delivers resources for a new type of government, in which algorithmic government is about extracting facts, entities, concepts, and objects from vast repositories of data, making those subjects and objects traceable and amenable to decision and action via the unavoidable power of inference. AAI systems host algorithmic governmentality encompassing governable subjects who function not as real people but rather as temporary aggregates of infra-personal data, which means that, through such quantification, the system will control the globe via the deployment of the statistical power encompassed by AAI systems.
Chapter 3. Management and Corporate Governance
Corporate governance refers to the relationships among the different internal and external stakeholders involved in the governance processes designed to help a corporation accomplish its objectives. DLT adoption by market participants means that new kinds of corporate stakeholders, such as token holders, are likely to appear. Hence, these new players will lead to alterations in securities issuance and trading and in shareholder involvement, but also to a reinforcement of the rights awarded to the different corporate stakeholders, and so a new role will be recognized for corporate stakeholders. Public blockchain systems are "trust-minimized" but "trust-shifting", which indicates the need to trust others than the officers and directors of a bona fide corporation in systems that operate money, smart contracts, and possibly many other critical human practices; this means that people continue to lead and make vital decisions on behalf of others.
Chapter 2. E-Globalization and Digital Economy
The new era of information technologies refers to the globalization of communication. The rapid decrease in communication costs has enhanced dealings among countries and is a vital foundation for building a stronger universal civil society. In today's technology-driven world, industry standardization, device interoperability, and product compatibility have turned out to be vital to advancing innovation and competition. The technologies and virtual places that constitute cyberspace have been assimilated into the lives of people who accept the Internet as a tool for pursuing their common, real-world needs. E-government brings government closer to citizens, overcoming the barriers of bureaucracy, reducing corruption, and making decision-makers more responsive to people's needs, which means that the e-services of e-government are characterized by greater efficiency and transparency.
Chapter 12. AI and International Law
Law can formally be considered as an institutionalization of practical discourse on social norms, and so modern law in Western civilizations is positive in articulating the will of a sovereign lawgiver, legalistic in applying to deviations from norms, and formal. Public international law portrays itself as an instrument of universal moral values, of human rights, and of justice. There is a shift from international law to law and globalization, providing a new incentive for erasing the artificial boundary between public and private in international law. It is characteristic that modern societies are far more interconnected than societies have ever been in the past, and so, with the advances of technology and infrastructure, networks have quickly become an integral part of human lives. AAI technological advances are unavoidably altering human and social behaviors, demanding an adaptation of existing norms or the creation of specific rules if the law in force proves inadequate or unproductive. Essentially, AAI will be a cross-cutting phenomenon necessitating not only the establishment of specific standards but also a reconsideration of the feasibility and effectiveness of preexisting rules. Zekos considers that, at the verge of humanity losing control of the earth, there will be a war of humans against AAI machines, immobilizing the AAI intelligence and arriving at a zero point for a restart.
It seems that advanced technology and AI act and think like humans. AI is an exceptional information technology aiming at the advent of machines that react and work like the human mind. Moreover, the advent of AAI systems leads to global governance that challenges conventional international law. It is worth mentioning here that AAI systems generate a clear-cut need for new sui generis rules to cope with new AAI situations or types of conduct. AAI will ignite morally problematic or politically or strategically disruptive forms of conduct by controlling the global population, deploying fully autonomous weapons, forming cyber-warfare systems, and establishing stability in the AAI global society via AAI itself. In conclusion, AI has to be utilized by people for the good of the whole society and must not become a weapon in the hands of an elite seeking to conquer the earth regardless of the cost to human life and prosperity.
Role of Personality Traits in Work-Life Balance and Life Satisfaction
In this study, we examine the role of personality traits in ensuring work-life balance and life satisfaction. For this purpose, 434 people working in the service sector in Kocaeli/Turkey are interviewed face-to-face. The Five-Factor Personality scale is used to measure personality traits, and scales with international validity and reliability are used to measure work-life balance and life satisfaction. Hayes Process analysis is used to determine whether personality traits have a role in ensuring work-life balance and life satisfaction. According to the results of the analysis, personality does have a role in ensuring work-life balance and life satisfaction. A detailed analysis of the sub-dimensions of personality shows that extroversion, conscientiousness, openness to experience, and emotional balance have a role in terms of work-life balance and life satisfaction, but no relationship is found for the dimension of agreeableness.
Sevda Köse, Beril Baykal, Semra Köse, Seyran Gürsoy Çuhadar, Feyza Turgay, Irep Kıroglu Bayat
Interactions Between Effectiveness and Consolidation of Commercial Banks in the Polish Banking Sector
Bank consolidation is a process that has been observed in the world economy since the early 1980s. Until the global financial crisis, the formation of bank capital groups and financial conglomerates was considered banks' response to the globalization of the financial system. After the global financial crisis, a "consolidation window" opened up in the global economy, which did not bypass the Polish banking sector. At the same time, it coincided with the support of the Polish government, interested in its repolonization, which as a consequence resulted in an increased concentration of bank capital. The main aim of the paper is the analysis of consolidation processes after the global financial crisis, as well as of the interactions between the concentration of commercial banks in Poland (as a quantitative approach to the consolidation process) and their operational efficiency. The research indicates that concentration of the Polish banking sector affects the effectiveness of the sector and of individual commercial banks. However, the scale and strength of these dependencies vary between the analyzed cases.
Irena Pyka, Aleksandra Nocoń, Anna Pyka
Globally Emergent Behavioral Patterns, as a Result of Local Interactions in Strongly Interrelated Individuals
In this paper, we study how local interactions in suitably structured social networks give rise to globally emergent states and observable patterns. An example of such states and patterns is the emergence of Panopticon-like structures that possess global surveillance properties. Our methodology is based on elements of game theory and innovation diffusion graph processes modeling social network local interactions. We provide an example of a simple social network structure in which collective actions induce the emergence of specific behaviors due to the effects of the local dynamics inherent in strongly interconnected individuals. Our work provides a framework for studying and explaining how specific social interaction patterns produce already observable global social patterns.
Christos Manolopoulos, Yannis C. Stamatiou, Rozina Eustathiadou
Insights from Lobbying Research on the Accounting Standard-Setting Process Through Comment Letter Submissions
The purpose of this paper is to provide an overview of lobbying research through comment letter submissions in the accounting standard-setting process. First, we review the theoretical framework that supports lobby behavior in accounting standard-setting process. Second, we examine the participation in lobby process and constituents' incentives to participate worldwide. Third, we analyze the studies that focus on the content of comment letters to understand the position and argument of participants, and finally, we examine the effectiveness of a lobbying strategy through the relationship between the inputs (comment letters) and output (final standard). This paper identifies fundamental questions that remain unanswered and offers avenues for future research.
Lucía Mellado, Laura Parte
Chapter 11. AI and IPRs
First of all, it is worth mentioning here that the law's struggle "to keep pace with technological developments" has always raised questions about intellectual property protections in emerging areas, and data-centric technologies are not an exception. Moreover, data-centric technologies have crossed national borders and attained adoption, even while patent law and copyright law have been slow to respond. It is worth noting that data-centric technologies have challenged the precise meanings of intellectual property doctrines, which did not envision such technological advancements, and so the scope of those doctrines is being tested. The growing sophistication of AI is escalating its capability to engage in knowledge work. Moreover, AI technologies infuse the role of a machine into the invention process, which means that the algorithms at the heart of artificial intelligence are playing a role in the conception and reduction to practice of inventions.
Chapter 10. AI and Legal Issues
AI technologies affect the core of private autonomy and its limits, the notion of a contract and its interpretation, the equilibrium of parties' interests, the structure and means of enforcement, the effectiveness of legal and contractual remedies, and the legal system's vital attributes of effectiveness, fairness, impartiality, and predictability. The increasing global investments in blockchain technology justify a progressive regulatory adaptation to the altering materiality, and so civil liability and the insurance sector are required to adapt to and govern an ever more pressing techno-economic evolution. It is worth noting that adapting existing rules to deal with the technology will require an understanding of the various ways robots and humans respond to legal rules. A robot cannot make an instinctive judgment about the value of a human life. It is argued that the automation of legal services is a means to enhance access to justice, diminish legal costs, and strengthen the rule of law, which means that these improvements amount to a democratization of law. There is a shifting role of artificial intelligence in the legal course.
Chapter 1. Introduction
Artificial intelligence is becoming global and so it is encompassing various industries and transforming commerce, which means that AI is having tremendous economic consequences akin to transformational technologies of the past, such as electrification, manufacturing, and information technology. Algorithmic decision-making presents multiple benefits to society: algorithms surpass human abilities in certain tasks, and the set of those tasks is escalating. AI and its usage have a significant impact on human lives and society as a whole. AI also involves a number of potential risks, such as opaque decision-making, gender-based or other kinds of discrimination, intrusion into private lives, or being used for criminal purposes.
Peer-to-Peer Lending Development in Latvia, Risks and Opportunities
Investment opportunities have become limited due to low interest rates; therefore, investors are searching for alternative investment sources. Peer-to-peer (P2P) platforms act as mediators between investors and borrowers and provide an opportunity for mutually beneficial interaction. The aim of the research is to study the P2P lending process and to identify risks and opportunities related to this area. The research is focused on the investors' side due to the specifics of Latvian P2P lending platforms, i.e., they do not grant loans directly but use loan originators. Mixed research methods were applied as follows: a field experiment (trial investments through P2P lending platforms), a survey, structured interviews, and a focus group discussion. The study shows that the rapid development of P2P lending in Latvia is driven by providing relatively lower risks to investors. The main risk mitigation tools for investors are critical originator selection, whereby a due diligence procedure is executed for each prospective loan originator, and buyback and payment guarantees, whereby marketplaces compensate the invested principal and earned interest if the borrower is late with the repayment. Most Latvian marketplaces offer to diversify investors' risk by investing in fractions of loans across different borrowers, originators, loan types, and geographies. Some marketplaces offer loan ratings based on an internal evaluation of the risks. A secondary loan market provides liquidity to investors. However, some specific risks still exist, such as the P2P lending operating model's sensitivity to adverse economic development scenarios.
Irina Petersone, Ilmars Kreituss
Concurrent Correctness in Vector Space
Correctness verification of a concurrent history is challenging and has been proven to be an NP-complete problem. The reason that verifying correctness cannot be solved in polynomial time is a consequence of the way correctness is defined. Traditional correctness conditions require a concurrent history to be equivalent to a legal sequential history. The worst case number of legal sequential histories for a concurrent history is O(n!) with respect to n methods invoked. Existing correctness verification tools improve the time complexity by either reducing the size of the possible legal sequential histories or improving the efficiency of generating the possible legal sequential histories. Further improvements to the time complexity of correctness verification can be achieved by changing the way correctness of concurrent programs is defined. In this paper, we present the first methodology to recast the correctness conditions in literature to be defined in vector space. The concurrent histories are represented as a set of method call vectors, and correctness is defined as properties over the set of vectors. The challenge with defining correctness in vector space is accounting for method call ordering and data structure semantics. We solve this challenge by incorporating a priority assignment scheme to the values of the method call vectors. Using our new definitions of concurrent correctness, we design a dynamic analysis tool that checks the vector space correctness of concurrent data structures in O(n^2) with respect to n method calls, a significant improvement over O(n!) time required to analyze legal sequential histories. We showcase our dynamic analysis tool by using it to check the vector space correctness of a variety of queues, stacks, and hashmaps.
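As a purely illustrative aside (not the paper's actual vector encoding or priority-assignment scheme), the sketch below conveys the flavor of an O(n^2) pairwise check over method-call records for a concurrent FIFO queue history. The record fields and the two properties checked are assumptions made only for this example.

```python
# Illustrative only: a toy O(n^2) pairwise check over method-call records
# for a concurrent FIFO queue history. The record encoding and the checked
# properties are simplified assumptions, not the scheme defined in the paper.
from dataclasses import dataclass

@dataclass
class Call:
    op: str        # "enq" or "deq"
    item: int      # value enqueued or returned
    start: float   # invocation time
    end: float     # response time

def check_history(history):
    """Return a list of violations found by pairwise inspection (O(n^2))."""
    violations = []
    enqs = [c for c in history if c.op == "enq"]
    deqs = [c for c in history if c.op == "deq"]
    # Property 1: every dequeued item must have a matching, earlier-starting enqueue.
    for d in deqs:
        matches = [e for e in enqs if e.item == d.item and e.start <= d.end]
        if not matches:
            violations.append(f"deq({d.item}) has no preceding enq")
    # Property 2: no item is dequeued more than once.
    seen = {}
    for d in deqs:
        seen[d.item] = seen.get(d.item, 0) + 1
    violations += [f"item {i} dequeued {n} times" for i, n in seen.items() if n > 1]
    return violations

history = [Call("enq", 7, 0.0, 0.1), Call("enq", 9, 0.05, 0.2),
           Call("deq", 7, 0.3, 0.4), Call("deq", 9, 0.35, 0.5)]
print(check_history(history))  # [] -> no violations detected
```

Both properties are checked with nested scans over the call records, which is what keeps the analysis quadratic rather than factorial in the number of method calls.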
Christina Peterson, Victor Cook, Damian Dechev
6. Post-independence Land Reform, War Veterans and Sporadic Rural Struggles
This chapter discusses the intervening period between the second and third zvimurenga by focusing on developments central to the rise of the fast track land occupations in the year 2000. A central consideration for this period is the Zimbabwean state's failure to shift fundamentally the colonial land and agrarian structure, with the land reform programme failing to de-racialise the countryside in terms of landholdings. Alongside this stalled land reform programme were two further developments which facilitated the emergence of the third chimurenga. On the one hand, large numbers of ex-guerrillas from the war of liberation were marginalised in the post-1980 period and they began to mobilise and organise in a manner which led to the eventual formation of a national war veterans' association which expressed discontent with the Zimbabwean state and ruling party. On the other hand, because of minimal land reform, as well as ongoing land pressures and livelihood challenges in the communal areas, villagers often in alliance with war veterans increasingly began to occupy land in the 1990s in a deeply localised way. By the late 1990s, the stage was set for another large-scale episode of land struggles.
Kirk Helliker, Sandra Bhatasara, Manase Kudzai Chiweshe
3. Land Alienation, Land Struggles and the Rise of Nationalism in Rhodesia
This chapter acts as a prelude which frames the examination of the second chimurenga in Chapters 4 and 5. In discussing land alienation, land struggles and the rise of nationalism in Rhodesia in the intervening period between the first and second zvimurenga, the chapter brings to the fore the deep grievances around land under colonial subjugation, which resulted in localised resistance and struggles in the Reserves, later Tribal Trust Lands. Grievances and struggles were firmly embedded in the historical memories of rural people as they engaged with guerrilla armies during the second chimurenga. The chapter shows social differentiation within the reserves and the tensions which sometimes arose because of this differentiation. In the case of both the pre-nationalist days and the days of emerging mass nationalism from the mid-1950s, the chapter stresses the ways in which Africans drew upon their localised experiences and grievances when confronting the colonial order, including the agrarian and land reconfiguration of the reserves. As well, the chapter has a specific focus on women, as they struggled not only against a colonial order but also a patriarchal order.
The Evolution of Chatbots in Tourism: A Systematic Literature Review
In the last decade, Information and Communication Technologies have revolutionized the tourism and hospitality sector. One of the latest innovations shaping new dynamics and fostering a remarkable behavioral change in the interaction between the service provider and the tourist is the employment of increasingly sophisticated chatbots. This work analyzes the most recent systems presented in the literature (since 2016), investigated via 12 research questions. The rapid evolution of such solutions is the primary outcome. However, such a fast technological and financial pace requires continuous investment, upskilling, and system innovation to tackle the eTourism challenges, which are shifting towards new dimensions.
Davide Calvaresi, Ahmed Ibrahim, Jean-Paul Calbimonte, Roland Schegg, Emmanuel Fragniere, Michael Schumacher
Chapter 18. Citizen Science and Policy
Citizen science has manifold relationships to policy, which is understood as sets of ideas or plans for action followed by a government, business, political party, or group of people. In this chapter, we focus on the relationship between citizen science, government policies, and the related notions of politics and polity. We discuss two core areas of interaction between citizen science and policy. Firstly, government policies can support citizen science to flourish, for example, through legitimisation or funding. Secondly, citizen science can contribute to policymaking at various stages of the policy cycle, including policy preparation, formulation, implementation, monitoring, and evaluation. Since both of these perspectives are intertwined, the policy landscape related to citizen science is complex, and it is continuously evolving. This chapter disentangles some of the complexities, with a particular focus on the European landscape, its geographic diversity, and key players (stakeholders and beneficiaries). It presents a brief history and the current context and also includes recommendations for the future with respect to governance, policy impact, sustainability of citizen science initiatives, and the role of digital transformations. We showcase the pathways of leading examples but also highlight currently unanswered questions.
Sven Schade, Maite Pelacho, Toos (C. G. E.) van Noordwijk, Katrin Vohland, Susanne Hecker, Marina Manzoni
Chapter 17. Science as a Lever: The Roles and Power of Civil Society Organisations in Citizen Science
Citizen science has become an umbrella term that encompasses a growing range of activities, actors, and issues. This chapter examines the potential of citizen science to generate transformative knowledge and argues that civil society organisations (CSOs) are key actors in this regard. However, the roles of CSOs are neglected in the literature on citizen science. We turn to the traditions of community-based research and participatory action research to learn more. With two case studies on health and safety, we show how transformative knowledge enables concerned communities to claim their rights and enriches scientific knowledge generation. Through a socio-historical analysis, we find three main roles grassroots CSOs take on in participatory research: (1) a technical role in the production of data and knowledge; (2) a governance role in the deliberation on research activities and risk assessment; and (3) an advocacy role by campaigning for transformative knowledge. These roles determine the ability of grassroots CSOs to generate legitimacy and rely on CSO members belonging to different spheres of society, scientific skills, and access to marginalised communities. Finally, we discuss the conceptual and practical challenges of accounting for CSOs' roles in order to build a more just and transformative future through citizen science.
Claudia Göbel, Lucile Ottolini, Annett Schulze
Chapter 10. Machine Learning in Citizen Science: Promises and Implications
The chapter gives an account of both opportunities and challenges of human–machine collaboration in citizen science. In the age of big data, scientists are facing the overwhelming task of analysing massive amounts of data, and machine learning techniques are becoming a possible solution. Human and artificial intelligence can be recombined in citizen science in numerous ways. For example, citizen scientists can be involved in training machine learning algorithms in such a way that they perform certain tasks such as image recognition. To illustrate the possible applications in different areas, we discuss example projects of human–machine cooperation with regard to their underlying concepts of learning. The use of machine learning techniques creates lots of opportunities, such as reducing the time of classification and scaling expert decision-making to large data sets. However, algorithms often remain black boxes and data biases are not visible at first glance. Addressing the lack of transparency both in terms of machine action and in handling user-generated data, the chapter discusses how machine learning is actually compatible with the idea of active citizenship and what conditions need to be met in order to move forward – both in citizen science and beyond.
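A minimal, hypothetical sketch of the human-machine loop described above: volunteer-provided labels train a classifier that then pre-sorts new records, routing only low-confidence cases back to volunteers. The synthetic features, the scikit-learn model choice, and the confidence thresholds are assumptions for illustration, not taken from any of the projects discussed in the chapter.

```python
# Hypothetical human-in-the-loop sketch: citizen-provided labels train a model,
# and uncertain predictions are routed back to volunteers for review.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Stand-in for image features; in a real project these might be embeddings.
features = rng.normal(size=(500, 32))
volunteer_labels = (features[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Train a simple classifier on the volunteer-labelled records.
model = LogisticRegression(max_iter=1000).fit(features, volunteer_labels)

# Classify a new batch automatically, but keep ambiguous cases for humans.
new_batch = rng.normal(size=(100, 32))
probs = model.predict_proba(new_batch)[:, 1]
confident = (probs < 0.2) | (probs > 0.8)      # arbitrary confidence band
print(f"auto-classified: {confident.sum()}, sent back to volunteers: {(~confident).sum()}")
```

The point of the sketch is the division of labor: the model scales routine classification, while ambiguous records remain a task for citizen scientists, whose new labels can later retrain the model.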
Martina Franzen, Laure Kloetzer, Marisa Ponti, Jakub Trojan, Julián Vicens
Chapter 23. Citizen Science in the Digital World of Apps
In this chapter, we highlight the added value of mobile and web apps to the field of citizen science. We provide an overview of app types and their functionalities to facilitate appropriate app selection for citizen science projects. We identify different app types according to methodology, data specifics, and data collection format. The chapter outlines good practices for creating apps. Citizen science apps need to ensure high levels of performance and usability. Social features for citizen science projects with a focus on mobile apps are helpful for user motivation and immersion and, also, can improve data quality via community feedback. The design, look and feel, and project identity are essential features of citizen science apps. We provide recommendations aimed at establishing good practice in citizen science app development. We also highlight future developments in technology and, in particular, how artificial intelligence (AI) and machine learning (ML) can impact citizen science projects.
Rob Lemmens, Vyron Antoniou, Philipp Hummer, Chryssy Potsiou
An Aircraft Pilot Workload Sensing System
Workload evaluation is of great importance for human error avoidance training, particularly in the use of complex systems that require different and concurrent activities. Excessive workload harms human performance and can even lead to adverse outcomes. In the aviation field, certain flight maneuvers, such as take-off and landing, place great attention and workload demands on the pilot. Thus, a system capable of measuring pilots' workload levels during flight could be beneficial to increasing pilots' performance. This work aims to study the initial feasibility of a device called the Cockpit Pilot Warning System that monitors the pilot workload level during flight. With this aim, an experimental campaign using a Level-D business aircraft flight simulator is conducted. Two sensors are used to acquire biological signals: a thermographic camera is used to obtain pilots' Face Temperature Variation (FTV), while a heart sensor is used to acquire their Heart Rate (HR). The nervous system modifies FTV and HR in response to stressful or high-workload events, and these signals can thus be used to monitor the pilots' workload that affects their performance. The workload measurement with the thermographic camera is an indirect measurement, particularly suited to aviation since it is contactless: it does not interfere with concentration and preserves pilots' freedom of movement, thus not affecting their working functions.
Andrea Alaimo, Antonio Esposito, Alberto Milazzo, Calogero Orlando
Combining Ultrasound and Surface Treatments for an Efficient Ice Protection
Different strategies may be adopted to avoid ice formation, such as power-consuming active systems and passive coatings. Several categories of surface treatments with superhydrophobic/icephobic behavior have been developed in the last decade. The goal of the coating application is to repel water droplets, delay ice nucleation, and significantly reduce ice adhesion. However, surface treatments alone are not sufficient to guarantee icing protection in a wide range of humidity and temperature conditions. They should be considered a complementary solution to traditional active protection systems, reducing their power consumption and environmental impact. This study concerns the early development stage of a hybrid system, characterized by low energy consumption and based on both a passive technique, the superhydrophobic/icephobic coating, and an active one, ultrasound, to remove ice build-ups from treated surfaces. Preliminary tests are conducted on a coated metal plate, and the results of the investigation are presented.
Leandro Maio, Filomena Piscitelli, Salvatore Ameduri, Antonio Concilio, Fabrizio Ricci
A Structural-Aware Frequency Division Multiplexing Technique for Acoustic Data Communication in SHM Applications
The technological advancements in the sensor design and fabrication process brought about a new generation of smart sensor nodes to be used for Structural Health Monitoring (SHM) purposes, which are concurrently capable of data sensing and processing in situ. This is the case of GWs-based monitoring applications, where the capability of the state-of-the-art transducers to generate custom signals inspired new potentials for acoustic data communications without the need for external cabling. Thus, information about the structural integrity might be transferred between sensor nodes permanently attached to the structure and exchanged across the monitored mechanical waveguide as a numerical damage indicator. Here, a combination of square-wave excitation sequences and frequency-division multiplexing (FDM) is explored for simultaneous communication with multiple nodes. In detail, the problem of selecting the most appropriate carrier frequencies is specifically tackled, by proposing two different strategies for structural aware SHM data communication systems. A Multiple-in Multiple-out (MIMO) miniaturized smart sensor network, consisting of low-power and low-cost sensor nodes, was deployed to prove the effectiveness of the advanced solutions. Transducers were positioned in a spatially distributed and permanently installed network. Cable-free exchange of encoded information across a square metallic plate as well as on a stiffened carbon-fiber reinforced plastics (CFRP) panel is achieved.
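To illustrate the frequency-division multiplexing idea in the abstract above, the following sketch modulates one bit stream per sensor node onto its own carrier via on-off keying and recovers each stream by correlation. The carrier frequencies, symbol length, and demodulation threshold are assumptions for this toy example and do not reflect the structure-aware carrier-selection strategies proposed in the paper.

```python
# Illustrative FDM sketch: two nodes share the waveguide by transmitting
# on-off-keyed symbols on separate carriers; each stream is recovered by
# correlating the received signal with the corresponding carrier.
import numpy as np

fs, symbol_len = 1_000_000, 1000          # 1 MHz sampling, 1 ms symbols (assumed)
t = np.arange(symbol_len) / fs
carriers = {"node_A": 150_000.0, "node_B": 230_000.0}   # Hz, hypothetical values

def modulate(bits, f):
    # One sine burst per '1' bit, silence per '0' bit.
    return np.concatenate([b * np.sin(2 * np.pi * f * t) for b in bits])

def demodulate(signal, f, n_bits):
    ref = np.sin(2 * np.pi * f * t)
    symbols = signal.reshape(n_bits, symbol_len)
    energy = np.abs(symbols @ ref)        # correlate each symbol with its carrier
    return (energy > energy.max() / 2).astype(int)

bits_A, bits_B = [1, 0, 1, 1], [0, 1, 1, 0]
channel = modulate(bits_A, carriers["node_A"]) + modulate(bits_B, carriers["node_B"])
print(demodulate(channel, carriers["node_A"], 4))  # -> [1 0 1 1]
print(demodulate(channel, carriers["node_B"], 4))  # -> [0 1 1 0]
```

Because the two carriers complete an integer number of cycles per symbol, the correlations are nearly orthogonal and the two streams separate cleanly; a structural-aware system would additionally pick carriers that propagate well through the monitored waveguide.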
Federica Zonzini, Luca De Marchi, Nicola Testoni, Christian Kexel, Jochen Moll
Damage Identification by Inverse Finite Element Method on Composite Structures Subject to Impact Damage
One main limitation to the implementation of an SHM system on real structures is the difficulty of accurately defining the load boundary conditions and the material properties, possibly leading to damage misclassification, especially with heterogeneous materials like composites. In this framework, the inverse Finite Element Method (iFEM) enables the reconstruction of the complete displacement field, and thus the strain field, starting from discrete strain measurements without any a priori knowledge of the loading condition and the material properties. Structural assessment is then performed by computing an anomaly index that identifies discrepancies between the strain reconstructed and the strain measured at selected testing positions, and by exploiting the latter to compute the Mahalanobis distance to further highlight discrepancies. Though the anomaly identification framework is general for any arbitrary component geometry and damage type, the procedure is experimentally verified on a CFRP reinforced panel subjected to a compressive load, with propagating delamination generated from bullet damage.
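The following is a hedged sketch of the anomaly-index idea only: residuals between "reconstructed" and measured strains at test positions are scored with a Mahalanobis distance against a healthy-state baseline. The residuals here are synthetic and the iFEM reconstruction itself is not reproduced; sensor counts, noise levels, and the threshold are assumptions.

```python
# Toy anomaly scoring: residual = reconstructed strain - measured strain at each
# test position; a Mahalanobis distance against healthy-condition residuals
# flags departures from the baseline. The iFEM step is faked with random noise.
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_baseline = 6, 200

# Residuals collected while the structure is known to be healthy.
healthy_residuals = rng.normal(scale=1e-6, size=(n_baseline, n_sensors))
mean = healthy_residuals.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy_residuals, rowvar=False))

def mahalanobis_index(residual):
    d = residual - mean
    return float(np.sqrt(d @ cov_inv @ d))

# New inspection: a localized strain discrepancy at sensor 3 mimics damage.
new_residual = rng.normal(scale=1e-6, size=n_sensors)
new_residual[3] += 8e-6
threshold = 5.0   # in practice set from the healthy-state distribution
score = mahalanobis_index(new_residual)
print(f"anomaly index: {score:.1f} ->", "damage suspected" if score > threshold else "healthy")
```

The Mahalanobis distance weights each residual by the baseline covariance, so a discrepancy that is large relative to normal sensor scatter stands out even when the raw strain values are tiny.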
Luca Colombo, Daniele Oboe, Claudio Sbarufatti, Marco Giglio
Review on Various Coating Techniques to Improve Boiling Heat Transfer
Boiling has gained prominence in recent decades for its effectiveness in cooling micro-electronic devices due to its superior heat extraction ability as compared to air or single-phase liquid cooling. Numerous works have been published regarding the augmentation of boiling heat transfer by developing modified surfaces. Micro- and nano-surfaces have been developed for this purpose. These surfaces are engineered either by surface coating or by micro-machining. The present review attempts to elaborate the various coating techniques and methods that have been used to fabricate surfaces that improve pool and flow boiling heat transfer. This paper focuses primarily on experimental studies. The results obtained using the modified surfaces and the mechanisms responsible for them are discussed.
Amatya Bharadwaj, Rahul Dev Misra
Chapter 2. Gavagai? The International Politics of Translation
This chapter unpacks the politics of translation in four steps. In a first step, it reviews how translation is made unproblematic in contexts as diverse as the literature on international norms, actor-network theory, and in a generalised attitude toward social research commonly dubbed 'positivism'. Second, it turns to W. V. O. Quine's influential take on the indeterminacy of translation to highlight how it effectively disrupts routinised attempts to render translation unproblematic. A third step discusses these attempts in the broader horizon of a quest for certainty, a longing for knowledge to stand on a firm ground, which contrasts sharply with the reflexive interplay of social relations of translation. In a concluding step, the chapter discusses the politics of both translation and untranslatability in terms of its inextricably international dimension.
Benjamin Herborth
Gravitation of Blockchain in Shared Services: The Next Phase of Service Delivery Strategy
A Blockchain is an immutable, tamper-proof, shared ledger of state changes of a digital asset. It is an incorruptible digital ledger of economic transactions that can be programmed to record not just financial transactions but virtually everything of value. This digital ledger is managed via a distributed network across many nodes that can verify and confirm those transactions through consensus. The implications of the technology are far-reaching, but there are conditions that should be met in order for Blockchain to be a viable solution. The purposes of this research are to (1) explore the current Blockchain use cases in Shared Services, (2) understand the value created by Blockchain in Supply Chain Management, and (3) study the tactical challenges in adopting a Blockchain strategy in Shared Services. In addition to a literature review, we conducted in-depth interviews with selected Shared Services leaders and experts. Results of our research indicate that Blockchain technology can deliver on expectations and that implementation in Shared Services organizations will require simple steps. This study provides the data necessary for executives to build a business case for applying Blockchain technology in Shared Services and investigates the potential that Blockchain has to revolutionize industry and deliver gains in speed, security, transparency, traceability, and accountability for a wide range of business processes.
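As a minimal sketch of the "immutable, shared ledger" concept described above (assuming a toy ledger with no networking, consensus protocol, or signatures), each block commits to the hash of the previous block, so altering any past record is detectable:

```python
# Toy hash-linked ledger: tampering with any earlier block breaks verification.
import hashlib, json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    # Each block must reference the hash of its predecessor.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
append_block(ledger, [{"from": "supplier", "to": "buyer", "item": "invoice-001"}])
append_block(ledger, [{"from": "buyer", "to": "supplier", "amount": 120}])
print(verify(ledger))                                   # True
ledger[0]["transactions"][0]["item"] = "invoice-999"    # tamper with history
print(verify(ledger))                                   # False
```

In a production system the same chaining is combined with distributed consensus and digital signatures, which is what lets many parties share one record without a central bookkeeper.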
Vipin K. Suri, Marianne D. Elia, Jos van Hillegersberg
What Do You See in Your Bot? Lessons from KAS Bank
The introduction of robotic process automation (RPA) has created an opportunity for humans to interact with bots. While the promise of RPA has been widely discussed, there are reports suggesting that firms struggle to benefit from RPA. Clearly, interactions between bots and humans do not always yield expected efficiencies and service improvements. However, it is not completely clear what such human-bot interactions entail and how these interactions are perceived by humans. Based on a case study at the Dutch KAS Bank, this paper presents three challenges faced by humans, and consequently the perspectives humans develop about bots and their abilities to perform work. We then provide a set of five practices that are associated with the management of the interactions between humans and bots.
Ilan Oshri, Albert Plugge
Digital Maturity: A Survey in the Netherlands
Digital transformations create a need for speed. However, the digital maturity required to manage transformation is not well understood. This Dutch research examines inhibitors of digital maturity, focusing on the business as well as the information technology side. Using a literature review and survey research of managers from national and global firms based in the Netherlands, we present a research model and empirically test the hypothesized relationships. The results show the inhibitors of digital maturity, including the capability limitations of both the Chief Information Officer and the business representatives. The research also measured the balance between achieving digital business maturity and information technology maturity, expressed as the number of months required to achieve digital maturity. There is support for the hypothesis that information technology and business digital maturity are balanced.
Erik Beulen
Smart Contracts for Global Sourcing Arrangements
While global sourcing arrangements are highly complex and usually represent large value to the partners, little is known about the use of e-contracts or smart contracts and contract management systems to enhance the contract management process. In this paper, we assess the potential of emerging technologies for global sourcing. We review current sourcing contract issues and evaluate three technologies that have been applied to enhance contracting processes. These are (1) semantic standardisation, (2) cognitive technologies, and (3) smart contracts and blockchain. We argue that each of these has merit for contract management and can potentially contribute to contract management in more complex and dynamic sourcing arrangements. The combination and configuration in which these three technologies will provide value to sourcing should be on the agenda for future research in sourcing contract management.
Jos van Hillegersberg, Jonas Hedman
Chapter 10. Towards a New World Trade Order? The EU, Brexit and North-East Asia
British political leaders regularly speak of the importance of free trade. In doing so, they overstate the importance of trade with North-East Asia but understate that of inward investment, which is of greater importance to the UK than to any comparable economy. The rise in this has gone in tandem with how modern manufacturing has responded to globalisation through global value chains. Bilateral and regional free trade agreements need increasingly complex rules of origin to cover these, reducing their attraction to businesses. Although the UK's preference is to negotiate new bilateral trade agreements, nonetheless, Japan has suggested that it could join the new trans-Pacific CPTPP regional trade agreement and that the EU and CPTPP could together negotiate a further agreement. This would be the world's largest free trade area, covering more than 30% of global GDP. For Japan, the proposal seems to be defined primarily by its rivalry with China and Korea, but it is still an opportunity for the EU to enhance trade relations with the Pacific Rim and to project its values. But the EU should push for inclusion of Korea and Taiwan in the agreement to ensure maximum impact.
Chapter 2. International Organizations
"International Organizations" describes the three main types of IOs—the United Nations System, regional and sub-regional organizations, and other intergovernmental organizations outside the UN system that are built upon cultural, linguistic, religious, or historic ties—as well as international humanitarian organizations. It also provides a comprehensive overview of the various types of UN peace operations, including peacekeeping operations, political missions, police missions, and human rights missions.
Jonas Claes
Chapter 2. Experimental Techniques
This section deals with the experimental techniques that have been used during the thesis. It covers several areas, from material to device preparation and characterisation. At each subsection, we briefly describe an experimental technique, explaining also its utility in our research.
Daniel Montero Álvarez
Offline and Online Citizen Activism in Russia
The article is devoted to the analysis of civic activity in modern Russia. It presents the results of a longitudinal study of civic activity in Russia since 2014, conducted through a survey of experts. Particular attention is paid to the analysis of the development of online and offline civic activity. Considerable attention is also paid to the analysis of mobilization and demobilization in civic activity. The study examines which forms of organization are most significant in civic engagement, how authorities react to their activities, and what tools are used to demobilize citizens. The research shows that the degree of development of civic activity has remained at approximately the same level for several years. At the same time, online activism is more developed than offline activism; it appears more massive and accessible, and less labor-intensive for ordinary participants. The Internet provides a fairly diverse set of tools whose application technologies continue to develop. Internet technologies are used as a mechanism by which political action can be seen by authorities and the public. At the same time, the state is forced to respond to such changes and is stepping up regulation of various forms of activity on the Internet.
Alexander Sokolov, Asya Palagicheva, Yuri Golovin
On the Legal Issues of Face Processing Technologies
The article analyzes the problems and prospects of using recognition technologies for human faces. The authors note that the development of these technologies over recent years brings together the problems of the right to a personal image and the right to privacy, enshrined in the constitutions of most democratic countries. This is because these technologies make it difficult, and in some cases impossible (or inappropriate), to use traditional legal mechanisms to protect these rights. In this regard, the authors propose to extend the concept of personal integrity to the "digital forms of existence" of an individual reflected in personal images, videos, virtual accounts, etc. The authors propose that the approaches formulated in the article serve as the basis of legal regulation of the use of face processing technologies. In particular, there should be a legislative ban on the development and use of programs and systems that search and process photo and video images that are not publicly available, and legal liability measures should be established for violations of this ban. Conversely, a person's posting of such information in the public domain should be interpreted as consent to its search and comparison. Separate issues arise with the processing of photo and video images that subjects them to various kinds of distortion. Although a prohibition on creating such fakes is unreasonable, their publication and distribution may be restricted by law.
Roman Amelin, Sergey Channov
Simulation of Human Upright Standing Push-Recovery Based on OpenSim
Investigating human standing balance mechanisms under the push-recovery task is of great importance to the study of biped robot balance control. During the human push-recovery task, the passive stiffness, stretch reflex, and short-range stiffness control mechanisms of the human ankle joint are the main components of the body's internal balance mechanism. To this end, this paper is dedicated to evaluating the roles of the three aforementioned mechanisms during the human upright standing push-recovery task. Firstly, based on the simulation platform OpenSim 4.0, this paper chooses a simplified lower-limb musculoskeletal model as the research object. Subsequently, this paper completes the design of the passive stiffness, stretch reflex, and short-range stiffness controllers, and completes the static standing test and upright push-recovery simulation of the selected musculoskeletal model. Finally, in order to verify the effectiveness of the simulation, this paper uses electromyography, a force plate, and a motion capture system to collect the relevant data of human upright push-recovery. The experimental and simulation results reveal that the selected musculoskeletal model can basically simulate the process of human upright push-recovery under the joint actions of the three mechanisms noted above, which, to some degree, reflects the effectiveness of the established method. Thus, the established method may provide some insights into the balance control of bipedal robots.
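For illustration only, the toy inverted-pendulum sketch below combines the three ankle mechanisms named in the abstract: passive stiffness (with damping), a delayed stretch reflex, and an incremental short-range stiffness term. All parameter values and the simplified dynamics are invented for this sketch; the study itself uses an OpenSim musculoskeletal model rather than this reduced model.

```python
# Toy single-link inverted pendulum stabilized by three ankle torque terms.
# All gains, the delay, and the dynamics are assumptions for illustration.
import numpy as np

m, h, g, dt = 70.0, 1.0, 9.81, 0.001        # body mass [kg], CoM height [m], gravity, time step [s]
K_passive, B_passive = 300.0, 80.0          # passive ankle stiffness and damping (invented)
K_srs, K_reflex, delay = 900.0, 600.0, 0.04 # short-range stiffness, reflex gain, reflex delay [s]

theta, omega = 0.0, 0.3                     # initial lean angle [rad] and rate [rad/s] after a push
history = [theta]
for step in range(2000):                    # simulate 2 s
    delayed_theta = history[max(0, step - int(delay / dt))]   # reflex acts on a delayed angle
    prev_theta = history[-2] if len(history) > 1 else history[-1]
    torque = -(K_passive * theta + B_passive * omega          # passive stiffness and damping
               + K_srs * (theta - prev_theta)                 # short-range (incremental) stiffness
               + K_reflex * delayed_theta)                    # delayed stretch reflex
    alpha = (m * g * h * np.sin(theta) + torque) / (m * h**2) # inverted-pendulum dynamics
    omega += alpha * dt
    theta += omega * dt
    history.append(theta)

print(f"lean angle after 2 s: {theta:.4f} rad")
```

Even in this crude form, the combined restoring torque exceeds the gravitational toppling torque and the lean angle decays after the push, which is the qualitative behavior the musculoskeletal simulation examines in much greater fidelity.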
Ting Xiao, Biwei Tang, Muye Pang, Kui Xiang
Variable Impedance Control of Manipulator Based on DQN
Under traditional constant impedance control, the robot suffers from fixed stiffness, poor flexibility, heavy wear, and high energy consumption during movement. To address these problems, a variable impedance control method based on the reinforcement learning (RL) algorithm Deep Q Network (DQN) is proposed in this paper. Our method can optimize the reference trajectory and the gain schedule simultaneously according to task completion and the complexity of the surroundings. Simulation experiments show that, compared with constant impedance control, the proposed algorithm can adjust impedance in real time while the manipulator is executing the task, which implies better compliance, less wear, and lower control energy.
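A compact, hypothetical sketch of the core idea: a DQN-style agent selects a discrete stiffness gain at each step for a one-degree-of-freedom impedance-controlled joint, trading tracking error against stiffness (effort). The dynamics, reward, action set, and network sizes are assumptions, and the replay buffer and target network of a full DQN are omitted for brevity.

```python
# Hypothetical variable-impedance sketch: a small Q-network picks one of three
# stiffness gains per step; reward penalizes tracking error and high stiffness.
# Online updates only; replay buffer and target network are intentionally omitted.
import random
import torch
import torch.nn as nn

STIFFNESS_CHOICES = [5.0, 20.0, 80.0]           # candidate impedance gains (assumed)
qnet = nn.Sequential(nn.Linear(3, 32), nn.ReLU(), nn.Linear(32, len(STIFFNESS_CHOICES)))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.95, 0.2

def step_dynamics(pos, vel, k, target, dt=0.02):
    force = k * (target - pos) - 2.0 * vel      # impedance law: spring plus fixed damping
    vel += force * dt                           # unit mass
    pos += vel * dt
    return pos, vel

for episode in range(200):
    pos, vel, target = 0.0, 0.0, 1.0
    for t in range(100):
        state = torch.tensor([pos, vel, target - pos], dtype=torch.float32)
        if random.random() < eps:
            action = random.randrange(len(STIFFNESS_CHOICES))   # exploration
        else:
            action = int(qnet(state).argmax())                  # greedy choice
        k = STIFFNESS_CHOICES[action]
        pos, vel = step_dynamics(pos, vel, k, target)
        reward = -abs(target - pos) - 0.001 * k                 # error vs. effort trade-off
        next_state = torch.tensor([pos, vel, target - pos], dtype=torch.float32)
        with torch.no_grad():
            td_target = reward + gamma * qnet(next_state).max()
        loss = (qnet(state)[action] - td_target) ** 2           # one-step TD error
        opt.zero_grad()
        loss.backward()
        opt.step()

print("learned Q-values near the target:",
      qnet(torch.tensor([1.0, 0.0, 0.0])).detach().numpy().round(2))
```

The learned policy tends toward high stiffness when the tracking error is large and lower stiffness near the target, which is the qualitative compliance-versus-effort behavior the paper's variable impedance controller aims for.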
Yongjin Hou, Hao Xu, Jiawei Luo, Yanpu Lei, Jinyu Xu, Hai-Tao Zhang
Towards Safe and Socially Compliant Map-Less Navigation by Leveraging Prior Demonstrations
This paper presents a learning-based approach for safe and socially compliant map-less navigation in dynamic environments. Our approach maps 2D laser range findings and other measurements directly to motion commands, and a combination of imitation learning and reinforcement learning is deployed. We show that, by leveraging prior demonstrations, the training time for RL can be reduced by 60% and its performance is greatly improved. We use Constrained Policy Optimization (CPO) and specially designed rewards so that a safe and socially compliant behavior is achieved. Experimental results show that the obtained navigation policy is capable of generalizing to unseen dynamic scenarios.
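An illustrative sketch of the kind of "specially designed reward" mentioned above, combining progress toward the goal, a collision penalty, a near-obstacle term from the laser scan, and a social-comfort penalty for passing too close to pedestrians. The weights and distance thresholds are assumptions, not the values used in the paper.

```python
# Hypothetical reward shaping for socially compliant map-less navigation.
import numpy as np

def navigation_reward(prev_goal_dist, goal_dist, laser_ranges, pedestrian_dists,
                      reached_goal, collided):
    if collided:
        return -20.0                                # hard safety penalty
    if reached_goal:
        return 20.0
    reward = 2.5 * (prev_goal_dist - goal_dist)     # reward progress toward the goal
    if np.min(laser_ranges) < 0.4:                  # discomfort near static obstacles
        reward -= 0.5
    for d in pedestrian_dists:                      # social-compliance term
        if d < 1.2:                                 # intrusion into personal space
            reward -= 0.3 * (1.2 - d)
    return reward

# Example: the robot advanced 0.1 m, keeps 0.6 m from walls, passes one person at 0.9 m.
print(navigation_reward(3.0, 2.9, np.array([0.6, 1.5, 2.0]), [0.9], False, False))
```

In a constrained formulation such as CPO, the collision term would typically be handled as a cost constraint rather than folded into the reward, while the social term shapes behavior around pedestrians.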
Shiqing Wei, Xuelei Chen, Xiaoyuan Zhang, Chenkun Qi
Management and Service System of CAS Key Laboratory
The key laboratory system is an important part of the scientific research and technological innovation system and the core strength of basic research, applied research, and high-tech frontier exploration in the Chinese Academy of Sciences (CAS). The purpose of developing the CAS Key Laboratory management and service system (CAS-KLMSS) is to digitalize the management services of CAS key laboratories, so that managers can conveniently understand and oversee the research and operation of each key laboratory and more support can be provided for implementing the innovation-driven development strategy. This paper designs a platform architecture based on data flows and approval processes for different users and work scenarios, according to the actual needs of laboratory management. It introduces the idea of populating the system database through the filing of laboratory annual reports, so that all data can be captured, accounted for, and analyzed. The system's security mechanism ensures that all data transmission and reading are encrypted. Finally, the paper outlines the future development trend of the informatization of key laboratory management in CAS.
Hongfei Hou, Xiaoning Li, Xuerui Bai, Jue Wang, Fan Yang, Ying Wang
Construction of a Scientific Research Integrated Management Information Service Platform Integration in a Form of Cross-Platform and Multi-disciplinary Organization
In the era of big science, scientific research is characterized by cross-platform, multi-disciplinary, and large-scale collaboration. Based on these characteristics, and by organically combining information with scientific research and integrating new-generation information technologies (e.g., the Internet of Things, cloud computing, big data, and the mobile Internet), this study aimed to reconstruct a cloud-center-based technical architecture and build an integrated scientific research management information service platform with scientists at the center, focusing on scientific research activities and covering integrated solutions involving human resources, finance, support conditions, and daily office work. The platform was expected to provide the following integrated information service functions: management collaboration for scientific research projects, digitalization of scientific research activities, compliance of scientific research funds, sharing of equipment and instruments, servitization of talent teams, e-commercialization of supporting activities, cyberization of scientific communication, and systematization of scientific and technological think tanks. Moreover, the platform was expected to achieve the following objectives: to realize the horizontal closed-loop management and vertical business interconnection of research units through a human-centered application experience; to greatly improve business compliance and reduce the management risk of research units through refined full-cost control; and to provide information support for improving the management system, optimizing the corporate governance structure, and promoting multi-disciplinary and cross-field big science cooperation and innovation.
Binjian Qiang, Rui Zhang, Yugang Chen, Tongtong Zhang
Opportunities and Challenges for Biometrics
Biometrics refers to the science and technology of automatic identification achieved by computers through acquiring and analyzing physiological and behavioral characteristics of the human body. The purpose of biometrics research is to give computers the advanced intelligence to automatically detect, capture, process, analyze, and identify digital biometric signals, that is, to make machines able to "see and hear". This is one of the basic functions of machine intelligence as well as one of the most significant challenges in theoretical and applied research that human beings face. Biometrics research is therefore important in terms of both academic significance and practical value. In recent years, biometrics has become an important part of national strategies such as the "Internet + Action Plan" and the "Development Plan on the New Generation of Artificial Intelligence". At the same time, it has already become a new growth point for the strategic high-tech and electronic information industry in the field of national and public security. This paper introduces the research progress of several common biometric modalities such as face, iris, fingerprint, and gait; summarizes development trends and opportunities of current biometrics technology; and analyzes the main challenges on the road to the development of a new generation of biometrics. Finally, this paper provides some suggestions regarding the future development of biometrics.
Zhenan Sun, Qi Li, Yunfan Liu, Yuhao Zhu
Biomedicine Big Data—Trends and Prospect
This forward-looking review focuses on the development and applications of Biomedicine Big Data (BMBD) and on its role in the engineering system for data management, in scientific and technological research and development, and in social and economic transformation. The review starts with an elaboration on the complex connotations of BMBD from an inter-disciplinary point of view. It then explores the implications of BMBD in sectors such as life science research, medical and health institutions, and the biotechnology and bio-medicine industries, in connection with the challenges and opportunities faced by social and economic development. The recent COVID-19 outbreak is used as an illustrative case study. The review ends with an analysis of a decade of BMBD practice, both domestically and abroad, with suggestions for policy-making and solutions to tackle major challenges from China's perspective. It is hoped that BMBD-related institutions, including administrative, academic, industrial, financial, and social organizations, as well as practitioners and users, will benefit from this insightful summary drawn from past decades of BMBD practice. Any critical comments and constructive suggestions are sincerely welcomed by the authors.
Guoping Zhao, Yixue Li, Daming Chen, Yan Xiong
37. Toxic Leadership: Managing Its Poisonous Effects on Employees and Organizational Outcomes
This chapter gives an overview of the research on toxic leadership with regard to its definitions and the different ways in which toxic leadership has been dimensioned and measured. The chapter describes predictors of toxic leadership, poisonous effects of toxic leadership on employees, and organizational outcomes as well as a toxic leadership process. The author describes possible ways employees, HR personnel, and organizations may cope with and manage toxic leaders as well as the bright sides of toxic leadership with regard to its positive effects for individuals and organizations. Lastly, this chapter proposes future directions for researchers and practitioners.
Emem Laguda
10. Diverse Personalities, Egos, Roles, and Relations: Toward Workplace Wellbeing
This chapter introduces and integrates four theoretical and practical approaches to workplace wellbeing: (1) personality approach that addresses personal identity theories, personality types (e.g., Myers and Briggs), and personal strengths (e.g., StrengthsFinder) to enhance self-awareness; (2) ego or adult development approach with its various developmental stages that chart the dynamic developmental stages as experienced over a lifetime; (3) multiple role identity approach that leads to the fluidity of leadership and followership, which has the positive potential for enhanced individual wellbeing and organizational team performance; and (4) relational model approach that extends its four fundamental forms of interpersonal relationships (e.g., Communal Sharing, Authority Ranking, Equality Matching, and Market Pricing) further into the six complex social relations in the metarelational model with proper inclusion and preclusion of social relations that enhance workplace wellbeing within the bounds of legality, morality, and cultural norms. These four approaches assume diverse intrapersonal and interpersonal perspectives and provide varying yet connected theoretical understanding and practical implications for a workplace wellbeing located at both the individual and organizational level. For each approach, the theoretical definitions presented are followed by practical explanations and explorations supported by examples of what the theories imply for workplace wellbeing. Workplace wellbeing comes from both personal and relational perspectives. This chapter demonstrates that an integrated understanding of self and the self's interdependency with others through dynamic and permissible relations is instrumental to workplace wellbeing.
Petros G. Malakyan, Tim Schlak, Wenli Wang
43. How Wakeful Leaders Create Flourishing Workplaces
Wakefulness is reviewed in this chapter as a critical characteristic for leaders. It is described as the way of an awakened leader and an effective way to secure flourishing workplaces. Wakefulness is subdivided into three dimensions: internal, external, and integrated. Each of these dimensions is briefly reviewed. In a review of ways for wakeful leaders to create flourishing workplaces, the chapter discusses two critical strategic foundations: (1) the macro-to-micro approach, in which leaders first consider the macro needs and then formulate ways to fulfill those needs, and (2) ecumenical learning, in which the well-being of all stakeholders is considered. In the macro-to-micro approach, profits are reformatted from a starting point to a rewarding consequence of need-fulfilling actions, and gratification of all stakeholders at all levels is guaranteed. Five considerations are thereby provided, which leaders could use as a guide toward implementing the macro-to-micro approach. In the discussion of ecumenical learning, a comprehensive and revolutionary style of organizational learning, the chapter presents a number of factors to be considered, from the moment a deviation surfaces or an insight for a change in the status quo appears to the evaluation of the ramifications of this application to stakeholders inside and outside the direct organizational or even industrial environment.
Joan Marques
54. Let My People Go: Emancipating Values as a Remedy for Religious Role Conflict
A decades-long decline in job satisfaction in the United States has inspired researchers and practitioners to seek out changes that might explain the trend and suggest solutions. The numbers remain dismally low, with half of American workers still declaring they are dissatisfied with their jobs. One prognosis that has been offered for this long-term trend is increasing role conflict. There is evidence that demographic, cultural, and even political shifts resulted in an increase in role conflict issues for American workers. Many organizations have made an effort to address the issue, adding perquisites such as company-provided daycare, flextime, family-leave plans, and even concierge services. Meanwhile, these same organizations, ironically often as part of their diversity initiatives, have implemented policies which segregate workers from their deeply held values. Religious role conflict has been virtually ignored by both scholars and industry. This chapter will examine the issue of religious role conflict in modern organizations and suggest actions which emancipate worker values as a potentially effective treatment.
Mumphord Kendall
31. Improving Engagement During Times of Change
The vast majority of change initiatives fail to meet their objectives, and most decimate their organization's levels of engagement in the process. The effect of plummeting employee engagement during turbulent times creates a downward spiral that can result in permanent damage to the organizational culture and capabilities. This phenomenon has led some to believe that change can only be achieved at the cost of employee engagement and that engagement can only be improved during periods of stability. Our work suggests that this is a false dichotomy. Through careful planning and active management, some organizations utilize these times of change to deploy strengths-based, positive approaches to successfully deliver their change agenda while simultaneously cultivating greater work meaningfulness and engagement. In this chapter, we examine a case study that demonstrates, through the use of Appreciative Inquiry (AI) as one such approach, how taking on aggressive change initiatives in this manner can be leveraged as an opportunity for widescale reinvention of the organization, enabling greater work meaningfulness, engagement, and flourishing.
Melissa A. Norcross, Patrick Farran
14. Happiness and Workplace Well-Being: Transformational Leadership and the Role of Ethical and Spiritual Values
Happiness and well-being – being the main objectives of human pursuit (Fisher 2010) – have attracted attention in the organizational context (Cooper and Marshall 1978; Smith et al. 1995; Danna and Griffin 1999; Simone 2014), given the growing emphasis on quality of life at work. Increasingly, a larger part of waking life is being spent at the workplace, resulting in workplace experiences spilling over into personal and family life. Happiness and well-being are also known to have a positive effect on customer care and, consequently, on profitability and productivity. It has been observed that transformational and other forms of positive leadership have great potential to contribute significantly to individuals' happiness and well-being (Turner et al. 2002; Sivanathan et al. 2004) by ensuring justice for different stakeholders and customers, but also by providing money as well as meaning, which helps to eradicate negativity and useless stress and contributes to positive experiences. Transformational leadership and its variants like authentic leadership, servant leadership, ethical leadership, and responsible leadership are deeply rooted in spiritual and ethical values (Kumar and Vij 2014). Practicing moral values stemming from spirituality contributes to individuals' happiness and well-being (Ricard 2008). This chapter explores how transformational leadership and its variants contribute to happiness and workplace well-being and how spiritual values contribute to transformational leadership as well as to the happiness and well-being of people in the workplace.
Varinder Kumar, Satinder Dhiman
29. Work Alienation and Disengagement: Sexual Harassment and Uber
Work is fundamental to human flourishing. A toxic work environment can lead to work alienation and disengagement, adverse to human flourishing. Toxic leadership, including sexual harassment by managers, a form of bullying, creates a toxic work environment. Workers value transparency and fairness. The typical way that sexual harassment complaints are resolved in work organizations involves mandatory arbitration and nondisclosure agreements. Not only are these processes non-transparent, but they also enable the continuation of the toxic behavior. The #MeToo movement led to whistleblowing about sexual harassment at the Weinstein Company, Fox News, CBS, NBC, and Uber. Investigations conducted at Uber, following a complaint by a female engineer posted on a public blog, resulted in the resignation of the founder of Uber as CEO and widespread change in corporate procedures, including performance management and compensation systems. New York, New Jersey, and California have all prohibited secret nondisclosure agreements settling sexual harassment complaints. High-tech companies including Uber, Microsoft, Facebook, and Google have voluntarily abandoned mandatory arbitration of sexual harassment claims. Significant culture change is required to eradicate sexual harassment in the workplace, so that the sex roles of female workers are not treated as salient, but female workers are instead judged in terms of the effectiveness of their job performance. Performance management and compensation systems for executives are required to create real culture change in work organizations. The focus on improving organizational transparency and fairness could appropriately be expanded to include race harassment and gender identity issues in the workplace.
Paula Alexander Becker
12. Seeking Meaning for the Contemporary Workplace: Insights from the Desert Fathers and Mothers
Contemporary organizations have become increasingly aware of an employee's desire for meaningful work. According to Afsar, Badir, and Kiani (J Environ Psychol 45:79–88, 2016), employees who feel a "sense of self-worth, meaning, interconnection, interdependence and collective purpose" (pp. 95–96) in their work are more likely to be intrinsically motivated to accomplish tasks and be more innovative (Afsar and Rehman, J Manag Spiritual Relig 12:329–353, 2015). Organizations that create a climate characterized by trust, open communication, and service see an increase in productivity and efficiency, a reduction in expenditures, higher customer satisfaction, lower rates of employee turnover, and deeper organizational engagement (Afsar and Badir, J Work Learn 29:95–108, 2017; Podsakoff et al., J Appl Psychol 94(1):122–141, 2009). As twenty-first-century organizations and their members continue their quest for greater productivity and purpose, one historically distant source has emerged as an enduring reservoir of wisdom. Distressed by a lack of respect for human dignity and authentic community, the desert mothers and fathers disentangled themselves from secular society in search of a deeper grasp of interiority. For instance, in his Conferences, fifth-century C.E. monk and writer John Cassian (1997) tells the stories of individual spiritual leaders who lived and prayed in the deserts of Egypt. One such spiritual leader, Abba Moses, explains that while the ultimate goal of monastic life is the kingdom of God, the more immediate goal (that which leads to the kingdom) is the acquisition of puritas cordis, or purity of heart. Thomas Merton (The wisdom of the desert. New Directions, New York, 1960), a twentieth-century C.E. monk and writer, wrote that one who is pure of heart "has an immediate apprehension of the way things really are" (p. 8); i.e., such a person is not prey to extreme emotional reactions, considers things from a transcendent point of view, has discretion, and responds appropriately and completely to each person and every situation he or she encounters. In other words, such a person embodies personal well-being. Whether religious or secular, leaders are prey to addictive and neurotic thinking, segmentalism, and psychological projection, and their dysfunctional thinking and acting leads to unhealthy organizational environments. The concept of purity of heart, developed in the early years of Christian monasticism, has much to offer leaders and scholars of leadership, not only in understanding "the way things really are" in organizations but in how personal well-being is connected to community well-being, and vice versa. To that end, the purpose of this chapter is to draw wisdom from the stream of experience shared by the early Christian monks – the desert mothers and fathers – and illustrate their relevance to the contemporary quest for workplace well-being and human flourishing.
Michael R. Carey, Dung Q. Tran
Hardness vs Strength for Structural Steels: First Results from Experimental Tests
Cultural heritage protection and restoration are fundamental matters. Intervention design requires preliminary modelling and analysis to carefully simulate the structural behaviour of existing buildings. The identification of constructive schemes is based on direct surveys, whereas direct tests are required to reveal the mechanical and physical properties of materials and their degradation status. Clearly, higher knowledge levels correspond to smaller penalties in terms of material performance. For metal structures, regulations provide for the employment of destructive investigations only. Furthermore, the sampling of specimens often collides with the safety requirements of artifacts. Therefore, there is a strong need for non-destructive investigations, such as the Leeb method, for a reliable in-situ characterization of carpentry steels. A fundamental step towards reaching this aim is the identification of a theoretical relationship between Leeb hardness values, measured in-situ, and experimental tensile strengths. In order to identify a generally valid correlation, data from the past four years were collected from the database of the Tecnolab s.r.l. company. The experimental setup was based on in-situ Leeb analysis followed by sample collection for subsequent tensile tests performed in the laboratory. The experimental data, compared to the trend provided by internationally recognized guidelines, yield resistances that the regulations tend to overestimate. Therefore, designing an intervention using these resistances would not be on the safe side. Further analyses should be performed to investigate determinants related to in-situ conditions altering the steel resistance, with the aim of identifying potential corrective factors.
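As an illustrative aside to the abstract above: a hardness-to-strength correlation of this kind is typically obtained by regressing laboratory tensile strengths on the corresponding in-situ hardness readings. The following minimal sketch fits such a linear correlation by ordinary least squares; the numbers and variable names are invented placeholders, not the Tecnolab data or the authors' actual model.

import numpy as np

# Hypothetical paired measurements: in-situ Leeb hardness (HL) and
# laboratory tensile strength in MPa (illustrative values only).
leeb_hardness = np.array([350.0, 380.0, 400.0, 420.0, 450.0, 470.0])
tensile_mpa   = np.array([330.0, 365.0, 395.0, 410.0, 455.0, 480.0])

# Fit a linear correlation strength = a * HL + b by least squares.
a, b = np.polyfit(leeb_hardness, tensile_mpa, deg=1)

# Coefficient of determination as a rough goodness-of-fit check.
pred = a * leeb_hardness + b
ss_res = np.sum((tensile_mpa - pred) ** 2)
ss_tot = np.sum((tensile_mpa - tensile_mpa.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"strength = {a:.3f} * HL + {b:.1f} MPa  (R^2 = {r2:.3f})")

In practice such a fitted curve would then be compared against the guideline correlation, which is the comparison the abstract reports as over-predicting resistance.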
Antonio Formisano, Antonio Davino
A State-of-the-Art Review of Nature-Inspired Systems for Smart Structures
Since the dawn of humanity, nature has been a source of inspiration for developing engineering systems, referred to as "nature-inspired systems". With respect to smart structures instrumented with smart structural health monitoring (SHM) systems, nature-inspired systems may provide promising advancements, for example, by executing self-healing or self-diagnosing processes. However, for developing optimum strategies towards deploying nature-inspired systems to smart structures, the plenitude of nature-inspired systems in SHM needs to be classified. This paper aims at reviewing the potential of nature-inspired systems to advance the performance of smart structures. Upon a brief introduction to smart structures and nature-inspired systems, a state-of-the-art review of nature-inspired systems that exhibit potential to advance smart structures is presented, providing decision support on how to advantageously apply the benefits of nature-inspired systems to smart structures.
Henrieke Fritz, Kay Smarsly
Smart Composite Rebars Based on DFOS Technology as Nervous System of Hybrid Footbridge Deck: A Case Study
The paper presents the concept and application of a smart pedestrian footbridge equipped with DFOS strain sensors called EpsilonRebars. These sensors, in the form of composite rods that simultaneously serve as structural reinforcement for the concrete deck, were placed along the entire span of nearly 80 m. Thanks to the application of the distributed optical fibre sensing (DFOS) technique, it is possible to measure strains, displacements (deflections) and temperature changes in a geometrically continuous manner along the entire length of the footbridge. The sensors integrated with the deck were used to measure selected physical quantities during the hydration of early-age concrete (thermal-shrinkage strains) as well as during the load tests. Sensor readings can be performed at any time during the structure's operation in order to assess its technical condition (e.g. crack formation) and to analyze the impact of environmental conditions and other factors, e.g. rheological phenomena.
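As a generic illustration of how distributed strain data can yield deflections (one common post-processing route, not necessarily the one used for the EpsilonRebar installation): strains from two fibre lines at different heights of the cross-section give curvature, and curvature integrated twice along the span gives an estimate of deflection. The span length, lever arm, strain profiles and boundary treatment below are all assumptions for illustration.

import numpy as np

# Assumed inputs: strains from two DFOS lines (top and bottom of the deck
# cross-section), sampled every 0.5 m along an 80 m span.
x = np.arange(0.0, 80.0 + 0.5, 0.5)          # position along the span [m]
h = 1.2                                       # assumed lever arm between fibres [m]
eps_top = -50e-6 * np.sin(np.pi * x / 80.0)   # illustrative strain profiles [-]
eps_bot = 150e-6 * np.sin(np.pi * x / 80.0)

# Curvature from the strain difference over the lever arm.
kappa = (eps_bot - eps_top) / h

# Integrate curvature twice (trapezoidal rule), then apply a linear correction
# so the deflection vanishes at both supports of a simply supported span.
theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(x))))
w = np.concatenate(([0.0], np.cumsum(0.5 * (theta[1:] + theta[:-1]) * np.diff(x))))
w -= w[0] + (w[-1] - w[0]) * (x - x[0]) / (x[-1] - x[0])

print(f"estimated midspan deflection magnitude: {abs(w[len(w)//2])*1000:.1f} mm")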
Rafał Sieńko, Łukasz Bednarski, Tomasz Howiacki
Path Identification of a Moving Load Based on Multiobjective Optimization
This contribution presents and experimentally tests a nonparametric approach for the indirect identification of 2D paths of moving loads, based on the recorded mechanical response of the loaded structure. This is an inverse problem of load identification. The proposed method is based on multicriterial optimization with two complementary criteria. The first criterion is purely mechanical: it quantifies the misfit between the recorded mechanical response of the structure and its predicted response under a given trajectory. The second criterion is geometric: it represents heuristic knowledge about the expected geometric regularity of the load paths (such as constraints related to linear and angular velocity), and in fact it can be considered a regularizing criterion. A multicriterial genetic search is used to determine and advance the Pareto front, which helps to strike a balance between the response fit and the geometric regularity of the path. The proposed approach is tested in an experimental laboratory setup of a plate loaded by a line-follower robot and instrumented with a limited number of strain gauges.
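A minimal sketch of the bi-criteria evaluation idea described above: each candidate path receives a response-misfit score and a geometric-irregularity score, and the non-dominated (Pareto-optimal) candidates are retained. The scoring functions, the candidate generation and the data are placeholders, not the authors' actual response model or genetic search.

import numpy as np

rng = np.random.default_rng(0)

def response_misfit(path, measured):
    # Placeholder criterion 1: discrepancy between the measured response and a
    # trivial stand-in response model built from the path coordinates.
    predicted = path.sum(axis=1)
    return float(np.sum((predicted - measured) ** 2))

def geometric_irregularity(path):
    # Placeholder criterion 2: penalize abrupt changes of step (velocity).
    steps = np.diff(path, axis=0)
    return float(np.sum(np.diff(steps, axis=0) ** 2))

# Random candidate paths (in the real method these come from a genetic search).
measured = np.linspace(0.0, 10.0, 20)
candidates = [np.cumsum(rng.normal(0.25, 0.1, size=(20, 2)), axis=0) for _ in range(50)]
scores = np.array([[response_misfit(p, measured), geometric_irregularity(p)]
                   for p in candidates])

# Keep the non-dominated candidates (the Pareto front over the two criteria).
pareto = [i for i, s in enumerate(scores)
          if not any(np.all(t <= s) and np.any(t < s) for t in scores)]
print("Pareto-optimal candidate indices:", pareto)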
Michał Gawlicki, Łukasz Jankowski
Fatigue Reliability Assessment of Pipeline Weldments Subject to Minimal Detectable Flaws
The study presents probabilistic modeling of the fatigue crack growth prediction of pipeline steel weldments in nuclear power plants, in the context of an integrated structural health monitoring setting. Fatigue testing of the crack growth in the fusion line region of the steel weldments is performed using compact-tension specimens. In particular, the uncertainty of the crack growth due to different crack plane orientations is investigated in detail. A total of six specimen orientations are manufactured and tested according to the ASTM standards to obtain the fatigue crack growth data. The Bayesian method is used to identify the probability density function of the parameters of the Paris fatigue crack growth model. Using the concept of damage tolerance, the reliability model of the pipeline weldments, given the minimal detectable internal flaws of the ultrasonic nondestructive evaluations, can be established. The time-dependent reliability of the pipeline weldments is obtained using the efficient first-order reliability method. Results indicate that the uncertainty of the orientations of the flaws plays an important role in the overall reliability of the pipeline weldments.
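To make the damage-tolerance logic concrete, here is a minimal sketch: grow a crack from the minimal detectable flaw size using Paris' law, treat the Paris parameters as uncertain, and estimate the probability that the crack reaches a critical size within the service life by crude Monte Carlo sampling (standing in for the Bayesian identification and first-order reliability method used in the study). All material values, flaw sizes and distributions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def cycles_to_failure(a0, a_crit, C, m, delta_sigma, Y=1.0, da=1e-5):
    # Integrate Paris' law da/dN = C * (dK)^m with dK = Y * dS * sqrt(pi * a).
    a, n = a0, 0.0
    while a < a_crit:
        dK = Y * delta_sigma * np.sqrt(np.pi * a)   # MPa*sqrt(m)
        n += da / (C * dK ** m)                      # cycles spent growing by da
        a += da
    return n

# Illustrative inputs: minimal detectable flaw, critical size, stress range.
a0, a_crit, delta_sigma = 1e-3, 20e-3, 80.0          # m, m, MPa
N_service = 2e6                                       # cycles of interest

# Uncertain Paris parameters (lognormal C, normal m) standing in for the
# posterior the study identifies by Bayesian inference.
samples = 500
C = rng.lognormal(mean=np.log(5e-12), sigma=0.3, size=samples)
m = rng.normal(3.0, 0.1, size=samples)

failures = sum(cycles_to_failure(a0, a_crit, c, mm, delta_sigma) < N_service
               for c, mm in zip(C, m))
print(f"estimated probability of failure within service life: {failures/samples:.3f}")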
Xiaochang Duan, Xinyan Wang, Xuefei Guan
Chapter 8. Simulation and Verification on the Proposed Model and Control Strategy
The preceding chapters of this book discussed vehicle state and TRFC estimation, direct yaw moment control, torque vectoring, and energy-efficient control. This chapter validates the effectiveness and robustness of the proposed model, control algorithms and methods under different maneuvers based on the simulative experimental platform, as shown in Figure 8.1.
Xudong Zhang
Chapter 5. Mechanical Performance of Pulsed Alternators
Compensated pulsed alternators (CPAs) typically operate at high speed to obtain higher inertial energy storage and reverse potential for higher energy storage and power densities. A CPA is therefore subjected to extremely high mechanical stress. Moreover, during pulsed discharging, the excitation current and discharge current are very high, so a very strong magnetic field exists in the motor instantaneously. Some components of the motor are therefore subjected to a great instantaneous electromagnetic force, which directly threatens the safety of the CPA, so the mechanical stress, electromagnetic stress and strength of key components are required to be analyzed and verified. The introduction of fiber resin composites with a high strength-to-density ratio decreases the mass and increases the speed of the air-core CPA, which greatly improves the energy density and power density. However, due to the anisotropy of the mechanical properties of fiber resin composite materials, the transverse strength of the fiber is much lower than the longitudinal strength. It is therefore necessary to carry out a safety assessment for a reasonable parameter design.
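A back-of-the-envelope sketch of why high rotational speed drives the mechanical stress problem described above: the mean hoop stress in a thin rotating ring scales as rho * omega^2 * r^2, which can be compared against an assumed allowable strength of the composite banding. The density, radius, speed and allowable stress below are illustrative assumptions, not values from the chapter.

import numpy as np

# Illustrative rotor parameters (assumptions, not values from the chapter).
rho = 1800.0                  # density of the composite banding [kg/m^3]
r = 0.25                      # mean banding radius [m]
rpm = 15000.0                 # rotational speed [rev/min]
allowable = 1.5e9             # assumed allowable hoop strength [Pa]

omega = 2.0 * np.pi * rpm / 60.0          # angular speed [rad/s]
hoop_stress = rho * omega ** 2 * r ** 2   # thin-ring hoop stress sigma = rho * v^2 [Pa]

safety_factor = allowable / hoop_stress
print(f"tip speed     : {omega * r:7.1f} m/s")
print(f"hoop stress   : {hoop_stress/1e6:7.1f} MPa")
print(f"safety factor : {safety_factor:7.2f}")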
Shaopeng Wu, Shumei Cui
"Wer informiert hier eigentlich wen?" Qualität des Dialogs zwischen verschiedenen Gesundheitsberufen und PatientInnen
How do the various professional groups in the healthcare system work together to support patients in changing their eating and exercise behaviour? As part of a study in South Tyrol, general practitioners, nurses, nutrition therapists and patients were interviewed on this question. One finding is that although every professional group claims to be responsible for these lifestyle changes, each actually attends to them in very different ways. The study also clearly revealed the gap that exists between the professional groups: they work predominantly on their own, which raises the question of what dialogue is actually taking place.
Heike Wieser, Harald Stummer
Chapter 12. Cost Management
Also a Matter of Compliance
Cost management implies the shared responsibility of all employees in the company. Everyone is involved, each person in his or her position, according to his or her tasks and possibilities. Everyone can contribute, whether through deliberate savings or good ideas, or at least through consistent action.
Werner Heister, Julia Tiskens
Chapter 3. Cognitive Psychological Influences
This chapter examines the cognitive psychological influences that shape dominant state threat perceptions and their proliferation responses. Because the methodology incorporates variables that influence a powerful state's operational milieu, descriptive data illustrates the importance of these influences.
Brian K. Chappell
Chapter 7. Analysis of Data
The importance of cognitive psychological influences, national security policies, and military capabilities in shaping a power-projecting state's perceptions and response to emerging nuclear threats was discussed in Chapters 2, 3, 4, and 5. The Analysis of Data chapter incorporates these influences and applies them to the four proliferation cases using the Differential Effects of Threat Perception's two decision-tree heuristics to forecast the power-projecting state's degree of perceived threat and how it is projected to respond to the non-power-projecting state's proliferation efforts.
Chapter 2. Literature Review
To answer the central question of this study, "When and why do states that have the military capability to use force to disrupt or destroy a proliferating state's nuclear facilities choose to take no action, use military force, or pursue coercive diplomacy?" the research first discusses the contributions and shortcomings of the existing proliferation literature. This critique contextualizes the foundation of nuclear proliferation literature before transitioning from the study of the aggregate to the individual effects of proliferation by discussing Matthew Kroenig's power-based Differential Effects of Nuclear Proliferation Theory, which argues nuclear proliferation has varying effects on differently situated power-projecting states and these differing effects account for the variations in their proliferation responses.
Chapter 8. Conclusion
This book explored the role threat perceptions play in state responses to nuclear proliferation by conducting a comparative analysis of the United States and Israel over four case studies: Iraq (1981 and 2003), Syria (2007), and Iran (2015).
Chapter 6. The Middle East States and Threat Perceptions
Adversarial rhetoric has a cognitive influence on the power-projecting state decision-makers' threat perceptions and on how they perceive the intent of the non-nuclear-weapon-acquiring state's proliferation ambitions.
Chapter 4. National Security Policy and Nuclear Policy
A state's national security policy provides the structural framework for protecting its citizens and national security interests. These objectives provide the foundation for how a state will conduct its foreign policy, address threats, and promote its values and economic well-being. A state's nuclear policy provides the governing philosophies of its nuclear arsenal, primarily concerning their security and conditions under which nuclear weapons could be employed.
Chapter 6. Restructuring Banks and Borrowers
The systemic banking crisis subsided by March 1999, but it took five more painful years before the Japanese banking system recovered full health. This chapter reviews the process to restructure banks and borrowers and the impacts the process had on the labor force. It then tries to estimate the costs borne by corporates, banks, and taxpayers and explore what would have been the consequence if the immediate clean-up option had been chosen in 1992.
Ryozo Himino
E-Government Mechanisms Development: Comparative Analysis of British and Russian Cases
The main goal of the study is to identify shortcomings in the implementation of e-government mechanisms in the Russian Federation and to propose recommendations for improving these mechanisms. Particular attention in the article is paid to current trends and problems in the field of e-government, which stem from high rates of technological development, informatization and digitalization. The relevance of this study lies in the need to improve the state system and its industries in the context of digitalization, as well as its dynamic transformations, in order to meet the urgent needs of citizens. In this context, the successful experience of developing and operating e-government in the United Kingdom is instructive. In order to achieve the goal set in the study, the authors conduct a composite graphical analysis of the UK and Russian e-government websites. As a result of identifying the significant advantages of e-government in the UK, and taking into account the current challenges of digitalization and the needs of society, the authors develop recommendations for improving e-government mechanisms in Russia.
Svetlana Morozova, Alexander Kurochkin
Intelligent Legal Decision Support System to Classify False Information on Social Media
In this study, a decision support system for governance in the field of law was developed, and the existing decision-making model for conducting linguistic expert analysis of inaccurate public information in online media and social networks was improved, taking into account the human factor. The results of the proposed system are presented as a set of formed recommendations, based on which the user makes a decision. The specific feature of the decision support system (DSS) is that it works with several types of false information in accordance with the Russian legislation against "fake news" addressed in the study. Adapted Bayes classification algorithms were studied and built for the effective operation of the system's decision-making and false-information classification module. These algorithms were implemented in the system, and a computational experiment on text classification was performed. The study examined the features of the Russian legislation on the dissemination of false information and described the components and functionality of the proposed intelligent legal DSS, as well as its efficiency. This solution implies a widespread use of systems, application packages, special software and legal support for analytical work, obtaining forecasts and conclusions on the processes under study based on databases and expert judgment, considering the human factor and the active influence of the controlled system on the governance process.
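For readers unfamiliar with the classification technique mentioned above, here is a minimal sketch of a multinomial Naive Bayes text classifier of the same general family; the tiny training set, labels and tokenization are invented for illustration and have nothing to do with the authors' corpus or the Russian legal categories of false information.

import math
from collections import Counter, defaultdict

# Tiny invented training corpus: (text, label).
train = [
    ("official statement confirms the report", "reliable"),
    ("agency publishes verified statistics", "reliable"),
    ("shocking secret cure hidden from public", "false"),
    ("anonymous post claims miracle cure", "false"),
]

# Count word frequencies per class and class priors.
word_counts = defaultdict(Counter)
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.lower().split())

vocab = {w for c in word_counts.values() for w in c}

def predict(text):
    tokens = text.lower().split()
    best_label, best_logp = None, -math.inf
    for label in class_counts:
        # log prior + sum of log likelihoods with Laplace (add-one) smoothing
        logp = math.log(class_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for w in tokens:
            logp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if logp > best_logp:
            best_label, best_logp = label, logp
    return best_label

print(predict("post claims a secret miracle cure"))   # expected: "false"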
Arsenii Tretiakov, Elizaveta Kobets, Natalia Gorlushkina, Viktor Kumpan, Alexandra Basakina
Algorithmic Panopticon: State Surveillance and Transparency in China's Social Credit System
This article examines China's Social Credit System to illustrate how information and communication technologies bring forth new forms of interaction between the state and its citizens. In particular, it asks how the transparency generated by the Social Credit System enables new forms of social control, trust, and self-regulation. The study provides a descriptive account of the Social Credit System's basic design elements and the political intentions behind its implementation. Based on Foucault's model of the panopticon, the study then derives three basic parameters, each of which relate to the system's capacity to create transparency and to reconfigure government-citizen relations. The study finds that the system increases the control of the government over society, likely diminishes trust, and reduces the freedom to act. However, compared to the clientelism and arbitrary decision-making of previous decades, the precise and depersonalized standards of the Social Credit System can be seen as an improvement that enables individuals' capacity to self-regulate. This theoretical and analytical study thus adds to the debate about how government through algorithms rearranges practices of state power and control.
Viktor Suter
Legal Framework for the Use of Drones by Public Entities for Monitoring and Control Purposes in Russia
The International Civil Aviation Organization (ICAO), in its 2018 annual report, noted an unprecedented increase in the use of small unmanned aircraft systems (UAS), which represents a serious challenge for regulators in terms of safety and security. Moreover, there is a general trend of filing special requests to the ICAO for the preparation of harmonizing documents in the field of legal regulation of drones. According to recent research, both in Russia and abroad, drones are used in more than 50 sectors of the economy to solve more than 450 commercial tasks, which allows companies to increase profits, reduce costs, and optimize many processes. The top five areas of drone use include: medicine and health care; the oil and gas industries; protection of nature reserves and forests; urban planning and surveying; and delivery of light cargo by transport companies. At the same time, considering current practice, many specialists note the limited applicability of the tools provided for by the UN Convention on International Civil Aviation and traditional air regulations in the case of UAS, and the urgent need for further efforts to create specific rules and regulations. At present, considering the increasing frequency of drone use, two main areas can be identified: civil (commercial, recreational) and public (state bodies monitoring and controlling compliance with legislation, and the military). The purpose of this study is to analyze the general state of legal regulation on the use of drones for public administration purposes, as well as to determine its main parameters and possible differences/deviations from the rules applicable to commercial use. The authors also formulate the basic principles that should become the basis for drafting special rules for the public use of drones, in order to ensure the necessary level of security and the required efficiency in achieving public goals through their use.
Mikhail Bundin, Aleksei Martynov, Ekaterina Shireeva, Maria Egorova
eIDAS Implementation Challenges: The Case of Estonia and the Netherlands
Solid eID (electronic identification) infrastructures form the backbone of today's digital transformation. In June 2014, the European Commission adopted the eIDAS regulation (electronic identification and trust services for electronic transactions in the internal market) as a major initiative towards EU-wide eID interoperability, which has received massive attention in all EU member states in recent years. As a joint effort of Estonia and the Netherlands, this study provides a comparative case study on the eIDAS implementation practices of the two countries. The aim was to analyze the eIDAS implementation challenges of the two countries and to propose a variety of possible solutions to overcome them. During an action learning workshop in November 2019, key experts from Estonia and the Netherlands identified eIDAS implementation challenges and proposed possible solutions to the problems from the policy-maker, service-provider and user perspectives. As a result, we identified five themes of common challenges: compliance issues, interpretation problems, different practices in member states, cooperation and collaboration barriers, and representation of legal persons. The proposed solutions do not only involve changes in the eIDAS regulation, but also different actions to develop an eIDAS framework and to improve cross-border service provision, which has recently become an important topic among member states. Ultimately, the study provides practical input to the ongoing eIDAS review process and can help member states overcome eIDAS implementation challenges.
Silvia Lips, Nitesh Bharosa, Dirk Draheim
Factors of Open Science Data Sharing and Reuse in the COVID-19 Crisis: A Case Study of the South Korea R&D Community
Semi-structured interviews with South Korean experts were conducted to explore the enabling and limiting factors influencing the open communication of scholarly outputs and data in public health emergencies, such as the COVID-19 outbreak. The study provided a set of contextual/external, institutional/regulatory, resource, and individual/motivational factors with some relevant examples. The results revealed the highest importance of institutional/regulatory factors in such situations. The findings might be useful for a country's comprehensive Open Science policy development as a component of future outbreak preparedness.
Hanna Shmagun, Charles Oppenheim, Jangsup Shim, Kwang-Nam Choi, Jaesoo Kim
Fault Tolerance in Multiagent Systems
A decentralized multiagent system (MAS) is composed of autonomous agents who interact with each other via asynchronous messaging. A protocol specifies a MAS by specifying the constraints on messaging between agents. Agents enact protocols by applying their own internal decision making. Various kinds of faults may occur when enacting a protocol. For example, messages may be lost, duplicates may be delivered, and agents may crash during the processing of a message. Our contribution in this paper is demonstrating how information protocols support rich fault tolerance mechanisms, and in a manner that is unanticipated by alternative approaches for engineering decentralized MAS.
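As a generic illustration of one of the fault classes named above (duplicate delivery), and explicitly not the information-protocol machinery of the paper: a common way an agent endpoint tolerates re-delivered messages is to key them by a unique identifier and make handling idempotent. The class and message structure below are invented for illustration.

import uuid

class AgentEndpoint:
    """Toy agent messaging endpoint that tolerates duplicate deliveries."""

    def __init__(self, name):
        self.name = name
        self.seen = set()      # identifiers of messages already processed
        self.state = []        # effects of processed messages

    def handle(self, message):
        msg_id = message["id"]
        if msg_id in self.seen:
            return "duplicate ignored"   # idempotent: re-delivery has no effect
        self.seen.add(msg_id)
        self.state.append(message["payload"])
        return "processed"

# Simulate at-least-once delivery: the same message arrives twice.
msg = {"id": str(uuid.uuid4()), "payload": "Quote(item=book, price=10)"}
endpoint = AgentEndpoint("seller")
print(endpoint.handle(msg))   # processed
print(endpoint.handle(msg))   # duplicate ignored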
Samuel H. Christie V, Amit K. Chopra
Fragility and Robustness in Multiagent Systems
Robustness is an important property of software systems, and the availability of proper feedback is seen as crucial to obtain it, especially in the case of systems of distributed and interconnected components. Multiagent Systems (MAS) are valuable for conceptualizing and implementing distributed systems, but the current design methodologies for MAS fall short in addressing robustness in a systematic way at design time. In this paper we outline our vision of how robustness in MAS can be granted as a design property. To this end, we exploit the notion of accountability as a mechanism to build reporting frameworks and, then, we describe how robustness is gained. We exemplify our vision on the JaCaMo agent platform.
Matteo Baldoni, Cristina Baroglio, Roberto Micalizio
Aplib: Tactical Agents for Testing Computer Games
Modern interactive software, such as computer games, employs complex user interfaces. Although these user interfaces make the games attractive and powerful, unfortunately they also make them extremely difficult to test. Not only do we have to deal with their functional complexity, but the fine-grained interactivity of their user interface also blows up their interaction space, so that traditional automated testing techniques have trouble handling it. An agent-based testing approach offers an alternative solution: agents' goal-driven planning, adaptivity, and reasoning ability can provide an extra edge towards effective navigation in a complex interaction space. This paper presents aplib, a Java library for programming intelligent test agents, featuring novel tactical programming as an abstract way to exert control over agents' underlying reasoning-based behavior. This type of control is suitable for programming testing tasks. Aplib is implemented in such a way as to provide the fluency of a Domain Specific Language (DSL). Its embedded DSL approach also means that aplib programmers get all the advantages that Java programmers get: rich language features and a whole array of development tools.
I. S. W. B. Prasetya, Mehdi Dastani, Rui Prada, Tanja E. J. Vos, Frank Dignum, Fitsum Kifetew
Chapter 25. Aspects of Risk Management and Vulnerability Assessment of Buildings in the Republic of Georgia
This manuscript presents an ad-hoc methodology for the assessment of the vulnerability of buildings to be applied in Georgia, where such assessment is included in the risk management approach designed to work at the national level. The risk management approach is based on the determination of several factors: specifically, it considers the severity and probability of occurrence of adverse events, the multiplicity and complexity of their impacts (anthropological, economic, environmental, political and social), and the determination of physical and social vulnerability. The physical vulnerability assessment of buildings is here defined generally against all events that produce mechanical effects on buildings, with special focus on natural hazards.
Teimuraz Melkadze
A Comparison of Machine Learning and Classical Demand Forecasting Methods: A Case Study of Ecuadorian Textile Industry
This document presents a comparison of demand forecasting methods, with the aim of improving demand forecasting and, with it, the production planning system of the Ecuadorian textile industry. These industries have difficulty providing a reliable estimate of future demand due to recent changes in the Ecuadorian context. The impact on demand for textile products has been observed in variables such as sales prices and manufacturing costs, manufacturing gross domestic product and the unemployment rate. These are indicators that determine, to a great extent, the quality and accuracy of the forecast, and that also generate uncertainty scenarios. For this reason, this work focuses on demand forecasting for textile products by comparing a set of classic methods such as ARIMA, STL decomposition and Holt-Winters with machine learning methods such as artificial neural networks, Bayesian networks, random forests and support vector machines, taking all of the above into consideration as an essential input for the production planning and sales of these industries, and as support when developing strategies for demand management and medium-term decision-making in the sector under study. Finally, the effectiveness of the methods is demonstrated by comparing them using different indicators that evaluate the forecast error, with multi-layer neural networks achieving the best results, showing the lowest error and the best performance.
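A minimal sketch of the comparison logic described above, using two simple baselines (a seasonal naive forecast and simple exponential smoothing) scored by MAPE and RMSE on a hold-out period. The series is synthetic and the study's actual classic and machine learning models are not reproduced here.

import numpy as np

rng = np.random.default_rng(42)

# Synthetic monthly demand with trend, yearly seasonality and noise.
months = np.arange(60)
demand = 200 + 1.5 * months + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, 60)
train, test = demand[:48], demand[48:]

def seasonal_naive(train, horizon, period=12):
    # Repeat the last observed season.
    return np.array([train[-period + (h % period)] for h in range(horizon)])

def simple_exp_smoothing(train, horizon, alpha=0.3):
    level = train[0]
    for y in train[1:]:
        level = alpha * y + (1 - alpha) * level
    return np.full(horizon, level)   # flat forecast at the last smoothed level

def mape(actual, forecast):
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

def rmse(actual, forecast):
    return float(np.sqrt(np.mean((actual - forecast) ** 2)))

for name, fc in [("seasonal naive", seasonal_naive(train, len(test))),
                 ("exp. smoothing", simple_exp_smoothing(train, len(test)))]:
    print(f"{name:15s}  MAPE = {mape(test, fc):5.1f} %   RMSE = {rmse(test, fc):6.1f}")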
Leandro L. Lorente-Leyva, M. M. E. Alemany, Diego H. Peluffo-Ordóñez, Israel D. Herrera-Granda
Chapter 14. Precision Retailing: Building Upon Design Thinking for Societal-Scale Food Convergence Innovation and Well-Being
As a holistic framework, food well-being (FWB) is conceived as "a positive psychological, physical, emotional, and social relationship with food at both individual and societal levels" (Block et al. 2011, p. 5). It traces ways forward for research and action for individuals and society through five domains: food socialization, literacy, marketing, availability, and policy. The decade since the introduction of the concept has seen significant theoretical development bearing on the experiential quality of eating and its dynamics (Batat et al. 2019), as well as on the five primary domains (Bublitz et al. 2019; Scott and Vallen 2019).
Laurette Dubé, Dilip Soman, Felipe Almeida
Chapter 11. Food Well-Being in the Higher Education Sector: How to Leverage Design Thinking to Create Healthy and Pleasurable Food Experiences Among College Students
Higher education is an important and unique sector to examine food well-being, defined as an integrative understanding of the psychological, physical, emotional, and social relationships individuals have with food (Block et al. 2011). Transitioning into adulthood and living away from home for the first time, students demonstrate inadequate food literacy (Abraham et al. 2018; Kang et al. 2014; Malan et al. 2020; Tam et al. 2017; Wilson et al. 2017). Food availability is often limited to on-campus institutional dining services, fast-food restaurants, and vending machines, with little access to grocery stores (Caruso et al. 2014; Dhillon et al. 2019; Horacek et al. 2013; Lugosi 2019). Promotions for junk food and beverages, such as pizza, burgers, and sugar-sweetened sodas, dominate marketing efforts aimed at this demographic (Bragg et al. 2018; Buchanan et al. 2018; Jayanetti et al. 2018), though calorie concerns, especially among female students, guide many food decisions, often at the expense of pleasurable and social food experiences (Rozin et al. 2003; So et al. 2012; Zein et al. 2016). Meanwhile, university food policies, such as mandatory meal plans, can be costly, confusing, and wasteful (Ellison et al. 2019; Laterman 2019; Pappano 2016). As a microcosm of universal food experiences, student food well-being experiences can inform innovation in all food sectors.
Jane Machin, Brooke Love
Chapter 16. From Food Product to Food Experience: How to Use Design Thinking to Service Vulnerable Populations and Improve Their Food Well-Being
Design thinking, the process of transforming deep user insight into new solutions by utilizing methods and mindsets borrowed from designers, has evolved to become one of the most rapidly spreading approaches for development globally. Today, design thinking is applied not only to product and service development but also to societal, political, and economic problems. In this chapter, we argue that design thinking can help to promote and enhance healthy food consumption experiences for vulnerable groups. To do so, we discuss three core elements of design thinking: empathy, visualization and collaboration.
Nina Veflen, Øydis Ueland
The AIQ Meta-Testbed: Pragmatically Bridging Academic AI Testing and Industrial Q Needs
AI solutions seem to appear in any and all application domains. As AI becomes more pervasive, the importance of quality assurance increases. Unfortunately, there is no consensus on what artificial intelligence means and interpretations range from simple statistical analysis to sentient humanoid robots. On top of that, quality is a notoriously hard concept to pinpoint. What does this mean for AI quality? In this paper, we share our working definition and a pragmatic approach to address the corresponding quality assurance with a focus on testing. Finally, we present our ongoing work on establishing the AIQ Meta-Testbed.
Markus Borg
Software Quality for AI: Where We Are Now?
Artificial Intelligence is getting more and more popular, being adopted in a large number of applications and technologies we use on a daily basis. However, a large number of Artificial Intelligence applications are produced by developers without proper training on software quality practices or processes, and in general such developers lack in-depth knowledge of software engineering processes. The main reason is that the machine-learning engineer profession emerged only recently, and currently there are very few training resources or guidelines on issues (such as code quality or testing) for machine learning and applications using machine learning code. As a result, the software quality of AI-enabled systems is often poorly tested and of very low quality. In this work, we aim to highlight the main software quality issues of Artificial Intelligence systems, with a central focus on machine learning code, based on the experience of our four research groups. Moreover, we aim to define a shared research road map, which we would like to discuss and follow in collaboration with the workshop participants.
Valentina Lenarduzzi, Francesco Lomio, Sergio Moreschini, Davide Taibi, Damian Andrew Tamburri
A Systematic Literature Review on Implementing Non-functional Requirements in Agile Software Development: Issues and Facilitating Practices
Agile Software Development methods have become a widespread approach used by the software industry. Non-functional requirements (NFRs) are often reported to be a problematic issue for such methods. We aimed to identify (within the context of Agile projects): (1) the issues (challenges and problems) reported as affecting the implementation of NFRs; and (2) practices that facilitate the successful implementation of NFRs. We conducted a systematic literature review and processed its results to obtain a comprehensive summary. We were able to present two lists, dedicated to issues and practices, respectively. Most items from both lists, but not all, are related to the requirements engineering area. We found out that the issues reported are mostly related to the common themes of: NFR documentation techniques, NFR traceability, elicitation and communication activities. The facilitating practices mostly cover similar topics and the recommendation is to start focusing on NFRs early in the project.
Aleksander Jarzębowicz, Paweł Weichbroth
The Sars-Cov-2 Pandemic and Agile Methodologies in Software Development: A Multiple Case Study in Germany
In recent years, agile methodologies have been established in software development. Today, many companies use agile or hybrid approaches in software development projects. The Sars-Cov-2 pandemic has led to a paradigm shift in the way people work in Germany. While it was customary for German software development teams to work co-located before the pandemic, teams have been working on a distributed remote basis since March 2020. Studies show that distributed work impacts the performance of agile software development teams. To examine the effects of the Sars-Cov-2 pandemic on agile software development in Germany, we planned, carried out, and evaluated a multiple case study with three cases. The results show that the majority of teams did not experience any loss in performance. We present some problems and challenges and derive specific suggestions for software development practice from the results of the study.
Michael Neumann, Yevgen Bogdanov, Martin Lier, Lars Baumann
Journalism & Advertising
On the Separation of Editorial Content and Commercial Communication
The principle of separation between editorial content and commercial communication protects both the democratic function and the advertising-carrier function of mass media. This contribution compiles all available statutory and professional-code regulations for the various aspects of the separation principle, such as the labelling obligation, the prohibition of paid content and tie-in deals, and the handling of numerous formats of editorial advertising. It then reviews the state of research on the individual aspects of the separation principle, particularly with regard to their description and effects. Finally, proposed solutions for current application and research desiderata are brought together.
Stefan Weinacht
Corporate Social Responsibility in Media Companies
This contribution develops an innovative framework for the management of corporate social responsibility (CSR) in media companies. To this end, the different meanings of CSR are elaborated, and it is shown that CSR is gaining strategic relevance for media companies as well. Drawing on central approaches in business ethics, it is clarified how the demands of business-administration management theory can be reconciled with integrative business ethics. Finally, the framework for CSR management in media companies is illustrated using current examples from media practice.
Anke Trommershausen, Matthias Karmasin
Methods of Media Economics
The research methods of media economics are as diverse as the approaches to the field. As a subdiscipline of media and communication studies, it uses the methods of that discipline on the one hand and interprets the results with regard to economic questions. On the other hand, researchers in the field have often been socialized in other disciplines, so methods from other fields, particularly from economics and business studies, also play an important role. After a brief introduction to the history of the field from a methodological perspective, this contribution systematically presents the common data collection and analysis methods. It then discusses the specific methodological challenges that the research field faces. Finally, an outlook is given on which methods could enrich the established spectrum, together with a discussion of why these methods have so far struggled to become established.
M. Bjørn von Rimscha, Juliane A. Lischka
Chapter 1. High-Power Laser Energy
This chapter covers high-power lasers and the definition of their energy. It presents a tutorial discussion of the responses of materials to high-power laser radiation, with emphasis on simple, intuitive models. Topics discussed include the optical reflectivity of metals at infrared (IR) wavelengths, laser-induced heat flow in materials, the effects of melting and vaporization, the impulse generated in materials by pulsed radiation, and the influence of the absorption of laser radiation in the blow-off region in front of the irradiated material.
Bahman Zohuri
Chapter 6. Assistance Systems in Textile Production
Using the simple formula "assistant + data glasses = skilled worker", a German news magazine described possible effects on work and assistance systems in Industrie 4.0, quoting one manager as follows: "With data glasses, a new worker can achieve the same results after a short briefing as another employee with many years of experience" and "When Amazon hires 800 new people for the Christmas business, they won't need 20 trainers in the future, but can put on the Smartglass and get started immediately" [NN14].
Yves-Simon Gloy
Chapter 3. Regularization of Linear Inverse Problems
The discretization of linear identification problems led to linear systems of algebraic equations. In case a solution does not exist, these can be replaced by linear least squares problems, as already done in Sect. 2.3. We will give a detailed sensitivity analysis of linear least squares problems and introduce their condition number as a quantitative measure of ill-posedness. If the condition of a problem is too bad, it cannot be solved practically. We introduce and discuss the concept of regularization which formally consists in replacing a badly conditioned problem by a better conditioned one. The latter can be solved reliably, but whether its solution is of any relevance depends on how the replacement problem was constructed. We will especially consider Tikhonov regularization and iterative regularization methods. The following discussion is restricted to the Euclidean space over the field of real numbers $\mathbb{K} = \mathbb{R}$ but could easily be extended to complex spaces.
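A minimal sketch of Tikhonov regularization for a discretized linear inverse problem: the regularized solution minimizes $\|Ax - b\|^2 + \lambda^2 \|x\|^2$, obtained here by solving the normal equations $(A^T A + \lambda^2 I)x = A^T b$ for an ill-conditioned example matrix. The example matrix, noise level and regularization parameters are assumptions chosen only to illustrate the effect.

import numpy as np

rng = np.random.default_rng(3)

# Ill-conditioned example system A x = b (a small Hilbert-like matrix).
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + rng.normal(0.0, 1e-4, n)   # noisy data

print(f"condition number of A: {np.linalg.cond(A):.2e}")

def tikhonov(A, b, lam):
    # Solve (A^T A + lam^2 I) x = A^T b, the Tikhonov normal equations.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

for lam in [0.0, 1e-6, 1e-3, 1e-1]:
    x = tikhonov(A, b, lam) if lam > 0 else np.linalg.lstsq(A, b, rcond=None)[0]
    print(f"lambda = {lam:7.1e}   error ||x - x_true|| = {np.linalg.norm(x - x_true):.3e}")

The run illustrates the trade-off discussed in the chapter: with no regularization the noise is amplified by the bad conditioning, while a moderate lambda stabilizes the solution, and the quality of the result depends on how the replacement problem (here, the choice of lambda) is constructed.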
Mathias Richter
Listings and M&As of Chinese Real Estate Enterprises
Many domestic and foreign investors are paying great attention to the issues of real estate enterprises' listings and M&As within China. The Chinese supervisory authorities have made an effort to regulate the capital markets relating to the real estate industry in order to improve the transactional efficiency and management of real estate development and investment activities. This chapter discusses the circumstances of the real estate industry's initial public offerings (IPOs), both domestic and overseas, based on an analysis of related laws and regulations issued by competent authorities. Some critical M&A matters in the real estate industry are also presented in this chapter to help readers understand the scenarios and situations before making plans to invest in real estate in China.
Qingjun Jin
Tax Framework for Accessing Real Estate Asset Classes
Real estate investment in China provides an enormous opportunity for significant capital appreciation and rewarding returns to foreign investors. Yet, the regulatory and tax regime governing foreign investment in China's real estate sector is complicated. Respectable return on a successful real estate project could be easily wiped out by uncertain or unexpected taxation rules. Thus, in order to avoid pitfalls, robust tax considerations are essential throughout the life cycle of different types of real estate projects. The purpose of this chapter is to highlight the application of taxation frameworks for international investors, in particular key tax challenges when investing in the real estate sector in China.
Chapter 6. Migration
One of the key areas of concern for the UK has been its high net migration. Insights into understanding migration, its motivations and impact are assessed, alongside a brief presentation of UK immigration statistics, its poor image in the public eye and the persistent UK labour market inequality. Migration can have a positive or negative impact for the UK as a whole and also for particular indigenous groups, depending on the economic aspect analysed. Therefore, understanding the evidence in relation to migration is important, as this is highly significant for deciding which form of Brexit to favour. The effect of migration is discussed in relation to many key economic aspects for the UK, including demographics, jobs, business and sectors of activity, skill levels, wages, employment, fiscal contributions, housing and, quintessentially, productivity. The design of a post-Brexit migration system is analysed, comparing it mainly to the Australian points-based visa system, and its consequences for UK business and productivity are summarised alongside recommendations—principally related to the need for its flexibility—aimed at addressing some of UK employers' major concerns of skill shortages and the economy suffering post-Brexit. Migrant labour can make a positive contribution to our nation if government, business and the research community work together to help design a migration policy appropriate to our country's needs.
Philip B. Whyman, Alina Ileana Petrescu
Chapter 9. Alternative Trading Models After Brexit
The book concludes by examining the alternative trading models that have been advanced for Brexit. These range from close forms of relationship with the EU, such as membership of the European Economic Area (EEA) or customs union, or more independent options, such as concluding a free trade agreement (FTA) with the EU or alternatively trading according to World Trade Organisation rules. Additional trade options are considered, such as membership of the European Free Trade Association, increasing trade with the Commonwealth, North American Free Trade Agreement and 'Anglosphere' countries, or joining the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). There is a fundamental choice facing policy makers—whether to seek to minimise short-term trade costs through the adoption of a close economic relationship with the EU, but at the cost of inhibited policy autonomy, or to utilise policy flexibility to pursue national economic objectives, but at the cost of short-term reduction in UK-EU trade. Rodrik's trilemma model is used to illustrate this choice, between adopting a 'golden straightjacket' and opting for the benefits of greater national sovereignty and policy space. The chapter explains and evaluates the Chequers (May) and Johnson variants of the withdrawal agreement, before noting how game theory might suggest a mutually beneficial bargaining solution between the UK and the EU—based around a basic form of FTA.
Chapter 2. The Fiscal Impact of Brexit
The fiscal impact of Brexit is an area where even detractors concede that the UK will gain from withdrawal from the EU, as smaller (if any) contributions will be made to cover the UK's share of EU programmes. The precise nature of this fiscal benefit is, however, uncertain. This is partly due to the way in which the EU budget is only finalised ex post facto, as relative gross national income only becomes known after the fact, whilst prior commitments are often only realised into payments after a time lag. Calculations additionally depend upon assumptions related to the UK budget rebate and whether gross or net contributions are used in calculations, or additionally, whether EU or UK Treasury figures are deemed to be the most appropriate for the particular application of the data. Future budgetary developments need to be estimated, for any comparative forecast to be accurate, and the final form of trade agreement reached with the EU will have a significant effect upon the size of fiscal gain, as some of the options contain fiscal payment implications. The size of the financial settlement is broadly known, albeit that aspects will remain indeterminate for a number of years. Finally, this chapter notes that whilst potentially significant, the direct fiscal gain from Brexit is likely to be relatively modest when compared to any impact upon GDP (and thereby overall fiscal balance) arising from Brexit's broader economic impact.
Sustainable Buildings and Practice in China
Sustainable building, more commonly referred to as "green building" in China, is an integral part of the country's path toward sustainability. There are both opportunities and challenges in China's green building market. Opportunities lie in architecture and building engineering, as well as in planning and urban development. The recently introduced WELL building concept may also be promising in China. On the other hand, the development of green building in China faces challenges in economics, professional capability and awareness, and regulatory standards. Moving forward, certification-focused developers will need to introduce new technologies to increase the added value of green building labels. Technology-focused developers will need to combine their advanced core technology with existing ones to capture larger shares of the market, particularly in the residential area. The ideas and concepts that may bring about technological integration will be the main drivers in the market.
Sean Chiao, Nancy F. Lin
Chapter 5. Regulation
Regulation has been identified as one area where Brexit may deliver economic benefits. Sections of business opinion have long criticised the EU for the negative impact of its regulations upon the business community, and therefore the chapter examines the potential for a shift towards national rather than supra-national regulation to deliver economic gains. Evidence is presented relating to the relative ranking of UK regulatory burdens, and to the fact that the vast majority of UK firms do not export to other EU member states and yet remain constrained by single market regulations. The chapter evaluates the evidence, produced by a handful of studies, which have sought to contrast the economic cost-benefit ratios produced by national as opposed to EU regulations, before considering policy responses in the form of significant regulatory liberalisation ('Singapore on Thames'), gains that might be made through repatriation of certain regulations and the significance of the choice facing UK negotiators, concerning whether to pursue regulatory divergence or concede EU demands for regulatory compliance (the so-called level playing field).
The Regulation of Leasing Activities
Chinese law provides a national-level regulatory framework for leasing. Variation, however, can occur among different localities in respect of local regulations. Moreover, different customs exist in different localities, which can affect matters such as the amount of the rental bond, the frequency of rental payments and whether subleasing is acceptable. In this chapter, we outline China's national-level regulatory framework for leasing activities. Local regulations and practices are also discussed, though no attempt has been made to systematically address them. The leases discussed in this chapter involve the periodic payment of rent for the occupation of real estate premises, similar to leasing arrangements made elsewhere in the world. This is to be distinguished from the granting of land use rights in China, which can also be referred to as leasing from the government. As discussed below, all land in China is owned either by the government or by rural collectives. Government-owned land may be transferred to an individual or company for a certain period of time (between 40 and 70 years, depending on the usage) in exchange for the payment of money. This arrangement is sometimes referred to as the granting of the land use right or the lease of land. For the purpose of this chapter, our discussion does not include the granting of land use rights, which is regulated under a different regulatory regime.
Karen Ip, Nanda Lau
Real Estate Valuation in China
Property valuation has become increasingly important in China due to growing merger and acquisition (M&A) activities. Although the same internationally accepted valuation approaches are applied in real estate valuation in China, a number of local particularities need to be considered, including the special regulatory environment and the unique ownership structure, such as the so-called land use rights, as well as volatility in market developments.
Florian Hackelberg, Nova Chan
Allocation of Regulatory Responsibilities: Who Will Balance Individual Rights, the Public Interest and Biobank Research Under the GDPR?
In this chapter, an analysis is undertaken of the division of legislative power in the space created by the GDPR, regarding the balancing of individual rights, the public interest and biobank research. The legislative competences of the EU, international obligations within bioethics, and the regulatory space left for Member States are all examined. The conclusion of the chapter is that in spite of the aim of the GDPR to further legal harmonisation, it is more likely that unity will be brought about through administrative cooperation and soft law tools.
Jane Reichel
The Impact of the GDPR on the Governance of Biobank Research
Governance of health and genomic data access in the context of biobanking is of salient importance in implementing the EU General Data Protection Regulation (GDPR). Various components of data access governance could be considered 'organizational measures', which are stressed in Article 89(1) GDPR together with the technical measures that should be used in order to safeguard the rights of data subjects when processing data under research exemption rules. In this chapter, we address the core elements of biobank governance in view of the GDPR, including conditions for processing personal data, data access models, oversight bodies and data access agreements. We conclude by highlighting the importance of guidelines and policy documents in helping biobanks improve data access governance. In addition, we stress that it is important to ensure that existing and emerging oversight bodies are equipped with adequate expertise regarding the use and sharing of health and genomic data and are aware of the associated informational risks.
Mahsa Shabani, Gauthier Chassang, Luca Marelli
Impact of water quality on Chronic Kidney Disease of unknown etiology (CKDu) in Thunukkai Division in Mullaitivu District, Sri Lanka
Kalaivani Gobalarajah1,
Prabagar Subramaniam2,
Uthpala Apekshani Jayawardena ORCID: orcid.org/0000-0001-9613-43853,
Gobalarajah Rasiah4,
Sittampalam Rajendra5 &
Jasotha Prabagar6
An increase in the number of cases of Chronic Kidney Disease of unknown etiology (CKDu) in Sri Lanka has become a health issue of national concern. Even though the Northern Province is not identified as a high-risk province, there has been an increasing trend of CKDu in the province since the end of the civil war.
The present study was conducted in Thunukkai Division in Mullaitivu District to investigate the sociodemographic and clinical pattern of CKDu patients and to evaluate the quality of their water sources. The sample was selected using a stratified purposive random sampling method and represented 29% of the total CKDu patients in Thunukkai Division. A pretested structured questionnaire was administered to collect data from the CKDu patients. The association between the serum creatinine levels of CKDu patients and the water quality parameters was determined using a linear regression model.
Among the patients, 80% were male, with over 68% falling in the age range of 50–70 years. The majority (90%) were involved in agriculture-related occupations. Smoking and alcohol consumption were common habits among 40% of the patients. Hypertension (60%) and diabetes (34%), which developed secondarily, were reported as common diseases in the area. Dug wells served as the commonest source of drinking water (90% of households), together with a few tube wells. The physicochemistry of more than 50% of the water samples revealed higher electrical conductivity, salinity, total dissolved solids, total hardness and Na levels compared to the drinking water standards of Sri Lanka.
Serum creatinine levels of the CKDu patients were significantly and negatively correlated with phosphate, while positively correlated with the total dissolved solids (TDS) and arsenic content of the drinking water. Geospatial mapping of TDS and arsenic in drinking water against the occurrence of higher serum creatinine levels confirmed the same trend. Thus, total dissolved solids and arsenic in drinking water may have a positive correlation with the occurrence of CKDu in the Thunukkai region of the Mullaitivu District of Sri Lanka.
Chronic Kidney Disease of unknown etiology (CKDu) is the occurrence of Chronic Kidney Disease (CKD) without a known underlying cause [1]. Since its first report in the mid-1990s, cases of CKDu have increased tremendously in the North Central Province of Sri Lanka [2]. It is estimated that thousands of Sri Lankan people are affected by CKDu, mostly poor families living in remote areas. However, the exact number of CKDu patients and the causes of the disease are unknown. Unfortunately, the research studies conducted to date have been unable to identify the exact cause(s) of CKDu. A common conclusion is that CKDu is caused by multiple factors involving environmental and social impacts [3].
There are several etiologies proposed by researchers of CKDu, including demographic factors of the affected community [4], quality of their drinking water including hardness [5], agrochemical and heavy metal contamination [5, 6], fluoride levels [7], the genetic makeup of vulnerable populations [8], etc. Demographic factors include the socioeconomic characteristics of a population such as age, sex, occupation, etc. Several studies conducted in the North Central Province revealed that the main livelihood of the CKDu-affected population is farming and that the age of the patients ranged between 30 and 60, with higher prevalence among elderly males over 50 years of age [2, 5, 9]. Low water consumption during farming activities and dehydration due to exposure to direct sunlight may have led to renal failure [5]. Disease-aggravating habits such as alcohol consumption, betel chewing and smoking have also been investigated in relation to patient demography [5]. A common genetic variant close to SLC13A3 is reported to be related to CKDu [10]. This has been identified as the most sensitive gene marker to predict the renal disease of type 2 diabetes mellitus. Furthermore, genes such as IGFBP1, KIM1, GCLC and GSTM1 are proposed to be used in combination for early determination of CKDu [11].
A correlation between high groundwater hardness and the occurrence of CKDu has been frequently reported [8, 9, 12, 13]. According to the World Health Organization (WHO), hard water is mainly caused by the presence of calcium, magnesium, strontium and iron together with carbonate, bicarbonate, sulphate and chloride anions. Furthermore, a possible correlation between fluoride (F) in drinking water and the prevalence of CKDu has been suggested in various instances [12,13,14]. According to WHO (2011), Sri Lanka is one of the tropical countries in the world with higher fluoride content in water resources, reaching the upper limit value of 0.6 mg/L [15]. In most of the CKDu endemic areas the F content exceeds this upper limit value [16]. The maximum permissible contaminant level of arsenic (As) is 10 μg/L, though As contamination in the disease endemic regions exceeded this upper limit [17]. However, Balasooriya et al. [9] and Nanayakkara et al. [8] found insignificant levels of As and other trace elements in drinking water of CKDu endemic areas of Sri Lanka.
Algal toxins have also been considered a suspected cause of CKDu [18]. According to WHO, eighteen different types of cyanobacteria are capable of producing toxins under favourable conditions. Among them, fifteen toxin-producing cyanobacteria have been identified in Sri Lankan reservoirs and canals. These toxins are identified as hepatotoxic, dermatotoxic, neurotoxic and nephrotoxic compounds [18].
CKDu has a direct impact on patients' lives, including their livelihood activities. As the disease advances, patients become too ill to continue their employment, affecting the economic conditions and wellbeing of the entire family.
Current data on CKDu distribution show the occurrence of the disease in the North Central, North Western, Southern, Eastern and Uva Provinces. Even though the Northern Province is not identified as a high-risk province, CKDu has been developing at an alarming rate there since the end of the civil war. This may be the result of increased use of agrochemicals, residues of explosives and newly emerging industries with unplanned effluent disposal leading to aquatic pollution in natural reservoirs. The Northern Province comprises five districts: Jaffna, Killinochchi, Mullaitivu, Vavuniya and Mannar. Among these, Mullaitivu and Vavuniya have been considered "at risk" for the occurrence of CKDu, together with nine other districts from the North Central, Central and Uva provinces [19, 20] (Fig. 1). The Northern Province also appears to have a higher CKD prevalence than the Central or Southern Provinces [20]. In the present study, Thunukkai of Mullaitivu District was selected as its prevalence of CKDu has not been studied much. Most of the people in Thunukkai are farmers who carry out paddy farming throughout the year. The majority in Thunukkai Divisional Secretariat use shallow dug wells and reservoirs for their daily consumption without any treatment of the water. Thunukkai Divisional Secretariat has many reservoirs, and some GN ("Grama Niladhari", Public Service Officer) divisions are named after reservoirs such as Anichakulam, Thenniyankulam and Kodaikadiyakulam.
Thunukkai Divisional secretariat of the Mullaitivu District showing the distribution of drinking water sources (sampling points, A1-A15, B1-B20) of the patient households and control samples from C1-C3. {a CKD prevalence rates across the most affected districts in Sri Lanka [19], b Administrative map of the Mullaitivu District (Sri Lanka Surveys Department, Maps and Geoinformations, https://www.survey.gov.lk) c Sampling points in Thunukkai DS (Drawn with ArcGIS 10.1, https://www.arcgis.com/index.html)}
The objectives of this study were to analyze the sociodemographic and clinical pattern of CKDu patients in Thunukkai Division, to determine water quality parameters such as cadmium (Cd), arsenic (As), nitrate, phosphate, fluoride, hardness, total dissolved solids (TDS), pH, sodium (Na), potassium (K) and electrical conductivity of the drinking water in the wells, and to assess correlations between water quality parameters and the serum creatinine levels of the CKDu patients.
Ethical clearance was obtained from the Ethics Review Committee, Faculty of Medicine, University of Jaffna. Data regarding CKDu patients were gathered during the period of January 2018 to July 2018 from the respective Regional Director of Health Services in the Northern Province. In Mullaitivu District, 631 cases of CKD, including CKDu, were identified across three Medical Officer of Health (MOH) areas: Thunukkai (120 cases), Manthai East (86 cases) and Sampathnuwara (425 cases). Thunukkai Division was selected for the present study as it is a scarcely studied area, affected by the civil war.
In Thunukkai Division, 120 patients out of the total population of 10,172 were identified as CKD/CKDu patients by the MOH office, Mallavi. The water samples were collected using a stratified purposive random sampling method, representing 29% of the total CKDu patients in Thunukkai Division. The sample size (n) was estimated using the equation n = Z²p(1 − p)/e², where Z = confidence level at 95% (i.e. 1.96), p = estimated prevalence of the area (120/10,172 = 0.0118) and e² = squared margin of error. Even though the sample size n = 18 was statistically sufficient, 35 patients were randomly sampled to cover all the CKDu-positive villages of the area.
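As a quick illustration of the sample-size formula above, the short Python snippet below reproduces the reported estimate of n = 18. The confidence level (Z = 1.96) and prevalence (p = 120/10,172) are taken from the text, while the 5% margin of error is an assumption that happens to be consistent with the reported result.

import math

def cochran_sample_size(z, p, e):
    # n = Z^2 * p * (1 - p) / e^2, rounded up to the next whole patient
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

z = 1.96            # 95% confidence level (from the text)
p = 120 / 10172     # estimated CKDu prevalence in Thunukkai (from the text)
e = 0.05            # margin of error; assumed, consistent with n = 18

print(cochran_sample_size(z, p, e))   # prints 18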
Sociodemographic data were collected through an interviewer-administered questionnaire developed for the study and filled in during visits to patients' houses (Supplementary file). Data included age, gender, occupation of the patient, clinical signs and symptoms of CKDu (serum creatinine level) and potential disease-aggravating habits (smoking and alcohol consumption).
As CKDu patients were distributed across 65% of the villages in Thunukkai Division, water samples were collected in these villages. A total of 38 water samples, comprising 35 samples from CKDu affected areas (Fig. 1, A1-A15, B1-B20) and 3 control samples (Fig. 1, C1-C3), were collected in August 2018. The three control samples were collected from places in Thunukkai central, Yokapuram west and Ugilankulam, where there were no records of CKDu patients.
Water samples were collected during the dry season. Before collecting water, the water column was thoroughly stirred with the collecting bucket. If the water surface had floating scum or algae, it was skimmed off before collecting the samples. Water was collected at a depth of 10 cm below the water surface. In the case of tube wells, pumps were used to take the water samples. Water samples were collected in cleaned plastic bottles, which were then refrigerated at 4 °C until assaying. Physicochemical parameters such as turbidity, colour, odour, total dissolved solids, alkalinity, salinity, electrical conductivity, NO3−, PO43−, SO42−, F−, total hardness, Ca2+, Mg2+ and Cl− of the water samples were measured within a week.
Physicochemical properties of the water samples were determined with standard instruments, following standard procedures. Onsite measurements were obtained with portable meters for pH (Jenway Phm6, UK), turbidity, electrical conductivity and salinity (Senso direct, con110, USA). TDS was measured gravimetrically with HCl. Alkalinity and total hardness were measured using sulfuric acid based and EDTA based complexometric titration methods, respectively. Fluoride (F−) was measured with the SPADNS spectrophotometric method. NO3−, PO43− and SO42− ions were determined with a COD multiparameter photometer (HI83399, UK). These analyses were carried out in the Department of Chemistry, University of Jaffna, while As and Cd concentrations were measured at the Industrial Technology Institute in Colombo using graphite furnace atomic absorption spectrophotometry (GFAAS) with a precision of 0.001.
Using Geographic Information System (GIS) software (ArcGIS 10.1), mapping was conducted for nitrate, phosphate, total hardness, total dissolved solids, fluoride and arsenic content in water.
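The maps themselves were produced with ArcGIS 10.1; as a rough open-source stand-in, a comparable point map of a single parameter can be sketched with geopandas and matplotlib. The file name and column names below (longitude, latitude, tds_mg_l) are hypothetical and only illustrate the workflow.

import geopandas as gpd
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical well data: one row per sampling point with coordinates and TDS.
df = pd.read_csv("thunukkai_wells.csv")
gdf = gpd.GeoDataFrame(
    df,
    geometry=gpd.points_from_xy(df["longitude"], df["latitude"]),
    crs="EPSG:4326",
)

fig, ax = plt.subplots(figsize=(6, 6))
gdf.plot(ax=ax, column="tds_mg_l", cmap="viridis", legend=True, markersize=40)
ax.set_title("Total dissolved solids at sampling points (sketch)")
plt.savefig("tds_map.png", dpi=150)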
Physicochemical parameters of the test samples were compared with those of the control samples and with the Sri Lankan standards for potable water. To assess the correlation between water quality parameters and serum creatinine levels of the CKDu patients, a linear regression model was used. Through this analysis, relationships between the target variable (dependent variable) and a set of independent variables (covariates) were quantified. The regression equation estimates a coefficient for each variable. The goal of regression analysis is to generate the line that best fits the observations; the best-fitted line leaves the least amount of unexplained variation, that is, the dispersion of observed points around the line. The following formula describes the linear relationship between the dependent and independent variables.
$$\mathrm{Y}=\beta_0+\beta_1 \mathrm{a}+\beta_2 \mathrm{b}+\beta_3 \mathrm{c}+\beta_4 \mathrm{d}+\beta_5 \mathrm{e}+\beta_6 \mathrm{f}+\mathrm{error}$$
where the dependent variable (Y) is the serum creatinine level of the CKDu patient. The independent variables are: a, nitrate; b, fluoride; c, phosphate; d, total dissolved solids; e, total hardness; f, arsenic content in water. The intercept β0 is a constant that defines where the linear trend line intercepts the Y-axis. The coefficients β1, β2, β3, β4, β5 and β6 are constants that represent the rate of change in the dependent variable as a function of changes in the corresponding independent variable, i.e. the slope of the linear line. The error term represents the unexplained variation in the target variable; it is treated as a random variable that picks up all the variation in Y that is not explained by the independent variables.
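The paper does not state which statistical package was used to fit this model; as a minimal sketch, the regression can be reproduced in Python with statsmodels, assuming the per-patient serum creatinine values and the matching water-quality measurements are stored in a CSV file. The file name and column names are hypothetical.

import pandas as pd
import statsmodels.api as sm

# Hypothetical layout: one row per patient, with the six water-quality
# covariates of the model and the measured serum creatinine (mg/dL).
data = pd.read_csv("ckdu_water_quality.csv")
X = data[["nitrate", "fluoride", "phosphate", "tds", "total_hardness", "arsenic"]]
X = sm.add_constant(X)                # adds the intercept term beta_0
y = data["serum_creatinine"]

model = sm.OLS(y, X).fit()            # ordinary least squares fit
print(model.summary())                # coefficients, p-values and R-squared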
Demography of the CKDu patients
The ages of CKDu patients in Thunukkai Division ranged between 30 and 80 years. Among the studied patients, 24 were in the 50–70 years range, comprising 63% of the total sample (Table 1). CKDu was more prevalent among males, with a male to female ratio of 4:1. Among the patients, 90% were engaged in agriculture-related occupations while the rest were laborers, drivers or unemployed.
Table 1 Demographic data and baseline characteristics of the CKDu patients in Thunukkai, Mullaitivu District, Sri Lanka
According to the data obtained from the questionnaire, more than 20% of the patients either smoked or consumed alcohol while another 20% did both.
Clinical data of the selected patients showed the existence of other non-communicable diseases such as hypertension (43%) and diabetes (17%), with 17% suffering from both diseases. The disease history of the patients revealed that they were diagnosed with hypertension and diabetes secondarily, only after they developed CKDu. Furthermore, a spotty pigmentation similar to arsenic-related keratosis was observed on the palms of three male patients of the study group.
Physicochemical characters of the drinking water
Dug wells and tube wells serve as the major sources of drinking water in the study area. Thus, drinking water samples from 31 dug wells and 4 tube wells were collected and analysed for physicochemical parameters. The results of the analyses are given in Table 2. Among the parameters, salinity, TDS and total hardness showed the highest deviations, with the corresponding values of more than 50% of the samples exceeding the relevant values of the Sri Lankan standard for drinking water, SLS 614:2013. Salinity ranged between 0.13 and 3.66 g/L with an average of 0.69 g/L (standard error of the mean, SEM = 0.12), with 50% of the samples exceeding the standard value of 0.5 g/L. Similarly, in 63% of the samples the TDS content exceeded the standard (400 mg/L), with an average value of 687 mg/L (SEM = 115) and a maximum of 3570 mg/L. Total hardness of the samples ranged between 39.84 and 683.26 mg/L (SEM = 24.9), with 65% of the samples exceeding the standard (250 mg/L).
Table 2 Physicochemical properties of the water samples collected from Thunukkai, Mullaitivu District, Sri Lanka
Among other parameters, turbidity, fluoride, chloride, calcium and arsenic contents showed higher deviations from SLS 614:2013, with more than 20% of the samples exceeding the respective limits. The mean turbidity of the drinking water samples was 2.3 NTU (Nephelometric Turbidity Units) (SEM = 0.74), with a range of 0.3–26.5 NTU, all exceeding the mean turbidity of the control samples. Ten water samples (29%) exceeded the standard turbidity level (2 NTU; SLS 614:2013).
Fluoride content averaged 1.73 mg/L (0.1–22.3 mg/L) (SEM = 0.69), with 39% of the samples exceeding the standard of 1 mg/L. Similarly, the chloride content of 31% of the water samples exceeded the standard value of 250 mg/L, with an average of 367.27 mg/L (6.95–2066.53 mg/L) (SEM = 87.0). Calcium averaged 64.07 mg/L (3.98–193.22 mg/L) (SEM = 7.3), with 21% of the samples having values higher than the standard (100 mg/L) for calcium in drinking water. Even though arsenic was detected in only nine samples, its concentration exceeded the standard value (0.01 mg/L), reaching as high as 0.03 mg/L in some water samples.
Water quality parameters such as pH, electrical conductivity, nitrate, phosphate and magnesium contents showed no substantial deviations, with only 10% of the samples exceeding SLS 614:2013. The pH of the water samples was 8.2 on average (SEM = 0.1), ranging between 7.4 and 9.1. Only six samples had a pH of more than 8.5, whereas no water sample had a pH below 6.5. The mean value of electrical conductivity was 1416.31 μS/cm (SEM = 216), with values ranging from 330 to 6690 μS/cm; 17% of the samples exceeded the standard (750 μS/cm). The nitrate content of the samples averaged 28.13 mg/L (SEM = 9.01), with values ranging from 0 to 295 mg/L. Only 15% of the water samples contained nitrate levels higher than the standard value of 50 mg/L. The mean phosphate content was 0.84 mg/L (0.06–4.8 mg/L) (SEM = 0.11), with 10% of the samples exceeding the desirable level. Magnesium content had a mean value of 30.19 mg/L, with 18% of samples exceeding the standard value.
All the other physicochemical parameters, including Na, K and sulphate contents, had lower levels compared to their respective Sri Lankan standards. Cadmium was not detected in any of the water samples collected from the sampling area.
According to the data gathered through the questionnaire, the serum creatinine levels of CKDu patients of Thunukkai ranged between 1.31 and 5.32 mg/dL, with an average of 1.91 ± 0.84 mg/dL (SEM = 0.14). Serum creatinine of the control samples had an average of 0.7 ± 0.05 mg/dL (SEM = 0.03), as illustrated in Fig. 2, showing a significantly lower value (t = 8.24, p = 0.0001).
Serum creatinine data of CKDu patients, collected from different locations (A1-C2) in Thunukkai. A1-B20 are CKDu endemic areas; C1-C3 are non-CKDu areas. The reference line represents the 2 mg/dL creatinine level
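The patient-versus-control comparison reported above (t = 8.24, p = 0.0001) corresponds to a two-sample t-test on the serum creatinine values. A minimal sketch is shown below; the arrays are placeholders rather than the actual 35 patient and 3 control measurements, which are not reproduced here.

from scipy import stats

# Placeholder values (mg/dL); only the reported minimum and maximum of the
# patient group are real, the rest are illustrative.
patients = [1.31, 1.55, 1.78, 1.91, 2.40, 5.32]
controls = [0.65, 0.70, 0.75]

t_stat, p_value = stats.ttest_ind(patients, controls)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")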
Correlation between water quality parameters and CKDu patients
The effect of the physicochemical characteristics of water on the occurrence of CKDu was evaluated by applying the linear regression model, where serum creatinine concentration was treated as the response variable while water quality parameters (nitrate, fluoride, phosphate, etc.) were treated as the explanatory variables. Results are given in Table 3 below.
Table 3 Regression analysis between serum creatinine of CKDu patient and explanatory variables, phosphate, TDS and Arsenic contents in water from Thunukkai, Mullaitivu District, Sri Lanka
The R² value of 0.5109 suggests that the six explanatory variables (nitrate, fluoride, phosphate, TDS, total hardness and arsenic) together account for about 51.09% of the variation in the serum creatinine concentration of the CKDu patients. Among these physicochemical parameters, TDS and As contents showed a significantly positive correlation (p < 0.05) with the creatinine levels, while phosphate content showed a significantly negative correlation (p < 0.001, Fig. 3). On the other hand, nitrate content in the drinking water showed no influence on the serum creatinine of the CKDu patients in Thunukkai.
Scatter plots of total dissolved solids (TDS), arsenic and phosphate contents in drinking water showing significant correlations with the serum creatinine levels of the CKDu patients in Thunukkai
The measured total hardness, TDS, nitrate, fluoride, phosphate and arsenic contents in the drinking water of the study area are illustrated in Figs. 4 and 5. Analysis of the spatial distribution data of the patients indicated that seven sampling points had creatinine levels over 2 mg/dL, in the order A12 > A9 > A4 > A1 > A13 > B13 > A2. When these locations were overlaid on the GIS maps, distinct interrelations between TDS, As and phosphate and the creatinine levels were identified (Fig. 4), agreeing with the trend observed in the regression analysis. Higher serum creatinine levels, over 2.5 mg/dL, appeared to be linked with higher total dissolved solids content (650.9–1516 mg/L, Fig. 4a), higher arsenic content (1.6–29.7 μg/L; the WHO limit for As is 10 μg/L) (Fig. 4b) and low phosphate content (0.789–0.904 mg/L, Fig. 4c). The spatial distribution of nitrate, fluoride and total hardness contents showed no direct influence on serum creatinine levels in the study samples (Fig. 5a, b & c).
GIS mapping of a TDS-total dissolved solid, b arsenic and c phosphate distribution in the ground water in Thunukkai Division. Six triangles in each plot represent CKDu patients with higher serum creatinine content over 2 mg/dL. (Source: Created by the authors with ArcGIS 10.1, https://www.arcgis.com/index.html)
GIS mapping of a Nitrate, b fluoride and c total hardness content in the ground water in Thunukkai Division. Six triangles in each plot represent CKDu patients with higher serum creatinine content over 2 mg/dL. (Source: Created by the authors with ArcGIS 10.1, https://www.arcgis.com/index.html)
The present study revealed that male farmers aged between 50 and 70 in Thunukkai Division of Mullaitivu carry a higher risk of CKDu. Smoking and alcohol consumption may enhance vulnerability towards the disease. The patients developed hypertension and diabetes as secondary illnesses after developing CKDu. The drinking water quality of the area was not at the desirable level, particularly with a high ionic content leading to high salinity, dissolved solids and hardness levels. Secondarily, turbidity, fluoride, chloride and magnesium levels showed disturbingly high values compared to the Sri Lankan standards. As predicted by the regression model, fluoride, phosphate, TDS, total hardness and arsenic levels together accounted for considerable variation in the serum creatinine of the studied patients. This trend was further validated by the association of the geospatial distribution of TDS, arsenic and phosphate with the occurrence of higher serum creatinine levels in the study samples.
The demographic data of the study are consistent with many other studies conducted on the CKDu issue in Sri Lanka. The sex distribution of the present study, i.e. 4:1 (male:female), was consistent with Wanigasuriya et al. [21], Noble et al. [22], Ranasinghe et al. [2] and Balasooriya et al. [9], which reported male preponderance. However, the age distribution was not compatible with Wanigasuriya et al. [21] and Noble et al. [22], who reported that the majority of patients were in the 40–50 years range, unlike the 50–70 range in the present study. On the other hand, many recent studies reported consistent findings, where more than 60% of the CKDu patients were over 50 years of age [2, 12]. The spotty pigmentations on palms observed by Paranagama [23] were consistent with the similar type of patches observed in the patients of Thunukkai in the current study. Dermatological data of these patients, who used arsenic-rich water, suggested the development of an early stage of arsenic-related keratosis. However, a proper histopathological investigation should be conducted for validation purposes.
The drinking water quality parameters were compatible with the groundwater quality data available for the North Central Province through various CKDu-based research conducted to date. However, the total hardness, calcium and magnesium ion concentrations and electrical conductivity levels of the groundwater of Thunukkai were substantially high compared to those of the groundwater of the North Central Province [5, 17, 24, 25]. On the other hand, As and Cd in the groundwater of Thunukkai were low compared to the levels detected in the North Central Province [5, 17]. Nevertheless, in agreement with the present study, Wickramarathna et al. [12] reported insignificant levels of Cd and As in the groundwater of the Girandurukotte, Wilgamuwa and Nikawewa areas. The chloride ion content in the groundwater of Thunukkai was high compared to the groundwater in CKDu high-prevalence areas such as Padaviya, Kebithigollawa, Medawachchiya and Kahatagasdigiliya, and moderate-prevalence areas such as Mihintale, Talawa and Nochchiyagama of the dry zone of Sri Lanka [26]. Similarly, fluoride levels were relatively high in Thunukkai compared to those of the dry zone of Sri Lanka as reported by Chandrajith et al. [27]. Water hardness and conductivity levels in Thunukkai were compatible with those of the high and moderately prevalent areas [26].
Correlation of the water quality parameters with serum creatinine levels revealed a significant negative relationship with phosphate and positive relationships with the total dissolved solids (TDS) and arsenic content of the drinking water. This observation was further confirmed by geospatial mapping of these constituents against the occurrence of higher serum creatinine levels in the study samples. Patients with extremely high levels of serum creatinine (over 4.5 mg/dL) appear to consume water from wells with higher TDS and arsenic contents. This higher availability of arsenic in the groundwater of Thenyiakulam (site A9), Kalvilan (site A12) and Thunukkai (site A4) may be related to the sediment characteristics of the area, where higher mobilization of arsenic ions occurs through the availability of carbonate minerals from decaying organic matter, which facilitate rapid release of arsenic ions from the As-adsorbed Fe-oxyanions in sediments [28, 29]. Thus, in future studies, analysis of the sediment characteristics of the drinking water sources may be pivotal for understanding the overall process.
Similar to arsenic, several other constituents in drinking water and the diet have been linked with serum creatinine levels [30]. For example, in agreement with the present study, nitrate in the diet was only slightly linked with the serum creatinine of CKDu patients [31, 32]. In contrast to our results, a significant association between fluoride content and the occurrence of CKDu was revealed by several other researchers, including Illeperuma et al. [7], Balasooriya et al. [9], Wickramarathna [12], Jayasinghe [33] and Wijeratne et al. [34].
As shown in the present study, phosphate ions in water may negatively influence the serum creatinine of CKDu patients. The negative sign of the coefficient implies that when the phosphate content in the water increases, the likelihood of CKDu occurring decreases. The coefficient of phosphate shows that, holding nitrate, fluoride, total dissolved solids, total hardness and arsenic content constant, the likelihood of CKDu occurring decreases by 0.2113 for every unit increase in phosphate content. This result was found to be consistent with Eddington et al. [35]. Furthermore, total dissolved solids positively influence the serum creatinine levels of the study participants. The positive sign of the coefficient implies that when the TDS content in the water increases, the likelihood of CKDu occurring increases. The coefficient of TDS shows that, holding nitrate, fluoride, phosphate, total hardness and arsenic constant, the likelihood of CKDu occurring increases by 0.2113 for every unit increase in TDS content in water. On the other hand, total hardness showed no association with the serum creatinine levels, which is not compatible with Jayasumana et al. [36] and Paranagama [37], who concluded a positive and significant relationship between total hardness in water and CKDu. However, the low R² value obtained in the present study may affect the significance of the findings. Thus, further studies should be conducted with a higher sample number and a broader study area.
Chronic kidney disease of unknown etiology in Thunukkai of the Mullaitivu District in the Northern Province of Sri Lanka showed a male preponderance (M:F of 4:1) in the age range of 50–70 years, with an 80% occupational association with agricultural activities. Secondary development of hypertension and diabetes was observed in CKDu patients. Spotty pigmentation, similar to arsenic-related keratosis, was observed on the palms of patients who lived in areas with detectable arsenic levels in their drinking water.
Evaluation of the drinking water revealed a substantially high ionic content leading to higher electrical conductivity, salinity, total dissolved solids and total hardness levels compared to the Sri Lankan standards. Serum creatinine levels of the CKDu patients were significantly and negatively correlated with phosphate and positively correlated with the arsenic and TDS contents, which together contributed more than 50% of the variation in serum creatinine. It may therefore be concluded that water quality parameters such as phosphate, total dissolved solids and arsenic content are significantly correlated with CKDu in Thunukkai of Mullaitivu District in Sri Lanka.
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request. Additional data are available as a supplementary file.
CKDu:
Chronic Kidney Disease of unknown etiology
GFAAS:
Graphite furnace atomic absorption spectrophotometry
GIS:
Geographic Information System
GN:
"Grama Niladhari" - Public Service Officer
NTU:
Nephelometric Turbidity Unit
SLS:
Sri Lanka Standards
TDS:
Total Dissolved Solids
Athuraliya TN, Abeysekera DT, Amerasinghe PH, Kumarasiri PV, Dissanayake V. Prevalence of chronic kidney disease in two tertiary care hospitals: high proportion of cases with uncertain aetiology. Ceylon Med J. 2009;54(1).23-25.
Ranasinghe AV, Kumara GW, Karunarathna RH, De Silva AP, Sachintani KG, Gunawardena JM, Kumari SK, Sarjana MS, Chandraguptha JS, De Silva MV. The incidence, prevalence and trends of chronic kidney disease and chronic kidney disease of uncertain aetiology (CKDu) in the north Central Province of Sri Lanka: an analysis of 30,566 patients. BMC Nephrol. 2019;20(1):338.
Elledge MF, Redmon JH, Levine L, Wickremasinghe R, Waniyasuriya K, Joun R. Chronic kidney disease of unknown etiology: quest for understanding and global publication. RTI Press Publication No. RB-0007-1405. Research Triangle Park; 2014.
Jayasekara KB, Dissanayake DM, Sivakanesan R, Ranasinghe A, Karunarathna RH, Kumara GW. Epidemiology of chronic kidney disease, with special emphasis on chronic kidney disease of uncertain etiology, in the north central region of Sri Lanka. J Epidemiol. 2015;25(4):275–80.
Jayasumana C, Paranagama P, Agampodi S, Wijewardane C, Gunatilake S, Siribaddana S. Drinking well water and occupational exposure to herbicide is associated with chronic kidney disease in Padavi-Sripura, Sri Lanka. Environ Health. 2015;14(6):6.
Jayatilake N, Mendis S, Maheepala P, Mehta FR. Chronic kidney disease of uncertain aetiology: prevalence and causative factors in a developing country. BMC Nephrol. 2013;14(1):180.
Illeperuma OA, Dharmagunawardhane HA, Herarh KPRP. Dissolution of aluminium from substandard utensils under high fluoride stress: A possible risk factors for chronic renal failure in the North-Central-Province. Natl Sci Found. 2009;37:219–22.
Nanayakkara S, Stmld S, Abeysekera T, Chandrajith R, Ratnatunga N, Edl G, Yan J, Hitomi T, Muso E, Komiya T, Harada KH. An integrative study of the genetic, social and environmental determinants of chronic kidney disease characterized by tubulointerstitial damages in the north central region of Sri Lanka. J Occup Health. 2014;56(1):28-38.
Balasooriya S, Munasinghe H, Herath AT, Diyabalanage S, Ileperuma OA, Manthrithilake H, Daniel C, Amann K, Zwiener C, Barth JA, Chandrajith R. Possible links between groundwater geochemistry and chronic kidney disease of unknown etiology (CKDu): an investigation from the Ginnoruwa region in Sri Lanka. Exposure Health. 2019;17:1–2.
Koizumi A, Kobayashi H, Harada KH, Ratnatunga N, Parahitiyawa NB, Chandrajith R, Senevirathna STMLD, Nanayakkara S, Abeysekera T, Hitomi T. Whole-exome sequencing reveals genetic variants associated with chronic kidney disease characterized by tubulointerstitial damages in north central region, Sri Lanka. Environ Health Prev Med. 2015;20(5):354.
Sayanthooran S, Magana-Arachchi DN, Gunerathne L, Abeysekera T. Potential diagnostic biomarkers for chronic kidney disease of unknown etiology (CKDu) in Sri Lanka: a pilot study. BMC Nephrol. 2017;18(1):31.
Wickramarathna S, Balasooriya S, Diyabalanage S, Chandrajith R. Tracing environmental aetiological factors of chronic kidney diseases in the dry zone of Sri Lanka—a hydrogeochemical and isotope approach. J Trace Elem Med Biol. 2017;44:298–306.
Dissanayake CB, Chandrajith R. Fluoride and hardness in groundwater of tropical regions-review of recent evidence indicating tissue calcification and calcium phosphate nanoparticle formation in kidney tubules. Ceylon J Sci. 2019;48(3):197–207.
Dissanayake CB. Water quality in the dry zone of Sri Lanka-some interesting health aspects. J Natl Sci Found Sri Lanka. 2010;33(3):161-8.
WHO. Hardness in drinking water. Document for development of WHO Guidelines for Drinking-water Quality. Geneva: World Health Organization; 2011.
Wasana HM, Aluthpatabendi D, Kularatne WM, Wijekoon P, Weerasooriya R, Bandara J. Drinking water quality and chronic kidney disease of unknown etiology (CKDu): synergic effects of fluoride, cadmium and hardness of water. Environ Geochem Health. 2016;38(1):157–68.
Jayasumana C, Paranagama PA, Amarasinghe MD, Wijewardane KMRC, Dahanayake KS, Fonseka SI, Rajakaruna KDLMP, Mahamithawa AMP, Senanayake VK SUD. Possible link of chronic arsenic toxicity with chronic kidney disease of unknown etiology in Sri Lanka. J Nat Sci Res. 2013;3(1):64–73.
Dissanayake DM, Jayasekera JM, Ratnayake P, Wickramasinghe P, Radella YA, Shihana F. Short term effects of crude extracts of cyanobacteria blooms of reservoirs in high prevalence area for CKD in Sri Lanka on mice. In: Kidney proceedings. Sri Lanka: University of Peradeniya; 2011.
WHO & NSF. Designing a step wise approach to estimate the burden and to understand the etiology of CKDu in Sri Lanka. In: Workshop report. Sri Lanka; 2016.
Kafle K, Balasubramanya S, Horbulyk T. Prevalence of chronic kidney disease in Sri Lanka: a profile of affected districts reliant on groundwater. Sci Total Environ. 2019 Dec 1;694:133767.
Wanigasuriya KP, Peiris H, Heperuma N, Peiris RB, Wickremasinghe R. Could ochratoxin in food commodities be the cause of chronic kidney disease in Sri Lanka? Trans R Soc Trop Med Hyg. 2008;102:726–8.
Noble A, Amerasinghe P, Manthrithilake H, Arasalingam S. Review of literature on chronic kidney disease of unknown etiology (CKDu) in Sri Lanka: IWMI, International Water Management Institute, Battaramulla, Sri Lanka; 2014.
Paranagama PA. Potential link between ground water hardness, arsenic content and prevalence of CKDu. In: Proceedings of the Symposium on "Chronic kidney disease of uncertain origin (CKDu): a scientific basis for future action"; 2013. p. 1-8.
Wasana HMS, Perera GDRK, Gunawardena PSD, Bandara J. The impact of aluminum,fluoride, and aluminum–fluoride complexes in drinking water on chronic kidney disease. Environ Sci Pollut Res. 2015;22(14):11001–9.
Aqeelah Faleel R, Jayawardena U. Is it safe to drink water in Mihintale?; A case study from disease endemic areas of the Chronic Kidney Disease of unknown aetiology (CKDu). In: 17th Annual Research Sessions of the Open University of Sri Lanka; 2019.
Cooray T, Wei Y, Zhong H, Zheng L, Weragoda SK, Weerasooriya AR. Assessment of groundwater quality in CKDu affected areas of Sri Lanka: implications for drinking water treatment. Int J Environ Res Public Health. 2019;16(10):1698.
Chandrajith R, Diyabalanage S, Dissanayake CB. Geogenic fluoride and arsenic in groundwater of Sri Lanka and its implications to community health. Groundw Sustain Dev. 2020;10:100359.
Bandara UG, Diyabalanage S, Hanke C, van Geldern R, Barth JA, Chandrajith R. Arsenic-rich shallow groundwater in sandy aquifer systems buffered by rising carbonate waters: a geochemical case study from Mannar Island, Sri Lanka. Sci Total Environ. 2018;633:1352–9.
Amarathunga U, Diyabalanage S, Bandara UG, Chandrajith R. Environmental factors controlling arsenic mobilization from sandy shallow coastal aquifer sediments in the Mannar Island, Sri Lanka. Appl Geochem. 2019 Jan 1;100:152–9.
Siriwardhana ER, Perera PA, Sivakanesan R, Abeysekara T, Nugegoda DB, Weerakoon KG. Is the staple diet eaten in Medawachchiya, Sri Lanka, a predisposing factor in the development of chronic kidney disease of unknown etiology?-a comparison based on urinary β 2-microglobulin measurements. BMC Nephrol. 2014;15(1):103.
Mirmiran P, Bahadoran Z, Golzarand M, Asghari G, Azizi F. Consumption of nitrate containing vegetables and the risk of chronic kidney disease: Tehran lipid and glucose study. Ren Fail. 2016;38(6):937–44.
Silva CS. Water quality assessment in Jaffna, Vavuniya, Anuradhapura, Kurunagala and Hambantota in Sri Lanka for domestic purposes. Proceedings of the Annual Academic Sessions of the Open University of Sri Lanka; 2010.
Jayasinghe YK. CHRONIC KIDNEY DISEASE. Risk factor identification. Secondary data analysis. In: IWMI reports; 2011.
Wijerathne C, Weragoda SK, Kawakami T. A review of chronic kidney disease due to unknown etiology and groundwater quality in the dry zone, Sri Lanka. In: International Conference on Advances in Applied Science and Environmental Engineering (ASEE), Malaysia, organized by the Institute of Research Engineers and Doctors, USA; 2014.
Eddington H, Hoefield R, Sinha S, Chrysochou C, Lane B, Foley RN, Hegarty J, New J, O'Donoghue DJ, Middleton RJ, Kalra PA. Serum phosphate and mortality in patient with chronic kidney disease. Clin J Am Soc Nephrol. 2010;5(12):2251–7.
Jayasumana C, Gunatilake S, Senanayake P. Glyphosate, hard water and nephrotoxic metals: are they the culprits behind the epidemic of chronic kidney disease of unknown etiology in Sri Lanka? Int J Environ Res Public Health. 2014;11(2):2125–47.
Parangama A, Jayasuriya N, Bhuiyan MA. Water quality parameters in relation to chronic kidney disease in Sri Lanka. In: Jayasinghe, Mendis, Fernando S, Janaka Y, Ranjith Dissanayake R, editors. Capacity Building for Sustainability MTR. Kandy: University of Peradeniya; 2013. p. 173–83.
The authors wish to acknowledge Dr. K. Suseenthiran, medical officer Mallavi Hospital, Sri Lanka and medical officers from Provincial Director of Health Services, Northern Province. The study was supported by University Research Grant from University of Jaffna, Sri Lanka.
Financial support was provided by the University Research Grant from the University of Jaffna, Sri Lanka. The grant money was allocated for equipment, travel, consumables and miscellaneous expenses and was spent over the course of the research work. The funder, the University of Jaffna, has no conflict of interest over publications produced through the study.
Centre for Environmental Studies and Sustainable Development, The Open University of Sri Lanka, Colombo, Sri Lanka
Kalaivani Gobalarajah
Industrial Technology Institute, 363, Bauddhaloka Mawatha, Colombo-7, Sri Lanka
Prabagar Subramaniam
Department of Zoology, Faculty of Natural Sciences, The Open University of Sri Lanka, Colombo, Sri Lanka
Uthpala Apekshani Jayawardena
Department of Construction Technology, University College, Jaffna, Sri Lanka
Gobalarajah Rasiah
Department of Surgery, Faculty of Medicine, University of Jaffna, Jaffna, Sri Lanka
Sittampalam Rajendra
Department of Chemistry, Faculty of Science, University of Jaffna, Jaffna, Sri Lanka
Jasotha Prabagar
KG carried out the study under the guidance of PS, UAJ, GR, RS and PJ. KG drafted the manuscript and UAJ and PJ reviewed it before the initial submission. All authors read and approved the final manuscript.
Correspondence to Uthpala Apekshani Jayawardena.
Ethical approval (Ref: J/ERC/17/82/NDR/0171) was granted by the Ethics Review Committee (ERC) of the Faculty of Medicine, University of Jaffna, Sri Lanka. The authors declare that the experiments conducted complied with the current laws of Sri Lanka. Patients' data were collected through an interviewer-administered questionnaire, filled in during visits to their houses. Verbal consent was taken as per the guidelines given by the ERC.
The authors declare that there is no conflict of interest.
Gobalarajah, K., Subramaniam, P., Jayawardena, U.A. et al. Impact of water quality on Chronic Kidney Disease of unknown etiology (CKDu) in Thunukkai Division in Mullaitivu District, Sri Lanka. BMC Nephrol 21, 507 (2020). https://doi.org/10.1186/s12882-020-02157-1
Water quality parameters
Serum creatinine | CommonCrawl |
ENGLISH LITERATURE ASSIGNMENT
EXCELLENT ENGLISH LITERATURE ASSIGNMENT WRITING SERVICES FOR YOU
When studying your English Literature course, you may on some occasions find it challenging to complete your assignments for a number of reasons. Like many students all over the world, you are well aware of the various challenges you have to face to get by at university or college. You may need to work and study at the same time, which means that you may not have enough time to work on your school assignments. In this case, you will need to find an alternative which, in most cases, is to look for a suitable English Literature assignment writer from a trusted company.
But how do you go about finding someone who will actually offer genuine English Literature assignment help without getting scammed? In most cases, this has proven to be a challenging affair since there are so many organizations online that pose as genuine assignment help companies but are actually internet scammers who want to take your hard-earned money. You can, however, conduct thorough research and find legitimate English Literature assignment writing services in the UK.
ACADEMIC ENGLISH LITERATURE ASSIGNMENT SERVICES
One example of such a company that was created to make the difference and reduce the confusion caused by so many fake online companies is Peachy Essay. According to the UK English Literature assignment help reviews on the best companies to work with when you need genuine help completing your literature assignments, our company tops the list.
By continually providing help to literature students and helping them in their bid to pass their English literature assignment writing tasks, we have amassed a wealth of positive reviews from satisfied clients that makes us stand apart from all our competitors. We understand the importance of serving our clients well, and we continuously conduct detailed research on the needs and requirements of all our literature clients and examine new ways to ensure that they are delighted with the final product.
NEED PROFESSIONAL ENGLISH LITERATURE ASSIGNMENT HELP?
Essayhelpp.com would like to offer you high-quality and timely English Literature assignment help. Our Oxbridge and Ivy League writers have years of experience in writing quality English assignments.
TOP-QUALITY ENGLISH LITERATURE ASSIGNMENT HELP SINCE 2007
In most colleges, you are required to write your research paper, English essay or law assignment using academic English. It is very important for you to get the most from your English courses because academic writing skills will play a crucial role in your future career, especially when you find a job at one of the top companies like Google, Amazon, Ernst and Young, Boston Consulting or Oracle, where you will be required to write business letters, reports and plans, make up various business presentations, etc.
We highly recommend you to have at least upper-intermediate level of English before you start learning academic writing.
Academic writing is considered the highest form of English writing. It should be mentioned that learning academic writing can be quite challenging for those students who are not native speakers and still struggle with English grammar or vocabulary. However, following the tips listed below can help you learn academic writing quickly and easily.
LEARN TO WRITE USING FORMAL STYLE
Remember, that academic writing has nothing to do with your Facebook chat.
Avoid using contractions like wanna, don't, gotta, gonna or won't.
Never use slang words and colloquialisms. Always check definition of the word you are going to use. We highly recommend you to consult Merriam-Webster or Oxford Dictionary. What you say in colloquial speech may have the opposite meaning in writing.
It is better to avoid using personal pronouns "I" and "me" when you are writing your research paper or dissertation. Instead of using first-person pronouns, use "we" because you always write your research paper in collaboration with your research supervisor. Using "we" will make your research paper or dissertation sound more academic.
Make your research paper or dissertation free of emotions and subjective opinions. Rely on facts and statistical data. Remember that academic writing is all about facts. For example, instead of "bad", you can use "inadequate" or "unsatisfactory". The word "bad" has a subjective connotation in this case.
USE AN APPROPRIATE ACADEMIC WRITING STYLE WHEN IT COMES TO ENGLISH LITERATURE ASSIGNMENTS
We highly recommend you to read Chicago manual or APA style, which are one of the oldest academic writing manuals in the world. You will learn how to capitalize abbreviations, where to use commas or semicolons, etc. For more information make sure you check our complete guideline on Essayhelpp.com professional Annotated Bibliography Help page.
WRITE A SOLID THESIS STATEMENT
Your thesis statement is the core of your research paper or dissertation. To put it simply, it makes a claim that the rest of your academic paper will try to prove. There is no need to write a long thesis statement. It should be concise and to the point. Make sure you check our free thesis statement generator tool if you are working on your thesis or dissertation.
Use the draft: It is wise to make up a good outline of your academic paper, include all necessary elements like the list of references, appendix, practical part, summary which are required by your research supervisor.
Have a look at sample academic writing: The best way to learn academic writing is to read other academic papers. You can always ask your research supervisor to provide you with academic papers written by other students. Besides, you can browse the internet and find a lot of academic articles. Here is a comprehensive list of online databases and e-libraries where you can find many well-written academic papers:
Britannica Online
Cambridge Books Online
Cambridge University Press e-journals
HOW HAVE WE ACHIEVED THIS STEP?
English literature refers to a term that has different meanings in different contexts. As a student of literature, you are expected to be well acquainted with letters, as per the definition derived from the Latin word "litera". No matter which description you decide to use, you will always have to face the fact that you need to deal with works of literature including books, novels, stories, poems, fiction and nonfiction.
As a well-established academic English Literature assignment services provider, Essayhelpp.com is well aware of the importance of literature and all the elements that make up the subject. As a result, we empower all our English Literature assignment writers with all the relevant tools to ensure that all your assignments are completed in the best way possible. Additionally, we constantly train them and encourage them to conduct constant research on how best to complete your literature assignment writing tasks.
However, we highly recommend you to check the article you are going to read for credibility. Nowadays, the internet is full of poor-quality articles and research papers. Be careful! There are a lot of companies providing poor-quality English assignments, plagiarized English essays and law assignments as well!
When you feel that it is too difficult to tackle your English Literature assignment writing task, do not hesitate to contact our dexterous team of professionals. You will never find a better assistant, and you will always be guaranteed supreme quality services at the most affordable rates on the market. We are concerned about your success, and we will always come to your aid when you call upon us.
16th-century literature,17th-century literature,18th century literature,African literature,African-American literature,American literature,An Occurrence at Owl Creek Bridge,Asian literature,Canadian literature,Dark romanticism,Drama,Early medieval literature,Egyptian literature,Essays,Fahrenheit 451,French literature,Gender studies,Greek mythology,John Steinbeck,Lamb to the Slaughter,Literary criticism,Literary modernism,Literary realism,Literary terms,Lord of the Flies,Mark Twain,Maya Angelou,Mesoamerican literature,Middle English literature,Night (book),Nonfiction,Of Mice and Men,Old English literature,Poetry,Postcolonial literature,Prose,Romanticism,Russian literature,Shakespeare's sonnets,Short stories,Spanish Golden Age,Spanish literature,The Fall of the House of Usher,The Gift of the Magi,The Great Gatsby,The Legend of Sleepy Hollow,The Lottery,The Love Song of J. Alfred Prufrock,The Metamorphosis,The Old Man and the Sea,To Kill a Mockingbird,Tom Clancy's Net Force,Victorian literature
Why did Machiavelli write The Prince?
What does Jerry think about Ellen Barrett after talking to her on the phone in The Chocolate War?
In Things Fall Apart, why does Okonkwo kill Ikemefuna?
In Things Fall Apart, why is Okonkwo considered a tragic hero?
Why do authors use satire?
What is the theme of A Monster Calls?
In Night, how does Eliezer respond to the removal of his clothes and other belongings, to the shaving of his hair, and the number tattooed on his arm?
Identify parallelism in the Hemingway short story In Another Country by citing textual evidence. How does the use of parallelism affect the story?
What is the theme of Imagine by John Lennon?
Why is the trolls fighting in chapter 2 of The Hobbit an example of slapstick comedy?
Summarize Are These Actual Miles by Raymond Carver
In Night, what process do the prisoners undergo after they pass the selection that degrades and dehumanize them?
What is the dance of anger in Touching Spirit Bear?
What type of sonnet is "From Pamphiliato to Amphilantus 16"?
In the poem Annabel Lee, according to the narrator, why was his love taken away from him?
In The Scarlet Letter, why would Hawthorne call Chillingworth a leech in his interactions with Dimmesdale, but a physician in his interactions with Hester?
Was the poem "A Word to Husbands" written by Ogden Nash?
What genre is The Princess Bride?
What does the word "hunger" mean to Eliezer in Night?
What was ironic about Arthur Jarvis's death in Cry, the Beloved Country?
What type of protagonist is common to psychological suspense?
Why was fight club created in the first place by Tyler and the narrator? What was their true intention and meaning for fight club? What role was it meant to play within society?
In Night, what effect did the showers have on the prisoners?
What have dragons traditionally represented in literature?
What is the mood of A White Heron?
What is philosophical literature?
What are some of Lourdes' early jobs in California in Enrique's Journey?
What are the names of the trolls in The Hobbit?
Can you give some examples of foreshadowing in the book "Animal Farm"?
What is behaviorism in philosophy?
What is parallelism?
What is the theme of The Green Mile?
What is the theme of The Robe by Lloyd C. Douglas?
What is Henry James' "The Real Thing" about?
What makes a book a classic piece of literature?
Summarize 'In the Cemetery where Al Jolson is Buried'
Define epigraph
In Things Fall Apart, what is Okonkwo's tragic flaw?
Who wrote the book The Hunt for Red October?
What is an anthology?
In Touching Spirt Bear, how did Edwin define anger?
What is the theme of Edward Scissorhands?
Who illustrated The Little Prince?
What do prisoners scream in the cattle wagon in Night?
How many works has Elie Wiesel written?
What is the theme of Cathedral by Raymond Carver?
What is the theme of The Bell Jar?
What are examples of metaphor, simile, and hyperbole in Much Ado About Nothing, Act 3 scene 1?
What are some examples of diction, syntax, and figurative language used in The Book Thief?
Why is a chattel mortgage used in Twelve Years a Slave?
Would you call The Jilting of Granny Weatherall a stream of consciousness story?
Where is the beach house in The Ghost Writer?
Explain literary movements in Africa during 19th and 20th century.
One end of a uniform 6.0 m long rod of weight w is supported by a cable. The other end rests against the wall, where it is held by friction. The coefficient of static friction between the wall and…
Can ambiguity in literature ever be interpreted in a 'wrong' way?
Who is the Greek Goddess Achlys?
Were there any unresolved issues in the Great Gatsby novel?
A mass m_1 = 1.5 kg rests on a 30-degree ramp with a coefficient of kinetic friction = 0.40. Mass m_1 is tied to another mass m with a string which runs over a frictionless pulley. Mass m is hangin…
Was Gatsby's death and funeral the right way to end the novel of The Great Gatsby?
A golf ball of mass 0.040 kg is at rest on the tee. Just after being struck, it has a velocity of 100 m/s. If the club and ball were in contact for 0.80 ms, what is the average force exerted on the…
If a train car with a mass of 10,000 kg traveling at 50 m/s is brought to a stop in 100 m, what is the magnitude of the force that stops it?
Consider an electron in a 1D box (0 less than equal to x less than equal to L, L=1 nm), consider n=3, what is the probability of obtaining each specific momentum in a measurement?
A 5.2 N force is applied to a 1.05 kg object to accelerate it rightwards across a friction-free surface. Determine the acceleration of the object. (neglect air resistance)
A 1,061 kilogram truck pulling a 627 kilogram trailer exerts a 2,291 Newton force on the road and produces an acceleration of 0.8 meters/second^2. There is clearly some resistance force, such as fr…
What verb tense is used for literature reviews?
A theological argument offered by Donne in "Death Be Not Proud" may be summarized as: A. the human essence is immortal. life is illusion. C. death cannot be overcome. D. chance and fate rule…
Beyond tone, a poet's attitude toward his or her subject reveals to us a poem's: A. theme. structure. C. subject. D. diction.
Why was William Turner innovative?
Where was Joseph Mallord William Turner born?
How did William Turner paint light?
How did Roman architecture reflect Greek mythology?
Why is Margaret Cavendish important?
The total length of a tower crane is 100 m and its mass is 10 metric tons, with the load boom (jib) being 70m long. The counterweight has a mass of 20 metric tons and is at 10 m from the cab center…
A 350.0 kg block is pulled by a force of magnitude F = (18.00 kg\cdot m/s^3)t + 20.00 N down a 15º with \mu_s = 0.5500 and \mu_k = 0.3000. The force acts parallel to the incline. At t = 0, the…
One of the curves on an asphalt highway is crowned such that the outside lane is banked away from the turn center rather than into the turn. Consider a car that negotiates this turn, which will be…
A 12-lb block moves from the conveyor belt to a table by travelling around a smooth, semicircular track of 10-ft radius. (a) Determine the speed required on the conveyor to ensure that the block st…
At noon in your town (not Berkeley), the sun is directly overhead on a certain day. Its measured intensity at ground level is I_{circ} = 1.00 kW m^{-2}. Find E, the average magnitude of the electri…
The company has outstanding bonds payable with a total face value of $100,000. On July 1, the company redeemed the bonds by purchasing them on the open market for a total of $102,700. Make the nec…
A 10-kg object slows down from 24 m/s^2 to a final velocity of 9 m/s^2. What is the magnitude of the net force acting upon the object?
A net force of 100 N acts upon a crate produces an acceleration of 4.0 m/s^2. What is the weight of the crate on the earth?
Why does Bradbury constantly use the color pink to describe Beatty (e.g. "pink fluorescent cheeks") in Fahrenheit 451? Is there a deeper meaning behind this association with color?
In a tug of war, one team pulls with a force of 50 newtons at an angle of 30 degrees from the positive x-axis, and a second team pulls with a force of 25 newtons at an angle of 25 degrees. Use a sc…
A force of 35 N (to the right) is applied to a 7 kg box. The force of friction is 6 N. What is the net force acting on the box, and what is the acceleration of the box?
Is The Woman Warrior fiction or nonfiction?
What is the shining power in The Shining?
Who lived in the house in The House on Mango Street?
Is Slouching Towards Bethlehem fiction or nonfiction?
A particle with mass 10 kg starts from rest at r = 10 and theta = 0 radians, following the trajectory given by r = 10 – 2t , theta = 0.2t. Find the radial components Fr of the force, as functions…
Where was The Fire Next Time published?
When was The Fire Next Time published?
What type of book is The Fire Next Time?
What is the book The Fire Next Time about?
What awards did The Fire Next Time win?
What African American wrote The Fire Next Time?
On what kind of day does the teller of the tale journey towards the House of Usher?
Braces are used to apply forces to teeth to realign them. The tensions applied by the wire to the protruding tooth are T = 25.5 N. What is the magnitude of the net force that is exerted on the toot…
What significance is attached to the food in The Metamorphosis?
What was the reason for different storylines in "Love Medicine"?
A person is pushing a wheelbarrow along a ramp that makes an angle a = 35.0 degree with the horizontal. The wheelbarrow and load have a combined mass of 27.00 kg with the center of mass at the midp…
What did William Shakespeare like about his job?
What was William Shakespeare's mother's job?
Compare and contrast the goblin men in Rossetti's poem "Goblin Market" with the character of Mr. Edward Hyde in R. L. Stevenson's "Strange Case of Dr. Jekyll and Mr. Hyde." Using historical materia…
According to The Old Man and the Sea, who are Santiago's 'distant friends'?
Who wanted Elie's gold tooth in Night by Elie Wiesel?
Who said 'Human beings are members of a whole'?
When did Alice Walker teach at Jackson State?
In The Old Man and the Sea, what character says, 'You must get well fast, for there is much that I can learn, and you can teach me everything'?
Are George and Lennie brothers in Of Mice and Men?
In Of Mice and Men, do George and Lennie have a good friendship?
In chapter 4, what does Gatsby tell Nick about himself?
Are the characters in The Metamorphosis justified with their position?
In The Lottery's style, structure, and organization, was Jackson effective in making her point?
Did people have to pay off their debt with the bank even without money in the book 'Of Mice and Men'?
How is The Metamorphosis an existential text?
How does The Metamorphosis discuss existentialism?
How does the family improve in The Metamorphosis?
What does the picture in The Metamorphosis symbolize?
How does the protagonist change in The Metamorphosis?
How does The Metamorphosis explore the theme of absurdity?
How does The Metamorphosis resemble a comedy?
What is circumstantial evidence and what does it have to do with Tom's case in To Kill a Mockingbird?
Is Of Mice and Men fiction or nonfiction?
Who is Might in Prometheus Bound?
What language did the Aztecs speak?
Is Madame Bovary considered modernist?
Was Sir Francis Bacon writing under the name William Shakespeare?
Did William Shakespeare make up the name Jessica?
How many children did Maya Angelou have?
How many siblings did Maya Angelou have?
Who is Wayne Westerberg in Into the Wild?
In The Great Gatsby, where does Myrtle Wilson live?
In "A Connecticut Yankee in King Arthur's Court", what are five incidents that hinder Hank's progress?
Why does the old man feel he should risk sleeping in The Old Man and the Sea?
In The Old Man and the Sea, why wasn't the old man worried about the weather or about getting lost at sea?
Why did the old man know there would be a breeze all night in The Old Man and the Sea?
Why was talking only when necessary at sea considered a virtue in The Old Man and the Sea?
Why does the old man in the book The Old Man and the Sea hate jellyfish?
Why is Manolin (the boy) necessary in The Old Man and the Sea?
Who was the first fireman in Fahrenheit 451?
How did Edward change in The Prince and the Pauper?
Why are the mountains "dear to him and terrible" in The Red Pony?
What are Athena's powers?
Realism arose as a reaction to
Which of the following statements is untrue? a) Desolvation is an energy expensive process that involves the removal of water from polar functional groups prior to a drug binding to its binding sit…
How is Ernest Hemingway's The Old Man and the Sea not only a story about a man fishing, but also a story about life?
When the reader discovers that Farquhar has been tricked by a Federalist agent, the author is using what literary device to cause the reader to predict Farquhar's impending danger in 'An Occurrence…
How is purity a theme in Young Goodman Brown?
How does imagination play a crucial role in the story "The Fall of the House of Usher"?
How is "The Fall of the House of Usher" an example of Romanticism?
In "The Fall of the House of Usher," how does the setting in this story cause or influence action?
How does Poe's setting affect the story The Fall of the House of Usher?
How is poetic imagination used in "The Fall of the House of Usher"?
How does imagination overcome reason in "The Fall of the House of Usher" and create fear?
In the short story "The Gift of the Magi", what does Della do to get money for Jim's present?
What does Della buy in "The Gift of the Magi"?
What does Della do to buy her husband a Christmas gift in the short story "The Gift of the Magi?"
What does Della sell to be able to buy a gift for her husband in O. Henry's The Gift of the Magi?
Does the book The Sixth Extinction speak of plastic pollution?
Did William Shakespeare live with his wife?
Did William Shakespeare live in the Victorian times?
Did William Shakespeare live in London?
Did William Shakespeare live in Canterbury?
What is old Suzy's place in Of Mice and Men?
What is Clara's place in Of Mice and Men?
Compare and contrast The Lottery with other social or religious rituals.
Compare and contrast The Lottery and The Possibility of Evil.
Compare the tradition in Everyday Use by Alice Walker to the tradition in The Lottery by Shirley Jackson.
How are The Lottery and A Rose for Emily similar in themes?
Compare and contrast the traditions for A Rose for Emily and The Lottery.
How to write a Shakespearean sonnet.
How to write an Elizabethan sonnet.
Who is Ella Kaye in The Great Gatsby?
The terms of a will currently undergoing probate are: "A gift to my brother David of $25,000 cash; to my son James, $50,000 from my savings account; and to my Daughter Lila, all of my remaining pro…
What structure did Kafka use in The Metamorphosis?
Where is Hillsboro in Inherit the Wind?
What does the term 'Kafkaesque' refer to, and what would something described as 'Kafkaesque-like' be in terms of The Metamorphosis by Franz Kafka?
What keeps Eliezer from allowing himself to die during the forced march from Buna to Gleiwitz?
Is Echo in the Greek alphabet?
A stream of elastic glass beads, each with a mass of 0.45 g, comes out of a horizontal tube at a rate of 99 per second. The beads fall a distance of 0.52 m to a balance pan and bounce back to their…
Describe two examples of how either black slaves or white abolitionists used literature or the visual arts as a form of protest against slavery. Compare this to a modern example of art used for soc…
How do you say where were you in Spanish?
What views of life and its meaning are in conflict in the story Sonny's Blues?
Summarize Maya Angelou's "And Still I Rise".
What did Shakespeare change in Julius Caesar?
There is a contradiction in the sentence in The Old Man and the Sea: 'Then the fish came alive, with his death in him…' How can someone or something come alive as it is dying?
In what sections of 'An Occurrence at Owl Creek Bridge' is there an outside person narrating the action? In what section of the story do we get to hear Farquhar's thoughts?
How does the behavior of the townspeople in The Lottery reflect modern-day experiences with violence and crowd-thinking?
Why did Mark Twain write Fenimore Cooper's Literary Offenses?
Why did Mark Twain dislike James Fenimore Cooper?
In Washington Irving's The Legend of Sleepy Hollow, what does Ichabod do with his school children when he receives the invitation to the Van Tassel party?
The Romantic movement started as a rejection of what?
What are some examples of how Zeitoun's civil rights are violated?
Is there racism in the novel Possessing the Secret of Joy?
In "Night" by Elie Wiesel what does the setting look like and feel like? Where is it and when is it?
How long did Elie Wiesel stay in Auschwitz-Berkenau?
How does Myrtle react to Tom's arrival?
What indication is there in The Great Gatsby that Tom means quite a bit to Myrtle?
How does Myrtle react to Tom's arrival in The Great Gatsby?
In Night, how does the dentist from Warsaw remove Elie's crown?
Who was Elie Wiesel's father?
What was the name of William Shakespeare's first acting company?
What theatre company did William Shakespeare join in 1594?
What acting company did William Shakespeare belong to?
What are some parallels in Edgar Allan Poe's short story "The Fall of the House of Usher"?
What historicism is shown in The Love Song of J. Alfred Prufrock by T. S. Eliot?
What was inscribed on the Golden Apple of Eris?
How did William Shakespeare and Anne Hathaway meet?
What are Angie Thomas' strong beliefs?
What is Angie Thomas' religion?
What are the chief characteristics of Renaissance drama?
Who did William Shakespeare act with?
What was William Shakespeare's parents's jobs?
What skills did William Shakespeare have?
What is Jon Krakauer's message in Into Thin Air?
What happens to Zalman during the march to Gleiwitz in Night?
Find the components of the vertical force F = (0, -12) in the directions parallel to and normal to the plane that makes an angle of π/3 with the positive x-axis as shown. Show that the total for…
Give a brief summary of A Smile To Remember by Charles Bukowski.
Religion and the church are central aspects of Go Tell It on the Mountain. Are these viewed positively or negatively? Do they help or hinder the characters?
How does Santiago make a new spear in The Old Man and the Sea?
According to your knowledge, which parameters would you use to measure the response of the mice in your experiments? (a) Oxygen consumption, Blood glucose, and lactate levels (b) Blood glucose…
What is the relationship between the narrator of T. S. Eliot's The Love Song of J. Alfred Prufrock and William Shakespeare's character Hamlet?
What member or members of his family try to get Gregor out of bed in "The Metamorphosis"?
Who is Gregor in The Metamorphosis?
In "The Gift of the Magi," whose gift is greater, Jim's or Della's?
Why did Angie Thomas pick the picture that she did for the cover of her book, The Hate U Give?
Why was regional literature effective during the realism era?
What is Maria's dream in "Journey of the Sparrows"?
In Roald Dahl's "Lamb to the Slaughter',' do you think the wife feels any regret after killing her husband?
How old is Jimmy in "The Gift of the Magi?"
How old is Jim in "The Gift of the Magi"?
How old is Jim from "The Gift of the Magi"?
Who founded moralist literary criticism?
What power does Slim have in Of Mice and Men?
What is George confessing to Slim in Of Mice and Men?
What does Slim want in Of Mice and Men?
What does Slim represent in Of Mice and Men?
In chapter three of Of Mice and Men, Slim says he finds something funny; what does Slim find funny?
Does Pyle speak Vietnamese in The Quiet American?
What is the shortest play written by William Shakespeare?
Is it realistic that there were so many survivors after an airplane crash in the novel Lord of the Flies?
Who is the Ancient Greek goddess of corn?
What are three characteristics of Shakespearean sonnets?
What is a distinguishing characteristic of a Shakespearean sonnet?
What caused the fire in A Refusal to Mourn the Death, by Fire, of a Child in London?
In "A Refusal to Mourn the Death, by Fire, of a Child in London", what is meant by 'the round Zion of the water bead'?
Babe Ruth steps to the plate and casually points to left center field to indicate the location of his next home run. The mighty Babe holds his bat across his shoulder, with one hand holding the smal…
What reason did Myrtle give for marrying George Wilson?
Who is the murderer in Blues for Mister Charlie?
In what country does The River Between take place?
How did William Shakespeare's mother die?
How did William Shakespeare's daughter die?
How did William Shakespeare's dad die?
Was William Shakespeare knighted?
Was William Shakespeare famous during his life?
Was William Shakespeare an actor?
How would you describe Mary Maloney's behavior before and after the murder in "Lamb to the Slaughter"?
Analyzing a work of literature from a specific perspective means what?
Does Go Tell It on the Mountain portray religion as evil?
Explain how did racism impact the scene following: a mob almost attacks Atticus in order to enter the jail to lynch Tom Robinson.
Describe how racism affects the events in the novel To Kill a Mockingbird.
Individuals who experience or observe prejudice in action gain a new perspective and understanding of the world. How is this idea explored in "To Kill a Mockingbird" by Harper Lee?
What is the significance of the mermaids in the last stanzas of The Love Song of J. Alfred Prufrock?
Who are the mermaids in The Lovesong of J. Alfred Prufrock?
Why did Ray Bradbury write about technology?
The year is 2016. T transferred an investment portfolio to a Company trust (i.e., each beneficiary has the right to demand a distribution the amount of the annual gift tax exclusion for that year),…
What is significant about the conch's destruction?
Why did Henry V execute Cambridge in Shakespeare's Henry V?
What is the meter of Sonnet 29 by William Shakespeare?
When does A Confederacy of Dunces take place?
Provide guidance on how to write a poem about school shootings.
How many children did Shakespeare have and what were their names?
What inspired Carol Ann Duffy to write poetry?
On which day does The Lottery fall each year in The Lottery by Shirley Jackson?
In Shirley Jackson's short story The Lottery, on which day does The Lottery fall each year?
Where is James Baldwin buried?
Is Shakespeare in Love historically accurate?
Is Shakespeare in Love based on Romeo and Juliet?
Who influenced Robert Browning?
How is the lottery conducted in Shirley Jackson's The Lottery?
What was the purpose of Never Cry Wolf?
How does Kafka use symbolism to show characters' mental and attitude changes in the Metamorphosis?
For what reason is the book called "The Metamorphosis"?
How has Kafka used the setting to bring out the themes, namely Gregor's isolation, in the book The Metamorphosis?
Is Angie Thomas the author's real name?
What poem likely influenced Wallace Stevens' Anecdote of the Jar?
How does the meeting with Daisy affect Gatsby?
What are examples of tensions between dreams and reality in The Great Gatsby and The Grapes of Wrath?
Who is Elara in Greek mythology?
Why did Chinua Achebe write Civil Peace?
According to the American Psychological Association, all of the following are rights of test takers except the right to expect ________.
Who is E.K. Hornbeck in Inherit the Wind?
In The Old Man and the Sea, how long was the marlin?
What did Thomas Kyd do for a living?
How does The Legend of Sleepy Hollow portray supernatural elements?
If Metamorphosis is a story about life within modernist society, what might Gregor Samsa's life as a beetle mean? Is life as a beetle as anxious, individualistic, urban capitalism different than it…
Why did Amiri Baraka change his name?
Why did Amiri Baraka write Dutchman?
Why did Alice Walker write 'Everyday Use'?
What was Alice Walker's motivation for writing?
Consider a box being pushed along a rough surface with constant velocity. If the same box is now pushed with twice the force, what is the relationship between the applied force and the friction? Wh…
Did Elie Wiesel find his family after the Holocaust?
How is the idea of friendship presented in 'Of Mice and Men'?
Why does chapter 6 in "Of Mice and Men", by John Steinbeck, hold the most significance?
What are the strengths and weaknesses of Lennie's character in Steinbeck's "Of Mice and Men"?
Why does Shakespeare present Othello as unknown in the beginning of Othello?
Explain this quote from The Metamorphosis and what it has to do with alienation: 'Likewise the ash can and the garbage can from the kitchen. Whatever was not being used at the moment was just flung…
What happens to Slim in Of Mice and Men?
What happens to George in Of Mice and Men?
What happens to Lennie in Of Mice and Men?
What happens to Candy's hand in Of Mice and Men?
What happens to Whit in Of Mice and Men?
What happens to the wife in Of Mice and Men?
What happens to Candy's dog in Of Mice and Men?
What happens to Aunt Clara in Of Mice and Men?
What instrument does Charlie play in Dead Poets Society?
In The Legend of Sleepy Hollow, how wise was the philosophy the Van Tassels followed in rearing their child?
Why was Bertram Cates in jail in Inherit the Wind?
What city is Hornbeck from Inherit the Wind?
A 2.0 \times10^4 N car is stopped on a 30 ^o hill. If friction is negligible, what is the force of the car pushing into the hill?
How did Carl Sandburg die?
Do Shakespearean sonnets have names?
Are Shakespeare's sonnets autobiographical?
What are some scenes in The Legend of Sleepy Hollow that present Brom Bones as a practical man?
In The Legend of Sleepy Hollow, what story does Brom Bones tell about his encounter with the Headless Horseman?
A person is on a scale in an elevator that is at rest. The elevator starts to move, and the person reads on the scale that her weight is 1.2 times her normal weight. What is the magnitude of the ac…
How did Eugene Delacroix die?
In Fahrenheit 451, how does Captain Beatty behave toward Montag at the firehouse?
How did romanticism change literature?
Where was the swimmer at the beginning of the story The Swimmer?
What makes literary criticism credible?
Who was Anne Hathaway?
When Gregor wakes to discover he has become a gigantic insect, he is mostly intent on the practical implications of his metamorphosis–how to get out of bed, how to get to his job, and so forth–he…
In The Metamorphosis, can Gregor's death be considered a sacrifice in any sense?
Who was Maya Angelou's husband?
Who is the father of Maya Angelou's son?
What is Maya Angelou's son's name?
What is Lorna Doone?
Is Rita Dove still alive?
Is A Prodigal Son written by Christina Rossetti?
Is Ulysses and Odysseus the same person?
In The Lottery by Shirley Jackson, what is the kind of group behavior that exists? What human nature exists?
Who is Uncle Albert in Never Cry Wolf?
What is Love by Elizabeth Barrett Browning?
What literary device does Shakespeare use to open Julius Caesar?
What dramatic technique does Shakespeare use in Julius Caesar?
How is "The Gift of the Magi" a love story?
In A Connecticut Yankee in King Arthur's Court, what are Hank's mixed feelings about King Arthur?
What does seize the day mean in Dead Poets Society?
What does carpe diem mean in Dead Poets Society?
Could you help me understand the difference between a thematic topic and a thematic statement? I am reading The Lottery by Shirley Jackson.
Why did James Gatz invent Jay Gatsby?
Who are Paulina's two husbands in Winter's Tale?
How does Paulina awaken the statue of Hermione in Winter's Tale?
What comet is Mark Twain often associated with and why?
What is the connection between Mark Twain and Halley's Comet?
Where does The Winter's Tale take place?
What is the purpose of reader-response criticism?
The outcome of a state lottery game is certainly a very unequal distribution of the prize income. Some players are made very rich, whereas others lose their money. Using this example, discuss wheth…
Is The Metamorphosis written in third-person limited or omniscient narration?
Describe the significance of Eliezer looking at himself in the mirror in Night. What did he see and what does this mean?
Which poets made the greatest contribution to early literary criticism?
How did Gcina Mhlophe become famous?
Does Angels in America take place in one setting?
Which cools faster, one pound of wide and flat lasagna noodles or one pound of spaghetti noodles. Assume that both kinds of noodles are made from the same pasta and start out with the same temperat…
How does the setting influence the conflict in To Hell with Dying?
What was the biggest decision Santiago had to make in The Old Man and the Sea?
Discuss the idea of dramatic monologue in The Love Song of J. Alfred Prufrock and how it fits with Modernism.
The Love Song of J. Alfred Prufrock uses dramatic monologue. How does this technique reveal Prufrock's character?
How is dramatic monologue used in The Love Song of J. Alfred Prufrock to reveal Prufrock's character?
The Love Song of J. Alfred Prufrock uses dramatic monologue. Discuss how this technique reveals Prufrock's character.
How does Gatsby represent romantic idealism in The Great Gatsby?
Where does Alice Walker live now?
Where did Alice Walker go to college?
In The Red Pony, why does Carl say that Jody must never turn the pony into a trick horse?
Why does Jody in The Red Pony name his colt Gabilan?
How does Daisy represent the American Dream in The Great Gatsby?
How does Gatsby represent the American Dream in "The Great Gatsby"?
What does Still I Rise mean?
Who was the narrator occasionally permitted to see in Flatland?
Who is the narrator of Flatland? | CommonCrawl |
Tolerance analysis for robotic pick-and-place operations
Bence Tipary (ORCID: orcid.org/0000-0001-6591-6341) & Gábor Erdős (ORCID: orcid.org/0000-0002-3531-3803)
The International Journal of Advanced Manufacturing Technology, volume 117, pages 1405–1426 (2021)
Robotic workcell design is a complex process, especially in case of flexible (e.g., bin-picking) workcells. The numerous requirements and the need for continuous system validation on multiple levels place a huge burden on the designers. There are a number of tools for analyzing the different aspects of robotic workcells, such as CAD software, system modelers, or grasp and path planners. However, the precision aspect of the robotic operation is often overlooked and tackled only as a matter of manipulator repeatability. This paper proposes a designer tool to assess the precision feasibility of robotic pick-and-place workcells from the operation point of view. This means that not only the manufacturing tolerances of the workpiece and the placing environment are considered, but the tolerance characteristics of the manipulation and metrology process (in case of flexible applications) as well. Correspondingly, the contribution of the paper is a novel tolerance modeling approach, where the tolerance stack-up is set up as a transformation chain of low-order kinematic pairs between the workpiece, manipulator, and other workcell components, based on manipulation, seizing, releasing, manufacturing, and metrology tolerances. Using this representation, the fulfillment of functional requirements (e.g., picking or placing precision) can be validated based on the tolerance range of corresponding chain members. By having a generalized underlying model, the proposed method covers generic industrial pick-and-place applications, including both conventional and flexible ones. The application of the method is presented in a semi-structured pick-and-place scenario.
Real manufactured and assembled products only match their nominal design within certain tolerances. Geometric and dimensional deviations are caused by imprecise manufacturing and assembly processes, resulting in imperfect workpiece shapes as well as inaccurate relative positions between them. These deviations influence the assemblability and functions of the product, and thereby the fulfillment of functional requirements (FRs). Therefore, the success of assembly processes and the resulting mechanical assemblies is significantly affected by the corresponding tolerance design. Tolerance analysis (or tolerance stack-up analysis) addresses this problem through the determination of the dimensional and geometrical variation of the final assembly from the given tolerances on individual components and on the created joints. Based on this, both the satisfaction of the defined FRs and the assemblability of the product can be verified [1, 2].
In case of manual assembly, proper geometric design and allocation of tolerances ensure that the worker will be able to assemble the product, while maintaining the product's key characteristics (KCs) [3]. This concept is viable due to the assembly capabilities and dexterity of human workers (supported by suitable tools). On the other hand, as a manual assembly process becomes robotized, the robot capability needs to be taken into account when designing the assembly and corresponding robotic operation. However, by losing the dexterity and skills of the worker, the result of the assembly is determined not only by the robot repeatability (in addition to the tolerance of assembly components), but also by the tolerance of the involved cell components. Hence, the tolerance aspect of the whole operation needs to be addressed, which can affect the manipulator and equipment selection, or even the assembly design. Yet, the tolerance design for this scenario does not receive much support. Indeed, related issues are often overcome by applying much more precise equipment than necessary, or by trial and error.
The present research addresses the subject of tolerancing in robotic manipulation. Correspondingly, the main contribution of this paper is a novel tolerance analysis method for validating robotized operations in terms of tolerances, in order to overcome the above-mentioned issues. The proposed method aims to assess the feasibility of waypoint-based robotic applications, particularly the pick-and-place operation—including placement and insertion—which can be considered one of the most common assembly tasks [4]. The tolerance model is prepared for a general pick-and-place representation, capable of handling both conventional and flexible (e.g., bin-picking) tasks. Apart from mechanical tolerances, the latter involve the precision of the metrology system used to resolve uncertainties (such as the workpiece picking pose in case of bin-picking) and to realize visual servoing.
In the proposed model, the tolerance stack-up of the operation is set up on a transformation basis using low-order kinematic pairs (joints) [5]. The aggregated tolerances—corresponding to FRs—are formulated parametrically through the multiplication of parametric transformation matrices. As the design specifications (i.e., workholding, grasping, manipulation, servoing, and metrology characteristics as well as tolerances) are substituted into these formulae, the operation can be evaluated for feasibility.
This tolerance model provides a basis for Monte Carlo simulation and sensitivity analysis [6]. Using these, the particular workcell setup can be analyzed, allowing the designer to check the suitability of the selected equipment in the early design phase. This representation can also be used during the different planning (path and grasp planning) steps of the robotic workcell, as it can contain the robot kinematics and component relations besides the tolerance stack-up. Thereby, this method fits into the tolerance model of the generic pick-and-place workcell development methodology introduced in the authors' previous work [7]. This aids developers in setting up feasible tolerance regions for Digital Twins of such robotic workcells and in assessing twin closeness, which indicates whether or not the virtually planned robotic operation can be executed feasibly in the real workcell.
The remainder of the paper is structured as follows. The related literature, regarding general tolerance modeling approaches, manufacturing operation-related approaches and the tolerance aspect of pick-and-place operations is overviewed in Section 2. In Section 3, the concept of the tolerance model is presented, together with the generalized pick-and-place operation, the basic structure of the transformation chain, fundamental FRs, and the pick-and-place-specific tolerance influencing factors. The complete model is formulated in Section 4, including the required input data, the deduction of the transformation chain, and the evaluation of FRs. In Section 5, the implementation of the proposed model is presented through a case study of an experimental, physical, pick-and-place workcell. Finally, conclusions are drawn, and the possible future research directions are presented in Section 6.
Tolerance modeling
During product design, the customer requirements define the desired product. These are then translated by the designer to FRs, which capture the functional intention of the designer in terms of dimensions and tolerances. The components of the critical FRs are KCs, which are defined as "the product, subassembly, part, and process features that significantly impact the final cost, performance, or safety of a product when the KCs vary from nominal. Special control should be applied to those KCs where the cost of variation justifies the cost of control." by Thornton [3].
To achieve product feasibility (including assemblability and the fulfillment of FRs), the tolerancing problem needs to be solved [1]. Dimensional tolerances have been for long the primary means for expressing the allowable deviations of workpieces and products. Besides, geometrical tolerances also became formally defined and standardized by the introduction of Geometrical Dimensioning and Tolerancing (GD&T) [8]. Together, these allow the characterization of a variety of deviation types. The key approaches for tolerance analysis are the analytical worst-case and statistical analysis [9, 10], and Monte Carlo simulation [6]. The sought results can be the list of contributors, sensitivity, and effect of each contributor, as well as the tolerance stack-up.
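For readers less familiar with these evaluation approaches, the following minimal sketch (Python with NumPy; the dimensions, sensitivities, and tolerance values are invented for illustration and not taken from any cited source) contrasts the worst-case, statistical (root-sum-square), and Monte Carlo estimates for a simple one-dimensional, linear tolerance chain.

```python
import numpy as np

# Illustrative 1D tolerance chain: gap = sum(sensitivity_i * dimension_i).
# Each contributor has a symmetric tolerance +/- t_i around its nominal value.
sens = np.array([+1.0, -1.0, -1.0])        # sensitivities of the contributors
nom  = np.array([50.0, 20.0, 29.8])        # nominal dimensions [mm]
tol  = np.array([0.10, 0.05, 0.05])        # symmetric tolerances [mm]

gap_nom = float(sens @ nom)                          # nominal gap
gap_wc  = float(np.abs(sens) @ tol)                  # worst-case variation
gap_rss = float(np.sqrt((sens**2) @ (tol**2)))       # statistical (RSS) variation

# Monte Carlo: sample each dimension, here uniformly within its tolerance band.
rng = np.random.default_rng(0)
samples = nom + rng.uniform(-tol, tol, size=(100_000, len(nom)))
gaps = samples @ sens

print(f"nominal gap       : {gap_nom:+.3f} mm")
print(f"worst-case        : +/- {gap_wc:.3f} mm")
print(f"statistical (RSS) : +/- {gap_rss:.3f} mm")
print(f"Monte Carlo range : [{gaps.min():+.3f}, {gaps.max():+.3f}] mm")
```

The worst-case bound is the most conservative, while the RSS and Monte Carlo results reflect how unlikely it is that all contributors reach their extremes simultaneously.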
Numerous research works were dedicated to defining new tolerance representation methods in order to establish mathematical models for the expression and representation of geometric deviations. These include the variational model [11], TTRS (technologically and topologically related surfaces) model [12], matrix model [13], vector-loop [14], torsor model [15], Jacobian-torsor model [16], GapSpace [17], T-Map model [18], deviation domain [19], polytopes [20], and Skin model shapes [21], among others. Furthermore, many commercial CAT (computer-aided tolerancing) software packages were developed on the basis of these models, e.g., 3-DCS, VisVSA, CETOL, FROOM, or CATIA.3D FDT [22,23,24]. Detailed summaries on tolerance representations and tolerance analysis approaches are given in [22, 25, 26].
From the above, the most relevant ones are the variational, vector-loop, matrix, and torsor models. These apply homogeneous transformations on an assembly graph to capture the geometric variations. The variational model [11] approach represents the deviations from the nominal geometry due to the tolerances and the assembly conditions through a parametric mathematical model. First, the nominal shape and dimensions of each assembly component are set up with respect to a local datum reference frame (DRF). Then, the components are assembled together using small kinematic adjustments to represent their relative location with respect to each other within the assembly. With this, each feature can be expressed in the same reference (i.e., the global DRF). The FRs can be solved by analytical approaches or Monte Carlo analysis.
The vector-loop model [14, 25] represents the workpiece geometric variability via chains of vectors. The vectors represent component dimensions or kinematical variable dimensions. Three types of deviations are considered: dimensional deviations, geometric feature deviations, and deviations originating from kinematic adjustments (from the assembly process). The vector-loop model results in a set of non-linear equations, which can be evaluated using a worst-case or statistical method when linearized, or using Monte Carlo simulation.
The matrix model [13, 26] is based on TTRS; it transforms the tolerance zones to establish the limit boundaries for tolerances. The aim of the matrix model is to derive an explicit mathematical representation of the boundary of the entire spatial region enclosing all possible displacements originating from the variability sources. The representation needs to be completed by an additional set of inequalities, which define the bounds for every component in the matrix. Being a point-based approach, the result of the matrix model is the variation of a point on a functional surface, and as the boundaries of the region of possible variations are defined (i.e., extreme values), intrinsically a worst-case approach is applied.
The small displacement torsor model [15, 25] is based on the first-order approximation of the matrix model [27]. The displacement of geometric elements is modeled as a translation vector and a linearized rotation matrix arranged into a torsor. Three kinds of torsors are defined: component, deviation, and gap torsor. The global behavior of the assembly can be obtained from the union of the torsors. For the evaluation of the model, a worst-case approach can be applied, similarly to the matrix model.
In the following, relevant tolerance modeling approaches are presented, which are related to manufacturing processes. In the field of precision engineering, a transformation chain–based approach is commonly applied for setting up the kinematic error model of machine tools, for estimating their achievable accuracy, and for analyzing the contributing tolerances [28]. A kinematic model–based approach is presented in [29] to investigate the contributing factors of the location deviations of holes in drilling operations. In [30], a positioning variation model is presented for an eye-in-hand drilling system, considering vision-based positioning error measurement and compensation. Further manufacturing process-related tolerance models include fixturing and the consideration of manufacturing signatures [31, 32]. In the field of robotic machining, a working precision analysis is presented in [33] using a robot-process model, to investigate the robot- and process-related influencing factors.
In case of robotic systems for workpiece manipulation, when considering the precision aspect, the focus is mostly on robot calibration. Tolerance modeling and analysis on the process side receive hardly any attention. In [34], the error model of the peg-in-hole assembly is set up for automated assembly, but it does not cover any errors before the insertion (placing) process. The relevant failure modes of automated assembly systems—involving precise grasping, releasing, and collision—are identified in [35]; however, only manufacturing errors and the robot repeatability are considered in the tolerances. Furthermore, no general model is provided, only the Monte Carlo simulation for error diagnosis.
Although numerous different tolerance representations and analysis approaches exist, to the best of the authors' knowledge, there is no tolerance model specifically representing robotic manipulation, and particularly the pick-and-place operation, from the operation point of view. Inspired by this, the goal of the present research is to introduce a suitable approach for bridging the identified gap.
Tolerance factors in robotic pick-and-place
After overviewing the relevant tolerance models, in this section, the different contributing factors are gathered, which take part in the tolerance chain of pick-and-place operations. In case of conventional pick-and-place, apart from the cell component (e.g., workholders) and workpiece manufacturing tolerances, the robot positioning precision and the effect of the seizing and releasing processes need to be considered in the tolerance model. These can clearly affect the feasibility of the picking and/or the placing process of the operation.
Robot precision includes repeatability and accuracy, which are both standard robot characteristics [36]. Repeatability plays a key role in the tolerance stack of pick-and-place operations. Robot repeatability is generally provided by the manufacturers for positioning on the robot end flange. In this regard, the determination of position and orientation repeatability is presented in [37]. This characteristic in general can only be improved by using more precise components and component connections, as well as an internal measurement system with higher resolution [38]. On the other hand, accuracy is not as significant as repeatability in terms of tolerance, since in most cases, accuracy-related errors can be sufficiently compensated. Whenever suitable, traditional (lead-through) robot programming can neutralize poor robot accuracy. Moreover, robot calibration [39] is applicable for improving positioning accuracy. Calibration is especially important in case of applications utilizing model-based offline robot programming [7].
Workpiece seizing and releasing processes can also introduce geometric errors during the pick-and-place operation through the contact transitions between the components. This topic is much less studied in the literature, and only a few relevant papers were found. Uncertainty in case of grasping is studied in [40]. In this paper, the concept of self-alignment is introduced, which occurs during workpiece grasping. The phenomenon is studied for one particular workpiece and gripper finger geometry pair. The effect of grasping position on assembly success is analyzed in [41]. The focus of this paper is to investigate the alignment capability of a parallel finger gripper when seizing a cylindrical workpiece with initial position error, and to check whether or not the grasped workpiece can be successfully placed into a workholder. The geometrical conditions of the robotic peg-in-hole problem, as a placing task, are investigated by [42]. Here, the effect of chamfering on the placing side is taken into account as a source of self-alignment mechanism for the workpiece.
In more advanced applications, where metrology systems are applied to resolve uncertainties, other tolerance components are introduced in the tolerance chain. These are in form of equipment resolution, measurement errors, and the precision of data processing algorithms. Equipment resolution is provided by the manufacturers in general, and there are many papers in the literature about the accuracy of different approaches, such as camera calibration and 2D pattern detection [43], or model-based 3D pose estimation [44]. For robotic applications, fewer studies are available. An object recognition and pose estimation framework is presented in [45], including the accuracy measures of the estimated poses. Uncertainty in perception and grasping is taken into account for bin-picking in [46, 47]. The grasp is selected based on how likely its feasibility is, when simulating this uncertainty. Moreover, fine positioning is considered if placing precision is predicted to be insufficient; with a predetermined manipulator motion sequence, certain features of the workpiece are aligned after the placing process.
With the help of metrology systems, it is also possible to compensate positioning errors in real time, during operation, using servo techniques [48, 49]. Using a closed-loop control cycle, the target point can be reached with continuous measurement and actuation of the manipulator, until the defined target condition is reached. Servoing can improve the positioning precision of the robot through closed-loop motion control in case of picking and/or placing, without the need for changing the equipment, or prescribing tighter tolerances (i.e., without increasing investment costs). This means that the precision influencing effects in the system are negated, and the resultant precision will depend on the manipulator positioning resolution, and the errors in the metrology system, measured data, and processing algorithms. Servo techniques, and visual servoing in particular, receive great attention in the literature, including robotic pick-and-place as well. Visual servoing is used for picking pose compensation and grasping in [50], and for placing pose compensation in an assembly cell in [51]. Furthermore, a compliance control strategy is applied for peg-in-hole insertion in [52].
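As a rough, one-dimensional illustration of why such closed-loop correction bounds the residual error by the measurement noise and actuation resolution rather than by the open-loop tolerance stack, consider the following sketch (all values and function names are illustrative assumptions, not taken from the cited servoing literature).

```python
import numpy as np

rng = np.random.default_rng(1)

def servo_position_error(initial_error, meas_noise=0.05, resolution=0.02,
                         tolerance=0.10, max_iter=10):
    """Iteratively correct a 1D positioning error until the *measured* error
    is within tolerance; returns the true residual error [mm]."""
    error = initial_error
    for _ in range(max_iter):
        measured = error + rng.normal(0.0, meas_noise)   # noisy measurement
        if abs(measured) <= tolerance:
            break
        # command a correction, quantized by the actuation resolution
        correction = resolution * round(measured / resolution)
        error -= correction
    return error

# An open-loop stack-up error of, e.g., 1.5 mm is reduced close to the
# measurement-noise / resolution level after a few correction cycles.
residuals = [servo_position_error(1.5) for _ in range(1000)]
print(f"mean |residual| = {np.mean(np.abs(residuals)):.3f} mm")
```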
The listed tolerance factors, together with the corresponding literature, provide a basis for the tolerance model proposed in the present paper, and support the introduced tolerance influencing concepts. Although not all of these factors are thoroughly explored in the literature—especially, the tolerances introduced during contact transition between components, and characteristics of self-alignment while grasping and placing—the main ideas and mechanisms appear to be clear. This allows the preparation of the fundamental tolerance model for robotic pick-and-place considering the operation viewpoint, and more specific tools can be created as the related research progresses further.
Concept and assumptions
In this paper, the tolerance stack-up of the pick-and-place operation is modeled as a sequence of transformations in a kinematic graph, which uses the same idea as the one behind machine tool accuracy estimation in the work of Slocum [28]. This concept resembles the definition of a mechanism using standard mechanism joints. The joints are considered as low-order kinematic pairs [5], which are essentially parametric transformations between the frames of components (rigid bodies). The idea is to represent different tolerance types as mechanism joints. In this way, the tolerance propagation can be modeled simply using matrix multiplication. These matrices contain nominal and tolerance values, as well as joint variables (in case of actual mechanism joints, such as the robot joints) as parameters for the underlying translational and rotational transformations. Since multiple frames can be within a single rigid body, connected through tolerances, the shape of the rigid body becomes determined and fixed as a particular tolerance value is selected from the defined tolerance interval.
From the general tolerance modeling methods, the proposed approach is most similar to the variational solid modeling approach [11]. However, in the proposed one, the kinematic joints are applied as matrix transformations (using homogeneous transformation matrices), and the number of constraint equations is minimized. This is done by first preparing the spanning tree of the kinematic graph, then by forming the constraint equations through loop closures (wherever necessary). Hence, open loops can be evaluated simply by substituting in the geometrical parameters, and for closed loops, only a minimum number of constraint equations need to be solved, before evaluating the FRs. Furthermore, this allows the deviation of both the position and orientation components of feature frames to stack up. Even though variation in orientation (e.g., perpendicularity or parallelism) is captured by tolerance zones for geometric tolerances in the standard, the representation of orientation as inclination is beneficial in scenarios such as robotic peg-hole insertion or grasping.
Considering the robotic assembly aspect of the pick-and-place operation, the term workpiece refers to the component, which is manipulated by the robot. Here, the model describes the pose of this workpiece throughout the operation. Hence, the workpiece needs to be attached in the transformation chain in a way to continuously represent its actual physical contacts with the other cell components. Therefore, the workpiece is detached and re-attached in the transformation chain according to the operation process steps (i.e., seizing and releasing). Correspondingly, the workpiece state is investigated before and after the seizing and the releasing actions.
The number of DoFs of the workpiece is restricted throughout the operation by the different components the workpiece is in contact with, namely, by the picking workholder, gripper, and placing workholder. In general, the workpiece does not move relative to the contacting component, except during the contact transition. However, the tolerance stack-up (including the contact transition) can result in deviations in the free directions between the aligned frames of the workpiece and other components. Therefore, these contacts should be captured through the proper parameterization of the transformations, i.e., according to the DoFs of the workpiece relative to the contacting components.
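A minimal sketch of this DoF-based parameterization is given below (Python/NumPy; the helper names and the cylindrical-contact example are assumptions of this illustration, not part of the original model): only the screw components left free by the contact are allowed to deviate, so no tolerance can accumulate in the constrained directions.

```python
import numpy as np

def transform(x=0.0, y=0.0, z=0.0, xi=0.0, eta=0.0, zeta=0.0):
    """Homogeneous transform with R = Rx(xi) @ Ry(eta) @ Rz(zeta)."""
    cx, sx = np.cos(xi), np.sin(xi)
    cy, sy = np.cos(eta), np.sin(eta)
    cz, sz = np.cos(zeta), np.sin(zeta)
    R = (np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
         @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
         @ np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, (x, y, z)
    return T

def contact_joint(free_dofs, deviation):
    """Low-order kinematic pair: only the listed screw components may deviate;
    deviations requested in constrained directions are ignored."""
    return transform(**{dof: deviation.get(dof, 0.0) for dof in free_dofs})

# Example: a cylindrical pin resting in a matching bore of a workholder leaves
# only the translation along and the rotation about the bore (z) axis free.
T_dev = contact_joint(free_dofs=("z", "zeta"),
                      deviation={"z": 0.4, "zeta": np.deg2rad(2.0), "x": 9.9})
print(np.round(T_dev, 4))   # the requested x-deviation is blocked by the contact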
The tolerances appearing in the transformation chain need to be determined using existing tolerance analysis methods (as these are mostly resultant tolerances and not explicitly specified by the designer), or through measurements. It is noted that in case of individual workcell setup, some of the deviation sources—typically the component manufacturing tolerances and the robot accuracy—can be compensated through calibration. In the present paper, every input parameter or parameter interval (in case of tolerances) is assumed to be predetermined, along with the actual pick-and-place strategy (application of metrology and tolerance enhancing techniques). Also, since manufacturers specify the repeatability of the manipulators on their end flange, but not on their links, manipulators are presented as a single unit during tolerance analysis, and not as separate links. It is noted that in case of modular robots, the precision of individual modules could be taken into account, and it is possible to set up the corresponding kinematic chain automatically [53].
The presented model defines a set of possible frames to capture the FRs. To assess the fulfillment of FRs, inequalities need to be formed for each relevant general direction of each relevant frame. Alternatively, if required, inequalities can be formulated specifically for different artifacts of the workpiece or other components. These can be captured via additional frames, or even as geometric constraints. The FRs and included KCs need to be defined by the designer, and these are also assumed to be given in this paper.
Some of the functions applied throughout the tolerance stack (originating from tolerance influencing factors, see Section 3.5) are not continuously differentiable, and the extraction of orientation components has no closed-form solution. Consequently, the prepared tolerance chain can only be evaluated numerically. The evaluation of the model is carried out using Monte Carlo simulation, by substituting the nominal values, joint variables (if any), and tolerance set instances (sampled from the corresponding tolerance intervals) into the parametric tolerance chain.
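One possible organization of such a Monte Carlo run is sketched below. The chain is deliberately collapsed to scalar x-deviations, and the intervals and FR limit are placeholder values, so the sketch only illustrates the sampling-and-checking loop, not the full matrix chain.

```python
import numpy as np

rng = np.random.default_rng(42)

# Tolerance intervals of the chain members (placeholder values):
intervals = {                       # (lower, upper) x-deviation bounds [mm]
    "picking workholder": (-0.10, 0.10),
    "seizing action":     (-0.20, 0.20),
    "robot positioning":  (-0.05, 0.05),
    "releasing action":   (-0.15, 0.15),
}
FR_LIMIT = 0.35                     # allowed |x| deviation of the placing frame [mm]

def sample_chain(rng):
    """Draw one tolerance set instance and evaluate the (simplified) stack-up.
    In the full model each term is a 4x4 joint transform; here the scalar
    x-components are simply summed for brevity."""
    return sum(rng.uniform(lo, hi) for lo, hi in intervals.values())

N = 50_000
deviations = np.array([sample_chain(rng) for _ in range(N)])
fulfilled = np.abs(deviations) <= FR_LIMIT
print(f"FR fulfilled in {fulfilled.mean():.1%} of the sampled instances")
print(f"99.865th percentile of |deviation|: "
      f"{np.percentile(np.abs(deviations), 99.865):.3f} mm")
```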
The established transformation chain suits the family of robotic pick-and-place applications. When applied for a particular robotic workcell, the chain needs to be prepared by considering the operation conditions and kinematic joints present in the specific scenario. Then, by substituting the dimensional and tolerance parameters, the FRs can be evaluated in the different workpiece states (poses) along the operation sequence. The general structure of the tolerance model is applicable for most industrial pick-and-place tasks—where no humans are involved—in the presented form. For niche cases, the model might need slight adjustments to properly reflect on the particular scenario. The application of this model is most beneficial in case of tasks that are geometrically well defined and where the component relations are clear, and the component selection or design is still adjustable (i.e., in the early design phase).
Pick-and-place operation
A pick-and-place operation can be described as two subsequent (i.e., picking and placing) manipulator movement and gripper setting (or activating and deactivating) steps. From the control perspective, this can be defined using robot configurations; from the point of view of the workpiece, it can be described through poses. For clarification, poses and frames are defined in the robot task space as task space points; however, a pose corresponds to the reference frame of a rigid body, whereas frames correspond to a constant transformation with respect to rigid body references (can be multiple per rigid body). Configurations are defined in the robot configuration space as a robot joint vector.
The important robot configurations in the presented tolerance chain are the seizing and releasing configurations. On the path planning and control side, usually there are approach and retreat configurations for both of these configurations. Additionally, for flexible scenarios, metrology configurations can exist, for the execution of metrology and resolving the pose uncertainties. The seizing configuration corresponds to the workpiece picking pose. In the seizing configuration, the gripper is set to a seizing setup, therefore making contact with the workpiece. At this point, the workpiece gets into the seized pose, which is bound to the gripper. Then, the robot moves to the releasing configuration, transferring the workpiece to the releasing pose. This corresponds to the workpiece placing pose. Here, the gripper is set to a releasing setup; therefore, the workpiece breaks contact with the gripper, and establishes one with the placing workholder.
The component contacts can occur in multiple ways. The type of contact transient during seizing and releasing, as well as component compliances can have significant effect on the seizing and releasing action. If the contact break and establishment happen simultaneously, or one after the other (e.g., seizing with vacuum cup, or dropping workpiece when releasing), then the seizing and releasing action can introduce considerable additional deviations. On the other hand, if the first contact is maintained even after establishing the new contact (until the next robot movement), an overconstrained case occurs, where the components are prone to stuck or damage (e.g., peg-in-hole problem with a rigid gripper). In order to keep the paper more concise, here, the effects of the contact transients are simplified and are only taken into account in case of leading features (see Section 3.5). Nevertheless, these effects need to be considered on a case-by-case basis.
Transformation chain setup
There are relevant frames in a spatial and temporal basis as well. The workcell reference is the base frame, in which each component reference frame is defined. The relation of the frames can be represented on a kinematic graph, which is shown in Fig. 1. Here, the relevant frames are the manufacturing datum frames of the workpiece (wp,ref), picking workholder (wh1,ref), gripper (gr,ref), and placing workholder (wh2,ref).
Fig. 1 Kinematic graph of the main workcell components
Component feature frames are defined relative to their reference frames. For the workpiece, these are the picking (wp,pick), grasp (wp,grasp), and placing (wp,place) frames. The pairs of these frames are on the contacting components. The picking workholder has the picking frame (wh1,pick), the placing workholder has the placing frame (wh2,place). The gripper has the grasp frame (gr,grasp); the grasp frame pair is usually the result of the grasp planning.
These frames are shown in case of a sample pick-and-place scenario in Figs. 2 and 3. As visible, the frames exist in different phases of the operation for the moving components. The workpiece poses are the picking (p1), seized (g1), releasing (g2), and placing poses (p2), while the gripper poses are the seizing (corresponding to seizing configuration, g1) and releasing poses (corresponding to the releasing configuration, g2).
Fig. 2 Relevant frames in the workcell in a flexible scenario (in this particular case, the transformation between the picking frame pair is determined through metrology)
Fig. 3 Relevant frames on the workpiece (in this particular case, the reference, picking and placing frames coincide) (a), and nominal and real grasp frame pairs in g1 or g2 pose (b)
Transformation matrices
Frames are described by homogeneous transformation matrices, parameterized with three translational and three rotational components. However, locally different representations can be selected (e.g., deviation in a cylindrical or Cartesian coordinate system). The tolerance chain includes manipulation, seizing, releasing, manufacturing, and metrology tolerances as parameters.
Homogeneous transformation matrices, realizing transformation from component c1, frame f1, pose o1 to component c2, frame f2, and pose o2, are denoted with \( {\boldsymbol{T}}_{\mathrm{c}2,\mathrm{f}2,\mathrm{o}2}^{\mathrm{c}1,\mathrm{f}1,\mathrm{o}1} \), or simply \( {\boldsymbol{T}}_{\mathrm{c}2,\mathrm{f}2,\mathrm{o}2}^{\mathrm{base}} \) if f1 is the base frame. For the sake of simplicity, pose indices are only noted at the pose changing components (i.e., in case of the workpiece and the gripper). Frame transformations are modeled as follows:
$$ {\boldsymbol{T}}_{\mathrm{c}2,\mathrm{f}2,\mathrm{o}2}^{\mathrm{c}1,\mathrm{f}1,\mathrm{o}1}\left(x,y,z,\xi, \eta, \zeta \right)=\left[\begin{array}{cc}\boldsymbol{R}\left(\xi, \eta, \zeta \right)& \boldsymbol{d}\left(x,y,z\right)\\ {}0& 1\end{array}\right] $$
where R is the rotation matrix and d is the translation vector containing both nominal (n) and tolerance (t) components:
$$ \boldsymbol{d}\left(x,y,z\right)={\left[x,y,z\right]}^{\mathrm{T}}={\left[{n}_x+{t}_x,{n}_y+{t}_y,{n}_z+{t}_z\right]}^{\mathrm{T}} $$
The rotational component can be constructed in multiple ways, depending on the order of rotations. Here, a sequence of yaw, pitch, and roll rotations is applied (this is suitable for typical tolerance values, but other conventions can also be applicable in case of singularity issues), which results in the following formula:
$$ \boldsymbol{R}\left(\xi, \eta, \zeta \right)=\left[\begin{array}{ccc}{\mathrm{c}}_{\zeta }{\mathrm{c}}_{\eta }& -{\mathrm{c}}_{\eta }{\mathrm{s}}_{\zeta }& {\mathrm{s}}_{\eta}\\ {}{\mathrm{c}}_{\xi }{\mathrm{s}}_{\zeta }+{\mathrm{c}}_{\zeta }{\mathrm{s}}_{\eta }{\mathrm{s}}_{\xi }& {\mathrm{c}}_{\zeta }{\mathrm{c}}_{\xi }-{\mathrm{s}}_{\zeta }{\mathrm{s}}_{\eta }{\mathrm{s}}_{\xi }& -{\mathrm{c}}_{\eta }{\mathrm{s}}_{\xi}\\ {}-{\mathrm{c}}_{\zeta }{\mathrm{c}}_{\xi }{\mathrm{s}}_{\eta }+{\mathrm{s}}_{\zeta }{\mathrm{s}}_{\xi }& {\mathrm{c}}_{\xi }{\mathrm{s}}_{\zeta }{\mathrm{s}}_{\eta }+{\mathrm{c}}_{\zeta }{\mathrm{s}}_{\xi }& {\mathrm{c}}_{\eta }{\mathrm{c}}_{\xi}\end{array}\right] $$
where cx stands for cos(x), and sx for sin(x). Each angle value contains a nominal and a tolerance component. The rotation angles (corresponding to the axes of the frame) are represented by the vector r:
$$ \boldsymbol{r}\left(\xi, \eta, \zeta \right)=\left[\xi, \eta, \zeta \right]=\left[{n}_{\xi }+{t}_{\xi },{n}_{\eta }+{t}_{\eta },{n}_{\zeta }+{t}_{\zeta}\right] $$
The tolerance and nominal parameters are summarized in arrays as:
$$ \boldsymbol{t}=\left({t}_x,{t}_y,{t}_z,{t}_{\xi },{t}_{\eta },{t}_{\zeta}\right),\boldsymbol{n}=\left({n}_x,{n}_y,{n}_z,{n}_{\xi },{n}_{\eta },{n}_{\zeta}\right) $$
This representation allows the extraction of each angle from the rotation matrix (numerically), their adjustment, and the re-creation of the adjusted rotation matrix, in a consistent way. The corresponding screw parameters of a transformation matrix are represented as an array:
$$ \mathbf{scr}\left(\boldsymbol{T}\left(x,y,z,\xi, \eta, \zeta \right)\right)=\left(x,y,z,\xi, \eta, \zeta \right) $$
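A possible numerical counterpart of these definitions is sketched below (illustrative helper names, not from the paper): the transform is assembled from the summed nominal and tolerance arrays, and the screw parameters are recovered numerically from the R = Rx(ξ) Ry(η) Rz(ζ) convention used above; the extraction is valid away from the η = ±90° singularity.

```python
import numpy as np

def T_from_screw(p):
    """Homogeneous transform from screw parameters p = (x, y, z, xi, eta, zeta),
    with R = Rx(xi) @ Ry(eta) @ Rz(zeta) as in the rotation matrix above."""
    x, y, z, xi, eta, zeta = p
    cx, sx = np.cos(xi), np.sin(xi)
    cy, sy = np.cos(eta), np.sin(eta)
    cz, sz = np.cos(zeta), np.sin(zeta)
    R = (np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
         @ np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
         @ np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]]))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, (x, y, z)
    return T

def screw_from_T(T):
    """Recover (x, y, z, xi, eta, zeta) numerically from a transform built with
    the same rotation order (valid away from eta = +/-90 deg)."""
    R, d = T[:3, :3], T[:3, 3]
    eta  = np.arcsin(np.clip(R[0, 2], -1.0, 1.0))
    zeta = np.arctan2(-R[0, 1], R[0, 0])
    xi   = np.arctan2(-R[1, 2], R[2, 2])
    return np.array([d[0], d[1], d[2], xi, eta, zeta])

n = np.array([10.0, 0.0, 5.0, 0.0, 0.0, np.pi / 2])    # nominal screw parameters
t = np.array([0.05, -0.02, 0.0, 0.001, 0.0, -0.002])   # one tolerance instance
T = T_from_screw(n + t)
print(np.allclose(screw_from_T(T), n + t))             # True: consistent round trip
```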
In the following, for the sake of simplicity, the arguments of the transformations are only spelled out where relevant.
As an example, if a finger gripper with parallel finger planes seizes a slab-like feature on a workpiece (see Fig. 3), a planar kinematic joint needs to be set up to represent the tolerance of the seizing action. In this case, due to the planar contact, positioning tolerances from the gripper activation occur only in the x and z directions, and an orientation tolerance occurs around the y axis (based on Fig. 3). The joint is formulated as follows:
$$ {\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}=\left[\begin{array}{cc}\boldsymbol{R}& \boldsymbol{d}\\ {}0& 1\end{array}\right]=\left[\begin{array}{cccc}{\mathrm{c}}_{\eta }& 0& {\mathrm{s}}_{\eta }& x\\ {}0& 1& 0& 0\\ {}-{\mathrm{s}}_{\eta }& 0& {\mathrm{c}}_{\eta }& z\\ {}0& 0& 0& 1\end{array}\right] $$
which contains both nominal and tolerance parameters, and can be evaluated for any tolerance set instance:
$$ {\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}=\left[\begin{array}{cccc}{\mathrm{c}}_{n_{\eta }+{t}_{\eta }}& 0& {\mathrm{s}}_{n_{\eta }+{t}_{\eta }}& {n}_x+{t}_x\\ {}0& 1& 0& 0\\ {}-{\mathrm{s}}_{n_{\eta }+{t}_{\eta }}& 0& {\mathrm{c}}_{n_{\eta }+{t}_{\eta }}& {n}_z+{t}_z\\ {}0& 0& 0& 1\end{array}\right] $$
Formulation of the transformation chain
Having prepared the construction of transformation matrices, the transformation chain can be formed. First, considering nominal transformations (denoted with T∗), an ideal case is assumed. The workpiece starts in the picking pose, where its picking frame is aligned with the picking frame of the picking workholder \( \left({{\boldsymbol{T}}^{\ast}}_{\mathrm{wp},\mathrm{p}\mathrm{ick},\mathrm{p}1}^{\mathrm{base}}={{\boldsymbol{T}}^{\ast}}_{\mathrm{wh}1,\mathrm{p}\mathrm{ick}}^{\mathrm{base}}\right) \). Thereby, the nominal grasp frame of the workpiece is determined through the workpiece reference frame \( \left({{\boldsymbol{T}}^{\ast}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}1}^{\mathrm{base}}\right) \). Then, the robot is commanded to align the gripper grasp frame with the grasp frame of the workpiece \( \left({{\boldsymbol{T}}^{\ast}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{base}}={{\boldsymbol{T}}^{\ast}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}1}^{\mathrm{base}}\right) \). The gripper is activated, and the workpiece is detached from the picking workholder and attached to the gripper with the grasp frames aligned, as it gets to the seized pose \( \left({{\boldsymbol{T}}^{\ast}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{base}}={{\boldsymbol{T}}^{\ast}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{base}}\right) \).
Next, the robot is commanded to the releasing configuration, where the placing frame of the now attached workpiece is aligned with the nominal placing frame of the placing workholder \( \left({{\boldsymbol{T}}^{\ast}}_{\mathrm{wp},\mathrm{place},\mathrm{g}2}^{\mathrm{base}}={{\boldsymbol{T}}^{\ast}}_{\mathrm{wh}2,\mathrm{place}}^{\mathrm{base}}\right) \). Here, the gripper is set to release the workpiece, detaching it from the gripper and attaching it to the placing workholder, as it gets to the placing pose. At this point, the placing frames of the workpiece and the placing workholder meet \( \left({{\boldsymbol{T}}^{\ast}}_{\mathrm{wp},\mathrm{place},\mathrm{g}2}^{\mathrm{base}}={{\boldsymbol{T}}^{\ast}}_{\mathrm{wh}2,\mathrm{place}}^{\mathrm{base}}\right) \), and the pick-and-place operation is finished.
From here, assuming tolerances, the tolerance chain is formed on four branches, which are then connected. (i) The workpiece grasp frame for the picking pose is formed by the tolerance stack-up from the base frame through the picking workholder and the workpiece \( \left({\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}1}^{\mathrm{base}}\right) \). (ii) The gripper grasp frame for the seizing pose is formed from the base through the robot links and gripper \( \left({\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{base}}\right) \). (iii) The workpiece placing frame is formed for the releasing pose from the base frame through the robot links, gripper, and workpiece \( \left({\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}2}^{\mathrm{base}}\right) \). And (iv) the placing frame of the placing workholder is formed from the base frame \( \left({\boldsymbol{T}}_{\mathrm{wh}2,\mathrm{place}}^{\mathrm{base}}\right) \). As the robot is commanded to the nominal workpiece grasp frame in the picking pose, and the gripper seizes the workpiece, the grasp frames (i–ii) are now misaligned because of the tolerances. These misalignments can further increase as the two frames are getting attached, due to the contact transition (seizing tolerance). This effect is not present in the nominal case; however, in reality, the workpiece pose changes during seizing (and also during releasing) due to the physical contact. The gripper-workpiece attachment (see Figs. 4 and 5) allows the formation of the workpiece placing frame when manipulated to the nominal placing pose.
Fig. 4 Initial transformation chain before the workpiece seizing (a), and the workpiece laying in picking pose (p1) with aligned nominal grasp pair (b)
Fig. 5 Transformation chain after seizing (a), and the workpiece in the seized pose (g1), detached from the picking workholder and attached to the gripper (b)
The workpiece is then manipulated to the releasing pose, introducing further deviations through the robot positioning precision. Finally, when releasing the workpiece, an additional releasing tolerance is considered. With this, the misaligned placing frames (iii–iv) are getting attached (see Figs. 6 and 7). The measure of misalignment in the placing frame pair \( \left({\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right) \) thus becomes evaluable in tolerance set instances.
Fig. 6 Transformation chain after the robot moves to the releasing configuration (a), and the workpiece in the releasing pose (g2) attached to the gripper (the releasing pose has an offset above the workholder for better visualization) (b)
Fig. 7 Transformation chain after releasing (a), and the workpiece in the placing pose (p2), detached from the gripper and attached to the placing workholder (b)
It is noted that the robot is not necessarily commanded to reach exactly \( {{\boldsymbol{T}}^{\ast}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}1}^{\mathrm{base}} \) or \( {{\boldsymbol{T}}^{\ast}}_{\mathrm{wh}2,\mathrm{place}}^{\mathrm{base}} \). A constant offset can also be considered between the grasp and placing frame pairs, which is overcome by workpiece relative motion when being seized or dropped. However, for the sake of simplicity, no additional frames are introduced for these, and their effects are considered in the seizing and releasing tolerances.
Main functional requirements
To achieve a successful pick-and-place operation, multiple FRs need to be satisfied. FRs are arranged into arrays of feasible intervals, similarly to the nominal and tolerance parameters in formula (5) and are denoted with c. First and foremost, the workpiece needs to be placed into the placing pose within a given tolerance range in each direction (cplace). The allowed deviation for the placing pose is generally determined by the design of the assembly that contains the workpiece, or by the following process, for which the placed workpiece serves as an initial condition.
In addition, the FR for picking has to be met (cpick). Usually, this can be defined based on the gripping range within which the gripper can seize the workpiece. For example, in case of a vacuum gripper, the vacuum cup must be located within a specific planar surface; otherwise, the vacuum cup runs off the edge of this surface and the gripping will fail.
Depending on the task, additional frames (e.g., multiple feature frames for placing) and FRs can be defined. There can be different configurations and geometric artifacts for which the feasibility needs to be checked (e.g., to avoid collision in different poses). Furthermore, FRs can be formulated not only as simple intervals, but also as geometric constraints. These need to be specified on a case-by-case basis, allowing a more detailed evaluation of operation feasibility.
Tolerance influencing factors
The tolerance chain does not depend only on the dimensional conditions of the pick-and-place scenario. Multiple other factors were identified that have a significant effect on the tolerance chain. These are defined in each direction separately, as follows:
Leading feature (i.e., self-alignment and self-location): when there is relative motion between the components (during seizing or releasing), depending on the physical contact between specific faces, edges and vertices, the workpiece can be either free or guided.
Pose specification: the picking and placing poses can be either known or not known with sufficient precision in the design phase.
Servoing: servo technique(s) can be either applied or not applied for robot positioning in case of the picking and placing poses.
The decisions on the tolerance influencing factors are arranged into arrays of Boolean values (similarly as in formula (5)) denoted with k, while the tolerance modifying functions are denoted with F(k, T), where T is the transformation matrix on which the modification is applied. The separate application of these functions to each translational and rotational direction is possible due to the selected rotational matrix formulation in equation (3). The effect of tolerance influencing factors on the operation sequence is shown in Fig. 8.
Fig. 8 Simplified operation sequence (shown only in one direction) of flexible pick-and-place from the point of view of the tolerance influencing factors
Leading feature
Considering workpiece self-alignment and self-location, when being guided in a particular direction, a workpiece is limited to move between the guiding features in this direction. For example, if a workpiece is placed on a flat surface, it is free to move in the plane of the surface, but guided in the direction normal to the surface. Therefore, if the workpiece is dropped above the plane, its position normal to the plane will be limited by the plane as the two meet.
However, leading features are not exactly the same as a DoF restriction, as they have multiple effects. Leading features (i) guide the previous workpiece manipulation action, potentially reducing the accumulated tolerances up to the point of guiding; (ii) pose a new tolerance requirement to avoid workpiece wedging or damage while performing this previous action; and (iii) potentially pose such a requirement while performing the next action as well.
For (i), the leading feature saturates the tolerance accumulated up to the following member of the tolerance chain, which improves the achievable precision (e.g., when using chamfers or cones). This allows coarser precision while fitting, and then improves the precision by utilizing the geometric, physical contact-based guidance of the fixture. Effects (ii) and (iii) pose new FRs in order to avoid overconstrained workpiece manipulation. As a simplified example, in order to utilize the self-locating capability of a chamfer, the deviation must be low enough to keep the counterpart feature on the chamfer slope. It is noted that to determine actual self-alignment and self-location ranges, as well as the corresponding FRs, simulations or experiments are necessary, as their behavior is complex and depends on many factors (shape, material pair, forces, etc.) [36].
Leading features can be capable of handling different types of misalignments (axial, radial, lateral, or torsional) with a certain limit, depending on shape. Furthermore, these features can exist on the picking and placing side, on the gripper or on the workpiece itself. Leading features can be geometric (cones, chamfers) or kinematic (closing gripper fingers). Tolerance adjustment can happen during seizing and releasing, depending on the leading feature setting.
Although leading features are linked with the three effects above, these only manifest simultaneously during the seizing process. Before seizing, the previous workpiece manipulation step is not performed as part of the present assembly task (only (iii) applies), while at releasing, the following workpiece manipulation step is likewise not performed as part of the present assembly task (only (i) and (ii) apply). Moreover, as mentioned earlier, these effects are influenced by the contact transients and need to be considered accordingly.
The presence of leading features is represented with Boolean arrays. These are ka,wh1 when the workpiece lies on the picking workholder before seizing, ka,gr while the workpiece is being seized by the gripper, and ka,wh2 when it is being released onto the placing workholder. Each of these determines whether a corresponding FR is present or not. These are ca,wh1, ca,gr, and ca,wh2, which often replace the original requirements (cpick and cplace) with looser ones. On the other hand, only ka,gr and ka,wh2 have an effect on the aggregated tolerances. The corresponding functions are Fa,gr(ka,gr, T) and Fa,wh2(ka,wh2, T), which return the original transformation if there is no leading feature; however, they saturate the values in the guided directions to the corresponding ranges (agr, awh2).
For example, if a workpiece is guided during seizing in ξ and η but not in ζ, then the corresponding angle values are calculated from the original rotational matrix, and these get saturated, while the original angle value is kept in the ζ direction. Then, based on these values, the rotation and the whole transformation matrix are recalculated. When applied in a single direction (x), this can be formulated as:
$$ {\mathbf{F}}_{\mathrm{a},\mathrm{gr}}\left({\boldsymbol{k}}_{\mathrm{a},\mathrm{gr}},\boldsymbol{T}\left(x,\dots \right)\right)=\left\{\begin{array}{l}\boldsymbol{T}\left(x,\dots \right),\kern0.5em \mathrm{if}\ {k}_{\mathrm{a},\mathrm{gr},x}=\mathrm{false}\\ {}\boldsymbol{T}\left({a}_{\mathrm{gr},x}\cdot \mathrm{sat}\left(x/{a}_{\mathrm{gr},x}\right),\dots \right),\kern0.5em \mathrm{otherwise}\end{array}\right. $$
The formulation of Fa,wh2(ka,wh2, T) is completely analogous to formula (9).
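As a sketch, the leading-feature function of formula (9) can be written as a per-direction saturation of the screw parameters, reusing the scr() and transform() helpers sketched earlier (the names are illustrative):

```python
import numpy as np

def saturate_joint(k_guided, a_range, T):
    """Leading-feature function of formula (9): in every guided direction the
    corresponding screw parameter is clipped to the self-location /
    self-alignment range (equivalent to a * sat(x / a)); free directions are
    left unchanged. Uses the scr() and transform() helpers sketched earlier."""
    p = scr(T)
    for i in range(6):
        if k_guided[i]:
            p[i] = np.clip(p[i], -a_range[i], a_range[i])
    return transform(p, np.zeros(6))
```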
Pose specification
Considering pose specification, if a pose is sufficiently specified in the design phase, a corresponding precision value exists for it. The tolerance of this pose is thus part of the geometrical tolerance stack-up and has to be considered accordingly when checking feasibility. On the other hand, if the pose is not known precisely (typically in flexible scenarios), the design phase has to be carried out parametrically. Here, the pose has to be determined precisely during operation by means of metrology. In these cases, metrology precision needs to be considered when calculating the tolerance stack-up.
Whether or not the picking and placing pose are known with sufficient precision is captured with the Boolean arrays kpick and kplace, and the corresponding functions are Fpick(kpick, T) and Fplace(kplace, T). These functions return the input transformation matrix with the tolerance parameters corresponding to the designed pose (tp1 and twh2,place) and/or corresponding to the complete metrology process (tm,p1 and tm,p2). For example, if the workpiece picking position is only known in z direction but not in x and y directions, the translational part of the matrix will be set up using tm,p1,x, tm,p1,y and tp1,z. When applied in a single direction (x), this can be formulated as:
$$ {\mathbf{F}}_{\mathrm{p}\mathrm{ick}}\left({\boldsymbol{k}}_{\mathrm{p}\mathrm{ick}},\boldsymbol{T}\left(x,\dots \right)\right)=\left\{\begin{array}{l}\boldsymbol{T}\left({t}_{\mathrm{m},\mathrm{p}1,x},\dots \right),\kern0.5em \mathrm{if}\kern0.5em {k}_{\mathrm{p}\mathrm{ick},x}=\mathrm{false}\\ {}\boldsymbol{T}\left({t}_{\mathrm{p}1,x},\dots \right),\kern0.5em \mathrm{otherwise}\end{array}\right. $$
The formulation of Fplace(kplace, T) is completely analogous to formula (10).
Servoing
Lastly, servo techniques can also be applied, whereby certain elements of the tolerance chain are improved or bypassed entirely, enhancing the overall precision capability of the system. Servo control allows the online correction of robot positioning by realizing a closed-loop motion control, until a certain condition is met (e.g., a certain positioning precision is achieved based on a camera system). In these scenarios, similarly to the case of poses that are not known precisely, the precision of the metrology has to be taken into account when evaluating the tolerance chain.
Whether or not servoing is applied in case of picking and in case of placing is captured with the Boolean arrays ks,g1 and ks,g2, and the corresponding functions are Fs,g1(ks,g1, T) and Fs,g2(ks,g2, T). These functions enable the substitution of the accumulated tolerances to servo tolerances (ts,g1 and ts,g2), returning the original transformation matrix if servoing is not applied, and constructing a new transformation otherwise. For example, in case there is servo motion on the picking side in x and y direction, then the x and y components of the translational transformation will be changed to ts,g1,x and ts,g1,y, while the z component remains that of the original. When applied in a single direction (x), this can be formulated as:
$$ {\mathbf{F}}_{\mathrm{s},\mathrm{g}1}\left({\boldsymbol{k}}_{\mathrm{s},\mathrm{g}1},\boldsymbol{T}\left(x,\dots \right)\right)=\left\{\begin{array}{l}\boldsymbol{T}\left(x,\dots \right),\kern0.5em \mathrm{if}\kern0.5em {k}_{\mathrm{s},\mathrm{g}1,x}=\mathrm{false}\\ {}\boldsymbol{T}\left({t}_{\mathrm{s},\mathrm{g}1,x},\dots \right),\kern0.5em \mathrm{otherwise}\end{array}\right. $$
The formulation of Fs,g2(ks,g2, T) is completely analogous to formula (11).
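The pose-specification and servoing functions share the same per-direction selection pattern; a possible sketch of formulas (10) and (11), again reusing the helpers sketched earlier (names and signatures are illustrative), is:

```python
import numpy as np

def f_pick(k_known, t_designed, t_sensed, n):
    """F_pick of formula (10) (F_place is analogous): per direction, use the
    designed-pose tolerance where the pose is known and the metrology
    tolerance otherwise, then build the resulting transformation."""
    k = np.asarray(k_known, bool)
    t = np.where(k, np.asarray(t_designed, float), np.asarray(t_sensed, float))
    return transform(n, t)

def f_servo(k_servo, t_servo, T):
    """F_s of formula (11): keep the accumulated deviation where no servoing is
    applied and substitute the sampled servo tolerance where it is."""
    k = np.asarray(k_servo, bool)
    p = scr(T)
    p[k] = np.asarray(t_servo, float)[k]
    return transform(p, np.zeros(6))
```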
Model formulation
Having introduced every component necessary for the complete model, this section presents the detailed derivation of the tolerance analysis. First, the necessary input data are overviewed. Then, the transformation chain is constructed from the individual transformations between component frames and the tolerance influencing functions. Finally, the evaluation of FRs is presented to assess the feasibility of the pick-and-place operation.
Both the requirement and the available capacity side need to be provided in order to draw conclusions about task-equipment compatibility. Based on the tolerance influencing factors, the following Boolean information needs to be decided for every direction:
is there a leading feature on the picking workholder, gripper, and placing workholder (ka,wh1, ka,gr and ka,wh2)?
is the workpiece picking and placing pose known with sufficient precision (kpick and kplace)?
is picking and placing servo applied (ks,g1 and ks,g2)?
Quantitative data needs to be defined in form of:
manufacturing tolerances: workpiece (twp,pick, twp,place, twp,grasp), gripper (tgr,grasp), picking (twh1,ref, twh1,pick), and placing workholder (twh2,ref, twh2,place),
location tolerance: workpiece picking pose on picking workholder (tp1),
tolerance of the complete metrology and servo metrology methods (tm,p1, tm,p2, ts,g1, ts,g2),
manipulator positioning tolerance (tr,g1, tr,g2),
seizing and releasing tolerance (tg1, tg2),
self-location and self-alignment ranges of the gripper and the placing workholder (agr, awh2)
Finally, FRs need to be formulated generally in the form of feasible tolerance ranges (a possible grouping of all these inputs is sketched after the list):
the geometric relation between the gripper and the workpiece during seizing for successful picking (cpick); and between the workpiece and the placing workholder for successful placing (cplace),
additional FRs introduced by leading features for feasible picking (ca,wh1 and ca,gr) and placing (ca,wh2).
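One possible way to organize these inputs is sketched below; this is purely illustrative, the names mirror the paper's notation, and all values are zero or unbounded placeholders to be filled from the tables of the case study.

```python
import numpy as np

# Illustrative grouping of the model inputs listed above.
inputs = {
    # Boolean decisions per direction (x, y, z, xi, eta, zeta)
    "k": {name: np.zeros(6, dtype=bool)
          for name in ("a_wh1", "a_gr", "a_wh2", "pick", "place", "s_g1", "s_g2")},
    # symmetric tolerance half-ranges, keyed by the t_* symbols of the paper
    "t": {name: np.zeros(6)
          for name in ("wp_pick", "wp_place", "wp_grasp", "gr_grasp",
                       "wh1_ref", "wh1_pick", "wh2_ref", "wh2_place",
                       "p1", "m_p1", "m_p2", "s_g1", "s_g2",
                       "r_g1", "r_g2", "g1", "g2")},
    # self-location and self-alignment ranges of the gripper and placing workholder
    "a": {"gr": np.zeros(6), "wh2": np.zeros(6)},
    # FR half-ranges (np.inf marks an unconstrained direction)
    "c": {name: np.full(6, np.inf)
          for name in ("pick", "place", "a_wh1", "a_gr", "a_wh2")},
}
```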
Tolerance model
The main goal of the generalized tolerance model is to determine the transformation between the corresponding placing frames of the placing workholder and workpiece \( \left({\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right) \), for which the foremost tolerance requirement is defined. In order to achieve this parametric transformation, the tolerance chain is set up starting from the workpiece picking pose, up to the point when it settles on the placing workholder after releasing. The summary of transformation-related notations is given in Table 1.
Table 1 Summary of notations for homogeneous transformation matrices
The first step is to determine the grasp frame relation before seizing (see Fig. 4). This starts by determining the workpiece picking pose with respect to the base frame:
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{ick},\mathrm{p}1}^{\mathrm{base}}={\boldsymbol{T}}_{\mathrm{wh}1,\mathrm{ref}}^{\mathrm{base}}\cdotp {\boldsymbol{T}}_{\mathrm{wh}1,\mathrm{p}\mathrm{ick}}^{\mathrm{wh}1,\mathrm{ref}}\cdotp {\mathbf{F}}_{\mathrm{pick}}\left({\boldsymbol{k}}_{\mathrm{pick}},{\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{ick},\mathrm{p}1}^{\mathrm{wh}1,\mathrm{p}\mathrm{ick}}\right) $$
where \( {\boldsymbol{T}}_{\mathrm{wh}1,\mathrm{ref}}^{\mathrm{base}} \) contains the workholder location tolerance (twh1,ref), \( {\boldsymbol{T}}_{\mathrm{wh}1,\mathrm{pick}}^{\mathrm{wh}1,\mathrm{ref}} \) contains the workholder machining tolerances (twh1,pick), and \( {\mathbf{F}}_{\mathrm{pick}}\left({\boldsymbol{k}}_{\mathrm{pick}},{\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{ick},\mathrm{p}1}^{\mathrm{wh}1,\mathrm{p}\mathrm{ick}}\right) \) is the workpiece picking frame relative to the workholder picking frame including designed (tp1) and/or sensed parameters (tm,p1) depending on kpick. Next, the workpiece grasp frame is calculated:
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}1}^{\mathrm{base}}={\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{ick},\mathrm{p}1}^{\mathrm{base}}\cdotp {{\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{ick},\mathrm{p}1}^{\mathrm{wp},\mathrm{ref},\mathrm{p}1}}^{-1}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}1}^{\mathrm{wp},\mathrm{ref},\mathrm{p}1}, $$
where \( {\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{ick},\mathrm{p}1}^{\mathrm{wp},\mathrm{ref},\mathrm{p}1} \) and \( {\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}1}^{\mathrm{wp},\mathrm{ref},\mathrm{p}1} \) contain workpiece machining inaccuracies (twp,pick and twp,grasp, respectively). In the following, the gripper grasp frame is determined:
$$ {\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{base}}={\boldsymbol{T}}_{\mathrm{gr},\mathrm{ref},\mathrm{g}1}^{\mathrm{base}}\cdotp {\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{ref},\mathrm{g}1}, $$
where \( {\boldsymbol{T}}_{\mathrm{gr},\mathrm{ref},\mathrm{g}1}^{\mathrm{base}} \) contains the robot positioning inaccuracies (tr,g1) and \( {\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{ref},\mathrm{g}1} \) contains gripper machining tolerances (tgr,grasp). Then, the relation of the grasp frame pair is determined, together with the servo technique at picking:
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}1}^{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}={\mathbf{F}}_{\mathrm{s},\mathrm{g}1}\left({\boldsymbol{k}}_{\mathrm{s},\mathrm{g}1},{{\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{base}}}^{-1}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}1}^{\mathrm{base}}\right), $$
where Fs,g1(ks,g1, T) enables the substitution of the accumulated tolerances to servo tolerances (ts,g1) based on ks,g1. The next step is the seizing process. At this point, the workpiece is detached from the picking surface:
$$ {{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}={\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}1}^{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}1}, $$
where \( {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}1}={\boldsymbol{T}}_{\mathrm{wp},\mathrm{ref},\mathrm{g}1}^{\mathrm{wp},\mathrm{ref},\mathrm{p}1} \), and it describes the seizing transient, including the misalignment (tg1), introduced by the seizing action. Then, the workpiece is getting attached to the gripper (see Fig. 5):
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{r}\mathrm{asp},\mathrm{g}1}^{\mathrm{gr},\mathrm{g}\mathrm{r}\mathrm{asp},\mathrm{g}1}={\mathbf{F}}_{\mathrm{a},\mathrm{g}\mathrm{r}}\left({\boldsymbol{k}}_{\mathrm{a},\mathrm{g}\mathrm{r}},{{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{g}\mathrm{r}\mathrm{asp},\mathrm{g}1}^{\mathrm{gr},\mathrm{g}\mathrm{r}\mathrm{asp},\mathrm{g}1}\right) $$
Depending on the leading feature on the gripper (ka,gr), the misalignment between the grasp frames is reduced by the application of Fa,gr(ka,gr, T) to the corresponding self-alignment and self-locating range (agr). Next, the workpiece grasp frame is determined with respect to the gripper reference frame:
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{ref},\mathrm{g}1}={\boldsymbol{T}}_{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{ref},\mathrm{g}1}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1} $$
The result is the transformation between the seized workpiece and the gripper. This needs to be applied at the end of the last manipulator link, after controlling it to the releasing configuration (see Fig. 6). Then, the workpiece grasp frame is to be determined with respect to the base frame in the releasing configuration:
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}2}^{\mathrm{base}}={\boldsymbol{T}}_{\mathrm{gr},\mathrm{ref},\mathrm{g}2}^{\mathrm{base}}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}2}^{\mathrm{gr},\mathrm{ref},\mathrm{g}2}, $$
where \( {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}2}^{\mathrm{gr},\mathrm{ref},\mathrm{g}2}={\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{ref},\mathrm{g}1} \) and \( {\boldsymbol{T}}_{\mathrm{gr},\mathrm{ref},\mathrm{g}2}^{\mathrm{base}} \) contains the manipulator positioning inaccuracies (tr,g2). Next, the workpiece is detached from the gripper, as it is being released (see Fig. 7):
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}2}^{\mathrm{base}}={\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}2}^{\mathrm{base}}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}2}^{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}2}, $$
where \( {\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}2}^{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}2} \) describes the releasing transient, including the misalignment (tg2) introduced by the releasing action. Then, as the workpiece arrives at the placing pose, the workpiece placing frame needs to be determined:
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{base}}={\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}2}^{\mathrm{base}}\cdotp {{\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}2}^{\mathrm{wp},\mathrm{ref},\mathrm{p}2}}^{-1}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wp},\mathrm{ref},\mathrm{p}2}, $$
where \( {\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}2}^{\mathrm{wp},\mathrm{ref},\mathrm{p}2}={\boldsymbol{T}}_{\mathrm{wp},\mathrm{grasp},\mathrm{p}1}^{\mathrm{wp},\mathrm{ref},\mathrm{p}1} \), and \( {\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wp},\mathrm{ref},\mathrm{p}2} \) contains workpiece machining inaccuracies (twp,place). At this point, the placing workholder and the placing pose are to be introduced:
$$ {\boldsymbol{T}}_{\mathrm{wh}2,\mathrm{place}}^{\mathrm{base}}={\boldsymbol{T}}_{\mathrm{wh}2,\mathrm{ref}}^{\mathrm{base}}\cdotp {\mathbf{F}}_{\mathrm{place}}\left({\boldsymbol{k}}_{\mathrm{place}},{\boldsymbol{T}}_{\mathrm{wh}2,\mathrm{place}}^{\mathrm{wh}2,\mathrm{ref}}\right), $$
where \( {\boldsymbol{T}}_{\mathrm{wh}2,\mathrm{ref}}^{\mathrm{base}} \) contains the workholder location tolerances (twh2,ref), and \( {\mathbf{F}}_{\mathrm{place}}\left({\boldsymbol{k}}_{\mathrm{place}},{\boldsymbol{T}}_{\mathrm{wh}2,\mathrm{place}}^{\mathrm{wh}2,\mathrm{ref}}\right) \) contains the designed (twh2,place) and/or sensed parameters (tm,p2) based on kplace. Then, the placing frame pair relation is determined and the placing servo technique is applied (if any):
$$ {{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}={\mathbf{F}}_{\mathrm{s},\mathrm{g}2}\left({\boldsymbol{k}}_{\mathrm{s},\mathrm{g}2},{{\boldsymbol{T}}_{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}^{\mathrm{base}}}^{-1}\cdotp {\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{base}}\right), $$
where Fs,g2(ks,g2, T) corresponds to the placing servo, reducing the accumulated tolerances to the level of servo tolerance (ts,g2) based on ks,g2. Finally, the effect of the last leading feature is applied (if any):
$$ {\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}={\mathbf{F}}_{\mathrm{a},\mathrm{wh}2}\left({\boldsymbol{k}}_{\mathrm{a},\mathrm{wh}2},{{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right), $$
where Fa,wh2(ka,wh2, T) corresponds to the leading feature at placing, reducing the accumulated tolerances to the corresponding range (awh2) based on ka,wh2. With this, the placing frame pair and other geometric relations corresponding to FRs are set up parametrically.
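Collecting the above steps, one evaluation of the chain for a single tolerance instance can be sketched as below. The dictionary keys are illustrative; each entry of T is an individual 4x4 link matrix built with the transform() helper (the picking and placing frame entries already via f_pick/f_place of formula (10)), and saturate_joint/f_servo are the sketches of formulas (9) and (11).

```python
import numpy as np

def evaluate_chain(T, k, tol, a_rng):
    """One pass through formulas (12)-(24) for a single tolerance instance.
    Returns the frames whose screw parameters are checked against the FRs."""
    inv = np.linalg.inv
    # (12)-(13): workpiece picking and grasp frames in the base frame
    wp_pick_p1 = T["wh1_ref"] @ T["wh1_pick"] @ T["wp_pick_in_wh1_pick"]
    wp_grasp_p1 = wp_pick_p1 @ inv(T["wp_pick_in_wp_ref"]) @ T["wp_grasp_in_wp_ref"]
    # (14)-(15): gripper grasp frame and grasp-frame pair, with the picking servo
    gr_grasp_g1 = T["gr_ref_g1"] @ T["gr_grasp_in_gr_ref"]
    grasp_pair = f_servo(k["s_g1"], tol["s_g1"], inv(gr_grasp_g1) @ wp_grasp_p1)
    # (16)-(18): seizing transient, gripper leading feature, workpiece-gripper attachment
    pre_seize = grasp_pair @ T["seizing_transient"]
    wp_in_gr = T["gr_grasp_in_gr_ref"] @ saturate_joint(k["a_gr"], a_rng["gr"], pre_seize)
    # (19)-(21): releasing configuration, releasing transient, workpiece placing frame
    wp_grasp_p2 = T["gr_ref_g2"] @ wp_in_gr @ T["releasing_transient"]
    wp_place_p2 = wp_grasp_p2 @ inv(T["wp_grasp_in_wp_ref"]) @ T["wp_place_in_wp_ref"]
    # (22)-(24): placing workholder frame, placing servo, workholder leading feature
    wh2_place = T["wh2_ref"] @ T["wh2_place_in_wh2_ref"]
    pre_place = f_servo(k["s_g2"], tol["s_g2"], inv(wh2_place) @ wp_place_p2)
    place_pair = saturate_joint(k["a_wh2"], a_rng["wh2"], pre_place)
    return {"pick": grasp_pair, "a_gr": pre_seize,
            "a_wh2": pre_place, "place": place_pair}
```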
Evaluation of functional requirements
To achieve a successful pick-and-place operation, all of the considered FRs need to be satisfied simultaneously. First, the variation in the workpiece placing pose (relative to the placing workholder) is checked against the allowed deviation range cplace, in each direction. If this is not fulfilled, leading features (e.g., chamfering) need to be applied, or the precision needs to be enhanced in the form of tolerance improvement or servo techniques. Here, cplace defines a feasible tolerance region. Assuming symmetric ranges, this can be written in a single direction (x) as:
$$ \left|\mathbf{scr}{\left({\boldsymbol{T}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right)}_x\right|\le {c}_{\mathrm{place},x} $$
Next, the FR for successful workpiece picking (cpick) is checked. This can be written in a single direction (x) as:
$$ \left|\mathbf{scr}{\left({\boldsymbol{T}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{p}1}^{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}\right)}_x\right|\le {c}_{\mathrm{pick},x} $$
Then, the FRs posed by the leading features need to be assessed (ca,wh1, ca,gr and ca,wh2) to avoid overconstraining the system during contact transition. These can be formulated in a single direction (x) as follows:
$$ \left|\mathbf{scr}{\left({{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{g}\mathrm{rasp},\mathrm{g}1}^{\mathrm{gr},\mathrm{g}\mathrm{rasp},\mathrm{g}1}\right)}_x\right|\le {c}_{\mathrm{a},\mathrm{wh}1,x},\kern0.5em \mathrm{if}\kern0.5em {k}_{\mathrm{a},\mathrm{wh}1,x}=\mathrm{true} $$
$$ \left|\mathbf{scr}{\left({{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{g}\mathrm{r}\mathrm{asp},\mathrm{g}1}^{\mathrm{gr},\mathrm{g}\mathrm{r}\mathrm{asp},\mathrm{g}1}\right)}_x\right|\le {c}_{\mathrm{a},\mathrm{g}\mathrm{r},x},\mathrm{if}\kern0.5em {k}_{\mathrm{a},\mathrm{g}\mathrm{r},x}=\mathrm{true} $$
$$ \left|\mathbf{scr}{\left({{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right)}_x\right|\le {c}_{\mathrm{a},\mathrm{wh}2,x},\mathrm{if}\kern0.5em {k}_{\mathrm{a},\mathrm{wh}2,x}=\mathrm{true} $$
The corresponding evaluations can be performed via Monte Carlo simulation. By substituting tolerance values sampled from the given intervals into the transformation chain, any transformation can be evaluated to check the fulfillment of the FRs.
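A minimal Monte Carlo sketch of this evaluation, assuming the helpers above and interpreting each symmetric tolerance half-range as a 3σ bound (an assumption that matches the 99.72% confidence level used later in the case study), could look as follows; sample_frames is a user-supplied closure that draws one tolerance instance, builds the link matrices, and runs the chain sketch above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tolerance(half_range):
    """Draw one tolerance instance: each symmetric half-range is treated as a
    3-sigma bound of a zero-mean normal distribution (an assumption)."""
    return rng.normal(0.0, np.asarray(half_range, float) / 3.0)

def satisfies(T_check, c_range):
    """One FR of the form |scr(T)_i| <= c_i; np.inf marks a free direction."""
    return bool(np.all(np.abs(scr(T_check)) <= np.asarray(c_range, float)))

def monte_carlo(n_samples, sample_frames, fr_ranges):
    """sample_frames() returns the frame dict of one sampled chain evaluation;
    fr_ranges maps the same keys to FR half-ranges. Returns the failure rate."""
    failures = 0
    for _ in range(n_samples):
        frames = sample_frames()
        if not all(satisfies(frames[key], fr_ranges[key]) for key in fr_ranges):
            failures += 1
    return failures / n_samples
```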
Case study

The proposed tolerance model is presented through a flexible pick-and-place case study. A physical workcell was set up as an experimental demonstrator cell for palletizing workpieces lying in a semi-structured arrangement. Separated workpieces, lying in one of their stable poses, are manipulated from a light table to the corresponding workpiece pallet. The manipulator used is a UR5 robot arm equipped with a Robotiq 2F-85 finger gripper and an IDS XS camera. The actual workpiece picking poses are determined using this 2D camera, with the help of the backlighting capability of the light table. The workcell setup, with the applied frames, is shown in Figs. 2 and 3. The preparation and operation of the workcell is presented in more detail in [7].
For the sake of simplicity, the tolerance analysis is regarded as a planar problem, considering only the x, y, and ζ components. This consideration is viable, since the workpiece picking and placing pose correspond to the same workpiece stable pose, i.e., no re-gripping and no re-orientation of the workpiece is necessary, apart from rotating around the vertical axis. The screw parameters x, y, and ζ are represented as an array, where x and y components are translations parallel to the picking plane and ζ is the rotation around the axis normal to the picking plane (vertical axis). The decision parameters are Boolean, while dimensions, tolerance and FR ranges are given in mm for translational and in ° (deg) for rotational parameters. For the sake of conciseness, tolerance and FR ranges are considered symmetric, closed intervals with zero mean value, and are represented with a single value with a ± sign.
First, the decision parameters are defined. The workpieces lay on the light table in a semi-structured way; their pose is not known precisely in x, y and ζ directions. These have to be resolved using the camera fixed on the robot. However, the placing poses are known with sufficient precision in the design phase. Since the workpieces lay freely on a table, they are not guided in the plane. On the other hand, the finger gripper guides the workpiece in y and ζ directions (see Fig. 3) and the placing pose is guided in all three directions via a chamfered centering pin and chamfered columns (shown in Fig. 14). Lastly, servoing is applied neither in the picking nor in the placing phase. The corresponding factors are summarized in Table 2.
Table 2 Tolerance influencing factors
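For the planar case described above, the decisions of Table 2 can be encoded, for example, as Boolean arrays over (x, y, ζ); the ordering and the dictionary keys are assumptions of this sketch, while the True/False values follow the description in the preceding paragraph.

```python
import numpy as np

# Planar (x, y, zeta) encoding of the tolerance influencing factors;
# True marks a guided / known / servoed direction.
factors = {
    "k_a_wh1": np.array([False, False, False]),  # workpiece lies freely on the light table
    "k_a_gr":  np.array([False, True,  True ]),  # finger gripper guides in y and zeta
    "k_a_wh2": np.array([True,  True,  True ]),  # chamfered pin and columns guide placing
    "k_pick":  np.array([False, False, False]),  # picking pose resolved by the camera
    "k_place": np.array([True,  True,  True ]),  # placing pose known from the design phase
    "k_s_g1":  np.array([False, False, False]),  # no picking servo
    "k_s_g2":  np.array([False, False, False]),  # no placing servo
}
```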
Next, the nominal and tolerance parameters are defined; these are listed in Table 3 (values with v1 and v2 subscripts are discussed in Section 5.2). The only relevant nominal dimension in the tolerance model corresponds to the workpiece grasp frame (nwp,grasp) relative to the workpiece reference. Other dimensions are either zero or they cancel out in this particular scenario. The positioning precision of the robot is given, together with the self-positioning and self-alignment ranges of the placing workholder, as well as the manufacturing tolerances of the workpiece, gripper, and picking and placing workholders. It is noted that the tolerance of the workpiece picking and placing frame (twp,pick and twp,place) relative to the workpiece reference is zero, as these all coincide. Furthermore, the tolerances of the picking and placing workholder references (twh1,ref and twh2,ref) are also considered zero, because the positions of the corresponding picking and placing frames were measured directly relative to the workcell reference frame.
Table 3 Nominal values, self-location, self-alignment, and tolerance ranges
The remaining tolerance parameters (metrology, seizing, and releasing) had to be determined experimentally. The precise and exhaustive exploration of these parameters is not in the scope of this paper; nevertheless, simple measurement setups were used and the parameters were overestimated to obtain acceptable input data for the simulation. Accordingly, to keep the paper concise, only a brief overview of the experiments is given.
The combined tolerance of the metrology system and image processing algorithms (tm,p1) was measured by setting a workpiece in a pose on the light table using different 3D-printed temporary locators (see Fig. 9a, b), and capturing images of the workpiece. The captured images have a resolution of 1280 × 720 pixels. After every captured image, the robot (with the mounted camera) was moved to a random picking configuration, then back to the image-capturing configuration, to emulate the real operation and add disturbance to the vision system. The workpiece pose was evaluated on each image against the originally set up pose, using the image processing algorithm employed in the demonstrator (see Fig. 10). The vision-based localization employs feature recognition and homography transformation (the algorithm is described in detail in [7]). This process was repeated using multiple workpiece setups.
Fig. 9 Workpiece setups on the light table with different temporary locators; setup for the measurement of tm,p1 (a, b), and setup for the measurement of tg1, agr, cpick, and ca,gr (c)
Fig. 10 Recognized workpiece features, and localization based on the camera image (for different workpiece setups)
The seizing tolerance (tg1) was estimated by seizing a similarly positioned workpiece multiple times (using the temporary locator shown in Fig. 9c) with disturbed seizing configurations, and measuring the variation of the seized pose in the free direction with a digital caliper (0.01 mm resolution), relative to the gripper fingers. This process is shown in Fig. 11. The self-positioning and self-alignment range of the gripper (agr) was measured in the same setup in the guided directions, using the same camera system. However, the image processing algorithm was refined for more accurate results; the achieved precision of the camera-based measurement system on the same dataset is ±0.07 mm, ±0.06 mm, and ±0.17° in x, y, and ζ, respectively.
Fig. 11 Measurement process of the seizing tolerance after the workpiece is set up, with a disturbed seizing configuration
Finally, the releasing tolerance (tg2) was measured by repeatedly fixing the workpiece manually into the gripper using a different temporary locator (see Fig. 12), then releasing it on the light table. The variation in the releasing pose was measured, again, using the camera-based measurement system.
Fig. 12 Setting up the workpiece into the gripper using a temporary locator
FRs are defined symmetrically for picking and placing, and additionally for the leading features at the gripper and the placing workholder. These are listed in Table 4. The FR ranges were validated in the physical workcell. The requirements corresponding to seizing and the gripper alignment (cpick and ca,gr) were checked in the seizing tolerance measurement setup, to establish feasible extrema for the FR ranges. The leading feature on the placing side (awh2) is set so that it fulfills the placing requirement; therefore, cplace is automatically satisfied if ca,wh2 is fulfilled. That is, if after releasing the workpiece, it is guided to the placing pose by gravity and the leading features, the placing pose is satisfactory. Otherwise, if the workpiece stays on top of the guiding features, does not reach the bottom support plane, or tilts out of the workholder, the placing pose is not satisfactory. Consequently, the analysis focuses on ca,wh2 instead of the original cplace requirement. The feasibility bounds for ca,wh2 were evaluated experimentally by manually fixing the workpiece into the gripper, once again with a temporary locator (the same way as in Fig. 12), moving the robot to a disturbed releasing configuration, and releasing the workpiece. Two cases with different releasing configurations are shown in Fig. 13, one with a correct, the other with a failed placement. In the determined deviation region, the releasing process was consistently successful for each sampled combination of deviations.
Table 4 Defined FRs
Fig. 13 Experimental setup for the determination of the ca,wh2 feasibility bounds
By applying the proposed tolerance model with the above input data, tolerance analysis can be performed for the pick-and-place scenario. To get a comprehensible result, a Monte Carlo simulation was carried out. The model was evaluated for 20,000 tolerance set instances, sampled from the given regions assuming a normal distribution at the 99.72% confidence level. To visualize the deviations, the 2D projection of the geometry was triangulated and transformed based on the results, the transformed shapes were overlaid, and their union was computed. In order to reduce the number of computations, the extreme displacements (with combined translations and rotation) were selected heuristically. The final union results in a boundary within which the actual workpiece will lie according to the tolerance data. Since there are two different aligning features (chamfered orienting columns and a chamfered positioning pin), two different 2D projection boundaries of the workpiece were calculated. Based on the resultant transformations, the worst tolerance stack-up case can be found in each direction. The results are shown in Table 5 and in Fig. 14. The calculation details are summarized in Table 6.
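A possible sketch of the boundary-region computation, assuming the shapely package and a workpiece outline given in the workpiece reference frame (rotation is therefore taken about the origin, and ζ is given in degrees), is:

```python
from shapely.geometry import Polygon
from shapely.affinity import rotate, translate
from shapely.ops import unary_union

def deviation_boundary(outline_xy, deviations):
    """Union of the planar workpiece projection transformed by the sampled
    (x, y, zeta) deviations, giving a boundary region like the one in Fig. 14.
    outline_xy: list of (x, y) vertices of the 2D projection; deviations:
    iterable of (x, y, zeta) triples, zeta in degrees. In practice only the
    heuristically selected extreme displacements need to be transformed."""
    base = Polygon(outline_xy)
    shapes = [translate(rotate(base, z, origin=(0.0, 0.0)), xoff=x, yoff=y)
              for x, y, z in deviations]
    return unary_union(shapes)
```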
Table 5 Result of the Monte Carlo simulation on each FR
Fig. 14 The top view of the workpiece pallet with the workpiece projection, worst-case boundary region, and geometric boundaries of the aligning features for the whole workpiece-chamfered columns (a), and the workpiece bottom hole-chamfered positioning pin relationships (b)
Table 6 Experimental results for the whole workpiece projection on an Intel I5-6440HQ CPU @ 2.60-GHz computer under Windows 10
According to the results, ca,wh2 is not satisfied in the worst deviation combination, which is also visible in Fig. 14, as the boundaries of the leading features and the deviation regions of the workpiece intersect in multiple places. When evaluating each deviation result of the Monte Carlo simulation, 2537 of the 20,000 cases do not satisfy ca,wh2, while the other FRs are always satisfied. With this, a failure rate of 12.69% is predicted for the placing part of the operation.
For comparison, a sequence of physical operations was executed to check the actual failure events. One pick-and-place cycle included the palletizing of 4 workpieces onto the 4-slot workpiece pallet. Altogether, 50 cycles were run, meaning 200 workpiece seizing and releasing processes. Of the 200 operations, releasing failure occurred in 7 cases (3.5%), where the workpiece did not fall into the placing pose, but stayed on top of the leading features or tilted in the wrong direction. Apart from these, there were no failure events during the operation, which matches the simulation results for the other FRs. Although the simulation and the experiment show a similar tendency in releasing failures, the gap between the results is significant. The reason behind this is that the combined leading features are represented with a single FR, which is a more conservative, more cautious approach. As this representation is not well suited to this particular case, it is also possible to separate the FR into multiple checking points.
In this case, three different constraints can be formulated for feasible workpiece placing. Firstly, the workpiece-centering pin connection is described. Based on the diameter and chamfering of the workpiece and the workholder, the FR for the planar location (x and y) of the workpiece reference can be formulated as follows:
$$ \sqrt{\mathbf{scr}{\left({{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right)}_x^2+\mathbf{scr}{\left({{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right)}_y^2}\le \left(16.5-12.77\right)/2 $$
The remaining two FRs correspond to the chamfered columns of the workholder, and the corresponding vertices (designated as v1 and v2) on the workpiece that first make contact with the chamfers' slope. These vertices also carry deviations relative to the workpiece reference. The corresponding parameters are listed in Table 3. The FRs can be formulated for the y-directional deviation (in the workpiece reference frame) of v1 and v2 by applying planar transformation to the vertices according to the deviation results from the simulation. This results in the following inequalities:
$$ \mid \left({n}_{\mathrm{wp},\mathrm{v}1,y}+{t}_{\mathrm{wp},\mathrm{v}1,y}\right)\cos \left(\mathbf{scr}{\left({{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right)}_{\zeta}\right)+\left({n}_{\mathrm{wp},\mathrm{v}1,x}+{t}_{\mathrm{wp},\mathrm{v}1,x}\right)\sin \left(\mathbf{scr}{\left({{\boldsymbol{T}}^{\prime}}_{\mathrm{wp},\mathrm{p}\mathrm{lace},\mathrm{p}2}^{\mathrm{wh}2,\mathrm{p}\mathrm{lace}}\right)}_{\zeta}\right)\mid \le 13.25 $$
The inequality for v2 is analogous, with the v1 parameters replaced by those of v2. Now, the original FR (corresponding to ca,wh2) can be replaced with inequalities (30)–(32). All three inequalities need to be satisfied simultaneously to achieve feasible workpiece placement. These requirements were also experimentally verified in the same setup as the measurement of ca,wh2, with different sampled deviations in the releasing configuration.
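As a sketch of checking these refined FRs for one simulated deviation (reusing scr() from the earlier sketch; the numeric bound for v2 is assumed here to equal that of v1, which is not stated explicitly above):

```python
import numpy as np

def refined_placing_check(T_prime, v1, v2, v2_bound=13.25):
    """Refined placing FRs replacing c_a,wh2: the pin-clearance bound of
    inequality (30) and the y-direction bounds on the contact vertices v1 and
    v2 (inequalities (31)-(32)). T_prime is the pre-attachment placing frame
    pair; v1 and v2 are planar (x, y) vertex coordinates including their own
    deviations. The v2 bound is an assumption of this sketch."""
    p = scr(T_prime)
    x, y, zeta = p[0], p[1], p[5]
    pin_ok = np.hypot(x, y) <= (16.5 - 12.77) / 2.0          # inequality (30)
    def rotated_y(v):                                        # y of a rotated vertex
        return v[1] * np.cos(zeta) + v[0] * np.sin(zeta)
    columns_ok = abs(rotated_y(v1)) <= 13.25 and abs(rotated_y(v2)) <= v2_bound
    return bool(pin_ok and columns_ok)
```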
After checking the deviation results of the Monte Carlo simulation against the newly formulated FRs, only 961 error cases were observed, reducing the failure rate to 4.81%. This is much closer to the 3.5% failure rate obtained from the physical execution of the system. The remaining gap between the results is most probably due to the overestimated tolerance values and the intricacies of leading feature mechanics, contact transition, and the dynamics of the released workpiece. Nevertheless, the experiments show a close relationship with the simulation results, which validates the proposed method.
Conclusion and future work
This paper presented a novel tolerance model for evaluating the feasibility of robotic pick-and-place operations from the precision perspective. The proposed model aids robotic workcell developers in the selection of cell components, applied methods, and allocation of tolerances. Beyond the positioning precision of the manipulator and the manufacturing tolerances of the components, this tool allows the assessment of robotic workcell FRs considering tolerances introduced by the manipulation and metrology processes. The tolerance stack-up is set up in the form of a transformation chain. By substituting the tolerance values of the relevant frames along the tolerance chain, and comparing the aggregated tolerances to the feasible tolerance ranges, the realization of FRs can be validated.
First, the basic concept and main considerations were introduced, together with the structural setup of the transformation chain, the essential pick-and-place-related FRs, and the additional tolerance influencing factors. Then, the complete model formulation was presented, including the necessary input data, the composition of the tolerance chain, and the evaluation of the fulfillment of FRs. Finally, the approach was implemented and validated in case of a physical robotic demonstrator cell, through a flexible pick-and-place application.
Future work will include more exhaustive experimentation on real workcells to analyze and characterize specific tolerance inputs, such as seizing and releasing tolerances, investigate the effect of contact transitions, and identify further possible failure modes, which can potentially influence the computed tolerance chain. The other main goal is to generalize the model to different robotic operation types, such as screwing or welding. A further possible research direction is to consider positioning tolerances on the individual robot links. If these parameters are available from the manufacturer or are measured, then FRs along the robot trajectories could also be formulated (e.g., smallest distance between particular objects for offline path planners). Moreover, modular robots could also be considered in the tolerance model.
Data availability

The data that support the findings of this study are available from the corresponding author, upon reasonable request.
Code availability
The custom code that supports the findings of this study is available from the corresponding author, upon reasonable request.
Dantan J-Y, Qureshi A-J (2009) Worst-case and statistical tolerance analysis based on quantified constraint satisfaction problems and Monte Carlo simulation. Comput Aided Des 41:1–12. https://doi.org/10.1016/j.cad.2008.11.003
Shah JJ, Ameta G, Shen Z, Davidson J (2007) Navigating the tolerance analysis maze. Comput-Aided Des Appl 4:705–718. https://doi.org/10.1080/16864360.2007.10738504
Thornton AC (1999) A mathematical framework for the key characteristic process. Res Eng Des 11:145–157. https://doi.org/10.1007/s001630050011
Edmondson NF, Redford AH (2002) Generic flexible assembly system design. Assem Autom 22:139–152. https://doi.org/10.1108/01445150210423189
International Organization for Standardization (2019) ISO 10303-105:2019 - Industrial automation systems and integration - product data representation and exchange - part 105: integrated application resource: kinematics. Geneva, Switzerland
Yan H, Wu X, Yang J (2015) Application of Monte Carlo method in tolerance analysis. Procedia CIRP 27:281–285. https://doi.org/10.1016/j.procir.2015.04.079
Tipary B, Erdős G (2021) Generic development methodology for flexible robotic pick-and-place workcells based on digital twin. Robot Comput-Integr Manuf 71:102140. https://doi.org/10.1016/j.rcim.2021.102140
Whitney DE (2004) Mechanical assemblies: their design, manufacture, and role in product development. Oxford University Press
Chase KW, Greenwood WH (1988) Design issues in mechanical tolerance analysis. Manuf Rev 1:50–59
Teeravaraprug J (2007) A comparative study of probabilistic and worst-case tolerance synthesis. Eng Lett 14
Marziale M, Polini W (2010) Tolerance analysis: a new model based on variational solid modelling. Am Soc Mech Eng Digit Collect 383–92. https://doi.org/10.1115/ESDA2010-24437.
Desrochers A (2003) A CAD/CAM representation model applied to tolerance transfer methods. J Mech Des 125:14–22. https://doi.org/10.1115/1.1543974
Desrochers A, Rivière A (1997) A matrix approach to the representation of tolerance zones and clearances. Int J Adv Manuf Technol 13:630–636. https://doi.org/10.1007/BF01350821
Chase KW, Gao J, Magleby SP (1995) General 2-D tolerance analysis of mechanical assemblies with small kinematic adjustments. J Des Manuf 5:263–274
Bourdet P, Mathieu L, Lartigue C, Ballu A (1996) The concept of small displacement Torsor in metrology. In: Ciarlini P, Cox MG, Pavese F, Richter D, (eds) Adv. Math. Tools Metrol. II, vol. 40, World Scientific Publishing Company, p. 110–22.
Ghie W, Laperrière L, Desrochers A (2003) A unified Jacobian-Torsor model for analysis in computer aided tolerancing. In: Gogu G, Coutellier D, Chedmail P, Ray P, (eds) Recent Adv. Integr. Des. Manuf. Mech. Eng., Dordrecht: Springer Netherlands, p. 63–72. https://doi.org/10.1007/978-94-017-0161-7_7
Zou Z, Morse EP (2002) Assembleability analysis using GapSpace model for 2D mechanical assembly. Proc. ASME 2002 Int. Des. Eng. Tech. Conf. Comput. Inf. Eng. Conf. 3:329–336. https://doi.org/10.1115/DETC2002/DFM-34187
Davidson JK, Mujezinovic A, Shah JJ (2002) A new mathematical model for geometric tolerances as applied to round faces. J Mech Des 124:609–622. https://doi.org/10.1115/1.1497362
Giordano M, Pairel E, Samper S (1999) Mathematical representation of tolerance zones. In: van Houten F, Kals H (eds) Glob. Consistency Toler. Dordrecht: Springer Netherlands, pp 177–186. https://doi.org/10.1007/978-94-017-1705-2_18
Teissandier D, Couetard Y, Delos V (1999) Operations on polytopes: application to tolerance analysis. In: van Houten F, Kals H (eds) Glob. Consistency Toler. Dordrecht: Springer Netherlands, pp 425–434. https://doi.org/10.1007/978-94-017-1705-2_43
Schleich B, Anwer N, Mathieu L, Wartzack S (2014) Skin model shapes: a new paradigm shift for geometric variations modelling in mechanical engineering. Comput Aided Des 50:1–15. https://doi.org/10.1016/j.cad.2014.01.001
Cao Y, Liu T, Yang J (2018) A comprehensive review of tolerance analysis models. Int J Adv Manuf Technol 97:3055–3085. https://doi.org/10.1007/s00170-018-1920-2
Schleich B, Anwer N, Zhu Z, Qiao L, Mathieu L, Wartzack S (2014) A comparative study on tolerance analysis approaches. In: Howard TJ, Eifler T (eds) Proc. Int. Symp. Robust Des. - ISoRD14. Copenhagen, Denmark, pp 29–39. https://doi.org/10.4122/dtu:2084
Qin Y, Qi Q, Lu W, Liu X, Scott PJ, Jiang X (2018) A review of representation models of tolerance information. Int J Adv Manuf Technol 95:2193–2206. https://doi.org/10.1007/s00170-017-1352-4
Schleich B, Wartzack S (2016) A quantitative comparison of tolerance analysis approaches for rigid mechanical assemblies. Procedia CIRP 43:172–177. https://doi.org/10.1016/j.procir.2016.02.013
Chen H, Jin S, Li Z, Lai X (2014) A comprehensive study of three dimensional tolerance analysis methods. Comput Aided Des 53:1–13. https://doi.org/10.1016/j.cad.2014.02.014
Desrochers A (1999) Modeling three dimensional tolerance zones using screw parameters. 25th Des Autom Conf vol. 1, Las Vegas, Nevada, USA, p. 895–903. https://doi.org/10.1115/DETC99/DAC-8587
Slocum AH (1992) Precision machine design. Society of Manufacturing Engineers
Polini W, Corrado A (2019) A general model to estimate hole location deviation in drilling: the contribution of three error sources. Int J Adv Manuf Technol 102:545–557. https://doi.org/10.1007/s00170-018-03273-x
Mei B, Liang Z, Zhu W, Ke Y (2021) Positioning variation synthesis for an automated drilling system in wing assembly. Robot Comput-Integr Manuf 67:102044. https://doi.org/10.1016/j.rcim.2020.102044
Corrado A, Polini W (2020) Tolerance analysis tools for fixture design: a comparison. Procedia CIRP 92:112–117. https://doi.org/10.1016/j.procir.2020.05.174
Corrado A, Polini W (2017) Manufacturing signature in variational and vector-loop models for tolerance analysis of rigid parts. Int J Adv Manuf Technol 88:2153–2161. https://doi.org/10.1007/s00170-016-8947-z
Ferreras-Higuero E, Leal-Muñoz E, García de Jalón J, Chacón E, Vizán A (2020) Robot-process precision modelling for the improvement of productivity in flexible manufacturing cells. Robot Comput-Integr Manuf 65:101966. https://doi.org/10.1016/j.rcim.2020.101966
Doydum C, Duke PN (1991) Use of Monte Carlo simulation to select dimensions, tolerances, and precision for automated assembly. J Manuf Syst 10:209–222. https://doi.org/10.1016/0278-6125(91)90034-Y
Baydar C, Saitou K (2004) Off-line error prediction, diagnosis and recovery using virtual assembly systems. J Intell Manuf 15:679–692. https://doi.org/10.1023/B:JIMS.0000037716.69868.d0
International Organization for Standardization (1998) ISO 9283:1998 - Manipulating industrial robots - performance criteria and related test methods. Geneva, Switzerland
Mihelj M, Bajd T, Ude A, Lenarčič J, Stanovnik A, Munih M et al (2019) Accuracy and repeatability of industrial manipulators. In: Robotics, Cham. Springer International Publishing, pp 231–241. https://doi.org/10.1007/978-3-319-72911-4_15
Nubiola A (2014) Contribution to improving the accuracy of serial robots. PhD Thesis. École de Technologie Supérieure
Mooring B, Roth ZS, Driels MR (1991) Fundamentals of manipulator calibration. Wiley, New York
Wagner M, Morehouse J, Melkote S (2009) Prediction of part orientation error tolerance of a robotic gripper. Robot Comput-Integr Manuf 25:449–459. https://doi.org/10.1016/j.rcim.2008.02.006
Suyama A, Aiyama Y (2016) Influence of grasping position to robot assembling task. Proc. ISR 2016 47st Int. Symp. Robot., Munich, Germany, p. 1–6
ElMaraghy HA, ElMaraghy WH, Knoll L (1988) Design specification of parts dimensional tolerance for robotic assembly. Comput Ind 10:47–59. https://doi.org/10.1016/0166-3615(88)90047-4
De la Escalera A, Armingol JM (2010) Automatic chessboard detection for intrinsic and extrinsic camera Parameter Calibration. Sensors 10:2027–2044. https://doi.org/10.3390/s100302027
Lourakis M, Zabulis X (2013) Model-based pose estimation for rigid objects. In: Chen M, Leibe B, Neumann B (eds) Comput. Vis. Syst., Berlin. Heidelberg: Springer, pp 83–92. https://doi.org/10.1007/978-3-642-39402-7_9
Collet A, Martinez M, Srinivasa SS (2011) The MOPED framework: object recognition and pose estimation for manipulation. Int J Robot Res 30:1284–1306. https://doi.org/10.1177/0278364911401765
Kumbla NB, Thakar S, Kaipa KN, Marvel J, Gupta SK (2018) Handling perception uncertainty in simulation-based singulation planning for robotic bin picking. J Comput Inf Sci Eng 18:1–16. https://doi.org/10.1115/1.4038954
Kaipa KN, Kankanhalli-Nagendra AS, Kumbla NB, Shriyam S, Thevendria-Karthic SS, Marvel JA, Gupta SK (2016) Addressing perception uncertainty induced failure modes in robotic bin-picking. Robot Comput-Integr Manuf 42:17–38. https://doi.org/10.1016/j.rcim.2016.05.002
Chaumette F, Hutchinson S (2007) Visual servo control. II. Advanced approaches [Tutorial]. IEEE Robot Autom Mag 14:109–118. https://doi.org/10.1109/MRA.2007.339609
Chaumette F, Hutchinson S (2006) Visual servo control. I. Basic approaches. IEEE Robot Autom Mag 13:82–90. https://doi.org/10.1109/MRA.2006.250573
Ma Y, Liu X, Zhang J, Xu D, Zhang D, Wu W (2020) Robotic grasping and alignment for small size components assembly based on visual servoing. Int J Adv Manuf Technol 106:4827–4843. https://doi.org/10.1007/s00170-019-04800-0
Chang W-C (2018) Robotic Assembly of smartphone back shells with eye-in-hand visual servoing. Robot Comput-Integr Manuf 50:102–113. https://doi.org/10.1016/j.rcim.2017.09.010
Song J, Chen Q, Li Z (2021) A peg-in-hole robot assembly system based on Gauss mixture model. Robot Comput-Integr Manuf 67:101996. https://doi.org/10.1016/j.rcim.2020.101996
Bi ZM, Zhang WJ, Chen I-M, Lang SYT (2007) Automated generation of the D–H parameters for configuration design of modular manipulators. Robot Comput-Integr Manuf 23:553–562. https://doi.org/10.1016/j.rcim.2006.02.014
Open access funding provided by ELKH Institute for Computer Science and Control. Work in this paper has been in part funded under project number ED_18-22018-0006, supported by the National Research, Development and Innovation Fund of Hungary, financed under the (publicly funded) funding scheme according to Section 13. §(2) of the Scientific Research, Development and Innovation Act, and in part by the Ministry for Innovation and Technology and the National Research, Development and Innovation Office within the framework of the National Lab for Autonomous Systems.
Institute for Computer Science and Control (SZTAKI), Eötvös Loránd Research Network (ELKH), Budapest, Hungary
Bence Tipary & Gábor Erdős
Department of Manufacturing Science and Engineering, Budapest University of Technology and Economics, Budapest, Hungary
Bence Tipary
Gábor Erdős
B Tipary: conceptualization, code writing, manuscript writing and editing, methodology, investigation, visualization
G Erdős: conceptualization, manuscript reviewing, supervision
Correspondence to Bence Tipary.
Tipary, B., Erdős, G. Tolerance analysis for robotic pick-and-place operations. Int J Adv Manuf Technol 117, 1405–1426 (2021). https://doi.org/10.1007/s00170-021-07672-5
Issue Date: November 2021
Design method
Tolerance analysis
If the altitude of the sun is at 60°, then…
Post category: Height and Distance
If the altitude of the sun is at 60°, then the height of the vertical tower that will cast a shadow of length 30 m is
A. $30\sqrt{3}\ \mathrm{m}$
B. $15\ \mathrm{m}$
C. $\frac{30}{\sqrt{3}}\ \mathrm{m}$
D. $15\sqrt{2}\ \mathrm{m}$
Answer: Option A
Solution (by Apex Team)
Let AB be the tower and C a point on the ground 30 m from its foot B, so that the angle of elevation of the sun at C is 60°. $\begin{array}{l}\text{Let the height of the tower }AB=h\\ \text{Then, in right }\triangle ACB,\\ \tan\theta=\frac{\text{Perpendicular}}{\text{Base}}=\frac{AB}{CB}\\ \Rightarrow\tan60^{\circ}=\frac{h}{30}\\ \Rightarrow\sqrt{3}=\frac{h}{30}\\ \Rightarrow h=30\sqrt{3}\\ \therefore\ \text{the height of the tower is }30\sqrt{3}\ \text{m}\end{array}$
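For a quick numerical check of the arithmetic, the same computation can be run in a few lines (a minimal Python sketch, not part of the original solution):

```python
import math

shadow = 30.0                     # shadow length in metres
altitude = math.radians(60)       # sun's altitude
height = shadow * math.tan(altitude)

print(height)                                    # 51.96... m
print(math.isclose(height, 30 * math.sqrt(3)))   # True, i.e. h = 30*sqrt(3)
```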
Stem Cell Research & Therapy
Early passaging of mesenchymal stem cells does not instigate significant modifications in their immunological behavior
Niketa Sareen1,
Glen Lester Sequiera1,
Rakesh Chaudhary1,
Ejlal Abu-El-Rub1,
Subir Roy Chowdhury2,
Vikram Sharma3,
Arun Surendran1,
Meenal Moudgil1,
Paul Fernyhough2,
Amir Ravandi1 &
Sanjiv Dhingra1
Stem Cell Research & Therapy volume 9, Article number: 121 (2018)
Bone marrow-derived allogeneic mesenchymal stem cells (MSCs) from young healthy donors are immunoprivileged and their clinical application for regenerative medicine is under evaluation. However, data from preclinical and initial clinical trials indicate that allogeneic MSCs after transplantation provoke a host immune response and are rejected. In the current study, we evaluated the effect of an increase in passage number in cell culture on immunoprivilege of the MSCs. Since only limited numbers of MSCs can be sourced at a time from a donor, it is imperative to expand them in culture to meet the necessary numbers required for cell therapy. Presently, the most commonly used passages for transplantation include passages (P)3–7. Therefore, in this study we included clinically relevant passages, i.e., P3, P5, and P7, for evaluation.
The immunoprivilege of MSCs was assessed with the mixed leukocyte reaction assay, where rat MSCs were cocultured with peripheral blood leukocytes for 72 h. Leukocyte-mediated cytotoxicity, apoptosis (Bax/Bcl-xl ratio), leukocyte proliferation, and alterations in cellular bioenergetics in MSCs were assessed after the coculture. Furthermore, the expression of various oxidized phospholipids (oxidized phosphatidylcholine (ox-PC)) was analyzed in MSCs using a lipidomic platform. To determine if the ox-PCs were acting in tandem with downstream intracellular protein alterations, we performed proteome analysis using a liquid chromatography/mass spectrometry (LC/MS) proteomic platform.
Our data demonstrate that MSCs were immunoprivileged at all three passages since coculture with leukocytes did not affect the survival of MSCs at P3, P5, and P7. We also found that, with an increase in the passage number of MSCs, leukocytes did not cause any significant effect on cellular bioenergetics (basal respiration rate, spare respiratory capacity, maximal respiration, and coupling efficiency). Interestingly, in our omics data, we detected alterations in some of the ox-PCs and proteins in MSCs at different passages; however, these changes were not significant enough to affect their immunoprivilege.
The outcome of this study demonstrates that an increase in passage number (from P3 to P7) in the cell culture does not have any significant effect on the immunoprivilege of MSCs.
Bone marrow (BM)-derived mesenchymal stem cells (MSCs) are attractive candidates for cell therapy since these cells are immunoprivileged [1]. Therefore, allogeneic (unrelated donor) MSCs can suppress the host immune system and survive after transplantation. In fact, the outcome of numerous preclinical studies and initial clinical trials have demonstrated that allogeneic BM-MSCs after transplantation were able to initiate a regenerative process [2]. However, long-term survival of transplanted MSCs was not detected in the recipient system. We recently reported in a rat model of myocardial infarction that allogeneic MSCs after transplantation into the infarcted heart lost their immunoprivilege and were rejected by the host immune system [3]. The outcome of our studies was confirmed by several other reports that allogeneic MSCs after transplantation become immunogenic and are rejected by the recipient immune system [4]. Therefore, for a successful bench-to-bedside translation of MSC-based regenerative therapies, it is imperative to understand the mechanisms of immune switch in MSCs from the immunoprivileged to immunogenic state.
To maintain uniformity in the quality of the cell product for any allogeneic cell-based clinical trial, MSCs derived from a single healthy donor are used for transplantation in multiple recipients. Therefore, it is imperative to expand them in culture to facilitate generation of the required number of MSCs. The safety of various passages of MSCs in the in-vitro studies has been amply demonstrated. A recent study reported no changes in telomeric ends for up to 25 passages [5]. However, subjecting MSCs to passages in cell culture, even in the short term, has been associated with physiological changes. Madeira et al. reported major differences in the pathways pertaining to culture-induced senescence [6]. There are numerous factors that may influence the overall optimum physiology of the cells, and these may be affected by the passaging of MSCs. The cell surface lipidome analysis of MSCs is garnering attention of late. The passaging of MSCs was found to alter levels of cell surface lipids and immune-modulation [7]. The oxidized phospholipids (oxidized phosphatidylcholine (ox-PC)), in particular, are known to affect the immunological behavior of cells [8]. Furthermore, cellular bioenergetics including cellular respiration and metabolic pathways are instrumental to stem cell renewal, maintenance, and general cell health [9]. Altered metabolic pathways lead to a difference in the functional capabilities of stem cells [10]. It is reported that, with an increase in passage number in the culture, cellular bioenergetics and growth rate are affected [11, 12].
Therefore, in this study, we have attempted to investigate the effect of an increase in passage number in cell culture on the immunological behavior of MSCs. The most commonly used passages for cell therapy in ongoing clinical studies include passage (P)3–7. In this study, we included the clinically relevant passages, i.e., P3, P5, and P7, for evaluation. We have employed functional evaluation as well as whole cell high-throughput assessment. We have chosen parameters including cell surface ox-PCs, cellular bioenergetics, global proteomic assessment, and general immune as well as cell survival pathways for our investigations. These characteristics have hitherto not been studied in detail. With MSCs poised at the cusp of clinical application, it is becoming increasingly apparent that our knowledge is very limited in various attempts to transfer understanding from the in-vitro setting into in-vivo applications.
Experimental animals
Unrelated male Sprague-Dawley (SD) rats (200–250 g) were used for the isolation of bone marrow MSCs and for the isolation of peripheral blood leukocytes. The study protocol was approved by the Animal Care Committee of the University of Manitoba and conformed to the 'Guide for the Care and Use of Laboratory Animals' published by the US National Institutes of Health (NIH Publication No. 85–23, revised 1985).
MSC isolation and characterization
Bone marrow cells were flushed from the cavities of femur and tibias of SD rats. After the connective tissue was removed from around the bones, both ends were cut. The bone marrow plugs were flushed with Dulbecco's modified Eagle's medium supplemented with 15% fetal bovine serum (FBS), 100 units/ml penicillin G, and 0.1 mg/ml streptomycin. Cells were plated and cultured in the same medium followed by a media change and the removal of nonadherent hematopoietic cells the next day. The medium was replaced every 3 days, and the cells were subcultured when confluency exceeded 90%. MSCs from passages 3, 5, and 7 were used for the studies described herein.
MSCs were characterized by flow cytometry as described previously [3]; the cell population which was identified as CD90.1+, CD29+, CD45−, and CD34− was used for further experiments. To further characterize the cells, MSCs were analyzed for their ability to differentiate into osteogenic, adipogenic, and chondrogenic lineages using a kit (R&D systems, catalogue number SC020). The cells were induced to differentiate and stained using reagents and primary antibodies provided in the kit (osteocalcin for osteogenic differentiation; FABP4 for adipogenic differentiation; aggrecan for chondrogenic differentiation). The secondary antibodies used (AF488 for osteocytes; AF647 for adipocytes and chondrocytes) were purchased separately. The nuclei were stained with DAPI. The images were captured using a Cytation5 at 20× magnification. Additionally, differentiated MSCs were also stained using Alizarin Red (osteogenic) and Oil Red-O stain (adipogenic).
MSC population doubling
The population doubling time of MSCs at different passages was calculated using a trypan blue cell viability assay. The cells were plated at 100,000 cells/well in six-well dishes. After 96 h of culture, the MSCs were detached using trypsin EDTA followed by staining with trypan blue and counting of the live cell number using an automated cell counter (BioRad). The doubling time was calculated as follows:
$$ \text{Doubling time}=\frac{\text{time of culture}\times \log(2)}{\log\left(\text{final cell number}\right)-\log\left(\text{initial cell number}\right)} $$
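For illustration, the same calculation can be written as a short function. This is a sketch with hypothetical counts, not code from the study: the 96 h culture time and 100,000-cell plating density are taken from the protocol above, while the final count is made up.

```python
import math

def doubling_time(hours_in_culture, initial_cells, final_cells):
    """Population doubling time from a single growth interval:
    time of culture * log(2) / (log(final) - log(initial))."""
    return hours_in_culture * math.log(2) / (math.log(final_cells) - math.log(initial_cells))

# 100,000 cells plated, counted after 96 h; a final count of 400,000 is assumed for the example.
print(doubling_time(96, 1e5, 4e5))  # 48.0 h (two doublings in 96 h)
```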
Mixed leukocyte-mediated cytotoxicity
Leukocytes were isolated from rat spleen using HISTOPAQUE 1083 (Sigma-Aldrich) and cocultured with allogeneic MSCs at different passages (3, 5, and 7) at a ratio of 10:1 (leukocytes:MSCs). After 72 h of coculture, leukocyte-mediated cytotoxicity in MSCs was assessed by a Live/Dead viability/cytotoxicity assay kit (Thermo Fisher Scientific, L3224).
Assessment of apoptosis
Apoptosis in MSCs at different passages (P3, P5, and P7) was assessed after 72 h of coculture with mixed leukocytes by measuring Bax and Bcl-xl levels using Western blot. Briefly, total protein levels were measured by Bradford protein assay, and 25 μg of protein was used in each group for SDS-PAGE electrophoresis. After separation with electrophoresis, proteins were transferred to PVDF membranes and probed with primary antibodies for Bax and Bcl-xl (Santa Cruz Biotechnologies Inc., CA, USA) and secondary antibodies (Biorad Inc.). The membranes were developed using x-ray film, and bands were quantified using Quantity One software for densitometry.
Leukocyte proliferation
The effect of MSCs on the proliferation of leukocytes was analyzed using an MLR assay. Leukocytes were cocultured with allogeneic MSCs at different passages (P3, P5, and P7) for 72 h at a ratio of 1:10 (MSCs:leukocytes). The leukocyte proliferation was measured by flow cytometry (BD Accuri). Briefly, after coculture, the leukocytes in the supernatant were collected and spun at 1000 rpm for 5 min. The pellet was washed three times using 1× phosphate-buffered saline (PBS), and suspended in 100 μl cold PBS. The cells were then fixed using 5 ml ice-cold 70% ethanol followed by RNase (20 μg/ml) treatment for 30 min. The cells were then stained with propidium iodide (PI; 5 μg/ml) for 5 min at room temperature and analyzed using flow cytometry. To measure leukocyte proliferation, a cell cycle analysis was performed by counting the number of cells entering the S phase (proliferating phase) and the G2/M phase from the G0/G1 phase (resting cells) of the cell cycle.
Assessment of the secretion profile of leukocytes
The leukocytes were cocultured with MSCs at different passages for 72 h at a ratio of 1:10 (MSCs:leukocytes). The cytokine secretion profile of leukocytes was analyzed using a multianalyte rat cytokine ELISArray kit (Qiagen; MER 336161) following instructions from the manufacturer. We analyzed the levels of 12 different cytokines including interleukin (IL)-1α, IL-1β, IL-2, IL-4, IL-6, IL-10, IL-12, IL-13, interferon (IFN)-γ, tumor necrosis factor (TNF)-α, granulocyte-macrophage colony-stimulating factor (GM-CSF), and RANTES. The plate was read at 450 and 570 nm using a Cytation5 analyzer (BioTek Inc.) in plate reader mode.
Measurement of cellular bioenergetics
The cellular bioenergetics were determined using the extracellular flux (XF24) analyzer (Seahorse Bioscience). MSCs (4 × 104 cells/well) and leukocytes were cocultured at a ratio of 1:10 (MSCs:leukocytes) in XF24 plates for 72 h. The mean basal respiration was determined by recording oxygen consumption rate (OCR) measurements before adding inhibitors or activators. ATP-linked OCR and proton leak were determined by injecting oligomycin (1 μM). The maximal respiration rate was determined after adding FCCP (an uncoupler of the electron transport chain) at a concentration of 1 μM. The difference between the basal rate and this FCCP-stimulated rate is the reserve capacity of the mitochondria, which is a measure of the maximal potential respiratory capacity the cell can utilize under conditions of stress and/or increased energetic demands. To completely inhibit mitochondrial electron transport, antimycin A (1 μM) and rotenone (1 μM) were used. The OCR determined after rotenone and antimycin A injection is attributable to nonmitochondrial oxygen consumption. Mitochondrial basal respiration, proton leak, and the maximal respiration were calculated after corrections were performed for the nonmitochondrial OCR for each assay. Under these conditions, viability was over 90% for all cell types and remained so over the time course of the assay. At the end of the assay period, trypsinized cells were collected, and values were normalized to the total cell number in each well [13].
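As a rough illustration of how these parameters relate to the raw OCR readings, the arithmetic can be sketched as below. This is an assumed reconstruction with made-up numbers, not the Seahorse instrument's actual processing; the coupling-efficiency formula (ATP-linked OCR divided by basal OCR) is the conventional definition and is an assumption here rather than a detail stated in the text.

```python
def bioenergetic_parameters(basal_ocr, oligo_ocr, fccp_ocr, rot_aa_ocr, cell_number):
    """Derive mitochondrial parameters from raw OCR readings (pmol O2/min)."""
    non_mito = rot_aa_ocr                 # OCR remaining after rotenone + antimycin A
    basal = basal_ocr - non_mito          # mitochondrial basal respiration
    proton_leak = oligo_ocr - non_mito    # oligomycin-insensitive OCR
    atp_linked = basal - proton_leak      # oligomycin-sensitive OCR
    maximal = fccp_ocr - non_mito         # FCCP-uncoupled (maximal) respiration
    spare = maximal - basal               # spare/reserve respiratory capacity
    coupling_efficiency = atp_linked / basal
    # normalize the rates to the cell number counted at the end of the assay
    per_cell = {k: v / cell_number for k, v in
                dict(basal=basal, proton_leak=proton_leak, atp_linked=atp_linked,
                     maximal=maximal, spare=spare).items()}
    per_cell["coupling_efficiency"] = coupling_efficiency  # ratio, not normalized
    return per_cell

# Illustrative readings only (4 x 10^4 MSCs per well, as plated above)
print(bioenergetic_parameters(120.0, 45.0, 200.0, 20.0, 4e4))
```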
Oxylipidomic analysis
Cell surface oxidized phosphatidylcholine (ox-PCs) levels were measured in MSCs at different passages (P3, P5, and P7) by liquid chromatography/mass spectrometry (LC/MS) analysis. Total cellular lipids were extracted from cell pellets using a protocol adapted from Folch et al. [14]. Oxylipidomic analysis was performed with reverse-phase high-performance liquid chromatography (HPLC) using an Ascentis Express C18 column (Supelco Analytical, Bellefonte, PA, USA). Data were collected using analyst 1.6 software (Applied Biosystems, Canada) and quantified using MultiQuant 2.1 (Absciex, Ontario, Canada). The mass spectrometry data were log transformed and autoscaled (mean-centered and divided by standard deviation of each variable) before applying statistical analysis. To determine the changes in ox-PCs that were statistically significant between different passages P3, P5, and P7, we performed a one-way analysis of variance (ANOVA) with a p value cut-off of 0.05, followed by Tukey's Honestly Significant Difference (Tukey's HSD).
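The statistical workflow described above (log transformation, autoscaling, one-way ANOVA with a 0.05 cut-off, then Tukey's HSD) can be outlined as follows. This is an illustrative Python sketch on a hypothetical intensity table for a single ox-PC species, not the authors' actual pipeline.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical intensities for one ox-PC species, three replicates per passage
intensities = {"P3": [2.1e5, 1.9e5, 2.3e5],
               "P5": [2.5e5, 2.4e5, 2.6e5],
               "P7": [3.0e5, 2.8e5, 3.1e5]}

# Log-transform, then autoscale (mean-center and divide by the standard deviation)
values = np.log10(np.concatenate(list(intensities.values())))
values = (values - values.mean()) / values.std(ddof=1)
groups = [passage for passage, reps in intensities.items() for _ in reps]

# One-way ANOVA with a p < 0.05 cut-off, followed by Tukey's HSD
f_stat, p_value = stats.f_oneway(values[:3], values[3:6], values[6:])
print(f"ANOVA p = {p_value:.4f}")
if p_value < 0.05:
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```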
E06 antibody treatment
E06 is a blocking antibody that specifically inhibits oxidized phosphatidylcholines in cells. To assess the involvement of ox-PCs in regulating leukocyte-mediated cytotoxicity, apoptosis, and cellular bioenergetics in MSCs, we cocultured MSCs and leukocytes with or without E06 antibody (1 ng/ml) (MSC + L + Ab) for 72 h and measured the abovementioned parameters.
Proteomic analysis
Sample preparation for mass spectrometry
Whole cell proteomic analysis was performed in MSCs at different passages (P3, P5, and P7) by the LC/MS proteomic platform. MSCs were cultured at different passages, and cell pellets were collected and washed in ice-cold PBS (pH 7.2) followed by treatment with urea lysis buffer (8 M urea in 0.1 M Tris-HCl, pH 8.5). Protein estimation was performed by Qubit fluorescence assay (Invitrogen). A total of 50 μg protein was digested using the FASP procedure as described previously [15]. Liquid chromatography tandem mass spectrometric analysis of tryptic peptides (500 ng) was carried out using a Proxeon nano spray ESI source (Thermo Fisher, Hemel, UK) and analyzed using Orbitrap Velos Pro FTMS (Thermo Finnigan, Bremen, Germany) [16].
Proteomic data analysis by MaxQuant
Peptides and proteins were identified by Andromeda via an automated database search of all tandem mass spectra against a curated target/decoy database (using forward and reverse versions of the Rattus norvegicus [Taxonomy ID 10116]) and Uniprot protein sequence database (http://www.uniprot.org; release October 2015) containing all rat protein entries from Swiss-Prot and TrEMBL. Cysteine carbamidomethylation was searched as a fixed modification, whereas N-acetyl protein, deamidated NQ, and oxidized methionine were searched as a variable modification. The resulting Andromeda peak list-output files were further processed using MaxQuant software. The downstream bioinformatics data analysis was carried out using the Perseus software suite (1.5.0.15) and the Ingenuity Pathway Analysis software tool (Ingenuity Systems, Qiagen, Redwood City).
Experimental values are expressed as mean ± SD. The comparison of mean values between various groups was performed by one-way ANOVA followed by multiple comparisons by Tukey test using the software GraphPad Prism. A p value < 0.05 was considered to be significant.
Differentiation of MSCs
To characterize MSCs, cells were induced to differentiate toward the adipogenic, osteogenic, and chondrogenic lineages. Our data demonstrate that MSCs have the ability to differentiate toward these three lineages (Additional file 1: Figure S1).
Population doubling of MSCs at different passages
To investigate the effect of an increase in passage number on population doubling time of MSCs, a cell viability assay was performed; our data demonstrate that there was no significant difference in the population doubling time of MSCs in the culture at P3, 5, or 7 (Fig. 1a).
Assessment of doubling time and immunoprivilege of MSCs. a Population doubling of MSCs at different passages was determined using trypan blue cell viability assay. The cells were plated in equal numbers followed by calculating the live cell number after 96 h of culture. There was no significant difference found in population doubling time of cells at different passages. b, c MSCs were cocultured with leukocytes (with or without E06 blocking antibody) for 72 h at a ratio of 1:10 (MSCs:leukocytes). b Leukocyte-mediated cytotoxicity in MSCs at different passages was determined by cytotoxicity assay kit using flow cytometry. There was no significant difference found in the level of cytotoxicity at different passages in the presence of leukocytes alone or in the presence of leukocytes and E06 antibody. c Western blot analysis was performed to determine the levels of the pro- and antiapoptotic proteins Bax and Bcl-xL. There was no significant difference observed in the Bax/Bcl-xl ratio in MSCs at different passages in the presence of leukocytes alone or in the presence of leukocytes and E06 antibody. Data are represented as mean ± SD (n = 3–6). ML, MSCs + leukocytes; MLA, MSCs + leukocytes + E06 antibody; MSC, mesenchymal stem cells
MSCs show no significant changes in immunological behavior with an increase in passage number
Bone marrow-derived MSCs are reported to be immunoprivileged and thus are able to escape the host immune system after transplantation. To understand the effect of an increase in passage number on the immunoprivilege of MSCs, cells at P3, P5, and P7 were cocultured with mixed leukocytes. The leukocyte-mediated cytotoxicity was measured in MSCs. Our results demonstrate that MSCs were immunoprivileged at all three passages (P3, P5, and P7) since we found more than 80% live cells even after 72 h of coculture with leukocytes (Fig. 1b). Interestingly, there was no significant difference detected in the number of live/dead MSCs at different passages after the coculture (Fig. 1b). We also assessed leukocyte-mediated apoptosis of MSCs and found no significant difference in the ratio of the antiapoptotic protein Bcl-xl and the proapoptotic protein Bax at P3, P5, and P7 after coculture with leukocytes (Fig. 1c).
Mesenchymal stem cells have the ability to suppress immune cell proliferation and promote immune tolerance. We analyzed the effect of an increase in passage number of MSCs on their ability to suppress leukocyte proliferation and found no significant differences in the level of suppression of leukocyte proliferation by MSCs at different passages (Fig. 2a). The data are represented as the different stages of the cell cycle: G0/G1 phase, S phase, and G2/M phase. In G0/G1 phase, cells prepare for the next division cycle by synthesizing the proteins and RNA required for division. DNA synthesis occurs in S phase, so cells in G2/M phase carry a duplicated genome that is divided equally between the daughter cells during mitosis. There were no significant differences observed at any stage among the different passages, indicating that MSCs at P3, P5, and P7 affect leukocyte proliferation at the same rate.
Effect of MSCs on leukocyte proliferation and the secretion profile of leukocytes. Leukocytes were cocultured with MSCs at passages 3, 5, and 7 at a ratio of 1:10 (MSCs:leukocytes) for 72 h. a Leukocyte proliferation was measured by flow cytometry. The data are represented as different stages of the cell cycle: G0/G1 phase, S phase, and M phase. The extent of leukocyte proliferation by MSCs did not change with an increase in passage number since there was no significant difference found in the number of leukocytes at different stages of the cell cycle among different passages. The effect of an increase in passage number of MSCs on the secretion of b anti-inflammatory and c proinflammatory cytokines by leukocytes was analyzed using ELISA array. The results indicate that MSCs at different passages had no significant effect on the secretion profile of leukocytes. Data are represented as mean ± SD (n = 3–4). GM-CSF, granulocyte-macrophage colony-stimulating factor; IFN, interferon; IL, interleukin; LC, leukocytes alone; ML3, leukocytes cocultured with MSCs at P3; ML5, leukocytes cocultured with MSCs at P5; ML7, leukocytes cocultured with MSCs at P7; TNF, tumor necrosis factor
To further assess the effect of MSCs at different passages on the immunomodulatory effects of leukocytes, we analyzed the levels of several proinflammatory cytokines including IL-1α, IL-1β, IL-2, IL-6, IL-12, IFN-γ, TNF-α, GM-CSF, and RANTES, and the anti-inflammatory cytokines IL-4, IL-10, and IL-13 in leukocytes after coculture with MSCs. Our data demonstrate that there was no significant change in the levels of these soluble factors in leukocytes after coculture with MSCs at different passages (Fig. 2b, c). These results suggest that an increase in the passage number from P3 to P7 does not affect immunoprivilege and immune tolerance of MSCs.
Effect of an increase in passage number on cellular bioenergetics
It is reported that intracellular energy metabolism has a primary influence on the presence or absence of T-cell activation signals. Therefore, cellular bioenergetics are a key factor for determining the response of transplanted cells toward the host immune system. We assessed the effect of an increase in passage number on the intracellular energy metabolism using a SeaHorse Bioscience XF24 analyzer. We found no significant difference in basal respiration rate and spare respiratory capacity in MSCs at P3, P5, and P7 before and after coculture with leukocytes (Fig. 3a, b). We also measured the maximal respiration along with coupling efficiency of MSCs in the presence as well as absence of leukocytes. Our data indicate that there was no significant difference observed in any of these parameters at different passages (Fig. 3c, d).
Measurement of cellular bioenergetics using XF24 Seahorse analyzer. Mesenchymal stem cells (MSCs) at passages 3, 5, and 7 were cocultured with leukocytes (with or without E06 blocking antibody) at a ratio of 1:10 (MSCs:leukocytes) for 72 h. a Basal respiration rate, b spare respiratory capacity, c coupling efficiency, and d maximal respiration rate were measured in MSCs. There were no significant changes observed in these parameters in MSCs with the increase in passage number, in the presence of leukocytes, and in the presence of E06 antibody. The values are normalized to cell number in each well. Data are represented as mean ± SD (n = 4–6). MSC + L, MSCs + leukocytes; MSC + L + Ab, MSCs + leukocytes + E06 antibody
Oxidized phosphatidylcholine (ox-PCs) levels change in MSCs without affecting immunoprivilege
The cellular ox-PCs have been recognized as important mediators of immune signaling. To assess changes in the total cell oxylipidome at different passages and their effect on the immunoprivilege of MSCs, the cells at P3, P5, and P7 were subjected to LC/MS analysis. Our data demonstrate that, overall, there were no significant features identified between the passages (Fig. 4a, b). However, several ox-PCs that have already been reported (in other cell types) to play a significant role in immune cell suppression, namely SOVPC, KDdiA SPC, PAPC-OOH, SAPC-keto, and SECPC, were found to be altered with the increase in passage number in the current study (Fig. 4b).
Oxylipidome profiling of MSCs by LC/MS was carried out at passages 3, 5, and 7. a One-way analysis of variance (ANOVA) plot with a p value threshold of 0.05. b Clustered heatmap (distance measure: Euclidean; clustering algorithm: Ward) showing the intensity of 55 ox-PC compounds. Each row represents data for a specific ox-PC compound and each column represents an individual passage (P3, P5, and P7). All values are log-normalized values of detected abundance for each ox-PC compound. The colors changing from high (red) to low (blue) correspond to the different intensity levels of ox-PCs (n = 3)
To explore whether changes in the levels of ox-PCs have any effect on the immunological behavior of MSCs, we added E06 antibody in the coculture experiments and assessed leukocyte-mediated cytotoxicity and apoptosis in MSCs. The E06 antibody is responsible for blocking cell surface ox-PCs. Our data demonstrate that the presence of the antibody did not have any significant effect on the number of live/dead MSCs and apoptosis after coculture with leukocytes (Fig. 1b, c). Furthermore, the presence of E06 antibody did not cause any difference in cellular bioenergetics. We did not see any significant changes in the basal respiration rate, spare respiratory capacity, coupling efficiency, or maximal respiration rate before and after the addition of the E06 blocking antibody (Fig. 3a–d). Hence, alterations in individual cell surface ox-PCs from P3 to P7 do not affect the immunological properties or cellular bioenergetics of MSCs.
The stem cell proteome is largely unchanged over different passages
We performed whole-cell proteomic analysis at different passages using LC/MS to study changes in the levels of intracellular proteins related to cellular senescence, immunogenicity, and bioenergetics. In total, over 800 proteins were screened (Fig. 5a). Our proteomic data recorded some changes in the levels of proteins associated with cellular senescence and aging (Fig. 5b) and the immunological synapse (Fig. 5c). Furthermore, some of the proteins that have been reported to play a role in cellular respiration pathways, including glycolysis and oxidative phosphorylation as well as the tricarboxylic acid (TCA) cycle, showed changes with the increase in passage number (Fig. 5d–f). The proteomic analyses for mitochondrial pathways also indicated changes in some proteins in P7 versus P3 (Additional file 2: Figure S2). However, overall, the extent of change recorded in intracellular proteins was not significantly different between P3 and P7, and the changes recorded did not affect the immunological behavior of MSCs.
Whole-cell proteome analysis of MSCs at passages 3 and 7 was performed using the LC/MS proteomic platform to determine the changes in different proteins with the increase in passage number. a Volcano plot of all the proteins shows no significant changes in all but 18 (shown in red color) proteins (p < 0.05). b–f The values of proteins involved in different pathways including cellular senescence (b), immunological synapse (c), glycolysis (d), tricarboxylic acid (TCA) cycle (e), and oxidative phosphorylation (f). Log2 fold change ratios of protein values at P7 versus P3 were calculated. The protein values between −1 and + 1 were considered to be normal and not changing significantly. Values higher than +1 indicate significant upregulation (at P7 compared to P3) of the protein and values lower than −1 indicate significant downregulation of the protein levels (n = 3)
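For reference, the fold-change thresholding described in this caption amounts to the following check (an illustrative Python sketch with made-up protein intensities; the ±1 log2 threshold is the one stated above):

```python
import math

def classify(p3_value, p7_value, threshold=1.0):
    """Label a protein by its log2 fold change at P7 versus P3."""
    log2_fc = math.log2(p7_value / p3_value)
    if log2_fc > threshold:
        return log2_fc, "upregulated at P7"
    if log2_fc < -threshold:
        return log2_fc, "downregulated at P7"
    return log2_fc, "no significant change"

print(classify(1500.0, 1600.0))   # (~0.09, 'no significant change')
print(classify(1500.0, 4000.0))   # (~1.42, 'upregulated at P7')
```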
In various animal models of degenerative diseases, transplantation of bone marrow-derived allogeneic MSCs has triggered regenerative processes. Based on the encouraging outcome of animal-based studies, several clinical trials have tested the efficacy of allogeneic MSCs. Although the outcome of allogeneic cell-based clinical trials has been encouraging, it is not as effective as the outcome of preclinical studies. Some of the recent studies reported that allogeneic MSCs after transplantation were immunogenic and were rejected by the host immune system. Therefore, understanding the mechanisms of the switch in the immunological behavior of MSCs from immunoprivileged to the immunogenic state would help in preserving the benefits of allogeneic MSCs. The current study is the first to evaluate the possible role of cellular bioenergetics and cell surface ox-PCs in combination with intracellular proteomic analysis in regulating the immunoprivilege of MSCs over different passages. The notion that an increase in passage number in cell culture may affect cellular physiology, morphology, and particularly cell surface molecules is being actively debated. It has been suggested that senescence of MSCs starts as soon as their culturing is initiated. Furthermore, the cells start losing the ability to differentiate, doubling capacity, and telomere length as the passage number increases in cell culture [17]. The outcomes of the majority of MSC-based animal studies and clinical trials have suggested that allogeneic MSCs after transplantation were immunoprivileged and there was no immune response detected in the recipient system against transplanted cells [18, 19]. However, some recent studies revealed that antidonor alloantibodies were detected against transplanted MSCs [3, 20] and cells were rejected by the recipient immune system. One of the important variables in these studies was that cells employed for these studies were from different passages. Therefore, in this study we investigated the effect of an increase in passage number in cell culture on the immunological behavior of MSCs. The most commonly used passages for cell therapy in concluded or ongoing studies include passages 3–7. Therefore, in the current study, we included P3, P5, and P7 for evaluation. Our investigations were focused on hitherto untested parameters of cell surface ox-PCs, cellular bioenergetics, global, and proteomic assessment.
Previous studies have reported an association of cellular senescence with alterations in the immunological behavior of MSCs [21,22,23]. Senescent cells are reported to attract immune cells; for instance, fibroblasts undergoing senescence secrete various cytokines as well as chemokines which lead to activation of lymphocytes and macrophages [23]. At the organ level, it is reported that kidneys from old donors are more prone to rejection by the host immune system compared with those from young donors [24]. Additionally, in the case of bone marrow-derived MSCs, senescence is associated with a reduced differentiation and proliferation potential [25]. Furthermore, radiation-induced senescence in MSCs is associated with abrogation of the immunomodulatory properties and impaired therapeutic potential in vivo in a mouse model of sepsis [26]. However, the effect of an increase in passage number on immunoprivilege of MSCs has not been investigated thoroughly. Therefore, to investigate this, in the current study we analyzed cellular changes in immunogenicity at different passages. We found that an increase in passage number from P3 to P7 did not have any significant effect on immunoprivilege of MSCs since leukocyte-mediated cytotoxicity and cell death in MSCs did not change between the different passages. Furthermore, there was no significant difference observed in the MSC-mediated suppression of leukocyte proliferation with different passages of MSCs.
Several studies in the literature have reported the role of the cellular bioenergetics profile in the regulation of immune response. The mode of intracellular respiration plays a key role in influencing the presence or absence of T-cell activation signals and thus regulating immunoprivilege of a cell [27, 28]. The choice of fuel (glucose or fatty acids) used for mitochondrial metabolism regulates the interaction of the cell with the immune system [29, 30]. In dendritic cells, there is a switch in the mode of metabolism from oxidative phosphorylation to glycolysis that triggers their activation [31]. In another study, the effect of blocking mitochondrial respiration was reported to reduce the binding of TNF-α to the cells, indicating that mitochondrial respiration might be an important mediator of the immune responses controlled by TNF-α [32]. To assess the effects of an increase in passage number on the cellular respiratory profile in MSCs and its association with the immunoprivilege of MSCs, we performed Seahorse XF24 analysis at different passages. Our data demonstrate that within clinically relevant passages an increase in passage number in cell culture from P3 to P7 did not affect basal respiration rate, spare respiratory capacity, maximal respiration, or coupling efficiency in MSCs before and after coculture with leukocytes.
Another major modulator for the immunogenicity of cells is the expression of cell surface ox-PCs. Several studies have described the role of oxidized phosphatidylcholines in modulating the immunological behavior of immune cells. Oxidation of phospholipids under various conditions such as inflammation, apoptosis, and senescence leads to the generation of proinflammatory damage-associated molecular products (DAMPs) that leads to recognition of the cells expressing these epitopes by the innate immune system [33]. During atherosclerosis, ox-PCs including phosphatidylcholines served as signals for uptake of the cells expressing the phospholipids [34]. However, antagonistic to their role in immune cell activation, oxidized phospholipids are also reported to be involved in preventing the activation of T lymphocytes [35]. Adding to their immunosuppressive role, oxidized phospholipids are also known to prevent the activation of dendritic cells by Toll-like receptor (TLR)3- and TLR4-mediated pathways [36]. These lipids can be both the mediators and the result of cellular apoptosis in different cell types [37]. In various models, the cells expressing the oxidized phospholipids are recognized by macrophages for apoptosis [38]. Interestingly, the macrophages might themselves undergo apoptosis due to activation of the TLR pathway by oxidized phospholipids [39]. Therefore, ox-PCs play an important role in mediating the cellular response to the immune system. To assess the effect of oxidized phospholipids on the immunoprivilege of MSCs in different passages, we performed LC/MS lipidomic analysis. Our data revealed that some of the ox-PCs were changing with the increase in passage number. However, alterations in ox-PCs did not affect immunoprivilege of MSCs. To determine if the ox-PCs were acting in tandem with downstream intracellular protein alterations, we performed proteome analysis using the LC/MS proteomic platform to screen more than 800 proteins. The overall trend recorded in intracellular proteins did not change in MSCs with an increase in passage number. However, we found changes in some proteins involved in cellular senescence and metabolism, but these changes were not able to affect the immunological behavior of MSCs.
Our study suggests that an increase in passage number from P3 to P7 does not affect immunoprivilege of MSCs. However, more studies are needed to delineate the mechanisms of switch in the immunological behavior of MSCs after transplantation.
Faiella W, Atoui R. Immunotolerant properties of mesenchymal stem cells: updated review. Stem Cells Int. 2016;2016 https://doi.org/10.1155/2016/1859567.
Fazel S, Chen L, Weisel RD, Angoulvant D, Seneviratne C, Fazel A, et al. Cell transplantation preserves cardiac function after infarction by infarct stabilization: augmentation by stem cell factor. J Thorac Cardiovasc Surg. 2005;130:1310.
Dhingra S, Li P, Huang X-P, Guo J, Wu J, Mihic A, et al. Preserving prostaglandin E2 level prevents rejection of implanted allogeneic mesenchymal stem cells and restores postinfarction ventricular function. Circulation. 2013;128(11 suppl 1):S69–78.
Oliveira R, Linhares G, Chagastelles PC, Sesterheim P, Pranke P. In vivo immunogenic response to allogeneic mesenchymal stem cells and the role of preactivated mesenchymal stem cells cotransplanted with allogeneic islets. Stem Cells Int. 2017;2017:e9824698.
Bernardo ME, Zaffaroni N, Novara F, Cometa AM, Avanzini MA, Moretta A, et al. Human bone marrow derived mesenchymal stem cells do not undergo transformation after long-term in vitro culture and do not exhibit telomere maintenance mechanisms. Cancer Res. 2007;67:9142–9.
Madeira A, da Silva CL, dos Santos F, Camafeita E, Cabral JMS, Sá-Correia I. Human mesenchymal stem cell expression program upon extended ex-vivo cultivation, as revealed by 2-DE-based quantitative proteomics. PLoS One. 2012;7:e43523.
von Bahr L, Sundberg B, Lönnies L, Sander B, Karbach H, Hägglund H, et al. Long-term complications, immunologic effects, and role of passage for outcome in mesenchymal stromal cell therapy. Biol Blood Marrow Transplant. 2012;18:557–64.
Bochkov VN, Oskolkova OV, Birukov KG, Levonen A-L, Binder CJ, Stöckl J. Generation and biological activities of oxidized phospholipids. Antioxid Redox Signal. 2010;12:1009–59.
Ito K, Suda T. Metabolic requirements for the maintenance of self-renewing stem cells. Nat Rev Mol Cell Biol. 2014;15:243–56.
Costello LC, Franklin RB. A review of the important central role of altered citrate metabolism during the process of stem cell differentiation. J Regen Med Tissue Eng. 2013;2 https://doi.org/10.7243/2050-1218-2-1.
Stab BR, Martinez L, Grismaldo A, Lerma A, Gutiérrez ML, Barrera LA, et al. Mitochondrial functional changes characterization in young and senescent human adipose derived MSCs. Front Aging Neurosci. 2016;8 https://doi.org/10.3389/fnagi.2016.00299.
Ziegler DV, Wiley CD, Velarde MC. Mitochondrial effectors of cellular senescence: beyond the free radical theory of aging. Aging Cell. 2015;14:1–7.
Roy Chowdhury SK, Smith DR, Saleh A, Schapansky J, Marquez A, Gomes S, et al. Impaired adenosine monophosphate-activated protein kinase signalling in dorsal root ganglia neurons is linked to mitochondrial dysfunction and peripheral neuropathy in diabetes. Brain. 2012;135:1751–66.
Folch J, Lees M, Sloane Stanley GH. A simple method for the isolation and purification of total lipides from animal tissues. J Biol Chem. 1957;226:497–509.
Bouyer G, Reininger L, Ramdani G, D Phillips L, Sharma V, Egee S, et al. Plasmodium falciparum infection induces dynamic changes in the erythrocyte phospho-proteome. Blood Cells Mol Dis. 2016;58:35–44.
Suárez-Cortés P, Sharma V, Bertuccini L, Costa G, Bannerman N-L, Sannella AR, et al. Comparative proteomics and functional analysis reveal a role of Plasmodium falciparum osmiophilic bodies in malaria parasite transmission. Mol Cell Proteomics. 2016;15:3243–55.
Bonab MM, Alimoghaddam K, Talebian F, Ghaffari SH, Ghavamzadeh A, Nikbin B. Aging of mesenchymal stem cell in vitro. BMC Cell Biol. 2006;7:14.
Makkar RR, Price MJ, Lill M, Frantzen M, Takizawa K, Kleisli T, et al. Intramyocardial injection of allogenic bone marrow-derived mesenchymal stem cells without immunosuppression preserves cardiac function in a porcine model of myocardial infarction. J Cardiovasc Pharmacol Ther. 2005;10:225–33.
Amado LC, Saliaris AP, Schuleri KH, St John M, Xie J-S, Cattaneo S, et al. Cardiac repair with intramyocardial injection of allogeneic mesenchymal stem cells after myocardial infarction. Proc Natl Acad Sci U S A. 2005;102:11474–9.
Ankrum JA, Ong JF, Karp JM. Mesenchymal stem cells: immune evasive, not immune privileged. Nat Biotechnol. 2014;32:252–60.
Agrawal A, Tay J, Yang G-E, Agrawal S, Gupta S. Age-associated epigenetic modifications in human DNA increase its immunogenicity. Aging. 2010;2:93–100.
Gupta S. Role of dendritic cells in innate and adaptive immune response in human aging. Exp Gerontol. 2014;54:47–52.
Burton DGA, Faragher RGA. Cellular senescence: from growth arrest to immunogenic conversion. Age Dordr Neth. 2015;37:27.
Fijter JWD, Mallat MJK, Doxiadis IIN, Ringers J, Rosendaal FR, Claas FHJ, et al. Increased immunogenicity and cause of graft loss of old donor kidneys. J Am Soc Nephrol. 2001;12:1538–46.
Turinetto V, Vitale E, Giachino C. Senescence in human mesenchymal stem cells: functional changes and implications in stem cell-based therapy. Int J Mol Sci. 2016;17 https://doi.org/10.3390/ijms17071164.
Carlos Sepúlveda J, Tomé M, Eugenia Fernández M, Delgado M, Campisi J, Bernad A, et al. Cell senescence abrogates the therapeutic potential of human mesenchymal stem cells in the lethal endotoxemia model. Stem Cells. 2014;32:1865–77.
Buck MD, O'Sullivan D, Pearce EL. T cell metabolism drives immunity. J Exp Med. 2015; https://doi.org/10.1084/jem.20151159.
MacIver NJ, Michalek RD, Rathmell JC. Metabolic regulation of T lymphocytes. Annu Rev Immunol. 2013;31:259–83.
Dranka BP, Benavides GA, Diers AR, Giordano S, Zelickson BR, Reily C, et al. Assessing bioenergetic function in response to oxidative stress by metabolic profiling. Free Radic Biol Med. 2011;51:1621–35.
van der Windt GJW, Chang C-H, Pearce EL. Measuring bioenergetics in T cells using a Seahorse Extracellular Flux Analyzer. Curr Protoc Immunol. 2016;113:3.16B.1–3.16B.14.
Krawczyk CM, Holowka T, Sun J, Blagih J, Amiel E, DeBerardinis RJ, et al. Toll-like receptor-induced changes in glycolytic metabolism regulate dendritic cell activation. Blood. 2010;115:4742–9.
Sánchez-Alcázar JA, Hernández I, la Torre MPD, García I, Santiago E, Muñoz-Yagüe MT, et al. Down-regulation of tumor necrosis factor receptors by blockade of mitochondrial respiration. J Biol Chem. 1995;270:23944–50.
Weismann D, Binder CJ. The innate immune response to products of phospholipid peroxidation. Biochim Biophys Acta. 2012;1818:2465–75.
Berliner JA, Leitinger N, Tsimikas S. The role of oxidized phospholipids in atherosclerosis. J Lipid Res. 2009;50(Suppl):S207–12.
Seyerl M, Blüml S, Kirchberger S, Bochkov VN, Oskolkova O, Majdic O, et al. Oxidized phospholipids induce anergy in human peripheral blood T cells. Eur J Immunol. 2008;38:778–87.
Blüml S, Kirchberger S, Bochkov VN, Krönke G, Stuhlmeier K, Majdic O, et al. Oxidized phospholipids negatively regulate dendritic cell maturation induced by TLRs and CD40. J Immunol. 2005;175:501–8.
Serbulea V, DeWeese D, Leitinger N. The effect of oxidized phospholipids on phenotypic polarization and function of macrophages. Free Radic Biol Med. 2017;111:156–68.
Chang M-K, Binder CJ, Miller YI, Subbanagounder G, Silverman GJ, Berliner JA, et al. Apoptotic cells with oxidation-specific epitopes are immunogenic and proinflammatory. J Exp Med. 2004;200:1359–70.
Seimon TA, Nadolski MJ, Liao X, Magallon J, Nguyen M, Feric NT, et al. Atherogenic lipids and lipoproteins trigger CD36-TLR2-dependent apoptosis in macrophages undergoing endoplasmic reticulum stress. Cell Metab. 2010;12:467–82.
This work was supported by research grant from the Canadian Institute of Health Research to SD. NS and GLS are funded by a fellowship from Research Manitoba.
All data generated and/or analyzed during this study are included in this published article and its Additional files.
Institute of Cardiovascular Sciences, St. Boniface Hospital Research Centre, Department of Physiology and Pathophysiology, University of Manitoba, Winnipeg, Canada
Niketa Sareen
, Glen Lester Sequiera
, Rakesh Chaudhary
, Ejlal Abu-El-Rub
, Arun Surendran
, Meenal Moudgil
, Amir Ravandi
& Sanjiv Dhingra
Division of Neurodegenerative Disorders, St. Boniface Hospital Research Centre, Department of Pharmacology & Therapeutics, University of Manitoba, Winnipeg, Canada
Subir Roy Chowdhury
& Paul Fernyhough
School of Biomedical and Healthcare Sciences, Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth, England
Vikram Sharma
NS, GLS, and SD conceptualized the study and designed the experiments; NS, GLS, RC, EAER, SRC, VS, AS, and MM carried out experiments and acquired the data; NS, GLS, AS, VS, AR, PF, and SD interpreted the data and carried out data analysis and statistical analysis; NS, GLS, and SD drafted the manuscript. All authors read and approved the final manuscript.
Correspondence to Sanjiv Dhingra.
The study protocol was approved by the Animal Care Committee of the University of Manitoba and conformed to the 'Guide for the Care and Use of Laboratory Animals' published by the US National Institutes of Health (NIH Publication No. 85–23, revised 1985).
Figure S1. Differentiation of MSCs into osteocytes, adipocytes, and chondrocytes. MSCs were induced to differentiate toward osteocytes (a), adipocytes (b), and chondrocytes (c). MSCs (undifferentiated cells, control group) and differentiated MSCs (D-MSC) were stained for osteocalcin and Alizarin Red (osteocyte lineage), FABP4 and Oil Red-O stain (adipocyte lineage), and aggrecan (chondrocyte lineage). The images were taken using Cytation5 (BioTek Instruments) (20× magnification). (n = 6) (PPTX 2648 kb)
Figure S2. Heat map showing the highly upregulated or downregulated proteins that are known to be associated with mitochondrial respiration and immune pathways at P3 versus P7. (PPTX 5354 kb)
Sareen, N., Sequiera, G.L., Chaudhary, R. et al. Early passaging of mesenchymal stem cells does not instigate significant modifications in their immunological behavior. Stem Cell Res Ther 9, 121 (2018) doi:10.1186/s13287-018-0867-4
Revised: 29 March 2018
Immunoprivilege | CommonCrawl |
Closed orbits of dynamical systems
Consider the system $$\dot{x}=x-rx-ry+xy, \qquad \dot{y}=y-ry+rx-x^2,\qquad r=\sqrt{x^2+y^2},$$ which can be written in polar coordinates $(r,\theta)$ as $$\dot{r}=r(1-r), \qquad \dot{\theta}=r(1-\cos \theta)$$
Linearizing and studying the eigenvalues at the fixed points we see that the system has an unstable node at the origin $(0,0)$ and a saddle-node at the point $(1,0)$.
I'd like to show that the point $(1,0)$ is not stable and that any orbit starting at a point other than the origin converges to $(1,0)$, namely that for any $(x,y)\neq(0,0)$ we have $$\lim_{t\to \infty} \phi_t(x,y)=(1,0)$$
First, the Jacobian at $(1,0)$ is $\operatorname{diag}(-1,0)$, which is stable, right? It has one eigenvalue with negative real part, and one whose real part is zero but with equal geometric and algebraic multiplicities.
In order to see that any orbit starting at a point other than the origin converges to $(1,0)$, I'd like to use the Poincaré Bendixson's theorem. For this it suffices to show that there is no closed orbit in $\mathbb{R}^2\setminus (0,0)$. By virtue of Bendixson's criterion, this will follow provided that $div(f)$ doesn't change sign in $\mathbb{R}^2\setminus (0,0)$, where in our case $f(r,\theta)=(r(1-r), r(1-\cos\theta))$. But $$div(f)=1+r(\sin\theta-2)$$ which does change sign.
It is not quite clear how Poincaré-Bendixson theorem could prove useful in this setting since the unit circle acts as a closed orbit of this dynamical system, see the picture below. Perusing these (reasonably concise) lecture notes on the subject might prove useful.
To solve the question, consider any orbit starting from some initial condition $(r_0,\theta_0)$ with $r_0\gt0$. The phase diagram of $\dot r=r(1-r)$ on $r\gt0$ shows that $r(t)\to1$, in particular $1\leqslant2r(t)\leqslant4$ for every $t$ large enough, say, for every $t\geqslant t_1$.
On $t\geqslant t_1$, $\dot\theta(t)=2r(t)\sin^2(\theta(t)/2)$ hence $\sin^2(\theta/2)\leqslant\dot\theta\leqslant4\sin^2(\theta/2)$. This implies that $t\mapsto\theta(t)$ follows an orbit of the dynamical system $$\dot\varphi=\sin^2(\varphi/2),$$ and the relative speed of $\theta$ compared to $\varphi$ is always between $1$ and $4$, in particular $\theta(t)$ converges to the first fixed point met by $\varphi(t)$, which, for every starting condition $\varphi_0=\theta_0$, is $0$ mod $2\pi$.
This proves that, for every $r_0\gt0$, $r(t)\to1$ and $\theta(t)\to0$ mod $2\pi$, that is, for every initial condition $(x_0,y_0)\ne(0,0)$, the orbit $(x(t),y(t))$ accumulates on $(1,0)$.
The simplest proof that the $(x,y)$-point $(1,0)$ is unstable might be to consider an initial condition on the unit cercle $r_0=1$ with $\theta_0$ in $(0,2\pi)$. Then, for every $t$, $$r(t)=1,\qquad \dot\theta(t)=2\sin^2(\theta(t)/2),$$ hence $\theta(t)$ crosses all the interval $[\theta_0,2\pi)$, that is, $(x(t),y(t))$ moves on the unit circle anti-clockwise from the angle $\theta_0$ to the angle $2\pi$. In particular, when $\theta_0\to0$, $\theta_0\gt0$, the trajectory $(x(t),y(t))$ always passes by $(x,y)=(-1,0)$, which is not at vanishing distance from $(0,0)$. This proves that $(1,0)$ is unstable.
[Stream plot of the vector field omitted; it can be reproduced with the query below.]
streamplot[{x(1+y)-(x+y)sqrt(x^2+y^2), y-x^2+(x-y)sqrt(x^2+y^2)}, {x,-2,2}, {y,-2,2}]
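To complement the phase-plane argument, the polar system can also be integrated numerically, and every trajectory with $r_0>0$ is seen to approach $(x,y)=(1,0)$. Below is a small SciPy sketch (mine, not part of the original answer; the initial conditions are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def polar_field(t, state):
    # r' = r(1 - r),  theta' = r(1 - cos(theta))
    r, theta = state
    return [r * (1 - r), r * (1 - np.cos(theta))]

# A few initial conditions away from the origin, with theta_0 in (0, 2*pi)
for r0, th0 in [(0.2, 0.5), (1.0, 3.0), (2.5, 6.0)]:
    sol = solve_ivp(polar_field, (0, 200), [r0, th0], rtol=1e-9, atol=1e-12)
    r_end, th_end = sol.y[:, -1]
    x_end, y_end = r_end * np.cos(th_end), r_end * np.sin(th_end)
    print(f"r0={r0}, theta0={th0} -> (x, y) ~ ({x_end:.4f}, {y_end:.4f})")
# Each trajectory ends up close to (1, 0), approaching it as t grows.
```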
Did
Eye-recognizable and repeatable biochemical flexible sensors using low angle-dependent photonic colloidal crystal hydrogel microbeads
Mio Tsuchiya1,
Yuta Kurashina2,3 &
Hiroaki Onoe1
Scientific Reports volume 9, Article number: 17059 (2019)
This paper presents eye-recognizable and repeatable biochemical flexible sensors using low angle-dependent stimuli-responsive photonic colloidal crystal hydrogel (PCCG) microbeads. Thanks to the stimuli-responsive PCCG microbeads exhibiting structural color, users can obtain sensing information without depending on the viewing angle and the mechanical deformation of the flexible sensor. Temperature-responsive PCCG microbeads and ethanol-responsive PCCG microbeads were fabricated from a pre-gel solution of N-isopropylacrylamide (NIPAM) and N-methylolacrylamide (NMAM) by using a centrifuge-based droplet shooting device (CDSD). As a proof-of-concept of thin and flexible biochemical sensors, temperature- and ethanol-sensing devices were demonstrated. By comparing the structural color of the stimuli-responsive PCCG microbeads and the color chart of the device, sensing information, including skin temperature of the human body and ethanol concentration in alcoholic beverages, was obtained successively. We expect that our device design using low angle-dependent stimuli-responsive PCCG microbeads would contribute to the development of user-friendly biochemical sensor devices for monitoring environmental and healthcare targets.
Stimuli-responsive hydrogels1 that shrink and swell in response to the intensity of external stimuli such as temperature, pH, and chemical substance have been applied in the field of drug delivery systems2,3, soft micro-actuators4,5, and biochemical sensors6. Particularly, sensors that respond to chemical and biological targets repeatedly for healthcare and environment sample monitoring have attracted growing interest. For applying these stimuli-responsive hydrogels for biochemical sensors, it is necessary to convert the intensity of the stimulus to another type of information. A photonic colloidal crystal hydrogel (PCCG), in which monodispersed colloidal particles with diameters of hundreds of nanometers are regularly arranged, appears to be a powerful tool for sensors based on a stimuli-responsive hydrogel7,8,9,10. PCCG that reflects visible-light wavelength through Bragg's diffraction is called "structural color hydrogel". In this hydrogel, the strength of the stimulus can be converted to visible color information by the change in the distance between the colloidal particles. Thus, it is possible to obtain the sensing information with the naked eye or by using simple optical devices such as a complementary metal-oxide-semiconductor camera.
By applying various functional polymers to PCCG, various eye-recognizable biochemical sensors for temperature11, humidity12, pH13, and certain analytes14,15,16,17 have been reported. For example, a PCCG that repeatedly responds to humidity was fabricated12. This composite hydrogel could be an economical alternative to traditional humidity sensors. These PCCG-based sensors have advantages such as simplicity in a measurement method, repeated use, non-fading color, and high efficiency in obtaining signals. However, as most conventional PCCG-based sensors are film-shaped, the peak wavelength and the color intensity change depending on the viewing angles, i.e., the so-called "angle dependency," and on the mechanical deformation of the shape of PCCG18,19. This property makes it difficult to apply these hydrogels to practical sensors because the measurement result could differ depending on the viewing angle of the user and the mechanical deformation of the device.
Thus, we propose a stimuli-responsive PCCG-based eye-recognizable and repeatable biochemical flexible sensor with low angle dependency and high robustness against mechanical deformation by packaging stimuli-responsive PCCG microbeads as a sensing element (Fig. 1). By comparing the color of the stimuli-responsive PCCG microbeads with the color chart of the substrate by the naked eye, we can visually obtain the value of physical and chemical sensing information. Spherical colloidal photonic crystals have been applied in various fields such as a high throughput bio-screening in analytical chemistry and optical displays in device engineering thanks to its mass producibility, monodispersity and shape symmetry20,21,22. In this study, however, we proposed the use of photonic crystal microbeads to practical eye-recognizable biochemical sensors to suppress the change in the structural color against the observation angle and the unexpected mechanical deformation. Owing to the stimuli-responsive PCCG microbeads in the flexible sheet, the angle dependency of our sensors can be suppressed because the ordered arrangement of spatially symmetric photonic colloidal crystals (PCC) such as microspheres or hemispherical domes does not change depending on the viewing angle23,24,25,26. Moreover, as the stimuli-responsive PCCG microbeads are freely dispersed in the chamber formed on the flexible sheet, they can avoid unexpected color change with the mechanical deformation caused by the bending or stretching of the flexible sensing device. In this study, we used a "centrifuge-based droplet shooting device (CDSD)" for fabricating PCCG microbeads27,28,29. This method makes it possible to fabricate hydrogel microbeads easily and rapidly in a centrifugal separator, without loss of the pre-gel solution of stimuli-responsive polymers. As a demonstration of our proof-of-concept, we prepared and characterized two types of stimuli-responsive PCCG microbeads: a temperature-responsive sensor with poly(N-isopropylacrylamide) (PNIPAM) microbeads, and an ethanol-responsive sensor with poly(N-methylolacrylamide) (PNMAM) microbeads. Finally, we integrated these stimuli-responsive PCCG microbeads with thin and flexible devices to demonstrate the measurement of the skin temperature of a human hand and the ethanol concentrations of alcoholic beverages.
Schematic illustration of eye-recognizable and repeatable biochemical flexible sensors with stimuli-responsive PCCG microbeads. Users can obtain sensing information visually by the color of the microbeads. The freely dispersed stimuli-responsive PCCG microbeads suppress the change in the structural color against the viewing angle and the mechanical deformation.
Temperature-Responsive Microbeads
As an eye-recognizable biochemical sensor element, stimuli-responsive PCCG microbeads with diameters of a few hundred micrometers are required so that they can be packaged in a thin and flexible device. We chose the CDSD-based method27,28,29 for obtaining monodispersed microbeads easily in a short time without sample loss. A setup for fabricating microbeads (Fig. 2a) was mainly composed of a CDSD part (Fig. 2b) and an ultraviolet light-emitting diode (UV-LED) part. The CDSD part was composed of a glass capillary (G-1, Narishige) filled with a pre-gel solution (Fig. S1), a lab-made capillary holder, and a 1.5 mL microtube. The UV-LED part has four UV-LED lights (UF3VL-1H411, DOWA Electronics Materials), each with a radiant flux of 0.9 mW. An assembly of the CDSD and the UV-LED parts was set in a 50 mL centrifuge tube for centrifugation (H-19α, Kokusan) (Fig. 2c). To confirm whether the CDSD-based method could be applied to the stimuli-responsive hydrogel, we first tested the fabrication of stimuli-responsive microbeads without colloidal particles. In this process, the pre-gel solution was ejected from the tip of the glass capillary by a centrifugal force (~45 G) for 30–40 s, and a W/O emulsion was formed in the oil. Subsequently, the emulsion droplets were polymerized by UV and washed several times with deionized (DI) water to remove the oil (Fig. 2d).
Fabrication of microbeads. (a) Schematic of the device setup for the fabrication of microbeads. (b) Setup of the CDSD part. (c) The assembly of the CDSD and the UV-LED parts and a 50 mL centrifuge tube. (d) The fabrication process of stimuli-responsive hydrogel microbeads. The pre-gel solution ejected by centrifugal force forms emulsion droplets in oil, which are polymerized by UV.
Figure 3a–d show the microscopic images and size distribution of the fabricated PNIPAM microbeads before washing (in mineral oil) and after washing (in water). The diameters of the fabricated microbeads before and after washing were 327.2 µm ± 10.0 µm (mean ± s.d., n = 50) and 358.6 µm ± 9.3 µm (mean ± s.d., n = 50), respectively, ensuring that the microbeads were monodisperse (C.V. ≤ 3.0%). As the shape of the microbeads was maintained even after washing with water, the generated microdroplets of the pre-gel solution were properly polymerized by the UV irradiation in the centrifuge. We observed a difference in the diameters of the microbeads before and after washing with water (swell ratio: 9.6%). This result indicates that the hydrogel microbeads polymerized in oil had swollen by absorbing more water molecules into the hydrogel network after washing. In the usual fabrication system with the CDSD, the diameter of the ejected droplet, d, changes according to Tate's law27 ($d \propto \sqrt[3]{d_0/G}$, where d0 is the diameter of the capillary tip and G is the applied centrifugal force); however, in this method, it is also necessary to consider the diameter change by polymerization and the swelling by washing to fabricate microbeads of the desired diameter.
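A minimal numerical sketch of this scaling is given below; the tip diameter and prefactor are placeholders, and the polymerization shrinkage and washing swell noted above are deliberately ignored, so only the relative trend is meaningful.

```python
# Illustrative only: relative droplet diameter under the Tate's-law scaling d ∝ (d0/G)^(1/3).
# The prefactor k and the tip diameter are placeholders; an absolute prediction would need
# calibration, plus corrections for polymerization shrinkage and swelling after washing.
def relative_droplet_diameter(tip_diameter_um, centrifugal_force_g, k=1.0):
    return k * (tip_diameter_um / centrifugal_force_g) ** (1.0 / 3.0)

d_45g = relative_droplet_diameter(tip_diameter_um=20.0, centrifugal_force_g=45.0)
d_90g = relative_droplet_diameter(tip_diameter_um=20.0, centrifugal_force_g=90.0)
print(d_45g / d_90g)   # ≈ 1.26: doubling G shrinks the droplet by a factor of 2^(1/3)
```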
PNIPAM microbeads. Microscopic images and size distributions of PNIPAM microbeads (a,b) before washing and (c,d) after washing. (e) Microscopic images of PNIPAM microbeads at 15 °C and 45 °C. (f) Reversibility of the diameter change of PNIPAM microbeads.
The PNIPAM microbeads exhibited a clear volume change between 15 °C and 45 °C (Fig. 3e), similar to that of conventional PNIPAM bulk hydrogel30. The PNIPAM microbeads also showed a reversible volume change with cyclic cooling and heating (Fig. 3f). These results show that functional stimuli-responsive hydrogel microbeads can be fabricated from the pre-gel solution through the CDSD-based fabrication method, and that these fabricated microbeads could be applied to biochemical sensors for repeated use, as both the swelling and shrinkage degrees were almost constant even when the temperature was changed repeatedly.
Temperature-Responsive Structural Color Microbeads
To visualize the volume change of the stimuli-responsive hydrogel microparticles, PCC was simultaneously formed inside the hydrogel microbeads during the fabrication process to obtain the PCCG microbeads. PCCG exhibits structural color by immobilizing regularly arranged desalted colloidal particles with the hydrogel network. The peak reflective wavelength (λmax) of stimuli-responsive PCCG is expressed as
$$\lambda_{\max}\propto D\times \sin\theta \times C$$
where D is a distance between the colloidal particles, θ is the angle of the incident light, and C is a diameter change ratio (C = d/d0, where d0 and d are the diameters before and after the response, respectively)31,32. Thus, the change in λmax for the stimuli-responsive PCCG before and after the response can be simply expressed as
$$\lambda_{\max,\,\mathrm{after}}=\lambda_{\max,\,\mathrm{before}}\times C.$$
As shown in Eqs (1) and (2), the peak wavelength of λmax or λmax,after shifts to the longer wavelength when the hydrogel swells (C > 1), and shifts to the shorter wavelength when the hydrogel shrinks (C < 1). Consequently, the visible structural color of the stimuli-responsive PCCG changes between the red side and the purple side owing to this characteristic. In stimuli-responsive hydrogels, the hydrogel volume changes according to the strength of the stimulus. Thus, the stimuli-responsive PCCG can convert the strength of the stimulus to the visible color change information.
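A minimal sketch of this conversion is shown below, using the 10 °C baseline quoted further on (618 nm, 502 µm); the two target diameters are hypothetical and only illustrate the blue shift on shrinkage and red shift on swelling.

```python
def peak_wavelength_after(lambda_before_nm, d_before_um, d_after_um):
    """Shift the reflection peak by the diameter change ratio C = d_after / d_before."""
    c = d_after_um / d_before_um
    return lambda_before_nm * c

# 10 °C baseline from the text (618 nm, 502 µm); the two target diameters are hypothetical.
print(peak_wavelength_after(618.0, 502.0, 450.0))   # shrinkage, C < 1 -> ~554 nm (blue shift)
print(peak_wavelength_after(618.0, 502.0, 520.0))   # swelling,  C > 1 -> ~640 nm (red shift)
```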
For fabricating temperature-responsive PCCG microbeads, we prepared three types of pre-gel solutions with 130 nm SiO2 colloidal particles and with the different mole ratio of the monomer, N-isopropylacrylamide (NIPAM) to the crosslinker, N,N'-methylenebisacrylamide (BIS) (MNIPAM/MBIS = 14, 27, and 41). Then, three types of temperature-responsive PCCG microbeads were fabricated with the CDSD method. Note that the centrifugal force does not cause the sedimentation or aggregation of SiO2 colloidal particles in the pre-gel solution (Figs S2, 3). The PNIPAM PCCG microbeads were observed by obtaining microscopic images in deionized water from 10 °C to 40 °C (Fig. 4a-i-iii). These images revealed that the PCCG microbeads exhibited a clear change in structural color from the red side to the purple side depending on the increase in temperature. In addition to the microscopic images, these PNIPAM PCCG microbeads were evaluated by measuring the reflection spectra with a UV-vis-NIR spectrometer (S4) under the same conditions (Fig. 4b-i-iii). These reflection spectra showed a clear peak wavelength at each temperature, and the total range of the shift of λmax changed depending on the cross-linking ratio. This clear color change and λmax change indicate that the fabricated PNIPAM PCCG microbeads can convert the stimulus information into color change information.
PNIPAM PCCG microbeads with different crosslinking ratio. (a,b) Microscopic images and reflection spectra of PNIPAM PCCG microbeads in DI water from 10 °C to 40 °C. (a-i,b-i) MNIPAM/MBIS = 41, (a-ii,b-ii) MNIPAM/MBIS = 27 and (a-iii,b-iii) MNIPAM/MBIS = 14. Scale bars are 500 µm. Iref shows the reflected light intensity. (c) Temperature dependence of the peak wavelength λmax and the diameter of PNIPAM PCCG microbeads (MNIPAM/MBIS = 41). (d) Comparison of the temperature dependence of Δλmax/ΔT in PNIPAM PCCG microbeads for MNIPAM/MBIS = 41 and 14.
For confirming that the structural color changed owing to the volume change of the hydrogel as expressed by Eq. (3), we then compared the temperature dependence of the diameter and the temperature dependence of λmax in the case of MNIPAM/MBIS = 41 (Fig. 4c). Both the diameter and λmax show similar curves that change sharply around 32 °C, which is near the lower critical solution temperature (LCST) of PNIPAM33. Thus, this peak shift toward shorter wavelength caused by the temperature rise is considered to be due to the decrease in the particle distance caused by the shrinkage of the PNIPAM hydrogel. As the peak wavelength obtained through spectroscopic measurement, λmax,meas, and the peak wavelength calculated using Eq. (3), λmax,calc, (λmax,before and the initial diameter were set to the values at 10 °C, i.e., 618 nm and 502 µm, respectively) are reasonably close (λmax,meas = 599 nm and λmax,calc = 596 nm at 16 °C; λmax,meas = 557 nm and λmax,calc = 551 nm at 24 °C), these results show that the PNIPAM PCCG microbeads changed isotropically in volume while maintaining the lattice structure (S4).
Furthermore, we evaluated the tuning of the sensitivity of the PNIPAM PCCG microbeads. As shown in Fig. 4a-i-b-iii, the total ranges of the shift of both the color and λmax differ depending on the cross-linking ratio, MNIPAM/MBIS. For example, in the case of MNIPAM/MBIS = 41, the color shift was large i.e., from red (615 nm, 12 °C) to purple (465 nm, 30 °C); in contrast, in the case of MNIPAM/MBIS = 14, the color shift was smaller i.e., from yellow-green (538 nm, 12 °C) to purple (455 nm, 29 °C). Figure 4d shows the peak wavelength shift per temperature change (Δλmax/ΔT), which indicates the sensitivity, in the case of MNIPAM/MBIS = 41 and 14. As shown in Fig. 4d, Δλmax/ΔT of MNIPAM/MBIS = 41 was higher than that of MNIPAM/MBIS = 14. This is because the hydrogel with lower cross-linking ratio swells and shrinks more easily, resulting in a larger change of the distance between the colloidal particles. This reveals that a lower cross-linking ratio makes the sensitivity higher. Therefore, the sensitivity of stimuli-responsive PCCG microbeads can be tuned by varying the crosslinking ratio.
An important feature of the PCCG microbeads is that their spherical shape can reduce the angle dependency of the PCCG. For confirming the low angle dependence of PNIPAM PCCG microbeads, we measured the reflection spectra of the PNIPAM PCCG microbeads by tilting the substrate from 45° to 90° (Fig. 5a). Microscopic images of the PCCG at varied observation angles, θ, (Fig. 5b) show that the dependence of the change in the structural color on the substrate angle was hardly observed. The reflection spectra measurement confirmed that no change in the peak wavelength was observed (90°: 545.8 nm, 45°: 545.8 nm) (Fig. 5c). This result suggests that the spherical structural color hydrogel is suitable for application to an eye-recognizable sensor.
Measurement of angle dependency. (a) Setup of the measurement of angle dependency. (b) Microscopic images of PNIPAM PCCG microbeads from different observation angles. Scale bar is 200 µm. (c) Reflection spectra of PNIPAM PCCG microbeads observed from 0° and 45°.
Stimuli-Responsive PCCG Microbeads for Sensing Chemicals
To demonstrate that this CDSD-based microbead fabrication method is applicable to various functional hydrogel polymers, we fabricated ethanol-responsive microbeads in the same way by using an ethanol-responsive monomer, N-methylolacrylamide (NMAM)34, and examined the ethanol responsivity. Although the pre-gel solution with 10% colloidal silica particle concentration is green in color, the fabricated microbeads exhibited a color closer to red (in the water). This is because the PNMAM hydrogel is more hydrophilic than the PNIPAM hydrogel, and is more likely to incorporate water molecules into the hydrogel network. Therefore, the final structural color of microbeads shifted to the longer-wavelength side through swelling. As the ethanol concentration increased from 0% to 100%, the color of the microbeads changed from red to the ultraviolet region while the diameter of the microbeads decreased (Fig. 6a). The reflection spectra measurement of the PNMAM PCCG microbeads confirmed that the peak wavelength shifted from the longer-wavelength side (597 nm) to the shorter-wavelength side (less than 497 nm) (Fig. 6b). The ethanol dependences of the diameter and λmax of the microbeads are compared in Fig. 6c. Both the diameter and λmax showed similar curves, indicating that this structural color change was due to the change in the distance of the colloidal particles caused by the volume change of the PNMAM hydrogel. From these results, we confirmed that the proposed microbead fabrication could be applied to other acrylamide derivatives that are polymerized with UV. The use of another acrylamide derivative monomer including vinyl-modified DNA aptamer makes it possible to prepare PCCG microbeads that respond only to a specific target such as specific low-molecular compounds35 and proteins16 without interfering with other targets.
PNMAM PCCG microbeads. (a) Microscopic images of PNMAM PCCG microbeads at ethanol concentrations from 0% to 100%. Scale bar is 500 µm. (b) Reflection spectra of PNMAM microbeads at ethanol concentrations from 0% to 60%. (c) Ethanol dependence of the peak wavelength λmax and the diameter of PNMAM PCCG microbeads.
Eye-Recognizable and Flexible Sensing Devices
Finally, we integrated these sensor microbeads with flexible devices to evaluate their measurement performances. Our sensing device was composed of a sensing sheet with stimuli-responsive PCCG microbeads and a substrate with a color chart. We built the sensing sheet by placing water-dispersed PCCG microbeads in the chamber (3 mm in diameter and 0.6–1.0 mm in-depth) on the flexible polydimethylsiloxane (PDMS) sheet (7 mm × 7 mm, thickness ~1.5 mm) and then sealed the PCCG microbeads with a lab-fabricated PDMS membrane (thickness ~70 µm) or a porous polycarbonate membrane (manually punched ~70 µm holes in commercially available porous polycarbonate membrane (TCTP02500, Millipore) with 10 µm pores) for temperature sensing and ethanol sensing, respectively (Figs 7a–i, S4). We first evaluated the stability, the low angle dependency and the robustness against the mechanical deformation of the sensing sheet. Even after 6 months, the structural color of the PNIPAM PCCG microbeads was clearly observed as long as the device was stored while preventing drying. The comparison between the PNIPAM PCCG microbeads 4days after the fabrication and the ones 6 months after the fabrication showed that they had almost the same change in the peak wavelength depending on the temperature change (Fig. 7a-ii). Moreover, the structural color of the PNIPAM PCCG microbeads was observed to be almost unchanged by the change in the viewing angles (θ = 90° and 50° in Fig. 7a-iii). To confirm the robustness against the mechanical deformation of the sensing sheet, we stretched the sensing sheet from 0% to 66% or bent it with tweezers (radius of the curvature: ~2 mm) (Fig. 7a-iv,v). We confirmed that the freely dispersed PNIPAM PCCG microbeads maintained its spherical shape and the structural color did not change even when the sensing sheet was deformed (λmax, 0% = 549.4 nm, λmax, 33% = 549.0 nm). Therefore, stretching or bending does not cause the unexpected color change of the PCCG microbeads. From these results, we confirmed that the microbead-shaped structural color sensing elements effectively contributed to the suppression of angle dependency of the exhibited structural colors and to the color robustness against the deformation of the entire device.
Eye-recognizable flexible sensing device. (a-i) Schematic illustration of the sensing sheet. Pictures of the flexible sheet (a-ii) 6 months after fabrication, (a-iii) viewing from 90° and 50°, (a-iv) stretching from 0% to 66% and (a-v) before/after stretching. (b-i) Schematic illustration explaining the working principle of the temperature sensing device. Pictures of the temperature sensing device after placed (b-ii) at the experimental desk, (b-iii) in the refrigerator and (b-iv) on the skin of the hand. (c-i) Schematic illustration explaining the working principle of the ethanol sensing device. Pictures of the ethanol-responsive sensing device after immersing (c-ii) in the water, (c-iii) in vodka (Alc. 50%), (c-iv) in shochu (Alc. 62%), and (c-v) in diluted shochu (Alc. 10%). Brightness has been modified for the visibility in device pictures.
A schematic illustration (Fig. 7b-i) shows the working principle of the flexible temperature-sensing device using PNIPAM PCCG microbeads. The sensor microbeads sense the temperature of a sensing target via the PDMS membrane. We confirmed that the sensor microbeads could change its structural color within 5 min (Fig. S5a). After placing the flexible sensing sheet at a measurement point, the color of the PNIPAM PCCG microbeads can be converted into the temperature by comparing the color with the color chart. Temperature measurement has been demonstrated by placing the sensor at three different conditions successively: at the experiment desk in the office (~25.5 °C), in the refrigerator (~9 °C), and directly on the skin of a hand (~34 °C). On the experiment desk, the color of the microbeads indicated 25–26 °C, which was almost the value shown by a room-temperature meter, 25.5 °C (Fig. 7b-ii). In the refrigerator, the color of the microbeads changed to red (Fig. 7b-iii), which was similar to the color indicating 12 °C on the color chart. On the skin of the hand, the color of the microbeads changed to purple (Fig. 7b-iv), which was similar to the color indicating 30 °C on the color chart. These results show that temperature can be measured visually and repeatedly with this device.
Subsequently, the measurement of ethanol concentration was demonstrated by using three different alcoholic beverages: vodka (Russian distilled spirits, Alc. 50%), shochu (Japanese distilled spirits, Alc. 62%), and diluted shochu (Alc. 10%). In the case of the ethanol sensing, the flexible sensing sheet, where the water-dispersed NMAM PCCG microbeads were sealed in the chamber with the porous membrane, was immersed in a sample liquid and stirred for approximately 5 min to exchange the water in the chamber with the sample liquid completely via the porous membrane (Figs 7c-i, S5b). Before the measurement, the color of the NMAM PCCG microbeads was red (water: alcohol conc. = 0%); however, after immersing the sensing sheet in these alcoholic beverages in sequence, the color changed corresponding to each sample's alcohol concentration (blue: ~60% conc., green: ~50% conc., and orange: ~10% conc. for vodka, shochu, and diluted shochu, respectively (Fig. 7c-ii-v)). These results show that our ethanol-sensing device has selectivity on different liquors if those liquors have different alcohol concentrations. Besides, the chemical substance, i.e., ethanol, can be measured visually and repeatedly with this device even if the sensing sample is not a pure ethanol solution but a mixture containing components other than ethanol such as alcoholic beverages.
One of the features of our proposed sensor with feely-dispersed stimuli-responsive PCCG microbeads is the robustness of the structural color against the deformation of the device. This device design avoids the unexpected structural color change caused by the bending or stretching of the device. Therefore, this sensor can be applied to the measurement of shape-deforming objects and thin flexible samples, such as biological skin surfaces and cloth surfaces, respectively, which cannot be measured with the conventional sheet-shaped PCCG because of the change in the structural color depending on the mechanical deformation of the device18,19.
From the viewpoint of materials, tuning and enhancing the functionality of the stimuli-responsive PCCG microbeads will lead to the improvement of the sensor performance and will broaden the range of its application. As mentioned earlier, the sensitivity of the PCCG microbeads can be tuned by varying the cross-linking ratio. In addition to the sensitivity, the measurement range could be controlled by copolymerizing the monomer material with another monomer material. For example, LCST control by copolymerizing another monomer with NIPAM has been reported36,37. LCST can be increased beyond 32 °C by increasing the content of a hydrophilic monomer such as N,N-dimethylacrylamide, and decreased below 32 °C by increasing the content of a hydrophobic monomer such as butyl methacrylate. Therefore, applying this LCST tuning to our PNIPAM PCCG microbeads facilitates application to more varied target measurements. Furthermore, the use of other stimuli-responsive materials such as molecularly imprinting polymers38 and DNA hydrogel39,40 in these PCCG microbeads could broaden the range of application for the detection of specific chemicals ranging from ions to macromolecules in the environment or healthcare monitoring.
However, as our sensor is designed for obtaining sensing information visually, it may not be suitable for users who need to obtain sensing information precisely as a numerical value, or for users with color weakness. This problem could be solved by adopting a camera system that can convert color information to an exact numerical value to support the readout of the color. Thus, a more precise and user-friendly sensor could be achieved using our stimuli-responsive PCCG-microbeads-based biochemical sensor.
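One possible form of such a camera-based readout is sketched below; the chart colors, the chart temperatures and the use of hue as the matching feature are illustrative assumptions, not the device's actual calibration.

```python
import colorsys

# Hypothetical calibration chart: printed chart colors (RGB, 0-255) versus temperature in °C.
COLOR_CHART = {12: (200, 40, 40), 20: (230, 180, 40), 26: (60, 170, 60), 30: (90, 60, 180)}

def hue(rgb):
    r, g, b = (c / 255.0 for c in rgb)
    return colorsys.rgb_to_hsv(r, g, b)[0]

def read_temperature(observed_rgb):
    """Return the chart temperature whose hue is closest to the observed bead color.
    Hue wrap-around at 1.0 is ignored for simplicity."""
    return min(COLOR_CHART, key=lambda t: abs(hue(COLOR_CHART[t]) - hue(observed_rgb)))

print(read_temperature((80, 70, 190)))   # -> 30, the purple chart entry
```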
We expect that our work can be applied to a more user-friendly environment and healthcare monitoring system with the advantageous features of PCCG-based sensors, such as repeated use and no requirement of external electrodes.
The colloidal silica solution containing amorphous silica particles with diameters of 130 nm (MP-1040) was provided by Nissan Chemicals. NIPAM (095-03692) and BIS (134-02352) were purchased from Wako Pure Chemical Industries. NMAM (M0574) was purchased from Tokyo Chemical Industry. Span 80 (37408-32) was purchased from Kanto Chemical. A photopolymerization initiator (Irgacure1173) was obtained from BASF. Mineral oil (M8410) was purchased from Sigma-Aldrich. Ion exchange resins (AG501-X8) were purchased from BIO-RAD. For sensor device fabrication, PDMS (SILPOT 184) was purchased from Dow Corning Toray, polycarbonate porous membrane (TCTP02500) was purchased from Millipore, and a polyethylene terephthalate (PET) film (TOYOBO ESTER®Film E5001) was purchased from Toyobo.
Setup for fabricating microbeads
In the CDSD part, the glass capillary (G-1, Narishige), a lab-made capillary holder, and a 1.5 mL microtube were assembled. The tip of the glass capillary was sharpened with a puller to form a thin tip (PC-10, Narishige) and then cut out with a microforge (MF-900, Narishige) to the desired diameter (Fig. S1). The assembled CDSD was set in the UV-LED part. The UV-LED part is a jig made of polylactic acid using a 3D printer (Sigma, BCN3D Technologies) with four UV-LED lights (UF3VL-1H411, Dowa Electronics Materials) mounted around the jig. The UV-LED lights were connected to a DC power supply through a slip ring (Moog Components Group, EC4294-2) for supplying electric power to the centrifuge (H-19Rα, Kokusan).
Microbeads fabrication
For the fabrication of microbeads without colloidal particles, a pre-gel solution composed of NIPAM (0.8 M), BIS (MNIPAM/MBIS = 27), and a photopolymerization initiator (Irgacure1173) (0.5 vol%) was prepared. In the case of fabricating stimuli-responsive PCCG microbeads containing colloidal particles, a pre-gel solution composed of NIPAM (0.7 M), BIS (MNIPAM/MBIS = 14, 27, 41), Irgacure1173 (0.5 vol%), and silica particles (final concentration 10 wt%) was prepared, and then desalted for more than 5 min with the ion exchange resin. In the case of PNMAM microbeads, we used NMAM (0.8 M NMAM, MNMAM/MBIS = 46) instead of NIPAM. The pre-gel solution was loaded into the glass capillary, and the solution-filled capillary was fixed with the capillary holder to the 1.5 mL microtube containing mineral oil with Span 80 (3 wt%) and Irgacure1173 (0.5 vol%). The distance between the tip of the glass capillary and the air–oil interface was set to ~4 mm. The pre-gel solution was ejected via centrifugal force (~45 G) for 30–40 s, and a W/O emulsion was formed in the oil. Subsequently, the microtube was irradiated with UV (325 nm) in the centrifuge for 50 s for polymerizing the emulsion droplets. In these operations, the centrifuge was set at 5 °C for preventing shrinkage of the polymerized NIPAM owing to an increase in temperature. The polymerized hydrogel microparticles were washed several times with DI water to remove oil.
Observation and evaluation of microbeads
The fabricated microbeads were observed by using an inverted phase-contrast microscope (IX73P1-22FL/PH, Olympus) and a digital microscope (VHX-100F, Keyence). The reflection spectra of the stimuli-responsive PCCG microbeads were obtained using an epi-illuminated microscope (BX-50, Olympus) equipped with a UV–vis–NIR spectrometer (USB2000+, Ocean Optics) and an Olympus U-LH100 light source. The angle dependence was observed by attaching a compact digital camera (EX-F1, CASIO) on the epi-illuminated microscope with an adapter (NY-F1, CASIO).
Integration of stimuli-responsive PCCG microbeads with the flexible sensing device
For preparing the sensing sheet, the water-dispersed PCCG microbeads were placed in the chamber (3 mm in diameter and 1000 or 600 µm in-depth) fabricated on the PDMS sheet (7 mm × 7 mm, thickness ~1.5 mm) and sealed with the PDMS membrane or the porous membrane for the measurement of temperature or ethanol concentration, respectively (Fig. S4). We performed the demonstration of bending and stretching with the sensing sheet having the 1000 µm-depth chamber, and the other experiments were performed with the sensing sheet having the 600 µm-depth chamber. For preparing the substrate with the color chart, the colors of the microbeads were printed on a paper (18 mm × 18 mm), and then the PET film was affixed thereon for water-proofing.
To evaluate the measurement performance of the temperature-sensing device, the sensing sheet was first placed at the experiment desk in the office, then in the refrigerator, and finally on the skin of the hand. In each measurement, the sensing sheet was left for 5 min or more in the measurement condition. Then, for obtaining the measurement result, the sensing sheet was placed on the substrate to compare the color of the PCCG microbeads with the color chart.
For the evaluation of the measurement performance of the ethanol sensing device, three types of alcoholic beverages were used. The sensing sheet was first immersed in vodka (Smirnoff Vodka Blue 50°, Kirin), then in shochu (Yonaguni Miniature Kuba-Maki, Sakimoto Shuzo), and finally in diluted shochu (mixing ratio of shochu and DI water was 1:5). In each measurement, the sensing sheet was immersed in about 40 mL of alcoholic beverage with stirring for 5 min or more. Note that the minimum limit of the sample liquid depends on the size of the chamber and allowable error because our ethanol sensing device detects ethanol concentration by exchanging the water in the chamber with the sample liquid completely. Subsequently, the measurement results were obtained by placing the sensing sheet on the substrate for comparing the color of the PCCG microbeads with the color chart. The alcohol concentration of each beverage was also measured using a commercialized alcohol refractometer (Handheld Alcohol Meter, KKmoon).
Koetting, M. C., Peters, J. T., Steichen, S. D. & Peppas, N. A. Stimulus-responsive hydrogels: Theory, modern advances, and applications. Mater. Sci. Eng. R Reports 93, 1–49 (2015).
Dadsetan, M. et al. A stimuli-responsive hydrogel for doxorubicin delivery. Biomaterials 31, 8051–8062 (2010).
Malachowski, K. et al. Stimuli-responsive theragrippers for chemomechanical controlled release. Angew. Chemie - Int. Ed. 53, 8045–8049 (2014).
Nakajima, S., Kawano, R. & Onoe, H. Stimuli-responsive hydrogel microfibers with controlled anisotropic shrinkage and cross-sectional geometries. Soft Matter 13, 3710–3719 (2017).
Ionov, L. Hydrogel-based actuators: Possibilities and limitations. Mater. Today 17, 494–503 (2014).
Ruan, C., Zeng, K. & Grimes, C. A. A mass-sensitive pH sensor based on a stimuli-responsive polymer. Anal. Chim. Acta 497, 123–131 (2003).
Ge, J. & Yin, Y. Responsive photonic crystals. Angew. Chemie - Int. Ed. 50, 1492–1522 (2011).
Takeoka, Y. Stimuli-responsive opals: Colloidal crystals and colloidal amorphous arrays for use in functional structurally colored materials. J. Mater. Chem. C 1, 6059–6074 (2013).
Yetisen, A. K. et al. Photonic hydrogel sensors. Biotechnol. Adv. 34, 250–271 (2016).
Cai, Z., Smith, N. L., Zhang, J. T. & Asher, S. A. Two-dimensional photonic crystal chemical and biomolecular sensors. Anal. Chem. 87, 5013–5025 (2015).
Weissman, J. M., Sunkara, H. B., Tse, A. S. & Asher, S. A. Thermally switchable periodicities and diffraction from mesoscopically ordered materials. Science 274, 959–960 (1996).
Tian, E. et al. Colorful humidity sensitive photonic crystal hydrogel. J. Mater. Chem. 18, 1116–1122 (2008).
Tian, E. et al. Color-oscillating photonic crystal hydrogel. Macromol. Rapid Commun. 30, 1719–1724 (2009).
Ye, B. F. et al. Colorimetric photonic hydrogel aptasensor for the screening of heavy metal ions. Nanoscale 4, 5998–6003 (2012).
Sharma, A. C. et al. A General Photonic Crystal Sensing Motif: Creatinine in Bodily Fluids. J. Am. Chem. Soc. 126, 2971–2977 (2004).
Zhang, J. T., Chao, X., Liu, X. & Asher, S. A. Two-dimensional array Debye ring diffraction protein recognition sensing. Chem. Commun. 49, 6337–6339 (2013).
Zhang, C., Losego, M. D. & Braun, P. V. Hydrogel-based glucose sensors: Effects of phenylboronic acid chemical structure on response. Chem. Mater. 25, 3239–3250 (2013).
Iwayama, Y. et al. Optically tunable gelled photonic crystal covering almost the entire visible light wavelength region. Langmuir 19, 977–980 (2003).
Haque, M. A., Kurokawa, T., Kamita, G., Yue, Y. & Gong, J. P. Rapid and reversible tuning of structural color of a hydrogel over the entire visible spectrum by mechanical stimulation. Chem. Mater. 23, 5200–5207 (2011).
Zhao, Y., Shang, L., Cheng, Y. & Gu, Z. Spherical colloidal photonic crystals. Acc. Chem. Res. 47, 3632–3642 (2014).
Zhao, X. W. et al. Uniformly colorized beads for multiplex immunoassay. Chem. Mater. 18, 2443–2449 (2006).
Zhao, X. et al. Colloidal crystal beads as supports for biomolecular screening. Angew. Chemie - Int. Ed. 45, 6835–6838 (2006).
Kim, S.-H., Lim, J.-M., Jeong, W. C., Choi, D.-G. & Yang, S.-M. Patterned Colloidal Photonic Domes and Balls Derived from Viscous Photocurable Suspensions. Adv. Mater. 20, 3211–3217 (2008).
Suzuki, N., Iwase, E. & Onoe, H. Microfluidically Patterned Dome-Shaped Photonic Colloidal Crystals Exhibiting Structural Colors with Low Angle Dependency. Adv. Opt. Mater. 5, 1600900 (2017).
Gu, H. et al. Tailoring colloidal photonic crystals with wide viewing Angles. Small 9, 2266–2271 (2013).
Kuang, M. et al. Inkjet Printing Patterned Photonic Crystal Domes for Wide Viewing-Angle Displays by Controlling the Sliding Three Phase Contact Line. Adv. Opt. Mater. 2, 34–38 (2014).
Maeda, K., Onoe, H., Takinoue, M. & Takeuchi, S. Controlled synthesis of 3D multi-compartmental particles with centrifuge-based microdroplet formation from a multi-barrelled capillary. Adv. Mater. 24, 1340–1346 (2012).
Morita, M. et al. Droplet-Shooting and Size-Filtration (DSSF) Method for Synthesis of Cell-Sized Liposomes with Controlled Lipid Compositions. ChemBioChem 16, 2029–2035 (2015).
Morita, M., Yamashita, H., Hayakawa, M., Onoe, H. & Takinoue, M. Capillary-based Centrifugal Microfluidic Device for Size-controllable Formation of Monodisperse Microdroplets. J. Vis. Exp. 108, 53860 (2016).
Otake, K., Inomata, H., Konno, M. & Saito, S. Thermal Analysis of the Volume Phase Transition with N-Isopropylacrylamide Gels. Macromolecules 23, 283–289 (1990).
Nakayama, D., Takeoka, Y., Watanabe, M. & Kataoka, K. Simple and precise preparation of a porous gel for a colorimetric glucose sensor by a templating technique. Angew. Chemie - Int. Ed. 42, 4197–4200 (2003).
Suzuki, N., Iwase, E. & Onoe, H. Micropatterning of Multiple Photonic Colloidal Crystal Gels for Flexible Structural Color Films. Langmuir 33, 6102–6107 (2017).
Heskins, M. & Guillet, J. E. Solution Properties of Poly(N-isopropylacrylamide). J. Macromol. Sci. Part A - Chem. 2, 1441–1455 (1968).
Toyotama, A. et al. Gelation of colloidal crystals without degradation in their transmission quality and chemical tuning. Langmuir 21, 10268–10270 (2005).
Hayashi, T., Takinoue, M. & Onoe, H. DNA Aptamer-Linked Structural-Color Hydrogel for Repeatable Biochemical Sensing. Transducers 2019 582–585 (2019).
Feil, H., Bae, Y. H., Feijen, J. & Kim, S. W. Effect of Comonomer Hydrophilicity and Ionization on the Lower Critical Solution Temperature of N-Isopropylacrylamide Copolymers. Macromolecules 26, 2496–2500 (1993).
Takei, Y. G. et al. Temperature-Responsive Bioconjugates. Temperature-Modulated Bioseparations. Bioconjugate Chem. 4, 341–346 (1993).
Griffete, N. et al. Inverse opals of molecularly imprinted hydrogels for the detection of bisphenol A and pH sensing. Langmuir 28, 1005–1012 (2012).
Um, S. H. et al. Enzyme-catalysed assembly of DNA hydrogel. Nat. Mater. 5, 797–801 (2006).
Cheng, E. et al. A pH-triggered, fast-responding DNA hydrogel. Angew. Chemie - Int. Ed. 48, 7660–7663 (2009).
This work was partly supported by Research Grant (Basic Research) from TEPCO Memorial Foundation, JKA and its promotion funds from KEIRIN RACE, and JSPS Grant-in-Aid for Young Scientists (19K14920), Japan.
Graduate School of Integrated Design Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-Ku, Yokohama, 223-8522, Japan
Mio Tsuchiya & Hiroaki Onoe
Department of Mechanical Engineering, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kohoku-Ku, Yokohama, 223-8522, Japan
Yuta Kurashina
School of Materials and Chemical Technology, Tokyo Institute of Technology, 4259 Nagatsutacho, Midori-Ku, Yokohama, 226-8503, Japan
Mio Tsuchiya
Hiroaki Onoe
M.T., Y.K. and H.O. designed the study. M.T. performed all the experiments. Y.K. assisted in building the experimental setup. M.T. and H.O. wrote the paper. All authors discussed the results and commented on the manuscript.
Correspondence to Hiroaki Onoe.
Tsuchiya, M., Kurashina, Y. & Onoe, H. Eye-recognizable and repeatable biochemical flexible sensors using low angle-dependent photonic colloidal crystal hydrogel microbeads. Sci Rep 9, 17059 (2019). https://doi.org/10.1038/s41598-019-53499-2
Comparative evaluation of GIS-based landslide susceptibility mapping using statistical and heuristic approach for Dharamshala region of Kangra Valley, India
Swati Sharma1 &
Ambrish Kumar Mahajan1,2
Geoenvironmental Disasters volume 5, Article number: 4 (2018)
The Dharamshala region of Kangra valley, India is one of the fastest developing Himalayan cities and is prone to landslide events almost throughout the year. Development is proceeding at a fast pace, which calls for landslide susceptibility zonation studies in order to generate maps that can be used by planners and engineers to implement projects at safer locations. A landslide inventory was developed for Dharamshala with the help of field observations. Based on field investigations and satellite image studies, eight causal factors, viz. lithology, soil, slope, aspect, fault buffer, drainage buffer, road buffer and land cover, were selected to represent the landslide problems of the study area. The research presents a comparative assessment of geographic information system based landslide susceptibility maps using the analytical hierarchy process and the frequency ratio method. The maps generated have been validated and evaluated for consistency in the spatial classification of susceptibility zones using prediction rate curve, landslide density and error matrix methods.
The results of the analytical hierarchy process (AHP) show that the maximum factor weightages result from lithology and soil, i.e. 0.35 and 0.25. The frequency ratios of the factor classes indicate a strong correlation of the Dharamsala Group of rocks (value 1.28) with the landslides, which also agrees with the results from the AHP method, wherein the same lithology has the maximum weightage, i.e. 0.71. The landslide susceptibility zonation maps from the statistical frequency ratio and the heuristic analytical hierarchy process methods were classified into five classes: very low susceptibility, low susceptibility, medium susceptibility, high susceptibility and very high susceptibility. The landslide density distribution in each susceptibility class shows agreement with the field conditions. The prediction rate curve was used for assessing the future landslide prediction efficiency of the susceptibility maps generated. The prediction curves yielded area under curve values of 76.77% for the analytical hierarchy process and 73.38% for the frequency ratio method. The final evaluation of the susceptibility maps was based on the error matrix approach to calculate the area distributed among the susceptibility zones of each map. This technique assessed the spatial differences and agreement between both susceptibility maps. The evaluation results show 70% overall spatial similarity between the resultant landslide susceptibility maps.
Hence it can be concluded that the landslide susceptibility maps (LSMs) generated from the AHP and frequency ratio methods have yielded good results, as 100% of the landslide data falls in the high susceptibility and very high susceptibility classes of both maps. Also, the spatial agreement of almost 70% between the resultant maps increases the reliability of the results of the present study. Therefore, the LSM generated from the AHP method, with 76.77% landslide prediction efficiency, can be used by the area administration for planning future developmental sites.
Landslides are the downslope movement of debris, rocks or earth material under the force of gravity (Cruden, 1991). Destructive mass movements such as landslides are considered a major geological hazard around the globe. Landslide activity in India is mostly associated with its northernmost states such as Uttarakhand, Himachal Pradesh, Sikkim and West Bengal, which are located in the Himalayan foothills with dynamic tectonic and climatic variations (Sarkar et al. 1995; Chauhan et al. 2010); in southern India, the Nilgiri range and the Western Ghats are also prone to landslides despite their hard rocks and tectonic stability (Kaur et al. 2017). According to the Geological Survey of India, almost 15% of the land area in India is exposed to landslide events (Onagh et al. 2012), and India is the worst-affected country by landslides in Asia after China (Guha-Sapir et al., 2012; Binh Thai et al. 2016). The tendency towards landsliding is caused by various factors such as the steepness of slopes, the tectonic conditions of the study area, prolonged rainfall episodes with their return periods, topography and the inherent properties of the slope material (Anbalagan 1992). Mitigation measures for landslides require the identification of existing landslides in an area for spatial prediction of future events by studying the prevailing causal factors (Rai et al. 2014), for which a standard tool known as landslide susceptibility mapping is used around the world by various researchers (Guzzetti et al., 1999; Van Westen et al. 2008). Fell et al. 2008 considered landslide susceptibility for the identification of landslide prone sites and their relation to the set of causal factors in that area. Landslide susceptibility mapping generally involves two methods: (I) qualitative methods, which are based on expert knowledge and landslide inventory development (Saha et al., 2002), such as the analytical hierarchy process (AHP) used by many researchers (Komac 2006; Ghosh et al. 2011; Kayastha et al. 2012; Wu et al. 2016; Kumar and Anabalgan 2016; Achour et al. 2017); and (II) quantitative methods, including bivariate and multivariate modeling methods for statistical evaluation of landslide occurrences (Yin and Yan 1988; Kumar et al. 1993; Anbalagan and Singh 1996; Dai and Lee, 2002; Saha et al. 2005; Lee and Sambath 2006; Mathew et al. 2007; Dahal et al. 2008; Singh et al. 2008; Pradhan and Lee 2010; Rozos et al. 2011; Yalcin et al. 2011; Ghosh et al. 2011; Kayastha et al. 2013; Bijukchhen et al. 2013; Anbalagan et al., 2015; Rawat et al. 2015; Sharma and Mahajan 2018; Chen et al. 2016). In India, landslide susceptibility mapping for the Garhwal and Kumaun regions of Uttarakhand has been carried out by Pachauri and Pant 1992; Gupta et al. 1999; Anabalgan et al., 2008; Anbalagan et al., 2015; Kumar and Anabalgan, 2016, whereas Sarkar and Kanungo, 2004; Sarkar et al. 2013; Ghosh et al. 2011 have mapped the landslides of the Darjeeling Himalaya for statistical correlation with the causal factors. For dealing with landslide hazard and the risk it imposes on various elements, it is necessary to evaluate the correlation of probable causative factors with the characteristics of landslide locations.
Qualitative methods such as AHP subjectively help to rank the causal factors, leading to classification of an area based on a priority scale, whereas quantitative methods (bivariate or multivariate statistical analysis) use the observed landslide data for assessing the spatial relationship of the problem with the prevailing geo-environmental parameters (J. Corominas et al. 2014). For generating reliable spatial information regarding a natural hazard, remote sensing data and the geographic information system (GIS) are very powerful tools (Tofani et al. 2013). The application of GIS is useful in processing digital elevation models for extracting information such as slope angle, aspect, drainage network etc. and in integrating the various thematic layers for generating susceptibility, hazard or risk maps. In the state of Himachal Pradesh (H.P.), attempts have been made at landslide susceptibility zoning of landslide prone areas such as district Chamba, Bilaspur and Parwanoo (Sharma and Mehta 2012; Sharma and Kumar 2008), whereas studies related to the use of statistical modeling methods for susceptibility mapping are lacking for important areas of this hilly state such as Kangra Valley, which rests at the Himalayan foothills and experiences a number of landslide episodes in various parts every year. Some parts of district Kangra, Himachal Pradesh, such as the Dharamshala region, are among the fastest growing tourism hubs, and Dharamshala has been announced as one of the smart cities of India. The Dharamshala region is characterized by steeply dipping slopes with a number of drainages cutting across its weak and weathered lithology. District Kangra, H.P. is tectonically very active and has experienced a number of moderate and major earthquakes in the past, such as the 1905 Kangra earthquake (Ms 7.8), which devastated this region badly (Ghosh and Mahajan 2011). Later on, from 1968 to 1986, the Dharamshala region of district Kangra, which is sandwiched between the longitudinal Main Boundary Thrust (MBT) in the north and the Drini Thrust in the south, experienced three moderate earthquakes with magnitudes varying between Ms 4.9 and Ms 5.7 (Kumar and Mahajan 1991; Mahajan and Kumar 1994). The tectonic emplacement and the northward movement of the Indian landmass keep the Dharamshala region of H.P. under continuous stress conditions, making it tectonically and geomorphologically dynamic. Mahajan and Virdi (2000) studied the landslide sites of the Dharamshala region using field based mapping methods and identified 25 major landslides for correlation with factors such as slope angle, relief, drainage network etc. Sharma et al. (2015) documented a major landslide event (Tirah Lines landslide) reported as a result of very high rainfall in August 2013, which destroyed almost 10 multistoried buildings in the army cantonment area of Dharamshala. Looking at the past record of landslide studies and the structural complexity of the Dharamshala region, it becomes important to statistically analyze the factors playing a major role in causing slope instability, in order to minimize their societal impacts by developing landslide hazard or susceptibility zonation maps. This study involves landslide susceptibility mapping (LSM) of the Dharamshala region using the heuristic judgment based analytical hierarchy process (AHP) and the statistical frequency ratio (FR) method, followed by a comparison of the susceptibility maps for their prediction efficiency of future landslide events.
The resulting LSMs have been evaluated by the use of landslide density analysis and error matrix technique in order to check the concordance between the susceptibility class area distributions from heuristic and statistical methods. The evaluation of the LSMs has determined the total agreement and the spatial difference between the maps generated. The results can be useful for landslide risk assessment studies and for planners in implementing developmental projects.
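A minimal sketch of such an error-matrix (cross-tabulation) comparison between two classified susceptibility maps is given below; the small example rasters are invented for illustration only.

```python
import numpy as np

def error_matrix(map_a, map_b, n_classes=5):
    """Cross-tabulate two classified rasters and return (matrix, overall spatial agreement)."""
    a = map_a.ravel() - 1                      # susceptibility classes 1..n -> indices 0..n-1
    b = map_b.ravel() - 1
    matrix = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(matrix, (a, b), 1)               # count pixels per (class in A, class in B) pair
    return matrix, np.trace(matrix) / matrix.sum()

# Tiny invented class rasters standing in for the AHP- and FR-based susceptibility maps.
ahp_map = np.array([[5, 4, 2], [3, 1, 2]])
fr_map  = np.array([[5, 4, 3], [3, 1, 2]])
matrix, agreement = error_matrix(ahp_map, fr_map)
print(agreement)                               # diagonal fraction = pixels put in the same zone
```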
Study area and geological setting
The study area covers a rectangle of 39.7 sq. km (32°12′N–32°15′N and 76°17′E–76°23′E) as shown in Fig. 1, with an elevation between 899 m and 2523 m a.m.s.l. The geomorphology of the study area is dominated by hills and mountains dissected by a number of drainages, which are locally known as khad. The main khads are the Charan khad at the southern edge, the Banoi khad in the middle and the Gaj khad at the northern edge of the study area, which are the main tributaries of the river Beas in district Kangra of Himachal Pradesh. The Dharamshala region comes under the wet temperate zone, with a mean annual temperature of 19 ± 0.5 °C and annual precipitation of 2900 ± 639 mm (Jaswal et al. 2014). Geologically, the southern part of the area falls in the Outer Himalaya comprising the Siwalik (boulder conglomerate exposures), Dharamsala group and Murree formation (sandstone, claystone and mudstone), which is separated by the Main Boundary Thrust (MBT) from the northern part of the study area comprising Lesser Himalayan rocks (Dharamkot Limestone and Chail Formation with low grade metamorphics such as slates), as shown in Fig. 2. Most of the settlements and road excavations are in the Outer Himalayan rocks of the study area, which are weak and have led to many slope instability conditions in the past. Weak lithology such as weathered sandstone and claystone, along with unplanned construction activities or excavation of slopes for development projects and the heavy rainfall in this area, often lead to landslides, especially in the monsoon season. The slopes of the Dharamshala region dip steeply, up to > 41°, with an upper 5 m to 10 m cover of fluvial deposits or debris which is easily prone to sliding under adverse conditions.
Location map of the study area shown as hill shade view with local drainages and some of the major locations
Geological Map of Dharamshala region showing lithology and structure exposed (Source- Mahajan and Virdi 2000)
The present analysis was carried out in three steps: data collection, database generation (thematic maps) and modeling for landslide susceptibility mapping (LSM). Firstly, the study area has been investigated for the prevailing landslide conditions for which a landslide inventory (Fig. 3) was developed through field surveys and available satellite imageries. Thirty nine landslide locations were mapped in the total study area of 39.7 km2. For the correlation of spatial distributions of the prevailing landslide with the chosen eight causal factors, various thematic maps were developed. With help of ASTERGDEM of 30 m resolution (source- USGS website) the drainage buffer, slope angle and the aspect maps have been produced whereas the geology, soil and fault buffer maps were prepared with the help of previous published maps (Mahajan and Virdi, 2000) and the land cover and road buffer maps were extracted with help of Google earth imagery. All the prepared thematic maps were rasterized at grid size of 30 × 30 with total pixel count for the study area 44,165 for using in the GIS based modeling methods (AHP and FR). In the analytical hierarchy process (AHP) method field survey based judgments and the data from previous literature have helped in assigning weightage (heuristic) to the causal factors and the factor classes whereas in the frequency ratio (FR) method, the ratio of landslide percentage in a factor class and the percentage area of that factor class gave the weightages (statistical). Both the modeling methods (analytical hierarchy process and frequency ratio) have resulted in landslide susceptibility index (LSI) maps which were reclassed using fivefold classification for zoning the landslide prone area which is very low susceptibility (VLS), low susceptibility (LS), medium susceptibility (MS), high susceptibility (HS) and very high susceptibility (VHS). Both the landslide susceptibility zonation maps (LSZM) were validated using the landslide density distribution method and the prediction curve success rates. The evaluations of the resulting landslide susceptibility maps are based on spatial area distribution match between the susceptibility classes for which the error matrix method has been used. The evaluations represent the concordance and the disagreement of class area distribution from the use of heuristic judgements and the objective datasets.
Landslide Inventory shown in the hill shade map of the study area prepared using the DEM data
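As a schematic of the weighted-overlay integration and fivefold reclassification steps described above (two tiny arrays stand in for the eight 30 m thematic rasters, and the weights are placeholders rather than the values derived in this study):

```python
import numpy as np

# Toy reclassified factor rasters (class ratings already assigned) and placeholder weights.
lithology = np.array([[3, 3], [1, 2]], dtype=float)
slope     = np.array([[2, 3], [1, 1]], dtype=float)
weights   = {"lithology": 0.35, "slope": 0.10}

# Weighted overlay: LSI = sum over factors of (factor weight x class rating).
lsi = weights["lithology"] * lithology + weights["slope"] * slope

# Fivefold zoning of the LSI raster into classes 1-5 (VLS .. VHS); equal intervals are
# used here, while natural breaks or quantiles are common alternatives.
edges = np.linspace(lsi.min(), lsi.max(), 6)
zones = np.digitize(lsi, edges[1:-1]) + 1
print(lsi)
print(zones)
```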
Analytical hierarchy process (AHP)
Analytical hierarchy process is the decision making for a complex problem by arranging the elements of that problem in a hierarchy. It is a semi-qualitative process in which the weightages to the elements are assigned based on the expert's judgment and the weightage values vary from 1 to 9 (Saaty, 1980, Saaty and Vargas 2001, Saaty, 2005). The standard scale for using AHP method has been given in Table 1, according to which factor classes and the factors are assigned rating with respect to each other. Value 1 is assigned to the class with least influence and value 9 is assigned to the class with maximum influence. After the weightage assignment the factor maps are reclassed and integrated in GIS.
Table 1 Scale for Pairwise comparison
For checking the consistency of the comparison matrix prepared by rating factors and factor classes against one another, the consistency ratio (CR) is used; a CR value below 0.1 is considered acceptable (Ayalew et al. 2004).
$$ \mathrm{CR}=\frac{\mathrm{CI}}{\mathrm{RI}} $$
Where CI is the consistency index calculated as:
$$ \mathrm{CI}=\frac{\lambda_{\max}-n}{n-1} $$
Where $n$ is the order of the matrix and $\lambda_{\max}$ is its largest (principal) eigenvalue.
The random index (RI) is the consistency index of a randomly generated pairwise comparison matrix and depends on the size of the matrix, as given in Table 2.
Table 2 Values of the random index based on the size of the matrix
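The AHP weight derivation and the consistency check lend themselves to a short numerical sketch. The Python/NumPy snippet below is illustrative only and is not the workflow used in the study: the pairwise comparison matrix is hypothetical (the study's matrices are reported in Table 4), the weights are taken as the normalized principal eigenvector, and the RI values are the standard Saaty values of the kind tabulated in Table 2.

```python
import numpy as np

# Hypothetical 3x3 pairwise comparison matrix on Saaty's 1-9 scale
# (the study's actual matrices are given in Table 4).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])

# Priority weights: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency index CI = (lambda_max - n) / (n - 1) and ratio CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}[n]   # standard random index values
CR = CI / RI if RI > 0 else 0.0

print("weights:", np.round(weights, 3))
print("lambda_max = %.3f, CI = %.3f, CR = %.3f" % (lambda_max, CI, CR))
print("acceptable" if CR < 0.1 else "revise judgements")
```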
Frequency ratio (FR)
Frequency ratio modeling is based on the correlation of landslides in an area with the natural and anthropogenic causal factors in that area. Mathematically, it is the ratio of the percentage of landslides (x) in a factor class to the percentage area of that factor class (y) (Lee and Talib 2005; Pradhan 2010). The correlation factor FR, i.e. x/y, can be smaller or larger than 1. If the FR value is > 1, there exists a high correlation between landslide occurrence and the factor class, and if the FR is < 1 the correlation is weak. All the thematic maps are reclassed according to the FR values of each factor class and then integrated in GIS to generate the landslide susceptibility index (LSI) map.
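As a minimal sketch of this computation (illustrative only; the study performed this step in GIS), the function below derives FR values from a rasterized factor-class map and a landslide mask stored as NumPy arrays. The class codes and the random data are hypothetical.

```python
import numpy as np

def frequency_ratio(class_raster, landslide_mask):
    """FR per class = (% of landslide pixels in the class) /
                      (% of map pixels in the class)."""
    fr = {}
    total_pixels = class_raster.size
    total_slides = landslide_mask.sum()
    for c in np.unique(class_raster):
        in_class = (class_raster == c)
        pct_area = in_class.sum() / total_pixels * 100.0
        pct_slides = (landslide_mask & in_class).sum() / total_slides * 100.0
        fr[int(c)] = pct_slides / pct_area if pct_area > 0 else 0.0
    return fr

# Hypothetical 30 m rasters: four lithology classes and a landslide mask.
rng = np.random.default_rng(0)
lithology = rng.integers(1, 5, size=(200, 220))
landslides = rng.random((200, 220)) < 0.03

print(frequency_ratio(lithology, landslides))   # FR > 1 indicates a strong correlation
```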
Landslide inventory
For the preparation of the landslide inventory, field surveys were carried out to record the GPS locations and the nature of the landslides. The vector points of the noted locations were verified using Google Earth imagery and then imported into GIS for applying the heuristic and statistical models. The inventory data were split into training (75%) and testing (25%) groups, as shown in Fig. 3, for use in the modeling and validation phases respectively. Thirty-nine landslides of varying size were demarcated, of which the largest covers an area of 0.103 km2. In total, the landslides cover 1.1 km2, which is 2.7% of the total study area (39.7 km2), and the 75% training subset of the inventory covers 0.81 km2. Table 3 shows the locations and the lithology the landslides belong to. Most of the mapped landslides were activated in the monsoon season (July to September), mostly in the form of debris flows. Some of the landslides show mudflow or earthflow type mass movement, which is due to the low strength of the material and waterlogging during the monsoons. Figure 4 shows some of the past landslides of Dharamshala that have caused notable destruction.
Table 3 Landslide Inventory Locations
Field photographs of few recent landslides in Dharamshala region a Vulnerable slope along Dharamshala - Mecleodganj main road b Chhola landslide along the Charan Khad c Bypass road landslide near Kotwali bazaar d Naddi landslide near Dal lake
There are no set rules for choosing the causal factors in landslide susceptibility mapping; rather, the study area characteristics and data availability guide the choice of thematic layers to be used (Ayalew and Yamagishi 2005). Based on the study area characteristics, the eight parameters discussed below were considered the major causal factors for landslides in the Dharamshala region, and their thematic maps, shown in Fig. 5, were prepared at a grid size of 30 × 30 m with a pixel count of 44,165 in each map for modeling in GIS.
Distance from drainage (Drainage buffer): The Dharamshala region has a dense drainage network as shown in the location map (Fig. 1) and some of the landslides mapped during the field survey were found in the vicinity of local drainages. To find the distribution of landslides with respect to the drainages flowing in the study area, a drainage buffer map or distance from the drainage map was prepared at proximity of 100 m, 500 m and 1000 m.
Distance from fault (Fault buffer): The emplacement of faults in the study area has been found affecting the slope stability as many landslides were found near to the major faults in this region. To find the effect of faults on the mass movement activity a fault buffer map was prepared with three classes showing the proximity of 1000 m, 2000 m and 3000 m.
Distance from road (Road buffer): In hilly areas slope excavation for road widening is a common practice which greatly influences the slope stability and similar has been found for the present study area where landslides associated with the slope excavation are common. Considering it one of the main causal factors a road buffer map was prepared with buffer zones of 200 m, 800 m, 1500 m and 2500 m.
Lithology: Lithology of an area is closely related to landslide occurrence, as the strength of the emplaced lithology influences slope stability. In the Dharamshala region the lithology is grouped into four classes: Dharamsala Group (sandstone, claystone and mudstone), Siwalik Group (boulder conglomerates), Dharamkot Limestone and Chail Formation (schist, quartzite and gneissic rocks). According to the landslide inventory data, all the landslides are located in the weak lithology of the Dharamsala Group of rocks.
Soil: This parameter includes the overlying cover on the lithology which has a varying thickness in the present study area and has been grouped into three classes: debris, clay soil and compact alluvial deposits.
Land cover: The study area has been divided into four classes: forest cover, settlement on low to moderate slopes, sparsely vegetated area and settlement on steep slopes.
Slope angle: The slope map was extracted from the DEM (30 m resolution) and classified into five classes: 0° - 5°, 6° - 15°, 16° - 25°, 26° - 35° and ≥ 35°. These classes represent the slope inclinations throughout the Dharamshala area.
Aspect: After the slope extraction slope aspects were extracted using GIS tool and was grouped into nine classes: flat (− 1), N (0° – 22.5° and 337.5° - 360°), NE (22.5° – 67.5°), E (67.5° -112.5°), SE (112.5° – 157.5°), S (157.5° – 202.5°), SW (202.5° – 247.5°), W (247.5° – 292.5°) and NW (292.5° – 337.5°).
Maps of the chosen causal factors (1) Lithology (2) Land cover (3) Aspect (4) Slope (5) Fault buffer (6) Road buffer (7) Drainage buffer (8) Soil type
As described in the section 2.1 and 2.2 analytical hierarchy process (AHP) and the frequency ratio (FR) methods were applied on the causal factors and the factor classes for assigning weightages of influence and the frequency ratio for finding correlation with the prevailing landslide conditions of the study area. The results have been presented in Table 4 (AHP) and Table 5 (FR) of section 3 respectively.
Table 4 Comparison matrix of factor classes and the factors based on analytical hierarchy process (AHP)
Table 5 Frequency ratio values for the factor classes
In order to combine all the factor maps reclassed with their weightage values from AHP and their frequency ratio (FR) values, the map algebra tool was used, which resulted in the two landslide susceptibility index (LSI) maps from the two models. The results of the AHP comparison matrix in Table 4 show that the maximum factor weightages result from lithology and soil, i.e. 0.35 and 0.25 respectively, followed by the weightages of land cover (0.14), drainage (0.07) and slope (0.06), whereas factors such as road, fault and aspect show little influence on landslide occurrence. Figure 6b shows the landslide susceptibility zonation (LSZ) map resulting from the heuristic analytical hierarchy process (AHP) method, which has been classified using a fivefold classification: very low, low, medium, high and very high. Table 5, showing the frequency ratios of the factor classes, indicates a strong correlation of the Dharamsala Group of rocks (FR value 1.28) with the landslides, which agrees with the results from the AHP method, where within the lithology factor the maximum weightage was given to the Dharamsala Group, i.e. 0.71. Among the land cover classes, the settlements on steeply angled slopes have the maximum FR value of 6.51, indicating a major concentration of landslide sites in this class: 32.7% of the landslide area falls in this class alone, although it covers only 5.04% of the total map area, which points to the impact of anthropogenic activities on and near steeply sloping areas. Among the classes of the slope factor, the classes with 16°–25° and 26°–35° slope angles (moderate and steep slopes) show the maximum FR values of 1.48 and 1.35 respectively and collectively include more than 60% of the landslide area. Among the aspects, the north-east class with 18.84% of the landslide area and the north aspect with 8.89% of the landslide area show the maximum FR values of 1.34 and 1.10, which indicates greater exploitation of north-facing slopes or high subsurface moisture conditions due to the lower sun exposure of northern slopes, which makes them unstable. Nonetheless, the south-facing slopes (SW aspect with 14.76% of the landslide area) also show a high FR value, but the maximum FR values for the northern slopes are an interesting indicator of high anthropogenic interference. The debris soil hosts all the inventory landslides (100%) and gives the maximum correlation value of 1.33, which is indicative of the shallow nature of most mass movements here on the steep slopes. The debris layer comprises the weathered lithology of the Dharamsala Group (mudstone, sandstone, claystone) and the overlying fluvial deposits. The drainages show a higher correlation (FR = 1.01) at 500 m proximity with 62.96% of the landslide area, whereas for the faults the proximities of 1000 m, with 80.17% of the landslide area, and 2000 m appear critical in terms of FR values of 1.16 and 0.75 respectively. The road buffer factor shows high landslide activity within 200 m proximity (FR = 1.51) with more than 50% of the landslide area, which indicates the direct impact of slope excavation for road widening in hilly areas. The resulting Fig. 6a shows the landslide susceptibility zonation (LSZ) map from the statistical frequency ratio (FR) method with the classes: very low susceptibility (VLS), low susceptibility (LS), medium susceptibility (MS), high susceptibility (HS) and very high susceptibility (VHS).
Therefore, a fivefold classification scheme was followed based on the natural breaks classifier option in GIS, which maximizes the variance between the susceptibility classes and represents a clear trend in the distribution of class index values. The classification of the resulting landslide susceptibility index maps was carried out in such a way that 20% of the LSZ map area from AHP and FR includes 97% and 76% of the landslides respectively. Table 6 shows the distribution of landslides among the susceptibility classes from the heuristic and statistical methods applied: 18.48 km2 falls in the high susceptibility class and 4.31 km2 in the very high susceptibility class using the AHP method, whereas using the statistical method (FR) 9.3 km2 and 13.07 km2 fall under the high and very high susceptibility classes respectively.
Landslide susceptibility zonation maps overlain with the mapped landslides in the study area: a LSM from Frequency Ratio model (b) LSM from Analytical Hierarchy Process model
Table 6 Shows the landslide area along with the landslide density distribution in the susceptibility classes of LSZ maps
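A simplified sketch of the map-algebra and zoning steps described above is given below, assuming each causal-factor raster has already been reclassed to its class weight (AHP) or FR value. All arrays and the factor weightages are illustrative, and quantile breaks stand in here for the natural-breaks classifier available in GIS software.

```python
import numpy as np

# Hypothetical stack of already-reclassed factor rasters (pixel values are the
# class weights from AHP, or the FR values in the statistical variant).
rng = np.random.default_rng(1)
factors = {name: rng.random((200, 220))
           for name in ["lithology", "soil", "landcover", "drainage",
                        "slope", "road", "fault", "aspect"]}

# Illustrative AHP factor weightages (the study's values are given in Table 4).
w = {"lithology": 0.35, "soil": 0.25, "landcover": 0.14, "drainage": 0.07,
     "slope": 0.06, "road": 0.05, "fault": 0.04, "aspect": 0.04}

# AHP-style LSI = weighted sum of the reclassed rasters; the FR-style LSI is
# simply the sum of the FR-reclassed rasters.
lsi = sum(w[name] * factors[name] for name in factors)

# Five susceptibility zones (1 = VLS ... 5 = VHS). Quantile breaks are used
# here only as a stand-in for the natural-breaks classifier used in GIS.
breaks = np.quantile(lsi, [0.2, 0.4, 0.6, 0.8])
lsz = np.digitize(lsi, breaks) + 1
print(np.bincount(lsz.ravel())[1:])   # pixel count per susceptibility zone
```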
Comparative assessments of LSZ maps from AHP and FR
For checking the reliability of the LSZ maps and comparing their performance for the spatial prediction of future landslides, various techniques have been proposed, such as agreed area analysis, prediction rate curves and landslide density distribution (Kayastha et al. 2013; Gupta et al. 2008). In the present study, the landslide density in the susceptibility zones, prediction rate curves and the error matrix method have been used for the assessment and evaluation of the LSZ maps (heuristic and statistical) with respect to each other. Table 6 shows the landslide density distribution among the susceptibility classes, computed as the ratio of the landslide area in a susceptibility class to the area of that susceptibility class. The density should increase from the low to the very high susceptibility class (Gupta et al. 2008), which holds for the present study. For the AHP method the high (HS) and very high susceptibility (VHS) classes have density values of 0.022 and 0.095 respectively, whereas for the FR method the HS and VHS classes have density values of 0.004 and 0.059 respectively. Therefore, the susceptibility of the various zones in both maps matches the inventory data distribution noted from the field studies, and the two LSZ maps show a reliable similarity with varying values of landslide density distribution.
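The landslide density computation is straightforward to express in code. The sketch below is illustrative only (hypothetical rasters, 30 m cells) rather than the GIS procedure actually used in the study.

```python
import numpy as np

CELL_KM2 = (30 * 30) / 1e6   # area of one 30 m x 30 m pixel in km^2

def landslide_density(lsz, landslide_mask):
    """Density per zone = landslide area in the zone / total area of the zone."""
    out = {}
    for zone in np.unique(lsz):
        in_zone = (lsz == zone)
        zone_km2 = in_zone.sum() * CELL_KM2
        slide_km2 = (landslide_mask & in_zone).sum() * CELL_KM2
        out[int(zone)] = slide_km2 / zone_km2 if zone_km2 > 0 else 0.0
    return out

# Hypothetical zonation map (1 = VLS ... 5 = VHS) and landslide mask.
rng = np.random.default_rng(2)
lsz = rng.integers(1, 6, size=(200, 220))
slides = rng.random((200, 220)) < 0.03
print(landslide_density(lsz, slides))   # for real data the values should rise from VLS to VHS
```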
The validation of the susceptibility maps from the AHP and FR techniques was carried out using the prediction rate curve, which computes the cumulative percentage of landslide occurrences (testing data) in each susceptibility zonation map (Sarkar et al. 2013), as shown in Fig. 7. The prediction curves were analyzed using area under the curve (AUC) values, which indicate the model fitness for landslide prediction: a value below 0.5 indicates a low accuracy level, whereas a value between 0.5 and 1 indicates higher accuracy of the model used. In this study both models, heuristic (AHP) and statistical (FR), gave AUC values above 0.5: the AHP method gave 76.77% (0.76) AUC and the FR method 73.38% (0.73) AUC. These results show that both methods give a good prediction rate for estimating future landslide probabilities spatially.
Graph representing prediction rate curves of statistical model FR (Red trend line) and AHP (Blue trend line) for interpretation of model fitness for landslide susceptibility mapping and their respective AUC values
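A sketch of how the prediction rate curve and its AUC described above can be computed from an LSI raster and the testing landslides follows; the arrays are hypothetical and the result is only meant to show the mechanics, not to reproduce the study's values.

```python
import numpy as np

def prediction_rate_curve(lsi, landslide_mask):
    """Cumulative fraction of landslide pixels captured versus the fraction of
    map area classified, ranking pixels from highest to lowest LSI."""
    order = np.argsort(lsi.ravel())[::-1]             # most susceptible pixels first
    slides = landslide_mask.ravel()[order]
    y = np.cumsum(slides) / slides.sum()              # cumulative fraction of landslides
    x = np.arange(1, slides.size + 1) / slides.size   # cumulative fraction of map area
    auc = np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))   # trapezoid rule
    return x, y, auc

# Hypothetical LSI raster and testing landslide mask.
rng = np.random.default_rng(3)
lsi = rng.random((200, 220))
test_slides = rng.random((200, 220)) < 0.03
x, y, auc = prediction_rate_curve(lsi, test_slides)
print("AUC = %.2f (about 0.5 here because the example data are random)" % auc)
```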
Evaluation of susceptibility zonation maps
Both comparison methods, landslide density distribution and the prediction rate curve, show that the AHP and FR techniques give positive results, but there exist differences between the two LSZ maps generated, i.e. the area of each susceptibility class varies between the AHP and FR maps. The spatial differences between the susceptibility classes can help to evaluate the LSMs and show how the choice of subjective and objective judgements in the heuristic and statistical methods, respectively, influences the results. To analyze the spatial difference among the landslide susceptibility classes, an error matrix method was used (Gupta et al. 2008; Kayastha et al. 2012), which is presented in Table 7. Using the combination of the AHP and FR maps, the error matrix was tabulated, showing a high degree of match between the areas of the VLS, LS and MS zones of the two LSZ maps, with a similarity of 16.95 km2 in total, which constitutes 42.6% of the total map area. For the high susceptibility (HS) and very high susceptibility (VHS) zones the difference in areas is larger, i.e. for AHP 4.32 km2 is covered by the VHS zone whereas for the FR map 13.07 km2 is covered by the VHS zone; nevertheless, in total more than 55% of the susceptibility classes of the two maps show spatial agreement. These differences in the areas of the HS and VHS classes can be attributed to the difference between the methods used for susceptibility mapping: in the AHP method a subjective judgment approach was used for determining the factor weightages, whereas in the FR method a bivariate statistical approach was used to compute the weight of each class separately. This evaluation has also helped to analyze the agreement of the area distribution (pixels or km2) in the resulting LSMs, which ascertains the consistency of the causal factors used in the study, whereas the disagreement of the area distribution reflects the difference between the techniques used. Nonetheless, 100% of the observed landslide area falls in the high and very high susceptibility classes, which shows the good prediction rate of both LSZ maps.
Table 7 Shows the error matrix for computing spatially agreed area between the landslide susceptibility classes in AHP and FR LSZ maps
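The error-matrix comparison above amounts to a cross-tabulation of the two zonation rasters. A minimal sketch follows, with hypothetical class maps (codes 1 to 5 for VLS to VHS) and 30 m cells; it is not the GIS workflow used in the study.

```python
import numpy as np

CELL_KM2 = (30 * 30) / 1e6   # area of one 30 m x 30 m pixel in km^2

def error_matrix(lsz_a, lsz_b, n_classes=5):
    """Cross-tabulate two susceptibility maps; the diagonal holds the
    spatially agreed area (km^2) between corresponding classes."""
    m = np.zeros((n_classes, n_classes))
    for i in range(1, n_classes + 1):
        for j in range(1, n_classes + 1):
            m[i - 1, j - 1] = np.sum((lsz_a == i) & (lsz_b == j)) * CELL_KM2
    agreed_pct = np.trace(m) / m.sum() * 100.0
    return m, agreed_pct

# Hypothetical AHP and FR zonation rasters (1 = VLS ... 5 = VHS).
rng = np.random.default_rng(4)
lsz_ahp = rng.integers(1, 6, size=(200, 220))
lsz_fr = rng.integers(1, 6, size=(200, 220))
m, agreed_pct = error_matrix(lsz_ahp, lsz_fr)
print(np.round(m, 2))
print("spatially agreed area: %.1f%%" % agreed_pct)
```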
The findings in this study point out the following conclusions:
This work presents a comparative study of GIS-based heuristic and statistical models for landslide susceptibility zonation of the Dharamshala region of Himachal Pradesh, India. The lithology and land cover factors show the maximum contribution toward landslide occurrence based on the weightage values computed using the AHP and FR models. Anthropogenic interference in this hilly terrain has had a huge impact on the slopes, and the condition is worsened because the internal properties of the lithology and the overlying debris material are weak, due to which slope instability is triggered. Most landslide locations were mapped in close proximity to the roads and the local drainages.
The landslide susceptibility zonation maps from both methods have been classified into five zones: very low susceptibility (VLS), low susceptibility (LS), medium susceptibility (MS), high susceptibility (HS) and very high susceptibility (VHS). Both LSZ maps show good model fitness for predicting future landslide locations based on the prediction rate curve method. The landslide density distribution increases from the low to the very high susceptibility class in both LSZ maps, which represents an agreement with the field conditions of the study area. These results indicate a statistical similarity between the two resultant susceptibility maps.
The LSMs prepared have been evaluated to check the consistency of the area distribution among the susceptibility classes from the AHP and FR techniques. The evaluation of the susceptibility maps was based on the error matrix method, which revealed the differences and similarities of the area (km2) assigned to each susceptibility zone. The results show good consistency in the spatial area distribution of the very low, low and medium susceptibility classes of the LSZ maps, which account for 42.6% of the susceptibility map areas. For the high and very high susceptibility classes the spatial area distribution in the two LSZ maps varies to some extent, but this difference is mitigated because both classes (HS and VHS) include 100% of the landslide-affected area in each resulting LSZ map. The spatial difference of the susceptibility classes can be attributed to the difference in procedure [subjective (AHP) versus objective (FR)] for weighting the factors and their classes, whereas the spatial similarity of the susceptibility zones can be attributed to the use of the same causal factors and landslide inventory data in both modeling methods.
The results of the final map evaluations indicate that 100% of the landslide data falls in the high susceptibility (HS) and very high susceptibility (VHS) classes, and the spatial agreement between the two resultant maps as evaluated by the error matrix method (Table 7) is more than 70%. Therefore, the landslide susceptibility maps generated can prove reliable and helpful for landslide risk assessment in the Dharamshala region and can guide planners in implementing developmental projects at safer locations.
Abbreviations
Asia-Pac: Asia-Pacific
Comput Geosci: Computers and Geosciences
Comput Intel Sys.: International Journal of Computational Intelligence Systems
Curr. Sci.: Current Science
Eng. Geol.: Engineering Geology
Environ: Environment
Geoenviron: Geoenvironmental
Geol. Soc.: Geological Society
Geophys.: Geophysics
Int J Appl Obs Geoinf.: International Journal of Applied Earth Observation and Geoinformation
J Geosci: Journal of Geosciences
J Sci Ind Res: Journal of Scientific and Industrial Research
Jour. Him. Geol.: Journal of Himalayan Geology
Jour.: Journal
Mt Res Dev: Mountain Research and Development
Nat: Natural
rem sens: Remote Sensing
Sci.: Science
Spat. Inf. Res.: Spatial Information Research
Theor. Appl. Climatol: Theoretical and Applied Climatology
Achour, Y., A. Boumezbeur, R. Hadji, A. Chouabbi, V. Cavaleiro, and E.A. Bendaoud. 2017. Landslide susceptibility mapping using analytic hierarchy process and information value methods along a highway road section in Constantine, Algeria. Arabian Journal of Geosciences 10: 194. https://doi.org/10.1007/s12517-017-2980-6.
Anabalgan, R., D. Chakraborty, and A. Kohli. 2008. Landslide hazard zonation mapping on meso scale for systematic planning in mountainous terrain. Journal of Scientific and Industrial Research 67: 486–497.
Anbalagan, R. 1992. Landslide hazard evaluation and zonation mapping in mountainous terrain. Engineering Geology 32 (4): 269–277.
Anbalagan, R., R. Kumar, K. Lakshmanan, S. Parida, and S. Neethu. 2015. Landslide hazard zonation mapping using frequency ratio and fuzzy logic approach, a case study of Lachung Valley, Sikkim. Geoenvironmental Disasters 2 (1): 1–17. https://doi.org/10.1186/s40677-014-0009-y.
Anbalagan, R., and B. Singh. 1996. Landslide hazard and risk assessment mapping of mountainous terrains—A case study from Kumaun Himalaya, India. Engineering Geology 43 (4): 237–246.
Ayalew, L., and H. Yamagishi. 2005. The application of GIS based logistic regression for landslide susceptibility mapping in the Kakuda-Yahiko mountains Central Japan. Geomorphology 65 (1): 15–31. https://doi.org/10.1016/j.geomorph.2004.06.010.
Ayalew, L., H. Yamagishi, and N. Ugawa. 2004. Landslide susceptibility mapping using GIS based weighted linear combination, the case in Tsugawa area of Agano river, Niigata Perfecture, Japan. Landslides 1: 73–81.
Bijukchhen, P., P. Kayastha, and M.R. Dhital. 2013. A comparative evaluation of heuristic and bivariate statistical modelling for landslide susceptibility mappings in Ghumri-Dhad Khola, East Nepal. Arabian Journal of Geosciences 6: 2727–2743. https://doi.org/10.1007/s12517-0569-7.
Chauhan, S., M. Sharma, and M.K. Arora. 2010. Landslide susceptibility zonation of the Chamoli region, Gharwal Himalaya, using logistic regression model. Landslides 7: 411–423.
Chen, T., R. Niu, and X. Jia. 2016. A comparison of information value and logistic regression models in landslide susceptibility mapping by using GIS. Environment and Earth Science 75: 867. https://doi.org/10.1007/s12665-016-5317-y.
Corominas, J., C. Van Westen, P. Frattini, L. Cascini, J.P. Malet, S. Fotopoulou, and K. Pitilakis. 2014. Recommendations for the quantitative analysis of landslide risk. Bulletin of Engineering Geology and the Environment 73 (2): 209–263.
Cruden, D.M. 1991. A simple definition of a landslide. Bulletin of Engineering Geology and the Environment 43 (1): 27–29.
Dahal, R.K., S. Hasegawa, S. Nonomura, M. Yamanaka, S. Dhakal, and P. Paudyal. 2008. Predictive modelling of rainfall induced landslide hazard in the lesser Himalaya of Nepal based on weights of evidence. Geomorphology 102: 496–510.
Dai, F., and C.F. Lee. 2002. Landslide characteristics and slope instability modelling using GIS, Lantau Island, Hong Kong. Geomorphology 42: 213–238.
Fell, R., J. Corominas, C. Bonnard, L. Cascini, E. Leroi, and W.Z. Savage. 2008. Guidelines for landslide susceptibility, hazard and risk zoning for land-use planning. Engineering Geology 102 (3): 99–111. https://doi.org/10.1016/j.enggeo.2008.03.014.
Ghosh, G.K., and A.K. Mahajan. 2011. Interpretation of intensity attenuation relation in 1905 Kangra earthquake with epicentral distance and magnitude in the northwest Himalayan region. Journal of the Geological Society of India 77: 511–520.
Ghosh, S., E.J.M. Carranza, C.J. Van Westen, V. Jetten, and D.N. Bhattacharya. 2011. Selecting and weighting spatial predictors for empirical modeling of landslide susceptibility in the Darjeeling Himalaya (India). Geomorphology 131: 35–56.
Guha-Sapir, D., F. Vos, R. Below, and S. Ponserre. 2012. Annual disaster statistical review 2011: The numbers and trends. Centre for Research on the Epidemiology of Disasters (CRED).
Gupta, R.P., D.P. Kanungo, M.K. Arora, and S. Sarkar. 2008. Approaches for comparative evaluation of raster GIS-based landslide susceptibility zonation maps. International Journal of Applied Earth Observation and Geoinformation 10: 330–341. https://doi.org/10.1016/j.jag.2008.01.003.
Gupta, R.P., A.K. Saha, M.K. Arora, and A. Kumar. 1999. Landslide hazard zonation in a part of the Bhagirathi Valley. Garhwal Himalayas, using integrated remote sensing–GIS. Himalayan Geology 20: 71–85.
Guzzetti, F., A. Carrara, M. Cardinali, and P. Reichenbach. 1999. Landslide hazard evaluation: A review of current techniques and their application in a multi-scale study, Central Italy. Geomorphology 31 (1): 181–216.
Jaswal, A.K., N. Kumar, and P. Khare. 2014. Climate variability in Dharamsala-a hill station in western Himalayas. Journal of Indian Geophysical Union 18 (3): 336–355.
Kaur, H., S. Gupta, and S. Parkash. 2017. Comparative evaluation of various approaches for landslide hazard zoning: A critical review in Indian perspectives. Spatial Information Research 25 (3): 389–398.
Kayastha, P., M. Dhital, and F. De Smedt. 2012. Landslide susceptibility using the weight of evidence method in the Tinau watershed, Nepal. Natural Hazards 63: 479–498.
Kayastha, P., M.R. Dhital, and F. De Smedt. 2013. Application of the analytical hierarchy process (AHP) for landslide susceptibility mapping: A case study from Tinau watershed, west Nepal. Computers & Geosciences 52: 398–408. https://doi.org/10.1016/j.cageo.2012.11.003.
Komac, M. 2006. A landslide susceptibility model using the analytical hierarchy process method and multivariate statistics in perialpine Slovenia. Geomorphology 74: 17–28. https://doi.org/10.1016/j.geomorph.2005.07.005.
Kumar, K.V., R.R. Nair, and R.C. Lakhera. 1993. Digital image enhancement for delineating active landslide areas. Asia-Pac Remote Sensing Journal 6 (1): 63–66.
Kumar, R., and R. Anabalgan. 2016. Landslide susceptibility mapping using analytical hierarchy process (AHP) in Tehri reservoir rim region, Uttarakhand. Journal of the Geological Society of India 87: 271–286.
Kumar, S., and A.K. Mahajan. 1991. Dharamsala seismotectonic zone–Neotectonic and state of stress in the area. Journal of Himalayan Geology 21: 53–57.
Lee, S., and T. Sambath. 2006. Landslide susceptibility mapping in the Damrei Romel area, Cambodia using frequency ratio and logistic regression models. Environmental Geology 50 (6): 847–855.
Lee, S., and J.A. Talib. 2005. Probabilistic landslide susceptibility and factor effect analysis. Environmental Geology 47: 982–990. https://doi.org/10.1007/s00254-005-1228-z.
Mahajan, A.K., and S. Kumar. 1994. Linear features registered on the landset imagery and seismic activity in Dharamsala Palampur region (NW Himalayas). Geofizika 11 (1): 15–25.
Mahajan, A.K., and N.S. Virdi. 2000. Preparation of landslides hazard zonation map of Dharamshala town & adjoining areas. District Kangra (H.P.): technical report, 45. Dehradun: Wadia institute of Himalayan Geology. ref No. Endst/281/MA dt 27/2/99..
Mathew, J., V.K. Jha, and G.S. Rawat. 2007. Weights of evidence modelling for landslide hazard zonation mapping in part of Bhagirathi valley, Uttarakhand. Current Science 92 (5): 628–638.
Onagh, M., V.K. Kumra, and P.K. Rai. 2012. Landslide susceptibility mapping in a part of Uttarkashi district (India) by multiple linear regression method. International Journal of Geology, Earth and Environmental Sciences 4 (2): 102–120.
Pachauri, A.K., and M. Pant. 1992. Landslide hazard mapping based on the geological attributes. Engineering Geology 32: 81–100.
Pradhan, B. 2010. Application of an advanced fuzzy logic model for landslide susceptibility analysis. International Journal of Computational Intelligence Systems 3 (3): 370–381.
Pham, B.T., B. Pradhan, D.T. Bui, I. Prakash, and M.B. Dholakia. 2016. A comparative study of different machine learning methods for landslide susceptibility assessment: A case study of Uttarakhand area (India). Environmental Modelling & Software 84: 240–250. https://doi.org/10.1016/j.envsoft.2016.07.005.
Pradhan, B., and S. Lee. 2010. Regional landslide susceptibility analysis using back-propagation neural network model at Cameron highland, Malaysia. Landslides 7 (1): 13–30.
Rai, P.K., K. Mohan, and V.K. Kumra. 2014. Landslide hazard and its mapping using remote sensing and GIS. Journal of Scientific Research 58: 1–13.
Rawat, M.S., D.P. Uniyal, R. Dobhal, V. Joshi, B.S. Rawat, A. Bartwal, and A. Aswal. 2015. Study of landslide hazard zonation in Mandakini Valley, Rudraprayag district, Uttarakhand using remote sensing and GIS. Current Science 109 (1): 158–170.
Rozos, D., G.D. Bathrellos, and H.D. Skilodimou. 2011. Comparison of the implementation of rock engineering system and analytical hierarchy process methods, based on landslide susceptibility maps, compiled in GIS environment. A case study from eastern Achaia County of Peloponnesus, Greece. Environment and Earth Science 63 (1): 49–63.
Saaty, T.L. 1980. The analytic hierarchy process: Planning, priority setting, resource allocation, 287. New York: McGraw-Hill International Book Company.
Saaty, T.L. 2005. Theory and application of the analytic network process. Pittsburg: RWS.
Saaty, T.L., and L.G. Vargas. 2001. Models, methods, concepts and applications of the analytic hierarchy process, 333. Boston: Kluwer.
Saha, A.K., R.P. Gupta, and M.K. Arora. 2002. GIS-based landslide hazard zonation in the Bhagirathi (ganga) valley, Himalayas. International Journal of remote sensing 23 (2): 357–369.
Saha, A.K., R.P. Gupta, I. Sarkar, M.K. Arora, and E. Csaplovics. 2005. An approach for GIS based statistical landslide zonation with a case study in the Himalaya. Landslides 2: 61–69.
Sarkar, S., and D.P. Kanungo. 2004. An integrated approach for landslide susceptibility mapping using remote sensing and GIS. Photogrammetric Engineering and Remote Sensing 70 (5): 617–625.
Sarkar, S., D.P. Kanungo, and G.S. Mehrotra. 1995. Landslide hazard zonation: A case study of Gharwal Himalaya, India. Mountain Research and Development 15 (4): 301–309.
Sarkar, S., A. Roy, and T.R. Martha. 2013. Landslide susceptibility assessment using information value method in parts of the Darjeeling Himalayas. Journal of the Geological Society of India 82 (4): 351–362.
Sharma, M., and R. Kumar. 2008. GIS based landslide hazard zonation: A case study from the Parwanoo area, lesser and outer Himalaya, H.P., India. Bulletin of Engineering Geology and the Environment 67: 129–137.
Sharma, S., and A.K. Mahajan. 2018. A comparative assessment of information value, frequency ratio and analytical hierarchy process models for landslide susceptibility mapping of a Himalayan watershed, India. Bulletin of Engineering Geology and the Environment. 1–18. https://doi.org/10.1007/s10064-018-1259-9.
Sharma, R., U.K. Sharma, and A.K. Mahajan. 2015. Rainfall and anthropologically accelerated mass movement in the outer Himalaya, north of Dharamshala town, Kangra district, Himachal Pradesh: A cause of concern. Journal of the Geological Society of India 86 (5): 563–569.
Sharma, R.K., and B.S. Mehta. 2012. Macro-zonation of landslide susceptibility in Garamaura – Swarghat - Gambhar section of national highway-21, Bilaspur district, Himachal Pradesh (India). Natural Hazards 60: 671–688. https://doi.org/10.1007/s11069-011-0041-0.
Singh, T.N., A. Gulati, I.K. Dontha, and V. Bhardwaj. 2008. Evaluating cut slope failure by numerical analysis- a case study. Natural Hazards 47: 263–279.
Tofani, V., F. Raspini, F. Catani, and N. Casagli. 2013. Persistent Scatterer interferometry (PSI) technique for landslide characterization and monitoring. Remote Sensing 5 (3): 1045–1065.
Van Westen, C.J., E. Castellanos, and S.L. Kuriakose. 2008. Spatial data for landslide susceptibility, hazard and vulnerability assessment: An overview. Engineering Geology 102: 3–4.
Wu, Y.L., W.P. Li, P. Liu, H.Y. Bai, Q.Q. Wang, J.H. He, Y. Liu, and S.S. Sun. 2016. Application of analytic hierarchy process model for landslide susceptibility mapping in the Gangu County, Gansu Province, China. Environment and Earth Science 75: 1–11.
Yalcin, A., S. Reis, A.C. Aydinoglu, and T. Yomralioglu. 2011. A GIS based comparative study of frequency ratio, analytical hierarchy process, bivariate statistics and logistic regression methods for landslide susceptibility mapping in Trabzon, NE Turkey. Catena 85 (3): 274–287. https://doi.org/10.1016/j.catena.2011.01.014.
Yin, K.L., and T.Z. Yan. 1988. Statistical prediction model for slope instability of metamorphosed rocks. In proceedings of the 5th international symposium on landslides, Lausanne, Switzerland. Vol. 2, 1269–1272. The Netherlands: AA Balkema Rotterdam.
The authors, Prof. A.K. Mahajan and Mrs. Swati Sharma, are thankful to the Department of Science and Technology (DST) for all the research facilities provided under project no. NRDMS/11/3023/013(G) for carrying out the studies. The Department of Earth and Environmental Sciences, Central University of Himachal Pradesh, is also acknowledged for providing all the resources.
Project no. NRDMS/11/3023/013(G) funded by the Department of Science and Technology (DST), India.
The entire dataset prepared for this work is presented in the main manuscript.
Department of Environment Science, School of Earth and Environmental Sciences, Central University of Himachal Pradesh, Shahpur, HP, 176206, India
Swati Sharma & Ambrish Kumar Mahajan
Wadia Institute of Himalayan Geology, Dehradun, India
Ambrish Kumar Mahajan
SS has carried out the field investigations and preparation of the thematic maps with AKM for developing the landslide inventory. AKM has helped to conceptualize the methodology and SS has drafted the entire manuscript. Both the authors have read and approved the manuscript.
Correspondence to Swati Sharma.
Sharma, S., Mahajan, A.K. Comparative evaluation of GIS-based landslide susceptibility mapping using statistical and heuristic approach for Dharamshala region of Kangra Valley, India. Geoenviron Disasters 5, 4 (2018) doi:10.1186/s40677-018-0097-1
Landslide susceptibility mapping
Heuristic and statistical model
Map evaluations | CommonCrawl |
Quantum Computation
Jaden Pieper and Manuel E. Lladser (2018), Scholarpedia, 13(2):52499. doi:10.4249/scholarpedia.52499 revision #186567
Curator: Manuel E. Lladser
Mr. Jaden Pieper, University of Colorado, Boulder, CO, USA
Prof. Manuel E. Lladser, Department of Applied Mathematics, University of Colorado, Boulder, Colorado, USA
A quantum computer is a machine that exploits quantum phenomena to store information and perform computations.
The chief goal of this article is to provide a brief but comprehensive introduction to quantum computing. It overviews some mathematical underpinnings of quantum computation for readers with only a basic knowledge of linear algebra and probability. However, it does not attempt to be exhaustive nor make an exposition of quantum physics. It further does not attempt to stay current with physical implementations of quantum computers.
After providing a brief historical background, the article introduces the notion of a quantum bit (qubit) and the linear operators (gates) that act on these. It then addresses systems of multiple qubits and their corresponding gates. Along the way, the article covers the essential concepts of separable systems, as well as quantum interference and decoherence. It also describes how to represent gates as quantum circuits. To conclude, it explains the underpinnings of two simple but insightful quantum algorithms. A final section suggests further readings for those who wish to delve deeper into quantum computing.
During a lecture in the early 1980s, Richard Feynman proposed the concept of simulating physics with a quantum computer (Feynman 1982). He postulated that by manipulating the properties of quantum mechanics and quantum particles one could develop an entirely new kind of computer, one that could not be described by the classical theory of computation with Turing machines. Nature does not explicitly perform the calculations to determine the speed of a ball dropped from a tall building; it does so implicitly. Extending this line of thinking, Feynman wondered if one could harness the complex calculations nature performs intrinsically in quantum mechanics to design a computer with more computational power.
New results about the advantages of quantum computers began to filter in soon after Feynman's lecture. By 1985, Deutsch had developed the concept of the Quantum Turing machine, which formalized the theory of quantum computation (Deutsch 1985), and introduced a first toy problem that a quantum computer could in principle solve faster than any known classical algorithm. By 1992, Deutsch and Jozsa proposed a generalized algorithm for this problem (Deutsch and Jozsa 1992) that further demonstrated the potential speed increases quantum computers could provide. Soon, algorithms were being developed for searching (Grover 1996) and integer factorization. In fact, with the exponential speed-up in integer factorization shown by Shor (Shor 1997), and its implications for cryptography and security, the theory of quantum computation had shown its true potential. Research into the development of physical quantum computers began in earnest.
Recently there have been many advancements in physical implementations of the Deutsch-Josza algorithm (Linden et al. 1998), (Gulde et al. 2003), quantum search (Brickman et al. 2005), quantum integer factorization (Vandersypen et al. 2001), (Monz et al. 2016), quantum Fourier Transform (Chiaverini et al. 2005), and programmable quantum computers (Debnath et al. 2016), (Koch et al. 2007). Further, with IBM's Quantum Experience, a programmable quantum computer is now available to the general public.
The Qubit
In classical computing, the unit of information is called a bit. A bit can be in states $0$ or $1$ and, at any moment in time, it is in either of these two states. Likewise, a quantum bit, also called qubit, must be in either of these states but only upon measurement.
The critical distinction between qubits and the classical bits, and what gives quantum computing its power, is that a qubit may be in a superposition of the states $0$ and $1$ before being measured. In a sense—before measurement—the qubit may have simultaneously the values $0$ and $1$. Only when it is measured, it adopts, or as it is more commonly described, collapses to one of these two values. The so-called ket notation is frequently used to describe this bizarre behavior of qubits. A ket is most generally an element of a Hilbert space but, in the case of a system of $n$ qubits, it is a unit vector with $2^n$ complex entries.
In what follows $(a_0,a_1)$, $(a_{00},a_{01},a_{10},a_{11})$, etc. denote column vectors with complex entries.
A qubit $\left|\psi\right\rangle$, which is read as "ket psi," can have different probabilities of being measured in state $0$ or state $1$. For this reason, it is described by a 2-dimensional complex vector, say $\left|\psi\right\rangle=(a_0, a_1)$. The entries in this vector are called amplitudes. By definition, the likelihood of $\left|\psi\right\rangle$ being measured in state $0$ is $|a_0|^2$. Similarly, the likelihood of $\left|\psi\right\rangle$ being measured in state $1$ is $|a_1|^2$. Since a qubit must collapse to one of these two states upon measurement, $|a_0|^2 + |a_1|^2 = 1$. Equivalently, $\left|\psi\right\rangle$ must be a unit vector with respect to the Euclidean norm. We note that quantum mechanics is inherently linear and stochastic; in particular, requiring that kets are unit vectors is just a way to normalize vectors so as to have a probabilistic interpretation.
By convention, $\left|0\right\rangle:=(1,0)$ and $\left|1\right\rangle:=(0,1)$. These are called the measurement or pure states of a single qubit. This is because the qubit $\left|0\right\rangle$ will be measured in state $0$ with probability one. Similarly, $\left|1\right\rangle$ will be measured in state $1$ with probability one. These states are sometimes also called basis states because, for a general qubit $\left|\psi\right\rangle=(a_0, a_1)$, we have \[\left|\psi\right\rangle=\begin{bmatrix} a_0 \\ a_1 \end{bmatrix}=a_0\,\left|0\right\rangle+a_1\,\left|1\right\rangle.\]
Since the amplitudes are complex numbers we can further represent them in polar form as $a_k = r_k e^{i \theta_k}$. Note that neither $\theta_0$ nor $\theta_1$ alone can affect the measurement probabilities of $\left|\psi\right\rangle$. In fact, from the point of view of quantum physics, only the relative phase $\phi:=(\theta_1-\theta_0)$, in short the phase of the qubit, is relevant to describe it. Because of this, it is customary to assume that $\theta_0=0$. In particular, since $(r_0^2+r_1^2)=1$, a general qubit can be uniquely described by a two-dimensional vector of the form: \begin{equation} \left|\psi\right\rangle = \cos\!\left(\frac{\theta}{2}\right)\,\left|0\right\rangle + e^{i\phi}\,\sin\!\left(\frac{\theta}{2}\right) \left|1\right\rangle = \begin{bmatrix} \cos(\theta/2) \\ e^{i\phi}\sin(\theta/2) \end{bmatrix}, \tag{1} \end{equation} for certain $0\le\theta\le\pi$ and $0\le\phi<2\pi$.
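The ket algebra above is easy to reproduce numerically. The short NumPy sketch below is illustrative (the function and variable names are our own): it builds a qubit from the Bloch angles of equation (1) and simulates repeated measurements.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

def qubit(theta, phi):
    """|psi> = cos(theta/2)|0> + e^{i phi} sin(theta/2)|1>, as in equation (1)."""
    return np.cos(theta / 2) * ket0 + np.exp(1j * phi) * np.sin(theta / 2) * ket1

def measure(psi, shots=10000, rng=np.random.default_rng(0)):
    """Simulate measurements: P(0) = |a0|^2 and P(1) = |a1|^2."""
    p = np.abs(psi) ** 2
    return np.bincount(rng.choice(2, size=shots, p=p), minlength=2) / shots

psi = qubit(theta=np.pi / 3, phi=np.pi / 4)
print(np.abs(psi) ** 2)   # exact probabilities: [cos^2(pi/6), sin^2(pi/6)] = [0.75, 0.25]
print(measure(psi))       # empirical frequencies close to the above
```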
Due to the identity in equation (1), it is common to visualize a qubit as a point on the so-called Bloch Sphere (see Figure 1). This alternative representation offers a visual reference of both the phase and the probabilities of measuring a qubit as either of the basis states, represented here as the north and south poles of the sphere. In general, the probabilities of measurement in each state are controlled entirely by the parameter $\theta$, which describes the qubit's proximity to either pole. Although the phase $\phi$ has no effect on measurement probabilities, it plays a key role for quantum interference (McIntyre et al. 2012), a phenomenon described in a later section.
Figure 1: The Bloch Sphere. Visual representation of a general qubit $\left|\psi\right\rangle$.
Single-qubit Gates
Single-qubit gates are linear operators that transform a qubit into another (possibly the same) qubit; in particular, they may be represented as complex matrices of dimension $(2\times 2)$. As qubits must be unit vectors, quantum gates must be norm preserving and thus unitary. Said another way, all quantum gates can be visualized as rotations along the Bloch sphere.
In what follows, the operator $\oplus$ denotes addition modulo 2. In particular, in the context of bits, we have that: \[ \begin{array}{cccc} 0\oplus 0=0; & 0\oplus 1 =1;& 1\oplus 0 =1;& 1\oplus 1=0. \end{array}\]
A fundamental quantum gate is the so-called "$X$-gate": \[ X := \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}. \;\;\; \] This gate is analogous to the classical NOT operation over bits, which maps a bit $x_1$ to $(x_1 \oplus 1)$, effectively flipping a $0$ into a $1$ and vice versa. Instead, $X(a_0,a_1)=(a_1,a_0)$, for all $(a_0,a_1)$ i.e. the $X$-gate flips the amplitudes of a qubit. For this reason $X$ is also referred to as the NOT gate of quantum computing.
The twist gates form an important class of gates parametrized by a parameter $0\le\alpha\le\pi$ as follows: \[ T(\alpha) := \begin{bmatrix} 1 & 0 \\ 0 & e^{i\alpha} \end{bmatrix}. \] This gate shifts the phase of a quantum state by $\alpha$ radians; in particular, it can be visualized as a rotation about the $z$-axis of the Bloch sphere. Two twist gates are of particular historical relevance. The identity gate is defined as $I:=T(0)$. On the other hand, the so-called "$Z$-gate" is the twist gate associated with $\alpha=\pi$ i.e. $Z := T(\pi)$. The $I$, $X$ and $Z$ linear operators belong to a special family called the Pauli matrices (Kaye et al. 2007).
Another fundamental single-qubit gate is the $(2\times 2)$ Hadamard matrix: \[ H := \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1\\ 1 & -1 \end{bmatrix}. \] This gate maps $\left|0\right\rangle\to(1,1)/\sqrt{2}$ and $\left|1\right\rangle\to(1,-1)/\sqrt{2}$. In particular, upon measurement, $H\left|0\right\rangle$ and $H\left|1\right\rangle$ have a 50/50 chance of being in state $0$ or $1$. The Hadamard gate is therefore useful to generate a uniform superposition of the measurement states. This gate is a common initialization step in many quantum algorithms.
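These gates can be checked directly in a few lines of NumPy (an illustrative sketch): each matrix is unitary, so it preserves the norm of a qubit, and applying $H$ to $\left|0\right\rangle$ produces the 50/50 superposition described above.

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

def T(alpha):
    """Twist (phase) gate: shifts the relative phase by alpha radians."""
    return np.array([[1, 0], [0, np.exp(1j * alpha)]])

Z = T(np.pi)
ket0, ket1 = np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)

for name, G in [("X", X), ("H", H), ("Z", Z)]:
    # unitary: G^dagger G = I, so norms (and hence probabilities) are preserved
    assert np.allclose(G.conj().T @ G, I), name

print(X @ ket0)               # -> |1>: the quantum NOT flips the amplitudes
print(H @ ket0)               # -> (|0> + |1>)/sqrt(2)
print(np.abs(H @ ket0) ** 2)  # -> [0.5, 0.5]: a uniform superposition
```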
Systems of Multiple Qubits
In what follows, $n\ge1$ is a fixed integer.
A quantum system composed by $n$ distinguishable qubits is also represented as a ket, say $\left|\psi\right>$. Much like in the case of a single qubit, the qubits in the system may be in a superposition of $2^n$ states right until measurement, at which time each qubit collapses into either the state $0$ or $1$. Accordingly, $\left|\psi\right>$ denotes a $2^n$-dimensional complex vector. The entries in this vector are again complex numbers called amplitudes, and their squared magnitudes represent the probabilities of the system collapsing to different states (i.e. configurations of zeroes and ones).
To amplify on the above, let $(k)_2$ denote the $n$-bits binary expansion of an integer $0\le k<2^n$. For instance, when $n=2$, $(0)_2=00$, $(1)_2=01$, $(2)_2=10$, and $(3)_2=11$.
By convention, for each $1\le i\le 2^n$, the $i$-th coordinate of an $n$-qubits system $\left|\psi\right\rangle$ is the amplitude associated with the state $w=(i-1)_2$, which we denote in general as $a_w$. In particular, if we identify binary expansions of length $n$ as elements in $\{0,1\}^n$, and $\left|(i-1)_2\right\rangle$ denotes the $i$-th canonical vector in dimension $2^n$, then \[\left|\psi\right\rangle=\sum_{w\in\{0,1\}^n}a_w\,\left|w\right\rangle.\] The canonical vectors $\left|w\right\rangle$ are again called the measurement, pure, or basis states. This is because for a given $w=(w_1,\ldots,w_n)\in\{0,1\}^n$, $|a_w|^2$ is the probability that, for each $1\le j\le n$, the $j$-th qubit collapses to state $w_j$ upon observation. In particular, we must have $\sum_{w\in\{0,1\}^n}|a_w|^2=1$, i.e. the Euclidean norm of $\left|\psi\right\rangle$ is one.
To fix ideas, in a 2-qubits system, $\left|00\right\rangle=(1,0,0,0)$, $\left|01\right\rangle=(0,1,0,0)$, $\left|10\right\rangle=(0,0,1,0)$, and $\left|11\right\rangle=(0,0,0,1)$. Moreover, a general 2-qubits system is described by a vector of the form: \[\left|\psi\right\rangle =\begin{bmatrix} a_{00}\\ a_{01}\\ a_{10}\\ a_{11} \end{bmatrix} =a_{00}\,\left|00\right\rangle+a_{01}\,\left|01\right\rangle+a_{10}\,\left|10\right\rangle+a_{11}\,\left|11\right\rangle,\] where $|a_{00}|^2+|a_{01}|^2+|a_{10}|^2+|a_{11}|^2=1$. Here, for each $i,j\in\{0,1\}$, $|a_{ij}|^2$ is the probability that the first and second qubit collapse into states $i$ and $j$, respectively, upon measurement.
Separable Systems
Describing certain quantum systems with multiple qubits is facilitated by the use of Kronecker products. For given matrices $A\in\mathbb{C}^{r\times c}$ and $B$ of possibly different dimensions, their Kronecker product is the block matrix: \[ A \otimes B := \begin{bmatrix} a_{1,1}\!\cdot\!B & \ldots & a_{1,c}\!\cdot\!B \\ \vdots & \ddots & \vdots \\ a_{r,1}\!\cdot\!B & \ldots & a_{r,c}\!\cdot\!B \end{bmatrix}. \] Although this product is typically non-commutative, it is bilinear and associative. In particular, one may denote the product of several matrices $A_i$, with $1\le i\le n$, simply as $\otimes_{i=1}^n A_i$.
A useful property about Kronecker products is that for matrices $B_i$ of size such that the usual matrix multiplication $A_i\cdot B_i$ is well-defined for each $i$, it applies that: \begin{equation}\tag{2} (\otimes_{i=1}^n A_i)\cdot(\otimes_{i=1}^n B_i) = \otimes_{i=1}^n (A_i\cdot B_i). \end{equation}
A quantum system $\left|\psi\right\rangle$ of $n$ qubits is called separable if it can be represented as the Kronecker product of its qubits. Namely, if $\left|\psi_i\right\rangle$ denotes the $i$-th qubit in the system then $\left|\psi\right\rangle = \otimes_{i=1}^n\left|\psi_i\right\rangle$. This is equivalent to saying that the qubits collapse independently of each other—in a probabilistic sense—upon measurement.
To fix ideas, if $n = 2$ and $\left|\psi_1\right\rangle = (a_0, a_1)$ and $\left|\psi_2\right\rangle = (b_0, b_1)$, then $\left|\psi_1\right\rangle\otimes\left|\psi_2\right\rangle=(a_0b_0,a_0b_1,a_1b_0,a_1b_1)$. In particular, the probability that $\left|\psi_1\right\rangle$ and $\left|\psi_2\right\rangle$ collapse upon measurement to states $i$ and $j$, respectively, is $|a_i|^2\cdot|b_j|^2$ i.e. the product of the probabilities that $\left|\psi_1\right\rangle$ collapses to state $i$ and $\left|\psi_2\right\rangle$ collapses to state $j$. Since this is the case for each $i,j\in\{0,1\}$, the states to which the qubits collapse upon measurement are independent. The same conclusion applies to systems of more qubits. Thus, in general, a separable quantum system can be thought of as a measurement of probabilistically independent qubits.
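A quick numerical check of this separability claim, using NumPy's kron (the amplitudes below are arbitrary but normalized):

```python
import numpy as np

a = np.array([np.sqrt(0.7), np.sqrt(0.3)], dtype=complex)   # |psi_1>
b = np.array([np.sqrt(0.4), np.sqrt(0.6)], dtype=complex)   # |psi_2>

psi = np.kron(a, b)          # amplitudes ordered as |00>, |01>, |10>, |11>
probs = np.abs(psi) ** 2     # joint measurement probabilities

# For a separable system the joint probabilities factor into products.
expected = np.outer(np.abs(a) ** 2, np.abs(b) ** 2).ravel()
print(np.allclose(probs, expected))   # True
print(probs.reshape(2, 2))            # rows: first qubit's state, columns: second qubit's state
```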
It follows that a separable system of $n$ qubits may be represented as a complex vector in a $2n$-dimensional subspace within the $2^n$-dimensional ambient space. In short, we sometimes write $\left|\psi_1\right\rangle\cdots\left|\psi_n\right\rangle$ instead of $\left|\psi_1\right\rangle\otimes\cdots\otimes\left|\psi_n\right\rangle$.
A system of qubits that is not separable is called entangled. It is commonly assumed at the start of a quantum computation that all qubits are separable. As the quantum computation proceeds, however, these may become entangled as one performs special operations on pairs of them. We emphasize that separability and entanglement are properties of quantum systems with at least two qubits.
Multi-qubit Gates and Circuits
Quantum gates that act on systems of $n$ qubits can be thought of as $(2^n \times 2^n)$ unitary matrices.
A quantum gate is called separable if it can be represented as the Kronecker product of single-qubit gates. Separable gates keep separable qubits unentangled. A non-separable gate is called entangling.
To fix ideas, consider two unentangled qubits $\left|\psi_1\right\rangle$ and $\left|\psi_2\right\rangle$, and the gate $U:=(X\otimes H)$ which is by definition separable. Because the qubits are unentangled, the system is described by the state $\left|\psi_1\right\rangle\otimes\left|\psi_2\right\rangle$; in particular, from the identity in equation (2), we see that $U(\left|\psi_1\right\rangle\otimes\left|\psi_2\right\rangle)=X\left|\psi_1\right\rangle\otimes H\left|\psi_2\right\rangle$ i.e. the $U$ gate keeps the unentangled qubits separable.
Figure 2: $U=(X\otimes H)$ gate with labeled columns and rows.
In general, to interpret the action of a gate (separable or not), it is useful to view its columns as inputs and its rows as outputs. For example, Figure 2 displays the matrix associated with the previous gate $U$, where for convenience we have labeled its columns and rows by the corresponding measurement states. It follows, for instance, that the column corresponding with the input $\left|00\right\rangle$ (i.e. the first column) is associated with the output $(0,0,1,1)/\sqrt{2}$ i.e. the quantum state $(\left|10\right\rangle + \left|11\right\rangle)/\sqrt{2}$.
When working with large systems of qubits, the matrix representation of a quantum gate can be challenging to work with due to an exponentially large number of rows and columns. Because of this, quantum gates are usually represented as diagrams called quantum circuits. These diagrams make it much easier to ascertain the operations as well as the order in which they are applied to each qubit in the system.
To fix ideas, consider the three-qubit circuit in Figure 3. Each qubit has an associated qubit line, with quantum gates on each line acting on the corresponding qubit. Quantum circuits are read left to right, and due to their resemblance to musical scores, they are sometimes also called quantum scores. The input state for this circuit is $\left|x_1\right\rangle\left|x_2\right\rangle\left|x_3\right\rangle$. The output state is $\left|y\right\rangle$, an 8-dimensional complex vector. Single-qubit gates (such as $X$, $H$ and $I$) are represented as single and vertically aligned boxes along the qubit lines. Instead, gates that take multiple qubits as inputs (such as the previous gate $U$) are represented as boxes spanning multiple qubit lines. The quantum gate associated with this circuit is by definition $(I\otimes U)\cdot(U\otimes I)\cdot(X\otimes H \otimes H)$. Namely: $(I\otimes U)(U \otimes I)(X\otimes H \otimes H)\,\left|x_1\right\rangle\left|x_2\right\rangle\left|x_3\right\rangle=\left|y\right\rangle$.
Figure 3: Example of a Quantum Circuit. Quantum Circuit associated with the operator $(I\otimes U)\cdot(U\otimes I)\cdot(X\otimes H \otimes H)$. The identity gate is commonly omitted from quantum circuits but it is drawn in here for concreteness.
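The overall operator of the circuit in Figure 3 can be assembled exactly as written, keeping in mind that the leftmost gates act first and therefore appear rightmost in the matrix product, while vertically aligned gates become Kronecker factors. A small NumPy sketch (illustrative only):

```python
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
U = np.kron(X, H)   # the two-qubit gate of Figure 2

# (I ⊗ U)(U ⊗ I)(X ⊗ H ⊗ H): an 8x8 unitary acting on three qubits
circuit = np.kron(I, U) @ np.kron(U, I) @ np.kron(X, np.kron(H, H))
assert np.allclose(circuit.conj().T @ circuit, np.eye(8))

ket0 = np.array([1, 0], dtype=complex)
x = np.kron(ket0, np.kron(ket0, ket0))    # input state |000>
y = circuit @ x
print(np.round(np.abs(y) ** 2, 3))        # probabilities of the 8 measurement states
```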
The most common entangling gate is the controlled-not (CNOT) gate, also denoted as $\wedge_1 (X)$, which acts over two qubits. It is displayed in matrix form in Figure 4, where we have explicitly labeled its columns and rows to interpret its action better. It follows that when the control qubit is in state $\left|0\right\rangle$ the identity gate is applied to the target qubit, however, when the control is in state $\left|1\right\rangle$ the NOT gate is applied to the target. In other words, when $x_1,x_2\in\{0,1\}$, CNOT transforms $\left|x_1\right\rangle\left|x_2\right\rangle$ into $\left|x_1\right\rangle\left|x_1\oplus x_2\right\rangle$; in particular, it encodes the exclusive-or of the control and target qubits into the target qubit. The standard quantum circuit representation of the CNOT gate can be seen in Figure 5.
Figure 4: CNOT gate with labeled columns and rows.
It turns out CNOT is the only entangling gate required for a quantum computer. This is because the gates $H$, $T(\pi/4)$, and $\wedge_1(X)$ are universal for quantum computing (Kaye et al. 2007). Namely, one can approximate any quantum gate using a quantum circuit composed only of these gates. (Since in finite dimension all norms are equivalent, there is no need to define the norm used for the approximation explicitly.)
Figure 5: CNOT Quantum Circuit. Representation of $\wedge_1(X)$ as a quantum circuit. The control qubit ($x_1$ in this case) can be identified by the thick dot ($\bullet$) on its line. The target qubit (here $x_2$) is pointed by the modulo-two sum ($\oplus$) on its line.
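A small numerical illustration (a sketch, not part of the original article) of how the CNOT gate entangles two initially unentangled qubits: applying $H$ to the control and then CNOT to $\left|00\right\rangle$ yields a state whose joint measurement probabilities no longer factor into a product of marginals.

```python
import numpy as np

I = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

ket00 = np.array([1, 0, 0, 0], dtype=complex)
bell = CNOT @ np.kron(H, I) @ ket00
print(bell)                     # (|00> + |11>)/sqrt(2)

# Entangled: the joint probabilities are not the product of the marginals.
p = np.abs(bell) ** 2           # [0.5, 0, 0, 0.5]
marg1 = p.reshape(2, 2).sum(axis=1)   # marginal of the first qubit
marg2 = p.reshape(2, 2).sum(axis=0)   # marginal of the second qubit
print(np.allclose(p, np.outer(marg1, marg2).ravel()))   # False: outcomes are correlated
```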
We mention in passing that the notation of $\wedge_1(X)$ for the CNOT gate is part of a more general construct in quantum computing. Indeed, for any integer $k\ge1$ and single qubit gate $U$ of the form $U=V^{k+1}$, with $V$ a unitary operator, $\wedge_k(U)$ denotes a gate on $(k+1)$ qubits ($k$ controls and one target) such that the gate $U$ is applied to the target qubit if and only if all the control qubits are in state $\left|1\right\rangle$. Otherwise, the target qubit is left unchanged.
Quantum Interference and Decoherence
A fundamental idea in quantum computing is to control the probability a system of qubits collapses into particular measurement states. Quantum interference, a byproduct of superposition, is what allows us to bias the measurement of a qubit toward a desired state or set of states.
To fix ideas, consider the qubit in superposition $\left|\psi\right\rangle=(1,-1)/\sqrt{2}$ and note that $H\left|\psi\right\rangle=\left|1\right\rangle$. In other words, if the Hadamard gate is applied to $\left|\psi\right\rangle$ then it will be observed in the pure state $\left|1\right\rangle$ with theoretical certainty upon measurement. This is quantum interference at its purest.
Observe, however, that even though $\left|\psi\right\rangle$ may be measured in state $\left|0\right\rangle$ and $\left|1\right\rangle$ with equal probability, this is not the same as saying that $H\left|\psi\right\rangle=H\left|0\right\rangle$ with probability $1/2$, and $H\left|\psi\right\rangle=H\left|1\right\rangle$ with probability $1/2$. In fact, neither $H\left|0\right\rangle$ nor $H\left|1\right\rangle$ equals a pure state. Further, for $H\left|\psi\right\rangle=H\left|0\right\rangle$ or $H\left|\psi\right\rangle=H\left|1\right\rangle$ to each hold with probability $1/2$, $\left|\psi\right\rangle$ would have to have been incidentally measured before the application of the Hadamard gate. (After an unintended measurement, a qubit is said to be in a mixed state.) Quantum interference may therefore be disrupted by an incidental measurement of a system qubit. This phenomenon is called quantum decoherence and can be a major source of error when working with physical quantum computers.
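The distinction between coherent interference and decoherence can be made concrete numerically. In the sketch below (illustrative), the first computation applies $H$ to the intact superposition, while the second mimics an incidental measurement by averaging the probabilities of the two collapsed alternatives.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

psi = (ket0 - ket1) / np.sqrt(2)    # coherent superposition (1, -1)/sqrt(2)
print(np.abs(H @ psi) ** 2)         # [0, 1]: interference yields |1> with certainty

# Decoherence: if psi is incidentally measured first, it collapses to |0> or |1>
# with probability 1/2 each, and only then is H applied (a mixed state).
p_mixed = 0.5 * np.abs(H @ ket0) ** 2 + 0.5 * np.abs(H @ ket1) ** 2
print(p_mixed)                      # [0.5, 0.5]: the interference is destroyed
```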
Quantum Algorithms
Deutsch's Algorithm
In 1985, a few years after Feynman's famous lectures on quantum computing (see Introduction), David Deutsch published a paper in which he formalized quantum complexity theory and quantum Turing machines (Deutsch 1985). This made it possible to compare the computational speed of classical computers against theoretical quantum machines. In the same paper, Deutsch posed a problem, and a quantum algorithm to solve it, that hinted at potential speedups of quantum computing in comparison to known classical algorithms.
Deutsch's algorithm is one of the most straightforward quantum algorithms. Thus, it serves as a prototype for how to think about and approach quantum algorithms. Its problem statement is as follows.
Problem: Given $f:\{0,1\}\rightarrow\{0,1\}$, determine if $f$ is a constant function with the least number of evaluations of this function.
Recall that $\oplus$ represents addition modulo 2 or, equivalently, the exclusive-or operation (XOR).
Whereas a classical computer could only solve this problem with two evaluations (i.e. by evaluating both $f(0)$ and $f(1)$), Deutsch showed that a quantum computer could in principle extract the value of $f(0) \oplus f(1)$ at once, taking advantage of superposition to evaluate $f(0)$ and $f(1)$ simultaneously. Since $f(0) \oplus f(1)=0$ if and only if $f$ is constant, the quantum algorithm should require just one evaluation of the function $f$ to determine whether it is constant or not.
The algorithm relies on the quantum gate $U_f\left|x\right\rangle\left|y\right\rangle:=\left|x\right\rangle\left|f(x)\oplus y\right\rangle$, for $x,y\in\{0,1\}$, which extends to arbitrary kets because of linearity. We note this gate is a particular instance of a more general class of gates that encode Boolean functions. In this context, $\left|x\right\rangle$ is called the control qubit. Although the definition of this quantum gate requires evaluating the function twice, Deutsch's algorithm demonstrates nonetheless how one could in principle determine if $f$ is constant or not, only through the manipulation of a single quantum gate.
Observe that if $\left|y\right\rangle=H\left|1\right\rangle$ then \begin{eqnarray*} U_f\left|0\right\rangle\left|y\right\rangle &=& \frac{U_f\left|0\right\rangle\left|0\right\rangle-U_f\left|0\right\rangle\left|1\right\rangle}{\sqrt{2}}=\frac{\left|0\right\rangle\left|f(0)\right\rangle-\left|0\right\rangle\left|f(0)\oplus1\right\rangle}{\sqrt{2}}=(-1)^{f(0)}\left|0\right\rangle\left|y\right\rangle;\\ U_f\left|1\right\rangle\left|y\right\rangle &=& \frac{U_f\left|1\right\rangle\left|0\right\rangle-U_f\left|1\right\rangle\left|1\right\rangle}{\sqrt{2}}=\frac{\left|1\right\rangle\left|f(1)\right\rangle-\left|1\right\rangle\left|f(1)\oplus1\right\rangle}{\sqrt{2}}=(-1)^{f(1)}\left|1\right\rangle\left|y\right\rangle. \end{eqnarray*} Consequently, if $\left|x\right\rangle=H\left|0\right\rangle$ then \[U_f\left|x\right\rangle\left|y\right\rangle=\frac{(-1)^{f(0)}\left|0\right\rangle+(-1)^{f(1)}\left|1\right\rangle}{\sqrt{2}}\left|y\right\rangle=(-1)^{f(0)}\frac{\left|0\right\rangle+(-1)^{f(0)\oplus f(1)}\left|1\right\rangle}{\sqrt{2}}\left|y\right\rangle.\] Thus, up to a phase shift, after the application of $U_f$ the control qubit will be in state $(1,1)/\sqrt{2}$ if $f$ is constant but state $(1,-1)/\sqrt{2}$ otherwise. Since these correspond to the columns of the Hadamard gate, applying $H$ to the control qubit will then map it to $\left|0\right\rangle$ if $f$ is constant and to $\left|1\right\rangle$ otherwise. In other words: \[(H\otimes I)U_f\left|x\right\rangle\left|y\right\rangle=\left\{\begin{array}{lcl} \left|0\right\rangle\left|y\right\rangle &,& \hbox{ if } f(0)=f(1);\\ \left|1\right\rangle\left|y\right\rangle &,& \hbox{ if } f(0)\ne f(1).\\ \end{array}\right.\] Finally, since $\left|y\right\rangle=H\left|1\right\rangle=HX\left|0\right\rangle$, the circuit in Figure 6 implements the above procedure.
Figure 6: Circuit Diagram of Deutsch's Algorithm. The function $f:\{0,1\}\to\{0,1\}$ is constant if and only if the qubit at the top is mapped to $\left|0\right\rangle$ at the end of the circuit.
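The circuit of Figure 6 can be simulated directly with a few lines of linear algebra; the sketch below (illustrative only) runs it for all four Boolean functions on one bit.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
I = np.eye(2)

def U_f(f):
    """Two-qubit gate |x>|y> -> |x>|f(x) XOR y>."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (f(x) ^ y), 2 * x + y] = 1
    return U

for f in [lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x]:
    state = np.kron(I, X) @ np.kron(np.array([1, 0]), np.array([1, 0]))   # |0>|1>
    state = np.kron(H, H) @ state           # prepare |x> = H|0>, |y> = H|1>
    state = np.kron(H, I) @ U_f(f) @ state  # apply U_f, then H on the control qubit
    p_control_0 = state[0]**2 + state[1]**2
    print("constant" if p_control_0 > 0.99 else "balanced")
```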
Grover's Algorithm
Grover's algorithm is often described as "finding a needle in a haystack.'' It is a remarkable algorithm in that it demonstrates a quantum computer's ability to find a particular item (the needle) in a large list of items (the haystack), with a quadratic speedup compared to the best-known classical algorithm. Much like Deutsch's algorithm, however, it is more of a theoretical than a practical exercise. We expand on this remark at the end.
In abstract terms, Grover's algorithm addresses the following problem.
Problem: Define $[N]:=\{0,\ldots,2^n-1\}$, for some integer $n\ge1$, and let $N=2^n$. Given a permutation $\sigma:[N]\to[N]$ chosen uniformly at random, determine the index $w\in[N]$ such that $\sigma(w)=1$.
With a traditional mindset, all one can do is evaluate $\sigma$ sequentially until finding the index at which it evaluates to $1$. Since $\sigma$ is random, the average number of queries is $N/2$, i.e., $O(N)$. In contrast, we explain ahead that Grover's quantum algorithm can perform the same task in $O(\sqrt{N})$ evaluations using $n$ qubits (Grover 1996). There is, however, an essential distinction between these approaches: whereas the classical algorithm is guaranteed to succeed, the quantum algorithm can only achieve a high probability of success but cannot guarantee it.
Figure 7: Quantum Circuit for Grover's Algorithm.
Identifying $[N]$ with $\{0,1\}^n$, Grover's algorithm exploits superposition to, in some sense, evaluate $\sigma$ at all $w\in[N]$ at once. Figure 7 displays the quantum circuit associated with it. The circuit is initialized with $n$ qubits that are put in uniform superposition using Hadamard gates. It then proceeds with repeated applications of an $n$-qubit gate $G$, called the Grover iterate. This is defined as $G=D\,S_w$, where $D$ is called the diffusion transform and $S_w$ the quantum oracle.
As mentioned earlier, Grover's algorithm is more of a theoretical than a practical exercise. This is because the definition of the quantum oracle necessitates knowing in advance the index $w$ such that $\sigma(w)=1$. Indeed, the quantum oracle flips the amplitude of the measurement state associated with the unique $w\in[N]$ such that $\sigma(w)=1$. Namely: \[S_w\left|x\right\rangle:=\left\{\begin{array}{ccl}-\left|x\right\rangle &,& \hbox{ if } \left|x\right\rangle=\left|w\right\rangle \\ \left|x\right\rangle &,& \hbox{ otherwise.}\end{array}\right.\] This operator is unitary because it corresponds to a diagonal matrix with $\pm 1$ entries along the diagonal.
On the other hand, the diffusion transform inverts the amplitudes of a ket about its average amplitude. More specifically, if $\left|\psi\right\rangle=\sum_{x\in\{0,1\}^n}a_x\left|x\right\rangle$ then \[D\left|\psi\right\rangle:=\sum_{x\in\{0,1\}^n}\big(\bar{a}+(\bar{a}-a_x)\big)\,\left|x\right\rangle,\hbox{ where }\bar a:=\frac{1}{2^n}\sum_{x\in\{0,1\}^n}a_x.\] This operator is also unitary because $\sum_{x\in\{0,1\}^n}|\bar{a}+(\bar{a}-a_x)|^2=\sum_{x\in\{0,1\}^n}|a_x|^2=1$. In fact, it can be represented in terms of other quantum gates as follows: $D=H^{\otimes n}\cdot S_{0^n}\cdot H^{\otimes n}$ (Grover 1996).
Figure 8: Insights on the Grover Iterate. (i) Initial uniform superposition of all measurement states. (ii) Amplitudes after the index $w$ such that $\sigma(w)=1$ has been marked by the quantum oracle $S_w$. The dotted line represents the average amplitude $(\bar a)$ after application of the quantum oracle. (iii) Amplitudes after the diffusion transform $D$ inverts the amplitudes about their mean.
Together, the quantum oracle and diffusion transform amplify the amplitude of the unique measurement state associated with the index $w\in[N]$ such that $\sigma(w)=1$ (see Figure 8). We note, however, that running the quantum algorithm for too long may decrease its success probability, necessitating a bound for its optimal number of iterations. In fact, it is shown in (Boyer et al. 1998) that after $j$ iterations of the $G$-gate the amplitude associated with state $w$ is $\sin((2j+1)\theta)$, where $0\le\theta\le\pi/2$ is such that $\sin^2(\theta)=1/N$. When $N$ is large, $\theta\sim1/\sqrt{N}$. In particular, since the amplitude associated with $w$ is maximized for $j$ such that $(2j+1)\theta\sim\pi/2$, the optimal number of iterations is $j\sim\pi\sqrt{N}/4$ (Boyer et al. 1998). Thus, Grover's algorithm determines with high probability the index $w$ such that $\sigma(w)=1$ in $O(\sqrt{N})$ iterations.
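The amplitude-amplification behavior described above is easy to reproduce numerically; the following sketch (ours) uses the oracle and diffusion transform exactly as defined and stops after roughly $\pi\sqrt{N}/4$ iterations.

```python
import numpy as np

n = 8                          # number of qubits
N = 2 ** n
w = 3                          # the index with sigma(w) = 1 (known only to the oracle)

amps = np.full(N, 1 / np.sqrt(N))            # uniform superposition over all N states
n_iter = int(round(np.pi * np.sqrt(N) / 4))  # ~13 iterations for N = 256
for _ in range(n_iter):
    amps[w] *= -1                            # oracle S_w: flip the marked amplitude
    amps = 2 * amps.mean() - amps            # diffusion D: invert about the mean
print(n_iter, amps[w] ** 2)                  # success probability ~0.99
```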
As a closing remark, we note that despite not being practical, Grover's algorithm exemplifies how the sole manipulation of amplitudes may offer significant speed-ups on problems that rapidly become intractable in standard computational settings.
Final Remarks and Further Readings
The principal goal of this article is to introduce some of the most fundamental aspects of quantum computing in a concise and self-contained manner to facilitate their learning. By no means, however, does this represent a complete account of the subject. A more comprehensive, yet still not exhaustive, introduction, including a brief prelude to quantum physics and a guide to programming IBM's public quantum computer, can be found in the recent MS thesis by Pieper (2017). We note that IBM's Quantum Experience allows users to experiment with and create their own quantum circuits, with a welcoming introduction to the theory. It also gives detailed references on a physical implementation of a quantum computer and has an active community to answer questions.
Some excellent introductions, including helpful expositions of popular quantum algorithms, are the textbooks by Kaye et al. (2007), and Nielsen and Chuang (2011). On the other hand, the textbook by McIntyre et al. (2012) gives good examples and intuitions on the physics of quantum particles.
For somewhat informal yet fun expositions of this and other related subjects, the books by Baggott (2011), Aaronson (2013), and Gisin (2014) are highly recommended. Moreover, Scott Aaronson's blog provides an informal, yet nuanced perspective on quantum computing and addresses many common misconceptions about this increasingly popular topic.
In conclusion, we would like to note that there are other models of quantum computation not based on qubits, such as adiabatic quantum computation, one-way quantum computing, quantum annealing, and topological quantum computing, among others. Moreover, much controversy exists around the physical feasibility of building a quantum computer, and there has been much debate about the real quantum nature, or absence of it, of architectures such as D-wave.
This work is based on the MS thesis by Pieper (2017), which was partially funded by the NSF EXTREEMS-QED grant #1407340. The authors would also like to thank the two reviewers for their careful reading and valuable comments.
S. Aaronson. Quantum Computing Since Democritus. Cambridge University Press, 2013.
S. Aaronson's Blog. https://www.scottaaronson.com/blog/
J. Baggott. The Quantum Story: A history in 40 moments. Oxford University Press, 2011.
M. Boyer, G. Brassard, P. Hoyer, and A. Tapp. Tight bounds on quantum searching. Fortschritte der Physik, 46(4-5):493–505, 1998.
K.-A. Brickman, P. C. Haljan, P. J. Lee, M. Acton, L. Deslauriers, and C. Monroe. Implementation of Grover's quantum search algorithm in a scalable system. Physical Review A, 72(5):050306, 2005.
J. Chiaverini, J. Britton, D. Leibfried, E. Knill, M. D. Barrett, R. B. Blakestad, W. M. Itano, J. D. Jost, C. Langer, R. Ozeri, et al. Implementation of the semiclassical quantum Fourier transform in a scalable system. Science, 308(5724):997–1000, 2005.
S. Debnath, N. M. Linke, C. Figgatt, K. A. Landsman, K. Wright, and C. Monroe. Demonstration of a small programmable quantum computer with atomic qubits. Nature, 536(7614):63–66, 2016.
D. Deutsch. Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 400(1818):97–117, 1985.
D. Deutsch and R. Jozsa. Rapid solution of problems by quantum computation. Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 439(1907):553–558, 1992.
R. P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6):467–488, 1982.
N. Gisin. Quantum Chance: Nonlocality, Teleportation and Other Quantum Marvels. Copernicus, New York, 2014.
L. K. Grover. A fast quantum mechanical algorithm for database search. In Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing, pages 212–219. ACM, 1996.
S. Gulde, M. Riebe, G. P. T. Lancaster, C. Becher, J. Eschner, H. Häffner, F. Schmidt-Kaler, I. L. Chuang, and R. Blatt. Implementation of the Deutsch-Jozsa algorithm on an ion-trap quantum computer. Nature, 421(6918):48–50, 2003.
IBM Quantum Experience. https://quantumexperience.ng.bluemix.net/qstage/#/user-guide. Accessed: 2017-04-01.
P. Kaye, R. Laflamme, and M. Mosca. An Introduction to Quantum Computing. Oxford University Press, 2007.
J. Koch, T. M. Yu, J. Gambetta, A. A. Houck, D. I. Schuster, J. Majer, A. Blais, M. H. Devoret, S. M. Girvin, and R. J. Schoelkopf. Charge-insensitive qubit design derived from the Cooper pair box. Physical Review A, 76:042319, Oct 2007.
N. Linden, H. Barjat, and R. Freeman. An implementation of the Deutsch-Jozsa algorithm on a three-qubit NMR quantum computer. Chemical Physics Letters, 296(1–2):61–67, 1998.
D. H. McIntyre, C. A. Manogue, and J. Tate. Quantum Mechanics: A Paradigms Approach. Pearson, 2012.
T. Monz, D. Nigg, E. A. Martinez, M. F. Brandl, P. Schindler, R. Rines, S. X. Wang, I. L. Chuang, and R. Blatt. Realization of a scalable Shor algorithm. Science, 351(6277):1068–1070, 2016.
M. A. Nielsen, and I. L. Chuang. Quantum Computation and Quantum Information: 10th Anniversary Edition. Cambridge University Press, 2011.
J. K. Pieper. Unentangling Quantum Algorithms for Mathematicians and Engineers. Master's thesis, University of Colorado, Boulder, 2017.
P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484–1509, 1997.
L. M. K. Vandersypen, M. Steffen, G. Breyta, C. S. Yannoni, M. H. Sherwood, and I. L. Chuang. Experimental realization of Shor's quantum factoring algorithm using nuclear magnetic resonance. Nature, 414(6866):883–887, 2001.
Sponsored by: Prof. James Meiss, Applied Mathematics University of Colorado, Boulder, CO, USA
Reviewed by: Prof. Stephen Becker, University of Colorado Boulder, Boulder, Colorado, United States
Reviewed by: Prof. James Meiss, Applied Mathematics University of Colorado, Boulder, CO, USA
Frustrated by the light clock special relativity thought experiment [closed]
Here is this age old thought experiment being told by a professor on Sixty Symbols: https://youtu.be/Cxqjyl74iu4
This explanation using the light clock is extremely frustrating. How can one use a hypothetical example which is physically impossible and then say the "result" explains SR? The photon would never hit the top mirror directly above its source b/c light does not take on the velocity of its source. Instead, the instant it leaves its source it goes straight up while the rocket moves forward, and would strike the back of the rocket (or the top somewhere to the left of the mirror). If the photon struck the mirror it would not move forward with the rocket, but again would go straight down while the rocket moves forward, b/c for the photon to move forward it would have to feel the friction of the mirror pushing it forward, which is again impossible. The reason a wave such as sound would have the trajectory shown in this example is that the medium inside the rocket, air, is moving at the speed of the rocket and the sound wave would take on that velocity as it left its source. Light does not use a medium to move. The reason a physical object such as a ball would have the trajectory shown is that particles take on the velocity of the source that is accelerating them. Again, light does not take on that velocity, but instead it instantly has its standard speed (c) as it leaves its source. So, the photon does the exact same thing leaving a moving source as it would a stationary source: it moves at the speed of light in the direction it's facing, hence no length is added to its trajectory as stated in the example, and thus does not prove the time dilation of SR. Anybody else hearing me here? Thoughts?
special-relativity
wattsananda
$\begingroup$ That the momentum of light can't change because its velocity can't change is simply false. For photons we have $p=h\nu/c$, i.e. momentum is proportional to the frequency. $\endgroup$ – CuriousOne Jan 11 '16 at 19:22
$\begingroup$ You say "the photon" as if there was only one. Imagine instead, a spherical wave emanating from a point source (the emitter) and reflecting off the mirror. Some part of the reflected wave eventually interacts with the point-like detector. $\endgroup$ – Solomon Slow Jan 11 '16 at 19:59
How can one use a hypothetical example which is physically impossible and then say the "result" explains SR?
It isn't impossible, you're just missing the trick, which is that directions get "skewed" a little. Have a look at this question which featured a light beam and a ship's mast. Imagine you're holding the parallel-mirror thing and you're moving thataway → quite fast. If the light didn't reflect up and down, you'd claim it wasn't aimed straight up. Only when you claim it is, and I'm sitting here motionless watching you zoom by, I claim it isn't.
The photon would never hit the top mirror directly above it's source b/c light does not take on the momentum of it's source. instead, the instant it leaves it's source it goes straight up
Your straight up isn't the same as my straight up.
The reason a wave such as sound would have the trajectory shown in this example is that the medium inside the rocket, air, is moving at the speed of the rocket and the sound wave would take on that velocity as it left it's source. Light does not use a medium to move.
It does. Have a look at Nobel Laureate Robert B Laughlin here: "It is ironic that Einstein's most creative work, the general theory of relativity, should boil down to conceptualizing space as a medium when his original premise [in special relativity] was that no such medium existed".
So, the photon does the exact same thing leaving a moving source as it would a stationary source, it moves at the speed of light in the direction it's facing, hence no length is added to it's trajectory as stated in the example, and thus does not prove the time dilation of SR.
The light-path length is longer. Have a look at the Simple inference of time dilation due to relative velocity on Wikipedia. Gamma is derived very simply from Pythagoras's theorem. The hypotenuse is the light path, the base represents your speed as a fraction of c, and the height works out to the reciprocal of the Lorentz factor, $\sqrt{1-v^2/c^2}$.
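To spell out the Pythagoras step (a quick sketch; $L$ here is just the mirror separation, not a quantity from the video): in the clock's rest frame a tick takes $t = 2L/c$, while in the frame where the clock moves at speed $v$ each half-tick light path is the hypotenuse of a right triangle with base $vt'/2$ and height $L$, so

$$\left(\frac{ct'}{2}\right)^2 = \left(\frac{vt'}{2}\right)^2 + L^2 \quad\Longrightarrow\quad t' = \frac{2L/c}{\sqrt{1-v^2/c^2}} = \gamma\,t.$$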
Anybody else feeling me here? Thoughts?
Cross my heart and hope to die, this stuff is simpler than you think. Michael Merrifield's explanation is right. Have a look at the aberration of light for another example of directions appearing to change.
John Duffield
You may want to consider that a photon is not just a particle, it is also a wave. At any given moment it propagates at speed c in a direction $\vec{k}$ perpendicular to its wave front. But the direction of the wave front differs in different frames.
The clock's frame observes the photon having a wave front parallel to the mirrors, and propagating perpendicular to it and to the mirrors.
But the other frame observes a wave front tilted in the direction of motion at a slope $-\gamma\beta$, and therefore a photon propagating along a direction tilted at a slope $\frac{1}{\gamma\beta}$. The detailed reason has to do with relativity of simultaneity. This answer offers a detailed calculation of how this happens. It is not a first principles one, since it relies on the Lorentz transformations, but it clarifies why everything is self-consistent. See the other answers to the same question for first principles arguments.
In addition there is a neat simulation of the effect here, as a Java applet.
udrv
How much CO2 could you remove from the atmosphere with 2 trillion USD?
I know it's possible to capture $\ce{CO2}$ with various chemical reactions. For example NASA's space shuttle had some kind of regenerative $\ce{CO2}$ scrubber. But how expensive is it? Could a huge number of these devices, or something like it, significantly reduce the atmosphere's $\ce{CO2}$ level?
I realize this is probably not practical, but it's an interesting thought experiment. To make my question more specific: can anyone show some rough calculations of how much $\ce{CO2}$ you could remove if you had, say, 2 trillion USD to spend? To simplify, assume that energy is at current prices, but clean. (i.e., if the device requires electricity, assume it's coming from a nuclear power plant, solar farm, etc.).
environmental-chemistry atmospheric-chemistry
Melanie Shebel♦
Rob N
$\begingroup$ Humans are increasingly producing $\ce{CO2}$, and there's never a world-wide agreement for using cleaner methods. Even if we could, we'd end up cleaning in circles. $\endgroup$ – M.A.R. ಠ_ಠ Jul 27 '15 at 17:11
$\begingroup$ How about we plant a bunch of trees? $\endgroup$ – Aura Jul 27 '15 at 20:58
$\begingroup$ Why not use non-toxic, solar powered, environment friendly, self sustaining and duplicating CO2 scrubbers? We just need more of them. $\endgroup$ – PTwr Jul 27 '15 at 23:05
$\begingroup$ Algae is a much more efficient CO2 scrubber than trees. treehugger.com/urban-design/… $\endgroup$ – Timbo Jul 27 '15 at 23:29
$\begingroup$ Start with reading what the IPCC has to say: 1, 2. The only scenarios seriously considered are Bioenergy with carbon dioxide capture and storage and afforestation. However, I don't think 2 trillion USD is actually a very large amount of money in this context. You need to consider indirect (e.g., socio-economic) costs. $\endgroup$ – Roland Jul 28 '15 at 14:37
This is very hard to answer precisely, as there are many different carbon capture strategies, and economics at the scale required is quite different from our normal understanding. However, I'd love to see some attempts to at least get order of magnitude estimates, or sources with more in-depth analyses.
Here is an implementation of carbon capture and storage which is useful to consider for its simplicity rather than its real-world applicability, to give a sense of scale. One way to remove anthropogenic $\ce{CO2}$ is to do "inverse combustion", more specifically:
$$\ce{CO2(g) -> C(s) + O2(g)}\ \ \ \ \ \ \ \mathrm{\Delta H=+390\ kJ/mol}$$
Assume this process can be done with perfect efficiency, and that the only energy expense in the process is to drive the reaction forwards (that is, zero energy consumed in transportation, collection, construction, etc). According to this source, the amount of anthropogenic $\ce{CO2}$ emissions from 1750 to 2008 has totalled about $1250\times 10^9\ \mathrm{t_{CO_2}}$. Suppose you wish to remove all this carbon dioxide (only about half is in the atmosphere, the rest is trapped in the ocean or land) using the above process. This would require about $\mathrm{10^{22}\ J}$ of energy, which Wolfram Alpha suggests is about 50% more energy than can be retrieved from combustion of all global proven oil reserves in 2003. Put another way, this is about 20 times the world energy consumption in 2012, or 150 times world electrical energy production. This would put the cost of this process in the range of hundreds of trillions of US dollars, meaning two trillion USD barely makes a dent.
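For what it's worth, the order of magnitude is easy to reproduce with a quick back-of-envelope script (the world-energy figure below is my own rough assumption, not a value taken from this answer):

```python
# Back-of-envelope check of the ~1e22 J figure quoted above.
co2_mass_kg = 1250e9 * 1000          # 1250 Gt of CO2, in kg
molar_mass_kg = 0.044                # kg per mol of CO2
moles = co2_mass_kg / molar_mass_kg  # ~2.8e16 mol
energy_J = moles * 390e3             # 390 kJ/mol to drive CO2 -> C + O2
print(f"{energy_J:.1e} J")           # ~1.1e22 J

world_energy_2012_J = 5.6e20         # rough global primary energy use in 2012 (assumption)
print(energy_J / world_energy_2012_J)  # roughly 20x, consistent with the comparison above
```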
Is there some other process where two trillion USD is close to enough? I seriously doubt it.
Edit: Several comments and answers mention biological sequestration, which is a legitimate carbon capture strategy. I did not consider it, however, because its costs are far more complicated to calculate. My intention with this answer was to find a quick and comparatively simple way to attach an energy cost to carbon sequestration. Whether the monetary cost even makes sense at this scale (how do you define "monetary cost" when it's larger than world GDP?), I don't know.
But here's another amusing comparison, which should somewhat temper hopes that biological sequestration will be a magic bullet. The amount of anthropogenic carbon released between 1750 and 2008, ~350 billion metric tons of carbon, is comparable to a significant amount of the biomass on Earth (all eukaryotic life contains approximately 560 billion metric tons of carbon, multicellular life is a fraction of this). Thus, biological sequestration of the majority of anthropogenic carbon would be broadly equivalent to sacrificing all eukaryotic life on Earth (or ~10-30% of all living organisms by mass) in order to collect and bury carbon, then seeding the Earth back to its current biological state.
Over 250 years of burning fossil fuels to produce energy and goods, we have released a lot of carbon.
Nicolau Saker Neto
$\begingroup$ It basically boils down to the fact that we'd have to 'unburn' all the fossil fuels we've used so far, and then some, and that energy has to come from something other than said fossil fuels. Not easy to pull off..... $\endgroup$ – whatsisname Jul 27 '15 at 20:29
$\begingroup$ This answer neglects the fact that trees exist and can fix CO2 for lower energy requirements, being solar powered. $\endgroup$ – March Ho Jul 27 '15 at 23:15
$\begingroup$ We can cut that enthalpy cost by a factor of about two (179 kJ/mol) if we sequester as limestone instead of carbon (and reuse all your simplifying assumptions). [ @MarchHo correctly observes that I conflated enthalpy of reaction with enthalpy of activation, although neither of us used these words. So the catalysis bit is retracted.] $\endgroup$ – Eric Towers Jul 27 '15 at 23:16
$\begingroup$ @EricTowers but is there a better way than photosynthesis to capture solar energy AND simultaneously sequester carbon? $\endgroup$ – Timbo Jul 27 '15 at 23:33
$\begingroup$ The energy requirement of photosynthesis are irrelevant, since they are supplied by the sun and not by humans. $\endgroup$ – March Ho Jul 27 '15 at 23:39
It's... feasible. There are a number of technologies that are being considered. Costs will be high, some number of billions or trillions: you're talking planetary engineering, here.
The most obvious option is to plant trees. The obvious problem with that is that even planting them at the same speed as they are being removed is infeasible. Unfortunately, trees can store only a limited amount of carbon; once a forest's got as crowded as it can, it's maxed out its carbon sequestration capacity, and will never store another gram.
An option that addresses this is to heat crop residue, storing the "biochar", the charcoal from the burned crops, in landfill or even in farmland to enrich the soil. The output of biochar systems is typically around 20% char, 20% usable biogas, 60% usable bio-oil, making them both a net energy source, and a carbon sink.
This system is being considered as an additional revenue stream for the sugarcane growers of Brazil, where it would (if universally adopted) sequester about 330,000,000 tons of carbon annually.
Dewi Morgan
$\begingroup$ Out of curiosity, I did a quick calculation of how much area would have to covered by Amazonian forest to sink the amount of $\ce{CO2}$ mentioned in my answer. Using a value of 61 petagrams of carbon per 424 million hectares for above-ground carbon density of the Amazon shown in this article, we reach an area ~40% larger than Russia, which means 16% of the land area on Earth covered in dense jungle. Doesn't sound possible. $\endgroup$ – Nicolau Saker Neto Jul 27 '15 at 23:35
$\begingroup$ Good point. Forestry's arguably possible in theory, but not economically sane in today's world. $\endgroup$ – Dewi Morgan Jul 28 '15 at 1:48
$\begingroup$ You don't need to grow them all at the same time. Each crop is (ideally chared first, then) burried, say in an old coal mine, and then the land used for another crop. A first step would be to stop burnimg fossil coal in the first place. $\endgroup$ – JDługosz Jul 28 '15 at 15:19
$\begingroup$ Yes: as an alternative to mere forestry, biochar has the advantage that every year's crops can become part of the carbon sequestration, which makes it, to me, a very economically sensible part of the toolset to clean up atmospheric CO2 (it'll take many tools: biochar, artificial trees, scrubbing towers, ocean seeding, etc). Char needn't all be put into landfill or used-up mines either: it can be ploughed back into the earth to help drainage, used in carbon filters, etc. It could also be used in the place of regular charcoal and burned, of course: sadly the most likely outcome, economically. $\endgroup$ – Dewi Morgan Jul 28 '15 at 22:39
$\begingroup$ The forests couldn't do it because they'd be CO2 starved. All plants are currently CO2 starved air flow is needed to keep growing but trying to grow a vast forest or any other plant solution would leave them stunted. $\endgroup$ – user2617804 Jul 29 '15 at 0:11
"Give me a half a tanker of iron and I will give you another ice age"
That's the claim of people who believe in iron seeding the ocean. The link claims that "the addition of silicic acid or choosing the proper location could, at least theoretically, eliminate and exceed all man-made CO2", but no citation is given. As to cost?
"Current estimates of the amount of iron required to restore all the lost plankton and sequester 3 gigatons/year of CO 2 range widely, from approximately 2 hundred thousand tons/year to over 4 million tons/year. The latter scenario involves 16 supertanker loads of iron and a projected cost of approximately €20 billion ($27 billion)."
but again, citation needed!
$\begingroup$ That's pretty much free compared to all the other attempts (including those we are actually trying, and including the huge costs sunk in designing more efficient devices etc.). If they actually had good evidence this would work (and of course, that it wouldn't cause bigger issues which is always a possibility), cost wouldn't be the thing that would stop this. And of course, eventually, the carbon will either return (as fish breed out of control with their new cheap source of food) or drop to the ocean floor (possibly creating more trouble in the future). $\endgroup$ – Luaan Jul 28 '15 at 7:20
$\begingroup$ @Luaan Experiments have been conducted several times. Iron fertilization: Experiments - and yes, there are questions of other issues and the definition of sequestration. $\endgroup$ – user2175 Jul 28 '15 at 14:20
$\begingroup$ My own guess is that the human population will increase so much over the next 200 years that "farming the ocean" will be necessary to prevent mass starvation. So iron seeding is going to go forward no matter what. $\endgroup$ – user14717 Jul 28 '15 at 15:47
$\begingroup$ @user14717 That's a common theme in human history. The starvation part anyway - it hasn't come to pass quite yet, no matter how strongly the guys believed in their pet hypothesis. So far, we've always been a few steps ahead of that technologically. The fixed thermodynamic limits are probably going to make human population infeasible long before we hit limits in food production (unless we somehow lose the food sources we already have, which is of course possible - we're getting better and better in self-contained food production, though). $\endgroup$ – Luaan Jul 28 '15 at 16:22
The $64,000 question is: is it economically feasible to use a 'synthetic' process over a naturally occurring one? In this case, no. Zooplankton are the prime movers of CO2 in our atmosphere. They basically turn gaseous CO2 into calcium carbonate, a lab phrase for sea shells. Increasing the number of zooplankton is key to the process. UV, oil slicks, and the Sargasso seas of plastic are lowering those numbers.
So the answer to the question is: which process is the most cost-effective solution to the problem? Clearly either A, clean up the oceans and promote natural zooplankton growth, or B, mimic the metabolic pathway that results in the production of calcium carbonate, as a GM/organic solution. Either A or B is a far better and more sustainable solution that does not reintroduce/produce CO2 for the energy to power an artificial non-organic/biologic method.
SkipBerne
$\begingroup$ Welcome to chem.SE! This is one borderline answer. I wouldn't know if it's VLQ or not. But it would pay if you be more elaborate. $\endgroup$ – M.A.R. ಠ_ಠ Jul 28 '15 at 16:39
$\begingroup$ This states part of the problem (without references) but doesn't provide any solutions so it doesn't really answer the question. $\endgroup$ – bon Jul 28 '15 at 17:51
Domain The Number System
Cluster Apply and extend previous understandings of operations with fractions to add, subtract, multiply, and divide rational numbers.
Standard Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram.
Task Differences of Integers
Differences of Integers
Alignments to Content Standards: 7.NS.A.1.c 7.NS.A.1.b 7.NS.A.1
Ojos del Salado is the highest mountain in Chile, with a peak at about 6900 meters above sea level. The Atacama Trench, just off the coast of Peru and Chile, is about 8100 meters below sea level (at its lowest point).
What is the difference in elevations between Mount Ojos del Salado and the Atacama Trench?
Is the elevation halfway between the peak of Mount Ojos del Salado and the Atacama Trench above sea level or below sea level? Explain without calculating the exact value.
What elevation is halfway between the peak of Mount Ojos del Salado and the Atacama Trench?
The goal of this task is to subtract integers in a real world context. It will be very helpful for students to use number lines for this task. In the solution they are drawn vertically to match the context of elevation but accurately labeled horizontal number lines are also appropriate. More information about the geographic features mentioned in the task statement can be found at
http://en.wikipedia.org/wiki/Ojos_del_Salado
http://en.wikipedia.org/wiki/Peru%E2%80%93Chile_Trench
Below is a number line showing the elevation of these two locations. Each unit on the number line represents 1000 meters of elevation:
The number line indicates that to find the difference in elevation from the peak of Ojos del Salado to the bottom of the Atacama Trench, we need to add the elevation of Ojos del Salado above sea level to the depth of the Atacama Trench below sea level. In equations, we have \begin{align} 6900 - (-8100) &= 6900 + 8100 \\ &= 15000.\end{align} There is a difference of 15,000 meters between the altitudes of these two locations on the earth. In this equation, 8100 = -(-8100): subtracting -8100 is the same as adding 8100.
Since 8100 $\gt$ 6900, the depth of the Atacama Trench is greater than the height of Ojos del Salado. If these quantities were the same, then the midpoint between them would be at sea level. Since the depth of the Atacama Trench is greater, the midpoint between these two elevations is negative, that is, below sea level.
One way to find the elevation halfway between the mountain peak and trench floor is to divide the 15,000 meter difference between these by 2: 15,000 $\div$ 2 = 7500. This means that the point midway between them will be 7500 meters below the mountain peak and 7500 meters above the ocean floor: 6900 - 7500 = -600 and similarly -8100 + 7500 = -600. So the elevation halfway between the mountain peak and trench floor is at -600 meters.
A second method uses an idea from statistics. The midpoint between 6900 and -8100 is the mean of the two numbers: \begin{align} \frac{6900 + (-8100)}{2} &= \frac{6900-8100}{2}\\ &= \frac{-1200}{2} \\ &= -600.\end{align} Halfway between the peak of Ojos del Salado and the bottom of the Atacama Trench is 600 meters below sea level. This is shown in the number line below:
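For readers who want a quick check, the same arithmetic can be written as a short calculation (illustrative only; not part of the task):

```python
peak = 6900       # elevation of Ojos del Salado, in meters
trench = -8100    # elevation of the Atacama Trench floor, in meters

difference = peak - trench        # 6900 - (-8100) = 15000
midpoint = (peak + trench) / 2    # (6900 + (-8100)) / 2 = -600.0
print(difference, midpoint)
```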
Trajectory Formula
A trajectory is the path followed by a moving object through space as a function of time. Mathematically, a trajectory is described as the position of an object at a given time. A much simplified example would be a ball or rock thrown upwards; the path taken by the stone is determined by the gravitational force and air resistance.
Some more common examples of trajectory motion would be a bullet fired from a gun, an athlete throwing a javelin, a satellite orbiting the earth, etc.
Trajectory formula is given by
\[\large y=x\:tan\,\theta-\frac{gx^{2}}{2v^{2}\,cos^{2}\,\theta}\]
y is the vertical component (height),
x is the horizontal component (horizontal distance),
g is the acceleration due to gravity,
v is the initial velocity,
$\theta$ is the angle of inclination of the initial velocity from the horizontal axis.
Trajectory related equations are:
\[\large Time\;of\;Flight: t=\frac{2v_{0}\,sin\,\theta}{g}\]
\[\large Maximum\;height\;reached: H=\frac{v_{0}^{2}\,sin^{2}\,\theta}{2g}\]

\[\large Horizontal\;Range: R=\frac{v_{0}^{2}\,sin\,2\,\theta}{g}\]
$v_{0}$ is the initial velocity,
$v_{0}\,sin\,\theta$ is the vertical (y-axis) component of the initial velocity,
$v_{0}\,cos\,\theta$ is the horizontal (x-axis) component of the initial velocity.
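The formulas above can be bundled into a short script for quick checks (an illustrative sketch; the demo values are arbitrary and $g$ is taken as 9.8 m/s²):

```python
import math

g = 9.8  # m/s^2

def height(x, v0, theta_deg):
    """Vertical position y at horizontal distance x for launch speed v0 and angle theta."""
    th = math.radians(theta_deg)
    return x * math.tan(th) - g * x**2 / (2 * v0**2 * math.cos(th)**2)

def time_of_flight(v0, theta_deg):
    return 2 * v0 * math.sin(math.radians(theta_deg)) / g

def max_height(v0, theta_deg):
    return v0**2 * math.sin(math.radians(theta_deg))**2 / (2 * g)

def horizontal_range(v0, theta_deg):
    return v0**2 * math.sin(2 * math.radians(theta_deg)) / g

# Example: a projectile launched at 20 m/s and 45 degrees
print(time_of_flight(20, 45), max_height(20, 45), horizontal_range(20, 45))
# ~2.89 s, ~10.2 m, ~40.8 m
```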
Question: Marshall throws a ball at an angle of $60^{\circ}$. If it leaves his hand at 6 m/s and Steve catches it 4 s later, calculate the vertical distance covered by the ball.
Given,
$\theta = 60^{\circ}$
$Initial\;velocity: v_{0} = 6\;m/s$
$Time: t = 4\;s$
The horizontal component of the initial velocity is:
$v_{x0}=v_{0}\,cos\,\theta=6\;m/s\times cos\,60^{\circ}=3\;m/s$
so the horizontal distance covered in 4 s is:
$x=v_{x0}\,t=3\;m/s\times 4\;s=12\;m$
Substituting into the trajectory formula:
$y=x\,tan\,\theta -\frac{gx^{2}}{2v_{0}^{2}\,cos^{2}\,\theta}$
$=12\;m\times tan\,60^{\circ}-\frac{9.8\;m/s^{2}\times (12\;m)^{2}}{2\,(6\;m/s)^{2}\,cos^{2}\,60^{\circ}}$
$\approx 20.8\;m-78.4\;m\approx -57.6\;m$
The negative sign indicates that the point where Steve catches the ball is about 57.6 m below the height from which it was thrown.
IPSJ Transactions on Computer Vision and Applications
Spatio-temporal silhouette sequence reconstruction for gait recognition against occlusion
Md. Zasim Uddin1,
Daigo Muramatsu1,
Noriko Takemura2,
Md. Atiqur Rahman Ahad1 &
Yasushi Yagi1
IPSJ Transactions on Computer Vision and Applications, volume 11, Article number: 9 (2019)
Gait-based features provide the potential for a subject to be recognized even from a low-resolution image sequence, and they can be captured at a distance without the subject's cooperation. Person recognition using gait-based features (gait recognition) is a promising real-life application. However, several body parts of the subjects are often occluded because of beams, pillars, cars and trees, or another walking person. Therefore, approaches that require an unoccluded gait image sequence are not applicable in such situations. Occlusion handling is a challenging but important issue for gait recognition. In this paper, we propose silhouette sequence reconstruction from an occluded sequence (sVideo) based on a conditional deep generative adversarial network (GAN). From the reconstructed sequence, we estimate the gait cycle and extract the gait features from a one-gait-cycle image sequence. To regularize the training of the proposed generative network, we use an adversarial loss based on triplet hinge loss incorporating Wasserstein GAN (WGAN-hinge). To the best of our knowledge, WGAN-hinge is the first adversarial loss that supervises the generator network during training by incorporating pairwise similarity ranking information. The proposed approach was evaluated on multiple challenging occlusion patterns. The experimental results demonstrate that the proposed approach outperforms the existing state-of-the-art benchmarks.
Biometric-based person authentication is becoming increasingly important for various applications, such as access control, visual surveillance, and forensics. Gait recognition is one of the topics of active interest in the biometric research community because it provides unique advantages over other biometric features, such as the face, iris, and fingerprints. For example, it can be captured without the subject's cooperation at a distance and has discriminative capability from relatively low-resolution image sequences [36]. Recently, gait has been used as a forensic feature, and there has already been a conviction produced by gait analysis [14].
However, gait recognition has to manage some practical issues, including observation views [27, 45], clothing [13], carried objects [37], and occlusion. In this study, we address the gait recognition problem against occlusion.
Occlusion for gait recognition can be one of two types based on the relative position between the occluder and the target subject in an image sequence: relative dynamic occlusion and relative static occlusion. For relative dynamic occlusion, the occluded portion of the target subject changes continuously over an image sequence, whereas, for relative static occlusion, the occluded portion does not change. An example of relative dynamic occlusion is shown in Fig. 1a and b, in which the person is occluded at different positions in each frame and the occluded portion of the person's body gradually changes in the video sequence during the person's gait cycle. For the example of relative static occlusion shown in Fig. 1c, the person is occluded at a fixed portion of the body in each frame in the video sequence during the person's gait cycle.
Examples of occlusion in real-life applications (every fifth frame of a sequence). a Relative dynamic occlusion where the subject is occluded by a tree and continuously changes the occluded portion from left to right. b Relative dynamic occlusion where the subject is occluded by a parked car and continuously changes the occluded portion from bottom to top. c Relative static occlusion where the subject is occluded by wall in a fixed position
Approaches to gait recognition against occlusion can be roughly grouped into two categories. The first category is reconstruction-free approaches [5, 28, 29, 48], which focus on extracting features from a silhouette sequence of a gait cycle or an average of them, such as the gait energy image (GEI) [10]. Because gait features are extracted by considering static shape and dynamic motion information from a silhouette sequence for a gait cycle, approaches of this type can achieve good performance for a very low degree of occlusion; however, the obvious limitation of this type of approach is that it is not applicable to cases in which the gait cycle is difficult to estimate.
The second category is reconstruction-based approaches [12, 33]. Approaches in this category focus on reconstructing occluded silhouettes. In these approaches, occluded silhouettes are identified and a sequence is separated into occluded and unoccluded gait cycles, and then silhouettes of occluded gait cycles are reconstructed. These approaches showed good silhouette reconstruction. However, they were applied to long sequences that consisted of multiple gait cycles in which some frames were partially occluded. These approaches are difficult to apply in the case in which all frames are severely occluded in a sequence, for example, the occlusions shown in Fig. 1a and b. One of the major limitations of reconstruction-based approaches is that the reconstructed silhouette sequence sometimes deteriorates the discrimination ability of the individual after reconstruction. Therefore, it can negatively influence gait recognition performance after reconstruction [22].
With the great success of deep convolutional neural networks (CNNs) and generative adversarial networks (GANs) [8] in many research areas of computer vision and biometrics, reconstruction-based approaches have been formulated as a conditional image or video generation problem for image inpainting [15, 21, 30, 42, 44], video inpainting [20, 39], and future prediction [4, 20, 23, 38]. Although these works have been shown to generate very good-looking realistic images, such as faces, objects, and scenes, they sometimes lose subject identity [46]. An approach that can generate not only good-looking samples but also samples with the discrimination ability of an individual is necessary for biometric-based person recognition.
We present an effective feed-forward conditional deep generative network for silhouette sequence reconstruction considering dilated convolution [15, 43] and a skip connection [32]. Dilated convolutional kernels are spread out in the spatial and temporal directions, which allows us to reconstruct each pixel by covering a large spatio-temporal input area. This is important for silhouette sequence reconstruction because each input pixel is important for reconstruction, whereas a skip connection allows us to retain unoccluded input pixels as output. The input to the encoder network that maps hidden representations is the occluded silhouette sequence, and the output of the decoder is the reconstructed silhouette sequence. We regularize the training process of the generator network by incorporating triplet hinge loss into Wasserstein GAN (WGAN) loss [1, 9] as adversarial loss and reconstruction loss in pixel space. A triplet contains a query sequence, positive sequence, and negative sequence, where the query sequence is the reconstructed silhouette sequence, the positive sequence is the unoccluded silhouette sequences of the same subject as the query subject, and the negative sequence is of a different subject. The similarity relationship is characterized by the relative distance in the triplet.
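As a rough sketch only (the paper's exact WGAN-hinge formulation and the space in which distances are computed are not reproduced in this excerpt, so the embedding inputs and the margin below are assumptions), the triplet term can be written as a margin-based hinge on relative distances:

```python
import torch.nn.functional as F

def triplet_hinge(feat_q, feat_p, feat_n, margin=1.0):
    """Hinge on relative distances over (batch, dim) embeddings: the reconstructed
    (query) sequence should lie closer to the same subject's unoccluded sequence
    (positive) than to a different subject's sequence (negative)."""
    d_pos = F.pairwise_distance(feat_q, feat_p)
    d_neg = F.pairwise_distance(feat_q, feat_n)
    return F.relu(margin + d_pos - d_neg).mean()

# Schematic generator objective:
# loss_G = reconstruction_loss + lambda_adv * wgan_generator_loss + lambda_tri * triplet_hinge(...)
```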
The entire network is trained end to end with the reconstruction and proposed adversarial losses. Compared with existing inpainting or reconstruction-based approaches, one of the major advantages of our proposed approach is that it does not require occluded or inpainting position information (i.e., a mask) for reconstruction. Therefore, it can be applied to an arbitrarily structured occluded silhouette sequence during reconstruction. Because of the silhouette sequence reconstruction approach, we can evaluate gait recognition without knowing the gait cycle in advance because the gait cycle can be estimated from the reconstructed silhouette sequence.
The contributions of this paper are summarized as follows:
We propose to design a conditional deep generative network (sVideo) that consists of a generator with dilated convolution and a skip connection, and a critic network. It can reconstruct any type of occluded silhouette sequence.
We propose a novel adversarial loss based on triplet hinge loss incorporated with WGAN loss (WGAN-hinge). To the best of our knowledge, WGAN-hinge is the first adversarial loss that supervises the generator network during training by incorporating pairwise similarity ranking information.
We demonstrate the stability of the proposed generative network using the supervision of adversarial loss for WGAN and also propose WGAN-hinge loss during training for various experiments to reconstruct the silhouette sequence and present superior results for gait recognition compared with the state-of-the-art methods. Additionally, we also demonstrate that the proposed WGAN-hinge for a different generator network yields performance improvements over WGAN.
Existing approaches for gait recognition against occlusion
In this section, we review the works related to gait recognition against occlusion as two families: reconstruction-free approaches and reconstruction-based approaches.
Regarding reconstruction-free approaches, the following methods have been proposed. Zhao et al. [48] extracted features based on fractal scale wavelet analysis for each silhouette from a sequence of a gait cycle and then averaged them. They evaluated robustness against noisy data in addition to occluded data by adding a vertical bar in the silhouette sequence. Chen et al. [5] proposed an approach for an incomplete and occluded silhouette for gait recognition. They divided the silhouette sequence of a gait cycle into clusters, and the dominant energy image (DEI) was calculated by denoising each cluster. The frame difference energy image (FDEI) for a silhouette was computed as the summation of its corresponding clusters' DEI and the positive portion of the difference from the previous frame. Finally, features are extracted from the FDEI representation that mitigated the problem of spatial and temporal silhouette incompletion caused by imperfect silhouette segmentation and occlusion. In [29], a robust statistical framework was proposed that minimized the influence of silhouette defects. The authors evaluated gait recognition on GEIs and gradient histogram energy images by adding occlusion and noise into a silhouette sequence. A different technique to manage the problem of occlusion was addressed in [28], in which a GEI was separated into four modules and a module was excluded for gait recognition if occlusion was detected.
Regarding reconstruction-based approaches, Roy et al. [33] proposed a framework in which a silhouette sequence was first divided into a few subsequences of gait cycle(s) based on key poses. It also allowed the determination of whether a silhouette of a gait cycle was occluded. Occluded silhouettes were then reconstructed using a balanced Gaussian process dynamical model. Although the authors evaluated the reconstruction accuracy, they did not evaluate gait recognition using the reconstructed silhouette sequence. Hofmann et al. [12] proposed a simple approach to detect partially occluded gait subsequences from a sequence using foreground pixels. Occluded silhouettes were then replaced by similar-pose clean silhouettes from other cycles. In [26], a complete GEI was regenerated from a partially observable GEI using a subspace-based method. Gait recognition was evaluated according to whether or not a matching pair shared a common observable region.
From the above discussion, we can observe that some approaches manage occlusion directly on the pre-processed GEI feature for a gait cycle. Thus, they assume that the gait cycle is known in advance. The remaining approaches estimate the gait cycle from the occluded silhouette sequence, which is very difficult or error prone when all frames are occluded in a sequence, as in, for example, Fig. 1. Furthermore, they consider a long sequence in which multiple gait cycles are available for gait recognition. However, there are many scenarios in real-world applications in which only a few frames (i.e., not more than a gait cycle) are available in a sequence, and all are partially or totally occluded. In those scenarios, existing approaches are not applicable.
Deep generative approach
GAN [8] is a framework for training generative models, implemented as a system of two neural networks: generative network G and auxiliary discriminator network D. The discriminator network serves to distinguish whether content is generated by a network or is real, whereas the generator network is trained to fool the discriminator network. Specifically, G and D are trained by solving the following minimax problem:
$$ \min_{G}\max_{D}\underset {{x\sim {\mathbb{P}_{r}}}}{\mathbb{E}} \left [{\log D \left ({x }\right) }\right] + \underset {{G \left({z}\right)\sim {\mathbb{P}_{g}}}}{\mathbb{E}} \left [{ \log \left ({1-D \left (G \left({z}\right) \right) }\right) }\right], $$
where \(\mathbb {E(\cdot)}\) denotes the expectation operator, and \(\mathbb {P}_{r}\) and \(\mathbb {P}_{g}\) are the real and generated data distributions, respectively. Generator G transforms input sample z to mimic a real sample. However, one of the main concerns of GAN is instability during training. Numerous works have addressed improving the training stability. Radford et al. [31] introduced deep convolutional GANs (DCGAN) that imposed empirical constraints on the architecture of the GAN and optimized the hyperparameters. Recently, Arjovsky et al. [1] introduced WGAN [9], which minimizes the Earth Mover's Distance (a.k.a Wasserstein-1) between the generator and real data distribution. Specifically, the objective function was constructed by applying the Kantorovich-Rubinstein duality:
$$\begin{array}{*{20}l} \min_{G}\max_{D\in \mathcal{D}} \underset {{x\sim {\mathbb{P}_{r}}}} {\mathbb{E}} \left [D(x) \right]-\underset {{G (z)\sim {\mathbb{P}_{g}}}}{\mathbb{E}} \left[D \left(G (z) \right) \right], \end{array} $$
where \(\mathcal {D}\) is the set of 1-Lipschitz functions. To enforce the Lipschitz constraint on the critic function, Gulrajani et al. [9] proposed an improved version of WGAN with a gradient penalty term with respect to the input. The new objective is as follows:
$$\begin{array}{*{20}l} &\min_{G}\max_{D}\underset {{x\sim {\mathbb{P}_{r}}}}{\mathbb{E}} \left[D(x) \right]-\underset {{G (z) \sim {\mathbb{P}_{g}}}}{\mathbb{E}} \left[D(G (z)) \right] + \lambda L_{GP}, \end{array} $$
where \( L_{GP}= \underset {{{\hat {x}}\sim {\mathbb {P}_{\hat {x}}}}} {\mathbb {E}} \left [ \left (\left \|{ \nabla _{\hat {x}}{D({\hat {x}})} }\right \|_{2}-1 \right)^{2} \right ] \), \(\hat {x} = \epsilon x+(1-\epsilon) \tilde {x}\), and λ is a gradient penalty coefficient and ε∼U[0,1]. The authors called the auxiliary network a critic instead of discriminator because it is not a classifier. In this paper, we train our proposed approach using the framework of WGAN with the gradient penalty coefficient [9]. We present our approach in detail in Section 3.
Image and video reconstruction
A large body of literature exists for image and video reconstruction from traditional approaches to learning-based approaches (i.e., deep learning). Traditional approaches include diffusion-based [3] and patch-based techniques[7]. The diffusion-based technique propagates the image appearance around the target position, where propagation can be performed based on the isophote direction field, whereas the patch-based technique extracts patches from a source image and then pastes them into a target image. The patch-based technique is also used for video completion [40] by replacing image patches with spatio-temporal synthesis across frames. However, these types of approaches can only fill a very small and homogeneous area, and one obvious limitation is the repetition of content.
Recently, conditional GAN-based [25] approaches have emerged as promising tools for image and video completion. Regarding image completion, a context encoder (CE) [30] was the first attempt to train deep neural networks for image completion. It is trained to complete the center region using pixel-wise reconstruction and single discriminator loss. Some approaches in the literature introduce two discriminators/critics [15, 21, 44] as adversarial losses, where one discriminator/critic considers the entire image and the other focuses on a small hole area to enforce local consistency. However, the main issue for these approaches is that they assume the occluded/inpainting position is known during training and testing. The generator takes the masked image as input and outputs the generated image, and finally, it replaces pixels in the non-masked region of the generated image with the original pixels.
Regarding video completion, there are very few works in the literature. Vondrick et al. [38] first proposed a video generative network for video generation and predicted the future frame using the DCGAN model [31] and spatio-temporal three-dimensional (3D) convolutions [18, 35]. Later, Kratzwald et al. [20] improved the video generative network using WGAN with a gradient penalty critic network and applied it to multi-functional applications.
With the goal of achieving high gait recognition accuracy regardless of whether the generated silhouette sequence is of high or poor quality, we propose a conditional generative network for silhouette sequence reconstruction using spatio-temporal 3D convolution [18, 35] with a dilated kernel [43] in a bottleneck layer to enlarge the receptive fields of the output neurons while maintaining a constant number of learnable weights. To regularize the generative network, we explore a triplet hinge loss incorporated into the WGAN loss with gradient penalty.
Spatio-temporal silhouette sequence reconstruction
The goal of the proposed approach is to reconstruct a silhouette sequence from an occluded sequence based on conditional GANs. An overview of our approach is shown in Fig. 2. The proposed approach uses generator G and critic D networks. A single generator network is used for the reconstruction, whereas the additional network critic is used to supervise the generator network during training to realistically reconstruct and preserve subject identity. After training, generator G can take an occluded silhouette sequence and reconstruct it.
Overview of our silhouette sequence reconstruction framework. It consists of a generator (encoder and decoder) and a critic network. The generator takes the occluded silhouette sequence as input and outputs the reconstructed silhouette sequence. The critic is used to supervise the generator network during training (i.e., positive reference is unnecessary for the target subject reconstruction during testing)
Different from existing video generative approaches [20, 38], we design a very deep architecture for the generator network based on spatio-temporal 3D convolution with small kernels, along with dilated convolution and skip connections; we explain the details in Section 3.1. Regarding the critic network, we chose a critic architecture similar to [20]; however, the training procedures are different, as explained in Section 3.2.
Generator G is designed as a simple encoder-decoder pipeline. The occluded input silhouette sequence to the encoder is first mapped to hidden representations, which allows low memory and low computational cost by decreasing the spatial and temporal resolutions. Unlike a pooling layer, the encoder decreases the resolution twice using strided convolutions to avoid a blurred texture in the occluded regions. Then, the decoder takes this low-dimensional feature representation and restores it to the original spatial and temporal resolution through the convolutional layers with fractional strides [47]. Unlike [15, 38], we use convolution kernels of 3×3×3 (time × width × height) and 4×4×4 because it is proven that small kernels perform better in a deep 3D network [35]. An illustration of the generator network architecture is shown in Fig. 3.
Illustration of the architecture of the generator network. The silhouette sequence and feature dimensions are shown in the figure and denoted as "time × width × height"
We use dilated convolution [43] in the mid-layers and skip connections [32] in the top layers. The dilated convolutional kernels are spread out in the spatio-temporal directions, which allows each output pixel to be computed from a much larger input area while the number of parameters and the computational cost remain constant. This is important for reconstructing a silhouette sequence from a partially observable occluded sequence because the spatial context and neighboring frame information are critical for reconstruction. To keep unoccluded input pixels in the reconstructed sequence, we use a U-shape-like network with skip connections (i.e., feature maps of the encoder are combined with those of the decoder), which is possible because the decoder path is more or less symmetric to the encoder path.
We initialize the convolutional weights for stable training and faster convergence following [11] and apply batch normalization [16] (zero mean and unit variance) followed by ReLU activation functions after each layer, except the final output layer. A hyperbolic tangent function is used in the last layer, which normalizes the reconstructed sequence to the range [− 1,1].
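As a rough illustration of the pipeline described above (two strided 3D-convolution down-samplings, a dilated bottleneck, fractionally strided up-sampling, additive skip connections, and a tanh output), a Keras-style sketch is given below. This is not the authors' implementation: the channel widths, the number of layers, and the dilation rates are assumptions for illustration only, and the exact architecture is the one shown in Fig. 3.

```python
import tensorflow as tf
from tensorflow.keras import layers

def conv3d_bn_relu(x, filters, kernel=3, strides=1, dilation=1):
    # "Same" padding keeps the spatio-temporal size, except for the stride.
    x = layers.Conv3D(filters, kernel, strides=strides, padding="same",
                      dilation_rate=dilation, use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)

def build_generator(frames=32, height=64, width=64):
    inp = layers.Input((frames, height, width, 1))            # occluded sequence in [-1, 1]
    e1 = conv3d_bn_relu(inp, 64, kernel=3)                     # full-resolution features
    e2 = conv3d_bn_relu(e1, 128, kernel=4, strides=2)          # 1st down-sampling
    e3 = conv3d_bn_relu(e2, 256, kernel=4, strides=2)          # 2nd down-sampling
    m = conv3d_bn_relu(e3, 256, kernel=3, dilation=2)          # dilated bottleneck
    m = conv3d_bn_relu(m, 256, kernel=3, dilation=4)           # larger receptive field
    d2 = layers.Conv3DTranspose(128, 4, strides=2, padding="same", use_bias=False)(m)
    d2 = layers.ReLU()(layers.BatchNormalization()(d2))
    d2 = layers.Add()([d2, e2])                                # skip connection (element-wise add)
    d1 = layers.Conv3DTranspose(64, 4, strides=2, padding="same", use_bias=False)(d2)
    d1 = layers.ReLU()(layers.BatchNormalization()(d1))
    d1 = layers.Add()([d1, e1])
    out = layers.Conv3D(1, 3, padding="same", activation="tanh")(d1)
    return tf.keras.Model(inp, out, name="generator")
```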
Critic network
Different from existing GANs [1, 9, 20], in which a discriminator/critic distinguishes generated samples from ground truth samples and adversarial supervision pushes the generator to maximally fool the discriminator, we propose an updated WGAN. Our proposed critic network, D, distinguishes a reconstructed silhouette sequence from the ground truth and simultaneously performs pairwise similarity ranking: the critic assigns a smaller distance to a silhouette sequence of the same subject and a larger distance to that of a different subject, which is realized using a hinge loss. The hinge loss, combined with the WGAN loss, forms the adversarial loss with which the generator is trained to maximally fool the critic.
The architecture and layer settings are similar to [20]. Specifically, we use five convolutional layers, followed by a linear downsampling layer with 4×4×4 kernels along with a stride of 2×2×2. We set the number of output channels for the first layer to 64 and double the values as the layer gets deeper. Similar to DCGAN [31], we use LeakyReLU [41] with a threshold of 0.2. Similar to [9], we use layer normalization [2] instead of batch normalization. Because the critic is not trained to classify between the reconstructed silhouette sequence and ground truth, we exclude softmax or any other activation in the final layer and instead train the network to provide good gradient information for generator updates.
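A minimal sketch of such a critic is given below, again in Keras style and not the authors' code: five strided 3D convolutions starting from 64 channels and doubling, layer normalization, LeakyReLU with slope 0.2, and a single unbounded output score. The flattening and dense head at the end is an assumption, since the text only states that the critic outputs a real-valued scalar.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_critic(frames=32, height=64, width=64):
    inp = layers.Input((frames, height, width, 1))
    x = inp
    filters = 64
    for _ in range(5):                        # five strided 3D convolutions
        x = layers.Conv3D(filters, 4, strides=2, padding="same")(x)
        x = layers.LayerNormalization()(x)    # layer normalization instead of batch norm
        x = layers.LeakyReLU(0.2)(x)
        filters *= 2                          # 64 -> 128 -> 256 -> 512 -> 1024
    x = layers.Flatten()(x)
    out = layers.Dense(1)(x)                  # real-valued score, no softmax/sigmoid
    return tf.keras.Model(inp, out, name="critic")
```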
Training objective
To train our networks, we use objective functions composed of a silhouette sequence reconstruction loss and an adversarial loss that combines the WGAN loss with a hinge loss. Given occluded silhouette sequences z and corresponding ground truth sequences x, along with a positive reference \(\bar {x}\) (same subject as the ground truth) and a negative reference \( \bar {\bar {x}}\) (different subject), our proposed approach is trained to minimize the generative loss for generator network G:
$$ L_{\text{gen}}=L_{\text{adv}} + \gamma L_{\text{img}}, $$
where γ is a weighting parameter to control the trade-off between adversarial Ladv and image loss Limg.
Image loss Limg calculates the mean squared error, which attempts to minimize the pixel-wise error between the reconstructed (\(\tilde {x} = G(z)\)) and ground truth silhouette sequence. It is well known that stabilizing the adversarial training is a significant issue in GANs. A loss in image space is added with adversarial loss, and the loss in image space can contribute to stabilizing the training [6]. We, therefore, employed the image loss Limg with adversarial loss in our proposed approach, which can be defined as follows:
$$ L_{\text{img}} = \underset {{\tilde{x}, x \sim {\mathbb{P}_{g}, \mathbb{P}_{r}}}}{\mathbb{E}}\left[{(\tilde{x} -{x})^{2}} \right], $$
where \(\mathbb {P}_{g}\) and \(\mathbb {P}_{r}\) represent the distributions of reconstructed silhouette sequence \(\tilde {x}\) and ground truth x, respectively.
Adversarial loss Ladv is the generator loss in adversarial training, which is the combination of WGAN loss and triplet ranking hinge loss, which can be defined as follows:
$$ L_{\text{adv}}=L_{\text{WGAN}} - \kappa L_{\text{hinge}}, $$
where \(L_{\text {WGAN}}=-\underset {\tilde {x} \sim \mathbb {P}_{g}} {\mathbb {E}} \left [{D(\tilde {x})} \right ]\) is the WGAN loss, Lhinge is the hinge loss for pairwise similarity ranking, and κ is the coefficient to control the trade-off between WGAN and the proposed hinge loss. The output of critic network D is a real-valued scalar, and the hinge loss is calculated using the relative distance of the output of the reconstructed silhouette sequence with the positive reference (i.e., the silhouette sequence of same subject to the reconstructed silhouette sequence) and negative reference (i.e., the silhouette sequence of a different subject to the reconstructed silhouette sequence). Specifically, the triplet pairwise ranking hinge loss function can be defined as follows:
$$ \begin{aligned} L_{\text{hinge}}=\max(\text{margin} - & \underset{\tilde{x}, \bar{\bar{x}} \sim \mathbb{P}_{g}, \mathbb{P}_{\bar{\bar{x}}}} {\mathbb{E}} \left[|D(\tilde{x}) - D(\bar{\bar{x}})|\right] \\ + & \underset{\tilde{x}, \bar{x} \sim \mathbb{P}_{g}, \mathbb{P}_{\bar{x}}} {\mathbb{E}} \left[|D(\tilde{x}) - D(\bar{x})|\right], 0), \end{aligned} $$
where \(\mathbb {P}_{g}\), \(\mathbb {P}_{\bar {x}}\), and \(\mathbb {P}_{\bar {\bar {x}}}\) represent the distributions of the reconstructed sequence \(\tilde {x}\), the positive reference \(\bar {x}\), and the negative reference silhouette sequence \(\bar {\bar {x}}\), respectively.
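The triplet ranking term can be written as a direct transcription of the expression above. The sketch below assumes the critic scores have already been computed for the reconstructed, positive, and negative sequences, and uses the margin value of 3 reported in the implementation details later in this section.

```python
import tensorflow as tf

def triplet_hinge_loss(d_fake, d_pos, d_neg, margin=3.0):
    """L_hinge = max(margin - E|D(x_tilde) - D(x_neg)| + E|D(x_tilde) - D(x_pos)|, 0)."""
    neg_dist = tf.reduce_mean(tf.abs(d_fake - d_neg))   # should be large (different subject)
    pos_dist = tf.reduce_mean(tf.abs(d_fake - d_pos))   # should be small (same subject)
    return tf.maximum(margin - neg_dist + pos_dist, 0.0)
```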
Similar to generator network G, we train critic network D using the framework of the improved WGAN with a gradient penalty [9] together with the proposed hinge loss. Specifically, critic network D is trained to minimize the following loss function:
$$\begin{array}{*{20}l} L_{\text{critic}}= \underset{\tilde{x}, x \sim \mathbb{P}_{g}, \mathbb{P}_{r}} {\mathbb{E}} \left[D(\tilde{x}) - D(x) \right] + \lambda L_{GP} + \kappa L_{\text{hinge}}, \end{array} $$
where \( L_{GP}= \underset {{{\hat {x}}\sim {\mathbb {P}_{\hat {x}}}}} {\mathbb {E}} \left [ \left (\left \|{ \nabla _{\hat {x}}{D({\hat {x}})} }\right \|_{2}-1 \right)^{2} \right ] \), \(\hat {x} = \epsilon x+(1-\epsilon) \tilde {x}\), λ is a gradient penalty coefficient, and ε∼U[0,1]. We used Adam optimization [19] to update both networks G and D with a batch size of 32 and a learning rate of α=0.0001 for a fixed number of iterations n of the generator network. The other hyperparameters for the Adam optimizer were set to β1=0.5 and β2=0.99. Algorithm ?? shows the complete algorithm for training our proposed framework. We used the default λ=10, as suggested in [9], and γ=1000 according to [20]. The values of the coefficient κ and the margin were determined empirically as 20 and 3, respectively, for each experiment. All the networks were implemented in Python with the TensorFlow library, and all experiments were trained from scratch. We normalized all silhouette sequences to be in the range [− 1,1].
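For concreteness, a sketch of one training step combining these losses with the reported hyperparameters is shown below. It is an illustrative TensorFlow 2 reconstruction, not the authors' implementation: it reuses the `gradient_penalty` and `triplet_hinge_loss` helpers sketched earlier, assumes `generator` and `critic` are Keras models, and performs one critic update followed by one generator update (the exact update schedule of the original algorithm may differ).

```python
import tensorflow as tf

# Hyperparameters as reported above.
LAMBDA, GAMMA, KAPPA, MARGIN = 10.0, 1000.0, 20.0, 3.0
g_opt = tf.keras.optimizers.Adam(1e-4, beta_1=0.5, beta_2=0.99)
d_opt = tf.keras.optimizers.Adam(1e-4, beta_1=0.5, beta_2=0.99)

def train_step(generator, critic, z, x, x_pos, x_neg):
    """z: occluded input, x: ground truth, x_pos / x_neg: same / different subject references."""
    # ---- critic update: L_critic = E[D(x_tilde)] - E[D(x)] + lambda*L_GP + kappa*L_hinge ----
    with tf.GradientTape() as tape:
        x_fake = generator(z, training=True)
        d_fake = critic(x_fake, training=True)
        d_real = critic(x, training=True)
        d_pos = critic(x_pos, training=True)
        d_neg = critic(x_neg, training=True)
        l_hinge = triplet_hinge_loss(d_fake, d_pos, d_neg, MARGIN)
        l_critic = (tf.reduce_mean(d_fake) - tf.reduce_mean(d_real)
                    + LAMBDA * gradient_penalty(critic, x, x_fake)
                    + KAPPA * l_hinge)
    grads = tape.gradient(l_critic, critic.trainable_variables)
    d_opt.apply_gradients(zip(grads, critic.trainable_variables))

    # ---- generator update: L_gen = L_adv + gamma*L_img, with L_adv = L_WGAN - kappa*L_hinge ----
    with tf.GradientTape() as tape:
        x_fake = generator(z, training=True)
        d_fake = critic(x_fake, training=True)
        d_pos = critic(x_pos, training=True)
        d_neg = critic(x_neg, training=True)
        l_img = tf.reduce_mean(tf.square(x_fake - x))
        l_adv = -tf.reduce_mean(d_fake) - KAPPA * triplet_hinge_loss(d_fake, d_pos, d_neg, MARGIN)
        l_gen = l_adv + GAMMA * l_img
    grads = tape.gradient(l_gen, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return l_critic, l_gen
```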
To evaluate the accuracy of the proposed approach against a wide variety of occlusion patterns, we artificially simulated several occlusion patterns because there is no large-scale gait recognition database with occlusion variation that is publicly available, and systematic analysis for multiple occlusion patterns is necessary for evaluation. Regarding the evaluation, we used three sets of experiments to validate the proposed approach. These experiments were intended to address a variety of challenges for different occlusion patterns and different training settings that simulate multiple scenarios. We compared the results with the state-of-the-art approaches. The purposes of these experiments were to evaluate gait recognition for the following conditions:
The occlusion pattern was known and the same for a matching pair (probe against gallery)
The occlusion pattern was known and different for a matching pair
The occlusion pattern was unknown for a matching pair
We used the OU-ISIR Gait Database, Multi-View Large Population Dataset (OU-MVLP) [34], which is composed of gait image sequences with multiple views from 10,307 subjects, with a wide variety of ages and equal distribution of males and females. The image sequences were captured in a controlled environment with a green background at 25 fps using cameras placed approximately 8 m from the course at a height of 5 m. The silhouette sequence was extracted using a chroma key technique, and then the size was normalized by considering the top, bottom, and horizontal center of the silhouette regions for the subject of interest such that the height was 64 pixels and the aspect ratio of each region was maintained. Finally, 44×64 pixel silhouette images were generated. For our experiments, we chose a subset of side views and included only the 9,001 subjects that had at least two sequences. To simulate occlusion, 32 contiguous size-normalized silhouettes of a sequence were used; if a sequence had fewer than 32 samples, we repeated the last frame to make it uniform.
Occlusion pattern
We considered two categories of real-world occlusion that could occur in daily life, that is, relative dynamic and relative static occlusion, together with one artificial random occlusion. Regarding relative dynamic occlusion, we simulated an occlusion type in which a person walking from right to left is occluded by a beam, pillar, or tree covering the entire height (e.g., Fig. 1a). As a result, if the person walked at a constant speed, the occluder appears to move from left to right in a continuous motion relative to the subject of interest in the image sequence. To realize this pattern, we added a background rectangular mask (i.e., set to zero in the occluded position) covering a certain area of the entire silhouette at the left-most position in the first frame of a sequence, and gradually shifted the mask so that it reached the right-most position in the last frame. Later, we refer to this type of occlusion as relative dynamic occlusion from left to right (RDLR). Similarly, we simulated relative dynamic occlusion from bottom to top (RDBT), in which an occluder occludes a person from the bottom to the top (e.g., Fig. 1b).
Regarding relative static occlusion, we added a background mask in a fixed position for all frames in a sequence. Therefore, we simulated relative static occlusion in the bottom (RSB), top (RST), left (RSL), and right (RSR) positions. Regarding random occlusion, we added a background mask in a random position in the horizontal and vertical directions across the silhouette sequence. Later, we refer to this as random occlusion horizontally (RandH) and random occlusion vertically (RandV), respectively. For each silhouette in a sequence, we added 30%, 40%, and 50% degrees of occlusion against the full area for each type of occlusion. As a result, we simulated a total of 24 occlusion patterns. Figure 4 shows the simulated occluded silhouette sequence for a subject.
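As an illustration, the RDLR pattern can be simulated with a simple moving rectangular mask, as in the NumPy sketch below. The exact mask geometry and motion law used in the experiments are assumptions reconstructed from the description above; the static (RSB, RST, RSL, RSR) and random (RandH, RandV) patterns follow the same masking idea with fixed or random positions.

```python
import numpy as np

def simulate_rdlr(seq, degree=0.5):
    """Relative dynamic occlusion from left to right (RDLR).

    seq: silhouette sequence of shape (T, H, W) with background pixels equal to 0.
    degree: fraction of the frame width covered by the moving occluder.
    """
    t_len, _, width = seq.shape
    occluded = seq.copy()
    occ_w = int(round(degree * width))                   # occluder width in pixels
    for t in range(t_len):
        # The mask starts at the left-most position and reaches the right-most one.
        start = int(round((width - occ_w) * t / max(t_len - 1, 1)))
        occluded[t, :, start:start + occ_w] = 0          # masked pixels set to the background value
    return occluded
```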
Example of simulated occlusion for a subject. The left-hand side of the figure: labels for the occlusion pattern, where the first term indicates the type of occlusion and the second term shows the degree of occlusion. The occluded area is gray only for visualization purposes; in the experiment, we masked the occluded area with black, namely the values of the masked area are set to zero; this value is the same for the background
Experimental settings
We divided the total subjects randomly into three disjoint sets of approximately equal size: 3000 training, 3001 validation, and 3000 test subjects. Then, the validation and test sets were divided into two subsets: gallery set and probe set. The validation set was used to select the best iteration number n for experiments, whereas the test set was used to evaluate the accuracy of our proposed approach and other state-of-the-art approaches. Because the number of samples was large for the experiments of unknown occlusion pattern compared with the experiments of known occlusion pattern, it took more iterations to converge. We, therefore, trained the proposed approach using a validation dataset for up to 30,000 iterations for experiments for known occlusion pattern, whereas we used 60,000 iterations for unknown occlusion pattern and saved the learned parameter every 3000 iterations to select the best iteration from them for testing. We followed the same procedure for all other state-of-the-art benchmarks for a fair comparison to select the best learned model using the validation dataset.
OU-MVLP contained multiple subsequences of more or fewer than 32 silhouette frames; therefore, we selected all the subsequences of 32 silhouette frames for training to increase the number of training samples, and the centered subsequences of 32 frames were used for the validation and test sets, where the starting pose was not the same between the probe and gallery. We padded both sides of the width of each silhouette in a sequence with zeros to obtain a 64×64 pixel resolution from the 44×64 pixel resolution to fit the network. After reconstructing a sequence, we cropped it back to the original silhouette size (44×64).
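The width padding and the corresponding crop after reconstruction amount to the following NumPy operations (a small sketch under the assumption that sequences are stored as time × height × width arrays):

```python
import numpy as np

def pad_width(seq, target=64):
    """Zero-pad the 44-pixel width of each 64x44 silhouette symmetrically to `target` pixels."""
    left = (target - seq.shape[2]) // 2
    right = target - seq.shape[2] - left
    return np.pad(seq, ((0, 0), (0, 0), (left, right)), mode="constant")

def crop_width(seq, original=44):
    """Crop reconstructed 64x64 frames back to the original 44-pixel width."""
    left = (seq.shape[2] - original) // 2
    return seq[:, :, left:left + original]
```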
Unlike existing conditional video generative approaches [20, 38], which evaluate their generated samples by manual rating, we evaluate the accuracy of gait recognition from the reconstructed silhouette sequence.
Because GEI is the most widely used feature in gait recognition, and it can achieve good recognition accuracy, we used the GEI as a gait feature. A GEI was constructed by averaging the subjects' silhouette image sequence over a gait cycle. The gait cycle was determined using normalized autocorrelation [17] of the silhouette image sequence along the temporal axis. If several gait cycles were detected, then we chose the first gait cycle. Finally, we calculated the dissimilarity using the L2 distance between two GEIs (i.e., probe and gallery).
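A minimal NumPy sketch of this pipeline is shown below; the autocorrelation normalization and the admissible period range are illustrative assumptions, not the exact procedure of [17].

```python
import numpy as np

def gait_period(seq, min_lag=15, max_lag=40):
    """Estimate the gait period by normalized autocorrelation along the temporal axis."""
    flat = seq.reshape(len(seq), -1).astype(np.float64)
    flat -= flat.mean(axis=0, keepdims=True)
    denom = np.sum(flat * flat) + 1e-12
    lags = list(range(min_lag, min(max_lag, len(seq) - 1)))
    corr = [np.sum(flat[:-lag] * flat[lag:]) / denom for lag in lags]
    return lags[int(np.argmax(corr))]

def gait_energy_image(seq):
    """GEI: average of the size-normalized silhouettes over the first detected gait cycle."""
    period = gait_period(seq)
    return seq[:period].astype(np.float64).mean(axis=0)

def dissimilarity(gei_probe, gei_gallery):
    """L2 distance between two GEIs (probe vs. gallery)."""
    return float(np.linalg.norm(gei_probe - gei_gallery))
```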
We evaluated the accuracy of gait recognition using two modes: identification and verification. We plotted the cumulative matching curve (CMC) for identification and the receiver operating characteristic curve (ROC) for verification, which indicates the trade-off between the false rejection rate of genuine samples and false acceptance rate of imposter samples with varying thresholds. Moreover, we evaluated more specific measures for each evaluation mode: rank-1/5 for identification and the equal error rate (EER) for verification.
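Given a probe-gallery distance matrix built from these dissimilarities, the reported measures can be computed as in the sketch below (a generic illustration; the exact evaluation protocol, e.g., tie handling and threshold sampling, is an assumption):

```python
import numpy as np

def rank_k_rates(dist, probe_ids, gallery_ids, ranks=(1, 5)):
    """Rank-k identification rates from an (n_probe, n_gallery) distance matrix."""
    order = np.argsort(dist, axis=1)                       # gallery indices sorted by distance
    gallery_ids = np.asarray(gallery_ids)
    return {k: float(np.mean([probe_ids[i] in gallery_ids[order[i, :k]]
                              for i in range(len(probe_ids))])) for k in ranks}

def equal_error_rate(dist, probe_ids, gallery_ids):
    """EER: the operating point where false rejection and false acceptance rates are equal."""
    same = np.equal.outer(np.asarray(probe_ids), np.asarray(gallery_ids))
    genuine, imposter = dist[same], dist[~same]
    best_gap, eer = np.inf, 1.0
    for th in np.unique(dist):
        frr = np.mean(genuine > th)                        # genuine pairs rejected
        far = np.mean(imposter <= th)                      # imposter pairs accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2.0
    return float(eer)
```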
Comparison methods
In this section, we describe the three existing methods used for the evaluation of the experiments. Each of them is a state-of-the-art method for the generative approach. For the comparison, we retrained the model using our dataset from scratch to determine the best-performing model. We used the same hyperparameters as those mentioned in the original papers for the existing methods.
Context encoder [30]
We compared our results with those obtained from the CE, which is a state-of-the-art method for semantic image inpainting. The network architecture is similar to DCGAN [31]; that is, the encoder and the auxiliary discriminator are similar in architecture to the discriminator of DCGAN, whereas the decoder resembles the generator of DCGAN. However, the bottleneck is 4000-dimensional instead of 100. For the experiments in which the occlusion pattern is known, we evaluated the CE with post-processing that restores the pixels outside the occluded position from the input.
Video GAN (VideoGAN) [38]
VideoGAN is the first model for video generation from random noise. The model is also capable of predicting future frames given a conditional input frame to the encoder network. Therefore, we adapted it for silhouette sequence reconstruction by feeding the occluded silhouette sequence to the encoder network. The architecture of the decoder is similar to that of DCGAN [31], except that it is extended in time, and we added an encoder network with four strided convolutional layers, each followed by batch normalization and a ReLU activation function.
Improved video GAN (iVideoWGAN) [20]
iVideoWGAN is the improved version of VideoGAN. The major modification is that the discriminator network is replaced by a critic network and the network is trained using the framework of WGAN with gradient penalty [9].
In addition to the aforementioned existing methods, we evaluated our proposed generator network trained with a critic network using the WGAN and WGAN-hinge losses; we refer to these as sVideoWGAN and sVideoWGAN-hinge, respectively. Similarly, we evaluated the proposed critic network (WGAN-hinge) with the generator network of iVideoWGAN [20] and analyzed how the proposed critic could supervise the generator updates to reconstruct the silhouette sequence; we refer to this as iVideoWGAN-hinge.
Experiment for the known and same occlusion pattern
In this section, we analyze the accuracy for gait recognition using the reconstructed silhouette sequence where the occlusion pattern is the same between a matching pair (the probe and gallery). To prepare the experiments, we selected typical occlusion patterns from artificially simulated relative dynamic-type occlusion, such as RDLR and RDBT, with the highest and lowest degrees of occlusion (i.e., 30% and 50%). We consequently prepared four subsets of occlusion patterns, denoted by RDLR_30, RDLR_50, RDBT_30, and RDBT_50, where the first and second subscripts indicate the type of occlusion and degree of occlusion, respectively. For the evaluation, the training sets for each subset were prepared in the same manner to reflect the corresponding test sets.
Figures 5 and 6 show the reconstructed silhouette sequences for the occlusion pattern RDLR_50. From these silhouettes, we can see that sVideoWGAN-hinge, sVideoWGAN, and iVideoWGAN-hinge could reconstruct the silhouette sequence well. In addition, comparing the reconstructions with the ground truth, the sequence reconstructed by sVideoWGAN-hinge is similar to that of sVideoWGAN. We explain the causes in Section 4.9.1.
Reconstructed silhouette sequence for the experiment for the known and same occlusion pattern for RDLR_50. The left-hand side of the figure: second to seventh rows show the labels for the benchmark used to reconstruct the silhouette sequence, whereas the first and last rows show the input and GT, respectively. Values in the parentheses under each label show the average L2 distance for the reconstructed and the ground truth sequence. Occluded areas are gray only for visualization purposes; in the experiment, we masked the occluded area with black, namely the values of the masked area are set to zero; this value is the same for the background
Reconstructed silhouette sequence (every second frame) for the experiment for the known and same occlusion pattern for RDLR_50 to show how a benchmark can reconstruct silhouette sequence. Green and red colors indicate falsely reconstructed and falsely unreconstructed pixels, respectively, compared with GT. The left-hand side of the figure: second to seventh rows show the labels for the benchmark used to reconstruct the silhouette sequence, whereas the first and last rows show the input and GT, respectively. Occluded areas are gray only for visualization purposes; in the experiment, we masked the occluded area with black, namely the values of the masked area are set to zero; this value is the same for the background
The results for CMC and ROC are shown in Fig. 7, and Rank-1, Rank-5, and EER are shown in Table 1. From these results, we can see that our proposed generator with the proposed critic (i.e., sVideoWGAN-hinge) outperformed the existing benchmarks in all settings. We can also observe that the proposed generator and the proposed critic each improved accuracy separately. For example, if we compare the proposed generator and the generator of VideoGAN [20], both trained with the WGAN critic and referred to as sVideoWGAN and iVideoWGAN, respectively, then the accuracy improved from 80.8 to 81.9% and from 6.2 to 6.1% (see Table 1) for Rank-1 and the EERs, respectively, for the occlusion pattern RDLR_30, and from 71.3 to 74.7% and from 7.4 to 6.8% for RDLR_50. Similarly, when the critic was trained with WGAN-hinge, the proposed generator improved the accuracy from 81.4 to 82.4% and from 6.1 to 6.0% for Rank-1 and the EERs, respectively, for RDLR_30, and from 73.2 to 75.9% and from 6.8 to 6.6% for RDLR_50. By contrast, the proposed critic WGAN-hinge (i.e., incorporating the hinge loss in WGAN) also improved the accuracy separately, for example, from 81.9 to 82.4% and from 6.1 to 6.0% for Rank-1 and the EERs, respectively, with the proposed generator network, for the occlusion pattern RDLR_30.
CMC and ROC curves for the different experiments for the known and same occlusion pattern. The left side shows the CMC curves, and the right side shows the ROC curves; P vs G means occlusion pattern of the probe and gallery, respectively, whereas RDLR_XX and RDBT_XX indicate relative dynamic occlusion left to right and relative dynamic occlusion from bottom to top, respectively, along with the degree of occlusion (XX%). Note that some benchmarks do not provide curves. a P vs G = RDLR_30 vs RDLR_30. b P vs G = RDLR_50 vs RDLR_50. c P vs G = RDBT_30 vs RDBT_30. d P vs G = RDBT_50 vs RDBT_50
Table 1 Rank-1/5 [%] and EER [%] for the experiment for the known and same occlusion pattern
Regarding the existing benchmarks, CE produced blurred and easily distinguishable reconstructions in the occluded areas because it reconstructed only the occluded area frame by frame, which led to poor recognition accuracy compared with the other benchmarks, particularly for a high degree of occlusion. Although iVideoWGAN used a generator network identical to that of VideoGAN to reconstruct the silhouette sequence, it improved the accuracy in each experiment because the WGAN loss guided the generator network better than the discriminator loss of DCGAN.
Experiment for the known but different occlusion pattern
In this section, we analyze the accuracy of gait recognition using the reconstructed silhouette sequence where the occlusion pattern is different between a matching pair (the probe and gallery). To prepare such experiments, we selected patterns with the same occlusion type but different degrees of occlusion, and different occlusion types with different degrees of occlusion. Specifically, we compared the gait recognition accuracy of RDLR_30 against RDLR_50 and RDLR_30 against RDBT_50. For the evaluation, in the same manner as the previous experiments in which the occlusion pattern was known, the training sets for each experiment were prepared to reflect the corresponding test sets.
The results for CMC and ROC are shown in Fig. 8, and those for Rank-1, Rank-5, and EER are shown in Table 2. From these results, we can see that the recognition accuracy without reconstruction drastically changed because of the appearance change between the different occlusion patterns. However, the tendency of recognition accuracy for other benchmarks was the same as the experiment for the known and the same occlusion pattern.
CMC and ROC curves for the different experiments for the known but different occlusion pattern. The left side shows the CMC curves, and the right side shows the ROC curves; P vs G means the occlusion pattern of the probe and gallery, respectively, whereas RDLR_XX and RDBT_XX indicate the relative dynamic occlusion left to right and relative dynamic occlusion from bottom to top, respectively, along with the degree of occlusion (XX%). Note that some benchmarks do not provide curves. a P vs G = RDLR_30 vs RDLR_50. b P vs G = RDLR_30 vs RDBT_50
Table 2 Rank-1/5 [%] and EER [%] for the experiment for the known but different occlusion pattern
Experiment for the unknown occlusion pattern
In previous sections, we analyzed the experimental results for gait recognition from the reconstructed silhouette sequence with the same and different occlusion patterns, and trained the parameters of the CNN using the same occlusion pattern as the test samples; therefore, the occlusion pattern was known in advance. However, it is difficult to collect such data in a real-world scenario because of the uncooperative and non-intrusive nature of gait biometrics. In this section, we analyze the accuracy of gait recognition when the occlusion pattern is unknown. For this purpose, we trained the parameters of our proposed approach and the other benchmark networks by considering all the occlusion patterns in the training sets to obtain a robust model capable of reconstructing any type of occlusion pattern. For testing, we used cooperative and uncooperative settings, as well as unknown but same and different occlusion patterns between the probe and gallery.
Cooperative and uncooperative setting
The implicit assumption of the uncooperative setting is that the occlusion pattern is inconsistent for all samples throughout the probe and gallery sets [24] (i.e., the occlusion pattern is unknown), whereas for the cooperative setting, the occlusion pattern is consistent for all samples in a gallery set. To create such an uncooperative setting, occlusion patterns were randomly selected for each subject for the probe and gallery sets, whereas for the cooperative setting, ground truth samples were used in the gallery set.
The results for the cooperative and uncooperative settings for CMC and ROC are shown in Fig. 9, and Rank-1, Rank-5, and EER are shown in Table 3. From these results, we can see that the recognition accuracy for the cooperative setting was better than that for the uncooperative setting for each of the benchmarks. We can observe that the accuracy of CE degraded drastically from the cooperative to uncooperative settings compared with other benchmarks. For example, CE degraded the Rank-1 identification by 12%, whereas the maximum degradation for a benchmark was 8.2% (e.g., for iVideoWGAN-hinge). We believe that CE reconstructed the silhouette sequence frame by frame and therefore lost the motion information, particularly when a silhouette was completely occluded, as shown in Figs. 5 and 6. As a result, CE lost subject discrimination.
CMC and ROC curves for the experiment for cooperative and uncooperative settings for the unknown occlusion pattern. The left side shows the CMC curves, and the right side shows the ROC curves. Note that some benchmarks do not provide curves. a Cooperative. b Uncooperative
Table 3 Rank-1/5 [%] and EER [%] for the experiment for cooperative and uncooperative settings for the unknown occlusion pattern
We can also observe that sVideoWGAN-hinge did not improve the accuracy over sVideoWGAN for the cooperative setting. We think this is because the proposed generator network uses element-wise addition of encoder feature maps to the decoder to keep the unoccluded silhouette pixels in the reconstruction as much as possible, and WGAN supervises the generator by comparing the reconstructed sequence only with the ground truth sequence, whereas the proposed critic (WGAN-hinge) supervises the generator by comparing it not only with the ground truth but also with the positive and negative reference sequences. Therefore, when compared with the ground truth, the silhouette sequence reconstructed by sVideoWGAN-hinge is similar to or slightly worse than that of sVideoWGAN, as shown in Figs. 5 and 6.
Unknown but the same and different occlusion pattern settings
Because the learned parameter of CNN for the experiment for the unknown occlusion pattern can reconstruct any type of occlusion pattern considered in this research, we selected the same and different occlusion patterns between the probe and gallery for evaluation. Hence, we chose the RDLR_30 occlusion pattern as the probe; two typical occlusion patterns for each type of relative dynamic occlusion, such as RDLR_30 and RDLR_50, and RDBT_30 and RDBT_50, together with the ground truth silhouette sequence as the gallery. Therefore, we could compare the accuracy of learned parameters of CNN for unknown occlusion patterns with known occlusion patterns.
The results for CMC and ROC are shown in Fig. 10, and Rank-1, Rank-5, and EER are shown in Table 4. From these results, we can see that the recognition accuracy for CE degraded for each combination when compared with the same combination for the known occlusion pattern. For example, Rank-1 and EER were 72.6% and 7.2%, respectively, when the occlusion pattern was known for RDLR_30 vs RDLR_30, and 70.7% and 7.4% for the unknown occlusion pattern. We think this is because the occlusion pattern was unknown, so the occlusion position was not available for replacing the unoccluded input pixels in the output as post-processing; consequently, the reconstructed silhouette sequence for the unknown occlusion pattern is worse than that for the known occlusion pattern. Similar to the results for the cooperative setting, sVideoWGAN-hinge did not improve the accuracy over sVideoWGAN for RDLR_30 versus GT (see Table 4).
CMC and ROC curves for the experiment for the unknown but same and different occlusion pattern settings. The left side shows the CMC curves, and the right side shows the ROC curves; P vs G means the occlusion pattern of the probe and gallery, respectively, whereas RDLR_XX and RDBT_XX indicate the relative dynamic occlusion left to right and relative dynamic occlusion from bottom to top, respectively, along with the degree of occlusion (XX%). Note that some benchmarks do not provide curves. a P vs G = RDLR_30 vs RDLR_30. b P vs G = RDLR_30 vs RDLR_50. c P vs G = RDLR_30 vs RDBT_30. d P vs G = RDLR_30 vs RDBT_50. e P vs G = RDLR_30 vs GT
Table 4 Rank-1/5 [%] and EER [%] for the experiment for the unknown but same and different occlusion pattern settings
We can also see that the identification accuracy degraded for VideoGAN, iVideoWGAN, and iVideoWGAN-hinge when compared with the same combination for the known occlusion pattern; however, the verification accuracy improved. We think that those benchmarks used the same generator network of comparatively shallow architecture and therefore lost inter-subject discrimination when training the parameter for a wide variety of occlusion patterns. However, the proposed generator can manage a wide variety of occlusion patterns to train a robust model and improve accuracy.
We focused on gait recognition where all frames in a sequence were occluded. For this task, we proposed an approach based on a deep conditional GAN that consists of generator and critic networks. It allowed us to reconstruct an unoccluded silhouette sequence from an occluded one for gait recognition. We showed that the triplet hinge loss along with WGAN regularized the training of the generative network and reconstructed the silhouette sequence with high discrimination ability, which led to better accuracy for gait recognition. To demonstrate the effectiveness of the proposed approach, we considered several occlusion patterns with relative dynamic and relative static occlusion for different degrees of occlusion that are quite common in real-world scenarios, and designed a set of experiments in which the occlusion pattern between the probe and gallery was the same/different and known/unknown. The experimental results demonstrated that the reconstructed silhouette sequence of the proposed approach achieved state-of-the-art accuracy. Therefore, we conclude that the proposed approach has the potential to tackle the challenges of gait recognition in the presence of occlusion.
There are a number of limitations that need to be addressed in future work. We considered artificially simulated occlusion for a side view silhouette sequence. In the future, we will use occlusion with multiple view variation.
The material related to this research can be publicly accessed at OU-MVLP dataset: http://www.am.sanken.osaka-u.ac.jp/BiometricDB/GaitMVLP.html.
M Arjovsky, S Chintala, L Bottou, Wasserstein GAN. CoRR (2017). abs/1701.07875, 1701.07875.
LJ Ba, R Kiros, GE Hinton, Layer normalization. CoRR (2016). abs/1607.06450.
M Bertalmio, G Sapiro, V Caselles, C Ballester, in Proc. of the 27th Annual Conf. on Computer Graphics and Interactive Techniques. SIGGRAPH '00. Image inpainting (ACM Press/Addison-Wesley Publishing Co.New York, 2000), pp. 417–424.
H Cai, C Bai, Y Tai, C Tang, in Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14 2018, Proceedings, Part II. Deep video generation, prediction and completion of human action sequences, (2018), pp. 374–390. https://doi.org/10.1007/978-3-030-01216-8_23.
C Chen, J Liang, H Zhao, H Hu, J Tian, Frame difference energy image for gait recognition with incomplete silhouettes. Pattern Recogn Lett. 30(11), 977–984 (2009).
A Dosovitskiy, T Brox, in Proc. of the Int. Conf. on Neural Information Processing Systems. Generating images with perceptual similarity metrics based on deep networks (Curran Associates Inc.USA, 2016), pp. 658–666.
AA Efros, TK Leung, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, vol 2. Texture synthesis by non-parametric sampling, (1999), pp. 1033–1038. https://doi.org/10.1109/iccv.1999.790383.
IJ Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A Courville, Y Bengio, in Proc. of the Int. Conf. on Neural Information Processing Systems - Vol 2. Generative adversarial nets (MIT PressCambridge, 2014), pp. 2672–2680.
I Gulrajani, F Ahmed, M Arjovsky, V Dumoulin, AC Courville, in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017. Improved training of Wasserstein GANS (Long Beach, 2017), pp. 5769–5779.
J Han, B Bhanu, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, vol 2. Statistical feature fusion for gait-based human recognition, (2004), pp. 842–847. https://doi.org/10.1109/cvpr.2004.1315252.
K He, X Zhang, S Ren, J Sun, in Proc. of the IEEE Int. Conf. on Computer Vision. Delving deep into rectifiers: surpassing human-level performance on imagenet classification (Washington, 2015), pp. 1026–1034. https://doi.org/10.1109/iccv.2015.123.
M Hofmann, D Wolf, G Rigoll, in Proc. of the Int. Conf. on Computer Vision Theory and Applications. Identification and reconstruction of complete gait cycles for person identification in crowded scenes (Vilamoura, 2011), pp. 594–597. https://doi.org/10.5220/0003329305940597.
MA Hossain, Y Makihara, J Wang, Y Yagi, Clothing-invariant gait identification using part-based clothing categorization and adaptive weight control. Pattern Recogn. 43(6), 2281–2291 (2010).
How biometrics could change security, BBC (online). available from http://news.bbc.co.uk/2/hi/programmes/click_online/7702065.stm.
S Iizuka, E Simo-Serra, H Ishikawa, Globally and locally consistent image completion. ACM Trans Graph. 36(4), 107:1–107:14 (2017).
S Ioffe, C Szegedy, in Proc. of the Int. Conf. on International Conference on Machine Learning - Vol 37. Batch normalization: accelerating deep network training by reducing internal covariate shift (PMLRLille, 2015), pp. 448–456.
H Iwama, M Okumura, Y Makihara, Y Yagi, The OU-ISIR Gait Database comprising the large population dataset and performance evaluation of gait recognition. IEEE Trans Inf Forensic Secur. 7(5), 1511–1521 (2012).
S Ji, W Xu, M Yang, K Yu, 3D convolutional neural networks for human action recognition. IEEE Trans Pattern Anal Mach Intell. 35(1), 221–231 (2013).
DP Kingma, J Ba, Adam: a method for stochastic optimization. CoRR (2014). abs/1412.6980.
B Kratzwald, Z Huang, DP Paudel, LV Gool, Improving video generation for multi-functional applications. CoRR (2017). abs/1711.11453, 1711.11453.
Y Li, S Liu, J Yang, M Yang, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. Generative face completion (Honolulu, 2017), pp. 5892–5900. https://doi.org/10.1109/cvpr.2017.624.
Z Liu, S Sarkar, Effect of silhouette quality on hard problems in gait recognition. IEEE Trans Syst Man Cybern Part B Cybern. 35(2), 170–183 (2005).
C Lu, M Hirsch, B Schölkopf, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. Flexible spatio-temporal networks for video prediction (Honolulu, 2017), pp. 2137–2145. https://doi.org/10.1109/cvpr.2017.230.
R Martín-Félez, T Xiang, Uncooperative gait recognition by learning to rank. Pattern Recogn. 47(12), 3793–3806 (2014).
M Mirza, S Osindero, Conditional generative adversarial nets. CoRR (2014). abs/1411.1784.
D Muramatsu, Y Makihara, Y Yagi, in Int. Conf. on Biometrics (ICB). Gait regeneration for recognition, (2015a), pp. 169–176. https://doi.org/10.1109/icb.2015.7139048.
D Muramatsu, A Shiraishi, Y Makihara, M Uddin, Y Yagi, Gait-based person recognition using arbitrary view transformation model. IEEE Trans Image Process. 24(1), 140–154 (2015b).
P Nangtin, P Kumhom, K Chamnongthai, Gait identification with partial occlusion using six modules and consideration of occluded module exclusion. J Vis Commun Image Represent. 36:, 107–121 (2016).
J Ortells, RA Mollineda, B Mederos, R Martín-Félez, Gait recognition from corrupted silhouettes: a robust statistical approach. Mach Vis Appl. 28(1), 15–33 (2017).
D Pathak, P Krähenbühl, J Donahue, T Darrell, AA Efros, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. Context encoders: feature learning by inpainting (Las Vegas, 2016), pp. 2536–2544. https://doi.org/10.1109/cvpr.2016.278.
A Radford, L Metz, S Chintala, in Int. Conf. on Learning Representations. Unsupervised representation learning with deep convolutional generative adversarial networks (San Juan, 2016).
O Ronneberger, P Fischer, Thomas Be, J Hornegger, WM Wells, AF Frangi, in Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. U-net: Convolutional networks for biomedical image segmentation (Springer International PublishingCham, 2015), pp. 234–241.
A Roy, S Sural, J Mukherjee, G Rigoll, Occlusion detection and gait silhouette reconstruction from degraded scenes. Signal Image Video Proc. 5(4), 415 (2011).
N Takemura, Y Makihara, D Muramatsu, T Echigo, Y Yagi, Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Trans Comput Vis Appl. 10(1), 4 (2018).
D Tran, L Bourdev, R Fergus, L Torresani, M Paluri, in Proc. of the IEEE Int. Conf. on Computer Vision. Learning spatiotemporal features with 3D convolutional networks (Washington, 2015), pp. 4489–4497. https://doi.org/10.1109/iccv.2015.510.
M Uddin, D Muramatsu, T Kimura, Y Makihara, Y Yagi, MultiQ: single sensor-based multi-quality multi-modal large-scale biometric score database and its performance evaluation. IPSJ Trans Comput Vis Appl. 9(1), 18 (2017).
M Uddin, TT Ngo, Y Makihara, N Takemura, X Li, D Muramatsu, Y Yagi, The OU-ISIR Large Population Gait Database with real-life carried object and its performance evaluation. IPSJ Trans Comput Vis Appl. 10(1), 5 (2018).
C Vondrick, H Pirsiavash, A Torralba, in Advances in Neural Information Processing Systems 29: Annual Conf. on Neural Information Processing Systems 2016. Generating videos with scene dynamics (Barcelona, 2016), pp. 613–621.
C Wang, H Huang, X Han, J Wang, Video inpainting by jointly learning temporal structure and spatial details. CoRR (2018). abs/1806.08482, 1806.08482. https://doi.org/10.1609/aaai.v33i01.33015232.
Y Wexler, E Shechtman, M Irani, Space-time completion of video. IEEE Trans Pattern Anal Mach Intell. 29(3), 463–476 (2007).
B Xu, N Wang, T Chen, M Li, Empirical evaluation of rectified activations in convolutional network. CoRR (2015). abs/1505.00853.
RA Yeh, C Chen, T Lim, AG Schwing, M Hasegawa-Johnson, MN Do, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. Semantic image inpainting with deep generative models (Honolulu, 2017), pp. 6882–6890. https://doi.org/10.1109/cvpr.2017.728.
F Yu, V Koltun, Multi-scale context aggregation by dilated convolutions. CoRR (2015). abs/1511.07122.
J Yu, Z Lin, J Yang, X Shen, X Lu, TS Huang, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. Generative image inpainting with contextual attention (Salt Lake City, 2018), pp. 5505–5514. https://doi.org/10.1109/cvpr.2018.00577.
S Yu, D Tan, T Tan, in Proc. of the 18th Int. Conf. on Pattern Recognition vol 4. A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition (Hong Kong, 2006), pp. 441–444. https://doi.org/10.1109/icpr.2006.67.
S Yu, H Chen, EBG Reyes, N Poh, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition Workshops. GaitGAN: invariant gait feature extraction using generative adversarial networks, (2017), pp. 532–539. https://doi.org/10.1109/cvprw.2017.80.
MD Zeiler, D Krishnan, GW Taylor, R Fergus, in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition. Deconvolutional networks, (2010), pp. 2528–2535. https://doi.org/10.1109/iccv.2011.6126474.
G Zhao, L Cui, H Li, Gait recognition using fractal scale. Pattern Anal Appl. 10(3), 235–246 (2007).
We thank Maxine Garcia, PhD, from Liwen Bianji, Edanz Group China (www.liwenbianji.cn/ac) for editing the English text of a draft of this manuscript.
This work was supported by JSPS KAKENHI Grant Number 15K12037.
The Institute of Scientific and Industrial Research, Osaka University, Osaka, 567-0047, Japan
Md. Zasim Uddin, Daigo Muramatsu, Md. Atiqur Rahman Ahad & Yasushi Yagi
The Institute for Datability Science, Osaka University, Osaka, 567-0047, Japan
Noriko Takemura
MZU contributes the most including proposing the initial research idea, generating the dataset, conducting the experiments, and writing the initial draft of the manuscript. MZU and DM analyzed and discussed the evaluated accuracy and revised the manuscript. NT and MARA are responsible for suggesting possible improvements. YY supervised the work and provided technical support. All authors reviewed and approved the final manuscript.
Correspondence to Md. Zasim Uddin.
Uddin, M., Muramatsu, D., Takemura, N. et al. Spatio-temporal silhouette sequence reconstruction for gait recognition against occlusion. IPSJ T Comput Vis Appl 11, 9 (2019) doi:10.1186/s41074-019-0061-3
Silhouette reconstruction
Gait recognition occlusion handling
Video generation
Deep generative adversarial network
Wasserstein GAN
VALIDATION OF NATURAL CONVECTION
The famous experiment of Nukiyama, carried out in 1934, was the first to classify the different regimes of pool boiling, as shown in the figure below (Global Digital Central Encyclopedia, Thermal Fluid Central [13]). He observed that bubbles did not appear until $\Delta T \approx 5°C$, where $T_{wall} = \Delta T + T_{sat}$, and this particular wall temperature is referred to as $T_{ONB}$ (the onset of nucleate boiling, ONB). For $T_{wall}<T_{ONB}$, free convection boiling is expected to be the only mechanism for the system to evacuate heat to the surrounding domain.
Figure 10: Pool boiling curve for saturated water, according to Global Digital Central Encyclopedia, Thermal fluid central [13]
The boiling curve for propane, in logarithmic scale, is presented in Figure 3 (BIBLIOGRAPHIC RESEARCH - Boiling Regime section). Free convection boiling is said to exist for $\Delta T < 7°C$, i.e., for heat fluxes lower than 2000 W/m², before a transition region that ranges between 2000 and 5000 W/m².
Therefore, single-phase simulations were launched for various heat fluxes, from 200 to 5000 W/m². As discussed in the 'BIBLIOGRAPHIC RESEARCH - Boussinesq Approximation' section, the flow generated around the tube by the buoyancy force, which results from the variation of propane density, was estimated using the Boussinesq approximation. Results obtained from Neptune CFD are compared with correlations and experimental results in the 'Comparison' section to assess the reliability of the Boussinesq approximation and Neptune CFD for predicting the fluid movement around the tube in the natural convection boiling regime.
The studied domain was limited to a single isolated cylindrical tube in a pool of static propane, as described in the 'MESH' section. The $\Delta T$ is less than 6 K for the natural convection regime, hence the variation of density remains small and can be calculated with the Boussinesq approximation (Equation 2). Considering the value of the Rayleigh number for this maximum $\Delta T$, calculated in the 'BIBLIOGRAPHIC RESEARCH - Boussinesq approximation' section, a laminar regime is adopted.
In the literature, the correlations by Morgan (1975) [10] and by Churchill and Chu (1975) [11] are found to be well adapted to the range of Rayleigh numbers $Ra_D$ obtained for $\Delta T$ between 1 and 6 K.
Correlation by Morgan (1975):
$Nu_D= \frac{hD}{k}=CRa_D^n$ (37)
in which $C=0.125$ and $n=0.333$, for $Ra_D$ in the range $10^{7} - 10^{12}$
Correlation by Churchill and Chu (1975):
$Nu_D=(0.60+\frac{0.387Ra_D^{\frac{1}{6}}}{(1+(\frac{0.559}{Pr})^{\frac{9}{16}})^{\frac{8}{27}}})^2$ (38)
for $Ra_D<10^{12}$
And considering also the classical equation of heat transfer
$Q_{wall}=h\Delta T$ (39)
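As a quick sanity check of the expected orders of magnitude, the two correlations and Equation (39) can be evaluated in a few lines of Python. All thermophysical property values and the tube diameter below are rough placeholders (they are not the values used in the study) and must be replaced by the propane properties at Psat = 474.3 kPa:

```python
g     = 9.81       # m/s^2
beta  = 3.0e-3     # 1/K, thermal expansion coefficient (placeholder)
nu    = 2.0e-7     # m^2/s, kinematic viscosity (placeholder)
alpha = 8.0e-8     # m^2/s, thermal diffusivity (placeholder)
k     = 0.09       # W/m/K, thermal conductivity (placeholder)
D     = 0.02       # m, tube outer diameter (placeholder)
Pr    = nu / alpha

def rayleigh(dT):
    return g * beta * dT * D**3 / (nu * alpha)

def nu_morgan(Ra):                                   # Equation (37), C = 0.125, n = 0.333
    return 0.125 * Ra**0.333

def nu_churchill_chu(Ra):                            # Equation (38)
    return (0.60 + 0.387 * Ra**(1.0 / 6.0)
            / (1.0 + (0.559 / Pr)**(9.0 / 16.0))**(8.0 / 27.0))**2

for dT in (1.0, 3.0, 6.0):
    Ra = rayleigh(dT)
    for name, Nu in (("Morgan", nu_morgan(Ra)), ("Churchill-Chu", nu_churchill_chu(Ra))):
        h = Nu * k / D                               # heat transfer coefficient, W/m^2/K
        q = h * dT                                   # Equation (39), wall heat flux, W/m^2
        print(f"dT={dT:.0f} K  {name:13s}  Ra={Ra:.2e}  Nu={Nu:.1f}  q={q:.0f} W/m^2")
```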
Neptune CFD : Monophasic simulation
In this phase of the study, simulations were carried out for the range of temperature differences mentioned before by imposing a heat flux density as the boundary condition on the heating tube. The simulations were launched at saturation conditions with Psat=474.3 kPa. The Boussinesq hypothesis (Equation 2 of BIBLIOGRAPHY RESEARCH - Boussinesq Approximation) was used to predict the variation of density due to ΔT, and thus the flow in the studied domain caused by the resulting difference in buoyancy force.
In order to set the properties and conditions, the interface of Neptune CFD, called EDAMOX, was used. It was developed for users to define and select different models as well as boundary conditions to be used during their simulation. The interface can be accessed by the following command in linux terminal:
edam &
or if the param file already exists
edam -U param &
Note that one has to be in the 'DATA' directory to launch EDAMOX.
Figure 11: Edamox interface
The first step of the simulation definition was to select a special module, if necessary. For a single-phase simulation, no special module has to be selected.
Figure 12: Neptune Special module panel for natural convection
In the Fluid & Flow properties window, the properties of the fluid at the reference temperature are required. In this study, they were fixed at Tsat for Psat=474.3 kPa. Since the propane was initially static, no turbulence was expected in the simulation. (See also the Rayleigh number calculation in BIBLIOGRAPHIC RESEARCH, Boussinesq approximation.)
The mesh file needs to be defined in the Input-Output-Control window. The number of iterations and the simulation time are required, as well as the output frequency for post-processing.
Figure 13: Neptune Input-Output-Control panel for natural convection
In order to take into account the variation of density, modifications in USPHYV.F are required to impose the Boussinesq approximation; consequently, the option 'Variable physical properties (call USPHYV)' has to be selected in the Physical models module of the param file.
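For reference, the density law imposed in the user routine has the usual Boussinesq form. The short sketch below is only a Python illustration of that relation (the actual routine USPHYV.F is written in Fortran), and the numerical values are placeholders rather than the propane properties used in the study:

```python
RHO_REF = 528.0    # kg/m^3, liquid propane density at saturation (placeholder)
BETA    = 3.0e-3   # 1/K, thermal expansion coefficient (placeholder)
T_REF   = 273.0    # K, saturation temperature at Psat = 474.3 kPa (placeholder)

def boussinesq_density(T):
    """rho(T) = rho_ref * (1 - beta * (T - T_ref)); cf. Equation 2 of the Boussinesq section."""
    return RHO_REF * (1.0 - BETA * (T - T_REF))
```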
Figure 14: Neptune Generalities panel for natural convection
In the Scalars window, a total enthalpy scalar type was selected in order to be able to either fix a constant heat flux [W/m²] or impose a wall temperature as the boundary condition around the tube. The parameter 'lam.dym.coef' corresponds in this case to the thermal conductivity of propane [W/m/K] divided by the specific heat capacity of propane [J/kg/K], as the scalar type is chosen to be total enthalpy.
However, the wall temperature Tw and wall heat flux Qwall cannot be accessed if the water/steam special module is not selected, as Neptune CFD does not calculate them for a single-phase simulation. Therefore, a wall temperature was first imposed as the boundary condition around the tube, and the mesh was refined until the temperature of the first cell around the tube was approximately equal (within 0.001) to the applied wall temperature. The wall heat flux could be easily accessed from the 'listing' file by dividing the energy transfer around the tube by the exchange surface. To impose a wall temperature, a Dirichlet boundary condition has to be selected, and the imposed value has to be in the same unit as the scalar. One should never use the 'Timp[K]' option for the boundary condition unless either the Cathare or Thetis tables are selected.
The refined mesh was used for simulations with various imposed heat fluxes, ranging from 300 W/m² to 5000 W/m², as the wall boundary condition. Using 'Paraview', a data analysis and visualisation application, the liquid temperature of the first cell layer can be exported using the 'slices' function (Figure 15). The average wall temperature can hence be obtained by integrating the exported data.
Figure 15: Slice of the geometry, Twall calculation for natural convection
The comparison between the simulation results, results predicted by correlations and experimental results is plotted below:
Figure 16: Comparison between simulation results, experimental results [4] and correlations [10][11] for natural convection
As shown in the figure above (Figure 16), the results predicted by Neptune_CFD are very close to the experimental results, with a difference estimated to be lower than 15% for heat fluxes up to 2000 W/m². Beyond 2000 W/m², the simulation results, and even those obtained by the correlations, are no longer comparable with the experimental results. According to the boiling curve of propane (Figure 3 in the 'BIBLIOGRAPHY RESEARCH' section), a transition regime exists between 2000 and 5000 W/m² before boiling takes place. Therefore, the discrepancy between results is expected, as the simulations and correlations were initially developed for the natural convection regime.
Consequently, the simulation results by Neptune_CFD are shown to be reliable, and the Boussinesq hypothesis is well adapted to predict natural convection for propane, whereas for higher heat fluxes the nucleate boiling model has to be taken into consideration.
October 2012, 32(10): 3621-3649. doi: 10.3934/dcds.2012.32.3621
Planar traveling waves for nonlocal dispersion equation with monostable nonlinearity
Rui Huang 1, , Ming Mei 2, and Yong Wang 3,
School of Mathematical Sciences, South China Normal University, Guangzhou, Guangdong, 510631, China
Department of Mathematics, Champlain College Saint-Lambert, Quebec, J4P 3P2
Institute of Applied Mathematics, Academy of Mathematics and System Science, Chinese Academy of Sciences, Beijing, 100190
Received May 2011 Revised December 2011 Published May 2012
In this paper, we study a class of nonlocal dispersion equations with monostable nonlinearity in $n$-dimensional space \[ \begin{cases} u_t - J\ast u +u+d(u(t,x))= \displaystyle \int_{\mathbb{R}^n} f_\beta (y) b(u(t-\tau,x-y)) dy, \\ u(s,x)=u_0(s,x), \ \ s\in[-\tau,0], \ x\in \mathbb{R}^n, \end{cases} \] where the nonlinear functions $d(u)$ and $b(u)$ possess monostable characters of Fisher-KPP type, $f_\beta(x)$ is the heat kernel, and the kernel $J(x)$ satisfies ${\hat J}(\xi)=1-\mathcal{K}|\xi|^\alpha+o(|\xi|^\alpha)$ for $0<\alpha\le 2$ and $\mathcal{K}>0$. After establishing the existence of both the planar traveling waves $\phi(x\cdot{\bf e}+ct)$ for $c\ge c_*$ ($c_*$ is the critical wave speed) and the solution $u(t,x)$ of the Cauchy problem, as well as the comparison principles, we prove that all noncritical planar wavefronts $\phi(x\cdot{\bf e}+ct)$ are globally stable with the exponential convergence rate $t^{-n/\alpha}e^{-\mu_\tau t}$ for $\mu_\tau>0$, and the critical wavefronts $\phi(x\cdot{\bf e}+c_*t)$ are globally stable in the algebraic form $t^{-n/\alpha}$, and these rates are optimal. As an application, we also automatically obtain the stability of traveling wavefronts for the classical Fisher-KPP dispersion equations. The adopted approach is the Fourier transform combined with the weighted energy method with a suitably selected weight function.
Keywords: Fisher-KPP equation, time-delays, nonlocal dispersion equations, global stability, Fourier transform, traveling waves, weighted energy.
Mathematics Subject Classification: Primary: 35K57, 34K20; Secondary: 92D2.
Citation: Rui Huang, Ming Mei, Yong Wang. Planar traveling waves for nonlocal dispersion equation with monostable nonlinearity. Discrete & Continuous Dynamical Systems, 2012, 32 (10) : 3621-3649. doi: 10.3934/dcds.2012.32.3621
A. N. Kolmogorov, I. G. Petrovsky and N. S. Piskunov, Etude de l' équation de la diffusion avec croissance de la quantité de matière et son application à un problème biologique, Bulletin Université d'Etat à Moscou, Série Internationale Sect. A, 1 (1937), 1-26. Google Scholar
K.-S. Lau, On the nonlinear diffusion equation of Kolmogorov, Petrovsky, and Piscounov, J. Differential Equations, 59 (1985), 44-70. doi: 10.1016/0022-0396(85)90137-8. Google Scholar
G. Li, M. Mei and Y. S. Wong, Nonlinear stability of traveling wavefronts in an age-structured reaction-diffusion population model, Math. Biosci. Engin., 5 (2008), 85-100. Google Scholar
J.-F. Mallordy and J.-M. Roquejoffre, A parabolic equation of the KPP type in higher dimensions, SIAM J. Math. Anal., 26 (1995), 1-20. doi: 10.1137/S0036141093246105. Google Scholar
M. Mei, C.-K. Lin, C.-T. Lin and J. W.-H. So, Traveling wavefronts for time-delayed reaction-diffusion equation. I. Local nonlinearity, J. Differential Equations, 247 (2009), 495-510. doi: 10.1016/j.jde.2008.12.026. Google Scholar
M. Mei, C.-K. Lin, C.-T. Lin and J. W.-H. So, Traveling wavefronts for time-delayed reaction-diffusion equation. II. Nonlocal nonlinearity, J. Differential Equations, 247 (2009), 511-529. doi: 10.1016/j.jde.2008.12.020. Google Scholar
M. Mei, J. W.-H. So, M. Li and S. Shen, Asymptotic stability of travelling waves for Nicholson's blowflies equation with diffusion, Proc. Roy. Soc. Edinburgh Sec. A, 134 (2004), 579-594. doi: 10.1017/S0308210500003358. Google Scholar
M. Mei and J. W.-H. So, Stability of strong travelling waves for a non-local time-delayed reaction-diffusion equation, Proc. Roy. Soc. Edinburgh Sec. A, 138 (2008), 551-568. doi: 10.1017/S0308210506000333. Google Scholar
M. Mei, C. Ou and X.-Q. Zhao, Global stability of monostable traveling waves for nonlocal time-delayed reaction-diffusion equations, SIAM J. Math. Anal., 42 (2010), 2762-2790; Erratum, SIAM J. Math. Anal., 44 (2012), 538-540. doi: 10.1137/090776342. Google Scholar
M. Mei and Y. Wang, Remark on stability of traveling waves for nonlocal Fisher-KPP equations, Int. J. Numer. Anal. Model. Seris B, 2 (2011), 379-401. Google Scholar
M. Mei and Y. S. Wong, Novel stability results for traveling wavefronts in an age-structured reaction-diffusion equations, Math. Biosci. Engin., 6 (2009), 743-752. doi: 10.3934/mbe.2009.6.743. Google Scholar
H. J. K. Moet, A note on the asymptotic behavior of solutions of the KPP equation, SIAM J. Math. Anal., 10 (1979), 728-732. doi: 10.1137/0510067. Google Scholar
S. Pan, W.-T. Li and G. Lin, Existence and stability of traveling wavefronts in a nonlocal diffusion equation with delay, Nonlinear Anal., 72 (2010), 3150-3158. doi: 10.1016/j.na.2009.12.008. Google Scholar
D. H. Sattinger, On the stability of waves of nonlinear parabolic systems, Adv. Math., 22 (1976), 312-355. doi: 10.1016/0001-8708(76)90098-0. Google Scholar
J. W.-H. So, J. Wu and X. Zou, A reaction-diffusion model for a single species with age structure: I. Traveling wavefronts on unbounded domains, Roy. Soc. London Proc. Series A Math. Phys. Eng. Sci., 457 (2001), 1841-1853. doi: 10.1098/rspa.2001.0789. Google Scholar
K. Uchiyama, The behavior of solutions of some nonlinear diffusion equations for large time, J. Math. Kyoto Univ., 18 (1978), 453-508. Google Scholar
J. Wu, D. Wei and M. Mei, Analysis on the critical speed of traveling waves, Appl. Math. Lett., 20 (2007), 712-718. doi: 10.1016/j.aml.2006.08.006. Google Scholar
H. Yagisita, Existence and nonexistence of traveling waves for a nonlocal monostable equation, Publ. Res. Inst. Math. Sci., 45 (2009), 925-953. doi: 10.2977/prims/1260476648. Google Scholar
Wenxian Shen, Zhongwei Shen. Transition fronts in nonlocal Fisher-KPP equations in time heterogeneous media. Communications on Pure & Applied Analysis, 2016, 15 (4) : 1193-1213. doi: 10.3934/cpaa.2016.15.1193
Jianping Gao, Shangjiang Guo, Wenxian Shen. Persistence and time periodic positive solutions of doubly nonlocal Fisher-KPP equations in time periodic and space heterogeneous media. Discrete & Continuous Dynamical Systems - B, 2021, 26 (5) : 2645-2676. doi: 10.3934/dcdsb.2020199
Denghui Wu, Zhen-Hui Bu. Multidimensional stability of pyramidal traveling fronts in degenerate Fisher-KPP monostable and combustion equations. Electronic Research Archive, 2021, 29 (6) : 3721-3740. doi: 10.3934/era.2021058
Rui Huang, Ming Mei, Kaijun Zhang, Qifeng Zhang. Asymptotic stability of non-monotone traveling waves for time-delayed nonlocal dispersion equations. Discrete & Continuous Dynamical Systems, 2016, 36 (3) : 1331-1353. doi: 10.3934/dcds.2016.36.1331
Jean-Michel Roquejoffre, Luca Rossi, Violaine Roussier-Michon. Sharp large time behaviour in $ N $-dimensional Fisher-KPP equations. Discrete & Continuous Dynamical Systems, 2019, 39 (12) : 7265-7290. doi: 10.3934/dcds.2019303
Aijun Zhang. Traveling wave solutions with mixed dispersal for spatially periodic Fisher-KPP equations. Conference Publications, 2013, 2013 (special) : 815-824. doi: 10.3934/proc.2013.2013.815
Hiroshi Matsuzawa. A free boundary problem for the Fisher-KPP equation with a given moving boundary. Communications on Pure & Applied Analysis, 2018, 17 (5) : 1821-1852. doi: 10.3934/cpaa.2018087
Benjamin Contri. Fisher-KPP equations and applications to a model in medical sciences. Networks & Heterogeneous Media, 2018, 13 (1) : 119-153. doi: 10.3934/nhm.2018006
François Hamel, James Nolen, Jean-Michel Roquejoffre, Lenya Ryzhik. A short proof of the logarithmic Bramson correction in Fisher-KPP equations. Networks & Heterogeneous Media, 2013, 8 (1) : 275-289. doi: 10.3934/nhm.2013.8.275
Matt Holzer. A proof of anomalous invasion speeds in a system of coupled Fisher-KPP equations. Discrete & Continuous Dynamical Systems, 2016, 36 (4) : 2069-2084. doi: 10.3934/dcds.2016.36.2069
Margarita Arias, Juan Campos, Cristina Marcelli. Fastness and continuous dependence in front propagation in Fisher-KPP equations. Discrete & Continuous Dynamical Systems - B, 2009, 11 (1) : 11-30. doi: 10.3934/dcdsb.2009.11.11
Patrick Martinez, Jean-Michel Roquejoffre. The rate of attraction of super-critical waves in a Fisher-KPP type model with shear flow. Communications on Pure & Applied Analysis, 2012, 11 (6) : 2445-2472. doi: 10.3934/cpaa.2012.11.2445
Yicheng Jiang, Kaijun Zhang. Stability of traveling waves for nonlocal time-delayed reaction-diffusion equations. Kinetic & Related Models, 2018, 11 (5) : 1235-1253. doi: 10.3934/krm.2018048
Zigen Ouyang, Chunhua Ou. Global stability and convergence rate of traveling waves for a nonlocal model in periodic media. Discrete & Continuous Dynamical Systems - B, 2012, 17 (3) : 993-1007. doi: 10.3934/dcdsb.2012.17.993
Rui Huang Ming Mei Yong Wang | CommonCrawl |
Complete multiplication and division facts in a table (10x10)
Distributive property for multiplication
Use the distributive property
Multiply a single digit number by a four digit number using algorithm
Multiplying a single digit number by a $4$4 digit number for area is a tricky task. One strategy we can use is to break it up into $4$4 separate multiplications and then add them all together.
When we multiply area, we multiply one side by another side. We've already looked at how to do this with a three digit number. Now let's look at how to do it with a four digit number.
We can break a four digit number up into $4$4 separate values and multiply it by each. For example, to find the area of the rectangle below, we can break the $4$4 digit number up into $4000+300+50+1$4000+300+50+1 which is the same as $4351$4351, then multiply each value by $2$2.
Then we can add all the answers together.
The video below will show you with an demonstration on how to do just that.
When multiplying a number by a four digit number, we can:
break the four digit number up into thousands, hundreds, tens and ones,
solve each multiplication separately, then
add all the answers together to find the final answer.
Find $2255\times4$2255×4 using the area model.
Find the area of the first rectangle.
Find the area of the second rectangle.
Find the area of the third rectangle.
Find the area of the fourth rectangle.
What is the total area of all four rectangles?
So what is $2255\times4$2255×4?
Use the area model to find $5054\times2$5054×2.
Fill in the areas of each rectangle.
$5000$5000 $50$50 $4$4
$2$2 $\editable{}$
$\editable{}$
What is the total area of all three rectangles?
NA3-6
Record and interpret additive and simple multiplicative strategies, using words, diagrams, and symbols, with an understanding of equality | CommonCrawl |
Plasmonic enhancement of betanin-lawsone co-sensitized solar cells via tailored bimodal size distribution of silver nanoparticles
Impact of hybrid plasmonic nanoparticles on the charge carrier mobility of P3HT:PCBM polymer solar cells
MirKazem Omrani, Hamidreza Fallah, … Mojtaba Abdi-Jalebi
Refractory plasmonics enabling 20% efficient lead-free perovskite solar cells
Ahmed A. Mohsen, Mohamed Zahran, … Nageh K. Allam
Bimetallic Implanted Plasmonic Photoanodes for TiO2 Sensitized Third Generation Solar Cells
Navdeep Kaur, Viplove Bhullar, … Aman Mahajan
Near field and far field plasmonic enhancements with bilayers of different dimensions AgNPs@DLC for improved current density in silicon solar
Maryam Hekmat, Azizollah Shafiekhani & Mehdi Khabir
Improved optical properties of perovskite solar cells by introducing Ag nanopartices and ITO AR layers
Yangxi Chen, Chaoling Du, … Daning Shi
Quantum dot assisted luminescent hexarhenium cluster dye for a transparent luminescent solar concentrator
Jun Choi, Kyungkon Kim & Sung-Jin Kim
Enhancement of color and photovoltaic performance of semi-transparent organic solar cell via fine-tuned 1D photonic crystal
Çağlar Çetinkaya, Erman Çokduygulular, … Süleyman Özçelik
Bio-inspired broadband absorbers induced by copper nanostructures on natural leaves
Trung Duc Dao, Dinh Dat Pham, … Tien Thanh Pham
A photoanode with hierarchical nanoforest TiO2 structure and silver plasmonic nanoparticles for flexible dye sensitized solar cell
Brishty Deb Choudhury, Chen Lin, … Mohammed Jasim Uddin
S. Sreeja ORCID: orcid.org/0000-0002-7688-74831 &
Bala Pesala1,2
Natural pigment-based photosensitizers are an attractive pathway for realizing low cost and environmentally friendly solar cells. Here, broadband light-harvesting is achieved using two natural pigments, betanin and lawsone, absorbing in the green and blue region of the solar spectrum respectively. The use of bimodal size distribution of AgNPs tailored for each of the pigments to further increase their efficiency is the key feature of this work. This study demonstrates a significant enhancement in current-density, voltage, and efficiency by 20.1%, 5.5%, and 28.6% respectively, in a betanin-lawsone co-sensitized solar cell, via plasmonic enhancement using silver nanoparticles (AgNPs). The optimum sizes of the nanoparticles have been calculated by studying their optical response and electric field profiles using Finite Difference Time Domain (FDTD) simulations, aimed at matching their resonant wavelengths with the absorption bands of the dyes. Simulations show that AgNPs of diameters 20 nm and 60 nm are optimum for enhanced absorption by lawsone and betanin respectively. The FDTD simulations of the plasmonic photoelectrodes demonstrated 30% and 15% enhancement in the power absorption by betanin and lawsone at the LSPR peaks of the 60 nm and 20 nm AgNPs respectively. An optimum overall concentration of 2% (v/v) and a ratio of 4:1 (20 nm:60 nm) of the bimodal distribution of the AgNPs, was determined for incorporation in the photoanodes. An average efficiency of 1.02 ± 0.006% was achieved by the betanin-lawsone co-sensitized solar cell with the bimodal distribution of AgNPs, compared to 0.793 ± 0.006% achieved by the non-plasmonic solar cell of otherwise identical configuration. Electrochemical impedance spectroscopy confirmed that the incorporation of the bimodal distribution of AgNPs in the solar cells also enabled enhanced electron lifetime and reduced recombination compared to the non-plasmonic counterpart, thereby improving the charge transfer. The plasmonic enhancement methodology presented here can be applied to further improve the efficiency of other natural dye-sensitized solar cells.
Since the seminal work in 1991 by O'Regan and Gratzel on Dye-Sensitized Solar Cells (DSSCs)1, immense interest has been directed towards their development in the last two decades. Owing to their inexpensive and facile processing requirements, DSSCs are becoming increasingly popular as an emerging photovoltaic technology2,3,4. State-of-the-art DSSCs, which have achieved maximum efficiencies of 11.9%5, widely use metal-based dyes as the photosensitizer. Although these dyes are highly efficient sensitizers that effectively capture the entire visible spectrum, they employ rare metals such as ruthenium or osmium which require elaborate synthetic procedures. Moreover, they are expensive and toxic to the environment, making their disposal a problem. Therefore, natural pigments derived from plant sources, which are environment-friendly and inexpensive6,7, have come to the forefront as alternative photosensitizers. Several groups8,9 have been investigating anthocyanins10,11, betalains12,13, carotenes14, and chlorophylls15,16 for their application in DSSCs. To date, average efficiencies achieved for natural DSSCs are ~ 0.4%, with the highest being ~ 1.5 – 2%8,10,11,12,13,14,15,16,17. Despite their several advantages, natural dye-sensitized solar cells (NDSSCs) exhibit low efficiencies and stabilities due to insufficient charge separation and limited light-harvesting. Improving the efficiencies of NDSSCs would render them an appealing technological innovation for application in low-light harvesting applications18, portable, disposable electronics19. Several groups have worked on various aspects of the design of photoanodes and its constituent materials for developing high-efficiency DSSCs20,21,22,23,24. Prior studies have demonstrated the co-sensitization of natural pigments for improved light harvesting and better photoelectric conversion efficiencies25,26,27,28,29,30,31. The present study examines the efficiency enhancement of a betanin-lawsone co-sensitized solar cell by the plasmonic route. The plasmonic route of efficiency enhancement has been explored in various solar cells32,33, hydrogen splitting34 but not extensively explored for augmenting the efficiency of NDSSCs. The solar cell uses a blend of two natural plant-based dyes: betanin and lawsone, harvesting electromagnetic radiation in the green and blue wavelength range respectively, of the solar spectrum. Betanin is a purplish-red pigment obtained from Beta vulgaris (beetroot). It is a glycoside of betacyanin, consisting of a betaine moiety (indole-2-carboxylic acid) N-linked to betalamic acid (pyridine dicarboxylic acid), via an acetyl group35, absorbing in light the wavelength range of 450 nm – 600 nm, covering most of the green region. A considerable segment of the incident solar spectrum36. The absorption peak of betanin, which is 535 nm in water12 and 545 nm in TiO237, results from the electron transitions that occur within its conjugated systems. It also has favorable functional groups, i.e. carboxylic groups (-COOH), that facilitate strong anchoring with the TiO2 surface through bi-dentate chelation32 thereby enhancing the charge transfer efficiencies to the TiO2 anode. Lawsone, chosen as the dye complementary to betanin in this study, absorbs both in the UV (280 nm – 400 nm) and visible region (between 400 nm – 550 nm) of the solar spectrum38. It is a reddish-orange colored dye (2-hydroxy-1, 4 naphthoquinone), present in the leaves of Lawsonia inermis (henna). 
The absorption of this dye in the visible region is mainly due to the electronic transitions occurring in the quinoid ring. Theoretically, it has been established that there is an appreciable percolation of electron density in lawsone molecules through intermolecular hydrogen bonding38. This suggests that lawsone molecules can efficiently transfer electrons upon photo-excitation and therefore are a good potential candidate as a photosensitizer for application in solar cells. The dye in DSSCs plays the vital role of absorbing light to photo-excite electrons into the conduction band of the semiconductor39, significantly influencing the performance of the device. An emerging transformational pathway to enhance light harvesting by the dyes is the utilization of the plasmonic effect of metal nanoparticles33,40,41,42,43.
At specific wavelengths of incident electromagnetic radiation, a cumulative oscillation of electrons is excited within the metal nanoparticles. This phenomenon, known as Surface Plasmon Resonance (SPR) is due to the constructive interference of the incident electromagnetic field with the surface plasmons of the metals44,45. Localized Surface Plasmon Resonance (LSPR), exhibited by bounded metals such as metal nanoparticles, results in enhanced extinction of light and also enhances the electromagnetic field around the metal nanoparticle. This is strongly determined by the size, shape or structure and electric permittivity of the metal nanoparticle and the proximal environment44, therefore by tailoring these properties appropriately, they can potentially be used for improved light harvesting, carrier generation and enhancement of photoelectric conversion efficiency of DSSCs46,47,48,49,50. Noble metals such as gold and silver exhibit plasmon resonances in the visible and infrared region of the electromagnetic spectrum and have been explored by several groups for its application in photovoltaics33,46,48,51,52,53,54,55,56,57,58,59,60. The plasmonic enhancement of efficiency in a solar cell is ascribed to one or more of the following: (i) far-field scattering of incident light by the metal nanoparticles44, (ii) near-field coupling of electromagnetic fields, wherein the metal nanoparticles generate intense electric fields at the surface that are greater than the incident light by several orders of magnitude. These couple with sensitizers that are in the immediate vicinity, resulting in enhanced light absorption44, (iii) Plasmon Induced Resonance Energy Transfer (PIRET), wherein the energy from the metal nanoparticles from the LSPR oscillations is relayed to the semiconductor or proximal dye, thereby promoting the generation of electron-hole pairs and separation of charges61,62,63. Persistent efforts are currently underway by several groups to quantify the relative contribution of each mechanism45,64,65,66,67 in enhancing the efficiency of the device.
Enhancing the light absorption by the dye by harnessing the plasmonic effect of metal nanoparticles is an effective approach for augmenting the photo-electric efficiency of NDSSCs. Silver (Ag) nanoparticles (NPs) that exhibit SPR in the wavelength range of 400 nm − 800 nm68 have been chosen for this study due to their comparatively lower cost, easy availability and facile synthesis procedures. The use of tailored bimodal size distribution of AgNPs for the plasmonic enhancement of absorption by both the dyes is the key feature of this work. The optimum dimensions of AgNPs required for their extinction band to match the absorption band of the dyes were determined using Finite Difference Time Domain (FDTD) simulations, aimed at matching their resonant wavelengths with the absorption bands of the dyes. The simulations demonstrated that to enhance light harvesting by betanin and lawsone, a bimodal distribution of AgNPs of diameters ~ 60 nm and ~ 20 nm needs to be incorporated in the photoanode. This will result in two LSPR peaks that coincide with the absorption peaks of betanin (λmax = 535 nm) and lawsone (λmax = 410 nm) in the co-sensitized photoanode, thereby enhancing the light absorption by these dyes. AgNPs of the optimized dimensions were synthesized by the standard citrate reduction methodology and their spectral properties were systematically investigated using confocal microscopy and spectroscopic methods to match the results from the FDTD simulations. To capture the essential physics demonstrating the plasmonic enhancement in the NDSSCs, a simplified layered structure comprising of NPs embedded in the dye on a TiO2 substrate was simulated. The AgNPs were incorporated into the photoanodes at optimized total concentrations and the performance of the solar cells was evaluated. The change in the electron lifetime and the internal resistances of the solar cells upon plasmonic enhancement was assessed through electrochemical impedance spectroscopy (EIS) studies of the solar cells. A study on the effect of incorporation of AgNPs on the degradation of dyes and the lifetime of the solar cells has also been carried out.
Design of the solar cell
Figure 1 is a schematic illustration of the betanin-lawsone co-sensitized solar cell with a bimodal distribution of 20 nm and 60 nm AgNPs incorporated in the photoanode. The structure of the solar cell follows a standard DSSC model2. A mesoporous TiO2 film which is coated on to the FTO substrate and sensitized with pigments, functions as the photoanode and a Pt-coated FTO substrate as the counter electrode with iodide/tri-iodide redox electrolyte filled in between.
An illustration of the betanin-lawsone sensitized solar cell with the bimodal distribution of 20 nm and 60 nm plasmonic AgNPs incorporated in the photoanode.
For augmenting the solar cell efficiency by plasmonic means, the AgNPs should have a high extinction cross-section coinciding with the absorption bands of the pigments. The optimal dimensions of AgNPs required for this were determined from FDTD simulations by studying the spectral response and the intensity profiles of their electric fields. AgNPs of the desired sizes were synthesized, characterized, incorporated and tested in various solar configurations.
Absorption studies of the dyes
To determine the absorption peaks of interest for the plasmonically augmenting the light absorption by the dyes, their absorption spectra were studied over a wavelength range of 300 nm to 800 nm. Betanin was extracted from beetroot (Beta vulgaris) using ethanol as the solvent and its peak absorbance was detected at 535 nm (Fig. 2a), which is typical of betanin69. The absorption peak is a result of HOMO → LUMO transitions between the betaine moiety and betalamic acid of the betanin molecule. The pair of nonbonding electrons located on the N-atom of the betaine moiety undergoes delocalization with the electrons located on the conjugated π systems. The subsequent n→π* transitions result in the absorption of electromagnetic radiation in the green wavelength range. Lawsone was extracted from Lawsonia inermis using acetone as the solvent and its peak absorbance was determined to be at 338 nm and 410 nm (Fig. 2b), characteristic of lawsone, confirmed with the studies reported in the literature38. The n→π* transitions in the C=O (and the π → π* transitions in the C=C regions present in the quinoidal ring of lawsone correspond to the HOMO → LUMO transitions which result in the absorption in the UV region i.e. 338 nm. The n→π* transitions are localized around non-bonding electrons of the O-atom and the quinoid ring resulting in the absorption peak in the blue region i.e. 410 nm. As observed in Fig. 2c, when sensitized onto TiO2, the absorption peak of betanin shifts to 544 nm. The red-shift of the absorption peaks of the pigment in TiO2 as compared to that in water is due to the higher refractive index of TiO270. When lawsone is sensitized onto TiO2, its absorption peak shifts to 430 nm (Fig. 2c), due to the change in the surrounding medium70 (as observed previously, in the case of betanin as well). Figure 2d shows the 3D molecular structures of (1) betanin and (2) lawsone. Prior studies of the relative NHE (Normal Hydrogen Electrode) levels have confirmed that the LUMO level of both betanin and lawsone, lies above the conduction band of TiO227,71,72, necessary for the ideal functioning of the DSSC.
Absorption spectra of (a) Betanin dye solution (b) Lawsone dye solution and the (c) dye-sensitized TiO2 photoanodes (d) 3D molecular structures of betanin (1) and lawsone (2).
In the present study, both betanin and lawsone are co-sensitized together for broadband absorption. To verify this, both the pigments were coated on TiO2 and the absorption spectrum of the co-sensitized photoanode was also characterized, shown in Fig. 2c. As expected, the peak around 430 nm results from absorption by lawsone and the peak at 544 nm results from absorption by betanin (Fig. 2c). The absorption spectrum of the co-sensitized photoanode is broadband overlapping well with the visible portion of the incident solar radiation36. For augmenting the efficiency of the solar cell, the absorption peaks of the pigments in the photoanode must match the plasmonic scattering peaks of the incorporated AgNPs. To deduce the optimum size of AgNPs that exhibits LSPR at the wavelengths of 430 nm and 544 nm, FDTD simulations have been performed, discussed in the next section.
Determination of optimum dimensions of AgNPs through FDTD simulations
Study of the spectral response of AgNPs in the betanin-lawsone environment
Lumerical v8.6.2 simulation software was used for performing Finite Difference Time Domain (FDTD) simulations of AgNPs in water, betanin-TiO2 environment, lawsone-TiO2 environment, and the betanin-lawsone-TiO2 environment. The spectral response of the AgNPs of sizes ranging from 20 nm – 100 nm was examined under the visible wavelength range (300 nm – 800 nm). From the simulation results, it could be deduced that the 20 nm AgNPs exhibits strong LSPR peak at 429 nm in the lawsone-TiO2 environment and the 60 nm AgNP exhibits a strong LSPR peak at 540 nm in the betanin-TiO2 environment73. These peak wavelengths from the simulation results coincide well with the experimentally obtained absorption peaks of betanin on TiO2, at 542 nm, and that of lawsone on TiO2 at 430 nm (discussed in the previous section), and hence AgNPs of 20 nm and 60 nm are suitable for the plasmonic enhancement of lawsone and betanin, respectively. Figure 3a,b show the extinction, absorption and scattering cross-section plots of the 20 nm and 60 nm AgNPs respectively, in the betanin-lawsone-TiO2 environment obtained from the simulations. Figure 3c shows the combined graph of the scattering spectra of the AgNPs of sizes 20 nm – 100 nm, in the betanin-lawsone-TiO2 environment. It is observed that the scattering cross-section increases with the increasing diameter of the AgNPs and it is higher than the absorption cross-section for sizes larger than 40 nm. This follows the Mie's theory44, which describes the dependence of the absorption cross-section and scattering cross-section with the 3rd power and 6th power of the diameter, respectively. The red-shift of the resonance peaks observed on altering the surrounding medium from water to betanin-lawsone-TiO2 which is due to the greater RI of the latter medium74,75. For AgNPs of diameters greater than 50 nm, a second peak of lower intensity occurs at a lower wavelength region as a result of quadrupole resonance, whose oscillation pattern is different from that of the dipole resonance76. Qs (normalized scattering cross-section) i.e. the scattering cross-section to the geometric cross-section of the nano-sphere is described as;
$${{\rm{Q}}}_{{\rm{s}}}=\frac{{\sigma }_{s}}{{\rm{\alpha }}},$$
where σs represents the scattering cross-section and
$$\alpha =\frac{{\rm{\pi }}}{4}{d}^{2},$$
where 'd' is the diameter of the nanosphere. At resonance, AgNPs exhibit scattering cross-sections much larger than absorption cross-sections demonstrating the benefit of plasmonic enhancements. Qs increases with an increase in diameter till diameter 60 nm, however, it decreases beyond this size with the LSPR peaks becoming less intense and broad. Moreover, AgNPs of larger sizes tend to agglomerate making them unsuitable for this application. Figure 3d, which shows the normalized scattering cross-section with respect to the wavelength of the incident light, demonstrates that maximum scattering occurs for the 60 nm AgNPs and the wavelength of scattering concurs with the absorption maximum of betanin (when sensitized onto TiO2). AgNPs of size 20 nm displays peak absorption and scattering at a wavelength coinciding with the absorption maximum of lawsone (when in TiO2). The intensity profiles of the electric field exhibited by the 20 nm AgNPs and 60 nm AgNPs were studied over the visible spectrum to verify the enhancement at its plasmon resonant wavelength.
Extinction, absorption and scattering cross sections of AgNPs of diameters (a) 20 nm (b) 60 nm (c) scattering spectra of AgNPs from 20 nm – 100 nm (d) normalized scattering cross-section of AgNPs sized 20 nm – 100 nm obtained from the FDTD simulations performed in the betanin-lawsone-TiO2 environment.
Electric field enhancement by the 20 nm and 60 nm AgNPs
From the scattering plots, it is understood that the AgNP of 20 nm size shows an extinction peak at 430 nm, optimum for enhancing the absorption by lawsone in the photoanode, and AgNPs of 60 nm size shows an extinction peak at 540 nm, optimum for enhancing the absorption by betanin in the photoanode. The electric field profiles around the 20 nm and 60 nm AgNPs are investigated in the betanin-lawsone-TiO2 environment at various wavelengths of incident light, ranging from 300 nm to 800 nm.
Figure 4a shows the FDTD layout used for simulation of the 20 nm AgNP in the betanin-lawsone-TiO2 environment. The ON resonance wavelength, i.e. the wavelength at which the electron oscillations are maximum and the intensity of electric field the highest, was found to be at 431 nm (Fig. 4b) for the 20 nm AgNP. Figure 4c shows low electric field intensity profiles around the 20 nm AgNP at 300 nm, which is an OFF resonance wavelength. Figure 4d shows the FDTD layout used for simulation of the 60 nm AgNP in the betanin-lawsone-TiO2 environment. The ON resonance wavelength for the 60 nm AgNP was found to be at 544 nm (Fig. 4e). The electric field intensity profile of the nanoparticle at an OFF resonant wavelength of 300 nm has been shown in Fig. 4f. At ON resonance, the magnitude of electric field intensity is enhanced than the incident field by a factor of 14, exhibiting an intense plasmonic effect.
(a) FDTD simulation set-up of the 20 nm AgNP in the betanin-lawsone-TiO2 environment with a Total Field Scattered Field (TFSF) light source of wavelength range 300 nm – 800 nm. Electric field (magnitude) profiles of the 20 nm AgNP at (b) ON resonance (431 nm) and (c) at OFF resonance (300 nm). (d) FDTD simulation set-up of the 60 nm AgNP in the betanin-lawsone-TiO2 environment with a Total Field Scattered Field (TFSF) light source of wavelength range 300 nm – 800 nm. Electric field (magnitude) profiles of the 60 nm AgNP at (e) ON resonance (544 nm) and (f) at OFF resonance (300 nm).
In the case of the 20 nm AgNPs, the electric field observed inside the metal nanoparticle is because the size of the particle is of the order of skin depth of silver at this wavelength of incident light44. It may also be observed that the dipole observed around the 20 nm AgNP is less distinct. This may be explained by the following: The applied field induces a dipole moment;
$${\rm{p}}={\varepsilon }_{{\rm{m}}}\alpha {{\rm{E}}}_{0,}$$
where εm is the complex permittivity of the metal, E0 is the amplitude of the electric field and α is the polarizability, which the measure of how easily charge within a particle may be displaced on the application of electric field44,77,78,79. The polarizability of a particle is proportional to the 3rd power of its diameter77,78,79. Since the 20 nm AgNP is very small, it shows low polarization and low dipole moment. Also, they are highly absorbing and have low scattered fields, as explained by Mie's theory44.
The ON resonance wavelengths of both 20 nm and 60 nm AgNPs are close to the absorption peaks of lawsone and betanin, respectively. These studies demonstrate that AgNPs of diameters 20 nm and 60 nm show enhanced absorption, scattering at the desired wavelength ranges and can be used to enhance the light-harvesting by lawsone and betanin, respectively. AgNPs of the optimized sizes of 20 nm and 60 nm were synthesized in-house and characterized before testing in the solar cells.
Experimental studies of the synthesized AgNPs
The absorption spectrum of the colloidal suspensions of the synthesized 20 nm and 60 nm AgNPs were measured across a 300 nm – 800 nm wavelength range and compared with the data determined from the simulations. Figure 5a shows the combined absorption plots of AgNPs in water for diameters ranging from 20 nm – 100 nm derived from the FDTD simulations73. This matches experimental data available in the literature80,81,82, hence validating the simulation set-up used in this study. To verify the size and optical properties of the synthesized AgNPs, they were observed using confocal microscopy and their measured absorption curves were matched against the simulated curves.
(a) Absorption spectra of AgNPs in water for diameters ranging from 20 nm to 100 nm obtained from FDTD simulations73. (b) Confocal micrographs of the 20 nm and 60 nm AgNPs. Comparison of simulated and measured absorption: (c) AgNPs of 20 nm diameter and (d) AgNPs of 60 nm diameter showing concurrence with the absorption maxima obtained experimentally.
The synthesized AgNPs were studied using Confocal Laser Scanning Microscopy (LSM-880). The system employed a 405 nm laser and a 458 nm laser for studying the 20 nm AgNPs and 60 nm AgNPs, respectively. The confocal laser scanning micrographs of a single 20 nm and 60 nm AgNP are shown in Fig. 5b. From the image, it can be observed that the AgNPs appear slightly bigger than their expected sizes (determined from the absorbance curves). This size difference is due to the scattering by the nanoparticles occupying more pixels in the image than the nanoparticle itself83. This is particularly helpful in the detection of nanoparticles of extremely small sizes such as the 20 nm AgNPs. Image analysis to determine the nanoparticle size was performed using ImageJ/Fiji84 using the thresholding methodology83,85,86 of the pixel brightness across the diameter of the scattered cross-section observed in the image. Since the array of pixels image a single nanoparticle, counting of the pixels and scaling it with respect to the scale bar of the image would provide an accurate estimation of the total diameter of the observed scattering cross-section of the nanoparticle83. The histogram of the pixel brightness across the selected cross-section showed a sudden drop in brightness close to the edges of the cross-section (Fig. 5b). The section of the particle up to which the pixel brightness remains relatively constant is the approximate diameter of the nanoparticle (this excludes the pixels showing scattered light around the nanoparticle). Three AgNPs from the 20 nm and 60 nm sample sets showed sizes of 21.7 ± 2.62 nm and 60.7 ± 1.89 nm respectively. For example, Fig. 5b shows that the measured sizes of one of the nanoparticles from each sample set to be 23 nm and 62 nm respectively.
Figures 5c,d show the measured absorption spectra of the colloidal suspension of the synthesized 20 nm and 60 nm AgNPs (in water), showing absorption maxima at 408 nm and 445 nm, respectively. By matching the experimental and simulated absorption spectra, that the sizes of the prepared AgNPs were deduced to be 20 nm and 60 nm, which is also verified by the highly controlled methodology used to synthesize them80. It is observed that although the absorption peaks from the theoretical and experimental results show a good match, the curves do not entirely coincide. This deviation could be because, under experimental conditions, an additional influence of multiple neighboring AgNPs in the chosen environment is expected to have on each other whereas in the simulations only a single AgNP in the chosen environment has been considered. Besides, the simulations use Ag material parameters for the bulk model, though in the nano-scale, the material specifications for AgNPs prepared via solution-processed methods, may slightly differ87. The capping agent used for the synthesis could cause result in the broadening of the peaks88. Despite adequate precautions followed to guarantee the creation of extremely monodisperse AgNPs according to the prescribed methods80, AgNPs of slightly varied sizes may be formed, which could result in the measured absorption curve to be slightly broader compared to the simulated curve.
Study of the photoelectrodes for plasmonic enhancement
Plasmonic dye-sensitized photoelectrodes were fabricated from the synthesized AgNPs and characterized via absorption and diffuse reflectance studies (see Supplementary Information).
Figure 6a–c show the betanin-lawsone co-sensitized photoanodes incorporated with monomodal and bimodal distributions of the 20 nm and 60 nm AgNPs. The plasmonic enhancement resulting in increased absorption by the dyes is observed at their corresponding absorption peaks due to the LSPR effect of the AgNPs.
Plasmonic enhancement in absorption spectra of the respective dyes observed (from measurements) in the betanin-lawsone (BL) co-sensitized photoanode on incorporation of monomodal distributions of the (a) 20 nm AgNPs (b) 60 nm AgNPs and (c) bimodal distribution of both 20 nm and 60 nm AgNPs (d) FDTD simulation set-up for the plasmonic betanin solar cell and the plasmonic lawsone solar. Mapping of the power absorption density (logarithmic scale) in the (e) plasmonic betanin photoelectrode and the (f) plasmonic lawsone photoelectrode. Comparison of power absorption density (from simulations) in the non-plasmonic photoelectrodes with that in the (g) plasmonic betanin solar cell and the (h) plasmonic lawsone solar cell. (i) The absorption enhancement (with respect to the non-plasmonic configuration) in the plasmonic betanin photoelectrode and the plasmonic lawsone photoelectrode (from simulations).
The plasmonic and non-plasmonic dye-sensitized photo-electrodes were simulated to understand the enhancement in power absorbed in the pigments due to the AgNP incorporation. Figure 6d shows the simulation layout used for the FDTD simulations. The power absorbed within the dye layer was calculated using appropriate script functions in Lumerical® for the betanin and lawsone photoelectrodes with and without the incorporation of AgNPs. Figures 6e,f show the intensity profiles of the optical absorption per unit volume at a selected cross-section of the plasmon-enhanced dye-sensitized photoelectrodes at the SPR wavelengths of the AgNPs incorporated. The absorption density is plotted on a logarithmic scale using the same color scale to facilitate the comparison. Figure 6e shows enhanced absorption into the betanin dye around the 60 nm AgNP at its SPR wavelength (i.e. 544 nm). Likewise, Fig. 6f shows enhanced absorption into the lawsone dye close to the 20 nm AgNP at its SPR wavelengths (i.e. 431 nm). The results indicate that the presence of the 60 nm and 20 nm AgNPs increases absorption by the dyes due to the enhanced scattering and absorption due to the LSPR of the AgNPs in the betanin and lawsone photoelectrodes. It may also be observed that the enhancement in the absorption by the dye due to scattering is higher in the plasmonic betanin photoelectrode compared to the plasmonic lawsone photoelectrode, whereas in the case of the plasmonic lawsone photoelectrode although some enhancement upon absorption by the dye due to scattering by the nanoparticle is seen, the absorption by the 20 nm AgNP itself is higher. This is because the 60 nm AgNP scatters more than it absorbs whereas the 20 nm AgNP absorbs more than it scatters (as mentioned previously).
Figures 6g,h show the power absorption density across the visible wavelength range, by the plasmonic betanin-sensitized photoelectrode and the plasmonic lawsone-sensitized photoelectrode respectively compared against their corresponding non-plasmonic configurations. About 30% absorption enhancement can be observed at the LSPR peak of the plasmonic betanin photoelectrode and about 15% absorption enhancement can be observed at the LSPR peak of the lawsone-TiO2 photoelectrode. The absorption enhancement g(λ) for the plasmonic dye-sensitized photoelectrodes calculated with respect to the corresponding non-plasmonic dye-sensitized photoelectrodes (shown in Fig. 6i. g(λ)) compares the efficiency of the plasmonic photoelectrode with a non-plasmonic photoelectrode and is defined as89,90;
$${\rm{g}}(\lambda )=\frac{QE{(\lambda )}_{plasmonic}}{QE{(\lambda )}_{bare}},$$
where QE (λ) or quantum efficiency is defined as89,90;
$${\rm{QE}}(\lambda )=\frac{P{(\lambda )}_{abs}}{P{(\lambda )}_{in}},$$
where P(λ)abs and P(λ)in are the power of absorbed light by the photoelectrode and the power of the incident light on the photoelectrode, respectively, at a wavelength λ89,90.
The simulation results approximately indicate an enhancement in the dye absorption in the presence of the AgNPs in both cases. It should be noted that the representation of the dye-sensitized photo-electrodes used in the simulations is not the most accurate representation of the mesoporous nature of the TiO2 layer and the chemical interaction of the dyes with it. Nevertheless, the demonstrated absorption enhancement will translate to an increase in current density in the plasmonic solar cells, and a combined increase is expected with the bimodal distribution of 20 nm and 60 nm AgNPs in the betanin-lawsone co-sensitized plasmonic solar cell.
Performance characteristics of the plasmonic solar cells
Current density-voltage measurements
To test the effect of plasmon enhancement in the photoelectric conversion efficiency, the plasmonic solar cells of various configurations were assembled and their J-V characteristics were evaluated under a standard AM 1.5 G illumination condition of 1000 W.cm−2. The performance and the conversion efficiency the solar cells were evaluated and compared by determining the short-circuit current density (Jsc), open-circuit voltage (Voc), fill factor (FF), and conversion efficiency (η), which are determined from the J–V characteristic curves of the cells. First, the optimum order for sensitization of the pigments was determined by coating the pigments in the below-described sequences: (i) betanin-lawsone, (ii) lawsone-betanin (iii) pre-mixed blend of betanin and lawsone at 1:1 ratio (v/v). It was observed that the pre-mixed solution gave better results (average η of 0.793%) in comparison with the solar cells prepared by sensitizing the pigments sequentially (average efficiencies of 0.791% and 0.773% obtained for configuration (i) and (ii) respectively). Each configuration was prepared in quintuplicates and their efficiencies were assessed (Fig. 7a). Next, the optimal concentration of the AgNPs required for the best functioning of the solar cell was determined. The colloidal suspensions of 20 nm and 60 nm AgNPs were mixed with the TiO2 paste at (v/v) concentrations varying from 1–5%. It was found that 1% and 4% were the optimum concentrations of 60 nm AgNPs and 20 nm AgNPs to be incorporated with betanin and lawsone, individually. Next, the optimum overall concentration for the incorporation of both sets of nanoparticles was determined by fabricating betanin-lawsone co-sensitized photoanodes with the bimodal distribution of 20 nm and 60 nm AgNPs at a ratio of 1:1. The overall concentration of the nanoparticles was varied from 1% to 5% (v/v) and the efficiencies of the solar cells were evaluated. The best performance was observed with the bimodal distribution of AgNPs at an overall concentration of 2%, clearly evident from Fig. 7b. The photoelectric conversion efficiencies of 3 samples prepared with each concentration are shown in Fig. 7b. Finally, the ratio of the distribution of the 20 nm and 60 nm AgNPs was optimized for the betanin-lawsone co-sensitized solar cell. Here, the overall concentration (v/v) was kept constant at 2% and the ratio of the 60 nm to the 20 nm AgNPs was varied as 1:1, 1:2, 1:3, 1:4, 1:5 (as it was previously determined that the 20 nm nanoparticles were required to be at higher concentration compared to the 60 nm AgNPs). The optimum ratio of the 60 nm AgNPs to the 20 nm AgNPs, was found to be 1:4. The photoelectric conversion efficiencies of 5 samples fabricated with each ratio are shown in Fig. 7c. Figures 7a–c show the distribution of the solar cell efficiencies of quintuplicate samples of each configuration. Following the optimization, the various solar cell configurations (described subsequently) were prepared and tested: (i) betanin-lawsone solar cell, (ii) betanin-lawsone solar cell with 20 nm AgNPs (iii) betanin-lawsone solar cell with 60 nm AgNPs and (iv) betanin-lawsone solar cell with 20 nm and 60 nm AgNPs. (The performance of solar cells individually sensitized with only betanin and lawsone pigments has been described in our earlier publications27,72) The J-V and P-V curves of the highest performing sample in each type of solar cell are shown in Fig. 7d,e. 
Figure 7f illustrates the distribution of the solar cell efficiencies of quintuplicate samples of the various solar cells investigated. Table 1 lists the average photovoltaic performance parameters of these solar cell configurations.
Averaged values of the performance parameters along with the standard deviations are shown for the solar cell configurations tested for the optimization of (a) Order of sensitization (b) Nanoparticle concentration in betanin-lawsone solar cells containing the 20 nm and 60 nm Ag nanoparticles. (c) The ratio of the 20 nm and 60 nm AgNPs in the bimodal distribution incorporated in the betanin-lawsone solar cell. (d) Photocurrent density-voltage (J-V) curves and (e) Power density-voltage (P-V) characteristic curves of the best performing betanin-lawsone co-sensitized DSSCs, with and without the incorporation of the AgNPs (f) Box-plot showing averaged values of the parameters along with the standard deviations for the four solar cell configurations with > 5 samples of each configuration. (g) Equivalent circuit model. (h) Nyquist plots of the betanin-lawsone solar cell, the betanin-lawsone solar cell with 20 nm AgNPs, the betanin-lawsone solar cell with 60 nm AgNPs and the betanin-lawsone solar cell with 20 nm and 60 nm AgNPs (i) Photographs of the non-plasmonic betanin-lawsone co-sensitized solar cell and the bimodal plasmonic betanin-lawsone co-sensitized solar cell.
Table 1 Average performance characteristics of the optimized configuration of the solar cells.
The enhancement in Jsc in the plasmon-enhanced DSSCs, clearly evident from Fig. 7d can be attributed to the enhanced light absorption of betanin and lawsone due to localized surface plasmons by the bimodal distribution of AgNPs. This demonstrates a significant improvement in current density by ~ 20.1%, a slight increase in voltage by ~ 5.5% and an enhancement in photoelectric conversion efficiency by ~ 28.6% in the bimodal plasmonic betanin-lawsone solar cell. The performance enhancement in the plasmonic solar cells is attributed to the LSPR effect of the AgNPs enabling enhanced absorption by the pigments thereby increasing the current generation. A 5.6% increase in Voc and a 5.5% increase in Vmax was observed for the bimodal plasmonic solar cell configurations with respect to the non-plasmonic solar cell. The increase of Voc could be ascribed to the more negative level of the quasi-Fermi energy of AgNP TiO2 photo-electrode driven by the added AgNPs91,92 (the potential difference between the Fermi energy level of the photo-electrode and the redox potential of the electrolyte determines the Voc). It has been understood from the literature that better photoelectron production also increases the photovoltage capacity of the solar cells93,94. The increase in Voc has been attributed to the photo charging effect, produced by the electron storage in metal nanoparticles, thus driving the Fermi level to more negative potentials93,94. Other factors such as greater impedance to recombination, longer electron lifetime and facile charge separation could also drive a negative shift of the Fermi level in turn benefiting Voc. However, the exact mechanism contributing to the Voc enhancement needs further investigation. It has been reported in the literature that the incorporation of metal nanoparticles at optimum concentrations in metal-oxide-semiconductor electrodes (such as TiO2) can result in an upward shift of its valence band edge, reduce the bandgap energy of the material and enhance the short-circuit current in DSSCs91.
Electrochemical Impedance Spectroscopy (EIS)
Electrical impedance spectroscopy (EIS) enables an in-depth understanding of the various physical processes that occur in DSSCs. The quintessential Nyquist plots of DSSCs exhibit 2 semicircles of which, the semicircle having the greater real part is correlated with the electron recombination processes occurring at the interface between the photoelectrode and the electrolyte, and the semicircle with the lesser real part is correlated with the electron transfer activities at the interface between Pt counter electrode and the electrolyte95,96,97. The EIS parameters are extracted by fitting the measured Nyquist plots into an equivalent circuit model96,98 (shown in Fig. 7g) that mimics the physical processes occurring in a DSSC. Fitting of the measured data into the model for each solar cell configuration was performed model using the Z-Fit function in the E-C Lab® software. χ2 (Chi-Square) values of less than ~ 10−4 (with an error of less than 1%) were maintained to ensure the best fitting. The impedance Z(f) of the equivalent circuit model used for describing DSSCs is expressed as96:
$$Z(f)={R}_{s}+\frac{{R}_{Pt}}{{R}_{Pt}{Q}_{Pt}{(j2\pi f)}^{{\alpha }_{1}}+1}+\frac{{R}_{Ti{O}_{2}}}{{R}_{Ti{O}_{2}}{Q}_{Ti{O}_{2}}{(j2\pi f)}^{{\alpha }_{2}}+1},$$
where Rs is the series ohmic resistance of the FTO including the transport loss in the electrolyte layer; RPt and QPt are the electron transport resistance and the capacitance respectively, at the Pt electrode/electrolyte interface; RTiO2 is the resistance to recombination and QTiO2 is the chemical capacitance of the double layer at the dye-TiO2 photoanode/electrolyte interface. Q denotes the Constant Phase Element (CPE), which describes non-ideal capacitive processes that are typically observed in systems such as DSSCs96, where the CPE index represented by α deviates from the ideal value i.e 198. The time constant distributions arising from the heterogeneity of the surface roughness and the surface energy at the interfaces result in this type of behavior. α1 and α2 in (6) are the CPE indices correlated with non-ideal processes occurring interfaces between the Pt-electrode and the and the dye/TiO2-photoelectrode and the electrolyte respectively. The extracted EIS parameters are summarized in Table 2. Figure 7h shows the comparison of the Nyquist plots of the 4 configurations of solar cells. Figure 7i shows the bimodal plasmonic and non-plasmonic photo-electrodes fabricated.
Table 2 Parameters determined from EIS spectra of the fabricated DSSCs.
In this study, the photoanode is being modified and hence is of primary interest, therefore, the second arc in the Nyquist plots, which corresponds to the electron transport and recombination process within dye-TiO2 photoanode/electrolyte interface is analyzed (moreover, from the observed results the first arc correlated with the Pt-electrode/electrolyte interface does not change significantly in the all the solar cell configurations as the composition of the counter electrode is the same in all). One of the major limiting factors for the performance of DSSCs is the electron-recombination during their transport across the photoelectrode99. Previous studies have reported the minimization of recombination processes in DSSCs due to the "electron-sink effect" caused by metal nanoparticles100. This phenomenon occurs by the formation of a Schottky barrier at the interface of the metal nanoparticle and the semiconductor (in this case, the Ag/TiO2 interface). The Schottky barrier results in the storage of electrons which are subjected to charge-equilibration with the photo-excited semiconductor, thereby driving the Fermi level towards increasingly negative potentials101. This results in an electron sink that inhibits recombination processes in the dye-sensitized photoanode102,103,104.
For the ideal performance of a solar cell, high impedance to electron-recombination is a requirement. The lifetime of electrons (τ) in the photoelectrode is proportionate to the impedance to recombination at the dye-TiO2 photoanode/electrolyte interface. Higher impedance to recombination and capacitance implies higher electron lifetimes and a reduced possibility of electron-recombination activities at the dye-TiO2 photoanode/electrolyte interface105. The efficiency of the cell will reduce if the recombination of electrons takes place in the dye-TiO2 photoanode/electrolyte interface. Therefore, a higher value of τ is favorable. Table 2 lists the electron lifetimes of the various solar cells determined by the following relationship96:
$$\tau ={(Q\ast R)}^{1/{\alpha }_{2}}.$$
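Here Q and R are understood as the CPE parameters of the photoanode/electrolyte arc (QTiO2 and RTiO2) and α2 as its CPE index. A one-line helper makes the conversion explicit; the numbers passed below are placeholders, not the values reported in Table 2.

```python
def electron_lifetime(Q, R, alpha):
    """tau = (Q*R)**(1/alpha), using the CPE parameters of the dye-TiO2/electrolyte arc."""
    return (Q * R) ** (1.0 / alpha)

tau_s = electron_lifetime(Q=5e-3, R=60.0, alpha=0.95)   # illustrative values, result in seconds
```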
The betanin-lawsone solar cell with the bimodal distribution of 20 nm and 60 nm AgNPs shows a higher recombination resistance (RTiO2 = 75.67 Ω) than that observed in the non-plasmonic solar cell (RTiO2 = 57.83 Ω). The higher value in the plasmonic configuration is attributed to the lower charge recombination and improved charge transfer due to the presence of AgNPs. The electron lifetime determined for the plasmonic solar cell is 612.79 ms, which is higher than the lifetime of 313.79 ms calculated for the non-plasmonic solar cell. A longer electron lifetime corresponds to a lower recombination rate, thereby resulting in a higher current. This is also reflected in the higher photovoltaic conversion efficiency of the plasmonic solar cell with the bimodal distribution of the AgNPs. The Nyquist plots of the solar cell assemblies with only a monomodal distribution of either 20 nm or 60 nm AgNPs were also studied for comparison. As expected, their RTiO2 values and electron lifetimes were found to be intermediate between those of the above-discussed configurations.
The betanin-lawsone solar cells comprising the bimodal distribution of the 20 nm and 60 nm AgNPs demonstrate the longest electron lifetime of 612.79 ms. The longer electron lifetime exhibited by the plasmonic solar cells implies that, by incorporating AgNPs in the photoanode, the electron transfer mechanism at the dye-TiO2 photoanode/electrolyte interface has also been improved. This can be attributed to additional conducting pathways created by the presence of metal nanoparticles within the TiO2 mesoporous structure, as confirmed by the higher current densities and better solar cell efficiencies observed in the bimodal plasmonic betanin-lawsone solar cell.
A brief study was carried out to understand the effect of AgNP incorporation on the photocatalytic degradation kinetics of the pigments betanin and lawsone and on the lifetime of the solar cells (see Supplementary Information). The results showed that the efficiency decrease normally observed in NDSSCs due to the PCA of TiO2 is further accelerated by the incorporation of the AgNPs, slightly negating the positive effect of the nanoparticles on efficiency enhancement. The photodegradation of the pigments from the photocatalytic action of AgNP-TiO2 is found to be larger than that of TiO2 alone, due to the accelerated formation of reactive species by TiO2 in the presence of moisture and oxygen. These studies emphasize that, to effectively harness the plasmonic effect of nanoparticles for efficiency enhancement of natural dyes in solar cells, the lifetime and stability of the NDSSCs need to be significantly improved. Hence, it is necessary to develop and optimize robust no-heat sealing techniques that suit the fabrication process of NDSSCs. Most of the presently used components in DSSCs are designed for synthetic dyes and hence are not optimal for use in NDSSCs. The breakthrough for further augmenting the performance and efficiencies of NDSSCs would be to develop alternative materials that are more compatible with natural dyes. The photocatalytic degradation of natural dyes can be controlled by using adequate stabilizers and by adopting effective encapsulation techniques similar to those used in OLED fabrication, which prevent permeation of oxygen or water molecules (such as glass-lid encapsulation with desiccants, Barix multilayer encapsulation using low-temperature deposition methods, and photocurable silica-nanoparticle-embedded nanocomposites for encapsulation106,107).
The results of these studies are promising enough to drive future research on improving the stability of such solar cells and further enhancing their efficiency using stacked/mixed configurations involving multiple dyes and nanoparticles tailored to enhance the absorption of each dye. The optimization framework for the plasmonic enhancement of the sensitizers described here can be easily adapted for augmenting the efficiencies of several reported DSSCs.
FDTD simulations of the silver nanoparticle
The optimal sizes of AgNPs for the desired LSPR effect were deduced using Finite Difference Time Domain (FDTD) simulations performed with the Lumerical® v8.6.2 software (Lumerical Solutions, Inc.)108. This is a computational tool that employs the discretization of Maxwell's equations in a three-dimensional space grid to simulate the interaction of electromagnetic waves with materials based on Yee's algorithm109. Ag parameters from the Palik material database110 were used to perform the 3D simulations. The surrounding environment was set as water, betanin-TiO2, lawsone-TiO2, or betanin-lawsone-TiO2 as required for each simulation. For simulating the water environment, the refractive index (RI) was set as 1.33111; for the betanin-TiO2 environment, an average refractive index of 1.76 (average RI of anatase TiO2 and betanin) was set; and for the lawsone-TiO2 environment, an average refractive index of 1.76 (average RI of anatase TiO2 and lawsone) was set. Likewise, for the betanin-lawsone-TiO2 environment, an average refractive index of 1.74 (average RI of anatase TiO2, betanin, and lawsone) was set (RI of anatase TiO2 annealed at 400–500 °C = 1.82112, RI of betanin = 1.7113, and RI of lawsone = 1.7114). Perfectly Matched Layer (PML) boundary conditions, designed to absorb all the outgoing waves, were applied on all the boundaries of the simulation region. The mesh size of the simulation volume was set at 0.3 nm. A Total Field Scattered Field (TFSF) light source with a wavelength range of 300 nm to 800 nm was used. By using the TFSF source, the simulation region can be demarcated into two distinct regions – one region with the total field (the sum of the incident and scattered fields), and the other region with only the scattered field. The incident light was injected along the forward direction of the y-axis at a polarization angle of 0°. The "power monitor" analysis groups were positioned in the scattered-field region and the total-field region to determine the scattering and absorption cross-sections. To observe the electric field intensity profiles of the AgNPs, "frequency profile monitors" were positioned in the total-field region. The extinction, absorption and scattering spectra of AgNPs with sizes ranging from 20 nm to 100 nm (varied in steps of 10 nm) were derived through the simulations to determine the optimum dimensions required for the plasmonic scattering to coincide with the absorption maxima of the pigments. Optimal dimensions of 60 nm for the plasmon enhancement of betanin and 20 nm for the plasmon enhancement of lawsone were determined. To quantify the electric field enhancement due to LSPR, the intensity profiles of the electric field around the AgNPs of optimized dimensions were observed under incident light in the wavelength range 300 nm – 800 nm to determine their ON and OFF resonant wavelengths.
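Since the full FDTD setup requires the commercial solver, a quick analytical sanity check of the nanoparticle response can be made with the quasi-static (dipole) approximation for a small sphere, using a simple Drude model for silver. This is not the method used in the study (the simulations rely on Palik optical constants and full-wave FDTD), and the dipole limit misses the retardation-induced red-shift and broadening of the larger 60 nm particles; the Drude parameters below are common textbook values and should be treated as assumptions.

```python
import numpy as np

HBAR_EV = 6.582e-16   # eV*s
C = 2.998e8           # m/s

def eps_ag_drude(lam_nm, eps_inf=5.0, wp_eV=9.0, gamma_eV=0.021):
    """Rough Drude permittivity of silver (textbook parameters, not Palik data)."""
    omega = 2 * np.pi * C / (lam_nm * 1e-9)       # angular frequency, rad/s
    E = HBAR_EV * omega                           # photon energy in eV
    return eps_inf - wp_eV**2 / (E**2 + 1j * gamma_eV * E)

def dipole_cross_sections(lam_nm, diameter_nm, n_medium=1.74):
    """Quasi-static scattering and absorption cross sections (m^2) of a sphere."""
    a = 0.5 * diameter_nm * 1e-9
    eps_m = n_medium**2
    eps = eps_ag_drude(lam_nm)
    alpha = 4 * np.pi * a**3 * (eps - eps_m) / (eps + 2 * eps_m)   # dipole polarizability
    k = 2 * np.pi * n_medium / (lam_nm * 1e-9)
    return k**4 / (6 * np.pi) * np.abs(alpha)**2, k * np.imag(alpha)

lam = np.linspace(300, 800, 501)
for d in (20, 60):
    c_sca, c_abs = dipole_cross_sections(lam, d)
    print(f"{d} nm AgNP: dipole-limit extinction peak near {lam[np.argmax(c_sca + c_abs)]:.0f} nm")
```

In this dipole limit the resonance position is size-independent (only the amplitude scales with particle volume); the size-dependent spectral tailoring exploited in the study comes from retardation and higher-order modes, which is precisely why full FDTD (or Mie) calculations are required for the 60 nm particles.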
FDTD simulations of the non-plasmonic and plasmonic photoelectrodes
The simulation setup for one unit cell of the structure consists of a TiO2 substrate of 2 µm × 2 µm × 2 µm coated uniformly with a 0.1 µm dye layer embedded with the AgNPs, in contact with the TiO2 layer (Fig. 6d). The n, k material data for TiO2 from DeVore (1951)115 were used to describe the substrate, and the parameters for the AgNPs were chosen from the Palik database110. The n, k optical constants for betanin and lawsone were calculated (using the Swanepoel methodology116) from the reflectance and transmittance data available in the literature117,118 and were used to describe the dye layer in the betanin and lawsone solar cells. PML conditions were applied on all boundaries to absorb the transmitted and reflected fields, and a mesh size of 0.25 nm was set. A plane-wave light source with a wavelength range of 380 nm to 780 nm was used. To calculate the power absorbed/scattered into the dye, two power monitors were placed: one on top of the dye, and the other at a distance of 0.01 µm above the dye-TiO2 interface. The "power absorbed" analysis group was used to determine the power absorption density (W.µm−3). The absorption enhancement was calculated using appropriate script functions for the simulation setups with and without the bimodal distribution of AgNPs. The configurations of photoelectrodes studied here are: (i) 60 nm AgNPs on the betanin-TiO2 photoelectrode and (ii) 20 nm AgNPs on the lawsone-TiO2 photoelectrode.
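Once the absorbed power density has been exported from the two monitor configurations, the enhancement factor reduces to a ratio of wavelength-integrated absorbed power over the 380–780 nm window. A minimal post-processing sketch is given below; the array names are placeholders for the exported monitor data, not variables defined by the solver.

```python
import numpy as np

def integrate_spectrum(y, lam_nm):
    """Trapezoidal integral of a spectral quantity over wavelength."""
    y, lam_nm = np.asarray(y, float), np.asarray(lam_nm, float)
    return float(np.sum(0.5 * (y[:-1] + y[1:]) * np.diff(lam_nm)))

def absorption_enhancement(lam_nm, p_abs_with_np, p_abs_bare):
    """Ratio of power absorbed in the dye layer with and without the AgNPs."""
    return integrate_spectrum(p_abs_with_np, lam_nm) / integrate_spectrum(p_abs_bare, lam_nm)
```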
Synthesis of silver nanoparticles
AgNPs of the optimized dimensions were prepared using the citrate reduction method optimized by Bastus et al.80. The procedure uses silver nitrate and trisodium citrate as the precursors. The chemical reaction of the synthesis can be expressed as follows: 4Ag+ + C6H5O7Na3 + 2H2O → 4Ag° + C6H5O7H3 + 3Na+ + H+ + O2↑119. 50 mL of an aqueous solution containing 5 mM sodium citrate and 0.1 mM tannic acid (de-ionized (DI) water was used to prepare all aqueous solutions) was prepared in a three-necked round-bottomed flask and heated to 110 °C in a rota-mantle for 15 min with continuous stirring. A reflux condenser with circulating cooling water was used to prevent evaporation of the solvent. To prepare Ag seeds of the appropriate size required for the synthesis of the 20 nm and 60 nm AgNPs, tannic acid at a concentration of 0.1 mM was used80. 625 μL of AgNO3 (20 mM) was fed into the solution after it began boiling, turning the solution yellow. Following this, the seed solution was diluted by removing 10 mL of the sample and adding 8.5 mL of DI water. Subsequently, the temperature of the solution was brought down to 90 °C, and 250 μL of sodium citrate (25 mM), 750 μL of tannic acid (2.5 mM), and 625 μL of AgNO3 (20 mM) were sequentially added using a micropipette with a time delay of ~ 1 min between additions. This methodology, optimized by Bastus et al.80, results in monodisperse AgNPs with < 10% standard deviation in their size distributions80. Repeating this growth step 3 times and 9 times, respectively, at intervals of 30 min between steps results in a step-by-step, pre-calculated increase in the diameter of the AgNPs80, yielding AgNPs of the desired sizes: 20 nm and 60 nm. The optical properties of the colloidal solutions of the synthesized 20 nm and 60 nm AgNPs were characterized using spectroscopic methods, and the nanoparticles were later incorporated into the photo-anodes of the solar cells.
Preparation of the solar cell
Fluorine-doped tin oxide (FTO) substrates (15 mm × 15 mm × 2 mm; resistivity < 10 Ω.cm−1 and transmittance > 83%; Solaronix®, Switzerland) were thoroughly cleaned, and the TiO2 paste was prepared according to the methods optimized by Ito et al.120. The 20 nm and 60 nm AgNPs were homogeneously mixed with the TiO2 paste at optimized concentrations through ultra-sonication. The photoanodes for the four different solar cell configurations were prepared by dispensing the homogeneous suspension of the TiO2 paste mixed with the 20 nm and 60 nm AgNP suspensions at appropriate concentrations onto the FTO substrates by the doctor-blading technique, followed by annealing at 450 °C for 1 h120. The concentrations of nanoparticles and their ratios were optimized based on solar cell performance and have been described in the section Performance characteristics of the plasmonic solar cells. Before the TiO2 coating, the prepared 20 nm and 60 nm AgNP suspensions of optimized dimensions were spin-coated for 5 s at a rotational speed of ~ 2000 rpm onto the FTO substrates to enable better light harvesting. For the control solar cell, the photoanode was prepared by coating pristine TiO2 (without AgNPs) and annealed simultaneously under identical conditions. The thickness of the coating was verified to be an optimal 20–25 µm120, observed via the Z-stacking method of imaging using an optical microscope (Carl Zeiss SteREO Discovery V20). Betanin and lawsone pigments were extracted and purified from Beta vulgaris slices and Lawsonia inermis leaves, respectively, by following procedures described in the literature12,121. Factors such as solvent, temperature, time and pH were optimized72. The photo-anodes were left immersed in the purified pigment extracts for 24 h in the dark for proper adsorption, following which they were gently rinsed with ethanol to remove debris and dried. For optimum performance, pre- and post-TiCl4 treatments of the photo-anodes were carried out to ensure the best efficiencies of the solar cells (according to optimized methods described in the literature12,120). Pt-coated FTO substrates (Solaronix®) were used as the counter electrodes. A solution of LiI/I2 (1.0 M/0.1 M) in NMP (N-methyl-2-pyrrolidone) was mixed with 6% (w/w) PVDF (polyvinylidene fluoride) to make the gel electrolyte122, which was used in all the solar cells. A spacer with a thickness of 35 µm was attached around the coated portion of the photo-electrodes. This facilitates filling of the electrolyte to the optimum thickness (35 µm) and also protects the TiO2-coating-free portion of the FTO substrate from short-circuiting on contact with the counter electrode. The Pt-FTO counter electrode was assembled over this (with the Pt coating positioned to face the inside of the assembly) to build a standard DSSC sandwich configuration. Subsequently, the electrolyte was injected into the solar cell assembly via a 0.5 mm diameter hole drilled in the counter electrode and sealed. A photo-active area of 0.25 cm2 was defined using a black mask for all the prepared solar cells. The samples were prepared in quintuplicate to confirm the repeatability and reproducibility of the performance realized. The NDSSC fabrication described above was performed at room temperature under ambient laboratory conditions.
Characterization and measurement
A solar simulator (Newport Corporation, Oriel® Class AAA) and a Keithley® 2440 5 A source-meter (Keithley, Inc.) were employed to determine the current density-voltage characteristics of the solar cells. The performance characteristics of the solar cells were evaluated under 1 sun illumination (AM 1.5 G standard test conditions at 1000 W.m−2). A UV-Vis spectrophotometer (Shimadzu® UV-2400 PC Series) was employed to study the absorption spectra of the liquid and solid samples and the diffuse reflectance of the solid samples across a wavelength range of 300 nm – 800 nm at a sampling interval of 0.5 nm with the slit width set at 5 nm. Confocal laser scanning microscopy (LSM-880, Axio Observer; Carl Zeiss®, Germany), which provides a high degree of magnification and resolution, was employed to observe the plasmonic scattering/fluorescence of the nanoparticles at their resonant wavelengths. The system employed a 405 nm laser and a 458 nm laser for studying the 20 nm AgNPs and 60 nm AgNPs, respectively. A drop of the colloidal nanoparticle suspension was placed on a glass slide, covered with a cover-slip, and the edges were sealed. The scattering/fluorescence signals of the fixed samples were observed using the Plan-Apochromat 63×/1.4 Oil DIC M27 objective. An electrochemical workstation (Bio-Logic® VMP3B-20) was employed to perform the electrochemical impedance spectroscopy studies of the solar cells. A constant DC voltage bias set at 0.5 V (~ Voc of the DSSC) was applied to the solar cells, and their impedance parameters were recorded over a frequency range of 100 kHz–10 mHz. The measured Nyquist plots were fitted to an appropriate equivalent circuit using the Z-Fit function in the EC-Lab® software, and the corresponding EIS parameters, such as the electron-transport resistance, the chemical capacitance and the impedance to electron-hole recombination, were extracted. Multiple measurements were recorded to ensure repeatability and reproducibility, and they were performed under ambient laboratory conditions.
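The headline photovoltaic parameters follow from the measured J-V sweep in the usual way (Jsc at V = 0, Voc at J = 0, fill factor from the maximum power point, and efficiency against the incident power density). The sketch below shows this post-processing for a measured voltage/current-density pair of arrays; the array names and the monotonic-sweep assumption are ours, not part of the measurement software.

```python
import numpy as np

def jv_parameters(v, j, p_in=100.0):
    """Extract Jsc (mA/cm^2), Voc (V), fill factor and efficiency (%) from a J-V sweep.

    Assumes v is increasing, j (in mA/cm^2) decreases monotonically through zero,
    and p_in is the incident power density in mW/cm^2 (100 at 1 sun, AM 1.5 G).
    """
    v, j = np.asarray(v, float), np.asarray(j, float)
    jsc = float(np.interp(0.0, v, j))      # current density at short circuit
    voc = float(np.interp(0.0, -j, v))     # voltage at open circuit (-j is increasing)
    p_max = float(np.max(v * j))           # maximum power density, mW/cm^2
    ff = p_max / (jsc * voc)
    eta = 100.0 * p_max / p_in
    return jsc, voc, ff, eta
```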
O'Regan, B. & Graetzel, M. A low-cost, high-efficiency solar cell based on dye-sensitized colloidal TiO2 films. Nature 353, 737–740 (1991).
Hardin, B. E., Snaith, H. J. & McGehee, M. D. The renaissance of dye-sensitized solar cells. Nat. Photonics 6, 162 (2012).
Shalini, S. et al. Status and outlook of sensitizers/dyes used in dye sensitized solar cells (DSSC): a review. Int. J. Energy Res. 40, 1303–1320 (2016).
Sharma, K., Sharma, V. & Sharma, S. S. Dye-Sensitized Solar Cells: Fundamentals and Current Status. Nanoscale Res. Lett. 13, 381 (2018).
Green, M. A. et al. Solar cell efficiency tables (Version 53). Prog. Photovoltaics Res. Appl. 27, 3–12 (2019).
Shalini, S., Balasundaraprabhu, R., Satish Kumar, T., Prabavathy, N. & Prasanna, S. S. Status and outlook of sensitizers/dyes used in dye sensitized solar cells (DSSC): a review. Int. J. Energy Res. 40, 1303–1320 (2016).
Calogero, G. et al. Synthetic analogues of anthocyanins as sensitizers for dye-sensitized solar cells. Photochem. Photobiol. Sci. 12, 883–94 (2013).
Richhariya, G., Kumar, A., Tekasakul, P. & Gupta, B. Natural dyes for dye sensitized solar cell: A review. Renew. Sustain. Energy Rev. 69, 705–718 (2017).
Narayan, M. R. Review: Dye sensitized solar cells based on natural photosensitizers. Renew. Sustain. Energy Rev. 16, 208–215 (2012).
Chaiamornnugool, P. et al. Performance and stability of low-cost dye-sensitized solar cell based crude and pre-concentrated anthocyanins: Combined experimental and DFT/TDDFT study. J. Mol. Struct. 1127, 145–155 (2017).
Talip, L. F. A. et al. Hybrid TiO2-Gigantochloa Albociliata Charcoal in Dye Sensitized Solar Cell. IOP Conf. Ser. Mater. Sci. Eng. 209, 012086 (2017).
Sandquist, C. & McHale, J. L. Improved efficiency of betanin-based dye-sensitized solar cells. J. Photochem. Photobiol. A Chem. 221, 90–97 (2011).
Ramamoorthy, R. et al. Betalain and anthocyanin dye-sensitized solar cells. J. Appl. Electrochem. 46, 929–941 (2016).
Orona-Navar, A. et al. Astaxanthin from Haematococcus pluvialis as a natural photosensitizer for dye-sensitized solar cell. Algal Res. 26, 15–24 (2017).
Al-Alwani, M. A. M., Ludin, N. A., Mohamad, A. B., Kadhum, A. A. H. & Sopian, K. Extraction, preparation and application of pigments from Cordyline fruticosa and Hylocereus polyrhizus as sensitizers for dye-sensitized solar cells. Spectrochim. Acta - Part A Mol. Biomol. Spectrosc. 179, 23–31 (2017).
Nan, H. et al. Studies on the optical and photoelectric properties of anthocyanin and chlorophyll as natural co-sensitizers in dye sensitized solar cell. Opt. Mater. (Amst). 73, 172–178 (2017).
Al-Bat'hi, S. A. M., Alaei, I. & Sopyan, I. Natural photosensitizers for dye sensitized solar cells. Int. J. Renew. Energy Res. 3, 138–143 (2013).
Kim, S., Jahandar, M., Jeong, J. H. & Lim, D. C. Recent Progress in Solar Cell Technology for Low-Light Indoor Applications. 3–17 https://doi.org/10.2174/1570180816666190112141857 (2019).
Research and Markets. Global Dye Sensitized Solar Cells (DSSC/DSC) Report: Technologies, Markets, Players - 2013–2023. (2013).
Rho, W.-Y. et al. Ag Nanoparticle–Functionalized Open-Ended Freestanding TiO2 Nanotube Arrays with a Scattering Layer for Improved Energy Conversion Efficiency in Dye-Sensitized Solar Cells. Nanomaterials 6, 117 (2016).
Yun, M. J., Sim, Y. H., Cha, S. I., Seo, S. H. & Lee, D. Y. High Energy Conversion Efficiency with 3-D Micro-Patterned Photoanode for Enhancement Diffusivity and Modification of Photon Distribution in Dye-Sensitized Solar Cells. Sci. Rep. 7, 1–10 (2017).
Guo, X., Lu, G. & Chen, J. Graphene-Based Materials for Photoanodes in Dye-Sensitized Solar Cells. Front. Energy Res. 3, 1–15 (2015).
Chen, X., Tang, Y. & Liu, W. efficient dye-sensitized solar cells based on nanoflower-like ZnO photoelectrode. Molecules 22, 1–6 (2017).
Dawoud, B., Amer, E. & Gross, D. Experimental investigation of an adsorptive thermal energy storage. Int. J. energy Res. 31, 135–147 (2007).
Lim, A. et al. Higher performance of DSSC with dyes from cladophora sp. As mixed cosensitizer through synergistic effect. J. Biophys. 2015, (2015).
Colonna, D. et al. Efficient Cosensitization Strategy for Dye-Sensitized Solar Cells. Appl. Phys. Express 5, 22303 (2012).
Sreeja, S. & Pesala, B. Co-sensitization aided efficiency enhancement in betanin-chlorophyll solar cell. Mater. Renew. Sustain. Energy 7, 25 (2018).
Sreeja, S. & Pesala, B. Efficiency Enhancement of Betanin Dye-Sensitized Solar Cells Using Plasmon-Enhanced Silver Nanoparticles. in Advances in Energy Research, Vol 1: Selected Papers from ICAER (International Conference on Advances in Energy Research) 2017 (Springer Proceedings in Energy) (eds. Singh, S. & Ramadesigan, V.) 978–981 (Springer, 2020).
Sreeja, S. & Pesala, B. Efficiency Enhancement of Betanin Dye Sensitized Solar Cells Using Plasmon Enhanced Silver Nanoparticles. in Presented at the International Conference on Advances in Energy Research, IIT Bombay (2017).
Sreeja, S. & Pesala, B. Green solar cells using natural pigments having complementary absorption spectrum. In SPIE Photonics West 2016, International Society for Optics and Photonics, The Moscone Center, San Francisco, CA, United States 9743–55 (2016).
Sreeja, S. Green Solar Cells of Enhanced Efficiency. (Academy of Scientific and Innovative Research (AcSIR), 2018).
Park, K. et al. Dyes and Pigments Adsorption characteristics of gardenia yellow as natural photosensitizer for dye-sensitized solar cells. Dye. Pigment. 96, 595–601 (2013).
Berginc, M., Opara Krašovec, U. & Topič, M. Solution Processed Silver Nanoparticles in Dye-Sensitized Solar Cells. J. Nanomater. 2014, 49–56 (2014).
Pawar, G. S. et al. Enhanced Photoactivity and Hydrogen Generation of LaFeO 3 Photocathode by Plasmonic Silver Nanoparticle Incorporation. ACS Appl. Energy Mater. 1, 3449–3456 (2018).
Slimen, I. B., Najar, T. & Abderrabba, M. Chemical and Antioxidant Properties of Betalains. J. Agric. Food Chem. https://doi.org/10.1021/acs.jafc.6b04208 (2017).
American Society for Testing and Materials (ASTM). Reference Solar Spectral Irradiance: Air Mass 1.5. Terrestrial Reference Spectra for Photovoltaic Performance Evaluation (2003).
Zhang, D. et al. Betalain pigments for dye-sensitized solar cells. J. Photochem. Photobiol. A Chem. 195, 72–80 (2008).
Khadtare, S. S. et al. Dye Sensitized Solar Cell with Lawsone Dye Using ZnO Photoanode: Experimental and TD-DFT Study. RSC Adv. https://doi.org/10.1039/C4RA14620D
Nazeeruddin, M. K., Liska, P., Moser, J., Vlachopoulos, N. & Grätzel, M. Conversion of Light into Electricity with Trinuclear Ruthenium Complexes Adsorbed on Textured TiO2 Films. Helv. Chim. Acta 73, 1788–1803 (1990).
Stratakis, E. & Kymakis, E. Nanoparticle-based plasmonic organic photovoltaic devices. Mater. Today 16, 133–146 (2013).
Akimov, Y. A., Koh, W. S., Sian, S. Y. & Ren, S. Nanoparticle-enhanced thin film solar cells: Metallic or dielectric nanoparticles? Appl. Phys. Lett. 96, (2010).
Schade, M. et al. Regular arrays of Al nanoparticles for plasmonic applications. J. Appl. Phys. 115, (2014).
Pfeiffer, T. V. et al. Plasmonic nanoparticle films for solar cell applications fabricated by size-selective aerosol deposition. Energy Procedia 60, 3–12 (2014).
Maier, S. A. Plasmonics: Fundamentals and Applications. (Springer Science+Business Media LLC, 2007).
Atwater, H. A. & Polman, A. Plasmonics for improved photovoltaic devices. Nat. Mater. 9, 205–213 (2010).
Dissanayake, M. A. K. L., Kumari, J. M. K. W., Senadeera, G. K. R. & Thotawatthage, C. A. Efficiency enhancement in plasmonic dye-sensitized solar cells with TiO2 photoanodes incorporating gold and silver nanoparticles. J. Appl. Electrochem. 46, 47–58 (2016).
Mazzoni, M. et al. P-DSSC Plasmonic enhancement in Dye sensitized solar cell. 5–7 (2012).
Jun, H. K., Careem, M. A. & Arof, A. K. Plasmonic Effects of Quantum Size Gold Nanoparticles on Dye-sensitized Solar Cells. Mater. Today Proc. 3, S73–S79 (2016).
Peh, C. K. N., KE, L. & Ho, G. W. Modification of ZnO nanorods through Au nanoparticles surface coating for dye-sensitized solar cells applications. Mater. Lett. 64, 1372–1375 (2010).
Song, D. H., Kim, H.-S., Suh, J. S., Jun, B.-H. & Rho, W.-Y. Multi-Shaped Ag Nanoparticles in the Plasmonic Layer of Dye-Sensitized Solar Cells for Increased Power Conversion Efficiency. Nanomaterials 7, 136 (2017).
Tan, H., Santbergen, R., Smets, A. H. & Zeman, M. Plasmonic light trapping in thin-film silicon solar cells with improved self-assembled silver nanoparticles. Nano Lett. 12, 4070–4076 (2012).
Gangishetty, M. K., Lee, K. E., Scott, R. W. J. & Kelly, T. L. Plasmonic Enhancement of Dye Sensitized Solar Cells in the Red-to-near-Infrared Region using Triangular Core–Shell Ag@SiO2 Nanoparticles. ACS Appl. Mater. Interfaces 5, 11044–11051 (2013).
Duche, D. et al. Improving light absorption in organic solar cells by plasmonic contribution. Sol. Energy Mater. Sol. Cells 93, 1377–1382 (2009).
Su, Y. H., Ke, Y. F., Cai, S. L. & Yao, Q. Y. Surface plasmon resonance of layer-by-layer gold nanoparticles induced photoelectric current in environmentally-friendly plasmon-sensitized solar cell. Light Sci. Appl. 1, 2–6 (2012).
Temple, T. L. & Bagnall, D. M. Optical properties of gold and aluminium nanoparticles for silicon solar cell applications. J. Appl. Phys. 109, (2011).
Lai, W. H., Su, Y. H., Teoh, L. G. & Hon, M. H. Commercial and natural dyes as photosensitizers for a water-based dye-sensitized solar cell loaded with gold nanoparticles. J. Photochem. Photobiol. A Chem. 195, 307–313 (2008).
Lu, L., Luo, Z., Xu, T. & Yu, L. Cooperative Plasmonic Effect of Ag and Au Nanoparticles on Enhancing Performance of Polymer Solar Cells. Nano Lett. 13, 59–64 (2012).
Omelyanovich, M., Makarov, S., Milichko, V. & Simovski, C. Enhancement of perovskite solar cells by plasmonic nanoparticles. Mater. Sci. Appl. 7, 836–847 (2016).
Eli, D., Owolabi, J. A., Olowomofe, G. O. & Jonathan, E. Plasmon-Enhanced Efficiency in Dye Sensitized Solar Cells Decorated with Size-Controlled Silver Nanoparticles Based on Anthocyanin. J. Photonic Mater. Technol. 2, 6–13 (2016).
Omelyanovich, M., Ra'Di, Y. & Simovski, C. Perfect plasmonic absorbers for photovoltaic applications. J. Opt. (United Kingdom) 17, (2015).
Zhang, X. et al. Significant Broadband Photocurrent Enhancement by Au-CZTS Core-Shell Nanostructured Photocathodes. Sci. Rep. 6, 1–8 (2016).
Erwin, W. R., Zarick, H. F., Talbert, E. M. & Bardhan, R. Light trapping in mesoporous solar cells with plasmonic nanostructures. Energy Environ. Sci. 9, 1577–1601 (2016).
Zheng, X. & Zhang, L. Photonic nanostructures for solar energy conversion. Energy Environ. Sci. 9, 2511–2532 (2016).
Tian, Y. & Tatsuma, T. Mechanisms and Applications of Plasmon-Induced Charge Separation at TiO2 Films Loaded with Gold Nanoparticles. J. Am. Chem. Soc. 127, 7632–7637 (2005).
Tian, Y. & Tatsuma, T. Plasmon-induced photoelectrochemistry at metal nanoparticles supported on nanoporous TiO2. Chem. Commun. 1810–1811, https://doi.org/10.1039/B405061D (2004).
Furube, A., Du, L., Hara, K., Katoh, R. & Tachiya, M. Ultrafast Plasmon-Induced Electron Transfer from Gold Nanodots into TiO2 Nanoparticles. J. Am. Chem. Soc. 129, 14852–14853 (2007).
Mubeen, S., Hernandez-Sosa, G., Moses, D., Lee, J. & Moskovits, M. Plasmonic Photosensitization of a Wide Band Gap Semiconductor: Converting Plasmons to Charge Carriers. Nano Lett. 11, 5548–5552 (2011).
Chen, F. & Johnston, R. L. Plasmonic Properties of Silver Nanoparticles on Two Substrates. Plasmonics 4, 147–152 (2009).
Dumbravǎ, A., Enache, I., Oprea, C. I., Georgescu, A. & Gîrţu, M. A. Toward a more efficient utilisation of betalains as pigments for Dye-Sensitized solar cells. Dig. J. Nanomater. Biostructures 7, 339–351 (2012).
Popova, A. V. Spectral characteristics and solubility of beta-carotene and zeaxanthin in different solvents. (2017).
Kavitha, S., Praveena, K. & Lakshmi, M. A new method to evaluate the feasibility of a dye in DSSC application. Int. J. Energy Res. 41, 2173–2183 (2017).
Sreeja, S. & Pesala, B. Performance enhancement of betanin solar cells co-sensitized with indigo and lawsone: A Comparative Study. ACS Omega 4, 18023–18034 (2019).
Sreeja, S. & Pesala, B. Efficiency Enhancement of Betanin – Chlorophyll Cosensitized Natural Pigment Solar Cells Using Plasmonic Effect of Silver Nanoparticles. IEEE J. Photovoltaics 10, 124–134 (2020).
Li, X., Choy, W. C. H., Lu, H., Sha, W. E. I. & Ho, A. H. P. Efficiency Enhancement of Organic Solar Cells by Using Shape-Dependent Broadband Plasmonic Absorption in Metallic Nanoparticles. Adv. Funct. Mater. 23, 2728–2735 (2013).
Li, X. et al. Dual Plasmonic Nanostructures for High Performance Inverted Organic Solar Cells. Adv. Mater. 24, 3046–3052 (2012).
M. Shopa, K., Kolwas, A. & Derkachova, G. D. Organic field-effect transistors. Opto-Electronics Rev. 18, 121–136 (2010).
Rivera, V. A. G., Ferri, F. A. & Marega, E. Jr. Localized Surface Plasmon Resonances: Noble Metal Nanoparticle Interaction with Rare-Earth Ions. In Plasmonics – Principles and Applications 283–303 (2012).
Persson, B. N. J. Polarizability of small spherical metal particles: influence of the matrix environment. Surf. Sci. 281, 153–162 (1993).
Barnes, W. L. Particle Plasmons: Why Shape Matters. Plasmonics 1–10.
Bastús, N. G., Merkoçi, F., Piella, J. & Puntes, V. Synthesis of Highly Monodisperse Citrate- Stabilized Silver Nanoparticles of up to 200 nm: Kinetic Control and Catalytic Properties. Chem. Mater. 26, 2836–2846 (2014).
Joshi, D. N., Ilaiyaraja, P., Sudakar, C. & Prasath, R. A. Facile one-pot synthesis of multi-shaped silver nanoparticles with tunable ultra-broadband absorption for efficient light harvesting in dye-sensitized solar cells. Sol. Energy Mater. Sol. Cells 185, 104–110 (2018).
Evanoff, D. et al. Size-Controlled Synthesis of Nanoparticles. 2. Measurement of Extinction, Scattering, and Absorption Cross Sections. J. Phys. Chem. B 108, 13957–13962 (2004).
Klein, S. Quantitative visualization of colloidal and intracellular gold nanoparticles by confocal microscopy. 15, 1–11 (2010).
Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012).
Moallem, P. & Razmjooy, N. Optimal Threshold Computing in Automatic Image Thresholding using Adaptive Particle Swarm Optimization Optimal Threshold Computing in Automatic Image Thresholding using Adaptive Particle Swarm Optimization. https://doi.org/10.22201/icat.16656423.2012.10.5.361 (2012).
Sezgin, M. Survey over image thresholding techniques and quantitative performance evaluation. 13, 146–165 (2004).
Chouhan, N. Silver Nanoparticles: Synthesis, Characterization and Applications. In Silver Nanoparticles - Fabrication, Characterization and Applications 21–56 https://doi.org/10.5772/intechopen.75611 (IntechOpen, 2018).
Daniels, J. L., Crawford, T. M., Andreev, O. A. & Reshetnyak, Y. K. Synthesis and characterization of pHLIP ® coated gold nanoparticles. Biochem. Biophys. Reports 10, 62–69 (2017).
Markvart, T. & Castañer, L. Chapter IA-1: Principles of Solar Cell Operation. In Solar Cells (Second Edition) (eds. McEvoy, A., Castañer, L. & Markvart, T.) 3–25 (Elsevier, 2013).
Lumerical FDTD. Solar cell methodology. Lumerical Support (2020). Available at: https://support.lumerical.com/hc/en-us/articles/360042165634-Solar-cell-methodology. (Accessed: 8th March 2020).
Villanueva-Cab, J. et al. Photocharging and Band Gap Narrowing Effects on the Performance of Plasmonic Photoelectrodes in Dye-Sensitized Solar Cells. ACS Appl. Mater. Interfaces 10, 31374–31383 (2018).
Deepa, K. G., Lekha, P. & Sindhu, S. Efficiency enhancement in DSSC using metal nanoparticles: A size dependent study. Sol. Energy 86, 326–330 (2012).
Sardar, S. et al. Enhanced photovoltage in DSSCs: synergistic combination of a silver modified TiO2 photoanode and a low cost counter electrode. RSC Adv. 6, 33433–33442 (2016).
Yan, L., Wang, H., Quanyou, F. & Gang, Z. Gold nanoparticles inlaid TiO2 photoanodes: a superior candidate for high-efficiency dye-sensitized solar cells. Energy Environ. Sci. 6, 2156 (2013).
Fabregat-Santiago, F., Bisquert, J., Palomares, E., Haque, S. A. & Durrant, J. R. Impedance spectroscopy study of dye-sensitized solar cells with undoped spiro-OMeTAD as hole conductor. J. Appl. Phys. 100, (2006).
Bisquert, J. & Fabregat-Santiago, F. Impedance spectroscopy: a general introduction and application to dye-sensitized solar cells. Dye. Sol. Cells 604 (2010).
Fabregat-Santiago, F. et al. Correlation between Photovoltaic Performance and Impedance Spectroscopy of Dye-Sensitized Solar Cells Based on Ionic Liquids. J. Phys. Chem. C 111, 6550–6560 (2007).
Liberatore, M. et al. Using EIS for diagnosis of dye-sensitized solar cells performance. J. Appl. Electrochem. 39, 2291–2295 (2009).
Haque, S. A. et al. Charge Separation versus Recombination in Dye-Sensitized Nanocrystalline Solar Cells. The Minimization of Kinetic Redundancy. 13, 3456–3462 (2005).
Wu, J. L. et al. Surface plasmonic effects of metallic nanoparticles on the performance of polymer bulk heterojunction solar cells. ACS Nano 5, 959–967 (2011).
Link, S. & El-Sayed, M. A. Shape and size dependence of radiative, non-radiative and photothermal properties of gold nanocrystals. Int. Rev. Phys. Chem. 19, 409–453 (2000).
Lim, S. P., Pandikumar, A., Lim, H. N., Ramaraj, R. & Huang, N. M. Boosting Photovoltaic Performance of Dye-Sensitized Solar Cells Using Silver Nanoparticle-Decorated N,S-Co-Doped-TiO2 Photoanode. Sci. Rep. 5, 11922 (2015).
Buda, S., Shafie, S., Rashid, S. A., Jaafar, H. & Sharif, N. F. M. Enhanced visible light absorption and reduced charge recombination in AgNP plasmonic photoelectrochemical cell. Results Phys. 7, 2311–2316 (2017).
Chien, T. et al. Study of the Plasmon Energy Transfer Processes in Dye Sensitized Solar Cells. 2015 (2015).
Wang, M., Chen, P., Humphry-Baker, R., Zakeeruddin, S. M. & Grätzel, M. The Influence of Charge Transport and Recombination on the Performance of Dye-Sensitized Solar Cells. ChemPhysChem 290–299, https://doi.org/10.1002/cphc.200800708 (2009).
Lee, S.-M. A Review of Flexible OLEDs Toward Highly Durable Unusual Displays. XX, 1–10 (2017).
Xu, R.-P., Li, Y.-Q. & Tang, J.-X. Recent advances in flexible organic light-emitting diodes. J. Mater. Chem. C 4, 9116 (2016).
FDTD Solutions (8.6.2), Lumerical Solutions, Inc. (2013).
Yee, K. S. Numerical Solution of Initial Boundary Value Problems Involving Maxwell's Equations in Isotropic Media. IEEE Trans. Antennas Propag. 14, 302–307 (1966).
Palik, E. D. Handbook of Optical Constants of Solids (Academic Press, 1998).
Hale, G. M. & Querry, M. R. Optical Constants of Water in the 200-nm to 200-microm Wavelength Region. Appl. Opt. 12, 555–563 (1973).
Yarmand, B. & Sadrnezhaad, S. K. Influence of annealing temperature on structural and optical properties of mesoporous TiO 2 thin films prepared by sol-gel templating technique. J. Optoelectron. Adv. Mater. 12, 1490–1497 (2010).
PubChem Identifier CID 6540685. Available at: https://pubchem.ncbi.nlm.nih.gov/compound/6540685. (Accessed: 18th June 2018).
PubChem Identifier CID 6755. Available at: https://pubchem.ncbi.nlm.nih.gov/compound/Lawsone#section=CAS. (Accessed: 14th July 2018)
DeVore, J. R. Refractive Indices of Rutile and Sphalerite. J. Opt. Soc. Am. 41, 416–419 (1951).
Shaaban, E. R. & Yahia, I. S. Validity of Swanepoel's Method for Calculating the Optical Constants of Thick Films. Acta Phys. Pol. A 121, 628–635 (2012).
Pallipurath, A. et al. Crystalline adducts of the Lawsone molecule (2-hydroxy-1,4-naphthaquinone): optical properties and computational modelling. CrystEngComm 17, 7684–7692 (2015).
De, D., Sinha, D. & Ayaz, A. Performance Evaluation of Beetroot Sensitized Solar Cell Device. in Proceedings of the 2nd International Conference on Communication, Devices and Computing 223–228 (Springer, 2020).
Pacioni, N. L., Borsarelli, C. D., Rey, V. & Veglia, A. V. Silver Nanoparticle Applications. https://doi.org/10.1007/978-3-319-11262-6 (2015).
Ito, S. et al. Fabrication of Screen-Printing Pastes From TiO2 Powders for Dye-Sensitised Solar Cells. Prog. Photovoltaics Res. Appl. https://doi.org/10.1002/pip (2007).
Tan, M. C. & Ho, C. W. Effect of Extraction Solvent System, Time, and Temperature on Total Phenolic Content of Henna Stems. Int. Food Res. J. 20, 3117–3123 (2013).
Kang, M.-G., Park, N.-G., Kim, K.-M. & Chang, S.-H. Dye-Sensitized Solar Cells Including Polymer Electrolyte Gel Containing Poly(Vinylidene Fluoride). Patent No. US 6,756,537 B2 (2004).
The first author thanks CSIR for the award of Senior Research Fellowship (Award No. 31/57(002)/2015-EMRI) to pursue research at CSIR-CEERI. The authors thank Dr. K. Ramesha, Principal Scientist, CSIR-CECRI for the kind support and facilities provided to carry out the work. The authors acknowledge the facilities and the scientific-technical assistance of Advanced Microscopy Facility at the National Cancer Tissue Biobank (NCTB), Department of Biotechnology, IIT Madras, Chennai.
Academy of Scientific and Innovative Research (AcSIR), 600113, Chennai, India
S. Sreeja & Bala Pesala
CSIR - Central Electronics Engineering Research Institute (CSIR-CEERI), CSIR Madras Complex, Taramani, 600113, Chennai, India
Bala Pesala
S. Sreeja
S.S. carried out the experiments pertaining to this research work as part of her doctoral thesis work and wrote the manuscript. B.P. supervised the research work and contributed to multiple revisions of the manuscript.
Correspondence to Bala Pesala.
Sreeja, S., Pesala, B. Plasmonic enhancement of betanin-lawsone co-sensitized solar cells via tailored bimodal size distribution of silver nanoparticles. Sci Rep 10, 8240 (2020). https://doi.org/10.1038/s41598-020-65236-1
Advances in Difference Equations
Some results on the fractional order Sturm-Liouville problems
Yuanfang Ru1,
Fanglei Wang1,
Tianqing An1 &
Yukun An2
Advances in Difference Equations volume 2017, Article number: 320 (2017)
In this work, we introduce some new results on the Lyapunov inequality and on the uniqueness and multiplicity of nontrivial solutions of the nonlinear fractional Sturm-Liouville problems
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t)f(u(t))=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0, \end{cases} $$
where α, β, γ, δ are constants satisfying \(0\neq \vert\beta\gamma+\alpha\gamma\int_{0}^{1}\frac{1}{p(\tau)}\,d\tau +\alpha \delta\vert<+\infty\), \(p(\cdot)\) is positive and continuous on \([0,1]\). In addition, some existence results are given for the problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t)f(u(t),\lambda)=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0, \end{cases} $$
where \(\lambda\geq0\) is a parameter. The proof is based on the fixed point theorems and the Leray-Schauder nonlinear alternative for single-valued maps.
On the one hand, since the Lyapunov-type inequality has found many applications in the study of various properties of solutions of differential equations, such as oscillation theory, disconjugacy and eigenvalue problems, there have been many extensions, generalizations and improvements in this field, e.g., to nonlinear second order equations, to delay differential equations, to higher order differential equations, to difference equations and to differential and difference systems. We refer the readers to [1–4] (integer order). Fractional differential equations have gained considerable popularity and importance due to their numerous applications in many fields of science and engineering, including physics, population dynamics, chemical technology, biotechnology, aerodynamics, electrodynamics of complex media, polymer rheology and control of dynamical systems. With the rapid development of the theory of fractional differential equations, many papers have been concerned with Lyapunov type inequalities for certain fractional order differential equations; see [5–7] and the references therein. Recently, Ghanbari and Gholami [7] introduced the Lyapunov type inequality for a certain fractional order Sturm-Liouville problem in the sense of Riemann-Liouville
$$\textstyle\begin{cases} D_{a^{+}}^{\alpha} (p(t)u'(t))+q(t)u(t)=0,\quad1< \alpha\leq2, t\in (a,b), b\neq0,\\ u(a)=u'(a)=0,\qquad u(b)=0 \end{cases} $$
They proved that any nontrivial solution of this problem satisfies

$$\int_{a}^{b} \int_{a}^{b} \biggl\vert \frac{q(s)}{p(\omega)} \biggr\vert \,ds \,d\omega>\frac{\Gamma(\alpha)}{2(b-a)^{\alpha-1}}. $$
On the other hand, many authors have studied the existence, uniqueness and multiplicity of solutions for nonlinear boundary value problems involving fractional differential equations, see [8–19]. But Lan and Lin [20] pointed out that the continuity assumptions on nonlinearities used previously are not sufficient and obtained some new results on the existence of multiple positive solutions of systems of nonlinear Caputo fractional differential equations with some of general separated boundary conditions
$$\textstyle\begin{cases} -{}^{c}D^{q} z_{i}(t)=f_{i}(t,z(t)),\quad t\in (0,1), \\ \alpha z_{i}(0)-\beta z_{i}'(0)=0,\qquad \gamma z_{i}(1)+\delta z_{i}'(1)=0, \end{cases} $$
where \(z(t)=(z_{1}(t),\ldots,z_{n}(t))\), \(f_{i}:[0,1]\times \mathbb{R}_{+}^{n}\rightarrow\mathbb{R}_{+}\) is continuous on \([0,1]\times\mathbb{R}_{+}^{n}\), \({}^{c}D^{q}\) is the Caputo differential operator of order \(q\in(1,2)\). The α, β, γ, δ are positive real numbers. The relations between the linear Caputo fractional differential equations and the corresponding linear Hammerstein integral equations are studied, which shows that suitable Lipschitz type conditions are needed when one studies the nonlinear Caputo fractional differential equations.
Motivated by these excellent works, in this paper we focus on the representation of the Lyapunov type inequality and the existence of solutions for a certain fractional order Sturm-Liouville problem
$$ \textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t)f(u(t))=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0, \end{cases} $$
where α, β, γ, δ are constants satisfying \(0\neq \vert\beta\gamma+\alpha\gamma\int_{0}^{1}\frac{1}{p(\tau)}\,d\tau +\alpha \delta\vert<+\infty\), \(p(\cdot)\) is a positive continuous function on \([0,1]\), \(\Lambda(t): [0, 1] \rightarrow\mathbb{R}\) is a nontrivial Lebesgue integrable function, \(f: \mathbb{R}\rightarrow\mathbb{R}\) is continuous. In addition, some existence results are given for the problem
$$ \textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t)f(u(t),\lambda)=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0, \end{cases} $$
where \(\lambda\geq0\) is a parameter and \(f: \mathbb{R}\times \mathbb{R}_{+}\rightarrow\mathbb{R}\) is continuous. For Sturm-Liouville problems, there are many works in the literature on the existence and behavior of solutions of nonlinear Sturm-Liouville equations, see for example [21, 22] (integer order) and [23, 24] (fractional order).
The discussion of this manuscript is based on the fixed point theorems and the Leray-Schauder nonlinear alternative for single-valued maps. For convenience, we list the crucial lemmas as follows.
Lemma 1.1
([25])
Let ν be a positive measure and Ω be a measurable set with \(\nu(\Omega)=1\). Let I be an interval and suppose that u is a real function in \(L(d\nu)\) with \(u(t) \in I\) for all \(t\in\Omega\). If f is convex on I, then
$$ f\biggl( \int_{\Omega}u(t)\,d\nu(t) \biggr) \leq \int_{\Omega}(f \circ u) (t)\,d\nu(t). $$
If f is concave on I, then inequality (1.3) holds with '≤' substituted by '≥'.
Lemma 1.2

Let E be a Banach space, \(E_{1}\) be a closed, convex subset of E, Ω be an open subset of \(E_{1}\), and \(0\in\Omega\). Suppose that \(T:\overline{\Omega}\rightarrow E_{1}\) is completely continuous. Then either
T has a fixed point in Ω̅, or
there are \(u\in\partial\Omega\) (the boundary of Ω in \(E_{1}\)) and \(\lambda\in(0,1)\) with \(u=\lambda Tu\).
Lemma 1.3

Let E be a Banach space and \(K\subset E\) be a cone in E. Assume that \(\Omega_{1}\), \(\Omega_{2}\) are open subsets of E with \(0\in\Omega_{1}\), \(\overline{\Omega}_{1}\subset\Omega_{2}\), and let \(T:K\cap(\overline{\Omega}_{2}\setminus\Omega_{1})\rightarrow K\) be a completely continuous operator such that either
\(\Vert Tu \Vert \leq \Vert u \Vert \), \(u\in K\cap\partial\Omega_{1}\) and \(\Vert Tu \Vert \geq \Vert u \Vert \), \(u\in K\cap\partial\Omega_{2}\); or
\(\Vert Tu \Vert \geq \Vert u \Vert \), \(u\in K\cap\partial\Omega_{1}\) and \(\Vert Tu \Vert \leq \Vert u \Vert \), \(u\in K\cap\partial\Omega_{2}\).
Then T has a fixed point in \(K\cap(\overline{\Omega}_{2}\setminus \Omega_{1})\).
Lemma 1.4

Let E be a Banach space and \(K\subset E\) be a cone in E. Assume that \(\Omega_{1}\), \(\Omega_{2}\) are open subsets of E with \(\Omega_{1}\cap{K}\neq\emptyset\), \(\overline{\Omega_{1}\cap K}\subset \Omega_{2}\cap K\). Let \(T:\overline{\Omega_{2}\cap K}\rightarrow{K}\) be a completely continuous operator such that:
\(\parallel Tu\parallel\leq\parallel u\parallel\), \(\forall u\in \partial(\Omega_{1}\cap K)\), and
there exists \(e\in K\setminus{\{0\}}\) such that
$$u\neq Tu+\mu e,\quad \textit{for } u\in\partial(\Omega_{2}\cap K) \textit{ and } \mu>0. $$
Then T has a fixed point in \(\overline{\Omega_{2}\cap K}\setminus \Omega_{1}\cap K\). The same conclusion remains valid if (A) holds on \(\partial(\Omega_{2}\cap K)\) and (B) holds on \(\partial (\Omega_{1}\cap K)\).
Definition 2.1
For a function u given on the interval [a,b], the Riemann-Liouville derivative of fractional order q is defined as
$$D_{a^{+}}^{q}u(t)=\frac{1}{\Gamma(n-q)}\frac{d^{n}}{dt^{n}} \int_{a}^{t} (t-s)^{n-q-1}u(s)\,ds, $$
where \(n=[q]+1\).
Definition 2.2

The Riemann-Liouville fractional integral of order q for a function u is defined as
$$I_{a^{+}}^{q} u(t)=\frac{1}{\Gamma(q)} \int_{a}^{t} (t-s)^{q-1}u(s)\,ds,\quad q>0 $$
provided that such integral exists.
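For readers experimenting numerically, Definition 2.2 can be discretized directly. The following Python sketch uses a left-rectangle rule on a uniform grid (our own illustrative discretization, not part of the paper) and checks it against the closed form \(I_{0^{+}}^{q}1 = t^{q}/\Gamma(q+1)\).

```python
import numpy as np
from math import gamma

def rl_integral(u, t, q):
    """Riemann-Liouville fractional integral I^q u on a uniform grid t (left-rectangle rule)."""
    h = t[1] - t[0]
    out = np.zeros_like(t)
    for i in range(1, len(t)):
        s = t[:i]
        out[i] = np.sum((t[i] - s) ** (q - 1) * u[:i]) * h / gamma(q)
    return out

t = np.linspace(0.0, 1.0, 2001)
approx = rl_integral(np.ones_like(t), t, q=1.5)
exact = t ** 1.5 / gamma(2.5)
print("max abs error:", np.max(np.abs(approx - exact)))   # shrinks as the grid is refined
```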
Lemma 2.3

Let \(q> 0\). Then
$$I_{a^{+}}^{q}D_{a^{+}}^{q}u(t) = u(t)+\sum _{k=1}^{n}c_{k} t^{q-k}, \quad n = [q] + 1. $$
Lemma 2.4

Let \(h(t)\in AC[0,1]\). Then the fractional Sturm-Liouville problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+h(t)=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0 \end{cases} $$
has a unique solution \(u(t)\) in the form
$$u(t)= \int_{0}^{1} G(t,s)h(s)\,ds, $$
$$\begin{aligned}& G(t,s)= \frac{1}{\rho\Gamma(q)} \textstyle\begin{cases} [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta (1-s)^{q-1}+\gamma\int_{t}^{1}\frac{(\tau-s)^{q-1}\,d\tau}{p(\tau)}] -H(t,s),&0 \leq s \leq t \leq1;\\ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta (1-s)^{q-1}+\gamma\int_{s}^{1}\frac{(\tau-s)^{q-1}\,d\tau}{p(\tau)}],&0 \leq t \leq s \leq1; \end{cases}\displaystyle \\& \rho=\beta\gamma+\alpha\gamma \int_{0}^{1}\frac{1}{p(\tau)}\,d\tau +\alpha\delta, \qquad H(t,s)=\alpha\biggl[\delta+\gamma \int_{t}^{1}\frac {d\tau}{p(\tau)}\biggr] \int_{s}^{t}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau. \end{aligned}$$
From Definitions 2.1, 2.2 and Lemma 2.3, it follows that
$$\begin{aligned}& u'(t)=\frac{c_{1}}{p(t)}-\frac{1}{\Gamma(q)p(t)} \int_{0}^{t} (t-s)^{q-1}h(s)\,ds, \\& u(t)=c_{2}+ \int_{0}^{t}\frac{c_{1}}{p(\tau)}\,d\tau- \int_{0}^{t} \int_{0}^{\tau}\frac{(\tau-s)^{q-1}h(s)}{\Gamma(q)p(\tau)}\,ds\,d\tau. \end{aligned}$$
Furthermore, we have
$$\begin{aligned}& u(0)=c_{2},\qquad u'(0)=\frac{c_{1}}{p(0)}, \\& u(1)=c_{2}+ \int_{0}^{1}\frac{c_{1}}{p(\tau)}\,d\tau- \int_{0}^{1} \int_{0}^{\tau}\frac{(\tau-s)^{q-1}h(s)}{\Gamma(q)p(\tau)}\,ds\,d\tau, \\& u'(1)=\frac{c_{1}}{p(1)}-\frac{1}{\Gamma(q)p(1)} \int_{0}^{1} (1-s)^{q-1}h(s)\,ds. \end{aligned}$$
Combining the boundary conditions, we directly get
$$\begin{gathered} c_{1}=\frac{\alpha\gamma\int_{0}^{1}\int_{0}^{\tau}\frac{(\tau-s)^{q-1}h(s)}{\Gamma(q)p(\tau)}\,ds\,d\tau+\alpha\delta \int_{0}^{1}\frac{(1-s)^{q-1}h(s)}{\Gamma(q)} \,ds}{\rho}, \\ c_{2}=\frac{\beta\gamma\int_{0}^{1}\int_{0}^{\tau}\frac{(\tau-s)^{q-1}h(s)}{\Gamma(q)p(\tau)}\,ds\,d\tau+\beta\delta \int_{0}^{1} \frac{ (1-s)^{q-1}h(s)}{\Gamma(q)}\,ds}{\rho}. \end{gathered} $$
Finally, substituting \(c_{1}\) and \(c_{2}\), we obtain
$$\begin{aligned} u(t) =&\frac{\beta\gamma\int_{0}^{1}\int_{0}^{\tau}\frac{(\tau-s)^{q-1}h(s)}{\Gamma(q)p(\tau)}\,ds\,d\tau+\beta\delta \int_{0}^{1}\frac{(1-s)^{q-1}h(s)}{\Gamma(q)}\,ds }{\rho} \\ &{}+ \int_{0}^{t}\frac{1}{p(\tau)}\,d\tau \frac{\alpha\gamma\int_{0}^{1}\int_{0}^{\tau}\frac{(\tau-s)^{q-1}h(s)}{\Gamma(q)p(\tau)}\,ds\,d\tau+\alpha \delta\int_{0}^{1} \frac{ (1-s)^{q-1}h(s)}{\Gamma(q)}\,ds}{\rho} \\ &{}- \int_{0}^{t} \int_{0}^{\tau}\frac{(\tau-s)^{q-1}h(s)}{\Gamma(q)p(\tau)}\,ds\,d\tau \\ =&\frac{\beta\gamma\int_{0}^{1}[\int_{s}^{1} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau] h(s)\,ds+\beta\delta \int_{0}^{1}\frac{(1-s)^{q-1}}{\Gamma(q)}h(s)\,ds }{\rho} \\ &{}+ \int_{0}^{t}\frac{1}{p(\tau)}\,d\tau \frac{\alpha\gamma\int_{0}^{1}[\int_{s}^{1} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau] h(s)\,ds+\alpha\delta\int_{0}^{1} \frac{ (1-s)^{q-1}}{\Gamma(q)}h(s)\,ds}{\rho} \\ &{}- \int_{0}^{t}\biggl[ \int_{s}^{t} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau\biggr]h(s)\,ds \\ =& \int_{0}^{1} G(t,s)h(s)\,ds. \end{aligned}$$
For \(0\leq t\leq s\leq1\),
$$\begin{aligned} u(t) =&\frac{\beta\gamma\int_{t}^{1}[\int_{s}^{1} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau] h(s)\,ds+\beta\delta \int_{t}^{1}\frac{(1-s)^{q-1}}{\Gamma(q)}h(s)\,ds }{\rho} \\ &{}+ \int_{0}^{t}\frac{1}{p(\tau)}\,d\tau \frac{\alpha\gamma\int _{t}^{1}[\int_{s}^{1} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau] h(s)\,ds+\alpha\delta\int_{t}^{1} \frac{ (1-s)^{q-1}}{\Gamma(q)}h(s)\,ds}{\rho} \\ =& \int_{t}^{1}\frac{1}{\rho}\biggl[\beta+\alpha \int_{0}^{t}\frac{d\tau }{p(\tau)}\biggr] \biggl[ \delta(1-s)^{q-1}+\gamma \int_{s}^{1}\frac{(\tau -s)^{q-1}\,d\tau}{p(\tau)}\biggr]h(s)\,ds. \end{aligned}$$
For \(0\leq s\leq t\leq1\),
$$\begin{aligned} u(t) &=\frac{\beta\gamma\int_{0}^{t}[\int_{s}^{1} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau] h(s)\,ds+\beta\delta \int_{0}^{t}\frac{(1-s)^{q-1}}{\Gamma(q)}h(s)\,ds }{\rho} \\ &\quad {}+ \int_{0}^{t}\frac{1}{p(\tau)}\,d\tau \frac{\alpha\gamma\int_{0}^{t}[\int_{s}^{1} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau] h(s)\,ds+\alpha\delta\int_{0}^{t} \frac{ (1-s)^{q-1}}{\Gamma(q)}h(s)\,ds}{\rho} \\ &\quad {}- \int_{0}^{t}\biggl[ \int_{s}^{t} \frac{(\tau-s)^{q-1}}{\Gamma(q)p(\tau)}\,d\tau\biggr]h(s)\,ds \\ &=\frac{1}{\rho\Gamma(q)} \int_{0}^{t}\biggl\{ \biggl[\beta+\alpha \int_{0}^{t}\frac{d\tau}{p(\tau)}\biggr] \biggl[ \delta(1-s)^{q-1}+\gamma \int_{t}^{1}\frac{(\tau-s)^{q-1}\,d\tau}{p(\tau)}\biggr] \\ &\quad {}-\alpha\biggl[\delta+\gamma \int_{t}^{1}\frac{d\tau}{p(\tau)}\biggr] \int_{s}^{t}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau\biggr\} h(s)\,ds. \end{aligned}$$
Lemma 2.5

Assume that \(\alpha,\beta,\gamma, \delta>0\), and \(p(\cdot):[0,1]\rightarrow(0,+\infty)\). The Green function \(G(t,s)\) satisfies the following properties:
\(G(t,s)\geq0\) for \(0\leq t,s\leq1 \);
For \(0\leq t,s\leq1\), there exists \(C(t)>0\) such that \(G(t,s)\) satisfies the inequalities
$$C(t)G(s,s)\leq G(t,s) $$
$$\min_{t\in[\theta,1-\theta]} C(t)< 1\quad \textit{for }\theta\in\biggl(0, \frac{1}{2}\biggr). $$
The maximum value estimate of \(G(t,s)\)
$$\begin{aligned} \overline{G} =&\max_{0\leq t,s\leq1}G(t,s) \\ =& \max\Bigl\{ \max_{s\in[0,1]}G(s,s),\max_{s\in[0,1]}G \bigl(t_{0}(s),s\bigr)\Bigr\} , \end{aligned}$$
$$t_{0}(s)=s+\biggl[\frac{\alpha\delta(1-s)^{q-1}+\alpha\gamma\int_{s}^{1}\frac {(\tau-s)^{q-1}}{p(\tau)}\,d\tau}{\rho}\biggr]^{\frac{1}{{q-1}}}. $$
(i) On the one hand, since \(\alpha,\beta, \gamma, \delta>0\), and \(\beta \gamma+\alpha\gamma\int_{0}^{1}\frac{1}{p(\tau)}\,d\tau+\alpha\delta >0 \), it is clear that \(G(t,s)\geq0\) for \(0\leq t\leq s\leq1\). On the other hand, for \(0\leq s\leq t\leq1\), we can verify the following inequalities:
$$\begin{aligned}& \alpha\delta \int_{0}^{t}\frac{(1-s)^{q-1}}{p(\tau)}\,d\tau-\alpha \delta \int_{s}^{t}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau\geq 0, \\& \frac{\alpha\gamma\int_{0}^{t}\frac{d\tau}{p(\tau)}\int_{t}^{1}\frac {(\tau-s)^{q-1}\,d\tau}{p(\tau)}}{\alpha\gamma\int_{t}^{1}\frac{d\tau }{p(\tau)}\int_{s}^{t}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau} \geq \frac{\int_{0}^{t}\frac{d\tau}{p(\tau)}\int_{t}^{1}\frac {(t-s)^{q-1}\,d\tau}{p(\tau)}}{\int_{t}^{1}\frac{d\tau}{p(\tau)}\int _{s}^{t}\frac{(t-s)^{q-1}}{p(\tau)}\,d\tau}\geq1. \end{aligned}$$
Then we get \(G(t,s)\geq0\) for \(0\leq s\leq t\leq1\).
(ii) For \(0 \leq t \leq s \leq1\),
$$ \frac{\partial G(t,s)}{\partial t}=\frac{\alpha}{\rho\Gamma(q)p(t)}\biggl[\delta(1-s)^{q-1}+\gamma \int _{s}^{1}\frac{(\tau-s)^{q-1}\,d\tau}{p(\tau)}\biggr]\geq 0. $$
Then it is easy to obtain
$$\begin{aligned} G(t,s)\leq G(s,s) \quad \mbox{for } 0\leq t\leq s\leq1. \end{aligned}$$
For \(0 \leq s \leq t \leq1\),
$$\begin{aligned} \frac{\partial G(t,s)}{\partial t} =&\frac{1}{\rho\Gamma(q)}\biggl\{ -\beta\gamma\frac{(t-s)^{q-1}}{p(t)} + \frac{\alpha\delta(1-s)^{q-1}}{p(t)}+\alpha\gamma\frac{1}{p(t)} \int_{t}^{1}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau \\ & -\alpha\gamma \int_{0}^{t}\frac{d\tau}{p(\tau)}\frac{(t-s)^{q-1}}{p(t)}- \alpha \delta\frac{(t-s)^{q-1}}{p(t)}+\alpha\gamma\frac{1}{p(t)} \int_{s}^{t}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau \\ &{}-\alpha\gamma \int_{t}^{1}\frac{d\tau}{p(\tau)}\frac{(t-s)^{q-1}}{p(t)} \biggr\} \\ =&\frac{1}{\rho\Gamma(q) p(t)}\biggl[-\rho(t-s)^{q-1}+\alpha \delta(1-s)^{q-1}+\alpha\gamma \int_{s}^{1}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau\biggr]. \end{aligned}$$
$$F(t)=-\rho(t-s)^{q-1}+\alpha\delta(1-s)^{q-1}+\alpha\gamma \int _{s}^{1}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau. $$
It is clear that \(F'(t)=-\rho(q-1)(t-s)^{q-2}<0\), which implies that \(F(\cdot)\) is decreasing on \(t\in(s,1]\). Since \(F(s)>0\) and \(F(1)<0\), there exists unique \(t_{0}(s)\in(s,1)\) such that \(F(t_{0})=0\), namely,
$$\begin{aligned} t_{0}(s)=s+\biggl[\frac{\alpha\delta(1-s)^{q-1}+\alpha\gamma\int_{s}^{1}\frac {(\tau-s)^{q-1}}{p(\tau)}\,d\tau}{\rho}\biggr]^{\frac{1}{{q-1}}}. \end{aligned}$$
From the above discussion, we get the conclusions
$$\begin{aligned}& \frac{\partial G(t,s)}{\partial t}\geq 0,\quad \mbox{for }t\in[s,t_{0}],\mbox{ and }G(s,s)\leq G(t,s)\leq G(t_{0},s), \\& \frac{\partial G(t,s)}{\partial t}\leq0,\quad \mbox{for }t\in[t_{0},1],\mbox{ and }G(1,s) \leq G(t,s)\leq G(t_{0},s). \end{aligned}$$
Furthermore, we obtain the estimate
$$G(t,s)\leq G\bigl(t_{0}(s),s\bigr),\quad \mbox{for } 0\leq s\leq t \leq1. $$
For \(0 \leq t \leq s \leq1\),
$$\frac{G(t,s)}{G(s,s)}=\frac{\beta+\alpha\int_{0}^{t}\frac{d\tau }{p(\tau)}}{\beta+\alpha\int_{0}^{s}\frac{d\tau}{p(\tau)}} \geq\frac{\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}}{\beta +\alpha\int_{0}^{1}\frac{d\tau}{p(\tau)}}=C_{1}(t). $$
$$\begin{aligned} \frac{G(t,s)}{G(s,s)}&=\frac{[\beta+\alpha\int_{0}^{t}\frac{d\tau }{p(\tau)}][\delta(1-s)^{q-1}+\gamma\int_{t}^{1}\frac{(\tau -s)^{q-1}\,d\tau}{p(\tau)}] -H(t,s)}{[\beta+\alpha\int_{0}^{s}\frac{d\tau}{p(\tau)}][\delta (1-s)^{q-1}+\gamma\int_{s}^{1}\frac{(\tau-s)^{q-1}\,d\tau}{p(\tau)}]} \\ &\geq\frac{[\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta (1-s)^{q-1}+\gamma\int_{t}^{1}\frac{(\tau-s)^{q-1}\,d\tau}{p(\tau)}] -H(t,s)}{[\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta +\gamma\int_{0}^{1}\frac{d\tau}{p(\tau)}]} \\ & \begin{aligned} &= \frac{[\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta (1-s)^{q-1}+\gamma\int_{t}^{1}\frac{(t-s)^{q-1}\,d\tau}{p(\tau)}] -\alpha[\delta+\gamma\int_{t}^{1}\frac{d\tau}{p(\tau)}]\int _{s}^{t}\frac{(t-s)^{q-1}}{p(\tau)}\,d\tau}{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]} \\ &\geq\frac{\beta\delta(1-t)^{q-1}+\beta\gamma\int_{t}^{1}\frac {(t-s)^{q-1}\,d\tau}{p(\tau)}+\alpha\delta(1-s)^{q-1}\int_{0}^{t}\frac {d\tau}{p(\tau)}+\alpha\gamma\int_{0}^{t}\frac{d\tau}{p(\tau)}\int _{t}^{1}\frac{(t-s)^{q-1}\,d\tau}{p(\tau)} }{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]} \\ &\quad {}-\frac{\alpha\delta(1-s)^{q-1}\int_{s}^{t}\frac{d\tau}{p(\tau )}+\alpha\gamma\int_{t}^{1}\frac{d\tau}{p(\tau)}\int_{s}^{t}\frac {(t-s)^{q-1}}{p(\tau)}\,d\tau}{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]} \\ &=\frac{\beta\delta(1-t)^{q-1}+\gamma(t-s)^{q-1}[\beta\int _{t}^{1}\frac{d\tau}{p(\tau)}+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau )}\int_{t}^{1}\frac{d\tau}{p(\tau)}- \alpha\int_{t}^{1}\frac{d\tau}{p(\tau)}\int_{s}^{t}\frac{d\tau}{p(\tau )}]}{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]} \\ &\quad {}+\frac{\alpha\delta(1-s)^{q-1}[\int_{0}^{t}\frac{d\tau}{p(\tau )}-\int_{s}^{t}\frac{d\tau}{p(\tau)}]}{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]} \\ &\geq\frac{\beta\delta(1-t)^{q-1}+\gamma(t-s)^{q-1}[\beta\int _{t}^{1}\frac{d\tau}{p(\tau)}+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau )}\int_{t}^{1}\frac{d\tau}{p(\tau)}- \alpha\int_{t}^{1}\frac{d\tau}{p(\tau)}\int_{0}^{t}\frac{d\tau}{p(\tau )}]}{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]} \\ &\quad {}+\frac{\alpha\delta(1-s)^{q-1}\int_{0}^{s}\frac{d\tau}{p(\tau)}}{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]} \\ &\geq\frac{\beta\delta(1-t)^{q-1}}{ [\beta+\alpha\int_{0}^{t}\frac{d\tau}{p(\tau)}][\delta+\gamma\int _{0}^{1}\frac{d\tau}{p(\tau)}]}=C_{2}(t). \end{aligned} \end{aligned}$$
Choosing \(C(t)=\min\{C_{1}(t), C_{2}(t)\}\), we get \(C(t)G(s,s)\leq G(t,s)\). □
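The constants \(C_{1}(t)\), \(C_{2}(t)\) and \(C(t)=\min\{C_{1}(t),C_{2}(t)\}\) are explicitly computable once \(p\), \(\alpha\), \(\beta\), \(\gamma\), \(\delta\) and q are fixed. The following Python sketch evaluates them by elementary quadrature; the concrete choice of p and of the parameters below is an assumption made only for illustration and is not taken from the paper.

```python
import numpy as np

# Illustrative data (assumptions, not taken from the paper):
alpha, beta, gamma, delta, q = 1.0, 1.0, 1.0, 1.0, 1.5
p = lambda t: 2.0 + np.sin(np.pi * t)      # any continuous p > 0 on [0, 1]

def int_inv_p(a, b, n=400):
    """Composite trapezoid rule for int_a^b dtau / p(tau)."""
    if b <= a:
        return 0.0
    tau = np.linspace(a, b, n + 1)
    vals = 1.0 / p(tau)
    return (b - a) / n * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])

def C1(t):
    return (beta + alpha * int_inv_p(0.0, t)) / (beta + alpha * int_inv_p(0.0, 1.0))

def C2(t):
    num = beta * delta * (1.0 - t) ** (q - 1.0)
    den = (beta + alpha * int_inv_p(0.0, t)) * (delta + gamma * int_inv_p(0.0, 1.0))
    return num / den

def C(t):
    return min(C1(t), C2(t))

# C(t) > 0 on [0, 1) and C(1) = 0, because of the factor (1 - t)^{q-1} in C2.
print([round(C(t), 4) for t in np.linspace(0.0, 1.0, 11)])
```

In particular, on any compact subinterval \([\theta,1-\theta]\subset(0,1)\) one has \(\min_{t\in[\theta,1-\theta]}C(t)>0\), which is the quantity entering the constant ς introduced below.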
Existence results I
Theorem 3.1
(Lyapunov type inequality)
Assume that \(\alpha,\beta,\gamma, \delta>0\), \(p(\cdot):[0,1]\rightarrow(0,+\infty)\), and let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}\) be a nontrivial Lebesgue integrable function. Then, for any nontrivial solution of the fractional Sturm-Liouville problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t)u(t)=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0, \end{cases} $$
the following so-called Lyapunov type inequality will be satisfied:
$$\int_{0}^{1} \bigl\vert \Lambda(s) \bigr\vert \,ds> \frac{1}{\overline{G}}, $$
where G̅ is defined in (iii) of Lemma 2.5.
From Lemma 2.4 and the triangle inequality, we get
$$\begin{aligned} \bigl\vert u(t) \bigr\vert = \biggl\vert \int_{0}^{1} G(t,s)\Lambda (s)u(s)\,ds \biggr\vert \leq \int_{0}^{1} G(t,s) \bigl\vert \Lambda(s)u(s) \bigr\vert \,ds. \end{aligned}$$
Let E denote the Banach space \(C[0,1]\) with the norm defined by \(\Vert u \Vert =\max_{t\in[0,1]} \vert u(t) \vert \). Via some simple computations, we can obtain
$$\begin{aligned} \bigl\Vert u(t) \bigr\Vert &\leq \max_{t\in[0,1]} \int_{0}^{1} G(t,s) \bigl\vert \Lambda(s)u(s) \bigr\vert \,ds \\ &\leq \bigl\Vert u(t) \bigr\Vert \max_{t\in[0,1]} \int_{0}^{1} G(t,s) \bigl\vert \Lambda(s) \bigr\vert \,ds \\ &\leq \bigl\Vert u(t) \bigr\Vert \int_{0}^{1} \Bigl[\max_{t\in [0,1]}G(t,s) \Bigr] \bigl\vert \Lambda(s) \bigr\vert \,ds, \end{aligned} $$
Since u is nontrivial, we have \(\Vert u \Vert >0\), and hence
$$\int_{0}^{1} \bigl\vert \Lambda(s) \bigr\vert \,ds>\frac{1}{\overline{G}}. $$
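In practice, Theorem 3.1 can be read as a nonexistence test: if \(\int_{0}^{1}\vert\Lambda(s)\vert\,ds\leq 1/\overline{G}\), then the problem admits no nontrivial solution. A minimal Python sketch of this test is given below; the value of G̅ is assumed to have been computed beforehand (for instance by maximizing \(G(t_{0}(s),s)\) over a grid in s), and the sample weight Λ and the value 2.0 are assumptions chosen only for illustration.

```python
import numpy as np

def lyapunov_test(Lam, G_bar, n=2000):
    """Check the necessary condition int_0^1 |Lam(s)| ds > 1 / G_bar of Theorem 3.1.

    Lam   : callable weight function on [0, 1] (vectorized)
    G_bar : float, the bound max_s G(t_0(s), s), assumed precomputed
    """
    s = np.linspace(0.0, 1.0, n + 1)
    vals = np.abs(Lam(s))
    integral = (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1]) / n
    return integral > 1.0 / G_bar

# If this prints False, the necessary condition fails, so no nontrivial solution exists.
print(lyapunov_test(lambda s: 0.1 * np.ones_like(s), G_bar=2.0))
```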
Theorem 3.2
(Generalized Lyapunov type inequality)
Assume that \(\alpha,\beta,\gamma, \delta>0\), \(p(\cdot):[0,1]\rightarrow(0,+\infty)\), let \(\Lambda(t): [0, 1] \rightarrow\mathbb{R}\) be a nontrivial Lebesgue integrable function, and let \(f\) be a positive, continuous and concave function on \(\mathbb{R}\). Then, for any nontrivial solution of the fractional Sturm-Liouville problem (1.1), the following so-called Lyapunov type inequality will be satisfied:
$$\int_{0}^{1} \bigl\vert \Lambda(s) \bigr\vert \,ds> \frac {u^{*}}{\overline{G}\max_{u\in[u_{*},u^{*}]}f(u)}, $$
where
$$u_{*}=\min_{t\in[0,1]}u(t),\qquad u^{*}=\max_{t\in[0,1]}u(t). $$
Proceeding as in the proof of Theorem 3.1, we get
$$\begin{aligned} \bigl\vert u(t) \bigr\vert \leq& \int_{0}^{1} G(t,s) \bigl\vert \Lambda(s) \bigr\vert f\bigl(u(s)\bigr)\,ds. \end{aligned}$$
Since f is continuous and concave, using Jensen's inequality (1.3) we obtain
$$\begin{aligned} \bigl\Vert u(t) \bigr\Vert \leq& \max_{t\in[0,1]} \int_{0}^{1} G(t,s) \bigl\vert \Lambda(s) \bigr\vert f\bigl(u(s)\bigr)\,ds \\ \leq& \int_{0}^{1} \Bigl[\max_{t\in[0,1]}G(t,s) \Bigr] \bigl\vert \Lambda (s) \bigr\vert f\bigl(u(s)\bigr)\,ds \\ \leq&\overline{G} \bigl\vert \Lambda(t) \bigr\vert _{L^{1}} \int_{0}^{1} \frac{ \vert \Lambda(s) \vert }{ \vert \Lambda (t) \vert _{L^{1}}}f\bigl(u(s)\bigr)\,ds \\ \leq&\overline{G}\max_{u\in[u_{*},u^{*}]}f(u) \bigl\vert \Lambda (t) \bigr\vert _{L^{1}}, \end{aligned}$$
namely,
$$\begin{aligned} \int_{0}^{1} \bigl\vert \Lambda(s) \bigr\vert \,ds>\frac{u^{*}}{\overline {G}\max_{u\in[u_{*},u^{*}]}f(u)}. \end{aligned}$$
For convenience, we introduce the following notation:
$$\begin{aligned}& \varpi=\max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr]; \\& \varsigma=\min_{t\in[\theta,1-\theta]} C(t)\cdot \int_{0}^{1} G(s,s)\Lambda(s)\,ds. \end{aligned}$$
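Both ϖ and ς can be approximated by quadrature once the Green's function data are available. The sketch below assumes that callables \(G_{\text{peak}}(s)=G(t_{0}(s),s)\), \(G_{\text{diag}}(s)=G(s,s)\), \(C(t)\) and \(\Lambda(s)\), together with the number θ, are supplied by the user; these names are introduced here only for the illustration.

```python
import numpy as np

def varpi_varsigma(G_peak, G_diag, C, Lam, theta=0.25, n=400):
    """Approximate the constants varpi and varsigma defined above by quadrature.

    G_peak(s) = G(t_0(s), s), G_diag(s) = G(s, s), C(t) and Lam(s) are assumed
    to be user-supplied vectorized callables; theta in (0, 1/2) is the usual
    parameter of the cone estimates.
    """
    s = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f1 = G_peak(s) * Lam(s)          # integrand of int_0^t G(t_0(s), s) Lam(s) ds
    f2 = G_diag(s) * Lam(s)          # integrand of int_t^1 G(s, s)      Lam(s) ds
    # cumulative trapezoid rules
    left = np.concatenate(([0.0], np.cumsum(h * 0.5 * (f1[:-1] + f1[1:]))))   # int_0^{s_i} f1
    cum2 = np.concatenate(([0.0], np.cumsum(h * 0.5 * (f2[:-1] + f2[1:]))))   # int_0^{s_i} f2
    right = cum2[-1] - cum2                                                    # int_{s_i}^1 f2
    varpi = float(np.max(left + right))
    ts = np.linspace(theta, 1.0 - theta, 101)
    varsigma = min(C(t) for t in ts) * float(cum2[-1])   # C_min * int_0^1 G(s, s) Lam(s) ds
    return varpi, varsigma
```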
Theorem 3.3
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and \(f:\mathbb{R}\rightarrow\mathbb{R}\) be a continuous function satisfying the Lipschitz condition
$$\bigl\vert f(x)-f(y) \bigr\vert \leq L \vert x-y \vert ,\quad \forall x,y\in \mathbb{R}, L>0. $$
Then problem (1.1) has a unique solution if \(L\varpi<1\).
By Lemma 2.4, u is a solution of problem (1.1) if and only if u is a fixed point of the operator \(T:E\rightarrow E\) defined by \(T(u(t))=\int_{0}^{1} G(t,s)\Lambda(s)f(u(s))\,ds\).
Let \(\nu= \vert f(0) \vert \). Now we show that \(T(B_{r})\subset B_{r}\), where \(B_{r}=\{u\in C[0,1]: \Vert u \Vert < r\} \) with \(r>\frac{\nu\varpi}{1-L\varpi}\). For \(u\in B_{r}\), one has \(\vert f(u) \vert = \vert f(u)-f(0)+f(0) \vert \leq L \vert u \vert +\nu\leq L r+\nu\). Furthermore, we have
$$\begin{aligned} \bigl\Vert T(u) (t) \bigr\Vert =& \biggl\Vert \int_{0}^{1} G(t,s)\Lambda (s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ =& \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda (s)f\bigl(u(s)\bigr)\,ds \\ \leq& \biggl\Vert \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ \leq& (L r+\nu)\max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr] \\ =& (L r+\nu)\varpi\leq r, \end{aligned}$$
which yields \(T(B_{r})\subset B_{r}\).
For any \(x,y\in E\), we have
$$\begin{aligned} \bigl\Vert T(x)-T(y) \bigr\Vert =& \biggl\Vert \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(x(s)\bigr)\,ds- \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(y(s)\bigr)\,ds \biggr\Vert \\ \leq& \sup_{t\in[0,1]}\biggl\{ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s) \bigl\vert f\bigl(x(s)\bigr)-f\bigl(y(s)\bigr) \bigr\vert \,ds \\ &{}+ \int_{t}^{1} G(s,s)\Lambda(s) \bigl\vert f \bigl(x(s)\bigr)-f\bigl(y(s)\bigr) \bigr\vert \,ds \biggr\} \\ \leq& L\max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr] \Vert x-y \Vert \\ =& L\varpi \Vert x-y \Vert . \end{aligned}$$
Since \(L\varpi<1\), it follows from Banach's contraction mapping principle that the operator T has a unique fixed point, which corresponds to the unique solution of problem (1.1). This completes the proof. □
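The contraction argument is constructive: when \(L\varpi<1\), the Picard iterates \(u_{k+1}=T(u_{k})\) converge geometrically to the unique solution. A hedged numerical sketch is given below, with G, Λ and f assumed to be user-supplied vectorized callables and the trapezoid rule chosen only for simplicity.

```python
import numpy as np

def picard_solve(G, Lam, f, n=200, tol=1e-10, max_iter=500):
    """Successive approximations u_{k+1}(t) = int_0^1 G(t, s) Lam(s) f(u_k(s)) ds.

    Under L * varpi < 1 (Theorem 3.3) the map is a contraction on C[0, 1], so the
    iteration converges to the unique solution.  G(t, s), Lam(s) and f(u) are
    assumed to be supplied by the user; Lam and f must accept numpy arrays.
    """
    t = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 0.5 / n                                  # trapezoid weights
    K = np.array([[G(ti, sj) for sj in t] for ti in t])     # kernel samples G(t_i, s_j)
    u = np.zeros(n + 1)
    for _ in range(max_iter):
        u_new = K @ (w * Lam(t) * f(u))
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return t, u_new
```

The stopping tolerance mirrors the standard a priori estimate \(\Vert u_{k}-u\Vert\leq (L\varpi)^{k}\Vert u_{1}-u_{0}\Vert/(1-L\varpi)\) of the contraction principle.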
Theorem 3.4
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and \(f:\mathbb{R}\rightarrow\mathbb{R}\) be a continuous function satisfying the following:
(F0) There exists a positive constant K such that \(\vert f(u) \vert \leq K\) for \(u\in\mathbb{R}\).
Then problem (1.1) has at least one solution.
First, since the function \(p:[0,1]\rightarrow (0,+\infty)\) is continuous, we get \(p_{*}=\min_{t\in[0,1]}p(t)>0\). Further, from (2.1) and (2.2), we get the following estimates respectively:
$$\begin{aligned} 0 < &\frac{\partial G(t,s)}{\partial t}=\frac{\alpha}{\rho\Gamma(q)p(t)}\biggl[\delta(1-s)^{q-1}+ \gamma \int_{s}^{1}\frac{(\tau-s)^{q-1}\,d\tau}{p(\tau)}\biggr] \\ \leq&\frac{\alpha}{\rho\Gamma(q)p_{*}}\biggl[\delta+\gamma \int _{0}^{1}\frac{d\tau}{p(\tau)}\biggr]; \end{aligned}$$
$$\begin{aligned} \biggl\vert \frac{\partial G(t,s)}{\partial t} \biggr\vert =& \biggl\vert \frac{1}{\rho\Gamma(q) p(t)} \biggl[-\rho(t-s)^{q-1}+\alpha\delta(1-s)^{q-1}+\alpha\gamma \int _{s}^{1}\frac{(\tau-s)^{q-1}}{p(\tau)}\,d\tau\biggr]\biggr\vert \\ \leq&\frac{1}{\rho\Gamma(q) p_{*}}\biggl[\rho+\alpha\delta+\alpha\gamma \int_{0}^{1}\frac{d\tau}{p(\tau)}\biggr]; \end{aligned}$$
which implies that \(\vert \frac{\partial G(t,s)}{\partial t} \vert \) is bounded for \(0 \leq s, t \leq1\), namely, there exists \(S>0\) such that \(\vert \frac{\partial G(t,s)}{\partial t} \vert \leq S\). Combining this with \(\vert f(u) \vert \leq K\) for \(u\in\mathbb{R}\), we obtain
$$\bigl\vert (Tu)'(t) \bigr\vert = \biggl\vert \int_{0}^{1}\frac{\partial G(t,s)}{\partial t}\Lambda(s)f\bigl(u(s) \bigr)\,ds \biggr\vert \leq SK \bigl\Vert \Lambda(t) \bigr\Vert _{L^{1}}. $$
Hence, for any \(t_{1},t_{2}\in[0,1]\), we have
$$\begin{aligned} \bigl\vert (Tu) (t_{2})-(Tu) (t_{1}) \bigr\vert =& \biggl\vert \int _{t_{1}}^{t_{2}}(Tu)'(t)\,dt \biggr\vert \leq SK \bigl\Vert \Lambda(t) \bigr\Vert _{L^{1}} \vert t_{2}-t_{1} \vert . \end{aligned}$$
This means that the family \(\{Tu: u\in E\}\) is equicontinuous on \([0,1]\). Thus, by the Arzelà-Ascoli theorem, the operator T is completely continuous.
Finally, let \(B_{r}=\{u\in E: \Vert u \Vert < r\}\) with \(r=K\varpi+1\). If there exist \(u\in\partial B_{r}\) and \(\lambda\in(0,1)\) such that \(u=\lambda T(u)\), then we obtain
$$\begin{aligned} \Vert u \Vert =&\lambda \bigl\Vert Tu(t) \bigr\Vert =\lambda \biggl\Vert \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ =&\lambda \biggl\Vert \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ < & \max_{t\in[0,1]} \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s) \bigl\vert f\bigl(u(s)\bigr) \bigr\vert \,ds+ \int_{t}^{1} G(s,s) \bigl\vert \Lambda(s)f \bigl(u(s)\bigr) \bigr\vert \,ds \\ \leq& K\max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr] \\ \leq& K\varpi, \end{aligned}$$
which yields a contradiction. Therefore, by Lemma 1.2, the operator T has a fixed point in E. □
Theorem 3.5
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and \(f:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) be a continuous function satisfying (F0). In addition, the following assumption holds:
(F1) There exists a positive constant \(r_{1}\) such that
$$f(u)\geq\varsigma^{-1}r_{1}\quad \textit{for } u \in[0,r_{1}]. $$
Then problem (1.1) has at least one positive solution.
Define a cone P of the Banach space E as \(P=\{u\in E:u\geq0\}\). From the proof of Theorem 3.4, we know that \(T:P\rightarrow P\) is completely continuous. Set \(P_{r_{i}}=\{u\in P: \Vert u \Vert < r_{i}\}\).
For \(u\in\partial P_{r_{1}}\), one has \(0\leq u\leq r_{1}\). For \(t\in [\theta,1-\theta]\), we have
$$\begin{aligned} T\bigl(u(t)\bigr) =& \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \\ \geq& \int_{0}^{1} C(t) G(s,s)\Lambda(s)f\bigl(u(s)\bigr) \,ds \\ \geq&\min_{t\in[\theta,1-\theta]} C(t)\cdot \int_{0}^{1} G(s,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \\ \geq&\min_{t\in[\theta,1-\theta]} C(t)\cdot \int_{0}^{1} G(s,s)\Lambda(s)\,ds\cdot\varsigma^{-1} r_{1} \\ =&\varsigma\cdot\varsigma^{-1}r_{1}=r_{1}= \Vert u \Vert . \end{aligned}$$
Choose \(r_{2}>K\varpi\). Then, for \(u\in\partial P_{r_{2}}\), we have
$$\begin{aligned} \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert =& \biggl\Vert \int_{0}^{1} G(t,s)\Lambda (s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ =& \biggl\Vert \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ \leq& \max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr]K \\ < & r_{2}= \Vert u \Vert . \end{aligned}$$
Then, by Lemma 1.3, problem (1.1) has at least one positive solution \(u(t)\) belonging to E such that \(r_{1} \leq \Vert u \Vert \leq r_{2}\). □
Theorem 3.6
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and \(f:\mathbb{R}\rightarrow\mathbb{R}\) be a continuous function satisfying the following assumptions:
(F2) There exists a nondecreasing function \(\varphi:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) such that
$$\bigl\vert f(u) \bigr\vert \leq\varphi\bigl( \Vert u \Vert \bigr),\quad \forall u\in\mathbb{R}; $$
(F3) There exists a constant \(R>0\) such that \(\frac{R}{\varpi\varphi(R)}>1\).
Then problem (1.1) has at least one solution.
From the proof of Theorem 3.4, we know that T is completely continuous. Now we show that (ii) of Lemma 1.2 does not hold. Suppose that there exist \(u\in E\) and \(\lambda\in(0,1)\) such that \(u=\lambda T(u)\); then we obtain
$$\begin{aligned} \Vert u \Vert =&\lambda \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert = \lambda \biggl\Vert \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ =&\lambda \biggl\Vert \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ < & \max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s) \bigl\vert f\bigl(u(s)\bigr) \bigr\vert \,ds+ \int_{t}^{1} G(s,s)\Lambda(s) \bigl\vert f \bigl(u(s)\bigr) \bigr\vert \,ds\biggr] \\ \leq& \max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr]\varphi\bigl( \Vert u \Vert \bigr) \\ \leq&\varpi\varphi\bigl( \Vert u \Vert \bigr). \end{aligned}$$
Let \(B_{R}=\{u\in E: \Vert u \Vert < R\}\). If \(\Vert u \Vert =R\), then the above inequality gives \(R\leq\varpi\varphi(R)\), which contradicts (F3). Therefore, by Lemma 1.2, the operator T has a fixed point in \(B_{R}\). □
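Condition (F3) only requires one admissible radius R, which can be located by a direct scan when φ and ϖ are known numerically. In the sketch below the function name, the scanning range and the sample value of ϖ are assumptions made for illustration; the sample growth bound matches the one used in an example below.

```python
import numpy as np

def find_R(varphi, varpi, R_max=1e6, num=400):
    """Scan for a radius R with R / (varpi * varphi(R)) > 1, i.e. condition (F3)."""
    for R in np.geomspace(1e-3, R_max, num):
        if R > varpi * varphi(R):
            return R
    return None   # no admissible radius found in the scanned range

# Example: varphi(x) = x^(1/5) + x^(1/3) + 2 is sublinear, so (F3) holds for large R;
# the value varpi = 1.5 is an assumption for the illustration.
print(find_R(lambda x: x ** 0.2 + x ** (1.0 / 3.0) + 2.0, varpi=1.5))
```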
Theorem 3.7
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and \(f:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) be a continuous function. Suppose that (F2) and (F3) hold. In addition, the following assumption holds:
(F4) There exists a positive constant r with \(r< R\) and a function \(\psi:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) satisfying
$$\begin{aligned}& f(u)\geq\psi\bigl( \Vert u \Vert \bigr), \quad \textit{for } u\in[0,\varsigma r], \\& \psi(\varsigma r)\geq r. \end{aligned}$$
If \(\varsigma<1\), then (1.1) has at least one positive solution \(u(t)\).
Let \(B_{r}=\{u\in E: \Vert u \Vert < r\}\).
Part (I). For any \(u\in\partial(B_{R}\cap P)\), from (F2) and (F3) it follows that
$$\begin{aligned} \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert =& \biggl\Vert \int_{0}^{1} G(t,s)\Lambda (s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ =& \biggl\Vert \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ < & \max_{t\in[0,1]} \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \\ \leq& \max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr] \varphi\bigl( \Vert u \Vert \bigr) \\ =&\varpi\varphi(R) \\ \leq& R= \Vert u \Vert , \end{aligned}$$
which implies that (A) of Lemma 1.4 holds.
Now we prove that \(u\neq T(u)+\mu\) for \(u\in\partial(B_{\varsigma r}\cap P)\) and \(\mu>0\). On the contrary, if there exist \(u_{0}\in\partial(B_{\varsigma r}\cap P)\) and \(\mu_{0}>0\) such that \(u_{0}=T(u_{0})+\mu_{0}\), then, for \(t\in[\theta,1-\theta]\), one has \(\min_{t\in[\theta,1-\theta]} C(t)>0\). Furthermore, from (F4) it follows that
$$\begin{aligned} u_{0}(t)&=T\bigl(u_{0}(t) \bigr)+\mu_{0} \\ &= \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(u_{0}(s) \bigr)\,ds+\mu_{0} \\ &\geq \int_{0}^{1} C(t) G(s,s)\Lambda(s)f \bigl(u_{0}(s)\bigr)\,ds+\mu_{0} \\ &\geq\min_{t\in[\theta,1-\theta]} C(t) \int_{0}^{1} G(s,s)\Lambda (s)f\bigl(u_{0}(s) \bigr)\,ds+\mu_{0} \\ &\geq\min_{t\in[\theta,1-\theta]} C(t) \int_{0}^{1} G(s,s)\Lambda (s)\psi(\varsigma r)\,ds+ \mu_{0} \\ &\geq\varsigma r+\mu_{0}. \end{aligned} $$
Furthermore, we get
$$\varsigma r\geq\min_{t\in[\theta,1-\theta]}u_{0}(t)\geq\varsigma r+ \mu_{0}>\varsigma r, $$
which yields a contradiction. So (B) of Lemma 1.4 holds.
Therefore, Lemma 1.4 guarantees that T has at least one fixed point. □
Theorem 3.8
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and \(f:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}\) be a continuous function satisfying (F0). In addition, the following assumptions hold:
(F5) \(\lim_{u\rightarrow0^{+}}\frac{f(u)}{u}=0\);
(F6) There exists \(\overline{R}>0\) such that \(\min_{u\in[\vartheta\overline{R}, \overline{R}]}f(u)>\sigma \overline{R}\), where
$$\begin{aligned}& 0< \vartheta=\eta\Bigl[\min_{t\in[\theta,1-\theta]} C(t)\Bigr]< 1, \\& 0< \eta=\biggl[\max_{0\leq s\leq1}\frac{G(t_{0}(s),s)}{G(s,s)} \biggr]^{-1}\leq1, \\& \sigma=\biggl[\min_{t\in[\theta,1-\theta]} C(t) \int_{\theta}^{1-\theta} G(s,s)\Lambda(s)\,ds \biggr]^{-1}. \end{aligned}$$
Then problem (1.1) has at least two solutions.
From Lemma 2.5, we can derive the following inequalities:
$$\begin{aligned} \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert &= \biggl\Vert \int_{0}^{1} G(t,s)\Lambda (s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ &= \biggl\Vert \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ &\leq \max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr] \\ & \begin{aligned} &=\max_{t\in[0,1]}\biggl[ \int_{0}^{t} \frac{G(t_{0}(s),s)}{G(s,s)}G(s,s)\Lambda(s)f \bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr] \\ &\leq\max_{0\leq s\leq1}\frac{G(t_{0}(s),s)}{G(s,s)}\biggl[ \int_{0}^{1} G(s,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr] \end{aligned} \end{aligned}$$
$$\begin{aligned} T\bigl(u(t)\bigr) =& \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \\ \geq& \int_{0}^{1} C(t) G(s,s)\Lambda(s)f\bigl(u(s)\bigr) \,ds. \end{aligned}$$
Combining the two inequalities, we have
$$\begin{aligned} T\bigl(u(t)\bigr) \geq C(t)\eta \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert . \end{aligned}$$
Define a subcone P̂ of the Banach space E as \(\widehat{P}=\{u\in E:u\geq C(t)\eta \Vert u(t) \Vert \} \). From the standard process, we know that \(T:\widehat{P}\rightarrow\widehat{P}\) is completely continuous. Set \(\widehat{P}_{r}=\{u\in \widehat{P}: \Vert u \Vert < r\}\).
Since \(\lim_{u\rightarrow0^{+}}\frac{f(u)}{u}=0\), there exist \(\epsilon>0\) and \(r>0\) such that \(f(u)\leq\epsilon u\) for \(0\leq u\leq r\), where ϵ satisfies \(\epsilon\varpi<1\). For \(u\in \partial\widehat{P}_{r}\), we have
$$\begin{aligned} \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert =& \biggl\Vert \int_{0}^{1} G(t,s)\Lambda (s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ =& \biggl\Vert \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ \leq& \epsilon\max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr] \Vert u \Vert \\ < & \Vert u \Vert . \end{aligned}$$
In a similar way, we choose \(R>K\varpi\). Then, for \(u\in\partial \widehat{P}_{R}\), we have
$$\begin{aligned} \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert =& \biggl\Vert \int_{0}^{1} G(t,s)\Lambda (s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ =& \biggl\Vert \int_{0}^{t} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds+ \int_{t}^{1} G(t,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \biggr\Vert \\ \leq& \max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr]K \\ < & R= \Vert u \Vert . \end{aligned}$$
For any \(u\in\partial\widehat{P}_{\overline{R}}\) and any \(t^{*}\in(\theta,1-\theta)\), it is easy to verify that \(u(t^{*})\in [\vartheta\overline{R},\overline{R}]\). Furthermore, we have
$$\begin{aligned} T\bigl(u\bigl(t^{*}\bigr)\bigr) =& \int_{0}^{1} G\bigl(t^{*},s\bigr)\Lambda(s)f\bigl(u(s) \bigr)\,ds \\ \geq& C\bigl(t^{*}\bigr) \int_{\theta}^{1-\theta} G(s,s)\Lambda(s)f\bigl(u(s)\bigr)\,ds \\ \geq& C\bigl(t^{*}\bigr) \int_{\theta}^{1-\theta} G(s,s)\Lambda(s)\min _{u\in [\vartheta\overline{R}, \overline{R}]}f\bigl(u(s)\bigr)\,ds \\ \geq& \Bigl[\min_{t\in[\theta,1-\theta]} C(t)\Bigr] \int_{\theta}^{1-\theta } G(s,s)\Lambda(s)\sigma\overline{R}\,ds \\ =&\overline{R}= \Vert u \Vert . \end{aligned}$$
Then, by Lemma 1.3, problem (1.1) has at least two positive solutions \(u_{1}\) and \(u_{2}\) with \(r\leq \Vert u_{1} \Vert \leq\overline{R}\) and \(\overline{R}\leq \Vert u_{2} \Vert \leq R\). □
Let us consider the problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t)\arctan u=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0. \end{cases} $$
Since \(\vert f(u) \vert = \vert \arctan u \vert <\pi\), this problem has a solution by Theorem 3.4. Assume further that \(\Lambda(t)\) satisfies
$$\varpi=\max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr]< 1. $$
Moreover, since
$$f'(u)=(\arctan u)'=\frac{1}{1+u^{2}}\leq1=L, $$
we have \(L\varpi<1\), and therefore this problem has a unique solution by Theorem 3.3.
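The Lipschitz constant L=1 used above can be confirmed symbolically; the short SymPy check below is included only as an illustration.

```python
import sympy as sp

u = sp.symbols('u', real=True)
fprime = sp.diff(sp.atan(u), u)
print(sp.simplify(fprime))   # 1/(u**2 + 1), which lies in (0, 1] for every real u,
                             # so |arctan x - arctan y| <= |x - y| and L = 1 in Theorem 3.3
```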
Let us consider the problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t) e^{-u^{100}}=0,\quad 1 < q\leq2, t\in(0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0. \end{cases} $$
Since \(f(u)=e^{-u^{100}}\leq1\), we can choose \(r_{1}=\varpi+1\). Then it is clear that
$$f(u)\leq1< \varpi^{-1}r_{1}\quad \mbox{for }u \in[0,r_{1}], $$
which shows that (F0) holds with \(K=1\) and that \(r_{1}>K\varpi\). Finally, for any \(r>0\), we have \(f(u)\geq e^{-r^{100}}\) for \(u\in[0,r]\). Since \(\lim_{r\rightarrow 0^{+}}\frac{e^{-r^{100}}}{\varsigma^{-1}r}=+\infty\), there exists \(r_{2}< r_{1}\) such that \(f(u)\geq\varsigma^{-1}r_{2} \) for \(u\in [0,r_{2}]\), which implies that (F1) holds. Therefore, this problem has at least one positive solution by Theorem 3.5.
Let us consider the problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t) e^{-u^{2}}(\arctan u^{\frac{1}{5}}+\sin u^{\frac{1}{3}}+2)=0, \quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0, \qquad\gamma u(1)+\delta p(1)u'(1)=0. \end{cases} $$
It is clear that \(\vert f(u) \vert =\vert e^{-u^{2}}(\arctan u^{\frac{1}{5}}+\sin u^{\frac{1}{3}}+2)\vert\leq \Vert u \Vert ^{\frac{1}{5}}+ \Vert u \Vert ^{\frac{1}{3}}+2=\varphi( \Vert u \Vert )\), \(\forall u\in R\). Then (F2) holds. Furthermore, for sufficiently large \(R>0\), the inequality \(\frac{R}{\varpi\varphi(R)}>1\) obviously holds, namely, (F3) holds. Then this problem has at least one solution by Theorem 3.6.
For \(u\in R^{+}\), since \(f(u)=e^{-u^{2}}(\arctan u^{\frac{1}{5}}+\sin u^{\frac{1}{3}}+2)\geq e^{-u^{2}}\geq e^{- \Vert u \Vert ^{2}}=\psi( \Vert u \Vert )\), we have \(f(u)\geq\psi( \Vert u \Vert ) \) for \(u\in [0,\varsigma r]\), for any \(r>0\). Via some simple computations, we get \(\lim_{r\rightarrow 0^{+}}\frac{\psi(\varsigma r)}{r}=+\infty\). Then there exists sufficiently small \(r>0\) such that \(\psi(\varsigma r)\geq r\). From the above discussions, we have that (F4) holds. Therefore, this problem has at least one positive solution \(u(t)\) for \(\varsigma<1\) by Theorem 3.7.
Let us consider the problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t) \frac{2\sigma+1}{(2\vartheta)^{2}e^{-2\vartheta}}u^{2}e^{-u}=0,\quad1 < q\leq 2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0. \end{cases} $$
Since \(f(u)=\frac{2\sigma+1}{(2\vartheta)^{2}e^{-2\vartheta}}u^{2}e^{-u}\), via some simple computations, we can verify that (F0) and (F5) hold. In addition, since \(f'(u)=\frac{2\sigma+1}{(2\vartheta)^{2}e^{-2\vartheta }}e^{-u}(2u-u^{2})=\frac{2\sigma+1}{(2\vartheta)^{2}e^{-2\vartheta }}e^{-u}u(2-u)\), it is clear that \(f'(u)>0\) for \(u\in(0,2)\) and \(f'(u)<0\) for \(u\in(2,+\infty)\). Let \(\overline{R}=2\); then, since \(0<2\vartheta<2\), we have \(\min_{u\in[2\vartheta, 2]}f(u)=f(2\vartheta)=\frac{2\sigma+1}{(2\vartheta)^{2}e^{-2\vartheta}}(2\vartheta )^{2}e^{-2\vartheta}=2\sigma+1>2\sigma=\sigma\overline{R}\), so (F6) holds. Therefore, this problem has at least two positive solutions by Theorem 3.8.
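The key numerical fact, \(\min_{u\in[2\vartheta,2]}f(u)=2\sigma+1>2\sigma=\sigma\overline{R}\), is easy to confirm. In the sketch below the concrete values of σ and ϑ are assumptions made only for the check; in the theorem they are determined by the Green's function data.

```python
import numpy as np

# Illustrative values (assumptions); the theorem requires 0 < vartheta < 1 and sigma > 0.
sigma, vartheta = 3.0, 0.4
c = (2.0 * sigma + 1.0) / ((2.0 * vartheta) ** 2 * np.exp(-2.0 * vartheta))
f = lambda u: c * u ** 2 * np.exp(-u)

u = np.linspace(2.0 * vartheta, 2.0, 1000)
print(float(f(u).min()), ">", 2.0 * sigma)              # minimum is attained at u = 2*vartheta
print(np.isclose(f(u).min(), 2.0 * sigma + 1.0))        # and equals 2*sigma + 1 > 2*sigma
```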
Existence results II
Theorem 4.1
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and \(f:\mathbb{R}\times[0,+\infty)\rightarrow\mathbb{R}\) be a continuous function satisfying the following:
(H) There exists a positive constant K such that \(\vert f(u,\lambda) \vert \leq K\) for \(u\in\mathbb{R}\), \(\lambda\in\mathbb{R}_{+}\).
Then problem (1.2) has at least one solution.
This result can be directly derived from the proof of Theorem 3.4.
Now define a cone P of the Banach space E as \(P=\{u\in E:u\geq0\}\). Let \(P_{r_{i}}=\{u\in P: {\Vert u \Vert < r_{i}}\}\). Define T by
$$T\bigl(u(t)\bigr)= \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(u(s),\lambda \bigr)\,ds. $$
From the proof of Theorem 3.4, we know that \(T:P\rightarrow P\) is completely continuous.
Theorem 4.2
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and f be a nonnegative continuous function satisfying (H). If \(f(0,0)>0\), then there exists \(\lambda^{*}>0\) such that problem (1.2) has at least one solution for \(0\leq\lambda<\lambda^{*}\).
Since \(f(u,\lambda)\) is continuous and \(f(0,0)>0\), for any given \(\epsilon>0\) (sufficiently small) there exists \(\delta>0\) such that \(f(u,\lambda)>f(0,0)-\epsilon\) if \(0\leq u<\delta\), \(0\leq\lambda<\delta\). Choose \(r_{1}<\min\{\delta,\varsigma(f(0,0)-\epsilon)\}\) and set \(\lambda^{*}=\delta\). Then, for any \(u\in \partial P_{r_{1}}\) and \(t\in[\theta,1-\theta]\), we have
$$\begin{aligned} T\bigl(u(t)\bigr) =& \int_{0}^{1} G(t,s)\Lambda(s)f\bigl(u(s),\lambda \bigr)\,ds \\ \geq& \int_{0}^{1} C(t) G(s,s)\Lambda(s)f\bigl(u(s), \lambda\bigr)\,ds \\ \geq&\min_{t\in[\theta,1-\theta]} C(t)\cdot \int_{0}^{1} G(s,s)\Lambda(s)f\bigl(u(s),\lambda \bigr)\,ds \\ \geq&\min_{t\in[\theta,1-\theta]} C(t)\cdot \int_{0}^{1} G(s,s)\Lambda(s)\,ds\cdot\bigl(f(0,0)- \epsilon\bigr) \\ >&r_{1}= \Vert u \Vert . \end{aligned}$$
Choose \(r_{2}>K\varpi\). Then, for \(u\in\partial P_{r_{2}}\), we have
$$\begin{aligned} \bigl\Vert T\bigl(u(t)\bigr) \bigr\Vert =& \biggl\Vert \int_{0}^{1} G(t,s)\Lambda (s)f\bigl(u(s),\lambda\bigr)\,ds \biggr\Vert \\ \leq& \max_{t\in[0,1]} \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)f\bigl(u(s),\lambda\bigr)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)f\bigl(u(s),\lambda\bigr)\,ds \\ \leq& \max_{t\in[0,1]}\biggl[ \int_{0}^{t} G\bigl(t_{0}(s),s\bigr) \Lambda(s)\,ds+ \int_{t}^{1} G(s,s)\Lambda(s)\,ds\biggr]K \\ < & r_{2}= \Vert u \Vert . \end{aligned}$$
Then, by Lemma 1.3, problem (1.2) has at least one positive solution \(u(t)\) with \(r_{1}\leq \Vert u \Vert \leq r_{2}\) for \(0\leq\lambda<\lambda^{*}\). □
Corollary 4.3
Let \(\Lambda(t): [0, 1] \rightarrow \mathbb{R}_{+}\) be a nontrivial Lebesgue integrable function and f be a nonnegative continuous function satisfying (H). If \(\lim_{u\rightarrow 0^{+}}f(u,\lambda)=f(0,0)>0\), then problem (1.2) has at least one solution for any \(\lambda\geq0\).
Let us consider the problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t) ( \arctan u^{2}+e^{-\lambda})=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0. \end{cases} $$
It is clear that (H) holds and \(f(0,0)>0\). Then there exists \(\lambda ^{*}>0\) such that this problem has at least one solution for \(0\leq\lambda<\lambda^{*}\).
Let us consider the problem
$$\textstyle\begin{cases} D_{0^{+}}^{q} (p(t)u'(t))+\Lambda(t) e^{-\lambda u^{100}}=0,\quad1 < q\leq2, t\in (0,1), \\ \alpha u(0)-\beta p(0)u'(0)=0,\qquad\gamma u(1)+\delta p(1)u'(1)=0. \end{cases} $$
It is clear that (H) holds and \(\lim_{u\rightarrow 0^{+}}f(u,\lambda)=f(0,0)>0\). Then, by Corollary 4.3, this problem has at least one solution for any \(\lambda\geq0\).
In this manuscript, the authors prove some new existence results as well as uniqueness and multiplicity results on fractional boundary value problems.
The authors would like to thank the referees for the helpful suggestions. The second author is supported by NNSF of China (No. 11501165), the Fundamental Research Funds for the Central Universities (2015B19414).
Department of Mathematics, College of Science, Hohai University, Nanjing, 210098, China
Yuanfang Ru, Fanglei Wang & Tianqing An
Department of Mathematics, College of Science, Nanjing University of Aeronautics and Astronautics, Nanjing, 210098, China
Yukun An
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Correspondence to Fanglei Wang.
Ru, Y., Wang, F., An, T. et al. Some results on the fractional order Sturm-Liouville problems. Adv Differ Equ 2017, 320 (2017) doi:10.1186/s13662-017-1377-x
fractional differential equations
Sturm-Liouville problems
Lyapunov inequality
fixed point theorem
Advances in Fractional Differential Equations and Their Real World Applications | CommonCrawl |